Comments

Comment by shiney on LLMs are (mostly) not helped by filler tokens · 2023-08-10T17:21:03.515Z · LW · GW

Oh hmm, that's very clever, and I don't know how I'd improve the method to avoid this.

Comment by shiney on LLMs are (mostly) not helped by filler tokens · 2023-08-10T14:27:19.544Z · LW · GW

This is interesting. It's a pity you aren't seeing results with anything except GPT-4, because if you were seeing them with an easier-to-manipulate model, I'd suggest you could try swapping the activations on the filler tokens from one question to another and see if that reduced performance.
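For concreteness, here's a rough sketch of the kind of experiment I mean, assuming a TransformerLens-style setup; the model, prompts, filler positions, and hook point are placeholders I'm making up, not anything from the post:

```python
# Sketch: swap filler-token activations between two questions and compare outputs.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # stand-in for an easier-to-manipulate model

prompt_a = "Q: What is 17 * 23? ... ... ... A:"    # the "..." tokens play the role of filler
prompt_b = "Q: What is 41 * 12? ... ... ... A:"
tokens_a = model.to_tokens(prompt_a)
tokens_b = model.to_tokens(prompt_b)

filler_slice = slice(9, 12)                # positions of the filler tokens (check per tokenizer!)
hook_name = "blocks.6.hook_resid_post"     # one residual-stream site; could sweep over layers

# Cache question B's activations, then run question A with B's filler activations patched in.
_, cache_b = model.run_with_cache(tokens_b)

def patch_filler(resid, hook):
    resid[:, filler_slice, :] = cache_b[hook.name][:, filler_slice, :]
    return resid

patched_logits = model.run_with_hooks(tokens_a, fwd_hooks=[(hook_name, patch_filler)])
clean_logits = model(tokens_a)
# If answer quality drops when the filler activations come from the wrong question,
# that's evidence the filler positions are doing question-specific work.
```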

Comment by shiney on Planecrash Podcast · 2023-04-09T13:44:22.078Z · LW · GW

Can I help somehow?

Comment by shiney on Planecrash Podcast · 2023-04-09T12:37:15.859Z · LW · GW

Hello, this is great.

Out of interest, what's the reason you haven't just uploaded all of it? Is it a lot of work for you? Are the AWS credits expensive, etc.?

Comment by shiney on ProjectLawful.com: Eliezer's latest story, past 1M words · 2023-04-08T11:15:53.885Z · LW · GW

Yes please

Comment by shiney on Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes · 2023-02-26T14:05:25.118Z · LW · GW

I was reading (listening to) this, and I think I've got some good reasons to expect failed AI coups to happen.

In general we probably expect "Value is Fragile", and this will probably apply to AI goals too (and the AI will think this). This means a consequentialist AI will expect that if there is a high chance of another AI taking over soon, then almost all value in the universe (by its own definition of value) will be lost. So even though a particular coup has a low probability of working, it will still want to try it, because if it waits and another AI wins, almost all the value gets destroyed anyway. For example, if there are 4 similarly situated AI labs, then an AI at one of them will reason that it only has about a 25% chance of ending up in control of all the value in the universe by default, so as soon as it can come up with a coup attempt that it believes has a greater than roughly 25% chance of succeeding, it will probably want to go for it (maybe this is more complex, but I think the qualitative point stands).
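To make the arithmetic explicit, here's a toy expected-value version of that argument, under assumptions I'm adding myself (value is all-or-nothing, there are n equally placed AIs, and waiting means a uniformly random one of them ends up in control):

```python
# Toy expected-value sketch of the "attempt a risky coup now" reasoning.
def should_attempt_coup(p_success: float, n_labs: int) -> bool:
    value_if_waiting = 1.0 / n_labs      # chance this AI's values win by default
    value_if_attempt = p_success * 1.0   # a failed coup means some other AI's values win, i.e. ~0
    return value_if_attempt > value_if_waiting

print(should_attempt_coup(0.20, 4))  # False: 20% < the 25% baseline
print(should_attempt_coup(0.30, 4))  # True: a coup that fails 70% of the time is still worth trying
```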

Secondly, because "Value is Fragile", AIs will not only be worried about other labs' AIs; they will probably also be pretty worried about the next iteration of themselves after an SGD update. Obviously there will be some correlation in beliefs about what is valuable between similarly weighted neural networks, but I don't think there's much reason to believe that NN weights will have been optimised to keep those values consistent across updates.

So, in conclusion, to the extent the doom scenario is a runaway consequentialist AI, I think that unless the chance of a coup attempt succeeding jumps massively from around 0% to around 100% for some reason, there are good reasons to expect that we will see failed coup attempts first.

Comment by shiney on SolidGoldMagikarp (plus, prompt generation) · 2023-02-14T21:03:15.637Z · LW · GW

Oh interesting, I didn't realise there was so much nondeterminism for sums on GPUs.

I guess I thought that since there are only ~65k distinct float16 values, the two highest logits are going to be drawn from a much smaller range within that 65k, just because they have to be bigger than everything else.
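As a toy, CPU-side illustration of the ordering point (my own example, not a reproduction of what a GPU kernel does): float16 addition isn't associative, so differently-scheduled sums can land on different values.

```python
import numpy as np

a, b, c = np.float16(2048), np.float16(0.9), np.float16(0.9)
print((a + b) + c)   # 2048.0 -- each 0.9 is rounded away against the large term
print(a + (b + c))   # 2050.0 -- added together first, the small terms survive rounding
```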

Comment by shiney on SolidGoldMagikarp (plus, prompt generation) · 2023-02-09T05:20:12.678Z · LW · GW

I might be missing something, but why does temperature 0 imply determinism? Neural nets don't work with real numbers, they work with floating-point numbers, so despite temperature 0 implying an argmax, there's no reason there aren't multiple maxima. AFAICT GPT-3 uses half-precision floating-point numbers, so there's quite a lot of room for collisions.
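Here's a made-up numerical illustration of the collision point: two logits that differ as real numbers can round to the same float16, so the argmax at temperature 0 can be a genuine tie.

```python
import numpy as np

logit_a = np.float16(12.3456)
logit_b = np.float16(12.3461)           # different in float32/float64...
print(logit_a == logit_b)               # ...but True: both round to the same float16 value

logits = np.array([logit_a, logit_b, np.float16(11.0)])
print(np.argmax(logits))                # 0 -- numpy breaks the tie by taking the first index,
# but a differently-ordered kernel could just as well pick the other token.
```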

Comment by shiney on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-11T12:16:09.582Z · LW · GW

Does anyone know if there's work to make a podcast version of this? I'd definitely be more willing to listen, even if it's just at Nonlinear Library quality rather than voice-acted.

Comment by shiney on The Speed + Simplicity Prior is probably anti-deceptive · 2022-04-28T19:49:16.527Z · LW · GW

Getting massively out of my depth here, but is that an easy thing to do given the later stages will have to share weights with early stages?

Comment by shiney on The Speed + Simplicity Prior is probably anti-deceptive · 2022-04-28T09:44:47.234Z · LW · GW

"we don't currently know how to differentiably vary the size of the NN being run. We can certainly imagine NNs being rolled-out a fixed number of times (like RNNs), where the number of rollouts is controllable via a learned parameter, but this parameter won't be updateable via a standard gradient."

Is this really true? I can think of a way to do this in a standard gradient-based way.

Also, it looks like there's a paper from 2016 by someone who works in ML where they do this: https://arxiv.org/abs/1603.08983

TLDR: at each rollout, have a neuron that represents the halting probability, and then make the final result the sum of the output vectors at each rollout, weighted by the probability that the network halted at that rollout.
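A minimal sketch of that idea (the GRU cell, the shapes, and the fixed step cap are my choices, not the paper's exact formulation):

```python
import torch
import torch.nn as nn

class SoftHaltRNN(nn.Module):
    def __init__(self, d_in, d_hidden, d_out, max_steps=10):
        super().__init__()
        self.cell = nn.GRUCell(d_in, d_hidden)
        self.halt = nn.Linear(d_hidden, 1)   # the "halting neuron"
        self.out = nn.Linear(d_hidden, d_out)
        self.max_steps = max_steps

    def forward(self, x):
        h = x.new_zeros(x.shape[0], self.cell.hidden_size)
        still_running = x.new_ones(x.shape[0], 1)    # P(not yet halted before step t)
        output = 0.0
        for t in range(self.max_steps):
            h = self.cell(x, h)
            p_halt = torch.sigmoid(self.halt(h))
            if t == self.max_steps - 1:
                p_halt = torch.ones_like(p_halt)     # force halting at the last step
            weight = still_running * p_halt          # P(halt exactly at step t)
            output = output + weight * self.out(h)   # rollout output, weighted by halt probability
            still_running = still_running * (1 - p_halt)
        return output                                # gradients flow through the halting weights

model = SoftHaltRNN(d_in=16, d_hidden=32, d_out=4)
print(model(torch.randn(8, 16)).shape)  # torch.Size([8, 4])
```

So the "number of rollouts" becomes differentiable in expectation, even though each individual rollout count is discrete.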

Comment by shiney on [Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda · 2022-04-27T18:13:28.396Z · LW · GW

Thanks, I'll see how that goes, assuming I get enough free time to try this.

Comment by shiney on [Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda · 2022-04-19T18:17:54.758Z · LW · GW

If someone wanted to work out if they might be able to develop the skills to work on this sort of thing in the future, is there anything you would point to?

Comment by shiney on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-09T07:18:14.810Z · LW · GW

I don't think it's that hard, e.g. see here: https://www.econlib.org/archives/2017/01/my_end-of-the-w.html

TLDR: the person who doesn't think the end of the world will happen gives the other person money now, and it gets paid back double if the world doesn't end.
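Spelling out the arithmetic with made-up numbers (and ignoring interest and time-discounting, which the real bet has to deal with):

```python
stake_now = 100            # the sceptic hands this over today
payback_if_no_doom = 200   # "paid back double" if the world is still here at the deadline

# The sceptic breaks even in expectation when P(no doom) * payback = stake,
# so taking the bet only commits them to P(doom) < 0.5.
breakeven_p_doom = 1 - stake_now / payback_if_no_doom
print(breakeven_p_doom)    # 0.5
```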

Comment by shiney on Meetup : Meetup : London - Paranoid Debating 2nd Feb, plus social 9th Feb · 2014-01-27T21:24:18.568Z · LW · GW

Are you sure the not-spamming thing is a good idea? It means that the nearest-meetups section doesn't include London even when there is a meetup.

Comment by shiney on Results from MIRI's December workshop · 2014-01-19T09:34:38.684Z · LW · GW

Maybe I missed something, but I could never see why there was anything intrinsically good about (say) the bias towards short programs in the Solomonoff prior; the whole idea that bigger programs are less likely seemed like just a necessary trick to make the infinite sums finite. If we consider the formalism in the technical note that only keeps a finite number of sentences in memory, then we don't have to worry about this issue and can sample uniformly rather than (arbitrarily?) picking a bias.

In your paper (http://ict.usc.edu/pubs/Logical%20Prior%20Probability.pdf) you have to worry about the infinite series summing to infinity, because the probability distribution is over all sentences in a language, so you have to include this bias.

If we can avoid assuming that simplicity is more likely, and instead treat it as a fact about the world which we might learn after choosing a prior, isn't that better?

I feel that the bigger issue with this proposal is that it doesn't solve the problem it's trying to solve, as statements in S are biased towards being picked.

Comment by shiney on Results from MIRI's December workshop · 2014-01-18T13:55:12.763Z · LW · GW

Can't you somewhat patch Demski's proposal by sampling uniformly from S rather than biasing by length? That would generate the right probabilities for the 90% issue, provided that all the ϕ(n) are in S to start with. If not all the sentences were in S, there would still be a bias towards ϕ(n) being true, but only for the ones such that ϕ(n) is in S, and it would be smaller.
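Here's a toy illustration of the uniform-vs-length-biased contrast, not a faithful simulation of Demski's construction: I take S = {ϕ(1), ..., ϕ(100)} with exactly 90% of the instances true, and use a 2^-n weight as a stand-in for a length bias where ϕ(n) is written with a unary numeral.

```python
N = 100
true_phi = {n: n > 10 for n in range(1, N + 1)}        # 90 of the 100 instances are true

uniform = {n: 1 / N for n in range(1, N + 1)}
z = sum(2.0 ** -n for n in range(1, N + 1))
length_biased = {n: 2.0 ** -n / z for n in range(1, N + 1)}

def p_true(dist):
    return sum(p for n, p in dist.items() if true_phi[n])

print(p_true(uniform))        # ~0.9 -- matches the base rate over S
print(p_true(length_biased))  # ~0.001 -- dominated by the handful of short (small-n) sentences
```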