Posts

What do you value? 2024-05-10T15:34:22.185Z
Enter the WorldsEnd 2024-03-16T01:34:59.105Z

Comments

Comment by Akram Choudhary (akram-choudhary) on Superbabies: Putting The Pieces Together · 2024-07-29T21:39:07.758Z · LW · GW

Entertaining as this post was, I think very few of us have AI timelines long enough for IQ eugenics to actually matter. Long timelines these days are around 2040, so what use is a 16-year-old high-IQ child going to be in securing humanity's future?

Comment by Akram Choudhary (akram-choudhary) on Andrew Burns's Shortform · 2024-06-06T19:21:48.608Z · LW · GW

Wait till you find out that Qwen 2 is probably just Llama 3 with a few changes and some training on benchmarks to inflate performance a bit.

Comment by Akram Choudhary (akram-choudhary) on Teaching CS During Take-Off · 2024-05-22T15:56:11.093Z · LW · GW

What are your thoughts on skills that the government has too much control over? For example, if we get ASI in 2030, do you imagine that doctors will be obsolete by 2032, or will the current regulatory environment still be relevant?

And how much of this is determined by "labs have now concentrated so much power that governments are obsolete"?

Comment by Akram Choudhary (akram-choudhary) on Please stop publishing ideas/insights/research about AI · 2024-05-04T10:41:29.878Z · LW · GW

Daniel, your interpretation is literally contradicted by Eliezer's exact words. Eliezer defines dignity as that which increases our chance of survival.


""Wait, dignity points?" you ask.  "What are those?  In what units are they measured, exactly?"

And to this I reply:  Obviously, the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line.  A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity."

Comment by Akram Choudhary (akram-choudhary) on What's with all the bans recently? · 2024-04-12T10:38:30.050Z · LW · GW

So I'm one of the rate-limited users. I suspect it's because I made a bad early April Fools' joke about a WorldsEnd movement that would encourage people to maximise utility over the next 25 years instead of pursuing long-term goals for humanity like alignment. It made some people upset, and it hit me that this site doesn't really have the right culture for those kinds of jokes. I apologise and don't contest being rate limited.

Comment by Akram Choudhary (akram-choudhary) on Enter the WorldsEnd · 2024-03-21T23:11:15.243Z · LW · GW

Just this once, I promise.

Comment by Akram Choudhary (akram-choudhary) on Enter the WorldsEnd · 2024-03-21T21:00:08.461Z · LW · GW

See my other comment on how this is just a shitpost.


Also, humans don't base their decisions on raw expected-value calculations. Almost everyone would take $1 million over a 0.1% chance of $10 billion, even though the expected value of the latter ($10 million) is higher (Pascal's mugging).
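A quick back-of-the-envelope sketch of that arithmetic (hypothetical Python, using the numbers from the comment above):

```python
# Compare a sure $1 million against a 0.1% chance of $10 billion.
p_win = 0.001                  # 0.1% chance of the jackpot
jackpot = 10_000_000_000       # $10 billion
sure_thing = 1_000_000         # $1 million, guaranteed

ev_gamble = p_win * jackpot    # 0.001 * 1e10 = $10,000,000
print(ev_gamble > sure_thing)  # True: the gamble is worth 10x more in
                               # expectation, yet nearly everyone takes
                               # the sure million.
```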

Comment by Akram Choudhary (akram-choudhary) on Enter the WorldsEnd · 2024-03-21T20:57:28.197Z · LW · GW

Early April Fools' joke. I don't seriously believe this.

Comment by Akram Choudhary (akram-choudhary) on Enter the WorldsEnd · 2024-03-21T20:55:58.638Z · LW · GW

It was originally intended as an April Fools' joke, lol. This isn't a serious movement, but it does reflect a little of my hopelessness about AI alignment working.

Comment by Akram Choudhary (akram-choudhary) on What could a policy banning AGI look like? · 2024-03-14T04:28:56.444Z · LW · GW

AI + humans would just eventually give rise to AGI anyway, so I don't see the distinction people try to make here.

Comment by Akram Choudhary (akram-choudhary) on China-AI forecasts · 2024-02-26T16:55:17.079Z · LW · GW

Yeah, I don't get it either. From what I can tell, the best Chinese labs aren't even as good as the second-tier American labs. The only way I see it happening is if the CCP actively tries to steal it.

Comment by Akram Choudhary (akram-choudhary) on Retirement Accounts and Short Timelines · 2024-02-22T18:43:04.793Z · LW · GW

Why do you have 15% for 2024 and only an additional 15% for 2025?

Do you really think there's a 15% chance of AGI this year?

Comment by Akram Choudhary (akram-choudhary) on AGI is easier than robotaxis · 2023-10-28T14:28:17.492Z · LW · GW

You would have to have ridiculously short AI timelines for AGI to be closer than robotaxis. Closer than 2027?

Comment by Akram Choudhary (akram-choudhary) on Sama Says the Age of Giant AI Models is Already Over · 2023-04-18T03:23:44.571Z · LW · GW

Shutting down GPU production was never in the Overton window anyway, so this makes little difference. Even if further scaling isn't needed, most people can't afford the $100M spent on GPT-4.

Comment by Akram Choudhary (akram-choudhary) on But why would the AI kill us? · 2023-04-18T03:03:35.844Z · LW · GW

Because when you train something using gradient descent optimised against a loss function, it de facto has some kind of utility function. You can't accomplish all that much without a utility function.
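A minimal sketch of the equivalence being gestured at, using a toy 1-D loss (the function and numbers are illustrative, not any lab's actual training setup): minimising a loss by gradient descent is formally the same as maximising its negation, which then plays the role of a utility function.

```python
# Toy example: gradient descent on L(x) = (x - 3)^2.
# Minimising L is identical to maximising the "utility" U(x) = -L(x).

def loss(x):
    return (x - 3.0) ** 2      # minimised at x = 3

def grad(x):
    return 2.0 * (x - 3.0)     # dL/dx

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * grad(x)          # step downhill on the loss

print(round(x, 4), round(loss(x), 6))  # ~3.0, ~0.0: the system behaves
                                       # as if it "wants" x = 3
```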

Comment by Akram Choudhary (akram-choudhary) on Metaculus Predicts Weak AGI in 2 Years and AGI in 10 · 2023-04-04T22:40:07.565Z · LW · GW

What on earth does the government have to do with a fire alarm? The fire alarm continues to buzz even if everyone in the room is deaf. It is just sending a signal, not promising that any particular action will be taken as a consequence.

Comment by Akram Choudhary (akram-choudhary) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T00:46:51.202Z · LW · GW

I don't know how informed you are about politics, but most governments don't give a hoot about AI at the moment. There are smaller issues getting more attention, and a few doomers yelling at them won't change a thing. On my model, they won't care even at the moment they die. But I didn't predict the recent six-month "ban" on training runs either, so it's possible this becomes a politically charged issue, say by 2030, when we have GPT-7 and it's truly approaching dangerous levels of intelligence.

Comment by Akram Choudhary (akram-choudhary) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T00:42:18.957Z · LW · GW

You have to be joking. Not a single one of those partial "analyses" says much about what's going on in there. Also, Yud has already said he believes that inner goals often won't manifest until high levels of intelligence, because no system of reasonable intelligence tries to pursue impossible goals.

Comment by Akram Choudhary (akram-choudhary) on Looking for answers about quantum immortality. · 2023-02-04T13:40:22.696Z · LW · GW

It's a 2x2 matrix if you're married, though.

Comment by Akram Choudhary (akram-choudhary) on AI Forecasting: One Year In · 2022-07-05T13:45:11.298Z · LW · GW

How many mathematicians could win gold at the IMO?

I understand it's for under-18s, but I imagine there are a lot of mathematicians who wouldn't be able to do it either, right?

Comment by Akram Choudhary (akram-choudhary) on The AI Countdown Clock · 2022-06-16T23:30:12.958Z · LW · GW

Eliezer seems to think that the shift from proto-AGI to AGI to ASI will happen really fast, and many of us on this site agree with him.

Thus it's not sensible that there is a decade gap between "almost AI" and AI on Metaculus. If I recall, Turing (I think?) said something similar: that once we know the way to generate even some intelligence, things get very fast after that (heavily paraphrased).

So 2028 really is the beginning of the end if we do really see proto-AGI then.