Comments
Entertaining as this post was, I think very few of us have AI timelines so long that IQ eugenics actually matters. Long timelines are like 2040-ish these days, so what use is a 16-year-old high-IQ child going to be in securing humanity's future?
Wait till you find out that Qwen 2 is probably just Llama 3 with a few changes and some training on benchmarks to inflate performance a bit.
What are your thoughts on skills the government has too much control over? For example, if we get ASI in 2030, do you imagine that doctors will be obsolete by 2032, or will the current regulatory environment still be relevant?
And how much of this is determined by "labs have now concentrated so much power that governments are obsolete"?
Daniel, your interpretation is literally contradicted by Eliezer's exact words. Eliezer defines dignity as that which increases our chance of survival.
""Wait, dignity points?" you ask. "What are those? In what units are they measured, exactly?"
And to this I reply: Obviously, the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line. A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity."
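To spell out the units (my gloss, not Eliezer's exact math): writing the survival probability as $p$, dignity is the log-odds

$$\operatorname{dignity}(p) = \log_2 \frac{p}{1-p},$$

and for small $p$, doubling $p$ roughly doubles the odds, so $\log_2\frac{2p}{1-2p} \approx \log_2\frac{p}{1-p} + 1$: one additional bit, exactly as the quote says.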
So I'm one of the rate-limited users. I suspect it's because I made a bad early April Fools joke about a WorldsEnd movement that would encourage people to maximise utility over the next 25 years instead of pursuing long-term goals for humanity like alignment. It made some people upset, and it hit me that this site doesn't really have the right culture for those kinds of jokes. I apologise and don't contest being rate limited.
Just this once, I promise.
See my other comment on how this is just a shitpost.
Also, humans don't base their decisions on raw expected-value calculations. Almost everyone would take a guaranteed 1 million over a 0.1% chance of 10 billion, even though the expected value of the latter is higher (cf. Pascal's mugging).
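To make the arithmetic explicit:

$$0.001 \times 10^{10} = 10^{7} > 10^{6},$$

so the gamble is worth ten times the sure thing in expectation, yet nearly everyone still takes the guaranteed million.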
Early April Fools joke. I don't seriously believe this.
It was originally intended as an April Fools joke, lol. This isn't a serious movement, but it does reflect a little of my hopelessness about AI alignment working.
AI + humans would just eventually give rise to AGI anyway, so I don't see the distinction people try to make here.
Yeah, I don't get it either. From what I can tell, the best Chinese labs aren't even as good as the second-tier American labs. The only way I see it happening is if the CCP actively tries to steal it.
Why do you have 15% for 2024 and only an additional 15% for 2025?
Do you really think there's a 15% chance of AGI this year?
You would have to have ridiculously short AI timelines for it to be closer than robotaxis. Closer than 2027?
Shutting down GPU production was never in the Overton window anyway, so this makes little difference. Even if further scaling isn't needed, most people can't afford the ~$100M spent on GPT-4.
Because when you train something with gradient descent, optimised against a loss function, it de facto has some kind of utility function. You can't accomplish all that much without a utility function.
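A minimal sketch of what I mean, in Python (a toy example of mine, not any lab's actual training code): gradient descent relentlessly pushes parameters toward whatever minimizes the loss, so the trained system behaves as if it were maximizing u(x) = -loss(x).

```python
# Toy gradient descent on a scalar parameter (illustrative only).
# The optimizer drives x toward the loss minimum, so the resulting
# behaviour looks like maximizing the utility u(x) = -loss(x).

def loss(x: float) -> float:
    return (x - 3.0) ** 2       # minimized at x = 3

def grad(x: float) -> float:
    return 2.0 * (x - 3.0)      # analytic derivative of the loss

x = 0.0    # initial parameter
lr = 0.1   # learning rate
for _ in range(100):
    x -= lr * grad(x)           # step downhill against the gradient

print(x, loss(x))               # x ≈ 3.0, loss ≈ 0.0
```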
What on earth does the government have to do with a fire alarm? The fire alarm continues to buzz even if everyone in the room is deaf. It just sends a signal; it doesn't promise that any particular action will be taken as a consequence.
I don't know how informed you are about politics, but most governments don't give a hoot about AI at the moment. There are smaller issues getting more attention, and a few doomers yelling at them won't change a thing. On my model, they won't care even at the moment they die. But I didn't predict the recent six-month "ban" on training runs either, so it's possible this becomes a politically charged issue, say, by 2030, when we have GPT-7 and it's truly approaching dangerous levels of intelligence.
You have to be joking. Not a single one of those partial "analyses" says much about what's going on in there. Also, Yud has already said he believes that inner goals often won't manifest until high levels of intelligence, because no system of reasonable intelligence tries to pursue impossible goals.
It's a 2x2 matrix if you are married, though.
How many mathematicians could win gold at the IMO?
I understand it's for under-18s, but I imagine there are a lot of mathematicians who wouldn't be able to do it either, right?
Eliezer seems to think that the shift from proto-AGI to AGI to ASI will happen really fast, and many of us on this site agree with him.
Thus it's not sensible that there is a decade-long gap between "almost AI" and AI on Metaculus. If I recall correctly, Turing (I think?) said something similar: that once we know the way to generate even some intelligence, things get very fast after that (heavily paraphrased).
So 2028 really is the beginning of the end if we do see proto-AGI then.