The Limits of the Existence Proof Argument for General Intelligence
post by Amadeus Pagel (amadeus-pagel) · 2023-09-19T17:45:06.295Z · LW · GW · 3 comments
This is a link post for https://amadeuspagel.com/p/the-limits-of-the-existence-proof
If you wish to make an apple pie from scratch, you must first invent the universe.
— Carl Sagan
The existence proof argument for general intelligence, the idea that artificial general intelligence must be possible because human general intelligence exists, would not even be conclusive if by artificial general intelligence we meant a brain grown in a vat: the existence of a thing does not prove that it could have come into existence in any other way. But the argument is absurd if by artificial general intelligence we mean a computer running certain software. The existence of a thing does not prove that a computer running certain software could perfectly simulate that thing. This is not true even of systems far simpler than a human brain, such as a double pendulum, whose real trajectory escapes any finite simulation because arbitrarily small errors in the measured initial conditions grow exponentially.
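The double pendulum point can be made concrete numerically. Below is a minimal sketch, not anything from the original post: it integrates the standard frictionless double-pendulum equations of motion (unit masses and lengths, g = 9.81, values chosen for illustration) twice, with starting angles that differ by one billionth of a radian, a perturbation far below any achievable measurement precision. The two trajectories nonetheless separate by many orders of magnitude within seconds, which is why even a bug-free simulator fed imperfect initial conditions stops tracking the physical pendulum.

```python
import math

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0  # illustrative parameters

def deriv(s):
    """Time derivative of the state (theta1, theta2, omega1, omega2)
    for a frictionless double pendulum (standard equations of motion)."""
    t1, t2, w1, w2 = s
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2 * w2 * L2 + w1 * w1 * L1 * math.cos(d))) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 * w1 * L1 * (M1 + M2)
          + G * (M1 + M2) * math.cos(t1)
          + w2 * w2 * L2 * M2 * math.cos(d))) / (L2 * den)
    return (w1, w2, a1, a2)

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip(s, k1, k2, k3, k4))

def run(theta1_0, steps=20000, dt=0.001):
    """Integrate from rest at (theta1_0, pi/2); return the final angle of arm 1."""
    s = (theta1_0, math.pi / 2, 0.0, 0.0)
    for _ in range(steps):
        s = rk4_step(s, dt)
    return s[0]

eps = 1e-9  # one billionth of a radian: far below any real measurement error
a = run(math.pi / 2)
b = run(math.pi / 2 + eps)
print(abs(a - b))  # after 20 simulated seconds the tiny difference has grown enormously
```

The same divergence would appear between the physical pendulum and a simulation of it: the simulator's initial conditions are always off by at least measurement error, so "perfectly simulate" fails here for reasons that have nothing to do with intelligence.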
If we took the existence proof argument seriously anyway, we would have no more reason to be concerned about artificial intelligence than about human intelligence. We would have no more reason to fear an artificial intelligence explosion than a human intelligence explosion: a human reaching, with the perfect notes app and the perfect nootropics mix, a level of intelligence that then allows him to become even more intelligent ever faster, turning the entire universe into brain mass in the process.
Concerns about general artificial intelligence are based on the idea that it would be fundamentally different from human intelligence. But the existence of one thing does not prove the possibility of a fundamentally different thing.
3 comments
Comments sorted by top scores.
comment by Vaughn Papenhausen (Ikaxas) · 2023-09-20T18:22:11.200Z · LW(p) · GW(p)
I suspect this is getting downvoted because it is so short and underdeveloped. I think the fundamental point here is worth making though. I've used the existence proof argument in the past, and I think there is something to it, but I think the point being made here is basically right. It might be worth writing another post about this that goes into a bit more detail.
comment by thenoviceoof · 2023-09-23T07:25:53.079Z · LW(p) · GW(p)
I think the 1st argument proves too much - I don't think we usually expect simulations to never work unless otherwise proven? Maybe I'm misunderstanding your point? I agree with Vaughn's assessment of the downvotes; maybe more specific arguments would help clarify your position (like, to pull something out of my posterior, "quantization of neuron excitation levels destroys the chaotic cascades necessary for intelligence. Also, chaos is necessary for intelligence because...").
To keep things brief, the human intelligence explosion seems to require open brain surgery to re-arrange neurons, which seems a lot more complicated than flipping bits in RAM.
Replies from: amadeus-pagel
comment by Amadeus Pagel (amadeus-pagel) · 2023-10-01T01:54:09.554Z · LW(p) · GW(p)
I think the 1st argument proves too much - I don't think we usually expect simulations to never work unless otherwise proven? Maybe I'm misunderstanding your point?
We usually use the term simulation to refer to models that are meant to help us understand something, maybe even to make predictions, but not to replace what is supposed to be simulated.
To keep things brief, the human intelligence explosion seems to require open brain surgery to re-arrange neurons, which seems a lot more complicated than flipping bits in RAM.
Yes, this is one of the many differences between the brain and the computer, and given so many differences we simply can't conclude from any attribute of a brain that a computer with the same attribute is possible.