Perfect simulation is not only really hard; it has been proven to be impossible. See http://en.wikipedia.org/wiki/Halting_problem
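For the curious, here is a minimal sketch of the standard diagonalization argument behind that result; `would_halt` and `contrarian` are illustrative names introduced here, and the "perfect predictor" is assumed only in order to derive the contradiction:

```python
def would_halt(program_source: str, program_input: str) -> bool:
    """Hypothetical perfect simulator/predictor: True iff the given program
    halts on the given input. Assumed to exist only to derive a contradiction."""
    raise NotImplementedError("the halting problem says this cannot be built")


def contrarian(program_source: str) -> None:
    """Does the opposite of whatever the predictor says it will do
    when fed its own source code."""
    if would_halt(program_source, program_source):
        while True:  # predicted to halt -> loop forever instead
            pass
    # predicted to loop forever -> halt immediately instead

# Whatever would_halt answers about contrarian run on its own source,
# that answer is wrong, so no perfect predictor/simulator can exist.
```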
Related:
The really important thing is not to live, but to live well. - Socrates
Perhaps their contribution is in influencing the non-experts? It is very likely that the non-experts base their estimates on whatever predictions respected experts have made.
I believe government should be much more localized and I like the idea of charter cities. Competition among governments is good for citizens just as competition among businesses is good for consumers. Of course, for competition to really work out, immigration should not be regulated.
For some reason, this thread reminds me of this Simpsons quote:
"The following tale of alien encounters is true. And by true, I mean false. It's all lies. But they're entertaining lies, and in the end, isn't that the real truth?"
Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts.
It would have been a good punch line if the humans had ended up melting the aliens' computer simulating our universe.
To expand on what the parent said, pretty much all modern programming languages are equivalent to Turing machines (Turing complete). This includes JavaScript, Java, Ruby, PHP, C, etc. If I understand Solomonoff induction properly, testing all possible hypotheses implies generating all possible programs in, say, JavaScript and testing them to see which programs' outputs match our observations. If multiple programs match the output, we should choose the smallest one.
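To make that concrete, here is a toy sketch of the enumerate-and-filter structure (in Python rather than JavaScript, for brevity); `ALPHABET` and `run_with_timeout` are hypothetical stand-ins, and since real Solomonoff induction is uncomputable this can only ever be an approximation:

```python
from itertools import count, product

# Hypothetical toy alphabet standing in for the syntax of some
# Turing-complete language; purely illustrative.
ALPHABET = "abc"


def generate_programs():
    """Enumerate every string over ALPHABET, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)


def run_with_timeout(program: str, steps: int):
    """Hypothetical interpreter stub: run `program` for at most `steps` steps
    and return its output, or None if it errors or doesn't finish in time."""
    return None  # a real (resource-bounded) interpreter would go here


def shortest_matching_program(observations, max_programs=10**6, max_steps=10**4):
    """Return the first (hence shortest) enumerated program whose output
    matches the observations, or None if none is found within the budget."""
    for i, program in enumerate(generate_programs()):
        if i >= max_programs:
            return None
        if run_with_timeout(program, max_steps) == observations:
            return program
```

The step and program budgets are unavoidable: some programs never halt, which is why the exact procedure is uncomputable and only approximations of it can actually be run.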
efficiently convert ambient energy
Just a nitpick, but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways of producing energy.
I think 1 is the most likely scenario (although I don't think FOOM is a very likely scenario). Some more mind blowing hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem
I don't think that's so obviously true. Here are some possible arguments against that theory:
1) There is a theoretical upper limit at which information can travel (speed of light). A very large "brain" will eventually be limited by that speed.
2) Some computational problems are so hard that even an extremely powerful "brain" would take very long to solve (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann's limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take about 10^72 years to crack a 512-bit key (a rough order-of-magnitude check is sketched below). In other words, even an AI the size of the Earth would not manage to break modern human encryption by brute force.
More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
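For reference, here is a rough back-of-the-envelope check of the 512-bit figure in point 3, assuming Bremermann's limit of roughly 1.36e50 bits per second per kilogram, an Earth mass of about 5.97e24 kg, and (generously) one key trial per bit operation:

```python
# Rough order-of-magnitude check of the 512-bit brute-force claim above.
BREMERMANN_LIMIT = 1.36e50   # bits per second per kilogram
EARTH_MASS_KG = 5.97e24      # approximate mass of the Earth

key_space = 2.0 ** 512                              # number of 512-bit keys
ops_per_second = BREMERMANN_LIMIT * EARTH_MASS_KG   # Earth-mass computer at the limit
years = key_space / ops_per_second / (365.25 * 24 * 3600)

print(f"{years:.1e} years")  # about 5e71 years, i.e. on the order of 10^72
```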
Forgive my stupidity, but I'm not sure I get this one. Should I read it as "[...] it's probably for the same reasons you haven't done it yourself."?
I'm the one who said that. Just to make it clear, I do agree with your first comment: taken literally, the quote doesn't make sense. Is it clearer if I say: "It is easy to achieve your goals if you have no goals"? I concede "absurd" was possibly a bit too strong here.
I think you're overanalyzing here; the quote is meant to be absurd.
You can't simply assert that. It's an empirical question. How have you tried to measure the downsides?
It seems so obvious to me that I didn't bother... Here's some empirical data: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html. Anyway, if you really want to dispute the fact that we have progressed over the past few centuries, I believe the burden of proof rests on you.
can convey new information to a bounded rationalist
Why limit it to bounded rationalists?
I also strongly doubt the claim that human intelligence has stopped increasing. I was just offering an alternative hypothesis in case that proposition were true. Also, OP was arguing that intelligence stopped increasing at an evolutionary level, which the Flynn effect doesn't seem to contradict (after a quick skim of the Wikipedia page).
However, humans and human societies are currently near some evolutionary equilibrium.
I think there's plenty of evidence that human societies are not near some evolutionary equilibrium. Can you name a human society that has lasted longer than a few hundred years? A few thousand years?
On the biological side, is there any evidence that we have reached an equilibrium? (I'm asking genuinely)
It's very possible that individual intelligence has not evolved past its current levels because it is at an equilibrium, beyond which higher individual intelligence results in lower social utility.
The consensus among biologists seems to be that social utility has zero to very little impact on evolution. See http://en.wikipedia.org/wiki/Group_selection
In fact, if you believe SIAI's narrative about the danger of artificial intelligence and the difficulty of friendly AI, I think you would have to conclude that higher individual intelligence results in lower expected social utility, for human measures of utility.
Higher levels of human intelligence result in a lower expected social utility for some other species (we are better at hunting them). They do not result in lower expected social utility for humans, as we are generally good to other humans. Higher levels of individual intelligence have brought us the great achievements of humankind with very few downsides. The concern with AGI is that it might treat humans as humans treat some other species.
If anything, the reason we don't see a rapid rise of intelligence among human beings is that it does not provide much evolutionary benefit. In modern societies, people don't die for being dumb (usually) and sexual selection doesn't have much impact since most people only have children with a single partner.
Saying that the study was flawed was indeed a bit strong. What I really meant is that OP's conclusion was wrong (individual intelligence = bad for society).
This suggests that intelligence is an externality, like pollution.
This sentence doesn't really make sense. Intelligence in itself is not a "cost imposed on a third party" (the definition of an externality)... Perhaps you mean that intelligence leads to more externalities?
Furthermore, this study is definitely flawed since it's quite obvious that individual intelligence has done a great deal more good for society than harm. Is there even an argument about this?
One way to get around the argument on semantics would be to replace "sound" by its definition.
...
Albert: "Hah! Definition 2c in Merriam-Webster: 'Sound: Mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (as air).'"
Barry: "Hah! Definition 2b in Merriam-Webster: 'Sound: The sensation perceived by the sense of hearing.'"
Albert: "Since we cannot agree on the definition of sound and a third party might be confused if he listened to us, can you reformulate your question, replacing the word sound by its definition."
Barry: "OK. If a tree falls in the forest, and no one hears it, does it cause anyone to have the sensation perceived by the sense of hearing?"
Albert: "No."
Isn't it implied that sub-human intelligence is not designed to be self-modifying, given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?
Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence will be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to the human one.
You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical... Could you give some specific examples?
I understand your concern, but at this point we're not even near monkey-level intelligence, so when I get to 5-year-old human-level intelligence I think it'll be legitimate to start worrying. I don't think greater-than-human AI will happen all of a sudden.
Before I get more involved here, could someone explain to me what the following are:
1) x-rationality (extreme rationality) 2) a rationalist 3) a Bayesian rationalist
(I know what rationalism and Bayes theorem are but I'm not sure what the terms above refer to in the context of LW)
Hi all! I have been lurking LW for a few months (years?). I believe I was first introduced to LW through some posts on Hacker News (http://news.ycombinator.com/user?id=olalonde). I've always considered myself pretty good at rationality (is there a difference between that and being a rationalist?) and I've always been an atheist/reductionist. I recently (4 years ago?) converted to libertarianism (blame Milton Friedman). I was raised by 2 atheist doctors (as in PhD). I'm a software engineer and I'm mostly interested in the technical aspect of achieving AGI. Since I was a kid, I've always dreamed of seeing an AGI within my lifetime. I'd be curious to know if there are some people here working on actually building an AGI. I was born in Canada, have lived in Switzerland and am now living in China. I'm 23 years old IIRC. I believe I'm quite far from the stereotypical LWer on the personality side, but I guess diversity doesn't hurt.
Nice to meet you all!