Posts

[LINK] 'Blue Brain' Project Accurately Predicts Connections Between Neurons 2012-09-18T00:50:26.952Z
[LINK] Strong AI Startup Raises $15M 2012-08-21T20:47:56.508Z

Comments

Comment by olalonde on Prisoner's dilemma tournament results · 2013-09-15T10:43:22.607Z · LW · GW

Perfect simulation is not only really hard, it has been proven to be impossible. See http://en.wikipedia.org/wiki/Halting_problem
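For anyone unfamiliar with the linked argument, here's a minimal sketch of the standard diagonalization in Python. The `halts` oracle is purely hypothetical; the whole point is that no correct implementation of it can exist.

```python
# Minimal sketch of the diagonalization behind the halting problem.
# `halts` is a hypothetical oracle assumed (for contradiction) to decide
# whether program(arg) ever terminates.

def halts(program, arg):
    """Assumed perfect predictor; no such total function can exist."""
    raise NotImplementedError("no correct implementation is possible")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:
            pass     # loop forever
    else:
        return       # halt immediately

# Feeding `contrary` to itself makes any answer from `halts` wrong:
# if halts(contrary, contrary) is True, contrary(contrary) loops forever;
# if it is False, contrary(contrary) halts.
```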

Comment by olalonde on Rationality Quotes September 2012 · 2012-09-06T10:26:29.559Z · LW · GW

Related:

The really important thing is not to live, but to live well. - Socrates

Comment by olalonde on AI timeline predictions: are we getting better? · 2012-08-14T18:11:39.950Z · LW · GW

Perhaps their contribution is in influencing the non-experts? It is very likely that non-experts base their estimates on whatever predictions respected experts have made.

Comment by olalonde on Is Politics the Mindkiller? An Inconclusive Test · 2012-07-28T06:54:44.487Z · LW · GW

I believe government should be much more localized and I like the idea of charter cities. Competition among governments is good for citizens just as competition among businesses is good for consumers. Of course, for competition to really work out, immigration should not be regulated.

See: http://en.wikipedia.org/wiki/Charter_city

Comment by olalonde on Rationality Quotes July 2012 · 2012-07-28T06:20:08.251Z · LW · GW

For some reason, this thread reminds me of this Simpsons quote:

"The following tale of alien encounters is true. And by true, I mean false. It's all lies. But they're entertaining lies, and in the end, isn't that the real truth?"

Comment by olalonde on That Alien Message · 2012-07-28T06:13:22.871Z · LW · GW

Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts.

It would have been a good punchline if the humans had ended up melting the aliens' computer simulating our universe.

Comment by olalonde on An Intuitive Explanation of Solomonoff Induction · 2012-07-09T15:25:38.448Z · LW · GW

To expand on what the parent said, pretty much all modern programming languages are Turing complete, i.e. equivalent in power to Turing machines. This includes JavaScript, Java, Ruby, PHP, C, etc. If I understand Solomonoff induction properly, testing all possible hypotheses amounts to generating all possible programs in, say, JavaScript and running them to see which programs' outputs match our observations. If multiple programs match the observations, we should choose the shortest one.
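Here's a toy sketch of that procedure in Python. It searches a made-up three-instruction language instead of JavaScript (enumerating a real Turing-complete language would run into the halting problem, since some programs never terminate), and it simply returns the shortest matching program rather than weighting all consistent programs by 2^-length as Solomonoff induction actually does.

```python
from itertools import product

# Toy stand-in for "all JavaScript programs": a program is a string of
# single-character instructions applied to a counter x (starting at 0),
# repeated cyclically; after each instruction we emit the value of x.
# Every toy program halts, which dodges the uncomputability of the real thing.
OPS = {"+": lambda x: x + 1, "*": lambda x: x * 2, "-": lambda x: x - 1}

def run(program, steps):
    """Run `program` cyclically, emitting `steps` values."""
    x, out = 0, []
    for i in range(steps):
        x = OPS[program[i % len(program)]](x)
        out.append(x)
    return out

def induce(observations, max_len=6):
    """Return the shortest toy program whose output matches the observations."""
    for length in range(1, max_len + 1):          # shortest programs first
        for ops in product(OPS, repeat=length):
            program = "".join(ops)
            if run(program, len(observations)) == observations:
                return program
    return None                                    # nothing up to max_len fits

print(induce([1, 2, 4, 8]))   # -> "++**", the shortest program matching the data
```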

Comment by olalonde on Irrationality Game II · 2012-07-06T03:52:18.190Z · LW · GW

efficiently convert ambient energy

Just a nitpick, but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways of producing energy.

Comment by olalonde on Irrationality Game II · 2012-07-05T11:38:49.002Z · LW · GW

I think 1 is the most likely scenario (although I don't think FOOM is a very likely scenario). Some more mind-blowingly hard problems are listed here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem

Comment by olalonde on Irrationality Game II · 2012-07-04T21:54:54.378Z · LW · GW

I don't think that's so obviously true. Here are some possible arguments against that theory:

1) There is a theoretical upper limit on the speed at which information can travel (the speed of light). A very large "brain" will eventually be limited by that speed.

2) Some computational problems are so hard that even an extremely powerful "brain" would take very long to solve (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).

3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann's limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take about 10^72 years to crack a 512-bit key. In other words, even an AI the size of the Earth would not manage to break modern human encryption by brute force.

More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
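As a rough sanity check on that 10^72-year figure, here's a back-of-the-envelope calculation in Python. The constants (Bremermann's limit of ~1.36×10^50 bits per second per kilogram, the Earth's mass) come from the linked Wikipedia pages; treating one key trial as a single bit operation is a simplifying assumption, so this is if anything an optimistic lower bound.

```python
# Back-of-the-envelope check of the 10^72-year figure; order of magnitude only.
BREMERMANN_BITS_PER_S_PER_KG = 1.36e50   # Bremermann's limit
EARTH_MASS_KG = 5.97e24                  # mass of the Earth
SECONDS_PER_YEAR = 3.15e7

ops_per_second = BREMERMANN_BITS_PER_S_PER_KG * EARTH_MASS_KG   # ~8e74
keys_to_try = 2 ** 512                   # brute-forcing a 512-bit key

years = keys_to_try / ops_per_second / SECONDS_PER_YEAR
print(f"{years:.1e} years")              # ~5e71, i.e. on the order of 10^72
```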

Comment by olalonde on Rationality Quotes May 2012 · 2012-05-04T00:48:32.784Z · LW · GW

Forgive my stupidity, but I'm not sure I get this one. Should I read it as "[...] it's probably for the same reasons you haven't done it yourself"?

Comment by olalonde on Rationality Quotes May 2012 · 2012-05-03T20:37:27.505Z · LW · GW

I'm the one who said that. Just to make it clear, I do agree with your first comment: taken literally, the quote doesn't make sense. Does it work better if I say: "It is easy to achieve your goals if you have no goals"? I concede that "absurd" was possibly a bit too strong here.

Comment by olalonde on Rationality Quotes May 2012 · 2012-05-02T02:19:34.509Z · LW · GW

I think you're overanalyzing here; the quote is meant to be absurd.

Comment by olalonde on Intelligence as a bad · 2012-04-28T10:42:07.409Z · LW · GW

You can't simply assert that. It's an empirical question. How have you tried to measure the downsides?

It seems so obvious to me that I didn't bother... Here's some empirical data: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html. Anyway, if you really want to dispute the fact that we have progressed over the past few centuries, I believe the burden of proof rests on you.

Comment by olalonde on Applause Lights · 2012-04-27T17:09:57.840Z · LW · GW

can convey new information to a bounded rationalist

Why limit it to bounded rationalists?

Comment by olalonde on Intelligence as a bad · 2012-04-25T21:50:00.199Z · LW · GW

I also strongly doubt the claim that human intelligence has stopped increasing. I was just offering an alternative hypothesis in case that proposition were true. Also, the OP was arguing that intelligence stopped increasing at an evolutionary level, which the Flynn effect doesn't seem to contradict (after a quick skim of the Wikipedia page).

Comment by olalonde on Intelligence as a bad · 2012-04-25T21:02:59.812Z · LW · GW

However, humans and human societies are currently near some evolutionary equilibrium.

I think there's plenty of evidence that human societies are not near some evolutionary equilibrium. Can you name a human society that has lasted longer than a few hundred years? A few thousand years?

On the biological side, is there any evidence that we have reached an equilibrium? (I'm asking genuinely)

It's very possible that individual intelligence has not evolved past its current levels because it is at an equilibrium, beyond which higher individual intelligence results in lower social utility.

The consensus among biologists seems to be that social utility has zero to very little impact on evolution. See http://en.wikipedia.org/wiki/Group_selection

In fact, if you believe SIAI's narrative about the danger of artificial intelligence and the difficulty of friendly AI, I think you would have to conclude that higher individual intelligence results in lower expected social utility, for human measures of utility.

Higher levels of human intelligence result in lower expected social utility for some other species (we are better at hunting them). They do not result in lower expected social utility for humans, as we are generally good to other humans. Higher levels of individual intelligence have brought us the great achievements of humankind with very few downsides. The concern with AGI is that it might treat humans the way humans treat some other species.

If anything, the reason we don't see a rapid rise of intelligence among human beings is that it does not provide much evolutionary benefit. In modern societies, people don't (usually) die for being dumb, and sexual selection doesn't have much impact since most people only have children with a single partner.

Comment by olalonde on Intelligence as a bad · 2012-04-25T20:44:34.995Z · LW · GW

Saying that the study was flawed was indeed a bit strong. What I really meant is that OP's conclusion was wrong (individual intelligence = bad for society).

Comment by olalonde on Intelligence as a bad · 2012-04-25T17:05:05.118Z · LW · GW

This suggests that intelligence is an externality, like pollution.

This sentence doesn't really make sense. Intelligence in itself is not a "cost imposed on a third party" (the definition of an externality)... Perhaps you mean intelligence leads to more externalities?

Furthermore, this study is definitely flawed, since it's quite obvious that individual intelligence has done a great deal more good for society than bad. Is there even an argument about this?

Comment by olalonde on Disputing Definitions · 2012-04-25T16:50:58.807Z · LW · GW

One way to get around the argument on semantics would be to replace "sound" by its definition.

...

Albert: "Hah! Definition 2c in Merriam-Webster: 'Sound: Mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (as air).'"

Barry: "Hah! Definition 2b in Merriam-Webster: 'Sound: The sensation perceived by the sense of hearing.'"

Albert: "Since we cannot agree on the definition of sound and a third party might be confused if he listened to us, can you reformulate your question, replacing the word sound by its definition."

Barry: "OK. If a tree falls in the forest, and no one hears it, does it cause anyone to have the sensation perceived by the sense of hearing?"

Albert: "No."

Comment by olalonde on Welcome to Less Wrong! (2012) · 2012-04-25T11:21:47.583Z · LW · GW

Isn't it implied that sub-human intelligence is not designed to be self-modifying given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?

Comment by olalonde on Welcome to Less Wrong! (2012) · 2012-04-25T00:41:07.089Z · LW · GW

Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself if we exclude reproduction). I don't think monkey-level intelligence will be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to the human one.

Comment by olalonde on Welcome to Less Wrong! (2012) · 2012-04-25T00:23:50.202Z · LW · GW

You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical... Could you give some specific examples?

Comment by olalonde on Welcome to Less Wrong! (2012) · 2012-04-25T00:12:41.557Z · LW · GW

I understand your concern, but at this point we're not even near monkey-level intelligence, so when I get to 5-year-old human-level intelligence I think it'll be legitimate to start worrying. I don't think greater-than-human AI will happen all of a sudden.

Comment by olalonde on Welcome to Less Wrong! (2012) · 2012-04-24T23:14:44.376Z · LW · GW

Before I get more involved here, could someone explain to me what the following are:

1) x-rationality (extreme rationality) 2) a rationalist 3) a Bayesian rationalist

(I know what rationalism and Bayes' theorem are, but I'm not sure what the terms above refer to in the context of LW)

Comment by olalonde on Welcome to Less Wrong! (2012) · 2012-04-24T22:54:20.737Z · LW · GW

Hi all! I have been lurking on LW for a few months (years?). I believe I was first introduced to LW through some posts on Hacker News (http://news.ycombinator.com/user?id=olalonde). I've always considered myself pretty good at rationality (is there a difference between that and being a rationalist?) and I've always been an atheist/reductionist. I recently (4 years ago?) converted to libertarianism (blame Milton Friedman). I was raised by two atheist doctors (as in PhDs). I'm a software engineer and I'm mostly interested in the technical aspects of achieving AGI. Since I was a kid, I've always dreamed of seeing an AGI within my lifetime. I'd be curious to know if there are some people here working on actually building an AGI. I was born in Canada, have lived in Switzerland, and am now living in China. I'm 23 years old IIRC. I believe I'm quite far from the stereotypical LWer on the personality side, but I guess diversity doesn't hurt.

Nice to meet you all!