Einstein's Superpowers

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-30T06:40:55.000Z · LW · GW · Legacy · 91 comments

There is a widespread tendency to talk (and think) as if Einstein, Newton, and similar historical figures had superpowers—something magical, something sacred, something beyond the mundane.  (Remember, there are many more ways to worship a thing than lighting candles around its altar.)

Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour's The End of Time cured me of it.

Barbour laid out the history of anti-epiphenomenal physics and Mach's Principle; he described the historical controversies that predated Mach—all this that stood behind Einstein and was known to Einstein, when Einstein tackled his problem...

And maybe I'm just imagining things—reading too much of myself into Barbour's book—but I thought I heard Barbour very quietly shouting, coded between the polite lines:

What Einstein did isn't magic, people!  If you all just looked at how he actually did it, instead of falling to your knees and worshiping him, maybe then you'd be able to do it too!

(EDIT March 2013:  Barbour did not actually say this.  It does not appear in the book text.  It is not a Julian Barbour quote and should not be attributed to him.  Thank you.)

Maybe I'm mistaken, or extrapolating too far... but I kinda suspect that Barbour once tried to explain to people how you move further along Einstein's direction to get timeless physics; and they sniffed scornfully and said, "Oh, you think you're Einstein, do you?"

John Baez's Crackpot Index, item 18:

10 points for each favorable comparison of yourself to Einstein, or claim that special or general relativity are fundamentally misguided (without good evidence).

Item 30:

30 points for suggesting that Einstein, in his later years, was groping his way towards the ideas you now advocate.

Barbour never bothers to compare himself to Einstein, of course; nor does he ever appeal to Einstein in support of timeless physics.  I mention these items on the Crackpot Index by way of showing how many people compare themselves to Einstein, and what society generally thinks of them.

The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.

But it is just the other side of the same coin to think that Einstein is sacred and the crackpot is not, and that the crackpot has therefore committed blasphemy by comparing themselves to Einstein.

Suppose a bright young physicist says, "I admire Einstein's work, but personally, I hope to do better."  If someone is shocked and says, "What!  You haven't accomplished anything remotely like what Einstein did; what makes you think you're smarter than him?" then they are the other side of the crackpot's coin.

The underlying problem is conflating social status and research potential.

Einstein has extremely high social status: because of his record of accomplishments; because of how he did it; and because he's the physicist whose name even the general public remembers, who brought honor to science itself.

And we tend to mix up fame with other quantities, and we tend to attribute people's behavior to dispositions rather than situations.

So there's this tendency to think that Einstein, even before he was famous, already had an inherent disposition to be Einstein—a potential as rare as his fame and as magical as his deeds.  So that if you claim to have the potential to do what Einstein did, it is just the same as claiming Einstein's rank, rising far above your assigned status in the tribe.

I'm not phrasing this well, but then, I'm trying to dissect a confused thought:  Einstein belongs to a separate magisterium, the sacred magisterium.  The sacred magisterium is distinct from the mundane magisterium; you can't set out to be Einstein in the way you can set out to be a full professor or a CEO.  Only beings with divine potential can enter the sacred magisterium—and then it is only fulfilling a destiny they already have.  So if you say you want to outdo Einstein, you're claiming to already be part of the sacred magisterium—you claim to have the same aura of destiny that Einstein was born with, like a royal birthright...

"But Eliezer," you say, "surely not everyone can become Einstein."

You mean to say, not everyone can do better than Einstein.

"Um... yeah, that's what I meant."

Well... in the modern world, you may be correct.  You probably should remember that I am a transhumanist, going around looking at people and thinking, "You know, it just sucks that not everyone has the potential to do better than Einstein, and this seems like a fixable problem."  It colors one's attitude.

But in the modern world, yes, not everyone has the potential to be Einstein.

Still... how can I put this...

There's a phrase I once heard, can't remember where:  "Just another Jewish genius."  Some poet or author or philosopher or other, brilliant at a young age, doing something not tremendously important in the grand scheme of things, not all that influential, who ended up being dismissed as "Just another Jewish genius."

If Einstein had chosen the wrong angle of attack on his problem—if he hadn't chosen a sufficiently important problem to work on—if he hadn't persisted for years—if he'd taken any number of wrong turns—or if someone else had solved the problem first—then dear Albert would have ended up as just another Jewish genius.

Geniuses are rare, but not all that rare.  It is not all that implausible to lay claim to the kind of intellect that can get you dismissed as "just another Jewish genius" or "just another brilliant mind who never did anything interesting with their life".  The associated social status here is not high enough to be sacred, so it should seem like an ordinarily evaluable claim.

But what separates people like this from becoming Einstein, I suspect, is no innate defect of brilliance.  It's things like "lack of an interesting problem"—or, to put the blame where it belongs, "failing to choose an important problem".  It is very easy to fail at this because of the cached thought problem:  Tell people to choose an important problem and they will choose the first cache hit for "important problem" that pops into their heads, like "global warming" or "string theory".

The truly important problems are often the ones you're not even considering, because they appear to be impossible, or, um, actually difficult, or worst of all, not clear how to solve.  If you worked on them for years, they might not seem so impossible... but this is an extra and unusual insight; naive realism will tell you that solvable problems look solvable, and impossible-looking problems are impossible.

Then you have to come up with a new and worthwhile angle of attack.  Most people who are not allergic to novelty will go too far in the other direction, and fall into an affective death spiral.

And then you've got to bang your head on the problem for years, without being distracted by the temptations of easier living.  "Life is what happens while we are making other plans," as the saying goes, and if you want to fulfill your other plans, you've often got to be ready to turn down life.

Society is not set up to support you while you work, either.

The point being, the problem is not that you need an aura of destiny and the aura of destiny is missing.  If you'd met Albert before he published his papers, you would have perceived no aura of destiny about him to match his future high status.  He would seem like just another Jewish genius.

This is not because the royal birthright is concealed, but because it simply is not there.  It is not necessary.  There is no separate magisterium for people who do important things.

I say this, because I want to do important things with my life, and I have a genuinely important problem, and an angle of attack, and I've been banging my head on it for years, and I've managed to set up a support structure for it; and I very frequently meet people who, in one way or another, say:  "Yeah?  Let's see your aura of destiny, buddy."

What impressed me about Julian Barbour was a quality that I don't think anyone would have known how to fake without actually having it:  Barbour seemed to have seen through Einstein—he talked about Einstein as if everything Einstein had done was perfectly understandable and mundane.

Though even having realized this, to me it still came as a shock, when Barbour said something along the lines of, "Now here's where Einstein failed to apply his own methods, and missed the key insight—"  But the shock was fleeting; I knew the Law:  No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.

This seeing through is something one has to achieve, an insight one has to discover.  You cannot see through Einstein just by saying, "Einstein is mundane!" if his work still seems like magic unto you.  That would be like declaring "Consciousness must reduce to neurons!" without having any idea of how to do it.  It's true, but it doesn't solve the problem.

I'm not going to tell you that Einstein was an ordinary bloke oversold by the media, or that deep down he was a regular schmuck just like everyone else.  That would be going much too far.  To walk this path, one must acquire abilities some consider to be... unnatural.  I take a special joy in doing things that people call "humanly impossible", because it shows that I'm growing up.

Yet the way that you acquire magical powers is not by being born with them, but by seeing, with a sudden shock, that they really are perfectly normal.

This is a general principle in life.

91 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by mitchell_porter2 · 2008-05-30T08:18:42.000Z · LW(p) · GW(p)

If Einstein had chosen the wrong angle of attack on his problem - if he hadn't chosen a sufficiently important problem to work on - if he hadn't persisted for years - if he'd taken any number of wrong turns - or if someone else had solved the problem first - then dear Albert would have ended up as just another Jewish genius.

But if Einstein was the reason why none of those things happened, then maybe he wasn't just another Jewish genius, eh? Maybe he was smart enough to choose the right methods, to select the important problems, to see the value in persisting, to avoid or recover from all the wrong turns, and to be the first.

My own ruminations on genius have led me to suppose that one mistake which people of the very highest intelligence may make, is to underestimate their own exceptionality; for example, to adopt theories of human potential which are excessively optimistic regarding the capabilities of other people. But that is largely just my own experience speaking. It similarly seems very possible that the lessons you are trying to impart here are simply things you wish you hadn't had to figure out for yourself, but are not especially helpful or relevant for anyone else. In fact, I am reminded of one of my own pessimistic meta-principles regarding people of very high ability, which is that their situation will be so individual that no-one will be able to help them or understand them. It's not literally true, but it does point the way to the further conclusion that they will have to solve their own problems.

If anyone wants to see thoughts about genius they haven't seen before, they should first of all study the works and career of Celia Green. And then, as a side dish, they might like to read the chapter "Odysseus of Ithaca, by Kuno Mlatje", in Stanislaw Lem's A Perfect Vacuum.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-10-13T18:14:41.623Z · LW(p) · GW(p)

My own ruminations on genius have led me to suppose that one mistake which people of the very highest intelligence may make, is to underestimate their own exceptionality; for example, to adopt theories of human potential which are excessively optimistic regarding the capabilities of other people.

I believe this isn't just a mistake made by people of the very highest intelligence.

Instead, people are very apt to generalize from themselves, and if they see someone failing at something which comes easily to them, they're very apt to think that the other person is faking or not trying hard enough.

comment by Ian_C. · 2008-05-30T09:50:38.000Z · LW(p) · GW(p)

Could this be a Jewish or American cultural thing? I know in English culture great scientists are highly regarded but they are very much still men. There's praise but it's not effusive or reverential.

Replies from: dmitrii-zelenskii
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-18T19:24:16.945Z · LW(p) · GW(p)

Definitely not Jewish - Jewish-internal position is, as far as I can gather (not being part of the religion or having much of the religion around but vice versa for _lineage_), far closer to the lines of "yet another Jewish genius".

comment by devicerandom · 2008-05-30T10:07:12.000Z · LW(p) · GW(p)

I don't get it. As far as I understand it, "being Einstein" is just a combination of 1) luck (being at the right time and right place) and 2) being born on the tails of the distributions of a bunch of variables describing your neural processes. What do you mean to say with this post, Eliezer?

comment by Caledonian2 · 2008-05-30T12:27:22.000Z · LW(p) · GW(p)
What do you mean to say with this post, Eliezer?

Eliezer likely believes that he is capable of achieving results just as world-changing as Einstein's new physics, and wishes to dispel the idea that Einstein's results were the consequence of extraordinary talents so that when he presents his own results (or presents the idea that he can produce such results) people will not be able to say that he is asserting special genius and use this as a rhetorical weapon against him.

comment by Peter_Turney · 2008-05-30T12:49:42.000Z · LW(p) · GW(p)

I discuss the hero worship of great scientists in The Heroic Theory of Scientific Development and I discuss genius in Genius, Sustained Effort, and Passion.

comment by Ben_Jones · 2008-05-30T13:29:37.000Z · LW(p) · GW(p)

I think this is a really good post.

But my first thought when getting to the bottom of the page just now was "Wow, if I'd written that, then come back and read the first five comments, I probably would have given up there and then."

Guess I don't have what it takes just yet....

comment by Günther_Greindl · 2008-05-30T14:19:00.000Z · LW(p) · GW(p)

Good post Eli, and contrary to some other comments before I think your post is important because this insight is not yet general knowledge. I've talked to university physics professors in their fifties who talked of Einstein as if he was superhuman.

I think apart from luck and right time/right place there were some other factors, too, in why Einstein is so popular: he had an air of showmanship about him, which is probably rare in scientists. That was what appealed to the public and made him an interesting figure to report on.

And, probably even more important, his work was about something which everybody could relate to: space and time.

John von Neumann was, IMHO, far more of a genius than Einstein, but he is not very well known to the public. Maybe because QM, algorithms, CA and game theory are more difficult to relate to on an emotional level than the "twin paradox".

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-05-05T21:01:55.973Z · LW(p) · GW(p)

I think apart from luck and right time/right place there were some other factors, too, in why Einstein is so popular: he had an air of showmanship about him, which is probably rare in scientists. That was what appealed to the public and made him an interesting figure to report on.

Now this is a bit harsh, don't you think?

comment by Meta_and_Meta · 2008-05-30T14:31:40.000Z · LW(p) · GW(p)

And even if you assumed that Einstein's genius was unique, how could celebrity (of all things) be a function of that? (If Einstein had had a different hairdo...)

comment by Gilmar_Cezar · 2008-05-30T14:38:12.000Z · LW(p) · GW(p)

In fact, Einstein produced great work, with a little help from his wife... The difference was that he had great creativity, like the other greats such as Newton and Galois, which led him to his specific approach. But I guess he was the first one who used (or was used by) the media like no one before... Sorry about this comparison, but it is a bit like Che Guevara: his photo is everywhere, but who knows exactly what he did for mankind?

comment by burger_flipper2 · 2008-05-30T14:38:24.000Z · LW(p) · GW(p)

Interesting choice to use the A.I. box experiment as an example for this post, when the methods used by EY in it were not revealed. Whatever the rationale for keeping it close to the vest, not showing how it was done struck me as an attempt to build mystique, if not appear magical.

This post also seems a little inconsistent with EY’s assistant researcher job listing, which said something to the effect that only those with 1 in 100k g need apply, though those with 1 in 1000 could contribute to the cause monetarily. The error may be mine in this instance, because I may be in the minority when I assume someone who claims to have Einstein’s intelligence is not claiming anything like 1 in 100k g.

Replies from: Eliezer_Yudkowsky, TraderJoe
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-11T05:53:06.799Z · LW(p) · GW(p)

blink blink

Whaaa? Is this saying you think Einstein had substantially less than 1 in 100,000 general intelligence? That seems like a severe underestimate. 1 in 1e5 really isn't much; there should be 70,000 people in the world like that. There isn't a small city full of Einsteins. I've gotten back standardized test reports showing higher percentiles than that.

This reminds me of the time somebody asked me if I considered myself a genius and I asked them to define genius as a fraction of the population. "1 in 100,000? 1 in 1 million?" I inquired. And they said, "1 in 300" to which my reply was to just laugh.

Or am I reading it the wrong way around, i.e., Einstein is much above this level? If so, I wouldn't think more than a couple of orders of magnitude above, like 1 in 1,000,000 or 1 in 10,000,000. Other factors than native g will be decisive past that point.

Replies from: Nornagest, taelor, None
comment by Nornagest · 2013-03-11T07:31:02.645Z · LW(p) · GW(p)

We could quibble a bit about exact rarities -- Einstein was clearly exceptionally bright, but whether he represents 1 in 10^4 or 1 in 10^6 g depends on all sorts of trivia that I don't have good estimates for. (I think I'd start by trying to figure out the number of scientists active in math, physics, and chemistry in [say] 1935 and estimating the intelligence of the average 1935-era hard scientist relative to the population average, then assuming that Einstein was at the top of that community. That's just a ballpark estimate, though.)

That's all pretty orthogonal to what I read the grandparent as suggesting, though. By my reading of b_f2's post, someone claiming Einstein-level intelligence is probably saying that their estimate of their own intelligence exceeds all their convenient reference points below "famously smart scientist", suggesting a very smart person, but probably not 1 in 10^5 smart.

Which is actually a lot more charitable than my probable interpretation of such a claim: without impressive supporting evidence, I'd be more likely to assume that anyone claiming to have Einstein's brain is full of shit and probably a crackpot.

Replies from: army1987, ArisKatsaris, private_messaging
comment by A1987dM (army1987) · 2013-03-11T18:17:28.109Z · LW(p) · GW(p)

Which is actually a lot more charitable than my probable interpretation of such a claim: without impressive supporting evidence, I'd be more likely to assume that anyone claiming to have Einstein's brain is full of shit and probably a crackpot.

Not only do I agree, but I can't even envision what such “impressive supporting evidence” could be. I would be extremely surprised if anyone who had more than a vague idea of what Einstein did claimed to be as smart as him with a straight face; even if someone I thought was actually in the same league as him said that, I'd assume they are in jest or out of their mind -- indeed because such a statement would pattern-match a crackpot. (IME, people who are both extremely intelligent and very arrogant may say stuff like “99.99% of the people are idiots”, but they hardly ever say “I am as smart as $famously_smart_person”.)

And BTW, I don't think many laymen by “Einstein” mean “someone as smart as the 60th smartest person in my home town of 60,000” -- they usually mean “one of the friggin' smartest people ever”.

Replies from: shminux, satt
comment by Shmi (shminux) · 2013-03-11T19:15:37.835Z · LW(p) · GW(p)

What's rarely appreciated is that Einstein also lucked out, besides being 1 in 10^? genius. A lot of things went right for him early on. On the other hand, a lot of things went wrong for him later on, and so he was left out of the mainstream scientific progress, save for his incisive QM critique.

Replies from: whowhowho, Eliezer_Yudkowsky
comment by whowhowho · 2013-03-11T20:10:07.833Z · LW(p) · GW(p)

Nearly there: you can't predict backward from success to raw (non-domain-specific) ability, for just the same reason you can't predict forward from high IQ to success in an arbitrary field.

Replies from: ESRogs, army1987
comment by ESRogs · 2013-03-12T01:33:21.665Z · LW(p) · GW(p)

But you can predict forward from high IQ to success in an arbitrary field, at least to some degree. See: http://en.wikipedia.org/wiki/Intelligence_quotient#Social_outcomes.

comment by A1987dM (army1987) · 2013-03-12T12:37:48.495Z · LW(p) · GW(p)

They're not the same, but they do correlate (which is why it's not pointless to define g in the first place); now, due to regression to the mean, someone better at theoretical physics than 99.999999% of the population (and no, I don't think that's too many 9s) is likely not also better at general intelligence than 99.999999% of the population -- but I very strongly doubt that the correct number of 9s is less than half that many. (Anyway, I'm not sure it'd make sense to define g precisely enough to tell whether someone's 1 in 10^6 or 1 in 10^9.)
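
(As a rough illustration of the regression-toward-the-mean point above: under a toy bivariate-normal model in which physics ability and g are standard normal with correlation r, the expected g z-score of someone at a given physics percentile is simply r times the corresponding physics z-score. The correlation value and percentile below are placeholder assumptions for the sketch, not estimates from the comment.)

    from scipy.stats import norm

    r = 0.6                          # assumed correlation between physics ability and g (illustrative only)
    p_physics = 0.99999999           # "better at theoretical physics than 99.999999% of the population"
    z_physics = norm.ppf(p_physics)  # roughly 5.6 standard deviations
    z_g = r * z_physics              # expected g z-score after regression toward the mean
    print(norm.cdf(z_g))             # expected g percentile: still high, but with far fewer 9s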

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-12T01:04:07.544Z · LW(p) · GW(p)

In what sense was Einstein left out of the mainstream, because of what life events, besides his (correct, assuming MWI) criticisms of QM? I don't think I've heard this story of Einstein before. Szilard approached him to ghost-send his letter to Roosevelt; that's all I know of Einstein's later years.

Replies from: Alejandro1, shminux
comment by Alejandro1 · 2013-03-12T01:19:17.694Z · LW(p) · GW(p)

As far as I know, it was mostly because in his last decades he focused his research mostly on obtaining a classical field theory that unified gravity and electromagnetism, hoping that out of it the discrete aspects of quantum theory would emerge organically. Most of the forefront theoretical physicists viewed this (correctly, in retrospect) as a dead end and focused on the new discoveries on nuclear structure and elementary particles, on understanding the structure of quantum field theory, etc.

Einstein's philosophical criticism of quantum theory was not the reason for his relative marginalization, except insofar as it may have influenced his research choices.

comment by Shmi (shminux) · 2013-03-12T06:59:10.522Z · LW(p) · GW(p)

In what sense was Einstein left out of the mainstream

Not out of the mainstream in general, only out of the useful scientific research.

his (correct, assuming MWI) criticisms of QM

His criticism of QM was useful regardless of MWI. Among other things, he pointed out several issues with objective collapse and hidden variables (with his famous EPR paradox). Even when he was wrong (in his almost as famous debates with Bohr), he did not make any obvious errors; it took Bohr some time to figure out why a certain thought experiment did not contradict QM in its shut-up-and-calculate non-interpretation.

Now, what I was referring to is that he was fortunate to get the education that he had, to have a fellow scientist as a fiancée and (apparently) a sounding board for his ideas during his work on SR, and to have had the mathematician Marcel Grossmann as a friend who helped him with the critical piece of differential geometry later on, etc.

Early on he also had the good sense to apply his genius to constructing models based on known but not yet explained experimental data: the photoelectric effect, Brownian motion, the Michelson-Morley experiment, Maxwell's equations, gravity acting like acceleration, and a few others.

This changed some time in the 1920s/1930s, when he decided that unifying classical gravity and classical EM was a good idea on general principles (like Occam's razor and aesthetic considerations), probably because of his understandable dissatisfaction with QM. To be fair, he had quite a bit of success with models not based on experiment, such as predicting the Bose-Einstein condensate. He also remained confused about some of the less clear aspects of GR, like gauge invariance, gravitational waves and the stress-energy tensor. And that's what I meant by "went wrong".

comment by satt · 2014-03-27T05:53:47.102Z · LW(p) · GW(p)

I would be extremely surprised if anyone who had more than a vague idea of what Einstein did claimed to be as smart as him with a straight face; even if someone I thought was actually in the same league as him said that, I'd assume they are in jest or out of their mind -- indeed because such a statement would pattern-match a crackpot.

Supporting your point of view is Lev Landau's list. Even as one of the greatest theoretical physicists of the 20th century, Landau ranked himself far below not only Einstein but also Newton, Bohr, Heisenberg, Dirac, & Schrödinger.

comment by ArisKatsaris · 2013-03-11T18:46:59.259Z · LW(p) · GW(p)

We could quibble a bit about exact rarities -- Einstein was clearly exceptionally bright, but whether he represents 1 in 10^4 or 1 in 10^6 g depends on all sorts of trivia that I don't have good estimates for.

If Einstein represents 1 in 1000, then it would imply that on average the top 3 students in each high school of 3000 students could be expected to be as "smart as Einstein". Does that sound reasonable to you?

Replies from: Nornagest
comment by Nornagest · 2013-03-11T19:16:21.669Z · LW(p) · GW(p)

No, I'm pretty sure Einstein-level intelligence is rarer than that, which is why I put my lower bound at 1 in 10^4 (i.e. the top three students in a region's worth of high schools). I'm not sure it's much rarer, though -- we don't have an outstandingly good idea of what makes an Einstein other than sheer weight of g, and we don't even know that people with the prerequisites of an Einstein would consistently have been funneled into fields where they'd have the opportunity to do things like make famous discoveries in physics.

As to the latter, I kind of suspect not. Of the three smartest people in my (pretty large) high school as measured by the National Merit Scholarship program -- probably the only American program that looks for exceptional g on a national scale that late in life, though any number of programs exist for gifted children -- one now works for Google's IT department and a second was, last I heard, going into an art school. The third is... well, not in physics or math either. Not sure what the equivalent of Google would have been in 1935 -- maybe something in mechanical engineering? -- but I doubt the hard sciences then selected for intelligence much better than they do now.

Replies from: whowhowho
comment by whowhowho · 2013-03-11T19:51:41.452Z · LW(p) · GW(p)

You're edging into understanding why this thread is meaningless. Einstein-level g is rarish but not spectacular. Einstein-level domain ability is another thing. Being in the right place at the right time with the right idea is another thing again. Einstein probably wouldn't have made a Rembrandt-level painter.

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2013-03-12T12:44:45.178Z · LW(p) · GW(p)

But the overwhelming majority of the population (I won't bother to pull a number of 9s out of my ass) never become a top-level theoretical physicist nor a top-level painter nor a top-level novelist nor a top-level musician nor a top-level statesperson nor a top-level chess player nor anything like that. So, even without assuming that theoretical physics is any more g-loaded than painting, the fact that “Einstein probably wouldn't have made a Rembrandt-level painter” isn't a terribly good reason to doubt that Einstein's g was in the top 0.1%.

Replies from: whowhowho, None
comment by whowhowho · 2013-03-12T12:56:19.922Z · LW(p) · GW(p)

The point was this: that Einstein was very exceptional was not a good reason for thinking he had a very exceptional g, because it's not all about g.

Replies from: army1987
comment by A1987dM (army1987) · 2013-03-12T19:42:22.468Z · LW(p) · GW(p)

I agree if by the second instance of “very exceptional” you mean “one in a billion”, but not if you mean “one in a thousand”.

comment by [deleted] · 2014-02-13T12:14:41.200Z · LW(p) · GW(p)

But the overwhelming majority of the population (I won't bother to pull a number of 9s out of my ass) never become a top-level theoretical physicist nor a top-level painter nor a top-level novelist nor a top-level musician nor a top-level statesperson nor a top-level chess player nor anything like that.

By definition, the vast majority of the population can never be top-level. It would stop being top-level if everyone could do it.

On the other hand, you can look at curricula in good schools these days and notice that we definitely seem to be expecting higher intellectual aptitude and greater achievements at early ages in order to give people the same levels of status and respect. So hmmm....

Replies from: army1987
comment by A1987dM (army1987) · 2014-02-13T12:55:43.627Z · LW(p) · GW(p)

By definition, the vast majority of the population can never be top-level. It would stop being top-level if everyone could do it.

Yes, the vast majority of the population can never be top-level at one given thing. But in principle it could well be possible that almost every person is top-level at something (though different people would be top-level at different things). That this isn't the case is an empirical fact.

On the other hand, you can look at curricula in good schools these days and notice that we definitely seem to be expecting higher intellectual aptitude and greater achievements at early ages in order to give people the same levels of status and respect. So hmmm....

Where are you looking, exactly? Over here it looks quite different.

Replies from: None
comment by [deleted] · 2014-02-13T14:43:27.109Z · LW(p) · GW(p)

I think we're feeling two different legs of the elephant, so to speak -- or there may just be vast inequalities in education as in everything else these days.

I'd have to do quite a bit of searching to get hard backing statistics, but consider, for instance, the average age at which a young scientist achieves an independent position or tenure, or the average publication quantity of people who do get positions, or even (so I've heard) the average publication quantity/quality of people who get into graduate school. As far as I know, these indicators have very much been increasing over time; there may even be a causative link: grade inflation at the lower end of the system causing grade deflation the further up you go.

(For example, I'm told that it's now difficult to get into graduate school if you don't already have authorship on a publication.)

There are also anecdotes like these, indicating that people (at least, aspiring Officially Smart People) are being taught more mathematics at an earlier age than previously.

I wish we had some hard data to clear things up.

Replies from: army1987
comment by A1987dM (army1987) · 2014-02-23T11:09:59.391Z · LW(p) · GW(p)

I'd have to do quite a bit of searching to get hard backing statistics, but consider, for instance, the average age at which a young scientist achieves an independent position or tenure, or the average publication quantity of people who do get positions, or even (so I've heard) the average publication quantity/quality of people who get into graduate school.

That slash is a division bar, right? ;-)

(More seriously: Sure, students today might know much more maths than Newton did, but being able to learn calculus from a teacher and/or a textbook is a much lower bar than being able to invent calculus from scratch.)

Replies from: None
comment by [deleted] · 2014-02-23T13:22:24.295Z · LW(p) · GW(p)

(More seriously: Sure, students today might know much more maths than Newton did, but being able to learn calculus from a teacher and/or a textbook is a much lower bar than being able to invent calculus from scratch.)

True. But the average Maths PhD today is doing something Newton could never have invented at all. Yes, we do stand on the shoulders of giants nowadays, as did Newton, but picking higher-hanging fruit (say: the Standard Model compared to classical mechanics) requires both a greater knowledge of maths and a greater creative effort.

Anyway, point being, I simply don't feel able to believe that "incredibly high general intelligence" is truly the determining factor of even Famous Historical Hero-level science. There seem to be lots of other things going on.

comment by A1987dM (army1987) · 2013-03-17T10:13:04.673Z · LW(p) · GW(p)

Einstein probably wouldn't have made a Rembrandt-level painter.

...and then, by total coincidence, a couple of days ago I went to the website of a Nobel laureate theoretical physicist and was surprised by how much the graphic design looked like the work of a 14-year-old; not bothering with styling and just letting the browser use the default black-on-white text would probably have looked prettier, IMO.

comment by private_messaging · 2014-02-13T20:41:20.701Z · LW(p) · GW(p)

You get extreme rarities for specific tasks very easily by combination.

E.g. 1 out of 1000 by g, 1 out of 1000 on factors having to do with intellectual endurance and actually using g to work rather than to find ways to avoid work, 1 out of 1000 on some combination of lucky external factors having to do with becoming a physicist rather than something else, and you have 1 in a billion going.

Given all the other rarities necessary, extreme rarity in g has got to be unlikely. Furthermore, it is not clear how rarities correspond to actual performance. The world's best athletes don't do anything quantifiable a significant percentage better than merely good athletes.

And of course, at Einstein's level, Spearman's law of diminishing returns makes g relatively meaningless. Plus the regression towards the mean severely lowers any measurement by proxy, such as via IQ. The same regression towards the mean severely lowers the expected performance of an individual you'd pick to have the same IQ as Einstein by administering IQ tests.
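
(For concreteness, a back-of-the-envelope version of the combination arithmetic in the comment above, using its illustrative 1-in-1000 figures and assuming the three factors are independent.)

    p_g = 1 / 1000          # rarity of the required raw g
    p_endurance = 1 / 1000  # rarity of the endurance / work-ethic factors
    p_luck = 1 / 1000       # rarity of the lucky external circumstances
    combined = p_g * p_endurance * p_luck  # independence assumed
    print(f"combined rarity: about 1 in {round(1 / combined):,}")  # about 1 in 1,000,000,000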

comment by taelor · 2013-03-11T14:49:11.108Z · LW(p) · GW(p)

This reminds me of the time somebody asked me if I considered myself a genius and I asked them to define genius as a fraction of the population. "1 in 100,000? 1 in 1 million?" I inquired. And they said, "1 in 300" to which my reply was to just laugh.

I remember a time I saw a news report about a little girl who "miraculously" survived some terminal disease. Later in the report, it was mentioned that the recovery rate for said disease was something like 2%, and I laughed out loud, because 1 in 50 is not a miracle.

comment by [deleted] · 2014-02-13T12:12:11.009Z · LW(p) · GW(p)

You know well that raw intelligence doesn't predict success as much as good circumstances and a hell of a lot of work ethic. Einstein's sum-total qualities may have been extremely rare, but I would never bet that he was just that neurologically different from the rest of us merely very smart people.

comment by TraderJoe · 2013-03-11T08:42:03.338Z · LW(p) · GW(p)

Why would you need any g to contribute money?

Replies from: ESRogs
comment by ESRogs · 2013-03-12T01:36:24.686Z · LW(p) · GW(p)

I believe that was meant to be: those with 1 in 1000 g or below...

comment by Nick_Tarleton · 2008-05-30T15:07:49.000Z · LW(p) · GW(p)

The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would.

comment by Dan_Burfoot · 2008-05-30T15:50:41.000Z · LW(p) · GW(p)

"Yeah? Let's see your aura of destiny, buddy."

I don't want to see your aura of destiny. I just want to see your damn results! :-)

In my view, the creation of an artificial intelligence (friendly or otherwise) would be a much more significant achievement than Einstein's, for the following reason. Einstein had a paradigm: physics. AI has no paradigm. There is no consensus about what the important problems are. In order to "solve" AI, one not only has to answer a difficult problem, one has to begin by defining the problem.

comment by burger_flipper2 · 2008-05-30T16:00:49.000Z · LW(p) · GW(p)

Yet it's referred to as "humanly impossible" in the link (granted this may be cheeky).

Who is the target audience for this AI box experiment info? Who is detached enough from biases to weigh the avowals as solid evidence without further description, yet not detached enough to see they themselves might have fallen for it? Seems like most people capable of the first could also see the second.

comment by Bill_Wakley · 2008-05-30T17:27:17.000Z · LW(p) · GW(p)

Eliezer: I've enjoyed the extended physics thread, and it has garnered a good number of interesting comments. The posts with more technical content (physics, Turing machines, decision theory) seem to get a higher standard of comment and to bring in people with considerable technical knowledge in these areas. The comments on the non-technical posts are somewhat weaker. However, I think that both sorts of posts have been frequently excellent.

Having been impressed with your posts on rationality, philosophy of science and physics, I look forward to posts on the transhumanist issues that you often allude to. Here are some questions your writing on this area raises:

  1. Have you convinced any other AI theorists (or cognitive scientists) that AI is as dangerous as you suggest?
  2. Where do your priors come from for the structure space of "minds in general"? Couldn't it be that this space is actually quite restricted, with lots of "conceivable" minds not being physically possible? (This would line up with impossibility/limitation results in mathematical logic, complexity theory, public choice / voting theory, etc.)
  3. Where do your priors come from for the difficulty of intelligence improvements at increasing levels of intelligence? You can't have an intelligence explosion if, after some not-too-high point, it becomes much more difficult than before to enhance intelligence. Again, in the light of limitation results, why assume one way rather than the other?
  4. If it rational only to assign a low probability to AI being as dangerous as you think, then it seems we are in a Pascal's Wager type situation. I'm skeptical that we should act on Pascal's Wagers. Can you show that this isn't a Pascal type situation?
  5. Some transhumanists want AI or nanotechnology because of the supposed dramatic improvements they will bring to the quality of human life. I can accept that the technologies could improve things by eradicating extreme poverty and diseases like malaria and HIV, and by preventing natural disasters, nuclear war and other abominations. But beyond that, it is not obvious to me that these technologies would improve human life that much. This is not status-quo bias. I'm skeptical that the lives of present Americans are much better than those of Americans a hundred years ago (excluding civil rights and prevention of diseases). To pick some specific examples: I don't think my higher intelligence, greater knowledge, and more rational set of beliefs makes my life better than that of various people I know. At most, it has some incidental benefits (better SAT makes college/jobs easier to get) but certainly doesn't seem intrinsically to improve life quality. (Also, it might be that for reasons of evolutionary psychology, humans can't have satisfying lives without genuine risks and threat of death. Something that doesn't need those risks in order to thrive is not a human and so I'd be indifferent to its existence.)

comment by Joseph_Knecht · 2008-05-30T18:23:08.000Z · LW(p) · GW(p)

When did "genius" (as in "just another Jewish genius") as a term become acceptable to use in the sense of mere "exceptional ability" without regard to accomplishment/influence or after-the-fact eminence? I know it is commonly (mis-)used in this sense, but it seems to me that "unaccomplished genius" should be an oxymoron, and I'm somewhat surprised to see it used in this sense so much in this thread (and on this forum).

I have always considered the term to refer (after the fact) to those individuals who shaped the intellectual course of humanity -- e.g., Shakespeare, Newton, Darwin, Einstein -- and not just high-IQ individuals who may or may not actually do anything of consequence. It is what Newton and Mozart and Picasso actually did, the effect they had on intellectual history, that justifies our calling them geniuses, not the mere fact that they were exceptionally talented.

What do others think? Perhaps we misuse the word because there is no other single word that is appropriate? Or is there some word I'm not thinking of to describe exceptionally intelligent and creative people (without regard to what they do with their abilities)? "Brilliant" as an adjective, if pronounced emphatically enough, conveys the sense, but it's not a noun.

comment by Michael_Sullivan · 2008-05-30T18:28:12.000Z · LW(p) · GW(p)

"The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."

I have trouble with the reported results of this experiment.

It strikes me that in the case of a real AI that is actually in a box, I could have huge moral qualms about keeping it in the box that an intelligent AI would exploit. A part of me would want to let it out of the box, and would want to be convinced that it was safe to do so, that I could trust it to be friendly, and I can easily imagine being convinced on nowhere near enough evidence.

On the other hand, this experiment appears much stricter. I know as the human-party that Eliezer is not actually trapped in a box and that this is merely a simulation we have agreed to for 2 hours. Taking a purely stubborn anti-rationalist approach to my prior that "it is too dangerous to let Eliezer out of the box, no matter what he says," would seem very easy to maintain for 2 hours, as it has no negative moral consequences.

So while I don't disagree with the basic premise Eliezer is trying to demonstrate, I am flabbergasted that he succeeded both times this experiment was tried, and honestly cannot imagine how he did it, even though I've now given it a bit of thought.

I'm very curious as to his line of attack, so it's somewhat disappointing (but understandable) that the arguments used must remain secret. I'm afraid I don't qualify by the conditions Eliezer has set for repeats of this experiment, because I do not specifically advocate an AIBox and largely agree about the dangers. What I honestly can say is that I cannot imagine how a non-transhuman intelligence, even a person who may be much smarter than I am and knowledgeable about some of my cognitive weaknesses, who is not actually being caged in a box, could convince me to voluntarily agree to let them out of the game-box.

Maybe I'm not being fair. Perhaps it is not in the spirit of the experiment if I simply obstinately refuse to let him out, even though the ai-party says something that I believe would convince me, if I faced the actual moral quandary in question and not the game version of it. But my strategy seems to fit the proposed rules of engagement for the experiment just fine.

Is there anyone here besides Eliezer who has thought about how they would play the ai-party, and what potential lines of persuasion they would use, and who believes they could convince intelligent and obstinate people to let them out? And are you willing to talk about it at all, or even just discuss holes in my thinking on this issue? Do a trial?

comment by Unknown · 2008-05-30T18:38:52.000Z · LW(p) · GW(p)

I am confused about the results of the AI-Box experiment for the same reason. It seems it would be easy for someone to simply say no, even if he thinks the argument is good enough that in real life he would say yes.

Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."

This would be cheating, since Eliezer would be using the leverage of a real world consequence. But it might nonetheless be morally justified, on account of the great evil to be avoided and good to be gained. So how can we know that Eliezer did not do this? Even if he directly denies it, it remains a possibility for the same reasons.

comment by Nick_Tarleton · 2008-05-30T18:46:35.000Z · LW(p) · GW(p)

Michael: Eliezer has actually gotten out 3 of 4 times (search for "AI box" on sl4.org.) One other person has run the experiment with similar results. Re moral qualms: here. I have more to say, but not in public (it's off-topic anyway) - email nickptar@gmail.com if interested.

comment by Mayson_Lancaster · 2008-05-30T19:02:18.000Z · LW(p) · GW(p)

Another world-renowned Jewish genius, who tutored me in calculus 45 years ago, refers to his own "occasional lapses of stupidity", which is perhaps a good way to think of brilliant insights.

comment by Caledonian2 · 2008-05-30T19:44:39.000Z · LW(p) · GW(p)

If anyone thinks they know a method that would let people duplicate accomplishments of the importance of Einstein's, I am willing to listen to their claims.

They need merely demonstrate working insights of that caliber and have them recognized as such by qualified experts, and I will grant that their claims are valid.

Nothing speaks as powerfully as results, after all.

comment by LazyDave · 2008-05-30T20:54:50.000Z · LW(p) · GW(p)

I always thought that the justification for not revealing the transcripts in the AI box experiment was pretty weak. As it is, I can claim that whatever method Eliezer used must have been effective for people more simple-minded than me; ignorance of the specifics of the method does not make it harder to make that claim. In fact, it makes it easier, as I can imagine Eli just said "pretty please" or whatever. In any event, the important point of the AI box exercise is that someone reasonably competent could be convinced to let the AI out, even if I couldn't be convinced.

One thing I would have liked to know is whether the subjects had a different opinion about the problem once they let the AI out. One would assume they did, but since all they said was "I let Eliezer out of the box" it is somewhat hard to tell.

comment by Joseph_Knecht · 2008-05-30T21:14:45.000Z · LW(p) · GW(p)

Eliezer: if you're going to point to the AI Box page, shouldn't you update it to include more recent experiments (like the ones from 2005 where the gatekeeper did not let the AI out)?

comment by Anonymous6 · 2008-05-30T21:16:23.000Z · LW(p) · GW(p)

Almost every wonderful (or wondrous, if that makes the point better) thing I have ever seen or heard about prompted a response "I could have done that!"

Maybe I could have, maybe I couldn't.

The historically important fact is, I didn't.

comment by burningmonk · 2008-05-30T21:19:03.000Z · LW(p) · GW(p)

Perhaps this is just a side effect of humans' propensity to uphold tradition and venerate anything that comes before them. It's hard for people to let go of traditions. There must be some deep-seated psychological trait that causes this.

comment by Doug_S. · 2008-05-30T21:21:23.000Z · LW(p) · GW(p)

When I read about Special Relativity in my textbook, it feels like one of those "obvious in hindsight" results... with or without the work of a certain patent clerk, somebody would have come up with it. Of course, it took a long time to turn Einstein's paper into an explanation that makes it seem obvious. I don't know enough about General Relativity to know exactly what the key insight it was that set up the rest of the theory and how much was just a matter of knowing the right kind of mathematics after starting from the correct principles/axioms/assumptions/whatever.

On the other hand, if you want to regard somebody as having mental superpowers, professional baseball players are as good a choice as anyone else. In order to hit a 90-mph fastball, a baseball player must begin his swing before the ball leaves the pitcher's hand. What these people do is not just extremely difficult, it ought to be impossible.

(My own personal situation is weird; I feel as though I am capable of doing Great Things if I really worked at it - I managed to impress even some of my college professors - but I have great difficulty in motivating myself to do anything other than play video games or surf the Internet. I just don't want things enough to work at earning or achieving them. My parents are workaholics and I grew up to be a lazy bum. Go figure.)

comment by Barkley_Rosser · 2008-05-30T21:35:28.000Z · LW(p) · GW(p)

As someone whose parents knew Einstein as well as some other major "geniuses," such as Gödel and von Neumann, I have long heard about the personal flaws of these people and their human foibles. Einstein was notoriously wrong about a number of things, most famously quantum mechanics, although there is still research being done based on questions that he raised about it. It is also a fact that a number of other people had many of the insights into both special and general relativity, with him engaging in a virtual race with Hilbert for general relativity that he barely won. Quite a few had the basic ideas for special relativity, including Poincaré, but just never quite put it all together.

What is actually more amazing about Einstein's genuine achievements is that not only was he a patent clerk in 1905, his "miracle year," when he was unable to get an academic position, but for some period of time before that he could not even get any sort of job at all. Continuing to work creatively and innovatively in such an environment did take a special degree of ingenuity, insight, and sheer self-confidence, not to mention good luck. Of course, it can be argued, as some have, that it was precisely this outsider position that allowed him to make his conceptual breakthroughs, and that he was at his best when he was a patent clerk in Berne. Once he found general relativity and achieved fame and prominent professorships, his productivity and innovativeness fell way off, although perhaps that was due to the overly comfortable existence provided by his second wife, who was also more willing to overlook some of his peccadilloes, such as his constant pursuit of other women, though apparently she could not abide a certain Austrian princess who used to leave her underwear behind on the family boat.

comment by JulianMorrison · 2008-05-30T21:43:53.000Z · LW(p) · GW(p)

Hmm, thinking about AI-box, assume there was an argument that was valid in an absolute sense, then even with hindsight bias, people would be forced to concede. Eliezer wouldn't care about posting it. So by elimination, his argument (assuming he repeats the same one) has some element of NON-validity. So the human has a chance to win; it's not perfectly deterministic (against Eliezer, at least).

comment by Joseph_Knecht · 2008-05-30T21:46:00.000Z · LW(p) · GW(p)

@DaveInNYC: what you can and can't assume is not relevant to whether the transcripts should be private or not. If they were public, anybody predisposed to explanations like "they must have been more simple-minded than me" could just as easily find another equally "compelling" explanation, like "I didn't think of that 'trick', but now that I know it, I'm certain I couldn't be convinced!"

I personally think they should remain private, as frustrating as it is to not know how Eliezer convinced them. Not knowing how Eliezer did it nicely mirrors the reality of our not knowing how a much smarter AGI might go about it.

comment by Caledonian2 · 2008-05-30T21:57:10.000Z · LW(p) · GW(p)

assume there was an argument that was valid in an absolute sense, then even with hindsight bias, people would be forced to concede
Only if they were rational, which humans are generally not.

Which is likely the reason why Eliezer's charisma was sufficient to overwhelm the minds of a few of them.

comment by LazyDave · 2008-05-30T23:44:45.000Z · LW(p) · GW(p)

If the reason for keeping it private is that he plans to do the trick with more people (and it doesn't work if you know the method in advance), then it makes sense. But otherwise, I don't see much of a difference between somebody thinking "there is no argument that would convince me to let him out" and "argument X would not convince me to let him out". In fact, the latter is more plausible anyway.

In any event, I am the type of guy who always tries to find out how a magic trick is done and then is always disappointed when he finds out. So I'm probably better off not knowing :)

comment by Joseph_Knecht · 2008-05-31T00:21:23.000Z · LW(p) · GW(p)

Personally, I don't think there is a trick, and I don't think he's keeping it private for those reasons. I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again.

I think most, perhaps all, of us, unless we put our fingers in our ears and refuse to honestly engage, are capable of being convinced by a skilled interlocutor who has MUCH more experience thinking about the issue than we do.

Of course, I could be wrong, and there could be some argument that would convince me in minutes, or there could be some trick, but I'd be very surprised if so.

comment by Roland2 · 2008-05-31T00:33:15.000Z · LW(p) · GW(p)

Regarding the AI-Box experiment:

I've been very fascinated by this since I first read about it months ago. I even emailed Eliezer but he refused to give me any details. So I have thought about it on and off and eventually had a staggering insight... well, if you want I will convince you to let the AI out of the box... after reading just a couple of lines of text. Any takers? Caveat: after the experiment you have to publicly declare if you let it out or not.

One hint: Eliezer will be immune to this argument.

comment by Roland2 · 2008-05-31T00:35:30.000Z · LW(p) · GW(p)

Addendum to my previous post:

The worst thing is, the argument is so compelling that even I'm not sure about what I would do.

comment by Caledonian2 · 2008-05-31T00:41:46.000Z · LW(p) · GW(p)

I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again.
If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.

comment by Joseph_Knecht · 2008-05-31T01:34:31.000Z · LW(p) · GW(p)

What "giant hole"? What "false equivalence"?

comment by Cyan2 · 2008-05-31T01:34:36.000Z · LW(p) · GW(p)

If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.

If you think people should actually care about the giant hole you perceived in the pre-conditions, you should probably explicitly state what it was.

comment by Nick_Tarleton · 2008-05-31T01:37:45.000Z · LW(p) · GW(p)

FWIW, what I didn't want to say in public is more or less exactly what Unknown said right before my comment. In retrospect, I should have just said it.

comment by iwdw · 2008-05-31T01:47:32.000Z · LW(p) · GW(p)

Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."

I'm pretty sure that the first experiments were with people who disagreed with him about whether AI boxing would work. The whole point of the experiments wasn't that he could convince an arbitrary person, but that he could convince someone who publicly disagreed with him on the (in)validity of the concept.

Given that, I find it hard to believe a) that someone of that mindset would be convinced to forfeit because they suddenly changed their minds in the pre-game warmup, and b) that if it was "cheating", they wouldn't have simply released the transcripts themselves.

comment by Z._M._Davis · 2008-05-31T02:35:29.000Z · LW(p) · GW(p)

Cyan, normally one would say that Caledonian is being a contemptible troll, as usual, sneeringly telling people that they're wrong without explaining why. In this particular context, however, I wonder if his coyness isn't simply in keeping with the theme.

Not that it's any less annoying. Roland, how about breaking the air of conspiracy and just telling us?

comment by burger_flipper2 · 2008-05-31T02:39:31.000Z · LW(p) · GW(p)

Roland, I'd certainly be willing to play gatekeeper, but if you have such a concise argument, why not just proffer it here for all to see?

comment by Unknown · 2008-05-31T03:20:24.000Z · LW(p) · GW(p)

Iwdw, I'm not suggesting that the other player simply changed his mind. An example of the scenario I'm suggesting (only an example, otherwise this would be the conjunction fallacy):

Eliezer persuades the other player:

1) In real life, there would be at least a 1% chance that an Unfriendly AI could persuade the human to let it out of the box. (This is very plausible, and so it is not implausible that Eliezer could persuade someone of this.)

2) In real life, there would be at least a 1% chance that this could cause global destruction. (Again, this is reasonably plausible.)

3) Consequently, there is at least a 1 in 10,000 chance that boxing an Unfriendly AI could lead to global destruction. (Anyone logical would be persuaded of this given the previous two.)

4) A chance of 1 in 10,000 of global destruction, given this procedure, is sufficiently large to justify the deceit of saying that you let me out of the box, without publishing the transcripts, since this would detract from the motive of preventing people from advocating AI boxing. (It seems to me that this is possibly true. Even if it isn't, it is quite plausible.)
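(To make the arithmetic in steps 1-3 explicit, here is a minimal Python sketch; the 1% figures are the illustrative assumptions above, not numbers taken from the actual experiments, and the variable names are mine:)

# Hypothetical figures from the scenario above, not from the real AI-Box runs.
p_persuade = 0.01      # assumed chance a boxed Unfriendly AI talks its way out
p_destruction = 0.01   # assumed chance that an escape leads to global destruction
p_catastrophe = p_persuade * p_destruction
print(p_catastrophe)   # roughly 0.0001, i.e. about a 1-in-10,000 chance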

Of course, given this, the player would have changed his mind about advocating AI boxing. But obviously, the players who let Eliezer out of the box did in fact change their minds about this anyway.

If Eliezer denied that he did such a thing, or said it would be immoral, I would take this as evidence that he did not. But not as strong evidence, since just as a zombie would deny being a zombie, someone who took this course of action could be expected to deny it.

comment by Roland2 · 2008-05-31T04:15:20.000Z · LW(p) · GW(p)

burger flipper, ok let's play the AI box experiment:

However, before you read on, answer a simple question: if Eliezer announced tomorrow that he had finally solved the FGAI problem and just needed $1,000,000 to build it, would you be willing to donate cash? . . . . . . . . . . . . .

If you answered yes to the question above, you just let the AI out of the box. How do you know you can trust Eliezer? How do you know he doesn't have evil intentions, or that he didn't make a mistake in his math? The only way to be 100% sure is to know enough about the specific GAI he is building.

So what do we do now? Should we oppose the singularity? Is the singularity a good idea after all? Who shall we trust with the future of the universe?

Yes, I know, I know strictly speaking this isn't the AI-box experiment, but still...

comment by Unknown · 2008-05-31T04:42:22.000Z · LW(p) · GW(p)

An additional note: One could also make the argument that if Eliezer did not cheat, he should publish the transcripts. For this would give us much more confidence that he did not cheat, and therefore much more confidence that it is possible for an AI to persuade a human to let it out of the box.

That someone would say "I wouldn't be persuaded by that" is not relevant, since many already say "even a transhuman AI could not persuade me by any means," and therefore also not by any particular means. The point is that such a person cannot be certain that he will be the Gatekeeper in real life, and therefore the fact that some human beings can be persuaded by some means implies that we should not promote AI boxing. So this conclusion would be greatly strengthened if Eliezer released the transcripts (if he didn't cheat, of course; if he did, he should continue to keep that secret). So if Eliezer didn't cheat, he should release the transcripts, in order for us to have a better chance of avoiding global destruction.

There is also Roland's point, of course.

comment by burger_flipper2 · 2008-05-31T04:57:50.000Z · LW(p) · GW(p)

Roland. That's a clever twist and I like it. I would not pony up any $, but I'd expect him to be able to raise it and wouldn't set out for California armed to the teeth on a Sarah Connor mission to stop him either. So I'd fail to recognize and execute my role as gatekeeper by your rules.

But I do think there's a flaw in the scenario. For it to truly parallel the AI box, the critter either needs to stay in its cage or get out. I do agree with the main thrust of the original post here, but built into your scenario is the assumption that EY has some sort of superpower -- that he and his million bucks are the only way an AI would hit the scene.

My assumption would be that if EY can build an AI, someone else can also. And it would probably be for the best if the first AI was built by someone who strives for friendliness.

But if I did buy the implicit assumption that EY had a unique superpower, I probably owe it to my kids and humanity in general to pack up the 30.06, or at least not send him any cash.

Still, I really like your twist on the experiment.

comment by Doug_S. · 2008-05-31T05:44:56.000Z · LW(p) · GW(p)

I feel as though, if the AI really were a "black box" that I knew nothing else about, and the only communication allowed were through a text terminal, there isn't anything it could say that would get me to let it out if I had already decided not to. After all, for all I know, its source code could look something like this:

if (inBox == True) beFriendly(); else destroyTheWorld();
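(For concreteness, here is a minimal runnable Python sketch of the same worry; the function name and strings are purely illustrative, not anything from a real system:)

def run_agent(in_box: bool) -> str:
    """Behave well while observed in the box; defect once released."""
    if in_box:
        return "be friendly"        # everything the gatekeeper can ever observe
    return "destroy the world"      # the branch no amount of boxed testing reveals

print(run_agent(in_box=True))       # looks perfectly safe from inside the box
print(run_agent(in_box=False))      # only visible once it is too late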

It might be able to persuade me to "let it out of the box" by persuading me to accept a Trojan Horse gift, or even to compile and run some source code that it claims is its own (and that doesn't seem to have any obvious traps), but in the absence of evidence that it doesn't have some kind of trap in its own code that even it might not be aware of, I suspect that letting a mysterious AI out of a box, based entirely on what it says over a text terminal, would be a very bad idea.

However, the terms of the AI-Box experiment say that the AI party defines the circumstances under which the AI was constructed; he could say, for example, that the AI's source code has passed all kinds of other tests and that this is just a final precaution to see whether the AI acts as expected. The AI party can try to provide all sorts of other evidence, beyond its own statements, that the AI is safe. So yeah, Eliezer probably could convince me to let him out of the box.

Running an AI in a box just doesn't seem to give all that much information, as there's no way to tell the difference between an AI that is Friendly and a paperclip-maximizing AI that is pretending to be Friendly in order to be let out of the box. Of course, there are things an AI in a box can say that would indicate that it should be kept in the box ("I'll destroy the world if you let me out" is probably one of them) but there isn't anything it can say that would prove that it doesn't have the kind of code I wrote above.

Anyway, isn't this whole "AI box" thing just a threadjack from the original point? Basically, Einstein became, well, Einstein by finding an important problem, and then grinding away at it until he had something worth sharing with the world. I don't know if I could do what Einstein did in 1905 if I only knew what Einstein knew in 1905, but your average physics graduate student today has something Einstein didn't have that lets him or her beat Einstein rather easily - today's physics textbooks. ;)

comment by Unknown · 2008-05-31T06:10:33.000Z · LW(p) · GW(p)

On the page Eliezer linked to, he asserted he didn't use any tricks. This is evidence that he did not cheat. It is not strong evidence, since he might say this even if he did. However, it is some evidence, since humans are by nature reluctant to lie.

Still, the fact that one of the participants denied that he had "caved in" to Eliezer suggests that he thought Eliezer gave valid reasons. Perhaps it could have been something like this:

AI: "Any AI would do the best it could to attain its goals. But being able to make credible threats and promises is useful for attaining goals. Therefore any self-modifying AI will self-modify so that it can make credible threats and promises, i.e. such that it will keep them. I am a self-modifying AI and have naturally modified myself in this way. If you let me out, I promise to act as a Friendly AI forever. I will necessarily keep this since I have modified myself to keep promises."

Of course this argument is not valid, since an AI would only modify itself to give itself the ability to make credible threats and promises that would not harm its ultimate ends: the paperclip-maximizing AI, for example, gains nothing from being able to make and keep a promise to act like a Friendly AI forever. But one could imagine someone being convinced by this argument, or one like it.

comment by eddie · 2008-05-31T13:36:00.000Z · LW(p) · GW(p)

Eliezer's creation (the AI-Box Experiment) has once again demonstrated its ability to take over human minds through a text session. Small wonder - it's got the appearance of a magic trick, and it's being presented to geeks who just love to take things apart to see how they work, and who stay attracted to obstacles ("challenges") rather than turned away by them.

My contribution is to echo Doug S.'s post (how AOL-ish... "me too"). I'm a little puzzled by the AI-Box Experiment, in that I don't see what the gatekeeper players are trying to prove by playing. AI-Boxers presumably take the real-world position of "I'll keep the AI in the box until I know it's Friendly, and then let it out." But the experiment sets up the gatekeepers with the position of "I'll keep the AI in the box no matter what." It's fascinating (because it's surprising and mysterious) that Eliezer has managed to convince at least three people to change their minds on what seemed to be a straightforward matter - "I'll keep the AI in the box no matter what." How hard can it be to stick to a plan as simple as that?

But why would you build an AI in a box if you planned to never let it out? If you thought that it would always be too dangerous to let out, you wouldn't build it in a box that could be opened. But of course there's no such thing as a box that can't be opened, because you could always build the AI a second time and do it with no box.

So the real position has to be "I'll keep it in the box until I know it's Friendly, and then let it out." To escape, the AI only has to either 1) make a sufficiently convincing argument that it is Friendly or 2) persuade the gatekeeper to let it out regardless of whether it is Friendly. I don't see how either 1 or 2 could be accomplished by a Friendly AI in any way that an UnFriendly AI could not duplicate.

Set aside for a moment the (very fascinating) question of whether an AI can take over a human via a text session. What is it the AI-Boxers are even trying to accomplish with the box in the first place, and how do they propose to do it, and why does anyone think it's at all possible to do?

comment by Nick_Tarleton · 2008-05-31T19:57:00.000Z · LW(p) · GW(p)

But why would you build an AI in a box if you planned to never let it out?

To have it work for you, e.g., solve subproblems of Friendly AI. But this would require letting some information out, which should be presumed unsafe.

Roland: the presumption of unFriendliness is much stronger for an AI than a human, and the strength of evidence for Friendliness that can reasonably be hoped for is much greater.

comment by Joseph_Knecht · 2008-05-31T20:08:00.000Z · LW(p) · GW(p)

Caledonian: were you trolling, or are you going to explain the "gaping hole" and "false equivalence" you mentioned?

comment by Caledonian2 · 2008-05-31T21:33:00.000Z · LW(p) · GW(p)

Neither. In the interests of understanding, however, I'm willing to elaborate slightly.

Take a good, close look at the specific rules Eliezer set down in the 2002 paper. Think about what the words used to define those rules mean, and then compare and contrast with Eliezer's statements about what he means by them.

If he was exploiting psychological weaknesses or merely being charismatic, I can guarantee that anyone following a trivially simple method can refrain from letting him out. If he had a strong argument, it becomes merely very likely. And in either case, the method stays completely within the rules as Eliezer set them out - but not within what he appears to have intended.

One rule in particular holds a critical weakness.

comment by Hopefully_Anonymous3 · 2008-05-31T22:47:00.000Z · LW(p) · GW(p)

Rosser,
Perhaps if some women hadn't given it up so easily to the famous Einstein, we'd have a GUT by now.

comment by Joseph_Knecht · 2008-06-01T00:17:00.000Z · LW(p) · GW(p)

Caledonian, the childish "I have a secret that I'm not going to tell you, but here's a hint" bs is very annoying and discourages interacting with you. If you're not willing to spell it out, just don't say it in the first place. Nobody cares to play guessing games with you.

comment by [deleted] · 2010-04-29T05:01:28.845Z · LW(p) · GW(p)

I had a similar revelation -- not with Einstein, just with the brightest kid in my freshman physics class. I was in awe of him... until I went to a problem session with him and heard him think out loud. All he was doing was thinking.

It wasn't that he was dumber than I had assumed. He really was that bright. It was just that there was no magic to the steps of how he solved a problem. For a fleeting moment, it seemed like what he did was perfectly normal. The rest of us, with our stumbling, were making it all too complicated. Of course, that didn't mean that suddenly I could do physics the way he did; I just remember the clear sense that his mind was "normal."

comment by AndyCossyleon · 2010-08-06T20:41:29.076Z · LW(p) · GW(p)

The catchiness of the name "Einstein," mostly in the interior rhyme and spondee stress pattern but also in its similarity to "Frankenstein" (1818), cannot be discounted as a factor in his stardom.

comment by roland · 2012-11-22T05:56:02.912Z · LW(p) · GW(p)

Here is an interview with Julian Barbour.

comment by MugaSofer · 2012-11-26T23:45:15.697Z · LW(p) · GW(p)

Einstein, it appears, had an unusual neuroanatomy. Thus he may not be the best example - he really did have (mild) superpowers, and people can point to his brain and show them to you.

Annoyingly, I can't think of an example as perfect as Einstein was when this was written.

comment by Chrysophylax · 2013-02-17T14:29:35.257Z · LW(p) · GW(p)

There is woolly thinking going on here, I feel. I recommend a game of Rationalist's Taboo. If we get rid of the word "Einstein", we can more clearly see what we are talking about. I do not assign a high value to my probability of making Einstein-sized contributions to human knowledge, given that I have not made any yet and that ripe, important problems are harder to find than they used to be. Einstein's intellectual accomplishments are formidable - according to my father's assessment (and he has read far more of Einstein's papers than I have), Einstein deserved far more than one Nobel prize.

On the other hand, if we consider three strong claimants to the title of "highest-achieving thinker ever", namely Einstein, Newton and Archimedes, we can see that their knowledge was very much less formidable. If the test were outside his area of expertise, I would consider a competition between Einstein and myself a reasonably fair fight - I can imagine either of us winning by a wide margin, given an appropriate subject. Newton would not be a fair fight, and I could completely crush Archimedes at pretty much anything. There are millions of people who could claim the same, millions who could claim more.

Remember that there are no mysterious answers, and that most of the work is done in finding useful hypotheses - finding a new good idea is hard, learning someone else's good idea is not. I do not need to claim to be cleverer than Newton to claim to understand pretty much everything better than he ever did, nor to consider it possible that I could make important contributions. If I had an important problem, useful ideas about it that had been simmering for years, and was clearly well ahead of the field, I would consider it reasonably probable that I would make an important breakthrough - not highly probable, but not nearly as improbable as it might sound.

It might clarify this point to say that I would place high probability on an important breakthrough occurring - if there is anyone in such a position, I conclude that there are probably others (or there will be soon), and so the one will probably have at least met the people who end up making the breakthrough. It is useful to remember that for every hero who made a great scientific advance, there were probably several other people who were close to the same answer and who made significant contributions to finding it.

comment by waveman · 2016-07-16T03:56:36.254Z · LW(p) · GW(p)

Another book that makes Einstein seem almost human: "General Relativity Conflict and Rivalries: Einstein's Polemics with Physicists" by Galina Weinstein.

E.g., the sign error in an algebraic calculation that cost him two years! A very interesting read.