The Level Above Mine

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-26T09:18:34.000Z · LW · GW · Legacy · 357 comments

(At this point, I fear that I must recurse into a subsequence; but if all goes as planned, it really will be short.)

I once lent Xiaoguang "Mike" Li my copy of "Probability Theory: The Logic of Science".  Mike Li read some of it, and then came back and said:

"Wow... it's like Jaynes is a thousand-year-old vampire."

Then Mike said, "No, wait, let me explain that—" and I said, "No, I know exactly what you mean."  It's a convention in fantasy literature that the older a vampire gets, the more powerful they become.

I'd enjoyed math proofs before I encountered Jaynes.  But E.T. Jaynes was the first time I picked up a sense of formidability from mathematical arguments.  Maybe because Jaynes was lining up "paradoxes" that had been used to object to Bayesianism, and then blasting them to pieces with overwhelming firepower—power being used to overcome others.  Or maybe the sense of formidability came from Jaynes not treating his math as a game of aesthetics; Jaynes cared about probability theory, it was bound up with other considerations that mattered, to him and to me too.

For whatever reason, the sense I get of Jaynes is one of terrifying swift perfection—something that would arrive at the correct answer by the shortest possible route, tearing all surrounding mistakes to shreds in the same motion.  Of course, when you write a book, you get a chance to show only your best side.  But still.

It spoke well of Mike Li that he was able to sense the aura of formidability surrounding Jaynes.  It's a general rule, I've observed, that you can't discriminate between levels too far above your own. E.g., someone once earnestly told me that I was really bright, and "ought to go to college".  Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess.

So, having heard Mike Li compare Jaynes to a thousand-year-old vampire, one question immediately popped into my mind:

"Do you get the same sense off me?" I asked.

Mike shook his head.  "Sorry," he said, sounding somewhat awkward, "it's just that Jaynes is..."

"No, I know," I said.  I hadn't thought I'd reached Jaynes's level. I'd only been curious about how I came across to other people.

I aspire to Jaynes's level.  I aspire to become as much the master of Artificial Intelligence / reflectivity, as Jaynes was master of Bayesian probability theory.  I can even plead that the art I'm trying to master is more difficult than Jaynes's, making a mockery of deference.  Even so, and embarrassingly, there is no art of which I am as much the master now, as Jaynes was of probability theory.

This is not, necessarily, to place myself beneath Jaynes as a person—to say that Jaynes had a magical aura of destiny, and I don't.

Rather I recognize in Jaynes a level of expertise, of sheer formidability, which I have not yet achieved.  I can argue forcefully in my chosen subject, but that is not the same as writing out the equations and saying:  DONE.

For so long as I have not yet achieved that level, I must acknowledge the possibility that I can never achieve it, that my native talent is not sufficient.  When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself.  Marcello thought for a moment and said "John Conway—I met him at a summer math camp."  Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab.  I inquired how Marcello had arrived at the judgment.  Marcello said, "He just struck me as having a tremendous amount of mental horsepower," and started to explain a math problem he'd had a chance to work on with Conway.

Not what I wanted to hear.

Perhaps, relative to Marcello's experience of Conway and his experience of me, I haven't had a chance to show off on any subject that I've mastered as thoroughly as Conway had mastered his many fields of mathematics.

Or it might be that Conway's brain is specialized off in a different direction from mine, and that I could never approach Conway's level on math, yet Conway wouldn't do so well on AI research.

Or...

...or I'm strictly dumber than Conway, dominated by him along all dimensions.  Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.

Is it damaging to my ego to confess that last possibility?  Yes.  It would be futile to deny that.

Have I really accepted that awful possibility, or am I only pretending to myself to have accepted it?  Here I will say:  "No, I think I have accepted it."  Why do I dare give myself so much credit?  Because I've invested specific effort into that awful possibility.  I am blogging here for many reasons, but a major one is the vision of some younger mind reading these words and zipping off past me.  It might happen, it might not.

Or sadder:  Maybe I just wasted too much time on setting up the resources to support me, instead of studying math full-time through my whole youth; or I wasted too much youth on non-mathy ideas.  And this choice, my past, is irrevocable.  I'll hit a brick wall at 40, and there won't be anything left but to pass on the resources to another mind with the potential I wasted, still young enough to learn.  So to save them time, I should leave a trail to my successes, and post warning signs on my mistakes.

Such specific efforts predicated on an ego-damaging possibility—that's the only kind of humility that seems real enough for me to dare credit myself.  Or giving up my precious theories, when I realized that they didn't meet the standard Jaynes had shown me—that was hard, and it was real.  Modest demeanors are cheap.  Humble admissions of doubt are cheap.  I've known too many people who, presented with a counterargument, say "I am but a fallible mortal, of course I could be wrong" and then go on to do exactly what they planned to do previously.

You'll note that I don't try to modestly say anything like, "Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field."

Because I do know... that's not how it works.

357 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Manuel_Moertelmaier · 2008-09-26T09:59:09.000Z · LW(p) · GW(p)

In a few years, you will be as embarrassed by these posts as you are today by your former claims of being an Algernon, or that a logical paradox would make an AI go gaga, the tMoL argumentation you mentioned in the last few days, the Workarounds for the Laws of Physics, Love and Life Just Before the Singularity, and so on and so forth. Ask yourself: Will I have to delete this, too?

And the person who told you to go to college was probably well-meaning, and not too far from the truth. Was it Ben Goertzel?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-26T10:16:52.000Z · LW(p) · GW(p)

Despite all fallibility of memory, I would be shocked to learn that I had ever claimed that a logical paradox would make an AI go gaga. Where are you getting this from?

Ben's never said anything like that to me. The comment about going to college was from an earnest ordinary person, not acquainted with me. And no, I didn't snap at them, or laugh out loud; it was well-intentioned advice. Going to college is a big choice for a lot of people, and this was someone who met me, and saw that I was smart, and thought that I seemed to have the potential to go to college.

Which is to imply that if there's a level above Jaynes, it may be that I won't understand it until I reach Jaynes's level - to me it will all just look like "going to college". If I recall my timeline correctly, I didn't comprehend Jaynes's level until I had achieved the level of thinking naturalistically; before that time, to achieve a reductionist view of intelligence was my whole aspiration.

comment by Not_You · 2008-09-26T10:25:22.000Z · LW(p) · GW(p)

Although I've never communicated with you in any form, and hence don't know what it's like for you to answer a question of mine, or correct a misconception (you have, but gradually), or outright refute a strongly held belief...or dissolve a Wrong Question...

...You're still definitely the person who strikes me as inhumanly genius - above all else.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-26T10:33:55.000Z · LW(p) · GW(p)

Unfortunately for my peace of mind and ego, people who say to me "You're the brightest person I know" are noticeably more common than people who say to me "You're the brightest person I know, and I know John Conway". Maybe someday I'll hit that level. Maybe not.

Until then... I do thank you, because when people tell me that sort of thing, it gives me the courage to keep going and keep trying to reach that higher level.

Seriously, that's how it feels.

Replies from: None, diegocaleiro, Technoguyrob
comment by [deleted] · 2011-01-07T01:22:16.009Z · LW(p) · GW(p)

I think maybe Being the Smartest Person is a fundamentally bad, unhelpful motivator, and you should get some cognitive therapy. Of course, you would immediately conclude (correctly) that you are smarter than your mental health professional and stop listening (stupidly and non-volitionally) to them. So this is probably a road you're going to have to walk. Here's hoping you don't have a horrible self- or other-destructive flameout.

comment by diegocaleiro · 2011-08-08T08:13:20.201Z · LW(p) · GW(p)

You are the brightest person I know. And I know Dan Dennett, Max Tegmark, Robert Trivers, Marcello, Minsky, Pinker and Omohundro.

Unfortunately, those are non-math geniuses, so that speaks for only some sub-areas of cognition which, being less strictly categorizable than the clearly scalable domain of math, are not subject to your proposed rule of "one standard deviation above you they blur".

Replies from: JackEmpty, lessdazed, Oscar_Cunningham
comment by JackEmpty · 2011-08-08T12:40:48.676Z · LW(p) · GW(p)

"Know" in the sense EY used it != have read, watched interviews, etc.

I took it to mean more personal interaction (even if through comments online).

Replies from: lessdazed, diegocaleiro
comment by lessdazed · 2011-08-08T13:22:18.976Z · LW(p) · GW(p)

Especially since "know of" exists as a common phrase to cover the meaning "have read, watched interviews, etc."

comment by diegocaleiro · 2011-08-09T03:45:19.555Z · LW(p) · GW(p)

I have had classes with them, asked questions, and met them personally. I should have anticipated disbelief. And yes, I didn't notice that I categorized Marcello as non-math, sorry Marcello!

Replies from: JackEmpty
comment by JackEmpty · 2011-08-09T12:21:13.387Z · LW(p) · GW(p)

Oh. Cool! Less disbelief, more illusion of transparency.

If a randomly selected person says, "I know X (academically) famous people," I myself usually assume they mean through impersonal means.

Update'd. Carry on :D

comment by lessdazed · 2011-08-08T13:27:07.043Z · LW(p) · GW(p)

Non-math geniuses who grok and advocate for unpopular reductionism are in one sense greater than mere superheroes who know the math.

Replies from: None
comment by [deleted] · 2011-08-08T14:33:44.425Z · LW(p) · GW(p)

In another sense, non-math geniuses advocating for reductionism are no better than the anti-vaccine lobby.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2011-11-13T09:21:16.197Z · LW(p) · GW(p)

What sense is that?

Replies from: JohnWittle
comment by JohnWittle · 2013-03-17T16:01:49.227Z · LW(p) · GW(p)

The sense in which they did not arrive at their beliefs by starting with sane priors which did not presuppose reductionism, and then updating on evidence until they independently discovered reductionism.

I disagree with the grandparent, however: I believe that (most) non-math-geniuses advocating for reductionism are more akin to Einstein believing in General Relativity before any novel predictions had been verified: recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs.

Replies from: whowhowho
comment by whowhowho · 2013-03-17T16:41:29.175Z · LW(p) · GW(p)

The "absurdity" of non-reductionism seems to have evaded Robert Laughlin, Jaorn Lanier and a bunch of other smart people.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-18T21:00:45.914Z · LW(p) · GW(p)

I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".

Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.

Can you explain to me how it might work?

Edit: I googled "Robert Laughlin Reductionism" and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:

Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).

Yudkowsky has a great refutation of using the description "emergent" to describe phenomena, in The Futility of Emergence. From there:

I have lost track of how many times I have heard people say, "Intelligence is an emergent phenomenon!" as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is "emergent"? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "emergence" to other sciences merely mundane.

And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.

Further down in the paper, we have this:

They point to higher organizing principles in nature, e.g. the principle of continuous symmetry breaking, localization, protection, and self-organization, that are insensitive to and independent of the underlying microscopic laws and often solely determine the generic low-energy properties of stable states of matter (‘quantum protectorates’) and their associated emergent physical phenomena. “The central task of theoretical physics in our time is no longer to write down the ultimate equations but rather to catalogue and understand emergent behavior in its many guises, including potentially life itself. We call this physics of the next century the study of complex adaptive matter” (Laughlin and Pines 2000).

Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.

He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a "higher" perspective. A good example, he says, is genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.

He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.

He specifically objects that reductionism isn't always the "most complete" description of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.

I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.

This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.

This is the only coherent way I could possibly imagine consciousness being an "emergent phenomenon", or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?

At first when I read EY's "The Futility of Emergence" article, I didn't understand. It seemed to me that there's no way people actually think of "emergence" as being a scientific explanation for how a phenomenon occurs such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually. I didn't think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn't mean you'll be able to predict what the clock will say based on the positions of the gears (for sufficiently "complex" clocks). And so I thought that EY was jumping the gun in this fight.

But perhaps he read this very paper, because Laughlin uses the word "emergent phenomenon" to describe behavior he doesn't understand, as if that's an explanation for the phenomenon. Even though you can't use this piece of information to make any predictions as to how reality is. Even though it doesn't constrain your anticipation into fewer possibilities, which is what real knowledge does. He uses this word as a substitute for "magic"; he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism for the phenomenon is not enough to fully explain the phenomenon, that additional aspects of the phenomenon are simply uncaused, or that there is a special-case exclusion in the universe's laws for the phenomenon.

He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not possibly have been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomena. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe? That the first time DNA occurred on Earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?

I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don't think the laws of physics contain such a clause.

I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a "smart person", but he isn't smart enough to realize that calling the creation of humans from DNA an "emergent phenomenon" is literally equivalent to calling it a "magic phenomenon", in that it doesn't limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...

Replies from: CCC, EHeller, whowhowho
comment by CCC · 2013-03-19T09:29:03.808Z · LW(p) · GW(p)

It's a bit of an aside to your main point, but there are good arguments to support the assertion that DNA is only a partial recipe for an organism, such as a human. The remaining information is present in the environment of the mother's womb in other forms - for example, where there's an ambiguity in the DNA with regards to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.

To look at your main point; if I were to present an argument against reductionism, I would point to the personal computer. This is a device constructed in order to run software; that is, to follow a list of instructions that manipulate binary data. Once you have a list of all the instructions that the computer can follow, and what these instructions do, a thorough electrical analysis of the computer's circuitry isn't going to provide much new information; and it will be a lot more complicated, and harder to understand. There's a conceptual point, there, at the level of individual software instructions, where further reductionism doesn't help to understand the phenomenon, and does make the analysis more complicated, and harder to work with.

A thorough electrical analysis is, of course, useful if one wishes to confirm that the stated behaviour of the basic software commands is both correctly stated, and free of unexpected side-effects. However, an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise.

Replies from: Morendil, TheOtherDave
comment by Morendil · 2013-03-19T09:37:33.212Z · LW(p) · GW(p)

an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise

Well, yes - but that arises from the fact that such devices are man-made, and (out of respect to our brains' limitations) designed to isolate the layers of explanation from one another - to obviate the need for a fully reductionistic account. The argument will not apply to things not man-made.

Replies from: CCC, EHeller
comment by CCC · 2013-03-19T10:26:05.209Z · LW(p) · GW(p)

The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms; I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don't see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth; rather than a neuron-by-neuron analysis of my own brain.

And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal.

Such situations are rare, but not entirely unknown.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-19T17:39:40.369Z · LW(p) · GW(p)

I disagree with your entire premise. I think we should pin down this concept of "levels of perspective" with some good jargon at some point, but regardless...

You can look at a computer from the level of perspective of "there are windows on the screen and I can move the mouse around. I can manipulate files on the hard drive with the mouse and the keyboard, and those changes will be reflected inside information boxes in the windows." This is the perspective most people see a computer from, but it is not a complete description of a computer (i.e. if someone unfamiliar with the concept of computers heard this description, they could not build a computer from base materials.)

You might also see the perspective, "There are many tiny dots of light on a flat surface, lit up in various patterns. Those patterns are caused by electricity moving in certain ways through silica wires arranged in certain ways." This is, I think, one level lower, but an unfamiliar person could not build a computer from scratch from this description.

Another level down, the description might be: "There is a CPU, which is composed of hundreds of thousands of transistors, arranged into logic gates such that when electricity is sent through them you can perform meaningful calculations. These calculations are written in files using a specific instruction set ("assembly language"). The files are stored on a disk in binary, with the disk containing many cesium atoms arranged in a certain order, which have either an extra electron or do not, representing 1 and 0 respectively. When the CPU needs to temporarily store a value useful in its calculations, it does so in the RAM, which is like the disk except much faster and smaller. Some of the calculations are used to make certain square-shaped lights on a large flat surface blink in certain ways, which provides arbitrary information to the user". We are getting to the point where an unfamiliar human might be able to recreate a computer from scratch, and therefore can be said to actually "understand" the system.

But still yet there are lower levels. Describing the actual logic gate organization in the CPU, the system used by RAM to store variables, how the magnetic needle accesses a specific bit on the hard drive by spinning it... All of these things must be known and understood in order to rebuild a computer from scratch.

Humans designed the computer at the level of "logic gates", "bits on a hard drive", "registers", etc., and so it is not necessary to go deeper than this to understand the entire system (just as you don't have to go deeper than "gears and cogs" to understand how a clock works, or how you don't have to go deeper than "classical physics (billiard balls bouncing into each other)" to understand how a brain works).

But I hope that it's clear that the mechanisms at the lower levels of a system completely contain within them the behavior of the higher levels of the system. There are no new behaviors which you can only learn about by studying the system from a higher level of perspective; those complicated upper-level behaviors are entirely formed by the simple lower-level mechanisms, all the way down to the wave function describing the entire universe.

That is what reductionism means.  If you know the state of the entire wavefunction describing the universe, you know everything there is to know about the universe.  You could use it to predict that, in some Everett branches, the assassination of Franz Ferdinand on the third planet from the star Sol in the Milky Way galaxy would cause a large war on that planet.  You could use it to predict the exact moment at which any particular "slice" of the wavefunction (representing a particular possible universe) will enter its maximum entropy state.  You could use it to predict any possible behavior of anything and you will never be surprised.  That is what it means to say that all of reality reduces down to the base-level physics.  That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.

If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization (which would require, by the way, all of reality to keep track not only of mass and of velocity but also of its organizational structure relative to nearby reality), then I will accept such a demonstration as being a complete and utter refutation of reductionism. But there is no such behavior.

Replies from: whowhowho, CCC
comment by whowhowho · 2013-03-19T18:52:02.886Z · LW(p) · GW(p)

That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.

That's a fusion of reductionism and determinism. Reductionism isn't necessarily false in an indeterministic universe. What is more pertinent is being able to predict higher-level properties and laws from lower-level properties and laws (synchronously, in the latter case).

Replies from: JohnWittle
comment by JohnWittle · 2013-03-20T06:40:13.131Z · LW(p) · GW(p)

No it isn't? I did not mean you would be able to make predictions which came true 100% of the time. I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.

Replies from: whowhowho
comment by whowhowho · 2013-03-20T13:00:37.730Z · LW(p) · GW(p)

No it isn't?

Yes it is.

"A property of a system is said to be emergent if it is in some sense more than the "sum" of the properties of the system's parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to various sub-varieties of emergence." -- WP

I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.

Still determinism, not reductionism. In a universe where:

* 1a. There are lower-level properties...
* 1b. ...operating according to a set of deterministic laws.
* 2a. There are also higher-level properties...
* 2b. ...irreducible to and unpredictable from the lower-level properties and laws...
* 2c. ...which follow their own deterministic laws.

You would be able to predict the future with complete accuracy, given both sets of laws and two sets of starting conditions. Yet the universe being described is explicitly non-reductionistic.

Replies from: Kindly, JohnWittle
comment by Kindly · 2013-03-20T13:38:48.868Z · LW(p) · GW(p)

I'm a bit confused. What exactly defines a "higher-level" property, if not that it can be reduced to lower-level properties?

Replies from: whowhowho
comment by whowhowho · 2013-03-20T14:35:09.382Z · LW(p) · GW(p)

E.g., being macroscopic, or featuring only in the special sciences.

comment by JohnWittle · 2013-03-21T06:04:53.726Z · LW(p) · GW(p)

2a. There are also higher-level properties... 2b. ...irreducible to and unpredictable from the lower-level properties and laws...

All this means is that, in addition to the laws which govern low-level interactions, there are different laws which govern high-level interactions. But they are still laws of physics; they just sound like "when these certain particles are arranged in this particular manner, make them behave like this, instead of how the low-level properties say they should behave". Such laws are still fundamental laws, on the lowest level of the universe. They are still a part of the code for reality.

But you are right:

unpredictable from lower level properties

Which is what I said:

That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic [lowest] level of perspective.

Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.

Replies from: whowhowho
comment by whowhowho · 2013-03-21T09:18:28.339Z · LW(p) · GW(p)

But they are still laws of physics,

Microphysical laws map microphysical states to other microphysical states. Top-down causation maps macrophysical states to microphysical states.

Such laws are still fundamental laws, on the lowest level of the universe.

In the sense that they are irreducible, yes. In the sense that they are concerned only with microphysics, no.

Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.

"Deterministic" typically means that an unbounded agent will achieve probabilities of 1.0.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-21T09:45:06.226Z · LW(p) · GW(p)

Top-down causation maps macrophysical states to microphysical states

Can you name any examples of such a phenomenon?

"Deterministic" typically means that an unbounded agent will achieve probabilities of 1.0.

Oh, well in that case quantum physics throws determinism out the window for sure. I still think there's something to be said for correctly assigning subjective probabilities to your anticipations, such that when you think something will happen with a 50% chance, it happens half the time, i.e. you are correctly calibrated.

An unbounded agent in our universe would be able to achieve such absolutely correct calibration; that's all I meant to imply.
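
A minimal sketch of what that calibration claim means in practice (my own illustration, not something from the thread; the data and function name are made up):

    # Toy calibration check: among all predictions assigned probability p,
    # the predicted event should occur roughly a fraction p of the time.
    from collections import defaultdict

    def calibration_table(predictions):
        """predictions: iterable of (assigned_probability, event_happened) pairs."""
        buckets = defaultdict(list)
        for p, happened in predictions:
            buckets[p].append(happened)
        return {p: sum(outcomes) / len(outcomes) for p, outcomes in buckets.items()}

    # A well-calibrated forecaster's 50% predictions come true about half the time.
    example = [(0.5, True), (0.5, False), (0.5, True), (0.5, False),
               (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True)]
    print(calibration_table(example))  # {0.5: 0.5, 0.9: 0.8}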

comment by CCC · 2013-03-20T07:35:26.109Z · LW(p) · GW(p)

You are right; my example was a bad one, and it does not support the point that I thought it supported. The mere fact that something takes unreasonably long to calculate does not mean that it is not an informative endeavour. (I may have been working from a bad definition of reductionism).

If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization

Um. I suspect that this may have been poorly phrased. If I have a lump of carbon, quite a bit of water, and a number of other elements, and I just throw them together in a pile, they're unlikely to do much - there may be a bit of fizzing, some parts might dissolve in the water, but that's about it. Yet if I reorganise the same matter into a human, I have an organisation of matter that is able to enter into a debate about reductionism, which I don't think can be predicted by looking at the individual chemical elements alone.

But that behaviour might still be predictable from looking at the matter, organised in that way, at its most basic level of perspective (given sufficient computing resources). Hence, I suspect that it is not a counter-example.

comment by EHeller · 2013-03-21T06:14:01.556Z · LW(p) · GW(p)

The argument will not apply to things not man-made.

Not true. There is a reason no one uses quarks to describe chemistry. It's futile to describe what's happening in superfluid helium in terms of individual particle movement. Far better to use a two-fluid model, and vortices.

Replies from: Morendil
comment by Morendil · 2013-03-21T07:19:19.235Z · LW(p) · GW(p)

Let me amend that: the argument will not necessarily apply to things not man-made. There is a categorical difference in this respect between man-made things and the rest, and my intent was to say: "if you're going to put up an argument against reductionism, don't use examples of man-made things".

Whereas we have good reasons to bar "leaky abstractions" from our designs, Nature labors under no such constraint. If it turns out that some particular process that happens in a superfluid helium can be understood only by referring to the quark level, we are not allowed to frown at Nature and say "oh, poor design; go home, you're drunk".

For instance, it turns out we can almost describe the universe in the Newtonian model with its relatively simple equations, a nice abstraction if it were non-leaky, but anomalies like the precession of Mercury turn up that require us to use General Relativity instead, and take it into account when building our GPS systems.

The word "futile" in this context strikes me as wishful thinking, projecting onto reality our parochial notion of how complicated a reductionistic account of the universe "should" be. Past experience tells us that small anomalies sometimes require the overthrow of entires swathes of science, in the name of reductionism: there keep turning up cases where science considers it necessary, not futile, to work things out in terms of the lower levels of description.

Replies from: EHeller
comment by EHeller · 2013-03-21T07:32:28.805Z · LW(p) · GW(p)

I think you are making a bad generalization when you turn to Newtonian mechanics vs. general relativity. There are important ways in which mesons and hadrons are emergent from quarks that have no correspondence to the relationship between Newtonian mechanics and GR.

As length scales increase, quarks go from being loosely bound fundamental degrees of freedom to not-even-good-degrees-of-freedom. At 'normal' length scales, free quarks aren't even allowed. The modern study of materials is also full of examples of emergence (it underlies much work on renormalization groups), although it's farther from my expertise, so the only example that sprang to mind was liquid helium.

comment by TheOtherDave · 2013-03-19T14:44:41.371Z · LW(p) · GW(p)

The remaining information is present in the environment of the mother's womb in other forms - for example, where there's an ambiguity in the DNA with regards to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.

As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don't have any intuitive sense of how big the remaining search space would be.

And as a nod towards staying on topic:

a thorough electrical analysis of the computer's circuitry isn't going to provide much new information;

Well, it will, and it won't.

If what I mostly care about is the computer's behavior at the level of instructions, then sure, understanding the instructions gets me most of the information that I care about. Agreed.

OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.

What counts as "information" in a colloquial sense depends a lot on my goals. It might be useful to taboo the word in this discussion.

Replies from: CCC
comment by CCC · 2013-03-20T07:26:22.098Z · LW(p) · GW(p)

As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don't have any intuitive sense of how big the remaining search space would be.

My intuition says "very, very big". Consider: depending on womb conditions, the percentage of information expressed in the baby which is encoded in the DNA might change. As an extreme example, consider a female creature whose womb completely ignores the DNA of the zygote, creating instead a perfect clone of the mother. Such an example makes it clear that the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.

OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.

I accept your point. Such an analysis does provide a more complete view of the computer, which is useful in some circumstances.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-20T15:32:35.802Z · LW(p) · GW(p)

the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.

Sure, I agree that one permissible solution is a decoder which produces an organism capable of cloning itself. And while I'm willing to discard as violating the spirit of the thought experiment decoder designs which discard the human DNA in its entirety and create a predefined organism (in much the same sense that I would discard any text-translation algorithm that discarded the input text and printed out the Declaration of Independence as a legitimate translator of the input text), there's a large space of possibilities here.

Replies from: CCC
comment by CCC · 2013-03-21T19:51:42.091Z · LW(p) · GW(p)

Would you be willing to consider, i.e. not discard, a decoder that used the human DNA as merely a list of indexes, downloading the required genes from some sort of internal lookup table?

By changing the lookup table, one can dramatically change the resulting organism; and having a different result for every viable human DNA is merely a result of having a large enough lookup table. It would be, to extend your metaphor, like a text-translation algorithm that returned the Declaration of Independence if given as input Alice in Wonderland, and returned Alice in Wonderland if given Hamlet.
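
A toy sketch of the kind of decoder being described, just to make the thought experiment concrete (the table entries are placeholders and the names are hypothetical):

    # A "decoder" that uses its input only as a key into a predefined lookup
    # table: all of the output's content lives in the table, not in the input.
    LOOKUP_TABLE = {
        "Alice in Wonderland": "Declaration of Independence",
        "Hamlet": "Alice in Wonderland",
    }

    def decode(input_text):
        # The input contributes nothing but an index.
        return LOOKUP_TABLE[input_text]

    print(decode("Alice in Wonderland"))  # -> Declaration of Independence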

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-21T20:12:03.345Z · LW(p) · GW(p)

(considers)
I would like to say "no", but can't think of any coherent reason to discard such a design.
Yeah, OK; point made.

comment by EHeller · 2013-03-21T07:45:18.175Z · LW(p) · GW(p)

I don't think you understand Laughlin's point at all. Compare a small volume of superfluid liquid helium, and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, both systems have roughly the same amount of the same constituents (protons, neutrons, electrons), but the systems behave vastly differently. We can't understand their differences by going to a lower level of description.

Modern material science/solid state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual behavior.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-21T09:15:27.358Z · LW(p) · GW(p)

Why didn't he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).

Also, I'm pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that each atom is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because you can't make 6 halves add up to anything other than a whole number, quantum effects mean that the atoms can all occupy exactly the same state and are utterly indistinguishable, even positionally, and that's what causes the strange effects. I do not know exactly how this effect reduces down to individual behavior, since I don't know exactly what "individual behavior" could mean when we are talking about atoms which cannot be positionally distinguished, but to say that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.

Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find that there are two different answers as to the mass of two quarks. In one answer, quark A is heavier than quark B, but in the other answer, quark B is heavier than quark A, and you might call this symmetry breaking, but just because when you take the measurement you get one of the answers and not the other, does not mean that the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and you'll find that your measurements agree with this: 50% of the time you'll get the first measurement, and 50% of the time you'll get the second measurement. In the MW interpretation, symmetry is not broken. The measurement doesn't show what really happened, it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.

Besides, it's not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.

Replies from: EHeller
comment by EHeller · 2013-03-21T18:35:09.233Z · LW(p) · GW(p)

I don't know why Laughlin wrote what he did; you didn't link to the paper. However, he comes from a world where solid state physics is obvious, and "everyone knows" various things (emergent properties of superfluid helium, for instance). Remember, his point of reference as a solid state physicist is quite different from a non-specialist's, so there is a huge inferential distance. Also remember that in physics "emergent" is a technical, defined concept.

Your explanation of superfluid helium isn't coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist standpoint, you should. If this doesn't sway you, let's agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it's easier to explain.

I don't think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let's ignore quantum mechanics for the time being, because we can describe what's happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground state does not. That's the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.
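
A standard textbook illustration of that definition, added here as a sketch for concreteness (the double-well scalar field; it is not an example taken from the comment itself): consider a single real field with potential

    V(\phi) = -\mu^2 \phi^2 + \lambda \phi^4, \qquad \mu^2, \lambda > 0.

The potential (and hence the Hamiltonian) is invariant under \phi \to -\phi, but its minima sit at

    \phi = \pm\sqrt{\mu^2 / (2\lambda)},

so any particular ground state picks one of the two signs. The aggregate ground state lacks a symmetry that the Hamiltonian has; the breaking is a property of the configuration, not of the underlying law.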

And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely and have no effect on the form of the low-energy theory. In these systems, to describe low-energy structures in such theories (most theories), the details of the microphysics literally do not matter. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micromodels that are easy to simulate, instead of realistic ones, and are fully confident they get the same meso and macro results.

Replies from: EHeller, JohnWittle
comment by EHeller · 2013-03-21T19:32:35.182Z · LW(p) · GW(p)

As a general pattern, I find that a lot of my physics-related posts receive downvotes (both my posts in this very thread); I then request an explanation for why, no one responds, and then I receive upvotes. What I really want is just for the people giving the downvotes to give me some feedback.

Physics was my PhD subject, and I believe that what I offer to the community is an above-average knowledge of the subject. If you believe my explanation is poorly thought out, incoherent, or just hard to parse, please downvote, but let me know what it is that's bugging you. I want to be communicating effectively, and without feedback from the people who think my above post is not helpful, I'm likely to interpret downvotes in a noisy, haphazard way.

Replies from: shminux, JGWeissman
comment by Shmi (shminux) · 2013-03-21T19:49:38.843Z · LW(p) · GW(p)

My experience indicates that a vaguely anti-Eliezerish post, like someone questioning his orthodox reductionism, MWI, or cryonics, gets an initial knee-jerk downvote, probably (that's only an untested hypothesis) from those who think that the matter is long settled and should not be brought up again. Eventually a less-partial crowd reads it and it may be upvoted or downvoted based on merits, rather than on the degree of conformance. Drawing attention to the current total vote is likely to cause this moderate crowd to actually vote, one way or another.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-22T00:46:55.504Z · LW(p) · GW(p)

When whowhowho posted a list of a couple names of people who don't like reductionism, I said to myself "if reductionism is right, I want to believe reductionism is right. If reductionism is wrong, I want to believe reductionism is wrong" etc. I then went and googled those names, since those people are smart people, and found a paper published by the first name on the list. The main arguments of the paper were, "solid state physicists don't believe in reductionism", "consciousness is too complex to be caused by the interactions between neurons", and "biology is too complex for DNA to contain a complete instruction set for cells to assemble into a human being". Since argument screens off authority and the latter two arguments are wrong, I kept my belief.

EHeller apparently has no argument with reductionism, except that it isn't a "good way to solve problems", with which I agree entirely: if you try to build an airplane by modeling air molecules, it will take too long. But that doesn't mean that if you try to build an airplane by modeling air molecules, you will get a wrong answer. You will get the right answer. But then why did EHeller state his disagreement?

The paper uses emergent in exactly the way that EY described in the Futility of Emergence, and I was surprised by that, since when I first read The Futility of Emergence I thought that EY was being stupid and that there's no way people could actually make such a basic mistake. But they do! I had no idea that people who reject reductionism actually use arguments like "consciousness is an emergent phenomenon which cannot be explained by looking at the interaction between neurons". They don't come out and say "top-down causality", which really is a synonym for magic, like EHeller did, but they do say "emergence".

When I downvoted, it was after I had made sure I understood spontaneous symmetry breaking, and that it was not top-down causality, since that was the argument EHeller presented that I took seriously. I think fewer people believe in reductionism just because of EY than you think.

Replies from: shminux, EHeller
comment by Shmi (shminux) · 2013-03-22T04:19:32.336Z · LW(p) · GW(p)

Let's start with something rather uncontroversial. We would probably all agree that, reducing a complex system top-down, like a brain or a star, we find its increasingly small constituents interacting with other constituents, all the way down to the lowest accessible level. It is also largely uncontroversial that carefully putting together all the constituents exactly the way they were before will give us the original system, or something very close to it. Note the emphasis.

However, this is not what EHeller and I mean by emergence. This analysis/synthesis process is rather unpredictable and unstable in both directions. The same high-level behavior can be produced by wildly varying constituents, and this variation can happen at multiple levels. For example, ripples in a bowl of liquid can be produced by water, or by something else, or they might be a mirror image of an actual bowl, or they might be a video someone recorded, or a computer simulation, or a piece of fabric in the wind producing a similar effect, etc. You won't know until you start digging. Looking bottom-up, I'd call it "emergence convergence".

If you look at the synthesis part, you will find that rather tiny variations in how you put things together result in enormous changes in the high-level behavior. A minor variation in the mass of one quark (which is probably determined by some hard-to-calculate term in some QFT equation) would result in a totally different universe. An evolution of the same initial conditions is likely to produce a different result in the one world you care about (but let's not get sidetracked by MWI). Laplace-style determinism has never worked in practice on any scale, as far as I know. Well, maybe there are some exceptions, I'm not sure. Anyway, my point is that what emerges from putting lots of similar things together in various ways is quite unpredictable, though you often can analyze it in retrospect.

While "emergence" is not a good explanation of anything, it happens often enough, people ought to expect it, so it has some predictive power. 2+2 might be 4, but 2+2+2+...+2 might not even be a number anymore. Like when these twos are U-235 atoms. When you get lots of people together, you might get a mindless mob, or you might get an army, you won't know the first time you try. So, while emergence is not a good explanation, the idea is useful in a Hegel-like meta way: expect qualitative jumps simply from accumulating quantitative changes. Ignore this inevitability at your own peril.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-23T21:00:46.273Z · LW(p) · GW(p)

I have no disagreement that high level behaviors are wildly variable, unpredictable, and all of the other words which mean "difficult to reduce down to lower level behaviors". Yes, wildly different constituent parts can create the same macroscopic behavior, or changing just a single lower level property in a system can cause the system to be unrecognizably different from before. But my point is that, if the universe is a physics simulator, it only has to keep track of the quarks. When I wake up in the morning, the universe isn't running a separate "human wake-up" program which tells the quarks how to behave; it's just running the standard "quark" program that it runs for all quarks. That's all it ever has to run. That's all I'm saying, when I say that I believe in reductionism. Reductionism doesn't say that it's practical for us to think in those terms, just that the universe thinks in those terms.

Finding a counterexample to this (a case where, if our universe is a physics simulator, it must run code other than one process of 'quark.c' for each quark) would be a huge blow to reductionism, and I don't think one has been found yet. Perhaps I am wrong, although I have looked pretty thoroughly at this point, as I continue to google "arguments against reductionism" and find that none of them can actually give an example of such top-down causality.
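
A minimal sketch of the distinction being drawn, in toy-simulator form (my own illustration; the function names are hypothetical and the "microphysics" is a placeholder, not a real physical model):

    # Reductionist picture: one and the same local rule is applied to every
    # particle; nothing in the code ever inspects macro-level patterns.
    def local_rule(particle):
        particle["x"] += particle["v"]  # placeholder microphysics
        return particle

    def reductionist_step(particles):
        return [local_rule(p) for p in particles]

    # The hypothetical counterexample: "top-down" rules that detect a macro-level
    # arrangement and override what the local rule alone would have produced.
    def top_down_step(particles, macro_rules):
        particles = [local_rule(p) for p in particles]
        for detect, override in macro_rules:
            if detect(particles):
                particles = override(particles)
        return particles

On the view described above, only something like reductionist_step is ever needed; a phenomenon that forced the second form would be the counterexample in question.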

Replies from: shminux, EHeller
comment by Shmi (shminux) · 2013-03-24T01:56:25.862Z · LW(p) · GW(p)

When I wake up in the morning, the universe isn't running a separate "human wake-up" program which tells the quarks how to behave; it's just running the standard "quark" program that it runs for all quarks.

First, I am not comfortable modeling the universe as a computer program, because it's implicitly dualist, with the program separate from the underlying "hardware". Or maybe even "trialist", if you further separate the hardware from the entity deciding what program to run. While this may well be the case (the simulation argument), at this point we have no evidence for it. So please be aware of the limitations of this comparison.

Second, how would you tell the difference between the two cases you describe? What would be an observable effect of running the ""human wake-up" program which tells the quarks how to behave"? If you cannot tell the difference, then all you have left is the Bayesian inference of the balance of probabilities based on Occam's razor, not any kind of certainty. Ellis and Co. actually argue that agency is an example of "top-down causality" (there would be no nuclei colliding in the LHC if humans did not decide to build it to begin with). I am not impressed with this line of reasoning, precisely because when you start investigating what "decide" means, you end up having to analyze humans in terms of lower-level structures, anyway.

Third, (explicitly, not tacitly) adopting the model of the universe as a computer program naturally leads to the separation of layers: you can run the Chemistry app on the Atomic Physics API, and you don't care what the implementation of the API is. That API is rather poorly designed and leaky, so we can infer quite a bit about its innards from the way it behaves. Maybe it was done by a summer student or something. There are some much nicer APIs inside. For example, whoever wrote up "electron.c" left very few loose ends: it has mass, charge and spin, but no size to probe and apparently no constituents. The only non-API hook into the other parts of the system is its weak isospin. Or maybe this was intentional, too. OK, it's time to stop anthropomorphizing.
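(A minimal sketch of the layers-as-APIs analogy, with entirely invented class and method names and made-up numbers; the point is only that the higher-level "app" codes against an interface and never inspects the implementation underneath:)

```python
# Toy sketch of "run the Chemistry app on the Atomic Physics API".
# All names and numbers are invented stand-ins.

class AtomicPhysicsAPI:
    """The only surface the 'Chemistry app' is allowed to see."""
    def bond_energy(self, atom_a: str, atom_b: str) -> float:
        raise NotImplementedError

class HiddenImplementation(AtomicPhysicsAPI):
    """One possible implementation; all lower-level detail is hidden here."""
    def bond_energy(self, atom_a, atom_b):
        # rough stand-in numbers (eV); a real implementation would appeal to QED, etc.
        table = {frozenset({"H", "O"}): 4.8, frozenset({"H", "H"}): 4.5}
        return table.get(frozenset({atom_a, atom_b}), 0.0)

def chemistry_app(api: AtomicPhysicsAPI) -> float:
    # The app uses only the API; it cannot tell which implementation is
    # running underneath, except through whatever "leaks" the API has.
    return api.bond_energy("H", "O")

print(chemistry_app(HiddenImplementation()))
```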

Anyway, I tend to agree that it seems quite likely that there is no glitch in the matrix and we probably have only a single implementation instance of the currently-lowest-known-level API (the Standard Model of Particle Physics), but this is not true higher up. The same API is often reused at many different levels and for many different implementations, and insisting that there is only a single true top-down protocol stack implementation is not very productive.

comment by EHeller · 2013-03-24T03:28:45.042Z · LW(p) · GW(p)

No one knows how to define a quark at human length scales; they aren't meaningful degrees of freedom.

comment by EHeller · 2013-03-22T04:40:36.545Z · LW(p) · GW(p)

It would be helpful, since you keep bringing up arguments from this paper, if you provided a link to the paper in question.

I feel like I have been misunderstood, and we are discussing this in other branches of the thread, but I can't help feeling that Laughlin has been misunderstood too; I can't judge if you don't provide a link.

comment by JGWeissman · 2013-03-21T19:57:03.922Z · LW(p) · GW(p)

Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should.

From a "purely reductionist stand-point" you would still need to know the initial conditions to predict how the system evolves. Yet you act as if this is a knockdown argument against reductionism.

Replies from: EHeller
comment by EHeller · 2013-03-21T22:32:38.525Z · LW(p) · GW(p)

I was just trying to make my point clearer; it's suggestive, not a knockout. I think the knock-out argument against strict reductionism is the renormalization argument.

Also, my training is in particle physics, so I have no problem with reductionism in general, simply that, as an approach, it's not a great way to understand many problems, and the post I responded to didn't seem to understand that solid state physicists use "emergent" as more of a term-of-art than a 'magical term.'

Replies from: JGWeissman
comment by JGWeissman · 2013-03-21T23:02:09.386Z · LW(p) · GW(p)

Your argument was not even suggestive, it was just wrong, because it ignores that a reductionist account would look at the initial conditions.

Also, my training is in particle physics, so I have no problem with reductionism in general, simply that, as an approach, it's not a great way to understand many problems, and the post I responded to didn't seem to understand that solid state physicists use "emergent" as more of a term-of-art than a 'magical term.'

I don't think that anyone is arguing that modeling physics at a high level of abstraction is not useful. It's just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.

Replies from: EHeller
comment by EHeller · 2013-03-21T23:42:39.110Z · LW(p) · GW(p)

The point/thrust of JohnWittle's that I'm arguing against is that the idea of emergent phenomena is inherently silly/stupid, and a 'magical word' to gloss over fuzzy thinking. I chose two very different systems in an attempt to show how incredibly sensitive to initial conditions physics can be, which makes the reductionist account (in many instances) the wrong approach. I apologize if this was not clear (and if you were a downvoter, I sincerely appreciate the feedback). Is my point more clear now? (I have resisted the urge to rephrase my original to try to add clarity)

I also purposely chose two systems I believe have emergent behavior (superfluid helium certainly does, biological entities/bacteria were postulated to by Laughlin). Originally I was going to say more about superfluid helium before I realized how much I was going to have to write and decided spontaneous symmetry breaking was much clearer.

It's just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.

Sure, but it's also important to remember that there exist aggregate behaviors that don't depend on the microphysics in a meaningful way (the high-energy modes decouple and integrate out entirely), and as such can only be meaningfully understood in the aggregate. This is a different issue than the Newtonian physics/GR issue (Newtonian mechanics is a limit of GR, not an emergent theory based on GR; the degrees of freedom are the same).
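(For readers unfamiliar with the jargon: "integrating out" the high-energy modes is, schematically, the standard Wilsonian move

$$ e^{-S_{\text{eff}}[\phi_{<}]} \;=\; \int \mathcal{D}\phi_{>}\, e^{-S[\phi_{<},\,\phi_{>}]}, $$

where $\phi_{<}$ are the slow, low-energy modes you keep and $\phi_{>}$ the fast, high-energy modes you average over. The resulting effective action governs the aggregate behavior and can be largely insensitive to the microphysical details. This is a textbook schematic added for reference, not something from the comment above.)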

comment by JohnWittle · 2013-03-22T00:20:24.712Z · LW(p) · GW(p)

Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say "in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons," you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should.

I'll admit that I am not a PhD particle physicist, but what you describe as reductionism is not what I believe to be true. If we ignore quantum physics, and describe what's happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those basic particles. You can predict how a system will behave by knowing about the position and the velocity of every particle in the system; you do not have to keep separate track of an organizational system as a separate property, because the organization of a physical system can be deduced from the other two properties.

If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.

With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the behavior of low-level particles interacting cannot predict the behavior of systems entirely composed of those low-level particles, but instead where the high-level organization causes the interaction between the low-level particles to be different?

That's what I think reductionism is: you cannot have higher-level laws contradict lower-level laws; that when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?

I don't think you understand what spontaneous symmetry breaking is

I probably don't. I was going based off of an AP Physics course in highschool. My understanding is basically this: if you dropped a ball perfectly onto the top of a mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end, the ball only chooses one path, and which path it chose could not have been predicted from the base-level laws. A quick look at wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections for "Spontaneous Symmetry Breaking" is called "A pedagogical example: the Mexican hat potential", and so I cannot be entirely off.
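(For reference, the standard textbook form of the Mexican hat potential for a complex field $\phi$ is

$$ V(\phi) = -\mu^2\,|\phi|^2 + \lambda\,|\phi|^4, \qquad \mu^2, \lambda > 0, $$

which has a ring of degenerate minima at $|\phi| = \mu/\sqrt{2\lambda}$. The potential is symmetric under rotating the phase of $\phi$, but any actual ground state has to sit at one particular point on that ring, and that choice is what "spontaneously" breaks the symmetry. This is added for reference and is not part of the original comment.)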

In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and different you's (different slices of the wavefunction which evolved from the specific neuron pattern you call you), combined, see every possible path the ball could have taken, and so across the wavefunction symmetry isn't broken.

Since you're a particle physicist and you disagree with this outlook, I'm sure there's something wrong with it, though.

In these systems, to describe low energy structures in such theories (most theories) the details of the microphysics literally do not matter.

Is this similar to saying that when you are modeling how an airplane flies, you don't need to model each particular nitrogen atom, oxygen atom, carbon atom, etc in the air, but can instead use a model which just talks about "air pressure", and your model will still be accurate? I agree with you; modeling every single particle when you're trying to decide how to fly your airplane is unnecessary and you can get the job done with a more incomplete model. But that does not mean that a model which did model every single atom in the air would be incorrect; it just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher level models to their advantage, when such high level models still get the right answer.

But reductionism simply says that there is no situation where a high level model could get a more accurate answer than a low level model. The low level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this. The more your model breaks macroscopic behavior down into the interactions between its base components, the closer your model resembles the way reality actually works.

Do you disagree?

Replies from: EHeller
comment by EHeller · 2013-03-22T03:54:57.109Z · LW(p) · GW(p)

So I think perhaps we are talking past each other. In particular, my definition of reductionism is that we can understand and model complex behavior by breaking a problem into its constituent components and studying them in isolation. I.e., if you understand the micro-Hamiltonian and the fundamental particles well, you understand everything. The idea of 'emergence' as physicists understand it (and as Laughlin was using it), is that there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation.

A weaker version of reductionism would say that to make absolutely accurate predictions to some arbitrary scale we MUST know the microphysics. Renormalization arguments ruin this version of reductionism.

In a sense this

if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this.

seems to be espousing this form of reductionism, which I strongly disagree with. There exist physical theories where knowing microphysics is irrelevant to arbitrarily accurate predictions. Perhaps it would be best to agree on definitions before we make points irrelevant to each other.

Replies from: JohnWittle
comment by JohnWittle · 2013-03-23T21:08:05.089Z · LW(p) · GW(p)

there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation

Can you give me an example of one of these behaviors? Perhaps my google-fu is weak (I have tried terms like "examples of top down causality", "against reductionism", "nonreductionist explanation of"), and indeed I have a hard time finding anything relevant at all, but I can't find a single clearcut example of behavior which cannot be understood from looking at the individual constituents in isolation.

Replies from: EHeller
comment by EHeller · 2013-03-24T01:05:42.435Z · LW(p) · GW(p)

The aforementioned spontaneous symmetry breaking shows up in a wide variety of different systems. But phase changes in general are probably good examples.

comment by whowhowho · 2013-03-21T21:50:31.613Z · LW(p) · GW(p)

I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs".

Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.

Can you explain to me how it might work?

One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. So emergentism, in the cognate sense, not working would be that stack of laws failing to collapse down to the lowest level.

Who are to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).

There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.

And even after the answer of "Why? Emergence!" is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.

I don't see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of "emergent"). For another, they are not calling on emergence itself as doing any explaining. "Emergence isn't explanatory" doesn't refute "emergence is true". For a third, I don't see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the "sides", then one can't say that one side is "absurd".

Every time he makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomenon" in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself for this.

Neither are supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found.

He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.

EY can't do that for MWI either. Maybe it isn't all about prediction.

A good example, he says, is genetic code: to assume that dna is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.

That's robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes.

He would rather suppose that the universe contains rules like "When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.

Reductionism is an approach that can succeed or fail. It isn't true a priori. If reductionism failed, would you say that we should not even contemplate non-reductionism? Isn't that a bit like Einstein's stubborn opposition to QM?

He specifically objects that reductionism isn't always the "most complete" description

I suppose you mean that the reductionistic explanation isn't always the most complete explanation...well everything exists in a context.

of a given phenomenon; that elements of a given phenomenon "cannot be explained" by looking at the underlying mechanism of that phenomenon.

There is no a priori guarantee that such an explanation will be complete.

I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness,

That isn't the emergentist claim at all.

then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level.

Why? Because you described them as "laws of physics"? An emergentist wouldn't. Your objections seem to assume that some kind of reductionism+determinism combination is true in the first place. That's just gainsaying the emergentist claim.

Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.

If there is top-down causation, then its laws must be couched in terms of lower-level AND higher-level properties. And are therefore not reductionistic. You seem to be tacitly assuming that there are no higher-level properties.

This is still reductionism; it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.

Cross-level laws aren't "laws of physics". Emergentists may need to assume that microphysical laws have "elbow room", in order to avoid overdetermination, but that isn't obviously wrong or absurd.

At first when I read EY's "The Futility of Emergence" article, I didn't understand. It seemed to me that there's no way people actually think of "emergence" as being a scientific explanation for how a phenomenon occurs

As it happens, no-one does. That objection was made in the most upvoted response to his article.

such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually.

Can you predict qualia from brain-states?

I didn't think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn't mean you'll be able to predict what the clock will say based on the positions of the gears (for sufficiently "complex" clocks).

Mechanisms have to break down into their components because they are built up from components. And emergentists would insist that that does not generalise.

But perhaps he read this very paper, because Laughlin uses the word "emergent phenomenon" to describe behavior he doesn't understand, as if that's an explanation for the phenomenon.

Or as a hint about how to go about understanding them.

He does not explore the logical implications of this belief: that holding the belief that some aspects of a phenomenon have no causal mechanism,

That's not what E-ism says at all.

and therefore could not have possibly been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomenon. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe?

That's an outcome you would get with common or garden indeterminism. Again: reductionism is NOT determinism.

That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?

What's supposed to be absurd there? Top-down causation, or top-down causation that only applies to DNA?

I read the whole paper by Laughlin and I was unimpressed.

The arguments for emergence tend not to be good. Neither are the arguments against. A dispute about a poorly-defined distinction with poor arguments on both sides isn't a dispute where one side is "absurd".

comment by Oscar_Cunningham · 2011-08-08T13:46:47.718Z · LW(p) · GW(p)

Marcello is non-math?

comment by robertzk (Technoguyrob) · 2011-10-25T14:11:42.774Z · LW(p) · GW(p)

For what it's worth, I've worked on a project and had lunch with Conway, and your ideas seem more prescient than his. But being a mathematician, I know people who are in turn far above Conway's level.

comment by Kragen_Javier_Sitaker2 · 2008-09-26T10:56:51.000Z · LW(p) · GW(p)

So how does it work, in your opinion? Because “I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field,” sounds suspiciously similar to how Hamming asserts that it works in “You and Your Research.” I guess you have a different belief about how doing important things in your chosen field works, but I don't see that you've explained that belief here or anywhere else that I've seen.

I don't suppose Marcello is related to Nadja and Josh Herreshoff?

I don't know if it helps, but while I've appreciated the things I've learned from you, my limited interaction with you hasn't made me think you're the brightest person I know. I think of you as more or less at my level — maybe a couple of standard deviations above or below, I can’t really tell. Certainly you're sharp enough that I'd enjoy hanging out with you. (Let me know the next time you're in Argentina.)

P.S. the impugnment of your notability has now been removed from your Wikipedia page, apparently as a result of people citing you in their papers.

Replies from: logicaldash
comment by logicaldash · 2011-09-24T19:50:04.092Z · LW(p) · GW(p)

I too would like to hear "how it works," because if I don't know how Eliezer thinks it works, it just sounds like he's defining the problem of Being a Great Researcher in the most intimidating way possible. Whatever way that may be. Inflating the problem like that is bad practice, for much the same reason that cheap gestures of humility are bad practice.

I'm commenting on a two-year-old post, so I guess I shouldn't expect a response, but this post is linked from the getting-started page, so I was a bit disappointed that it ended with what looks a lot like a handwave at humility.

comment by lowly_undergrad4 · 2008-09-26T11:19:30.000Z · LW(p) · GW(p)

Wait wait wait wait. Eliezer...are you saying that you DON'T know everything????

~runs off and weeps in a corner in a fetal position~

comment by Manuel_Mörtelmaier · 2008-09-26T11:59:40.000Z · LW(p) · GW(p)

CatAI (1998): "Precautions"/"The Prime Directive of AI"/"Inconsistency problem".

My memory may fail me, and the relevant archives don't go back that far, but I recall Ben (and/or possibly other people) suggesting that you go to college, or at least enroll in a grad program in AI, on the Extropy chat list around 1999/2000. I think these suggestions were related to, but not solely based on, your financial situation at that time (which ultimately led to the creation of the SIAI, so maybe we should be glad it turned out the way it did, even if, in my opinion, following the advice would have been beneficial to you and your work.)

comment by Yep · 2008-09-26T13:05:08.000Z · LW(p) · GW(p)

I was curious how you'd react when you eventually realized you weren't as bright as you thought you were. The journey to full comprehension isn't complete yet, but it's interesting seeing this little bit unfold. For all your disdain of modesty arguments, your life makes for a great demonstration of how one can go wrong if they go unheeded.

Replies from: gwern
comment by gwern · 2011-01-06T18:41:30.868Z · LW(p) · GW(p)

At the very least, his life will be interesting and will have affected a great many people in a positive way; that is more than most could claim.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-01-07T00:34:51.909Z · LW(p) · GW(p)

The question is just how this compares with what he would have done otherwise.

Replies from: gwern
comment by gwern · 2011-01-07T01:03:27.874Z · LW(p) · GW(p)

Hm, Yep sounds as if Eliezer's life is a net loss:

your life makes for a great demonstration of how one can go wrong if they go unheeded.

Not as if his life was simply suboptimal, with considerable opportunity cost.

comment by Eric5 · 2008-09-26T13:14:12.000Z · LW(p) · GW(p)

I definitely see the "levels" phenomenon very often. Most people I meet who see me play a musical instrument (or 5 or 10 different ones) think I must be a genius at music - unless they're a musician, in which case they recognize me as an amateur with enough money to buy interesting instruments and enough skill to get a basic proficiency at them quickly.

And even with standard measures of intellect like rationality or math... I don't know that many of my friends who have read any of this blog would recognize you as being smarter than me, despite the fact that you're enough levels above me that my opinion of you is pretty much what "Not You" said above.

I can keep up with most of your posts, but to be able to keep up with a good teacher, and to be that good teacher, is a gap of at least a few levels. But aspiring to your level (though I may not reach it) has probably been the biggest motivator for me to practice the art. I certainly won't be the one who zips by you, but you've at least pulled me up to a level where I might be able to guide one who will down a useful path.

comment by Sebastian_Hagen2 · 2008-09-26T13:14:54.000Z · LW(p) · GW(p)

Up to now there never seemed to be a reason to say this, but now that there is:

Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.

comment by Caledonian2 · 2008-09-26T13:41:43.000Z · LW(p) · GW(p)

I was curious how you'd react when you eventually realized you weren't as bright as you thought you were.
I still am curious. Maybe someday the cat will be brought back -- but I'm not holding my breath.

comment by RobinHanson · 2008-09-26T13:50:54.000Z · LW(p) · GW(p)

Your faith in math is misplaced. The sort of math smarts you are obsessed with just isn't that correlated with intellectual accomplishment. For accomplishment outside of math, you must sacrifice time that could be spent honing your math skills, to actually think about other things. You could be nearly the smartest math-type guy that anyone you meet knows, and still not accomplish much if math is not the key to your chosen subject.

comment by Ken_Sharpe2 · 2008-09-26T13:50:59.000Z · LW(p) · GW(p)

It's interesting, actually. You're motivated by other peoples' low opinions of you -- this pressure you feel in your gut to prove Caledonian et al wrong -- so you've taken what is probably fairly standard human machinery and tried to do something remarkable with it.

My question is, are you still motivated by the doubt you feel about your native abilities, or have you passed into being compelled purely by your work?

comment by Atstjx · 2008-09-26T13:52:30.000Z · LW(p) · GW(p)

Perhaps the truly refulgent (before they had so become) reached a progression tipping point at which they realized (right or wrong, ironically) that they were essentially beyond comparison, and hence stopped comparing.

Then they could allocate the scarce resources of time and thought exclusively to the problems they were addressing, thus actually attaining a level that truly was beyond comparison.

comment by Cyan2 · 2008-09-26T13:55:12.000Z · LW(p) · GW(p)

Jaynes was a really smart guy, but no one can be a genius all the time. He did make at least one notable blunder in Bayesian probability theory -- a blunder he could have avoided if only he'd followed his own rules for careful probability analysis.

comment by ShardPhoenix · 2008-09-26T13:58:33.000Z · LW(p) · GW(p)

You come across as very intelligent when you stick to your areas of expertise, like probability theory, AI and cognitive biases, but some of your more tangential stuff can seem a little naive. Compared to the other major poster on this blog, Robin, I'd say you come across as smarter but less "wise", if that means anything to you. I'm not even a huge fan of the notion of "wisdom", but if there's something you're missing, I think that's it.

comment by Rob3 · 2008-09-26T14:21:24.000Z · LW(p) · GW(p)

If you haven't read it, Simonton's Origins of Genius draws a nice distinction between mental agility and long-term intellectual significance, and explores the correlation between the two. Not a terribly well-written book, but certainly thought-provoking.

comment by retired_urologist · 2008-09-26T14:40:04.000Z · LW(p) · GW(p)

@EY: We are the cards we are dealt, and intelligence is the unfairest of all those cards. More unfair than wealth or health or home country, unfairer than your happiness set-point. People have difficulty accepting that life can be that unfair, it's not a happy thought. "Intelligence isn't as important as X" is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead. It's a temptation, both to those dealt poor cards, and to those dealt good ones. Just as downplaying the importance of money is a temptation both to the poor and to the rich.

How could the writer of the above words be the writer of today's post? Apparently (as I'm told) you knew from the days of the Northwestern Talent Search that you weren't the smartest of those tested (not to mention all those who were not tested), but certainly one of the smartest. Apparently, you were dealt a straight flush to the king, while some in history received a royal flush. What difference does it make whether someone thinks you are the smartest person they have known, unless you are the smartest person? Does a straight flush to the king meet the threshold required to develop a method for "saving humanity"? If not, why aren't you in the camp of those who wish to improve human intelligence? awaits clap of thunder from those dealt better hands

Replies from: faul_sname
comment by faul_sname · 2012-12-04T10:20:34.487Z · LW(p) · GW(p)

It's simply dissolving some cognitive illusions he shouldn't have had in the first place, but that most of us have probably had at some point in our lives. If you've got intelligence at 2 standard deviations above average, and you overestimate your own intelligence by one standard deviation (which is probably a pretty common mistake, and if anything underestimates the effect), then you'll see that you're probably the most intelligent person you interact with on a regular basis. If you're out at 3 standard deviations, it may not be until college that you see that some of your fellow students, or at least some of your professors, are indisputably smarter than you. If you're out at 4 or 5 standard deviations, as I imagine Eliezer is (I myself can't honestly peg myself past 3.5 standard deviations, which means I'm probably around 2 standard deviations above average and can't really distinguish beyond 2 standard deviations above my own level), I have some difficulty imagining what that must be like, only that even in the things you read you won't find many minds as formidable as (your perception of) your own, and even rarer will be minds that clearly surpass your own.
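(As a rough illustration of why that happens, here is a quick calculation using only the standard normal tail; nothing in it is specific to IQ tests or to anyone's actual ability:)

```python
# How rare is "k standard deviations above the mean" under a normal
# distribution? Standard normal tail only; illustrative, not a claim
# about any particular test or person.

from math import erfc, sqrt

def upper_tail(k: float) -> float:
    """P(X > k) for a standard normal X."""
    return 0.5 * erfc(k / sqrt(2))

for k in (1, 2, 3, 4, 5):
    print(f"+{k} SD: about 1 in {round(1 / upper_tail(k)):,}")
# Roughly: +1 SD ~ 1 in 6, +2 SD ~ 1 in 44, +3 SD ~ 1 in 740,
# +4 SD ~ 1 in 30,000, +5 SD ~ 1 in 3.5 million.
```

So at +2 SD you already meet few clear peers in everyday life, and a one-SD overestimate of your own level makes recognizably-smarter people seem rarer still.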

But I think he is in the camp of trying to improve human intelligence (or at least human rationality, gwern seems to be the better poster child for improving human intelligence). Hence the sequences.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-20T19:30:40.493Z · LW(p) · GW(p)

Is a home-schooled person well positioned to judge that sort of thing? They're the smartest kid in a class of one.

Replies from: lavalamp
comment by lavalamp · 2013-01-20T20:18:40.347Z · LW(p) · GW(p)

Not sure how homeschooling is relevant here, but speaking as a homeschooled person: it goes both ways, you're also the stupidest person in a class of one.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-20T20:35:33.280Z · LW(p) · GW(p)

Sidenote: I'd homeschool my kids if it were allowed where I live.

Replies from: lavalamp
comment by lavalamp · 2013-01-20T21:19:37.687Z · LW(p) · GW(p)

(This seems like the wrong thread for a protracted discussion but I'm happy to say more in an open thread or via PM if you want to hear more, although it sounds like it's a moot point for you.)

Replies from: Kawoomba
comment by Kawoomba · 2013-01-20T21:22:44.339Z · LW(p) · GW(p)

(I do want to hear more, go ahead using any means you'd like.)

comment by Jef_Allbright · 2008-09-26T15:50:02.000Z · LW(p) · GW(p)

Eliezer, I've been watching you with interest since 1996 due to your obvious intelligence and "altruism." From my background as a smart individual with over twenty years managing teams of Ph.D.s (and others with similar non-degreed qualifications) solving technical problems in the real world, you've always struck me as near but not at the top in terms of intelligence. Your "discoveries" and developmental trajectory fit easily within the bounds of my experience of myself and a few others of similar aptitudes, but your (sheltered) arrogance has always stood out. I wish you continued progress, not so much in ever-sharper analysis, but in ever more effective synthesis of the leading-edge subjects you pursue.

comment by Douglas_Knight3 · 2008-09-26T15:52:27.000Z · LW(p) · GW(p)

How much do you worry about age 40? Is that just based on your father? Conway passed 40 before Marcello was born.

comment by Richard_Hollerith2 · 2008-09-26T15:53:27.000Z · LW(p) · GW(p)

If not, why aren't you in the camp of those who wish to improve human intelligence?

I'll take this one because I'm almost certain Eliezer would answer the same way.

Working on AI is a more effective way of increasing the intelligence of the space and matter around us than increasing human intelligence is. The probability of making substantial progress is higher.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-03T17:58:18.466Z · LW(p) · GW(p)

I disagree. Human intelligence is clearly misoptimised for many goals, and I see no clear evidence that it's easier to design a new intelligence from scratch than to optimise the human one.

They have very different possible effects: "FOOM!" vs. "We are awaiting GFDCA [Genetics, Food Drugs and Cybernetics Administration] approval of this new implant/chimerism/genehack", so the average impact of human-optimisation may be lower, but my probability estimate for human-improvement tech is much higher.

comment by Phil_Goetz5 · 2008-09-26T15:53:36.000Z · LW(p) · GW(p)

Wow, chill out, Eliezer. You're probably among the top 10, certainly in the top 20, most-intelligent people I've met. That's good enough for anything you could want to do. You are ranked high enough that luck, money, and contacts will all be more important factors for you than some marginal increase in intelligence.

comment by Roland2 · 2008-09-26T16:34:31.000Z · LW(p) · GW(p)

First, same question as Douglas: what is it with the brick wall at 40?

Second: This is another great post, it's rare for people to expose their thoughts about themselves in such an open way. Congratulations!

Regarding your ability, I'm just a regular guy (studied Math in college) but your writings are the most inspiring I've ever read. So much self-reflection about intelligence and the thinking process. The insight about how certain mental processes feel is totally new to me. You have helped me a lot to identify my own blind spots and mistakes. Now I can look back and see exactly where I went wrong in the past, and I see with clarity where there was once confusion. I wish I'd read this stuff when I was still 13 years old, maybe this could have prevented a lot of the mistakes I made later in life.

Also one of the things I learned from you is that hard work can substitute for intelligence. Think of evolution: even a stupid person can accomplish great things if he bangs his head long enough against the problem. Well, there is still the need for a basic level of intelligence, but I guess you have that.

Did you read Richard Feynman's biography? AFAIK he was also not the smartest and had moments of great self-doubt in his career where he even thought of giving up. I think this turned out to be a blessing because it forced him to visualize things in a more intuitive manner if my recollection is correct. Hence the invention of the Feynman diagrams.

Regarding College, well I went to one and it was one of the biggest wastes of time in my life, together with school. I wish I had been as smart as you and left school at the age of 12.

comment by michael_vassar3 · 2008-09-26T17:20:43.000Z · LW(p) · GW(p)

I second Robin's comment.

A friend of mine, Steve Jordan, once asked me just how smart I thought he and I were. I answered that I think that no-one is really as smart as the two of us both think we are. You see, for many many people it is possible to choose a weighting scheme among a dozen or so factors that contribute to intellectual work such that they are the best. You simply define the vector to their point on the "efficient aptitude frontier" as "real intelligence". A dozen or so people associated with this blog and/or with SIAI and a smaller number who aren't appear to me to be on points of the "known to Michael Vassar efficient aptitude frontier", though not necessarily equally mission-critical points. For my "save the world dream team" I would pick a 25-year-old Steve Jobs over a 25-year-old Terence Tao, though I'd like both of course.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-26T17:54:06.000Z · LW(p) · GW(p)

Manuel, "enroll in a grad program for AI" != "you're smart, you should go to college".

Kragen, the short answer is, "It's easy to talk about the importance of effort if you happen to be Hamming." If you can make the ante for the high-stakes table, then you can talk about how little the ante counts for, and the importance of playing your cards well. But if you can't make the ante...

Robin, it's not blind faith in math or math for the sake of impressiveness, but a specific sense that the specific next problems I have to solve, will require more math than I've used up to this point. Not Andrew J. Wiles math, but Jaynes doesn't use Wiles-math either. I quite share your prejudice against math for the sake of looking impressive, because that gets you the wrong math. (Formality isn't about Precision?)

Ken, it's exclusively my work that gives me the motivation to keep working on something for years, but things like pride can give me the motivation to keep working on something for the next minute. I'll take whatever sources of motivation I can get (er, that aren't outright evil, of course).

Douglas, yes, my father changed at 40. But one of my primary sources of hope is that people have been known to do basic research later than this if they changed fields late in life, which suggests that it actually can be a matter of approach/outlook/methodology and avoiding serving on prestigious committees.

Retired, I don't understand the apparent contradiction you see. I participated in the Midwest Talent Search at a young age (not "Northwestern" anything, maybe you're confusing with Northwestern University?) and scored second-best for my grade category, but at that point I'd skipped a grade. But I think I can recall hearing about someone who got higher SAT scores than mine, at age nine. That would be decisive, if the SAT were a perfect noiseless measurement of ability to work on AI.

Vassar: You see, for many many people it is possible to choose a weighting scheme among a dozen or so factors that contribute to intellectual work such that they are the best.

Yes, this is the well-known phenomenon where asking someone "How dumb are you?" produces a different answer than "How smart are you?" because they recall a different kind of evidence. But the question I'm trying to answer is "How much potential do you have to solve the remaining FAI problems you know about?" As I said to Robin, I do think this is going to involve taking a step up in math level.

To all commenters who observed that I don't seem to stand out from 10 other smart people they know, either you didn't comprehend the entirety of today's post, or you have very high confidence that you occupy the highest possible rank of human ability.

Replies from: Wei_Dai, ESRogs, itaibn0
comment by Wei Dai (Wei_Dai) · 2011-03-06T09:33:31.636Z · LW(p) · GW(p)

Robin, it's not blind faith in math or math for the sake of impressiveness, but a specific sense that the specific next problems I have to solve, will require more math than I've used up to this point.

I'm curious if this is still your sense, and if so, what kind of math are you talking about?

My sense is that currently the main problems in FAI are philosophical. Skill in math is obviously very useful, but secondary to skill in philosophy, because most of the time it's still "I have no idea how to approach this problem" instead of "Oh, if I can just solve this math problem, everything will be clear".

...or I'm strictly dumber than Conway, dominated by him along all dimensions. Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.

Marcello observed "In terms of philosophical intuition, you are head and shoulders above Conway." Making progress in FAI theory seems to require a combination of rationality, good philosophical intuition, math talent, motivation, and prerequisite background knowledge. (Am I leaving out anything?) Out of these, perhaps good philosophical intuition is rarest, in large part because we don't know how to teach it (or screen for it at a young age). Is this a problem you've considered?

Replies from: BethMo
comment by BethMo · 2011-05-13T20:42:35.467Z · LW(p) · GW(p)

My sense is that currently the main problems in FAI are philosophical. Skill in math is obviously very useful, but secondary to skill in philosophy...

Philosophy? Really???

My impression of philosophy has been that it is entirely divorced from anything concrete or reality-based with no use in solving concrete, reality-based problems -- that all the famous works of philosophy are essentially elaborate versions of late-night college bull sessions, like irresistible forces vs. immovable objects, or trees falling in forests that do/don't make a sound.

After working my way through a lot of the posts here, I now think that most of philosophy comes down to semantics and definitions of terms (i.e. Eliezer's excellent analysis of the tree-sound argument), and that what remains is still entirely divorced from reality and real-world uses.

What have I missed? How does philosophy bring anything useful to the table?

Replies from: thomblake, Wei_Dai, Kaj_Sotala
comment by thomblake · 2011-05-13T21:28:28.905Z · LW(p) · GW(p)

What have I missed? How does philosophy bring anything useful to the table?

You appear to have missed philosophy. If you take a historical view, all of our contemporary subjects come from philosophy. The core of philosophy is precisely the sort of things we care about here - having an accurate picture of the world and understanding its true nature. To that end, ancient philosophers such as Aristotle invented logic, studied the natural world, discovered the inner workings of the human body, and started to investigate the laws that tie together everything in the world.

Properly defining philosophy in current times is somewhat difficult based on this - now what was once called philosophy is instead called "Science" and other fields. So what is left is anything we don't already have an answer for. Philosophers are those who know what questions still need to be asked, and care about investigating them in a manner that will give them a more accurate picture of the world.

To be a little more concrete (to give a specific example), the field of Ethics is considered a subfield of philosophy (largely because its questions are not yet settled), and one relevant question to FAI is simply "How should an AI behave?", which is an ethical question.

Replies from: BethMo
comment by BethMo · 2011-05-30T07:19:58.349Z · LW(p) · GW(p)

I can agree that most of "science and other fields" came out of what was called "philosophy" if you go back far enough. It just seems that once you pull out all the "science and other fields" what is left has no use for solving practical problems -- including AI. Like the pages and pages of debate I've seen here on Less Wrong about "philosophical" stuff like the nature of morality, or free will, or "zombies" with no consciousness. Obviously a lot of people feel that discussing these topics is worthwhile, but I just don't see the use of it.

In continuing to plod through the older writings here, I've seen numerous passages from Eliezer that disparage philosophy's usefulness, including these that I hit today:

Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence. (http://lesswrong.com/lw/tg/against_modal_logics/)

and

I suggest that, like ethics, philosophy really is important, but it is only practiced effectively from within a science. Trying to do the philosophy of a frontier science, as a separate academic profession, is as much a mistake as trying to have separate ethicists. You end up with ethicists who speak mainly to other ethicists, and philosophers who speak mainly to other philosophers. (http://lesswrong.com/lw/pg/where_philosophy_meets_science/ )

So I'm still baffled by the comment here that currently the main problems in FAI are philosophical. Is there a summary or chain of posts that spells out this change in position? Or will it just gradually emerge if I manage to read all the posts between those quotes from 2008, up to the quote from 2011? Or, is this just your opinion, not Eliezer's?

Replies from: thomblake
comment by thomblake · 2011-05-31T14:18:59.018Z · LW(p) · GW(p)

Note the distinction between those things being done in the field of "Philosophy", versus philosophy itself. Note that this:

philosophy really is important, but it is only practiced effectively from within a science

is an endorsement of philosophy itself, though the quote goes on to say that the way that much philosophy is done in academia is pretty useless. So I'm not seeing anything that should generate confusion. When Wei Dai said that the problems are philosophical, that does not entail that the problems should be solved by people with doctorates in Philosophy.

Or, is this just your opinion, not Eliezer's?

While I sometimes imagine myself one of the world's foremost experts on the writings of Eliezer, any non-quoted words are my own.

It just seems that once you pull out all the "science and other fields" what is left has no use for solving practical problems -- including AI.

If you think that logic, ethics, applied ontology, epistemology, and philosophy of mind all have no use in AI, then I think you will find yourself in a minority.

comment by Wei Dai (Wei_Dai) · 2011-05-14T00:11:58.059Z · LW(p) · GW(p)

Hi Beth. I agree with Thom's answer, and you can also get a better sense of where I'm coming from if you read Metaphilosophical Mysteries and follow the first link in that post.

comment by Kaj_Sotala · 2011-09-21T14:11:40.948Z · LW(p) · GW(p)

As lukeprog has pointed out, almost everything Eliezer has written on LW is philosophy.

comment by ESRogs · 2013-01-19T01:38:25.732Z · LW(p) · GW(p)

I'd be curious to hear the answers to Wei's questions (in the sibling comment)...

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-20T03:46:08.643Z · LW(p) · GW(p)

From subsequent discussions, my understanding is that Eliezer doesn't think it's likely that he can recruit/train another FAI researcher with a level of philosophical competence similar to himself, and that he's planning/hoping to solve the main philosophical (as opposed to mathematical) problems himself. I've argued against this plan but I'm guessing that Eliezer is probably still set on this course. Others at SIAI may have more reservations about it.

Replies from: Eliezer_Yudkowsky, lukeprog
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-20T05:04:14.179Z · LW(p) · GW(p)

I certainly intend to try that recruiting thing (Paul Christiano ain't half bad) but recruiting philosophy seems much less straightforward than recruiting mathematical talent. If I have to resolve it all myself, I wouldn't flinch from trying. It seems like that part should be less difficult in an absolute sense than the rest of the labor, though that might just be comparative advantage talking. The resolutions to philosophical confusions usually seem relatively straightforward once you have them, in my experience so far.

Replies from: Wei_Dai, V_V
comment by Wei Dai (Wei_Dai) · 2013-01-20T06:25:36.803Z · LW(p) · GW(p)

If I have to resolve it all myself, I wouldn't flinch from trying.

As I asked in the linked comment, if you're the only philosopher in the team, how will others catch your mistakes? I would not trust that when you stop feeling confused, the problem has been correctly solved, or that your feelings of confusion are a reliable indicator of problems existing in the first place.

I certainly intend to try that recruiting thing (Paul Christiano ain't half bad) but recruiting philosophy seems much less straightforward than recruiting mathematical talent.

Having Paul involved certainly makes me feel better, and if you do succeed in recruiting more philosophy talent, then the issue may be moot. But I'm still concerned about your readiness to go it alone, and what that implies about your views, not only of how hard the problems are, but also how much one needs to worry about making philosophical mistakes.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-20T19:19:29.028Z · LW(p) · GW(p)

Do you have some reliable way of recruiting? What's the policy alternative? You do what you gotta do; if it ends up being just you, nonetheless, you do what you gotta do. Zero people won't make fewer mistakes than one person.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-22T22:39:45.258Z · LW(p) · GW(p)

Quoting Carl Shulman from about a year ago:

If we condition on having all other variables optimized, I'd expect a team to adopt very high standards of proof, and recognize limits to its own capabilities, biases, etc. One of the primary purposes of organizing a small FAI team is to create a team that can actually stop and abandon a line of research/design (Eliezer calls this "halt, melt, and catch fire") that cannot be shown to be safe (given limited human ability, incentives and bias). If that works (and it's a separate target in team construction rather than a guarantee, but you specified optimized non-talent variables) then I would expect a big shift of probability from "UFAI" to "null."

I'm not sure if he had both math and philosophy in mind when he wrote that or just math, but in any case surely the same principle applies to the philosophy. If you don't reach a high confidence that the philosophy behind some FAI design is correct, then you shouldn't move forward with that design, and if there is only one philosopher on the team, you just can't reach high confidence in the philosophy.

Replies from: Eliezer_Yudkowsky, shminux, CarlShulman
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T00:20:47.986Z · LW(p) · GW(p)

if there is only one philosopher on the team, you just can't reach high confidence in the philosophy.

This does not sound correct to me. Resolutions of simple confusions usually look pretty obvious in retrospect. Or do you mean something broader by "philosophy" than trying to figure out free will?

Replies from: Wei_Dai, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-23T04:30:50.609Z · LW(p) · GW(p)

Did you read the rest of that thread where I talked about how in cryptography we often used formalizations of "security" that were discovered to be wrong years later, and that's despite having hundreds of people in the research community constantly trying to attack each other's ideas? I don't see how formalizing Friendliness could be not just easier and less error prone than formalizing security, but so much so that just one person is enough to solve all the problems with high confidence of correctness.

Or do you mean something broader by "philosophy" than trying to figure out free will?

I mean questions like your R1 and R2, your "nonperson predicate", how to distinguish between moral progress and moral error / value drift, anthropic reasoning / "reality fluid". Generally, all the problems that need to be solved for building an FAI besides the math and the programming.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T05:39:28.207Z · LW(p) · GW(p)

Yes, formalizing Friendliness is not the sort of thing you'd want one person doing. I agree. I don't consider that "philosophy", and it's the sort of thing other FAI team members would have to be able to check. We probably want at least one high-grade actual cryptographer.

Of the others, the nonperson predicate and the moral-progress parts are the main ones where it'd be unusually hard to solve and then tell that it had been solved correctly. I would expect both of those to be factorable-out, though - that all or most of the solution could just be published outright. (Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off... how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter... do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.) Anthropic reasoning / "reality fluid" is the sort of thing I'd expect to be really obvious in retrospect once solved. R1 and R2 should be both obvious in retrospect, and publishable.

I have hopes that an upcoming post on the Löb Problem will offer a much more concrete picture of what some parts of the innards of FAI development and formalizing look like.

Replies from: Wei_Dai, Kawoomba, wedrifid, MugaSofer, shminux, Mitchell_Porter
comment by Wei Dai (Wei_Dai) · 2013-01-23T06:39:37.034Z · LW(p) · GW(p)

Yes, formalizing Friendliness is not the sort of thing you'd want one person doing. I agree. I don't consider that "philosophy", and it's the sort of thing other FAI team members would have to be able to check.

In principle, creating a formalization of Friendliness consists of two parts, conceptualizing Friendliness, and translating the concept into mathematical language. I'm using "philosophy" and "formalizing Friendliness" interchangeably to refer to both of these parts, whereas you seem to be using "philosophy" to refer to the former and "formalizing Friendliness" for the latter.

I guess this is because you think you can do the first part, then hand off the second part to others. But in reality, constraints about what kinds of concepts can be expressed in math and what proof techniques are available means that you have to work from both ends at the same time, trying to jointly optimize for philosophical soundness and mathematical feasibility, so there is no clear boundary between "philosophy" and "formalizing".

(I'm inferring this based on what happens in cryptography. The people creating new security concepts, the people writing down the mathematical formalizations, and the people doing the proofs are usually all the same, I think for the above reason.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T16:00:29.277Z · LW(p) · GW(p)

My experience to date has been a bit different - the person asking the right question needs to be a high-grade philosopher, the people trying to answer it only need enough high-grade philosophy to understand-in-retrospect why that exact question is being asked. Answering can then potentially be done with either math talent or philosophy talent. The person asking the right question can be less good at doing clever advanced proofs but does need an extremely solid understanding of the math concepts they're using to state the kind-of-lemma they want. Basically, you need high math and high philosophy on both sides but there's room for S-class-math people who are A-class philosophers but not S-class-philosophers, being pointed in the right direction by S-class-philosophers who are A-class-math but not S-class-math. If you'll pardon the fuzzy terminology.

comment by Kawoomba · 2013-01-23T06:56:38.848Z · LW(p) · GW(p)

recent experience with trolls (...) newfound understanding of the utter... do-anything-ness of trolls

What happened (if you don't mind sharing)?

comment by wedrifid · 2013-01-23T07:30:28.111Z · LW(p) · GW(p)

(Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off... how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter... do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.)

I get the impression that you have something different in mind as far as 'trolls' go than fools who create stereotypical conflicts on the internet. What kind of trolls are these?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-23T13:31:39.519Z · LW(p) · GW(p)

The kind who persuade depressed people to commit suicide. The kind who post people's addresses on the internet. The kind who burn the Koran in public.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T16:02:58.442Z · LW(p) · GW(p)

My psychological model says that all trolls are of that kind; some trolls just work harder than others. They all do damage in exchange for attention and the joy of seeing others upset, while exercising the limitless human ability to persuade themselves it's okay. If you make it possible for them to do damage on their home computers with no chance of being arrested and other people being visibly upset about it, a large number will opt to do so. The amount of suffering they create can be arbitrarily great, so long as they can talk themselves into believing it doesn't matter for some stupid reason, and other people are visibly upset enough to give them the attention-reward.

4chan would have entire threads devoted to building worse hells. Yes. Seriously. They really would. And then they would instantiate those hells. So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer, shut up. And if somebody actually does it, don't be upset on the Internet.

Replies from: None, wedrifid, DaFranker, Peterdjones, Kawoomba, Psy-Kosh, MugaSofer, Richard_Kennaway, ialdabaoth
comment by [deleted] · 2013-01-23T16:36:09.469Z · LW(p) · GW(p)

4chan would have entire threads devoted to building worse hells. Yes. Seriously. They really would. And then they would instantiate those hells.

In case anyone doubts this, as a long-time observer of the 4chan memeplex, I concur.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T11:56:00.361Z · LW(p) · GW(p)

Related:

How often does 4chan torture animals? That's pretty easy to pull off. Are they doing it all the time and I haven't noticed, or is there some additional force preventing it (e.g., Anonymous would hunt them down and post their details online, or 4chan users all just like animals)?

Replies from: army1987, None
comment by A1987dM (army1987) · 2013-01-26T17:05:41.978Z · LW(p) · GW(p)

I remember that once, a Facebook page was hacked into (I guess) and started posting pictures and stories about tortured animals. Everybody went WTF and the page was shut down a few days later.

4chan users all just like animals

I've never been there, but plenty of people on the internet do. Facebook pages against vivisection etc. seem to get way more likes than those in favour of it, the meme that humanity had better become extinct because wildlife would be better off is quite widespread, and some people even rejoice when a hunter dies (though this is a minority stance).

comment by [deleted] · 2013-01-26T17:42:36.091Z · LW(p) · GW(p)

How often does 4chan torture animals?

Not often. Hurting animals is generally considered Not OK on 4chan, to the extent that anything is Not OK on 4chan.

There are a few pictures and stories that get passed around (some kids kicking a cat against a wall like a football, shoveldog, etc), but many fewer than the human gore pictures. 4channers mostly aggregate this stuff from all over and post it to be edgy and drive people who aren't edgy enough away from 4chan.

And yeah, to the extent that people do torture animals in current events (as opposed to past stories), vast hordes of moralfags and raiders from 4chan tend to hunt them down and ruin their lives.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-28T14:55:48.929Z · LW(p) · GW(p)

And yeah, to the extent that people do torture animals in current events (as opposed to past stories), vast hordes of moralfags and raiders from 4chan tend to hunt them down and ruin their lives.

I wonder if this might happen to people running hells too? I lack the domain expertise to judge if this is ludicrous or impossible to predict or what.

Replies from: None
comment by [deleted] · 2013-01-28T15:06:08.323Z · LW(p) · GW(p)

Really depends on whether the beings in the hell are cute and empathetic.

Humans don't like to hurt things that are cute and empathetic, and don't like them getting hurt. Otherwise we don't care.

comment by wedrifid · 2013-01-23T17:00:02.646Z · LW(p) · GW(p)

4chan would have entire threads devoted to building worse hells. Yes. Seriously. They really would. And then they would instantiate those hells.

They really would at that. It seems you are concerned here about malicious actual trolls specifically. I suppose if the technology and knowledge were disseminated to that degree (before something actually foomed) then that would be the most important threat. My first thoughts had gone towards researchers with the capability and interest to research this kind of technology themselves, who are merely callous and indifferent to the suffering of their simulated conscious 'guinea pigs', for the aforementioned reasons.

So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer

At what level of formalization does this kind of 'incremental progress' start to count? I ask because your philosophical essays on reductionism, consciousness and zombies seem to be incremental progress towards that end (though I certainly wouldn't consider them a mistake to publish or a net risk).

Replies from: Multiheaded
comment by Multiheaded · 2013-01-23T17:50:14.525Z · LW(p) · GW(p)

Related.

What is the suffering of a few in the face of Science? Pain is all relative, as is eternity. We've done far worse. I'm sure we have.

(I'm not a huge fan of SCP in general, but I like a few stories with the "infohazard" tag, and I'm amused by how LW-ish those can get.)

At what level of formalization does this kind of 'incremental progress' start to count? I ask because your philosophical essays on reductionism, consciousness and zombies seem to be incremental progress towards that end (though I certainly wouldn't consider them a mistake to publish or a net risk).

Eliezer could argue that the incremental progress towards stopping the risk outweighs the danger, same as with the general FAI/uFAI secrecy debate.

Replies from: MugaSofer, army1987
comment by MugaSofer · 2013-01-24T12:26:13.163Z · LW(p) · GW(p)

I can't find the quote on that page. Is it from somewhere else (or an earlier version) or am I missing something?

Replies from: Multiheaded
comment by Multiheaded · 2013-01-24T12:49:52.897Z · LW(p) · GW(p)

White text. (Apparently there's a few more hidden features in the entry, but I only found this one.)

Replies from: MugaSofer, MugaSofer
comment by MugaSofer · 2013-01-24T14:08:50.928Z · LW(p) · GW(p)

Ah, thanks.

comment by MugaSofer · 2013-01-24T14:11:06.309Z · LW(p) · GW(p)

I, um, still can't find it. This white text is on the page you linked to, yes? About the videos that are probably soultraps?

EDIT: Nevermind, got it.

comment by A1987dM (army1987) · 2013-01-26T17:09:45.876Z · LW(p) · GW(p)

Eliezer could argue that the incremental progress towards stopping the risk outweighs the danger, same as with the general FAI/uFAI secrecy debate.

I think EY vastly overrates security through obscurity. Szilard keeping results about graphite and neutrons secret happened before the Internet; now there's this thing called the Streisand effect.

comment by DaFranker · 2013-01-23T17:30:54.073Z · LW(p) · GW(p)

So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer, shut up.

"The Sims" is often heralded as the best-selling videogame of all time, and it attracts players of all ages, races and genders from all across the world and from all walks of life.[citation needed]

Now imagine if the toons in the game could actually feel what was happening to them and react believably to their environment and situation and events?

I'm sure I don't need to quote the Rules of Acquisition; everyone here should know where this leads if word of such a technique gets out.

Replies from: fubarobfusco, shminux, MugaSofer, army1987, wedrifid
comment by fubarobfusco · 2013-01-24T02:44:02.533Z · LW(p) · GW(p)

Now imagine if the toons in the game could actually feel what was happening to them and react believably to their environment and situation and events?

There have always been those who would pull the wings off flies, stomp on mice, or torture kittens. Setting roosters, fish, or dogs to fight each other to death remains a well-known spectacle in many rural parts of the world. In Shakespeare's day, Londoners enjoyed watching dogs slowly kill bulls or bears, or be killed by them; in France they set bushels of cats on fire to watch them burn. Public executions and tortures, gladiatorial combat among slaves, and other nonconsensual "blood sports" have been common in human history.

What's the difference?

Replies from: Eliezer_Yudkowsky, shminux, MugaSofer, army1987
comment by Shmi (shminux) · 2013-01-24T04:11:04.010Z · LW(p) · GW(p)

Now imagine if the toons in the game could actually feel what was happening to them and react believably to their environment and situation and events?

How do you know that they don't?

comment by MugaSofer · 2013-01-24T10:19:27.537Z · LW(p) · GW(p)

The average individual could not hold private gladiatorial contests, on a whim, at negligible cost. Killing a few innocents by torture as public spectacle is significantly less harm than repeatedly torturing large groups as private entertainment, for as little as the average individual would have paid for their ticket to the cockfight.

Also, some people reckon the suffering of animals doesn't matter. They're wrong, but they wouldn't care about most of your examples (or at least they would claim it's because they increase the risk you'll do the same to humans, which is a whole different kettle of fish.)

comment by A1987dM (army1987) · 2013-01-24T15:42:36.726Z · LW(p) · GW(p)

Not to mention the sizeable fraction of car drivers who will swerve in order to hit turtles. What the hell is wrong with my species?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-25T12:15:50.020Z · LW(p) · GW(p)

Link is broken.

... seriously? Poor turtles >:-(

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2013-01-25T12:18:18.195Z · LW(p) · GW(p)

It was mentioned recently on Yvain's blog and a few months ago on LW (can't find it right now).

comment by Shmi (shminux) · 2013-01-24T04:12:58.443Z · LW(p) · GW(p)

Now imagine if the toons in the game could actually feel what was happening to them and react believably to their environment and situation and events?

How do you know that they don't?

Replies from: None, DaFranker
comment by [deleted] · 2013-01-24T05:35:50.986Z · LW(p) · GW(p)

Why do you always have to ask subtly hard questions? I can just see your smug face, smiling that smug smile of yours with that slight tilt of the head as we squirm trying to rationalize something up quick.

Here's my crack at it: They don't have what we currently think is the requisite code structure to "feel" in a meaningful way, but of course we are too confused to articulate the reasons much further.

Replies from: shminux
comment by Shmi (shminux) · 2013-01-24T06:52:25.226Z · LW(p) · GW(p)

Thank you, I'm flattered. I have asked Eliezer the same question, not sure if anyone will reply. I hoped that there is a simple answer to this, related to the complexity of information processing in the substrate, like the brain or a computer, but I cannot seem to find any discussions online. Probably using wrong keywords.

Replies from: Kaj_Sotala, None
comment by [deleted] · 2013-01-24T15:04:07.919Z · LW(p) · GW(p)

related to the complexity of information processing in the substrate

Not directly related. I think it has a lot to do with being roughly isomorphic to how a human thinks, which requires large complexity, but a particular complexity.

When I evaluate such questions IRL, like in the case of helping out an injured bird, or feeding my cat, I notice that my decisions seem to depend on whether I feel empathy for the thing. That is, do my algorithms recognize it as a being, or as a thing.

But then empathy can be hacked or faulty (see for example pictures of African children, cats and small animals, ugly disfigured people, far away people, etc), so I think of a sort of "abstract empathy" that is doing the job of recognizing morally valuable beings without all the bugs of my particular implementation of it.

In other words, I think it's a matter of moral philosophy, not metaphysics.

comment by DaFranker · 2013-01-24T14:37:23.843Z · LW(p) · GW(p)

How do you know that they don't?

Well, I can't speak for the latest games, but I've personally read (some of) the core AI code for the toons in the first game of the series, and there was nothing in there that made a model of said code or attempted any form of what I'd even call "reasoning" throughout. No consciousness or meta-awareness.

By being simulated by the code simulating the game in which they "are", they could to some extent be said to be "aware" of certain values like their hunger level, if you really want to stretch wide the concept of "awareness". However, there seems to be no consciousness anywhere to be 'aware' (in the anthropomorphized sense) of this.

Since my priors are such that I consider it extremely unlikely that consciousness can exist without self-modeling and even more unlikely that consciousness is nonphysical, I conclude that there is a very low chance that they can be considered a "mind" with a consciousness that is aware of the pain and stimuli they receive.

The overall system is also extremely simple, in relative terms, considering the kind of AI code that's normally discussed around these parts.

comment by MugaSofer · 2013-01-24T10:05:56.584Z · LW(p) · GW(p)

Why would them feeling it help them "react believably to their environment and situation and events"? If they're dumb enough that you can "run lots of small, stupid, suffering conscious agents on a home computer", I mean.

Of course, give Moore time and this objection will stop applying.

Replies from: DaFranker
comment by DaFranker · 2013-01-24T14:58:53.965Z · LW(p) · GW(p)

We're already pretty close to making game characters have believable reactions, but only through clever scripting and a human deciding that situation X warrants reaction Y, and then applying mathematically-complicated patterns of light and prerecorded sounds onto the output devices of a computer.

If we can successfully implement a system that has that-function-we-refer-to-when-we-say-"consciousness" and that-f-w-r-t-w-w-s-"really feel pain", then it seems an easy additional step to implement the kind of events triggering the latter function and the kind of outputs from the former function that would be believable and convincing to human players. I may be having faulty algorithmic intuitions here though.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-25T09:24:54.574Z · LW(p) · GW(p)

Well, if they were as smart as humans, sure. Even as smart as dogs, maybe. But if they're running lots of 'em on a home PC, then I must have been mistaken about how smart you have to be for consciousness.

comment by A1987dM (army1987) · 2013-01-24T15:43:39.977Z · LW(p) · GW(p)

"The Sims"

I used to torture my own characters to death a lot, back in the day.

EDIT: Not to mention what I did when playing Roller Coaster Tycoon.

comment by wedrifid · 2013-01-25T03:59:46.040Z · LW(p) · GW(p)

Now imagine if the toons in the game could actually feel what was happening to them and react believably to their environment and situation and events?

The favourite Sim household of my housemate was based on "Buffy the Vampire Slayer". Complete with a graveyard constructed in the backyard. Through the judicious application of "remove ladder" from the swimming pool.

And this is all without any particular malice!

comment by Peterdjones · 2013-01-23T17:41:35.491Z · LW(p) · GW(p)

They all do damage in exchange for attention and the joy of seeing others upset,

Is contrarianism (which is all that a lot of low-level net trolling is) actually damaging? Are people free of responsibility for rising to the bait?

Replies from: JGWeissman
comment by JGWeissman · 2013-01-23T18:43:12.847Z · LW(p) · GW(p)

Are people free of responsibility for rising to the bait?

You ask that as if there were some conservation of responsibility, where the people rising to the bait are responsible for doing so, and this somehow means the troll is not responsible for deliberately giving the opportunity, and by the troll not being responsible, this somehow means the troll is not causing damage.

The damage is the result of interactions of multiple causes, and for each of those causes that is a decision by an agent, you can consider that agent "responsible". This can get complicated when causes include not-so-agenty behaviors of quasi-agents. At some point "responsibility" stops being a useful concept, and you just have to decide whatever policy it was supposed to inform by actually looking at the likely consequences.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T19:16:13.821Z · LW(p) · GW(p)

I'm still not convinced there is any damage in the kind of low level trolling which is just teasing.

Replies from: ArisKatsaris, DaFranker, Eliezer_Yudkowsky
comment by ArisKatsaris · 2013-01-23T20:12:15.937Z · LW(p) · GW(p)

A troll (no matter how low-level) wastes time, ruins people's mood, and destroys trust.

Every actual troll increases the probability that some other innocent newcomer will get accused of trolling wrongly, making a community just that tiny bit more hostile to newcomers, and thus less open to the outside world. Then those newcomers judge the community badly for the community's negative judgment of them. Bad feelings all around, and a community which is now less receptive of new ideas, because they might just be trolling.

So, yeah, trolling does damage. Trolling is bad, bad, bad.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T20:25:22.044Z · LW(p) · GW(p)

Doesn't it take two to waste time?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T20:46:27.770Z · LW(p) · GW(p)

Yes, most crimes take at least two people, the victim and the perpetrator.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T20:48:12.266Z · LW(p) · GW(p)

Isn't "crimes" just a wee bit overheated?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T20:55:22.226Z · LW(p) · GW(p)

For very minor moral crimes (e.g. insults) the same applies: the insulter and the insulted. The spitter and the spat upon. The troll and the trolled.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T20:58:53.163Z · LW(p) · GW(p)

Insults are highly subjective too.

comment by DaFranker · 2013-01-23T20:12:45.450Z · LW(p) · GW(p)

It could be argued that such trolling can cause circumstantial damage or emotional damage, through intermediaries, with an example that takes a couple of weak steps in reasoning.

It could be argued back that this is circumstantial, and therefore not caused knowingly by the troll, and the example taken apart at all those weak points by assigning them actual numbers for their probability.

Then it could be counter-argued again that the number of possible circumstances, or possible combinations of circumstances, that would bring about some form of damage is so large, compared to the circumstances that would not and to the probabilistic facts of the trolling, that it ends up being more likely than not that at least one out of the incredibly many possible sets of circumstances will apply for any given instance of trolling.

I need a sanity check for motivated stopping here, but I don't see any good further counter-argument that I could steel-man and that would show this isn't a case of "trolling causes damage in a predictable manner", unless my prior that such damage would not occur in the absence of trolling is completely wrong.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T20:24:23.170Z · LW(p) · GW(p)

It could be argued that such trolling can cause circumstantial damage or emotional damage

It could be argued that there is an opposite process by which people label undamaging behaviour as trolling so that, e.g., they don't have to update.

Replies from: jooyous
comment by jooyous · 2013-01-23T23:13:45.845Z · LW(p) · GW(p)

That's like saying you shouldn't drive on your street when Joe is driving because Joe is a bad driver. It's true that you should update and avoid driving when Joe is out. But you find that out after Joe crashes his car into you. At which point, damage has been done to your car that updating won't fix.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-24T00:37:57.485Z · LW(p) · GW(p)

Too much metaphor. What is this damage?

Replies from: jooyous
comment by jooyous · 2013-01-24T01:35:15.275Z · LW(p) · GW(p)

If you present a depressed person with strong arguments that they should commit suicide, this is likely to cause their beliefs to change. So changing their beliefs back to their old level so that they can continue functioning as before (as opposed to killing themselves) will require work in addition to realizing they shouldn't talk to you anymore, possibly in the form of support and hugs from other supportive people. Similarly, if your car has been damaged in an accident, it will require additional work to run again, such as replacing deformed parts. The car won't magically start running once Joe is off the road.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-24T01:39:22.911Z · LW(p) · GW(p)

And who is doing that?

Replies from: jooyous
comment by jooyous · 2013-01-24T01:41:08.534Z · LW(p) · GW(p)

I think trolls.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-24T01:44:13.984Z · LW(p) · GW(p)

Well, no-one is encouraging suicide here, so there are no trolls here.

Replies from: jooyous
comment by jooyous · 2013-01-24T01:48:37.489Z · LW(p) · GW(p)

Uhh. There are no trolls here, therefore trolls do not cause damage?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-24T01:50:27.770Z · LW(p) · GW(p)

I dare say high-level trolls cause all sorts of damage, but what's the relevance?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-24T03:11:15.106Z · LW(p) · GW(p)

Public notice: I'm now considering this user a probable troll and will act accordingly.

In the future, I may so consider and act, under these sorts of circumstances, without such public notice.

Does any (old-time, trusted) user want to volunteer as the mod who goes back and deletes all the troll comments once a user has been designated a troll?

Replies from: wedrifid, Peterdjones, shminux, yli
comment by wedrifid · 2013-01-24T03:35:54.114Z · LW(p) · GW(p)

Does any (old-time, trusted) user want to volunteer as the mod who goes back and deletes all the troll comments once a user has been designated a troll?

That seems like something that should be automated. I could give you a script that (for example) goes through a user page and deletes (or rather sends the appropriate button press that would be a delete command when run by a mod) any comment with less than 0 karma. It would make more sense for this to be an implemented feature of the site but "something I could implement in an hour without accessing the source code" seems like a suitable proxy for "is easy to make a feature of the site, one way or another".
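
(For concreteness, a minimal sketch of the kind of script described above, in Python. Everything site-specific here is a hypothetical placeholder - the base URL, the CSS selectors, and the moderator-only delete endpoint are assumptions rather than the real site's markup or API - and it presumes a moderator's login session is already in place.)

```python
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example-forum.invalid"  # placeholder, not the real site
session = requests.Session()                # assumed to already carry moderator login cookies

def comments_on_page(username, page):
    """Yield (comment_id, karma) pairs scraped from one page of a user's comment history."""
    html = session.get(f"{BASE_URL}/user/{username}/comments", params={"page": page}).text
    soup = BeautifulSoup(html, "html.parser")
    for node in soup.select("div.comment"):              # hypothetical selector
        comment_id = node.get("data-comment-id")          # hypothetical attribute
        karma = int(node.select_one("span.karma").text)   # hypothetical selector
        yield comment_id, karma

def delete_negative_comments(username, pages=10, dry_run=True):
    """List (or, with dry_run=False, delete) every comment with karma below zero."""
    for page in range(1, pages + 1):
        for comment_id, karma in comments_on_page(username, page):
            if karma < 0:
                if dry_run:
                    print(f"would delete {comment_id} (karma {karma})")
                else:
                    # hypothetical stand-in for "the button press that would be a delete command"
                    session.post(f"{BASE_URL}/mod/delete", data={"id": comment_id})

if __name__ == "__main__":
    delete_negative_comments("example_user")  # dry run by default; nothing is actually deleted
```

Defaulting to a dry run that only lists candidates seems like the sensible design for anything that deletes comments in bulk.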

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-24T04:39:57.501Z · LW(p) · GW(p)

That sounds helpful. Let's give this a shot. (I'm running updated Chrome on Win7 if that's relevant.)

comment by Peterdjones · 2013-01-24T03:37:27.187Z · LW(p) · GW(p)

Hi.

You can respond to the argument (it might even do you good), or you can refuse to consider criticism. It's your choice. From which I will draw my own conclusions.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-24T11:34:55.348Z · LW(p) · GW(p)

Hi.

You can state that you've never trolled in this site (not even "low-level") and promise to never troll here (not even "low-level") in the future.

As a sidenote, previously you argued that people who respond to trolls are also to blame. Now you argue that EY would be to blame if he does not respond to a possible troll.

From this discrepancy I just drew my own conclusions.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-24T17:15:58.696Z · LW(p) · GW(p)

...um, Aris, you're feeding the troll...

comment by Shmi (shminux) · 2013-01-24T04:20:59.528Z · LW(p) · GW(p)

Thank you! Intentional troll or not, this user's extremely prolific posting of low-value comments is something I'd rather not see here.

comment by yli · 2013-01-24T20:52:04.709Z · LW(p) · GW(p)

He could have just been talking about trolling in the abstract. And even if not, after reading a bit of his history, his "trolling", if any, is at most at the level of rhetorical questions. I'm not really a fan of his commenting, but if he's banned, I'd say "banned for disagreement" will be closer to the mark as a description of what happened than "banned for trolling", though not the whole story.

comment by Kawoomba · 2013-01-23T18:00:48.325Z · LW(p) · GW(p)

So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer, shut up.

Most any incremental progress towards AGI, or even "just" EMs, would be dual use (if not centuple use) and could be (ab)used for helping achieve such enterta ... vile and nefarious purposes.

In fact, it is hard to imagine realistic technological progress that can solely be used to run lots of small, stupid, suffering conscious agents but not as a stepping stone towards more noble pursuits (... such as automated poker playing agents).

comment by Psy-Kosh · 2013-01-24T00:37:41.255Z · LW(p) · GW(p)

You know, I want to say you're completely and utterly wrong. I want to say that it's safe to at least release The Actual Explanation of Consciousness if and when you should solve such a thing.

But, sadly, I know you're absolutely right re the existence of trolls which would make a point of using that to create suffering. Not just to get a reaction, but some would do it specifically to have a world they could torment beings.

My model is not that all those trolls are identical (In that I've seen some that will explicitly unambiguously draw the line and recognize that egging on suicidal people is something that One Does Not Do, but I also know (seen) that all too many gleefully do do that.)

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T10:28:02.569Z · LW(p) · GW(p)

But, sadly, I know you're absolutely right re the existence of trolls which would make a point of using that to create suffering. Not just to get a reaction, but some would do it specifically to have a world they could torment beings.

My model is not that all those trolls are identical (In that I've seen some that will explicitly unambiguously draw the line and recognize that egging on suicidal people is something that One Does Not Do, but I also know (seen) that all too many gleefully do do that.)

It's worth noting that private torture chambers seem different to trolling, but a troll can still set up a torture chamber - they just care about people's reaction to it, not the torture itself.

comment by MugaSofer · 2013-01-24T10:00:20.777Z · LW(p) · GW(p)

If you make it possible for them to do damage on their home computers with no chance of being arrested and other people being visibly upset about it, a large number will opt to do so. [...] 4chan would have entire threads devoted to building worse hells. Yes. Seriously. They really would. And then they would instantiate those hells.

I wish I could disagree with you, and, suspiciously, I find myself believing that there would be enough vigilante justice to discourage hellmaking - after all, the trolls are doing it for the attention, and if that attention comes in the form of people posting your details and other people breaking into your house to steal your computer and/or murder you (for the greater good), then I doubt there will be many takers.

I just wish I could trust that doubt.*

*(Not expressing a wish for trust pills.)

EDIT: Animal experimentation and factory farming are still popular, but they have financial incentive ... and I vaguely recall that some trolls kicked a dog across a football field or something and were punished by Anonymous. That's where the analogy comes from, anyway, so I'd be interested if someone knows more.

comment by Richard_Kennaway · 2013-01-24T13:02:32.024Z · LW(p) · GW(p)

So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer, shut up.

I sometimes wonder if this does not already exist, except for the suffering and consciousness being merely simulated. That is, computer games in which the entire purpose is to inflict unspeakable acts on powerless NPCs, acts whose depiction in prose or pictures would be grossly illegal almost everywhere. But I've never heard of such a thing actually existing.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T14:04:34.771Z · LW(p) · GW(p)

That is, computer games in which the entire purpose is to inflict unspeakable acts on powerless NPCs, acts whose depiction in prose or pictures would be grossly illegal almost everywhere.

What sort of acts are we talking here? Because I'm genuinely having trouble thinking of any "acts whose depiction in prose or pictures would be grossly illegal almost everywhere" except maybe pedophilia. Censorship and all that.

And there are some fairly screwed-up games out there, although probably not as bad as they could be if designed with that in mind (as opposed to, y'know, the enjoyment of the player.)

I've never heard of such a thing actually existing.

Well would you, if it was grossly illegal to describe the contents?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-24T14:19:39.006Z · LW(p) · GW(p)

What sort of acts are we talking here?

I didn't want to be explicit, but you thought of the obvious example.

probably not as bad as they could be if designed with that in mind (as opposed to, y'know, the enjoyment of the player.)

For the sort of 4chan people Eliezer mentioned, these would be completely congruent.

I've never heard of such a thing actually existing.

Well would you, if it was grossly illegal to describe the contents?

It is well known that illegal pornography exists on non-interactive media. For interactive media, all I've ever heard of is 18-rated sex scenes.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T14:54:44.669Z · LW(p) · GW(p)

What sort of acts are we talking here?

I didn't want to be explicit, but you thought of the obvious example.

I can't think of any other examples, though.

probably not as bad as they could be if designed with that in mind (as opposed to, y'know, the enjoyment of the player.)

For the sort of 4chan people Eliezer mentioned, these would be completely congruent.

... maaaybe. Again, I'm not sure exactly what you have in mind.

I've never heard of such a thing actually existing.

Well would you, if it was grossly illegal to describe the contents?

It is well known that illegal pornography exists on non-interactive media. For interactive media, all I've ever heard of is 18-rated sex scenes.

Good point. Indeed, it's well known that child porn exists on some level.

In fact ... I do vaguely recall something about a Japanese game about rape causing a moral panic of some kind, so ...

EDIT: In fact, it featured kids too! RapeLay. It's ... fairly horrible, although I think someone with the goal of pure horribleness would do .. better? Worse? Whatever.

comment by ialdabaoth · 2013-01-26T18:07:02.998Z · LW(p) · GW(p)

talk themselves into believing it doesn't matter for some stupid reason

That stupid reason is, at core, nihilistic solipsism - and it's not as stupid as you'd think. I'm not saying it's right, but it does happen to be the one inescapable meme-trap of philosophy.

To quote your own fic, their reason is "why not?" - and their consciousness was not grown such that your impassioned defense of compassion and consideration has any intrinsic factor in their utility function.

comment by MugaSofer · 2013-01-23T13:40:39.838Z · LW(p) · GW(p)

(Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off... how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter... do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.)

At least for now, it'd take a pretty determined troll who could build an em for the sole purpose of being a terrible person. Not saying some humanity-first movement mightn't pull it off, but by that point you could hopefully have legal recognition (assuming there's no risk of accidental fooming and they pass the Turing test).

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T14:04:43.175Z · LW(p) · GW(p)

I don't think we're talking ems, we're talking conscious algorithms which aren't necessarily humanlike or even particularly intelligent.

And as for the Turing Test, one oughtn't confuse consciousness with intelligence. A 6-year-old human child couldn't pass as an adult human, but we still believe the child to be conscious, and my own memories indicate that I indeed was at that age.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-23T14:21:48.041Z · LW(p) · GW(p)

Well, I think consciousness, intelligence and personhood are sliding scales anyway, so I may be imagining the output of a Nonperson Predicate somewhat differently to LW norm. OTOH, I guess it's not a priori impossible that a simple human-level AI could fit on something available to the public, and such an insight would be ... risky, yeah. Upvoted.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T14:54:57.581Z · LW(p) · GW(p)

First of all, I also believe that consciousness is most probably a sliding scale.

Secondly, again you just used "human-level" without specifying human-level at what, at intelligence or at consciousness; as such I'm not sure whether I actually communicated adequately my point that we're not discussing intelligence here, but just consciousness.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T13:58:51.773Z · LW(p) · GW(p)

Well, they do seem to be correlated in any case. However, I was referring to consciousness (whatever that is.)

comment by Shmi (shminux) · 2013-01-24T01:44:21.047Z · LW(p) · GW(p)

Re non-person predicates, do you even have a non-sharp (but non-trivial) lower bound for it? How do you know that the Sims from the namesake game aren't persons? How do we know that Watson is not suffering indescribably when losing a round of Jeopardy? And that imagining someone (whose behavior you can predict with high accuracy) suffering is not as bad as "actually" making someone suffer? If this bound has been definitively established, I'd appreciate a link.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T10:24:58.688Z · LW(p) · GW(p)

It's unclear where our intuitions on the subject come from or how they work, and they are heavily .... distorted ... by various beliefs and biases. OTOH, it seems unlikely that rocks are conscious and we just haven't extrapolated far enough to realize. It's also unclear whether personhood is binary or there's some kind of sliding scale. Nevertheless, it seems clear that a fly is not worth killing people over.

Even a person who has never introspected about their moral beliefs can still know that murder is wrong. They're more likely to make mistakes, but still.

comment by Mitchell_Porter · 2013-02-05T07:58:43.858Z · LW(p) · GW(p)

Anthropic reasoning / "reality fluid"

How are these related? One is epistemology and one is ontology.

comment by Wei Dai (Wei_Dai) · 2013-01-23T07:40:11.527Z · LW(p) · GW(p)

Resolutions of simple confusions usually look pretty obvious in retrospect.

Can you give some more examples of this, besides "free will"? (I don't understand where your intuitions comes from that certain problems will turn out to have solutions that are obvious in retrospect, and that such feelings of obviousness are trustworthy. Maybe it would help me see your perspective if I got some more past examples.)

Replies from: MugaSofer
comment by MugaSofer · 2013-01-23T13:26:55.227Z · LW(p) · GW(p)

A tree falls in a forest with no-one to hear it. Does it make a sound?

Replies from: Peterdjones, Peterdjones
comment by Peterdjones · 2013-01-23T18:10:22.634Z · LW(p) · GW(p)

I don't class that as a problem that is discussed by professional philosophers. It's more of a toy question that introduces the nature of phil. problems -- and the importance of asking "it depends on what you mean..." -- to laypeople.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T11:45:55.420Z · LW(p) · GW(p)

I agree, but that's not what I was aiming for. It's an example of obviousness after the fact, not philosophers being wrong/indecisive.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-24T13:25:29.165Z · LW(p) · GW(p)

It's not an example that lends much credence to the idea that all problems can be solved that way, even apart from the generalisation-from-one-example issue.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T14:08:05.601Z · LW(p) · GW(p)

I'm not claiming it proves anything, and I'm not taking sides in this discussion. Someone asked for an example of something - something which varies from person to person depending on whether they've dissolved the relevant confusions - and I provided what I thought was the best example. It is not intended to prove anyone's point; arguments are not soldiers.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-25T11:07:02.842Z · LW(p) · GW(p)

The counterargument to "arguments are not soldiers" is "a point should have a point".

Replies from: MugaSofer
comment by MugaSofer · 2013-01-25T11:38:02.828Z · LW(p) · GW(p)

It wasn't an argument at all. That you chose to interpret it as an enemy soldier is your mistake, not mine. It's not a weak soldier, it's a ... medic or something.

comment by Shmi (shminux) · 2013-01-23T00:30:14.620Z · LW(p) · GW(p)

Do you have an example in mind where a certain philosophical question claimed to have been solved or dissolved by Eliezer turned out to be not solved after all, or the solution was wrong?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-23T01:05:31.475Z · LW(p) · GW(p)

Do you have an example in mind where a certain philosophical question claimed to have been solved or dissolved by Eliezer turned out to be not solved after all, or the solution was wrong?

Also, instances where Eliezer didn't seem to realize that a problem existed until someone pointed it out to him:

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T01:16:02.730Z · LW(p) · GW(p)

Order-dependence and butterfly effects - knew about this and had it in mind when I wrote CEV, I think it should be in the text.

Counterfactual Mugging - check, I don't think I was calling TDT a complete solution before then but the Counterfactual Mugging was a class of possibilities I hadn't considered. (It does seem related to Parfit's Hitchhiker which I knew was a problem.)

Solomonoff Induction - again, I think you may be overestimating how much weight I put on that in the first place. It's not a workable AI answer for at least two obvious reasons I'm pretty sure I knew about from almost-day-one, (a) it's uncomputable and (b) it can't handle utility functions over the environment. However, your particular contributions about halting-oracles-shouldn't-be-unimaginable did indeed influence me toward my current notion of second-order logical natural induction over possible models of axioms in which you could be embedded. Albeit I stand by my old reply that Solomonoff Induction would encompass any computable predictions or learning you could do about halting oracles in the environment. (The problem of porting yourself onto any environmental object is something I already knew AIXI would fail at.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-23T23:28:11.107Z · LW(p) · GW(p)

Order-dependence and butterfly effects - knew about this and had it in mind when I wrote CEV, I think it should be in the text.

Ok, I checked the CEV writeup and you did mention these briefly. But that makes me unsure why you claimed to have solved metaethics. What should you do if your FAI comes back and says that your EV shows no coherence due to order dependence and butterfly effects (assuming it's not some kind of implementation error)? If you're not sure the answer is "nothing", and you don't have another answer, doesn't that mean your solution (about the meaning of "should") is at least incomplete, and possibly wrong?

Counterfactual Mugging - check, I don't think I was calling TDT a complete solution before then but the Counterfactual Mugging was a class of possibilities I hadn't considered. (It does seem related to Parfit's Hitchhiker which I knew was a problem.)

You said that TDT solves Parfit's Hitchhiker, so I don't know if you would have kept looking for more problems related to Parfit's Hitchhiker and eventually come upon Counterfactual Mugging.

Solomonoff Induction - again, I think you may be overestimating how much weight I put on that in the first place. It's not a workable AI answer for at least two obvious reasons I'm pretty sure I knew about from almost-day-one, (a) it's uncomputable and (b) it can't handle utility functions over the environment

Both of these can be solved without also solving halting-oracles-shouldn't-be-unimaginable. For (a), solve logical uncertainty. For (b), switch to UDT-with-world-programs.

Also, here is another problem that maybe you weren't already aware of.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T10:31:01.595Z · LW(p) · GW(p)

What should you do if your FAI comes back and says that your EV shows no coherence due to order dependence and butterfly effects (assuming it's not some kind of implementation error)?

Wouldn't that kind of make moral reasoning impossible?

comment by CarlShulman · 2013-01-23T07:22:19.333Z · LW(p) · GW(p)

I'm not sure if he had both math and philosophy in mind when he wrote that or just math,

Both.

comment by V_V · 2013-01-25T15:38:19.358Z · LW(p) · GW(p)

It seems like that part should be less difficult in an absolute sense than the rest of the labor

You never did any engineering-level mathematical modeling of a real system, did you?

The main difficulty is not proving the theorems, it is finding the right axioms to describe the relevant aspects of the system and the properties of interest. And that's where errors often occur.

Now, typical engineering tasks pale in comparison to the task you are trying to undertake: creating a fully specified mathematical model of ethics.

though that might just be comparative advantage talking.

Most likely it's just the Dunning–Kruger effect.

The resolutions to philosophical confusions usually seem relatively straightforward once you have them, in my experience so far.

Just like when you "resolved" the interpretation of quantum mechanics? Well, good thing that you are never going to make anything close to an AGI and that AGI risk is probably overrated, otherwise it wouldn't end well...

comment by lukeprog · 2013-01-20T05:09:49.211Z · LW(p) · GW(p)

I've argued against this plan but I'm guessing that Eliezer is probably still set on this course. Others at SIAI may have more reservations about it.

For the record: I, too, want an FAI team in which Eliezer isn't the only one with Eliezer-level philosophical ability or better. This is tougher than "merely" finding 1-in-10-million math talents, but still do-able.

What am I doing about it? I wrote a post encouraging a specific kind of philosophical education that I think will be more likely to produce Eliezer-level philosophers than a "normal" philosophical education (or even a CMU or UPitts one). When Louie came up with the idea to write a list of Course recommendations for Friendliness researchers, I encouraged it. Also, one of the reasons I ended up supporting the plan to launch CFAR in 2012 was its potential not only to make people more effective at achieving their goals, but also to learn ways to make some people better philosophers (see my last paragraph here). And there's more, but I can't talk about it yet.

Also, as Eliezer said, Paul Christiano's existence is encouraging.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-20T08:27:27.434Z · LW(p) · GW(p)

What about Kawoomba's existence? :-(

CFAR and related are good efforts at raising the sanity waterline (which is an average), not so much for identifying the extreme outliers that could Alan-Turing their way towards an FAI. Those will make waves on their own.

Such grassroots organisations may be good ways of capturing the attention of a wider audience, although second to publishing in the field / personally building a network at conferences.

The time horizon and viability of taking a few hundred self-selected college-aged students and trying to grow them into a seminal figure of extraordinary capabilities seem prohibitive, especially when there are already exceedingly capable people at Stanford et al. who already bring the oomph and just lack the FAI motivation.

Replies from: lukeprog
comment by lukeprog · 2013-01-20T08:34:41.189Z · LW(p) · GW(p)

Can you name some older academics that have the requisite philosophical skill? (And if your first line isn't a joke, perhaps you can link me to some of your own philosophical works?)

Replies from: Kawoomba
comment by Kawoomba · 2013-01-20T12:10:06.556Z · LW(p) · GW(p)

Sipser, Russell & Norvig, et al. are core parts of your proposed philosophical curriculum; Louie's course recommendations read like my former grad CS reading list.

It follows that, say, many people with or pursuing a PhD in Machine Learning and related fields have also picked up a majority of your desired (per your recommendations) philosophical skills.

I'm not postulating that Bayesian superstars also make the best drummers and fencing masters, but between your analytical CS-style philosophy and Machine Learning groups there is a cross-domain synergy effect that comes with the clarity of designing minds - or advanced algorithms.

(As for myself, the first line was meant as a joke - alas! How sad!)

Replies from: lukeprog
comment by lukeprog · 2013-01-20T18:53:19.454Z · LW(p) · GW(p)

It follows that, say, many people with or pursuing a PhD in Machine Learning and related fields have also picked up a majority of your desired (per your recommendations) philosophical skills

No, I wouldn't say that. The problem is that we (humans) don't know how to teach the philosophical skill I'm talking about, so there aren't classes on it, so I can only recommend courses on "the basics" or "prerequisites." I don't know how to turn a math/CS PhD under Stuart Russell into the next Eliezer Yudkowsky.

comment by itaibn0 · 2013-03-22T14:46:58.951Z · LW(p) · GW(p)

To all commenters who observed that I don't seem to stand out from 10 other smart people they know, either you didn't comprehend the entirety of today's post, or you have very high confidence that you occupy the highest possible rank of human ability.

The only thing that's certain is that somebody has trouble properly apprehending your intelligence.

comment by Marshall · 2008-09-26T18:00:38.000Z · LW(p) · GW(p)

Vassar - your English is encrypted - more an assumption of intelligence than a sign.

EY - I admire your work. Along with Robin this is the best Show in Town and I will miss it, when it stops.

I actually doubt whether you are accomplishing anything - but this does not seem so important to me, because the effort itself is worthwhile. And we are educated along the way.

This is a youthful blog with youthful worries. From the vantage point of age worrying about intelligence seems like a waste of time and unanswerable to boot.

But those are the stones in your shoes.

comment by Roland2 · 2008-09-26T18:26:40.000Z · LW(p) · GW(p)

@Jef Allbright:

Can you be concrete and specific about where Eliezer is or has been arrogant?

comment by Vladimir_Nesov · 2008-09-26T18:36:46.000Z · LW(p) · GW(p)

"Most intelligent people I've met" is not informative, we need to give quantitative estimates. My estimate is calibrated based on knowing people who passed various screenings, such as math, physics and programming contests (including at international level), test results on screening exams to top universities, performance in hard university courses, people starting to grasp research and programming, etc. Based on population of regions covered by various screenings, and taking age, gender and different background into account, I can approximately rate these people on the "1 in XXX" scale. I'd say that you need to be at a level of 1 in 300 or so to be able to deeply understand any technical field of human knowledge given reasonable effort, and 1 in 100 to be a competent technical specialist. There is a significant difference (which can cash out as, say, 3x speedup at obtaining given level of aptitude) between people who are 1 in 1000 and 1 in 10000. I know too few people beyond 1 in 10000 (about top 30 in a contest over population of 20 million within a 3-year age interval, given average lifespan of 60 and background selection of 1 in 3 top people to enter the contest) to say whether there is a general enough advantage of being there, or if the performance levels off and more rarely occurring extraordinary ability only presents itself on very narrow task, like blasting through programming contests.

People at all levels are stupid in unknown domains; it takes much effort to start demonstrating "raw intelligence" at anything (although in many things skills partially translate between domains). You can't learn to be creative enough if you don't pass a necessary threshold, but on the other hand if you are past it, sufficient effort will make you able to solve any problem other people can solve, although it'll take more time to attain that level and to solve problems once you do.

The main problem for getting results is that it's very hard to port smart people onto a new field of expertise, to convince them to start thinking about something or to actively work on improving their performance in a given domain. So, it seems that the main problem with seeing (or finding) enough brilliant people in any given field or group is not the rarity of talent, but the roads they all took, too few of which lead where you look.

People won't risk working on hard, important problems, or even think too much about exploring which problems could be important; they choose convenient, safe or enjoyable paths; they choose the intellectual dynamic, the process, rather than a proper understanding of results, or appearance. The people you hear from are not the smartest there are in a given subject.

I estimate myself to be around 1 in 1000; more specifically, a somewhat blinder, slower and faulty-memory version of 1 in 5000 (as I understand, this is not how many other people perceive their limitations). I clearly see the advantages that people with clearer minds get, but as far as I can tell I'm still able to excel at anything if I devote enough attention to it, given enough time. Extraordinary intellectual productivity is a result of taking the right road, which may depend on happenstance beyond your control. Digging yourself out of the pit of blind stupidity (relatively speaking), of seeing only a surface level and stopping the investigation there, is the most important thing (which is what the art of rationality is about: not being stupid, using what you've got, while it falls short of understanding intelligence more deeply).

From what I've read, I think that Eliezer is somewhere around 1 in 5000 on this scale, given the time he has devoted to the study of the subjects and the results he has produced. He stands out in comparison mainly because too few smart enough people engage the questions he addresses from the same side, and of those who do, hardly anybody has devoted much serious thought to them while also avoiding getting lost on a false road. You don't see the absence of talent, but the initial stupidity in an unfamiliar domain, or entrenched mistakes where there isn't a solid body of knowledge and authority to force them out.

So, I think that his estimate of 1 in 10000-100000 is too high. The problem is more one of convincing the right people to work on the problem and pointing them to the right path than of finding the right people at all. Having an introductory text showing the path is a huge asset, so the decision to compose this book might be a fruitful one.

comment by Cassandra2 · 2008-09-26T19:01:31.000Z · LW(p) · GW(p)

My own potential intelligence does worry me fairly often. I am currently studying to become an engineer and hope to work on some of the awesome ideas I read about on sites like this. The thing is, though, I wasted the first twenty-three years of my life. I am currently twenty-five years old and I have been forced to pretty much start from scratch on everything from social skills to education, and after two years I think I am making some headway. I am even starting to understand what Eliezer talks about in all these posts and apply it to my own life as best I can. The math still escapes me, but I managed to make it through about half of the Bayesian explanation before getting completely and utterly lost. So I think it is certainly possible to learn a huge amount of things even after young childhood, but it is rather less efficient. I have had to really struggle to get to where I am now. And since Eliezer is one of the big reasons I am so excited about getting into science, I would like to attempt to work in a similar field of research. Yep.

Replies from: Raw_Power
comment by Raw_Power · 2010-09-12T15:28:23.233Z · LW(p) · GW(p)

I am in a situation that is practically the same as yours.

As an ex-child prodigy, I identify with the insecurities this thread is about. However, having studied Math at College for a year and a half, and having met there people who were so far beyond my level that it heavily obstructed communication with them, who were so far beyond my understanding they might as well not have been human for all it mattered, I can confidently say that my ambitions of genius-dom have been utterly crushed beyond repair. As I am, I will be very content to be a competent, respected engineer.

This said, I think Eliezer is protesting too much. You shouldn't care how smart you are, only that you are getting your job done: that arrogance of yours needs some serious work, as you know, we know you know, etc. And it seems unfortunate in that regard that exposure to a large number of far more brilliant minds than yours is an experience that you will have to go seriously out of your way to get, since that is probably the shortest, easiest way to humility.

If you find an alternate path that is not as bitter, please let us know.

comment by Aron · 2008-09-26T19:12:39.000Z · LW(p) · GW(p)

Let me give a shout out to my 1:50 peeps! I can't even summarize what EY has notably accomplished beyond highlighting how much more likely he is to accomplish something. All I really want is for Google to stop returning pages that are obviously unhelpful to me, or for a machine to disentangle how the genetic code works, or a system that can give absolutely top-notch medical advice, or something better than the bumbling jackasses [choose any] that manage to make policy in our country. Give me one of those things and you will be one in a million, baby.

comment by Jef_Allbright · 2008-09-26T19:22:25.000Z · LW(p) · GW(p)

@Roland

I suppose you could google "(arrogant OR arrogance OR modesty) eliezer yudkowsky" and have plenty to digest. Note that the arrogance at issue is neither dishonest nor unwarranted, but it is an impairment, and a consequence of trade-offs which, from within a broader context, probably wouldn't be taken in the same way.

That's as far as I'm willing to entertain this line of inquiry, which ostensibly neutral request for facts appears to belie an undercurrent of offense.

comment by Phil_Goetz5 · 2008-09-26T19:39:07.000Z · LW(p) · GW(p)

You're probably among the top 10, certainly in the top 20, most intelligent people I've met. That's good enough for anything you could want to do.

Okay, I realize you're going to read that and say, "It's obviously not good enough for things requiring superhuman intelligence!"

I meant that, if you compare your attributes to those of other humans, and you sort those attributes, with the one that presents you the most trouble in attaining your goal at the top, intelligence will not be near the top of that list for you, for any goal.

comment by denis_bider2 · 2008-09-26T19:39:48.000Z · LW(p) · GW(p)

Eliezer,

what's with the ego?

In other words - why are you so driven?

I gather from your posts that you have metaphysical views which make you believe that solving the FAI problem is the most important thing you should be doing.

But is it really that important that you are the one to bring this work to fruition?

Do you think your life will have been unfulfilled, or your opportunity wasted, if you don't finish this, and finish it as soon as you can?

Would building an exceptional foundation, which future exceptional people can improve on, not be achievement enough?

What does it matter how smart you are, if you are doing what you love, and giving it your best effort?

Perhaps it is the fear of being too late that is causing you distress. Perhaps you fear that humanity is going to be destroyed because you didn't build an FAI soon enough. Perhaps you fear that your life will end some 10,000 years sooner than you'd like.

But it is not your responsibility to save the world. It can be fun if you contribute to the effort. But planets are a dime a dozen, and lives are even cheaper than that. We are not really that important. No one is. In the grand scheme of things, our dramas and concerns are lightweight fun.

One of the problems of always being the top banana is that you never learn to realize that you don't have to be the top banana to be fulfilled in your life.

There's no need to worry so much about being on Jaynes's or Conway's level. Do what you do best, and do it because it's fun. If you've been given what it takes, then this is the fastest way to become the master of your field. And even if you didn't have what it takes - which in your case is unlikely - you would still be making a contribution and having fun.

comment by Roland2 · 2008-09-26T19:47:16.000Z · LW(p) · GW(p)

@Jef Allbright:

I suppose you could google "(arrogant OR arrogance OR modesty) eliezer yudkowsky" and have plenty to digest.

Well, I was asking you, not Google. But it seems that you are not willing to stand behind your words, making claims and then failing to provide evidence when asked. Referring to a third party is an evasive maneuver. Show us your cards!

That's as far as I'm willing to entertain this line of inquiry, which ostensibly neutral request for facts appears to belie an undercurrent of offense.

That's your supposition.

comment by Robin3 · 2008-09-26T19:53:23.000Z · LW(p) · GW(p)

Eliezer, can you clarify what you mean by "You'll note that I don't try to modestly say anything like, "Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field."

Because I do know... that's not how it works."

Replies from: ejstheman
comment by ejstheman · 2011-09-25T12:41:18.746Z · LW(p) · GW(p)

He didn't reply to this, so I'll take a stab at unpacking/justifying that statement.

For the number of people in the world, smaller numbers are more conservative, so suppose that number is 6 billion. For the number of truly distinct things there are to be good at, larger numbers are more conservative, so let us suppose that number is 600,000. (I doubt any reader can come up with even 10,000 such things, even allowing trivial variations like "experimental high energy physicist with subspecialty A" and "experimental high energy physicist with subspecialty B".) Under those conservative assumptions, it is mathematically necessary that AT LEAST 99% of people are not in the top 100 at anything. Further, that bound is reached only under the additional conservative assumptions that 1) literally everyone pursues the exact career (among 600,000 choices!) that an infallible oracle told them they were the best at, and 2) people match to careers in such a way that each career is assigned by the oracle at least 100 people. The farther reality is from those assumptions, the closer that percentage gets to 100%, or the further below 100 the rank required to count as being "at the top" of something dips.
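
(The counting is simple enough to make explicit; a quick sketch under the same assumed numbers.)

```python
# Upper bound on how many people can be in a top 100 at anything,
# using the conservative figures above.
population = 6_000_000_000   # people in the world
fields = 600_000             # distinct things to be good at
top_slots = 100              # "top 100" spots per field

in_some_top_100_at_most = fields * top_slots          # 60,000,000
not_in_any = 1 - in_some_top_100_at_most / population
print(f"at least {not_in_any:.0%} of people are in no top 100")  # at least 99%
```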

Because of the preceding argument, it seems likely that most people are not very important to the field to which they are the most important. Therefore, it seems irrational to believe "I am important to my chosen field" without specific, relatively strong evidence that this is the case. Certainly "lots of people say that that's true of everyone" isn't strong enough, since that evidence has the more-plausible alternate explanation that people say that because it's comforting to them to believe it, in ignorance of whether it's actually true.

comment by denis_bider2 · 2008-09-26T19:55:24.000Z · LW(p) · GW(p)

Vladimir Nesov: thanks for your comment. I found it insightful.

comment by behemoth · 2008-09-26T20:12:31.000Z · LW(p) · GW(p)

You say 'That's not how it works.' But I think that IS how it works!

If progress were only ever made by people as smart as E.T. Jaynes, humanity would never have gotten anywhere. Even with fat tails, intelligence is still roughly normally distributed, and there just aren't that many 6 sigma events. The vast majority of scientific progress is incremental, notwithstanding that it's only the revolutionary achievements that are salient.
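
(For a sense of scale, a quick sketch of how fast a plain normal tail falls off; treating intelligence as exactly normal is of course only a modelling assumption.)

```python
from math import erfc, sqrt

# Upper-tail probability of a standard normal: P(Z > z) = 0.5 * erfc(z / sqrt(2))
for z in (2, 4, 6):
    p = 0.5 * erfc(z / sqrt(2))
    print(f"{z} sigma: about 1 in {1/p:,.0f}")
# roughly 1 in 44, 1 in 31,600, and 1 in a billion, respectively
```

On a population of six billion, such a model predicts only a handful of 6-sigma individuals.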

The real question is, do you want Friendly A.I. to be achieved? Or do you just want Friendly A.I. to be achieved by YOU? There's no shame in the latter, but precluding the latter says little about progress towards the former (to which I happen to think this blog is immensely valuable).

comment by Doug_S. · 2008-09-26T20:37:14.000Z · LW(p) · GW(p)

I find myself, except in the case of people with obvious impairments, completely unable to determine how intelligent someone is by interacting with them. Sometimes I can determine who is capable of performing specific tasks, but I have little confidence in my ability to assess "general intelligence".

To some extent, this is because different people have acquired different skills. Archimedes of Syracuse may have been the greatest mathematician in history, but he wouldn't be able to pass the exams in a high school calculus class. Obviously, the reason he couldn't solve these math problems is not that he isn't as intelligent as today's high school students. It's because he never had a calculus textbook.

If you had two black boxes, one of which contained a 14-year-old who scores in the 98th percentile on IQ tests, and the other contained the median college graduate with a degree in some technical field, such as electrical engineering, which black box would appear more intelligent?

It's hard to tell the difference between someone who is actually smarter and someone who has simply learned more. One thing that I learned how to do very well, which contributed greatly to much of my academic success, is translating "word problems" into mathematical equations. There's a systematic way to do this that works on just about any (reasonable) textbook problem, and it's a task that I found many of my fellow high school students having trouble with in my science classes.

To what extent is "intelligence" simply a matter of having already learned the best ways to learn?

comment by Doug_S. · 2008-09-26T20:41:37.000Z · LW(p) · GW(p)

Also...

I believe that you don't really understand something until you can explain it to someone else, and have them understand it, too.

comment by JulianMorrison · 2008-09-26T20:55:15.000Z · LW(p) · GW(p)

There are basically two reasons to get called arrogant. One is acting like you're better when you aren't. The other is refusing to politely pretend that the inferential chasm is small. Given where E is and where the mass of humanity is, if I had to make blind-guess assignments for 100 accusers picked at random, and I assigned them all into the "inferential distance" bin, I don't think I'd be wrong once. So a person asking to be put, or to put some accuser, into the "undeserved airs" bin had better show some sharp evidence!

comment by Zubon · 2008-09-26T21:24:33.000Z · LW(p) · GW(p)

"Math is a game for the young."

Replies from: Pablo_Stafforini, insufferablejake
comment by insufferablejake · 2013-01-20T06:00:22.389Z · LW(p) · GW(p)

Why was this down voted? The comic is a take on a fairly prevalent belief, heck, Hardy said it!

I wish more people on this forum would explain why they were down voting something that, on the face of it, seems reasonable.

I'm up voting this.

EDIT: When I posted this, I was of the opinion that the comic was just giving a funny take on the "maths is a young man's game" thing. Now, after looking at it several times, I am of the opinion that it was trying to poke fun at this said misconception. And, giving the benefit of the doubt to the original poster, I still stand by my upvote.

Replies from: Kawoomba, Qiaochu_Yuan, army1987
comment by Kawoomba · 2013-01-20T06:48:06.034Z · LW(p) · GW(p)

Let's find an archaeologist to exhume the remains of the long since dead reader who downvoted that comment a mere 52 months ago. Who knows what his thought process was? Did he hunt saber-tooths and not appreciate the cave-man like quality of XKCD stick figures? And where did he even get his computer, or did he MacGyver a Turing Machine out of sticks and stones?

Replies from: army1987
comment by Qiaochu_Yuan · 2013-01-20T07:03:20.356Z · LW(p) · GW(p)

I did. I didn't want to explain why because it's a long conversation I don't want to have, but basically I think this quote promotes a misleading conception of mathematics. For the record, Hardy is thought to have been suffering from depression when he wrote this.

Replies from: katydee, insufferablejake
comment by katydee · 2013-01-20T11:45:44.063Z · LW(p) · GW(p)

Am I misreading the comic? It appears to be making fun of that quote. Maybe it's more haha only serious than I thought?

EDIT: By the way, I upvoted you even though I think you may have misinterpreted the comic in question, as I am in favor of people explaining their downvotes. Keep up the good work!

comment by insufferablejake · 2013-01-20T13:28:43.373Z · LW(p) · GW(p)

I know, and I've read too, that Hardy was apparently not in the pink when he said this. And in all honesty, the comic seems to be making fun of the notion that maths is for the young.

comment by A1987dM (army1987) · 2013-01-20T10:52:46.285Z · LW(p) · GW(p)

I wish more people on this forum would explain why they were down voting something that, on the face of it, seems reasonable.

Me too. (Not sure that a link to that xkcd comic in this context “on the face of it, seems reasonable”, though.)

Replies from: insufferablejake
comment by insufferablejake · 2013-01-20T13:23:42.078Z · LW(p) · GW(p)

Does a link to a comic have a place in this forum? I don't know the answer to that, perhaps it is not. That said, my comment was more a reaction to other down votes, this just happened to be the straw I was commenting on.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-20T13:33:28.409Z · LW(p) · GW(p)

(I meant “context” not “comment” BTW -- fixed that.)

Does a link to a comic have a place in this forum?

Yes. I've posted such links myself. But that particular one seems to me to have very dubious topicality.

Replies from: insufferablejake
comment by insufferablejake · 2013-01-20T13:39:38.979Z · LW(p) · GW(p)

Really? I think, after staring at it for some time, that the comic is making fun of the thinking that maths is a young man's game.

comment by Peter4 · 2008-09-26T21:41:49.000Z · LW(p) · GW(p)

"Perhaps it is the fear of being too late that is causing you distress. Perhaps you fear that humanity is going to be destroyed because you didn't build an FAI soon enough. Perhaps you fear that your life will end some 10,000 years sooner than you'd like."

Humanity's alleged demise is not the only possible way he could be too late. I wonder where Eliezer would turn his attention if someone (or some group) solved the problems of FAI before him.

Eliezer has written a number of times about how comparing your intelligence and rationality to those around you is pointless (e.g. it's never good enough to be good in comparison, etc.). This philosophy has thus far been directed at comparing oneself to lower levels of cognition - but I don't see why it shouldn't work bottom-up also. Learn from the levels above you, but do not lionize them. As we all aspire to embody the higher levels, I'm sure Jaynes must have also (an old vampire, but not old enough).

Eliezer: I don't think we should worry about our particular positions on the bell curve, or set goals for where we want to be. Don't fret over the possible limitations of your brain; doing so will not change them. Just work hard and try your best, always attempt to advance - push the limitations. Jaynes was struggling against his meat-brain too. It's human - you both surpassed the village idiots and college professors; now the difference in levels becomes more and more negligible with each step approaching the limit. Everybody is working with meat designed by the "idiot god". Push it to the limit, hate the limit, but don't be self-conscious about it.

We all wish we had gotten an earlier start on things. The importance of them is perhaps something you have to learn as you grow.

comment by michael_vassar3 · 2008-09-26T23:00:13.000Z · LW(p) · GW(p)

Eliezer: It seems to me that uncertainty about your abilities is dwarfed by uncertainty about the difficulty of the problem.

Doug S: The median college graduate in a technical field probably would test in the 95th percentile on most IQ tests and at the 98th percentile on tests weighted heavily towards non-vocabulary crystallized g.

comment by Psy-Kosh · 2008-09-27T00:09:34.000Z · LW(p) · GW(p)

Eliezer: Not sure to what extent this helps or answers your questions, but lately I increasingly find that much of my current "cached wisdom" seems to be derived from stuff you've said.

As far as actually finding the next generation or whatever, maybe some people here who know how ought to start some "private school for the gifted" that is explicitly meant to act almost like a Bayes Dojo or whatever, and otherwise train up people in really precise thinking?

Replies from: Raw_Power
comment by Raw_Power · 2010-09-12T17:32:21.305Z · LW(p) · GW(p)

If such an institution is ever built, and it is what it claims to be (remember that all causes want to become cults), then, believe me, I'd send my children there as a top priority. Such a school should not be for geniuses only, but for the development of the teaching of the Methods of Rationality to all of humanity. To set an example to be followed by all who wish not to be left behind.

I remember Aldous Huxley mentioning, in Brave New World Revisited, a similar project by some philanthropist who wished to make children immune to propaganda and manipulation. The authorities shut it down because, they said, it turned the teenagers and young adults too cynical and disobedient to Authority. For example, they were able to calmly analyze the Drill Sergeant's speech as it was given. And obviously, we couldn't have that, could we? I think such a project has considerably greater chances of success nowadays.

comment by Marcello · 2008-09-27T01:22:32.000Z · LW(p) · GW(p)

While Conway has a huge jump on you in mathematical ability, and I'm pretty sure you're not going to catch up to him, rest assured that you are not strictly dumber than Conway in every respect.

You should bear in mind how the statement "Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess" might apply to me. If your guess is literally true, then, because math is my strong-suit, high mathematical ability is the smartest kind of smart that I can detect at all. For me, philosophical ability and the like would blur into "go to college"-land sooner.

In terms of philosophical intuition, you are head and shoulders above Conway. Remember Conway's "Free will theorem" (a brilliant piece of math to be sure, but very misleadingly named). Yet you report never having been confused about free will. My sense of awe at your philosophical intuition has only increased after reading the Overcoming Bias posts. It's doubly impressive to me, because I keep realizing that you are making explicit more of the helpful little nudges you gave me over the course of our work together, and I am impressed at how helpful some of these things were in practice, and at your ability to communicate so clearly things which seemed so elusive. I'm not sure how much of that was native intelligence and how much was starting with good ideas in your mental toolbox, but I could ask the same thing about Conway.

comment by Dave6 · 2008-09-27T02:02:47.000Z · LW(p) · GW(p)

Eliezer: Look on the bright side, you haven't yet relegated yourself to being a mere administrator and occasional sounding board for others' AI research projects! Ego subjugation is a bitch, but it can have minor rewards of self-satisfaction when actions driven by pressure-free buckshot mental synthesis actually bear fruit. I don't envy that it's of no help to you that the luxury of being carefree relies on the knowledge that smarter people are doing the heavy lifting, and today you're at the top tier of that brain chain!

comment by michael_vassar · 2008-09-27T07:02:00.000Z · LW(p) · GW(p)

Maksym: We actually do badly need someone to translate all this OB stuff, though maybe it's desirable to wait for the book. Still, someone should be presenting it. As for convincing smart college students, there are three fairly separate barriers here: those to rationality, those of information, and those to action. I recommend working on the barriers to rationality and action first and in conjunction, belief second, and let people find the info themselves. Politics is the natural subject to frame as rationality. Simply turn every conversation where politics comes up into an opportunity to discourse on OB. Rules of etiquette are weak at Harvey Mudd, so this should be OK.

Denis: In technical fields? If so, I unhesitatingly deny the data. I suggest you look at Gottfredson. Lynn is far from trustworthy, but may also be summarizing. Do you really think that people who can't pull 600 on the SAT Math can do engineering?

comment by WTF · 2008-09-27T15:22:00.000Z · LW(p) · GW(p)

Dude, you honestly make me ill sometimes. You spoke nothing of the circumstances that got these people to where they are or where they came from. There are people just as "sparkly" and some smarter than these people who have not had the opportunity that these people have. You are blinded by your arrogance and are locked in the present time. You are a smart guy, but you would have a lot to gain in building interpersonal wisdom.

comment by Sparkly · 2008-09-27T15:59:00.000Z · LW(p) · GW(p)

The sparkle you describe is meaningless; non-sparkling borderline-autistic types do just as fine work as the most invigoratingly sparkling individuals. I choose to sparkle through my work, in quiet solitude, not through swaying my limbs excitedly, motor-mouthing like a sports commentator on amphs.

comment by Ali · 2008-09-27T16:09:00.000Z · LW(p) · GW(p)

It's a benefit for me to read this post having not read your others, because I can give you an untainted view of it. You are too concerned with intelligence. As long as you stay in this state, you are unusable, and you pass up opportunities to become usable.
Snap out of it. Accept that there are more intelligent people than you, and that they are not flailing; they just get on with it.

comment by Laura B (Lara_Foster) · 2008-09-27T17:51:00.000Z · LW(p) · GW(p)

Again, I have difficulty understanding why so many people place such a high value on 'intelligence' for its own sake, as opposed to as a means to an end. If Eliezer is worried that he does not have enough mathematical intelligence to save the universe from someone else's misdesigned AI, then this is indeed a problem for him, but only because the universe will not be saved. If someone else saves the universe instead, Eliezer should not mind, and should go back to writing sci-fi novels. Why should Eliezer's ego cry at the thought of being upstaged? He should want that to happen if he's such an altruist.

I don't really give a damn where my 'intelligence' falls on some scale, so long as I have enough of it to accomplish those things I find satisfying and important TO ME. And if I don't, well, hopefully I have enough savvy to get others who do to help me out of a difficult situation. Hopefully Eliezer can get the help he needs with fAI (if such help even exists and such a problem is solvable).

Also, to those who care about intelligence for its own sake, does the absolute horsepower matter to you, or only your abilities relative to others? I.e., would you be satisfied if you were considered the smartest person in the world by whatever scale, or would that still not be enough because you were not omniscient?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-27T17:58:00.000Z · LW(p) · GW(p)

Of course I want there to be someone smarter than me to take over, from an altruistic perspective. Or even from just a selfish perspective of being scared, wanting a vacation, and feeling a bit isolated.

And of course if that actually happened, it would be a severe blow to my ego.

And so long as I can do the expected-utility-maximizing thing and invest the appropriate amount of effort into preparing for the possibility without betting the whole farm on it, I have no intention of hacking at my emotions on either score.

comment by Michael_Hartl · 2008-09-27T18:29:00.000Z · LW(p) · GW(p)

I know how you feel, in a couple ways. My high-school guidance counselor looked at my middle school transcript and told me I might realistically aspire to go to a UC school (as opposed to a school in the Cal State system). (I ended up going to Harvard and Caltech.) On the other hand, the year I finished my Ph.D. (at the age of 29) one of my college acquaintances, a brilliant mathematician, became one of the youngest full professors in the history of Princeton University, and when my Ph.D. advisor was 29 he had already been a professor at Caltech for several years (after graduating top of his Caltech class, finishing a Ph.D. in three years at Cambridge on a Marshall Scholarship, and doing a postdoc at the Institute for Advanced Study).

Though I know I'm not (alas) at the highest level, I feel fortunate to be smart enough to know how smart I'm not.

N.B. The introduction contains a small non sequitur unless you happen to know the book's author. Here's a quick edit:

I once lent Xiaoguang "Mike" Li my copy of Probability Theory: The Logic of Science by E.T. Jaynes.

comment by Anonymous_Rex · 2008-09-27T18:32:00.000Z · LW(p) · GW(p)

It's a general rule, I've observed, that you can't discriminate between levels too far above your own.
Although I agree with this in general, it seems that there are a few specific counterexamples. For example, it seems that people with very low ability in sports actually can discriminate ability from the local high school level to the international stage.

Do other people agree? If so, what do you propose distinguishes between intelligence/mathematical ability and athletic ability?

comment by Caledonian2 · 2008-09-27T19:06:00.000Z · LW(p) · GW(p)

If so, what do you propose distinguishes between intelligence/mathematical ability and athletic ability?
To evaluate athletic ability, you use your judgment. What can you use to evaluate your judgment?

It is possible for a person to produce an accurate evaluation of a subset of their own intellectual skills, but certain skills cannot be evaluated, because presumptions about those skills are required for the evaluation to take place. You should not ask questions about subjects in which you presume you already know the answers, and you cannot ask questions about subjects where answers must be presumed in order to be able to ask at all.

comment by komponisto2 · 2008-09-27T19:33:00.000Z · LW(p) · GW(p)

Lara, I don't think they value it "for its own sake" as opposed to as a means to an end; rather, they see it as a necessary condition for achieving their ends, and are worried they don't have what it takes. Nothing but an anxiety trip.

And of course, there's also the ego thing -- when people build superiority over others into their self-image. This is counterproductive, of course. When someone else demonstrates that they're "smarter" than you by offering unexpected insight, you don't fatalistically wallow in jealous misery; you listen to the content of what they say, in the hope of becoming as smart as they are.

Eliezer of all people ought to realize this (actually I suspect he does).

FWIW, I've met both Eliezer and John Conway, and have spent approximately the same total amount of time with both of them (on the order of 10 hours). I don't know which of them is smarter. Yet I suspect neither is too far above my own level for me to be able to e.g. benefit from listening to a conversation between them.

comment by Laura B (Lara_Foster) · 2008-09-27T20:30:00.000Z · LW(p) · GW(p)

Eliezer, Komponisto,

I understand the anxiety issues of 'Do I have what it takes to accomplish this...'

I don't understand why the existence of someone else who can would damage Eliezer's ego. I can observe that many other people's sense of self is violated if they find out that someone else is better at something they thought they were the best at-- the football champion at HS losing their position at college, etc. However, in order for this to occur, the person needs to 1) in fact misjudge their relative superiority to others, and 2) value the superiority for its own sake.

Now, Eliezer might take the discovery of a better rationalist/fAI designer as proof that he misjudged his relative superiority -- but unless he thinks his superiority is itself valuable, he should not be bothered by it. His own actual intelligence, after all, will not have changed, only the state of his knowledge of others' intelligence relative to his own.

Eliezer must enjoy thinking he is superior for the loss of this status to bother his 'ego'.

Though I suppose one could argue that this is a natural human quality, and Eliezer would need to be superhuman or lying to say otherwise.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-27T21:02:00.000Z · LW(p) · GW(p)

I have no idea if it's a natural human quality. It's surely one of my qualities. It's not that I would permit my mind to think verbal thoughts like "How good it is to be above others." But there's a zest in being the best. It feels good to complete a difficult race and it feels good to win a gold medal; they are separate, different good feelings. I can imagine people who would only care about having completed the challenge, but they wouldn't be me.

Since my mind doesn't want whatever I choose it to want, I accept that both desires are a part of me and that both desires keep me motivated to continue studying. Though even the desire to solve hard problems for yourself is not without its dangers.

Doesn't fit the stereotype of Deep Wisdom, I know. I would be prouder if I was Gandalf, because that would be, you know, cool. But you see, this isn't about my pride.

comment by pdf23ds · 2008-09-27T21:38:00.000Z · LW(p) · GW(p)

I can imagine people who would only care about having completed the challenge, but they wouldn't be me.

I'm not sure there are any people like this who are capable of occasionally winning. OTOH, the prospect of never winning might force someone to rationalize themselves into this position.

comment by Prot · 2008-09-27T21:57:00.000Z · LW(p) · GW(p)

The proof is in the math and/or in the protopudding, is it not? There are people/groups who already have either or both. If you have neither, what's your sense of relative achievement/skill/IQ based on?

What (math and/or prototype) do you have? If none, what do you plan to have, and when? It seems you'd have to blaze past those who already have their stuff out in the real world behaving ever more AGI-ishly by the day, to meet your criteria for success. A tall order, to be sure.

comment by Ben_Goertzel · 2008-09-28T21:55:00.000Z · LW(p) · GW(p)

Someone else wrote:

"This is a youthful blog with youthful worries. From the vantage point of age worrying about intelligence seems like a waste of time and unanswerable to boot."

and I find this observation insightful, and even a bit understated.

Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability.

Obviously, one reason these sorts of questions about comparative general intelligence are unanswerable is that "general intelligence" is not really a rigorously defined concept -- as you well know! And the rigorous definitions that have been proposed (e.g. in Legg and Hutter's writing, or my earlier writings, etc.) are basically nonmeasurable in practice -- they're only crudely approximable, and the margin of error of these approximations is almost surely large enough to blur whatever distinctions exist between various highly clever humans.

I have no doubt that you're extremely smart, and especially talented in some particular areas (such as mathematics and writing, to give a nonexhaustive list) ... and that you're capable of accomplishing great things intellectually.

As an aside, the notion that Conway, or von Neumann or any other historical math figure is "more intelligent than Eliezer along all dimensions" seems silly to me ... I'm sure they weren't, under any reasonable definition of "dimensions" in this context.

To take a well-worn example: from my study of the historical record, it seems clear that Einstein and Godel were both less transparently, obviously clever than von Neumann. My guess is that von Neumann would have scored higher on IQ tests than either of those others, because he was incredibly quick-minded and fond of puzzle-type problems. However, obviously there were relevant dimensions along which both Einstein and Godel were "smarter" than von Neumann; and they pursued research paths in which these dimensions were highly relevant.

"General intelligence" has more and more meaning as one deals with more and more powerful computational systems. For humans it's meaningful but not amazingly, dramatically meaningful ... what's predictive of human achievement is almost surely a complex mixture of human general intelligence with human specialized intelligence in achievement-relevant domains.

Pragmatically separating general from specialized intelligence in oneself or other humans is a hard problem, and not really a terribly useful thing to try to do.

Achieving great things seems always to be a mixture of general intelligence, specialized intelligence, wise choice of the right problems to work on, and personality properties like persistence ...

-- Ben G

comment by pdf23ds · 2008-09-28T22:16:00.000Z · LW(p) · GW(p)

Achieving great things seems always to be a mixture of general intelligence, specialized intelligence, wise choice of the right problems to work on, and personality properties like persistence ...

With a pinch of being in the right place at the right time, bake at 350 for 10-30 years.

comment by Hopefully_Anonymous3 · 2008-09-28T22:19:00.000Z · LW(p) · GW(p)

Ben,
I kind of disagree with you. First, what we call "general intelligence" is itself a form of specialized intelligence: specialized in optimizing for successful outcomes in real time in our apparent reality. So the mix you recommend for "achieving great things" would itself be "general intelligence", not general intelligence plus something else (other than luck).

Since most people who "achieve great things" seem to me to be playing life at least in part as a poker game (they don't seem to put all their cards out on the table), I think outcomes may be a better measure than propaganda. "Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability." I'm not sure that comes from wisdom, rather than from the rationally adjusted propaganda of an older person (look at my status-enabled institutional power to achieve) contrasted with that of a younger person (look at my superior capabilities, with a brain at its physical prime).

comment by AnneC · 2008-09-29T01:46:00.000Z · LW(p) · GW(p)

Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability.

This definitely happened to me. Between the ages of about 10 and 14, I was utterly obsessed with finding out what my IQ was. Somehow, somewhere along the way, I'd picked up the notion that Smartness in quantity was the most important thing a person could possibly have.

And it drove me frankly batty not knowing how much Smartness I had, because (a) I was insecure and felt like I needed to find out I had a "high enough" number in order to permit myself any sense of self-worth, and (b) I had an idea fixed in my mind that only "geniuses" with IQs of 150 or above could have any hope of addressing any of the interesting questions and topics that dominated my thoughts as a geeky little kid: faster-than-light travel, Grand Unified Theories, etc.

I spent a lot of time trying to find any papers/reports/test scores my parents might be hiding away, hoping that I'd be able to discover through doing this some idea of the quantitative value stamp I was convinced must be on my brain somewhere (though not directly viewable by me).

I didn't actually find any of these papers until I was in my late teens, and by then I found with some surprise that I didn't care all that much what they said. At some point between the ages of 14 and 17 I'd managed to get over my IQ obsession and move toward a different brain-related obsession (one considerably less worry-inducing): that of how brains, and in particular mine, worked at all. And in ceasing to be obsessed with quantitative test-based measurements, lo and behold, I found it far easier to actually think about things and just plain learn.

I do now know what my age-4 Wechsler score was, and it wasn't 150. Not even close. I took another Wechsler (the adult scale) in college, and while that score ended up being quite a bit higher than my age-4 score, it was still lower than I'd originally hoped it would be. But it didn't matter to me in the least from an emotional standpoint by then, because I'd already managed to accomplish things (like getting an A in calculus) that I'd have considered the province of people with far higher IQ scores than I actually had. Not to mention the fact that when I looked at my subtest scores, they were all over the map -- I had a higher than average Block Design, but lower than average Picture Arrangement, for instance.

At this point I tend to see IQ (at least as measured on tests) as being very limited in terms of what information it actually tells you about what someone is capable of doing. E.g., I don't think IQ scores can definitively tell you when someone is going to "hit a wall", so to speak, in terms of what mathematical theorem they will absolutely get stuck on when they encounter it (or what engineering problem they might be able to solve, etc.).

It almost seems like some of these posts are suggesting a desire for much greater predictive ability than any test or ten-minute impression could possibly actually reveal in something as complex and feedback-sensitive as a human individual. And while I'd like as much as anyone for the world and everyone in it not to be destroyed (whether in one great cataclysm or a gradual tragic fade-out), I've come to terms with the fact that, as corny as it sounds, all we can do is our best, and we must do this in the utter absence of perfect knowledge regarding the limits of our individual or collective capacity.

comment by pazqzfar · 2008-10-04T10:16:00.000Z · LW(p) · GW(p)

Eliezer, don't think to yourself that you only have until you are 40. As somebody else noted and you didn't acknowledge, Marcello was not yet born when Conway passed 40. You mentioned your father, and I don't know the specifics, but surely you know that plenty of people have done great work, sometimes their best, past 40, and that with every passing year, due to advances in health, medicine, etc., "youth" extends further and further into our lives.

And as another poster mentioned, I have almost no doubt that Von Neumann would have blown Einstein (possibly Godel) out of the water in terms of ability, and at the age of 10 or 11, William James Sidis would have smoked them all (at comparable ages). Even if there is such a thing as general intelligence and even if one has less of it than somebody else, it is still very much possible that one is vastly more talented in particular areas.

Einstein probably didn't have an IQ in the realm of Von Neumann or Newton or Sidis, but what he did have was an incredible physical intuition and the ability to think deeply about the philosophical underpinnings of physics in a way that few others could (or can). As others have said, even if Conway actually is smarter (which you should not just blindly accept, because we all have cognitive biases that might prevent us from making an accurate judgment), it doesn't mean that you don't have profound gifts that will end up being the deciding factor in solving FAI or whatever else you set your mind to.

Looking at Von Neumann at age 24 and Einstein at age 24, NOBODY would have thought it remotely possible that Einstein would be the one who would achieve enough in his field to be considered a successor to Newton. Von Neumann, though he accomplished more than 100 "mere" geniuses could ever hope to accomplish, cannot be said to be a successor to (or in the same league as) a Gauss or somebody of similar stature.

Since you're so fond of quotes, here's one for you:

If you can solve your problem, then what is the need of worrying? If you cannot solve it, then what is the use of worrying?

- Shantideva (Indian Buddhist philosopher)

Just get on with it. Self-doubt and endless self-analysis do not contribute to the end goal.

comment by retired_urologist · 2008-10-04T15:09:00.000Z · LW(p) · GW(p)

Supporting Ben Goertzel's comment:

Michael Shermer revised his book, Why People Believe Weird Things, to contain a chapter called “Why Smart People Believe Weird Things”. In it, he quotes studies by Hudson, Getzels, and Jackson showing that “creativity and intelligence are relatively orthogonal (i.e., unrelated statistically) at high levels of intelligence. Intuitively, it seems like the more intelligent people are the more creative they will be. In fact, in almost any profession significantly affected by intelligence, once you are at a certain level among the population of practitioners (and that level appears to be an IQ score of about 125), there is no difference in intelligence between the most successful and the average in that profession. At that point other variables, independent of intelligence, take over, such as creativity, or achievement motivation and the drive to succeed.”

comment by michael_vassar · 2008-10-04T15:41:00.000Z · LW(p) · GW(p)

Actually RU, that's a good approximation for many/most professions, but not all that good an approximation.
http://www.vanderbilt.edu/Peabody/SMPY/DoingPsychScience2006.pdf
gives more detail, showing a significant marginal impact from, at the least, 99.99th percentile math achievement at age 12 relative to merely 99.8th percentile math achievement at age 12.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-04T15:47:00.000Z · LW(p) · GW(p)

Is this study talking about Nobel Prize winners - or better yet, Fields Medal-winning mathematicians? Or just authors or something? I'm about ready to say "I defy the data; what about von Neumann?" Maybe there are people who can achieve through diligence what others achieve by genius, but to say that genius doesn't help at all... I defy the data.

(If you told me that IQ didn't make a difference past 140, I'd be quite willing to believe that IQ tests don't work past 140. Richard Feynman's measured IQ was 137, which as John K Clark observed, says more about IQ tests than it does about Feynman.)

comment by Caledonian2 · 2008-10-04T16:00:00.000Z · LW(p) · GW(p)

Feynman's measured IQ was 123, not 137. And we already know that IQ tests do not measure vitally important aspects of cognition -- in Feynman's case especially, he was quite strong in those aspects while being weak in the aspects measured. (At least, I know that. What the rest of you know is less certain.)

This is one of the primary reasons why people who think we can use IQ scores as a representation for the higher-level aspects we can't measure well (because they're supposedly correlated with IQ) are wrong. (I'm looking at you, Vassar.)

IQ tests do not measure synthetic capacity, imagination, creative potential, or self-restraint / the ability to inhibit low-level drives and impulses. They measure only the ability to complete certain atomic functions in a limited subset of cognitive tasks. That makes them useful -- extremely so -- but not definitive. Not even close.

Replies from: gwern
comment by gwern · 2011-01-06T21:29:09.069Z · LW(p) · GW(p)

Feynman's measured IQ was 123, not 137. And we already know that IQ tests do not measure vitally important aspects of cognition -- in Feynman's case especially, he was quite strong in those aspects while being weak in the aspects measured. (At least, I know that. What the rest of you know is less certain.)

You don't even know that. This sort of thing is why no one here likes you. Here, let me provide some more details about that IQ score you put such weight on as a criticism. To quote a previous comment of mine on this topic:

  • Feynman was younger than 15 when he took it, and very near this factoid in Gleick's bio, he recounts Feynman asking about very basic algebra (2^x=4) and wondering why anyone found it hard
  • the IQ is mentioned immediately before the section on 'grammar school', or middle school, implying that the 'school IQ test' was done well before he entered high school, putting him at much younger than 15. (15 is important because Feynman had mastered calculus by age 15, Gleick says, so he wouldn't be asking his father why algebra is useful at age >15.)
  • Given that Feynman was born in 1918, this implies the IQ test was done around 1930 or earlier. Given that it was done by the New York City school district, this implies also that it was one of the 'ratio' based IQ tests - utterly outdated and incorrect by modern standards.
  • Finally, it's well known that IQ tests are very unreliable in childhood; kids can easily bounce around compared to their stable adult scores.

So, it was a bad test, which even under ideal circumstances is unreliable & prone to error, and administered in a mass fashion and likely not by a genuine psychometrician.

Replies from: jimrandomh, Will_Sawin
comment by jimrandomh · 2011-01-06T21:33:00.934Z · LW(p) · GW(p)

This seems awfully hostile for a reply to a post that's more than two years old.

Replies from: thomblake, Jack, gwern
comment by thomblake · 2011-01-06T21:35:39.258Z · LW(p) · GW(p)

and originally posted to a different site

comment by Jack · 2011-01-06T21:36:29.791Z · LW(p) · GW(p)

This sort of thing is why no one here likes you.

Hahaha.

Replies from: gwern
comment by gwern · 2011-01-06T22:26:34.123Z · LW(p) · GW(p)

I would hope that incredibly sloppy thinking, manifested in such things as posting confidently as a knockdown argument a proposition that is anything but and can be revealed as such with just a tiny understanding of psychometrics, is why Caledonian was so often downvoted and criticized by OB/LW - and not because we didn't like his haircut.

Replies from: jimrandomh
comment by jimrandomh · 2011-01-06T22:34:55.051Z · LW(p) · GW(p)

OB didn't have downvoting.

Replies from: gwern
comment by gwern · 2011-01-06T22:41:27.949Z · LW(p) · GW(p)

LW, fortunately, does. And I think Caledonian ultimately wound up being banned, which is a rather extreme downvote from my point of view.

Replies from: ata
comment by ata · 2011-01-06T23:10:04.390Z · LW(p) · GW(p)

I don't think he was ever banned (though his comments were sometimes edited and sometimes deleted). In fact, he stuck around on LW for a while, under the username "Annoyance".

comment by gwern · 2011-01-06T22:25:01.248Z · LW(p) · GW(p)

Stupidity is stupidity regardless of whether it was posted 2 seconds or 2 years ago. Funnily enough, people (like me) are still reading old posts...

comment by Will_Sawin · 2011-01-08T01:20:51.493Z · LW(p) · GW(p)

To get more upvotes and fewer downvotes, from me at least, continue to post evidence-based criticisms of others' faulty points, without unnecessary vitriol.

Replies from: gwern
comment by gwern · 2011-01-08T01:22:34.714Z · LW(p) · GW(p)

If 1 line of vitriol followed by >20 lines of 'evidence-based criticisms' is still wrong, then I'm not sure I want to be right.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-08T01:42:06.947Z · LW(p) · GW(p)

Considering the strong evidence that lesswrong isn't nice enough, unnecessary vitriol should always be removed.

Whether a comment with good parts and bad parts (and more good than bad) should get upvotes or downvotes is a complicated question. If votes serve as a signal, probably downvotes, but if they serve as advice on what to read, upvotes.

Replies from: shokwave
comment by shokwave · 2011-01-08T05:20:59.200Z · LW(p) · GW(p)

Considering the strong evidence that lesswrong isn't nice enough, unnecessary vitriol should always be removed.

I'm curious! I updated my views on lesswrong's niceness (based on the top-level post about the issue and giving extra weight to the comments section because this is a case about the lesswrong community) to "I don't have a fucking clue". And if at all possible I would dearly like to have a clue.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-08T12:55:42.515Z · LW(p) · GW(p)

The evidence I saw is that people left because it wasn't nice enough. No one seemed to think it was too nice, and some people saw drawbacks with increased niceness, but this doesn't seem like a case where those drawbacks are significant.

comment by pdf23ds · 2008-10-04T16:43:00.000Z · LW(p) · GW(p)

There's another aspect of the shortcomings of IQ tests that people might not be aware of. Cognition is quite flexible, and abstract problem-solving ability can be met by many combinations of underlying, modular capacities. A person lacking in certain respects can make up for the lack, at the price, perhaps, of thinking a little more slowly.

Take me for an example. On the WISC-III IQ test, my combined score is 145. There are two composite scores that the combined score is made up of, the verbal score (I got 155, the maximum possible on that test) and the performance score (I got 125). There are also a number of different individual capacity scores. On most, I scored above the 95 percentile. On two or three, I scored right in the middle, and in one (visual short term memory) I scored in the first percentile.

Let me repeat that. I scored in the first percentile for the capacity to keep visual information in my short-term memory. (I scored in the 97th for aural short term memory, and 99.9th for linguistic.) How does that change how I solve problems, how I think about the world? Well, I perform many tasks about twice as slowly (but just as accurately) as others with my composite IQ. I have to use other circuits than most people do to solve the same problems, circuits that aren't as efficient. Circuits that may even work slightly differently, giving me a different perspective on problems, which may be superior or inferior, I don't know (likely depending on the individual problem). I strongly suspect that this is a large part of the cause of my intense dislike of school.

(BTW, people with a large difference between performance and verbal IQ are classified as having non-verbal learning disorder. That's right, even really smart people can have learning disorders.)

IQ is not a single number. Even IQ testing recognizes a large part of the complexity of human intelligence. It's not the psychologists who make the mistake of reducing it to a single number.

Why did I write this long comment on a dead thread? Dunno.

Replies from: Kenny
comment by Kenny · 2013-06-01T02:12:57.440Z · LW(p) · GW(p)

This thread lives on! And you're partially responsible. Thanks!

comment by frelkins · 2008-10-04T17:17:00.000Z · LW(p) · GW(p)

Gentlemen - Let me propose that the heart of serious intellectual achievement is synthesis, creativity, simplicity.

These are factors that actually increase with age and are not "IQ" or "g" driven. In fact I believe Edward de Bono argued that creativity drops at IQ 125 or so: maybe because people begin to fall into an "expert trap," where they have to maintain their previous work and expert status more than anything else.

Creativity need not decline with age at all - if you can avoid common habit errors.

My objection to Vassar is just that all these "tests" are highly flawed and biased - they consistently disfavor certain people and favor others. They just do, sorry, and this alone invalidates them or at least diminishes their usefulness.

My other comment to you all has to do with Feynman. I once asked another member of the Project, who is a famous emeritus experimentalist, about him. He told me that what distinguished Feynman was his wit and curiosity about things that others didn't think were "on the critical path," so to speak. Wit and curiosity are completely untestable, but if you look at real achievers I believe you'll find these qualities extremely important.

The courage to appear silly to avoid the expert trap - wit - careful avoidance of habit error - constant search for bias - doubting intuitions - deliberately slowing down to allow more time for divergent thought esp. if you are overclocked - synthesis - simplicity - tenacity - the person with these 9 qualities will be a thinker for the ages.

Replies from: Oxfordrunner
comment by Oxfordrunner · 2013-01-05T17:27:54.095Z · LW(p) · GW(p)

Why make the assumption at all, and much less so blatantly, that women are not reading your messages or posting on this site?

comment by pdf23ds · 2008-10-04T17:39:00.000Z · LW(p) · GW(p)

I think you're on the right path, frelkins, but this?

all these "tests" are highly flawed and biased - they consistently disfavor certain people and favor others.

How does the latter follow at all? If we had a test that measures everything you think constitutes real intelligence, it would consistently disfavor certain people and favor others. It would disfavor stupid people and favor smart people. That's the point of an intelligence test.

comment by Ian_C. · 2008-10-04T17:39:00.000Z · LW(p) · GW(p)

I don't believe IQ tests measure everything. There's a certain feeling when being creative, and when completing these tests I have not felt it, so I don't think they're measuring it.

Also, I am not sure intelligence is general. At the level of ordinary life it certainly is, but geniuses are always geniuses at something, e.g. maths, physics, composing. Why aren't they geniuses at everything?

comment by Douglas_Knight2 · 2008-10-04T19:06:00.000Z · LW(p) · GW(p)

Does anyone have a reputable source for Feynman's 137? google makes it look very concentrated in this group, probably the result of a single confabulation.

Sykes and Gleick's biographies both give 12x. Sykes quotes Feynman's sister remembering sneaking into the records as a child. This seems important to me: Feynman didn't just fabricate the 12x.

comment by Danila_Medvedev · 2008-10-12T07:20:00.000Z · LW(p) · GW(p)

Math smarts are not the most important thing. Basic reasoning skills are vital (even if they are based on heuristics that are sometimes wrong), management skills are extremely important, intelligence augmentation skills are a must, touch-typing is very useful, etc.

Overall you should think not in terms of competitiveness (whether you are smarter than everybody else), but in terms of co-operation (how you can complement others, how they can contribute their skills to complement yours).

And for the record, I don't think you are the smartest person I know (although you are very smart). I suspect that I may have a better skillset than you do. :)

comment by Jonii · 2009-07-27T07:56:09.474Z · LW(p) · GW(p)

Since this is now kinda on-topic... I don't think Eliezer Yudkowsky is considerably more intelligent than I am. I'm aware of the Dunning-Kruger effect, but the interesting part is that I simply don't find any way to overcome this. I'm fairly intelligent, but since people around here regard my barely-MENSA (probably not even that) level of IQ as a minimum requirement to even read this blog, the situation I'm in is fairly interesting. I see repeated claims of super-intelligence, but I can see just someone who has had a few more years to hone his skills and who has wasted fewer years on doing pointless things.

So, I'm kinda curious: What is it about Eliezer Yudkowsky that makes everyone look up to him? I see some fairly interesting all-around posts, but is it that you see something more? I think there is a possibility here that the reason you admire Yudkowsky and I don't is partially because he Seems Deep (in the sense that he makes sense immediately but seems novel), while I spent my earlier life discovering much the same stuff alone. It lacks novelty (to me), but adds details and strengths which I attribute to experience, and thus I judge Eliezer less "deep", and, consequently, less superhuman. This possibility gains some credibility from the fact that in my own little circles, I have pretty much the same sort of reputation as Eliezer has here (and I think this is surprisingly much about personality, and thus I assume it's surprisingly rare for people here to have that reputation), and before I knew of Eliezer, I had plans of becoming something much like he is now.

I presented two possibilities: Should I accept that I'm just incapable of distinguishing a level so much above my own, or should I defy public opinion and regard Eliezer as "not that smart", because it's more about personality and "seeming deep" than about a real difference in mental machinery? However, a third option exists: I haven't read the stuff that makes everyone admire Eliezer so.

I'm really interested in manifestations of intelligence, so this issue is of great importance to me. Especially if it is about Dunning-Kruger, I want to understand how to overcome that. Maybe it's just that I skip over those "technical-seeming" parts that actually demonstrate amazing intellectual stunts, and I should make better mental notes every time I'm forced to pass over some phrase without completely grasping the meaning.

Replies from: cousin_it
comment by cousin_it · 2009-07-27T09:42:32.990Z · LW(p) · GW(p)

On one hand, Eliezer writes extremely good explanations. I'm learning from his style a lot.

On the other hand, many people have pointed out that he doesn't publish novel rigorous results, which kinda detracts from the aura.

On the third hand, he often finds and corrects non-obvious mathematical mistakes made by other people, including me, and he's turned out right every time that I know of.

On the fourth hand, I've seen multiple cases where he made math mistakes of his own, and have discovered a couple of those myself. But that could be attributed to the fact that he publishes so much, and his error frequency is certainly many times lower than mine.

On the fifth hand, he has published novel non-rigorous arguments on real world topics that I don't completely agree with but find pretty important. Biggest of them is the idea of Friendly AI.

The weighting coefficients you give to those considerations are, of course, up to you.

ETA: on an unrelated topic, would you like to write a post on Go? CronoDAS has just turned our attention to something interesting.

Replies from: Eliezer_Yudkowsky, SamAdams, AndyCossyleon
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-27T15:15:04.686Z · LW(p) · GW(p)

If that had been a novel rigorous result it would not have been wrong. It was just a bit of eyeballing mathematics, which I've done in any number of places.

Replies from: cousin_it
comment by cousin_it · 2009-07-27T15:18:35.950Z · LW(p) · GW(p)

Edited to amend.

comment by SamAdams · 2009-07-28T02:13:56.298Z · LW(p) · GW(p)

"On one hand, Eliezer writes extremely good explanations. I'm learning from his style a lot."

Yeah, but they are rather verbose; he tends to use 5 words when 2 would do.

"On the other hand, many people have pointed out that he doesn't publish novel rigorous results, which kinda detracts from the aura."

If you want to be in science this is a big issue, unless you're trying to pull a Wolfram, and we all know how that turned out.

"On the third hand, he often finds and corrects non-obvious mathematical mistakes made by other people, including me, and he's turned out right every time that I know of."

But the math on this site, what little there is, tends to be toy problems and very simple. Let's see him find and correct a mistake in some higher-order fluid mechanics equations. I would personally like to see him solve a non-trivial second-order non-linear partial differential equation.

"On the fourth hand, I've seen multiple cases where he made math mistakes of his own, and have discovered a couple of those myself. But that could be attributed to the fact that he publishes so much, and his error frequency is certainly many times lower than mine."

That's horrifying. If you're going to do science you have to control your error rate, and that is where peer review comes in. (I recently submitted a paper where I was sloppy on some rounding of some of my results and I got slammed for it; science is all about precision and doing it right.) If you don't do the peer review then you may think your idea is good, when if you actually had someone else look at it you'd see it was total trash.

"On the fifth hand, he has published novel non-rigorous arguments on real world topics that I don't completely agree with but find pretty important. Biggest of them is the idea of Friendly AI."

But for science and AI this is essentially meaningless, since if your goal is to make an FAI then math and rigor are necessary. The ability to write non-technical papers arguing for some idea that is technical is trivial. The challenge is getting the technical details right. This is where I would like to see Eliezer submit some of his work on decision theory and show that he is actually making a theory that is properly rigorous.

I think the worst thing would be if people here just wait for Eliezer and he shows up at the end of 10 years with an extremely long non-technical paper that gets us no closer to a real FAI.

But those are just my thoughts.

Replies from: byrnema
comment by byrnema · 2009-07-28T04:40:02.281Z · LW(p) · GW(p)

While awesome math ability is a great thing to have, it would only complement whatever skills Eliezer needs to succeed in his AI goals. If Eliezer finds that he lacks the math skills at a certain point to develop some new piece of mathematics, he can find a math collaborator that will be thrilled about having a novel problem to work on.

I'm also not concerned about error rate. You write that the challenge is "getting the technical details right" -- this is simply not true. It's the main, big, mostly correct ideas we need to progress in science, not meticulousness.

(I recently submitted a paper where I was sloppy on some rounding of some of my results and I got slammed for it, science is all about precision and doing it right)

Publication is all about precision and doing it right, and it should be. But don't you feel like the science was done before the more careful rounding?

comment by AndyCossyleon · 2010-08-06T20:03:31.162Z · LW(p) · GW(p)

That's a lot of hands.

comment by xamdam · 2010-04-19T15:28:02.141Z · LW(p) · GW(p)

A friend of mine thought this was relevant: "Mediocrity knows nothing higher than itself, but talent instantly recognizes genius." - Conan Doyle

Replies from: private_messaging
comment by private_messaging · 2012-06-27T07:52:03.804Z · LW(p) · GW(p)

But then there's also the technically mediocre who are talented signalers.

comment by rabidchicken · 2011-03-16T01:41:10.195Z · LW(p) · GW(p)

I find the idea that there are a lot of more intelligent people in the world than me comforting, especially in my chosen fields. Not because I feel this gives me an excuse to slack off and let them do the hard work, but because competition seems to drive me and keep me happier than anything else. Since finding lesswrong and related sites where people discuss AI, programming, and rationality, my efforts have improved considerably. I am far from competing with most of the people here, particularly you, but at least I have mental patterns I can model to improve.

I know people with greater mental horsepower than you, but none of them ever persisted at any problems that are hard enough to test the limits of their abilities.

comment by buybuydandavis · 2011-10-31T05:02:15.767Z · LW(p) · GW(p)

I doubt that Jaynes became Jaynes by aspiring to a level. Too bad we can't ask him.

Don't despair of surpassing Jaynes. He, and a great many others, have given you a leg up that Jaynes never had. People seem formidable because they're practiced in mental kung fu that you don't know. Darwin is remembered for an idea you can teach an 8 year old today.

comment by [deleted] · 2012-06-05T20:01:54.747Z · LW(p) · GW(p)

I am blogging here for many reasons, but a major one is the vision of some younger mind reading these words and zipping off past me.

Thank you.

comment by TheOtherDave · 2013-01-20T18:37:15.227Z · LW(p) · GW(p)

I suspect you and Luke do not share a referent for "better philosophy" here. In particular, I doubt either Luke or Eliezer would agree that the ability to write clearly, or to analyze and formulate arguments for purposes of compellingly engaging with existing arguments in the tradition of analytic philosophy, is the rare skill that Luke is talking about.

Trying to have a conversation about how hard it is to find an X without common referents for X is not likely to lead anywhere productive.

comment by lukeprog · 2013-01-20T19:16:45.621Z · LW(p) · GW(p)

You're right, I should say more about what I mean by "Eliezer-level philosophical ability." Clearly, I don't mean "writing clarity," as many of my favorite analytic philosophers write more clearly than Eliezer does.

It'll take me some time to prepare that explanation. For now, let me show some support for your comment by linking to another example of Eliezer being corrected by a professional philosopher.

Replies from: protest_boy
comment by protest_boy · 2014-06-20T08:09:54.548Z · LW(p) · GW(p)

Do you have anything quick to add about what you mean by "Eliezer-level philosophical ability"?

comment by TheOtherDave · 2013-01-20T23:31:23.629Z · LW(p) · GW(p)

I'm not claiming that clarity isn't a benefit, and as far as I can tell nobody else is either.
I agree that it's not hard to write "someone who can do philosophy well in the LessWrongian style".
And sometimes one person can miscommunicate all by themselves.

comment by MugaSofer · 2013-01-21T13:34:25.097Z · LW(p) · GW(p)

Downvoted because:

  • In my experience of philosophy, there are very few philosophers at EY's level.
  • You provided no evidence for your claims; and when you edited your comment ...
  • ... and the link you gave consists of him clarifying his terminology, and thanking them for interpreting his unclear wording charitably.

Also because it irritates me that this site is scattered with comments at anything from -3 to +15 (not exact figures) that criticize cryonics/ASI/other things lots of us believe in, LW policies, or EY, and then talk about how they're going to get downvoted into oblivion for speaking out against the consensus.

[Edited for formatting.]

Replies from: Kindly
comment by Kindly · 2013-01-21T13:58:28.612Z · LW(p) · GW(p)

In my experience of philosophy, there are very few philosophers at EY's level.

Can you qualify that by describing your experience with philosophers? E.g. "There are very few philosophers at EY's level, and I've met Philosopher John Conway." Whoever Philosopher John Conway turns out to be.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-21T15:04:52.585Z · LW(p) · GW(p)

Sadly, I have not met Philosopher John Conway - although I have met (well, read) more philosophy than the average bear, due to a childhood interest that's continued (mostly autodidactically) throughout my life. And I was always a pretty heavy reader, so it's fairly extensive. OTOH, I'm probably more critical of philosophers than the average bear, regularly exclaiming "but that's nonsense, X!", so there's that. Pretty much the same regarding theology, which was/is a nasty surprise for my family. And a few unrelated other topics. I would feel comfortable debating with the average author on such topics, not sure how that compares to the average graduate or whatever.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-20T09:19:13.352Z · LW(p) · GW(p)

John Conway is principally a mathematician.

Aren't you still in high school?

Replies from: MugaSofer
comment by MugaSofer · 2014-06-23T13:32:59.011Z · LW(p) · GW(p)

As I said, I can only comment on my general impression of the field. I'm not an expert - in anything - and I don't move in philosophy circles.

Worse still, because of my experiences with e.g. theology, I may be unusually biased towards criticising philosophers without properly engaging with them. "My experience of philosophy" is more valuable than a high-schooler's, but it's probably nothing special as experience-of-philosophy goes.

Nevertheless, as I said, in my experience of philosophy there are few philosophers at EY's level.

I think that's relevant, and hey, if someone with better evidence disagrees then all the more data for me. But the grandparent didn't provide evidence; it used underhanded tactics to deflect away from it instead.


John Conway ... is awesome and much higher-level than me, but I'm not sure how that relates to the above discussion. As you say, he's a mathematician; I'm not sure who Philosopher!John Conway would be, even if I had met them. Which I haven't.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-24T08:48:06.119Z · LW(p) · GW(p)

It takes some engagement with philosophy to know how to rate a philosopher. OTOH, someone can always gain popularity by telling people what they want to hear. Someone once told me that the Best Philosopher was Timothy Leary.

comment by MugaSofer · 2013-01-21T18:59:31.266Z · LW(p) · GW(p)

In my experience of philosophy,

Which is what?

Mine.

In my experience of philosophy, there are very few philosophers at EY's level. I don't see why disagreement would merit a downvote. Isn't that the whole problem?

You made a claim that I - and, you yourself acknowledged, most LWers - had a low prior for. You did not provide any evidence for this claim. That seems worth a downvote, as long as you don't provide a disclaimer that you can't provide your evidence right now or something.

there are very few philosophers at EY's level.

"Level" meaning credibility by normal standards (ie he as good as Dennett, Chalmers, Parfitt..) or "level" meaning he's good at being lessWongian because he invented it?

Well, I was impressed by LW, so there's naturally going to be some correlation.

Level meaning how good he seems at evaluating philosophical arguments, identifying flaws, and mostly avoiding the flaws that seem especially common in philosophy. I'm not sure how abnormal these criteria are, but then normally we're not talking about FAI.

You provided no evidence for your claims;

Not in the first draft. You downvoted a version with evidence.

One piece of evidence, which, as I noted, is at best extremely weak and misleadingly labelled.

and when you edited your comment.

That's incomplete. I hardly need point out the problems of making mistakes when you are pointing out mistakes.

Whoops, that should be an ellipsis. Fixed.

The link you gave consists of him clarifying his terminology, and thanking them for interpreting his unclear wording charitably.

So? Philosophy is mostly arguing, and arguing more clearly is arguing better...is better philosophy. Are you saying the link was irrelevant or what?

To the extent that rhetorical skills are uncorrelated with epistemic skills, yes, I am. He's not trying to persuade people of the problem, he's trying to solve it.

Also because it irritates me that this site is scattered with comments at anything from -3 to +15 (not exact figures) that criticize cryonics/ASI/other things lots of us believe in, LW policies, or EY, and then talk about how they're going to get downvoted into oblivion for speaking out against the consensus.

ETA: But the first draft was downvoted! As was pretty well everything I have said on the subject.

But not, I suspect, simply because you're contradicting the consensus. You have made sweeping claims, not considered likely by most LWers, regarding, among other things, users on this site (which we can all see,) yet you failed to provide examples; your lack of examples is at best unhelpful and at worst ... well dishonest or symptomatic of massive bias, whichever of those is "worst".

Also I am not clear why that is irritating.

Well, they never get called on it, and it promotes misconceptions regarding the site and the karma system specifically.

ETA. But thanks for the comment anyway.

Thank you for replying. I upvoted the reply, incidentally.

comment by MugaSofer · 2013-01-21T21:58:43.740Z · LW(p) · GW(p)

X being a surprisingly simple counterargument. Not a new one, necessarily, just one that the text either fails to address or quietly dismisses without actually refuting it.

An annoying amount of philosophy is guys sitting in armchairs going "but what if contradictory things can be true? Didn't think of that, didja?"

EDIT:

For example:

Swampman, original formulation:

"it can't recognize anything, because it never cognized anything in the first place."

(Based on his glass marble thought experiment, Davidson defines "recognize" causally; that is, whatever caused the idea of the marble is what he means by "the marble", even if it has changed shape or been replaced with a copy. A nonstandard but useful definition, which implies we can be mistaken about what we mean/recognize/etc.)

Swampman is, of course, essentially a Boltzmann brain; he was formed by chance into a form that happens to be conscious. But more than that, he has been formed into a copy of someone else. Now, if he was formed entirely by chance, then an observer should not consider his beliefs to be correct, any more than a random number generator can reliably predict the weather. In this sense the concept of "recognize" is working perfectly; the author is just being mean.

However, the reader of course knows that Swampman is causally connected to Davidson, via the author's fiat; thus our intuitions in this case are closer to a clone than a true Boltzmann brain. And if a character in the thought experiment were to notice that Swampman was a perfect copy (which seems reasonable) they would have sufficient evidence to postulate the author. Or at least some other god-level power screwing with them. In this sense Swampman is causally connected to Davidson - who really did "cognize" his friends - and thus genuinely "recognizes" them.

And, of course, an unfortunate amount of philosophy magically disappears when you start playing taboo and otherwise avoiding arguments over definitions.

SECOND EDIT:

OK, I can't reply to comments here due to the karma toll. If anyone's interested, I could post the example on an open thread for criticism?

comment by MugaSofer · 2013-01-22T09:21:53.555Z · LW(p) · GW(p)

EDIT: there goes another conversation. Thank you karma toll.

You made a claim that I - and, you yourself acknowledged, most LWers - had a low prior for.

I know. But you are not supposed to downvote for disagreement.

I'm not downvoting for disagreement, I'm downvoting for absurd claims without any damn evidence. If you had provided, say, an example of a LW user who is better at philosophy - as opposed to a terminology quibble - then I would not have downvoted even if I didn't think it was sufficient.

Well, I was impressed by LW, so there's naturally going to be some correlation.

Level meaning how good he seems at evaluating philosophical arguments, identifying flaws, and mostly avoiding the flaws that seem especially common in philosophy.

And I would say he is not good at coming to clear conclusions,

I think you misspelled "correct" there.

examining all the relevant arguments,

I always considered philosophers far too willing to spend ages dealing with some BS position that could be demolished in seconds if they were trying to win and not just "do philosophy" for as long as possible, but whatever.

avoiding known errors,

If you have spotted an error, I advise you to point it out, not just make vague statements about how there are totally loads of errors.

avoiding hubris,

Truly, the gods shall punish him for his arrogance.

using standard terminology, etc.

I suppose standard terminology might help people critique his work, at that. Still, it seems a minor flaw TBH.

One piece of evidence, which, as I noted, is at best extremely weak and misleadingly labelled.

I fail to see what was misleading.

You called it EY "thanking real philosophers for fixing his arguments" which, in the context of you saying that there were better philosophers on LW, rather implies that their skill at philosophy allowed them to identify a mistake well known to, well, experts in the field; not them interpreting his unclear terms charitably, which while a good thing is hardly relevant.

So? Philosophy is mostly arguing, and arguing more clearly is arguing better...is better philosophy. Are you saying the link was irrelevant or what?

I think you accidentally quoted your self instead of me there.

"Rhetorical" is a rather rtheorical way of saying "clear". EY can claim to have a Wonderful Solution in his head that no-one can undestand, but thre is no reason anyone else shouild buy that. To solve a probllem is to publish a solution that persuades others, for all practical purposes.

Um, no. To solve a problem is to find the correct solution, which may or may not require persuading others that it is correct, or indeed that it is incorrect or that they shouldn't eat tuna.

But clarity has nothing to do with "persuasion" in the advertiser's or politician's sense. Clarity is not a varnish that makes bad arguments look good: clarification is a major part of the substance of philosophical solution.

Considering how unclear much philosophy is to a layman, I assume you mean that "clarity" is helping experts to find any mistakes you may have made. This is important, and indeed a separate discipline from what we might call the Dark Arts. Eliezer could probably do with some more of this, yeah. OTOH, philosophers generally seem pretty bad at this, so I'm not sure if Eliezer is unusually bad. And it's certainly not the only or even main thing philosophers are supposed to be good at.

But not, I suspect, simply because you're contradicting the consensus. You have made sweeping claims, not considered likely by most LWers, regarding, among other things, users on this site (which we can all see,) yet you failed to provide examples; your lack of examples is at best unhelpful and at worst ... well dishonest or symptomatic of massive bias, whichever of those is "worst".

I argue my points.

Did you quote the wrong thing here too? I'm not sure how this link is relevant.

You did not provide any evidence for this claim.

ETA:

What would evidence that someone is bad at something look like?

Evidence that someone is unusually bad would look like lots of people being better. So would evidence that "A number of better-than-EY philosophers have visited this site, sometimes displaying their skills in discussions with EY."

comment by Vladimir_Nesov · 2013-01-24T14:50:36.897Z · LW(p) · GW(p)

Moved due to low karma.

This restriction applies as intended, don't evade it.

http://wiki.lesswrong.com/wiki/Deletion_policy#Prolific_trolls

(You now have minus 250 30-day Karma, so I'll start banning/hiding some of your comments (they will remain accessible from your user page).)

comment by ArisKatsaris · 2013-01-24T16:15:52.491Z · LW(p) · GW(p)

I won't play the definitional games -- you yourself talked about "low-level trolling" which you excused as "teasing", and so you could have used your definition, whatever it is. But you didn't; and instead you avoided promising not to troll or stating that you've not trolled before.

I won't respond to you again, at least until such a promise has been made, and perhaps not even then.

comment by Fronken · 2013-01-25T15:21:44.296Z · LW(p) · GW(p)

This link seems not to answer the comment. Is this mistaken, or did EY use that fallacy?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-26T20:55:07.271Z · LW(p) · GW(p)

Replying to this comment because the comment below (and above) was deleted while I was away from the PC:

Who on earth downvotes a comment that says "edited"?

(Also, anyone know why the comment above was deleted? Unless it was edited again, it seemed like a perfectly reasonable comment.)

EDIT: Well, my query about the deleted comment has been answered, and politely, but for some reason this has been downvoted below the Karma Toll level. Not sure why - the main question hasn't been replied to, so maybe I'm missing something - so I'm retracting this on the basis that I either inadvertently said something stupid or whoever's been karmassassinating me really hates this comment.

comment by whowhowho · 2013-01-25T20:38:16.666Z · LW(p) · GW(p)

Black swans flock. There are any number of things with very low probability and very high positive and negative payoffs. You can't control for them all. I suggest renormalisation: hand-wave them away by assuming they cancel out.

comment by Shmi (shminux) · 2013-01-26T22:21:13.506Z · LW(p) · GW(p)

Given the fairly uniform negative reaction to your posts, judging by your 30-day karma, you may want to consider looking for a forum where you will be better appreciated.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-26T22:43:41.804Z · LW(p) · GW(p)

I haven't been following his recent posts closely, although those I've noticed which were downvoted mostly seemed deserving of it, but I will note that although his drop in karma has been precipitous, it's only recently reached zero. He earned at least a couple hundred karma before blowing it all.

comment by Desrtopa · 2013-01-26T22:49:51.527Z · LW(p) · GW(p)

You accused Eliezer of committing the noncentral fallacy, you did not demonstrate that he committed it.

Eliezer posited a categorization of "trolls" defined by the practice of deliberately pissing people off on the internet, of which people who incite others to commit suicide are an extreme example. If this is the uniting quality of "trolls," then it's not unreasonable to conclude that we don't want any of them in the community, not just the more extreme examples.

Not all diseases will kill you or make you wish you were dead, so it may seem "unfair" to group ones that will with ones that merely, say, make your feet itchy, but that doesn't mean it's unreasonable to prefer to have no diseases.

comment by gwern · 2013-02-02T17:10:52.749Z · LW(p) · GW(p)

Surely low enough not to be overcome by you being impressed or you agreeing with his philosophy

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we draw a random number between 1 and a million; the prior for any particular number between 1 and a million is straightforwardly very low - one in a million, of course. Now, I have just generated the number 493250 using random.org. Surely this prior of 1 in a million that I have generated any specific number like 493250 is low enough to not be overcome by you being impressed by looking at this comment and see '493250' in it? The prior for you having very special powers of perception of the right number is likewise proportionally low to how very special it is, and so on."

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."

The ensuing debates and demands for evidence that something with a very low prior isn't true are particularly illuminating with regards to just how incapable certain self-proclaimed Bayesians are of the most basic forms of probabilistic reasoning.

Yes. I agree. Some of these self-proclaimed Bayesians cannot even fully specify their examples or prove their arguments or explain the crucial part of what they were probably arguing.
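
To make the arithmetic behind that random.org example concrete, here is a minimal sketch of the Bayesian update, with invented likelihoods (nothing here is computed anywhere in the thread): a 1-in-a-million prior is easily overcome when the evidence is vastly more likely under the hypothesis than under its negation.

```python
# Illustrative Bayesian update: a 1-in-a-million prior can be overcome by
# evidence with a sufficiently extreme likelihood ratio. All numbers are
# made up purely for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 1e-6               # P(the drawn number was 493250)
p_see_if_drawn = 0.999     # P(the comment shows "493250" | it was drawn)
p_see_if_not = 1e-9        # P(the comment shows "493250" | it was not drawn)

print(posterior(prior, p_see_if_drawn, p_see_if_not))  # ~0.999
```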

Replies from: whowhowho
comment by whowhowho · 2013-02-02T17:52:52.689Z · LW(p) · GW(p)

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."

So, putting the analogy into reverse, the top post is wrong. You can judge N levels above your own.

Replies from: gwern
comment by gwern · 2013-02-02T18:00:24.214Z · LW(p) · GW(p)

I was making the point that Dmytry's claim was flawed in 2 separate ways; 'you can judge N levels above your own' is closer to the point of the random.org example than the DC example. (The DC example was more about neither DC nor EY being a random selection, not the strength of personal judgment.)

comment by gwern · 2013-02-02T21:32:09.663Z · LW(p) · GW(p)

I would have thought the latter, but I now think you're honest, given the earlier conversation involving 'crazy' anthropic reasoning of mine which turned out to be favoured by pretty much everyone at SI as well, contrary to your claims.

When did I claim no one at SI held your views? That would've been hard since you refused to use standard terminology like SIA or SSA which I could then go 'ah yes, that's Bostrom's current view'.

If you had some omniscient Omega that had a web interface where you could enter "Pick a 1-in-a-million-quality philosopher" and it would reply "Eliezer Yudkowsky", and that's how you came across Yudkowsky, then it would have been analogous to that random.org example.

...The prior for the comic is low. You update it away if the choice of comic correlates very well with what you consider the "best". If you were just shown various clip art at random you'd have a lot of trouble guessing the most popular one, because your eye for popularity certainly won't provide enough evidence.

No, that's not the analogy. The analogy is that there are at least 2 ways in which we are long past a prior of 1 in a million and don't have judgments which are equivalent to random choice, and those were illustrating them: the first is one's own ability to recognize some level of quality in a philosopher, and the second is about looking at a non-random selection at the end of a process with some selection for quality.

Are you making the point that the popularity of a philosopher among non-philosophers is very correlated with their philosophical ability?

Even a small correlation is enough to move the needle.

What about the lack of recognition by other philosophers; how is that correlated with philosophical ability? What about Jesus, a dead philosopher who's quite damn popular?

So you cite, in a statistical claim throwing around numbers like 1 in a million, a single example? And I wonder how many people really consider Jesus a philosopher, as opposed to an excuse like GWB to signal their religion and cover up that they don't actually have any preferences as to secular philosophers...

But yeah, popularity is a meaningful index! Go down the list of great philosophers and you'll find they are popular and even appear in pop culture; Zeno, Plato, Socrates, Aristotle, Confucius, Descartes, Nietzsche, Russell, Wittgenstein to name a few off the top of my head are all widely read by laymen and appear in popular culture, and were often world-famous in their own lifetime. Of course it's not a perfect correlation - not all great philosophers will find popularity after their death among non-philosophers (Plotinus or Spinoza or Hume may have been the greatest philosophers of their time but only philosophers read them these days) - but think of how many minor or poor philosophers from those respective time periods remain obscure... Very few of them succeed like Ayn Rand in being a poor philosopher and also popular.

comment by gwern · 2013-02-03T01:18:54.681Z · LW(p) · GW(p)

If you don't know it other than by name, that's not my problem. It was straightforward mathematics.

When did I claim no one at SI held your views on anthropics? And I really don't think anthropics could be called straightforward by anyone.

There are pathological, intuitively confusing cases such as the number example; reading a number is incredibly selective for it being that number, so the update, in fact, does pull the probability up.

Congratulations, you understood the point. Similarly, decent arguments are highly diagnostic of philosophical ability because most people couldn't make an even half-assed argument if they sat down and spent all day at it; by LW standards, most philosophy grads can't find their asses either, and that's a very selective filter as well (philosophy majors are the highest-scoring group on the GRE for both verbal and writing sections, and are around 4 or 5 for the math section below physics & mathematics as one would expect). Making an argument that doesn't instantly fail is sadly so rare that just seeing one moves you a long way towards '1 in a million'.

Yudkowsky is not exactly Ayn Rand level popular, is he? If that's what you're after, pick anyone more famous than Yudkowsky and you're done. Easy.

I never said that fame scaled smoothly with importance. If I had to put the cutoff where fame stops adding additional evidence, I think I'd put it somewhere upwards of a Wikipedia article.

Look back up, Muehlhauser has stated that it is a tough task choosing someone of Yudkowsky's level of philosophical ability.

Sure. You're fishing from a limited pool to begin with: there aren't many professional philosophers these days, their numbers are probably shrinking as humanities programs get pressured. To put some numbers in perspective: the annual East coast meeting of the American Philosophical Association (APA) is the biggest single gathering of philosophers (tenured professors, associates, adjuncts, grad students, etc) in the world as far as I know. It numbers ~2000 attendants. Making things even more difficult, if I were one of them, I doubt I would spend much time on MIRI/FHI-related issues even if I were a true believer: it'd be way too risky for my already extremely precarious career. (Recruiting-wise, it might be best just to try to find computer science people and have them try their hand at philosophy; there's a lot of them, they're almost as smart in general, they have direct familiarity with a lot of the issues, they'll have the right intuitions about things like 'computers really are just machines that do what the programs say', and funding is a lot easier for them.)

By the way there's a pattern, various Ayn Rands and Keith Ranieres and Ron Hubbards and other self improvement gurus slash philosophers slash world saviours are popular philosophers among non-philosophers but not recognized by other philosophers.

I've actually never heard of Keith Raniere despite growing up in NY and visiting RPI; Wikipedia doesn't do a good job of describing what's so bad about it... ("Expensive brainwashing"? Brainwashing doesn't work, that's why cults have annual attrition rates in the double-digits.)

Anyway; yes, I would agree that the previous points also increase the chance EY would fall into that category of frauds. After all, such frauds are also pretty rare, so it's hardly impossible for evidence to increase our beliefs both that EY is a good philosopher and also such a fraud.

(An example: houses catching on fire are rare. Houses not on fire with red spotlights around them are also rare. If I see flickering red light in the sky above a house in the woods, this is consistent with both the house being on fire and them having set up spotlights for a party; and my beliefs in the possibility of a fire and the possibility of spotlights will both increase quite a bit even though they're mutually exclusive scenarios.)
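
A minimal numerical sketch of that fire/spotlights point, with invented priors and likelihoods: one observation can raise the posteriors of two mutually exclusive hypotheses at once, because both predict it far better than the catch-all "neither" hypothesis does.

```python
# Illustrative only: the red glow raises P(fire) and P(spotlights) together,
# even though at most one of them can be true, because both explain the glow
# far better than "neither" does. All numbers are invented.

priors = {"fire": 0.001, "spotlights": 0.001, "neither": 0.998}
likelihoods = {"fire": 0.9, "spotlights": 0.8, "neither": 0.0001}  # P(red glow | H)

evidence = sum(likelihoods[h] * priors[h] for h in priors)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

print(posteriors)
# fire jumps from 0.1% to ~50%, spotlights from 0.1% to ~44%.
```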

comment by BerryPick6 · 2013-02-03T22:07:22.768Z · LW(p) · GW(p)

What evidence gave you this impression?

comment by BerryPick6 · 2013-02-03T22:08:40.060Z · LW(p) · GW(p)

Yes. But he is a barely mediocre philosopher who is in no position to recognise real talent, whether EY's, if it exists, or anyone else's. He confuses ability with style or adherence to doctrines that he approves of.

How much of CSA have you read? Search for the sweet-spot just before Luke discovered LW and you should find high level philosophy going on.

comment by Kawoomba · 2013-02-03T22:09:07.619Z · LW(p) · GW(p)

Whatever caused your slide into jadedness?

comment by fubarobfusco · 2013-02-03T23:18:53.766Z · LW(p) · GW(p)

by LW standards, most philosophy grads can't find their asses

By philosophy standards, most LWers can't find their arses.

By arse standards, most philosophy grads can't find LW.

(Sorry, what was this permutation meant to accomplish?)

comment by TheOtherDave · 2013-02-04T13:34:05.165Z · LW(p) · GW(p)

Do you mean to claim that people who are competent by the standards of philosophy would not treat other people like this?

If so, what is your estimate of the percentage of people with graduate degrees in philosophy who are competent by the standards of philosophy?

comment by TheOtherDave · 2013-02-04T17:32:36.338Z · LW(p) · GW(p)

anyone competent by any standard would not do that by accident

So, I know lots of people who are demonstrably competent by the standard of "computer programmer," just to pick an example, who would absolutely do stupid shit like that without intending to be hurtful. So if you really mean "by any standard," then I simply don't believe you.

Conversely, if you just meant that as rhetorical exaggeration, I'm still left with the question of what standards you're actually talking about. Which is what I was trying to clarify with my questions, which you ignored.

Not sure how to proceed from here. Probably best to drop it.

comment by TheOtherDave · 2013-02-04T18:41:32.626Z · LW(p) · GW(p)

I chose that profession as my example because I know a lot more computer programmers than members of any other single profession.
Almost all the computer programmers I know are not self-obsessed jerks.
I'll answer your questions when you answer mine.

Replies from: ArisKatsaris, private_messaging
comment by ArisKatsaris · 2013-02-04T20:07:40.950Z · LW(p) · GW(p)

My recent experience with reading private_messaging (Dmytry) in both Kruel's blog and the rationalwiki talk page, has led me to the opinion that he has a complete disregard for the truth-value or consistency in any accusation or insult he makes against LW or SIAI - he just throws at them everything he can, the juicier accusation the better, even if it contradicts some earlier accusation. (e.g. half the time SIAI are self-deluded fools, the other half the time they're conscious fraudsters -- half the time there's no danger from AI and SIAI are therefore villains for claiming there is a danger, the other half time SIAI's ideas would create dangerously insane AIs, so it seems AI can be a danger after all)

comment by private_messaging · 2013-02-05T08:09:11.139Z · LW(p) · GW(p)

Ok, here:

Do you mean to claim that people who are competent by the standards of philosophy would not treat other people like this?

Not claiming that. Claiming that people who are competent by the standards of philosophy (or many other disciplines for that matter) would get from evolutionary psychology to anticipation of reaction. Seriously, science is about anticipating consequences. It isn't about epiphanies about why you feel what you feel. Programmers would probably hurt others less if they learn some psychology, even evolutionary psychology (which I am very skeptical of but it is still a framework into which you can put valid findings so it does have value).

There's another issue at play here: just because someone values honesty does not mean they will not act on what you tell them. Let's reverse gender stereotypes here. A girl tells Luke that she's breaking up with him because he does not make enough money, and gives an evolutionary-psychological explanation for an hour to boot. Whereas Luke is only interested in the girl conditional on her not being into him for money. However much he values honesty, that doesn't change the fact that he will no longer be interested (we will assume, for the purpose of the hypothetical, that he was not also looking for a very honest girl that is into him for the money). When people tell you they value honesty, that does not mean honesty will result in them giving you cookies; they're not offering a trade where you are guaranteed to come out net positive in return for honesty.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-05T15:41:09.139Z · LW(p) · GW(p)

Thanks for answering the question. You seem to have deleted the OP that contained the questions I promised to answer in return, so I no longer know what they are. Sorry.

Replies from: private_messaging
comment by private_messaging · 2013-02-05T15:57:24.436Z · LW(p) · GW(p)

You seem to have deleted the OP

I didn't delete any posts, and I don't think I edited any questions out.

comment by ArisKatsaris · 2013-02-04T19:49:51.819Z · LW(p) · GW(p)

Says the person whose whole argument of opposition to compatibilism was basically the cry "but where is the choice?!?"

comment by Strangeattractor · 2013-07-13T10:42:35.401Z · LW(p) · GW(p)

There are always higher levels. If nothing else, you can invent them yourself.

That's what came to mind after reading this post, after reflexively comparing how intelligent I think I am to how intelligent I perceive the author and commenters to be.

Another thing that came to mind was a grumpy sense that the whole issue had not been framed in a useful way, and an urge to meddle with how the ideas are arranged.

When I see work by someone who is at higher levels than my own current understanding and abilities allow me to achieve (notice I am not phrasing that as if it is innate and cannot be learned), I often feel frustrated at the poor quality of the educational materials or culturally transmitted knowledge that I have come across so far, the lack of a map to get there.

I also usually feel relief, and, depending how much I care about the ability and subject matter, sometimes fascination at the work itself, and at the puzzle of how to get closer to that level of ability. I also feel more hope for the world.

It is easier to see brilliance than create brilliance.

In some fields of study there are many people working on a problem. But if you are the only one who you know of who devotes a lot of time to the problem, who cares that much, and who is taking an approach to the problem that is not clearly incorrect, then, it can get...lonely. Except lonely is not the right word. It is not like being lonely for social reasons. It is the murkiness of familiar uncertainty, of having no reliable measuring tools, of staying there for a long time. At times, it can be disorienting.

I'm not sure I am actually addressing the topics in the post, because the post seems to be a mix of several ideas, and the way it is written feels incomplete and imprecise, at least in so much as it conveys meaning to me. And, it is a ~5 year old post, probably the people involved have updated their thinking by now. Nevertheless, it struck a chord with me.

If there was gasp someone who was smarter than you in the ways you care about being smart yourself...then so what? I'm not sure what that has to do with your ability to revolutionize your field, or the sense of being limited by past choices, or concerns you are not using your time in the optimal way, or the way in which expertise and levelling up alters one's perceptions of others and their work. Perhaps these issues all connect in some way, but they seem like they have been unnecessarily conflated into a bit of a mess, to me.

If anything, I would think that someone else being able to achieve something great would make it more likely that you could do so, by showing that it is humanly possible, by giving an example to study and learn from, by expanding the knowledge available to humanity. Having more building blocks with which to build new stuff is, usually, a net benefit.

Also, some things get invented all at once, others in incremental stages. The history of the electric guitar, how several people tinkering with technology led to the instrument we have today, is a counterexample to the single brilliant person model of discovery and invention.

If someone comes along and solves all the problems you've been working on, wouldn't that be a good thing? Do you really not have confidence in your ability to take the next step, or to find something else to work on? I personally have a long list of things that I want to do, and to do them all would take several lifetimes, if lifespans remain about the same as they currently are. Wouldn't that free you up to do something else? I do not get the sense that you have any shortage of things you desire to spend your time on or accomplish.

If no one solves it, if you do not finish, or even fruitfully work on your chosen task, if you fail, is that the bigger problem? To put in so much time and effort, and have it be for nothing? I don't have much to say to that one, except that this is the risk of working on interesting important problems.

comment by AshwinV · 2014-10-06T03:41:44.350Z · LW(p) · GW(p)

Can anyone tell me whether Jaynes' book can be read and understood without any particular formal training? I do know the basic concepts of probability, and I usually score around the 85th percentile on math tests... And how hard/time-consuming exactly will the book be? I am employed in a somewhat high pressure job on a full time basis...

Replies from: Cyan
comment by Cyan · 2014-10-06T04:33:02.915Z · LW(p) · GW(p)

Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.

Replies from: AshwinV
comment by AshwinV · 2014-10-06T06:10:04.587Z · LW(p) · GW(p)

Thanks!

comment by alex_zag_al · 2014-10-11T22:59:39.186Z · LW(p) · GW(p)

You'll note that I don't try to modestly say anything like, "Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field."

Because I do know... that's not how it works.

Maybe not in your field, but that is how it usually works, isn't it?

(the rest of this comment is basically an explanation of comparative advantage)

Anybody can take the load off of someone smarter, by doing the easiest tasks that have been taking their time.

As a most obvious example, a brilliant scientist's secretary. Another example: a brilliant statistician that employs a programmer, who turns his statistical and computational ideas into efficient, easy-to-use software. He doesn't have to be the best programmer, and doesn't have to be that great at statistics, but he allows the statistician to publish usable implementations of his statistical methods without having to code them himself.

Or, here's another one: I've heard MIRI needs a science writer, or needs funding for one. You don't have to be Eliezer Yudkowsky-level at thinking about FAI to save Yudkowsky the time it takes to write manuscripts that can be published in science journals, and then Yudkowsky uses that time for research.

This is "important work." It's not the kind of important work Jaynes or Conway does, and it doesn't put your name in the history books, and if that's what was meant by the article I have no disagreement. But by any utilitarian standard of importance, it's important.

comment by AshwinV · 2014-10-25T06:35:15.867Z · LW(p) · GW(p)

For whatever reason, the sense I get of Jaynes is one of terrifying swift perfection—something that would arrive at the correct answer by the shortest possible route, tearing all surrounding mistakes to shreds in the same motion. Of course, when you write a book, you get a chance to show only your best side. But still.

Just reminded me of one of Lord Acton's quotes: "Judge character at its worst, but talent at its best." (Paraphrased from memory.)

comment by private_messaging · 2014-10-25T07:38:24.937Z · LW(p) · GW(p)

The 'thousand-year-old vampire' impression could be close to the truth. My understanding is that people like Jaynes think very long hours, and can clock as much relevant brain time by their thirties as a person of lesser mental endurance might clock in centuries. And it is entirely possible that Jaynes did as much math by the time he wrote the book (correct math, checked and verified) as a hobbyist would in a thousand years.

comment by TheAncientGeek · 2015-10-18T12:02:23.803Z · LW(p) · GW(p)

Contrarian view about Jaynes' super-smartness, from David Chapman.

Apparently, Jaynes "was completely confused about the relationship between probability theory and logic." and "There’s strong evidence that when people tried to de-confuse him, he pig-headedly refused to listen."

Replies from: entirelyuseless
comment by entirelyuseless · 2015-10-18T15:15:29.520Z · LW(p) · GW(p)

How is that a contrarian view about super-smartness?

Replies from: TheAncientGeek, gjm
comment by TheAncientGeek · 2015-10-18T17:11:18.278Z · LW(p) · GW(p)

Jaynes is not a good example of it.

comment by gjm · 2015-10-19T11:20:16.578Z · LW(p) · GW(p)

(The following remark will be useful only if you happen to have misunderstood TheAncientGeek in a particular way which you might not have.)

By "Jaynes' super-smartness", TAG means not "a notion of super-smartness promulgated by Jaynes" but "the idea that Jaynes was super-smart" (that idea being professed, e.g., by Eliezer). Chapman argues that Jaynes's big idea was wrong-headed and that Jaynes failed to grasp its problems when they were pointed out to him, which suggests that he wasn't super-smart.

Whether that's contrarian around here, I'm not so sure. In so far as there's an LW orthodoxy, I think it involves the idea that an ideal reasoner would be basically Bayesian in something like the manner described by Jaynes. I'm not so sure it involves the idea that Jaynes was super-smart, as such.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-10-19T13:05:46.495Z · LW(p) · GW(p)

I was mostly, but not entirely, joking. In other words I think it is pretty common (although certainly not universal) for a very smart person to fail to grasp a real problem when people point it out, in part because his prior for being personally right and for someone who disagrees being wrong, is higher than it is for people in general.

Since it was a joke, I wasn't commenting on the question (and don't know) whether or not there was some real problem like that which Jaynes failed to notice.

comment by Lior Hecht · 2020-02-23T18:47:09.304Z · LW(p) · GW(p)

Honestly, this is my favorite thread I've read so far. I've lived a similar scenario so many times, and while I doubt any of my "level above mine" models are anywhere near Jaynes, I'm very proud knowing I did manage to catch up with and even surpass some. In part, thanks to Less Wrong and 'Rationality: From AI to Zombies'.

So thank you (Eliezer and many others on this blog) for sharing your experience and knowledge. You're some of my best teachers.

comment by Emiya (andrea-mulazzani) · 2020-12-09T13:05:05.927Z · LW(p) · GW(p)

Oh. 

All this time, and I've never thought once that Eliezer could be thinking about other people nearly the exact way I thought about him.