Bloggingheads: Robert Wright and Eliezer Yudkowsky

post by Liron · 2010-08-07T06:09:32.684Z · LW · GW · Legacy · 129 comments


Sweet, there's another Bloggingheads episode with Eliezer.

Bloggingheads: Robert Wright and Eliezer Yudkowsky: Science Saturday: Purposes and Futures

129 comments

Comments sorted by top scores.

comment by Will_Newsome · 2010-08-08T16:11:03.913Z · LW(p) · GW(p)

Maybe it's because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people. It's almost kinda painful to watch, because even though I wish someone would come along and pwn Eliezer in an argument, it never ever happens because everyone is more wrong than him, and I have to sit there and listen to them fail in such predictably irrational ways. Seriously, Eliezer is smart, but there have to be some academics out there that can point to at least one piece of Eliezer's fortress of beliefs and find a potentially weak spot. Right? Do you know how epistemically distressing it is to have learned half the things you know from one person who keeps on getting proven right? That's not supposed to happen! Grarghhhhhh. (Runs off to read the Two Cult Koans.) (Remembers Eliezer wrote those, too.) (God dammit.)

(And as long as I'm being cultish, HOW DARE PEOPLE CALL OUR FEARLESS LEADER 'YUDKOWSKI'?!?!??!? IT COMPLETELY RUINS THE SYMMETRY OF THE ETERNAL DOUBLE 'Y'S! AHHH! But seriously, it kinda annoys me in a way that most trolling doesn't.)

Replies from: sketerpot, JenniferRM, ciphergoth, XiXiDu, Liron, CarlShulman, timtyler
comment by sketerpot · 2010-08-08T19:31:01.990Z · LW(p) · GW(p)

It reminds me of when Richard Dawkins was doing a bunch of interviews and discussions to promote his then-latest book The God Delusion. It was kind of irritating to hear the people he was talking with failing again and again in the same predictable ways, raising the same dumb points every time. And you could tell that Dawkins was sick of it, too. The few times when someone said something surprising, something that might force him to change his mind about something (even a minor point), his face lit up and his voice took on an excited tone. And when he was particularly uncertain about something, he said so.

People accused him of being arrogant and unwilling to change his mind; the problem is that the people he was arguing with were just so piteously wrong that of course he's not going to change his mind from talking with them. It's funny, because one of the things I really like about Dawkins is that he's genuinely respectful in discussions with other people. Sometimes barbed, but always fundamentally respectful. When the other person says something, he won't ignore it or talk past them, and he assumes (often wrongly) that whoever he's speaking with is intelligent enough and sane enough to handle a lack of sugarcoating.

And of course, all this led to accusations of cultishness, for exactly the same reasons that are making you uncomfortable.

comment by JenniferRM · 2010-09-09T01:08:34.746Z · LW(p) · GW(p)

Maybe it's because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people.

Start with a bit of LW's own "specialized cult jargon" (I kid, really!)... specifically the idea of inferential distance.

Now imagine formalizing this concept more concretely than you get with story-based hand waving, so that it was more quantitative -- with parametrized shades of grey instead of simply being "relevant" or "not relevant" to a given situation. Perhaps it could work as a quantitative comparison between two people who could potentially Aumann update with each other, so that "ID(Alice,Bob) == 0 bits" when Alice knows everything Bob knows and they already believe exactly the same thing, and can't improve their maps by updating about anything with each other. If it's 1 bit then perhaps a single "yes/no Q&A" will be sufficient to bring them into alignment. Larger and larger values imply that they have more evidence (and/or more surprising evidence) to share.

(A simple real world proxy for ID(P1,P2) might be words read or heard by P1 that P2 wrote or spoke. The naive conversion from words to bits would then be to multiply words by ~10 to get bits of information while crossing your fingers and hoping that every word was a novel report of evidence rather than a re-summarization of basically the same information that might let evidential double-counting sneak in the back door. So maybe "ID(Alice,Bob) == 50 bits" means there are five perfectly chosen words that Bob could say to let Alice sync with him?)
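
Here is a minimal sketch of that naive proxy in Python; the function name is hypothetical, and the ~10 bits/word conversion is just the illustrative assumption above, not a serious measure:

```python
def inferential_distance_bits(words_heard_or_read, bits_per_word=10):
    """Naive proxy for ID(P1, P2): words P1 must hear or read from P2,
    times roughly 10 bits per word -- crossing our fingers that every word
    is novel evidence rather than a re-summarization that double-counts."""
    return words_heard_or_read * bits_per_word

# Under this proxy, ID(Alice, Bob) == 50 bits corresponds to five
# perfectly chosen words from Bob bringing Alice into sync.
print(inferential_distance_bits(5))  # -> 50
```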

Now consider naively (i.e. imagine that everyone is a baseline human operating mostly on folk wisdom) that Alice and Bob are in a debate being judged by Jim, where Jim is forced to judge in favor of one or the other debater, but not both or neither. Given this background information, H, what do you think of the specific probability estimate:

PROB( Jim judges for Alice | H and ID(Jim, Alice) < ID(Jim, Bob) )

If this is 0.5 then the concept of inferential distance gives no special predictive power about how Jim will judge. I think this is unlikely, however, given what I suspect about the kinds of mistakes Alice and Bob will make (assuming things intelligible to themselves are intelligible to everyone) and the kinds of mistakes that Jim will make (thinking that if something isn't transparently obvious then whatever was said is just wrong). My guess would be that Jim would judge in favor of Alice more often, simply because he already deeply understands more of what she says in the course of the debate.


So... I think the critical question to ask is what evidence from the world might Robert Wright have talked about if he hadn't been wrongfooted when he was pulled into Eliezer's unfamiliar frameworks for describing optimization processes and for doing expectation-based-argumentation (that you're already familiar with but that Robert presumably hasn't read up on).

In point of fact, Robert has published several books with lots of evidence, even if he isn't good at defending himself from Eliezer's rhetorical jujitsu. Basically none of the contents of his books came out because, although Robert offered helpfully leading questions about Eliezer's area of specialization (which Eliezer complimented him on -- I think maybe mistaking his basic conversational generosity for agreement-and-hence-intelligence), Eliezer didn't reciprocate, which meant that the video audience didn't get to see Robert's specialist knowledge.

Here is a bit from Amazon's quote of the Publishers Weekly review of Robert's book "Nonzero", describing the kinds of things Robert could have been talking about if Eliezer had "played along for the sake of argument" before going into attack mode:

The non-zero-sum dynamic, Wright says, is the driving force that has shaped history from the very beginnings of life, giving rise to increasing social complexity, technological innovation and, eventually, the Internet. From Polynesian chiefdoms and North America's Shoshone culture to the depths of the Mongol Empire, Wright plunders world history for evidence to show that the so-called Information Age is simply part of a long-term trend. Globalization, he points out, has been around since Assyrian traders opened for business in the second millennium B.C. Even the newfangled phenomenon of "narrowcasting" was anticipated, he claims, when the costs of print publishing dropped in the 15th century and spawned a flurry of niche-oriented publications. Occasionally, Wright's use of modish terminology can seem glib: feudal societies benefited from a "fractal" structure of nested polities, world culture has always been "fault-tolerant" and today's societies are like a "giant multicultural brain." Despite the game-theory jargon, however, this book sends an important message that, as human beings make moral progress, history, in its broadest outlines, is getting better all the time.

This sounds to me like a lot of non-fictional evidence. My guess is that Wright is ultimately just more interested in the Invisible Hand than in Azathoth and sees the one "deity" as being more benevolent than the other. If I generously misinterpret him as claiming this, I notice that I'm already willing to believe this because Azathoth seems kind of scary and horrifying to me. If I imagine more evidence this way I'm more inclined to believe it...

So I expect that if the conversation in the video had been more about "cooperative truth seeking" than about "debate winning", then Robert would have said something and justified it in a way that improved my thinking.

I think a lot of what's scary about many real world epistemic failure modes is not that they are full of gross logical fallacies, or involve wearing silly clothes, or get you to work on truly positive "public goods", but that they deflect you from acquiring certain kinds of evidence without your even noticing it.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-09T02:26:00.843Z · LW(p) · GW(p)

Why must you ruin my self-conscious countersignalling with good epistemology?!

But seriously... Ack! Jennifer, you're brilliant. I dunno what they put in the water at that CCS place. Would you accept me as your apprentice? I hear tell you have a startup idea. I can't code, but I live very cheaply and can cheerfully do lots of menial tasks and errands of all kinds, from in-the-field market research to buying donuts to washing dishes to answering customer questions and everything else. I'm versatile, energetic and a wicked good rationalist. And I feel that working for you even for a short while would significantly build my understanding of social epistemology and epistemology generally, helping me in my quest to Save the World. Doesn't that sound like a totally awesome idea? :D

Replies from: JenniferRM
comment by JenniferRM · 2010-09-10T01:46:52.664Z · LW(p) · GW(p)

Your compliments are appreciated but, I suspect, unwarranted :-P

I'm not saying "definitely no" and I think it would be cool to work with you. But also you should probably reconsider the offer because I think the right question (tragically?) is not so much "Can I work with you to somehow learn your wisdom by osmosis?" but "Where are the practice grounds for the insight just displayed?" My working theory of "intellectual efficacy" is that it mostly comes from practice.

Following this theory, if you're simply aiming for educational efficiency of the sort that was applied here, you could do much worse than getting some practice at competitive inter-collegiate policy debate (sometimes called CEDA or NDT depending on the region of the US).

I would attribute my insight here not to "something in the water" at the CCS (the College of Creative Studies at UCSB, which, for the record, I just hung out at because that's where my friends were), but to experiences before that on a college debate team in a two year program that included a debate tournament approximately every third weekend and about 10 hours per week in a college library doing research in preparation for said tournaments.

Here is a partial list of four year colleges that have policy debate teams.

If you were going to go for the best possible debate experience in the U.S., I'd estimate that the best thing to do would be to find a school that was valuable for other reasons and where (1) the head coach's favorite event is CEDA/NDT and (2) the ((debate program budget)/debater) value is high. The funding is important because practical things like a room just for the debate team and travel/food/hotel subsidies matter for filling out a debate team and giving them a sense of community, and the size and quality of the team will be a large source of the value of the experience. You might also try to maximize the "tournaments per team member per year", which might vary from school to school based on the costs of travel given the school's location.

The only major warning with this suggestion is that a lot of the value of learning to debate rigorously is just that you'll pick up library skills, policy debate theory, the ability to notice (and produce) debating tricks on the fly, and confidence speaking in front of an audience. Learning debate to practice rationality is kind of like learning to knife fight in order to practice saving people. The skills might have uses in the target domain, but they are definitely not the same thing.

(Though now that I spell out the warning, it might work as a vote for being paid to work in a startup where calculating semi-autonomy is encouraged rather than paying for school in pursuit of theoretically useful ideas? Hmmm...)

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-10T02:58:36.766Z · LW(p) · GW(p)

not so much "Can I work with you to somehow learn your wisdom by osmosis?" but "Where are the practice grounds for the insight just displayed?"

It's less the insight just displayed and more a general tendency to see Pareto improvements in group rationality. But debate's an interesting idea.

comment by Paul Crowley (ciphergoth) · 2010-08-08T21:17:49.661Z · LW(p) · GW(p)

Bear in mind that, like many good works of pop science, the vast majority of what the Sequences present is other people's ideas; I'm much more confident of the value of those ideas than of the parts that are original to Eliezer.

Replies from: AndyWood
comment by AndyWood · 2010-08-10T06:32:34.677Z · LW(p) · GW(p)

And who filtered that particular and exceptionally coherent set of "other people's ideas" out of a vastly larger total set of ideas? Who stated them in (for the most part) clear anti-jargon? I would not even go into the neighborhood of being dismissive of such a feat.

Originality is the ultimate strawman.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-10T07:21:24.552Z · LW(p) · GW(p)

I don't mean to be dismissive at all - leaving aside original content like the FAI problem, the synthesis that the Sequences represent is a major achievement, and one that contributes to making the clarity of writing possible.

comment by XiXiDu · 2010-08-08T20:15:42.499Z · LW(p) · GW(p)

There's not much he could be proven wrong about. What EY mainly accomplished is to put together the right pieces, which were already out there before him, and create a coherent framework.

But since I've only read maybe 5% of LW I might be wrong. Is there something unique that stems from EY?

Another problem is that what EY is saying is sufficiently vague that you cannot argue with it unless you doubt some fundamental attributes of reality.

I'm not trying to discredit EY. I actually don't know of any other person that comes even close to his mesh of beliefs. So much so that I've been much more relaxed since I learned about him: if I were going to die, everything I ever came up with, and much more, is already contained inside EY's mind :-)

Anyway, I can't help and often muse about the possibility that EY is so much smarter that he actually created the biggest scam ever around the likelihood of uFAI to live off donations from a bunch of nonconformists. - "Let's do what the Raelians do! Let's add some nonsense to this meme!"

Of course I'm joking, hail to the king! :-)

comment by Liron · 2010-08-08T20:13:46.885Z · LW(p) · GW(p)

Do you know how epistemically distressing it is to have learned half the things you know from one person who keeps on getting proven right?

Yeah, huge red flag. I'll also note that reading Eliezer's stuff made me feel like I got to extend my beliefs in the same direction away from mainstream that they were already skewed, which is probably why I was extremely receptive to it.

Even though I've learned a lot, I don't get to congratulate myself for a real Mind Change.

comment by CarlShulman · 2010-08-09T00:17:23.869Z · LW(p) · GW(p)

(Runs off to read the Two Cult Koans.)

Or this.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-08-09T06:54:46.487Z · LW(p) · GW(p)

Thanks! I guess there's a good reason not to have a 'cultishness' tag, but still, it'd be kinda cool...

comment by timtyler · 2010-08-10T20:37:41.547Z · LW(p) · GW(p)

There are not very many seriously written-up position statements from Eliezer. So, it probably doesn't represent a very attractive target for "academics" to attack.

There are a couple of papers about the possibility of THE END OF THE WORLD. That is an unconventional academic subject - partly because no instances of this have ever been observed.

comment by timtyler · 2010-08-07T11:24:34.059Z · LW(p) · GW(p)

Rabbits and foxes are used as a stereotypical example of conflict. However, "even" foxes and rabbits actually cooperate with each other - as follows:

A fox slinks about, looking for food. When he spies a rabbit munching in the grass, he begins to creep closer. If the rabbit sees the fox coming, it will stand on its hind legs, observing the fox. The fox now realizes that it's been discovered, and it will turn away from the hunt. The rabbit could run, but that would entail wasteful energy expenditure. So it simply signals the fox. The fox gets the "I see you" signal, and turns away, because it also doesn't want to expend energy on a futile chase. So both animals come out ahead, by the use of a signal. The rabbit's work loop (stay alive) has been completed with minimum energy expended, and the fox's work loop (find food) has been terminated unsuccessfully, but with less energy used than if it had included a fruitless chase.

The rabbit helps the fox save energy, the fox helps the rabbit save energy - it's a deal. They don't want exactly the same thing - but that is true for many traders, and it doesn't prevent cooperative trade arising between them. Nature is full of such cooperation.

Replies from: RobinZ, SilasBarta, SilasBarta
comment by RobinZ · 2010-08-07T16:09:06.072Z · LW(p) · GW(p)

Actually, do you have a citation for this datum?

Edit: The author has commented downthread.

Replies from: timtyler, Richard_Kennaway
comment by timtyler · 2010-08-07T20:05:18.317Z · LW(p) · GW(p)

It's an anecdote, which I presented very badly :-(

I was actually looking for evidence that white bunny tails signalled to foxes - but people mostly seem to think they signal danger to other rabbits.

Update - abstract of "Do Brown Hares Signal to Foxes?":

"Of a total of 32 sedentary brown hares (Lepus europaeus) approached across open ground by foxes (Vulpes vulpes), 31 reacted when the fox was 50 m or less from them by adopting a bipedal stance directly facing the fox. Of five sedentary hares approached by foxes from nearby cover, none stood, three moved away and two adopted the squatting (primed for movement) posture. Hares stood before foxes in all heights of vegetation and on 42% of occasions were solitary. Hares did not stand before approaching dogs (Canis familiaris). The functions of this behaviour are considered and competing hypotheses of Predator Surveillance and Pursuit Deterrence are examined by testing predictions against results obtained. The results suggest that by standing erect brown hares signal to approaching foxes that they have been detected."

comment by Richard_Kennaway · 2010-08-07T17:48:42.115Z · LW(p) · GW(p)

GIYF. Author.

Replies from: RobinZ
comment by RobinZ · 2010-08-07T19:57:57.465Z · LW(p) · GW(p)

That's the same source I found for the quotation when I hit up the search engines, but I was rather hoping for a naturalist of some description to back up the theory. I don't see that you could be confident of that explanation without some amount of field work. Who put in the eye-hours to develop and confirm this hypothesis?

Edit: I mean, if Eb the author did, that's fine, but he doesn't even mention growing up in the country.

Replies from: EbfromBoston, EbfromBoston
comment by EbfromBoston · 2010-08-07T20:13:39.611Z · LW(p) · GW(p)

Sorry for not citing my fox/rabbit scenario; I am the author in question... I was basing my tale on observations made by some European ethologist/semiotician. The signals given by animals as they navigate the "umwelt". I read Uexkull, Kalevi Kull, Jesper Hoffmeyer, and Thomas Sebeok, among others.

Somewhere was the description in question. The author said that he had something like 10,000 hours of observation.

Sorry for not citing my sources. I'll try to be more precise in note-taking.

But it was a thrill that someone read my website!

http://adaptingsystems.com

Eb

Replies from: RobinZ
comment by RobinZ · 2010-08-07T20:28:09.094Z · LW(p) · GW(p)

Thanks for the quick response! If you can find the citation again among the sources you were reading, I'd appreciate it - perhaps you can add a footnote on the page RichardKennaway links.

Welcome to Less Wrong, by the way! I don't know if you read the About page, but if you're interested in rationality, etc., there's a lot of good essays scattered about this blog.

comment by EbfromBoston · 2010-08-07T20:15:53.444Z · LW(p) · GW(p)

Oh, btw, I grew up in the country. Spent several years on the sheep farm. Interestingly, the herd dogs use the same "signal" mechanism to move sheep. Rather than run around and bark, they get in "predator" pose and the sheep move accordingly.

Interesting to watch low-power energy, i.e. "signals", accomplish work.

Replies from: RobinZ
comment by RobinZ · 2010-08-07T20:32:50.801Z · LW(p) · GW(p)

Now this is a completely irrelevant aside, but I remember hearing about a party at a house with three dogs, mostly in one room. A guest left to use the bathroom, and when she came back, she could see that everyone was packed in a neat group in the center of the room with the dogs patrolling and nudging the strays back in.

That is a neat story about the dogs using the predator pose. Thanks.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-07T20:35:59.782Z · LW(p) · GW(p)

Would you know whether the dogs were border collies?

One of my friends had a border collie when she was a kid, and she told me that the dog was only really happy when the whole family was seated around the dining table.

Replies from: RobinZ, EbfromBoston, RobinZ
comment by RobinZ · 2012-08-21T16:18:39.340Z · LW(p) · GW(p)

I finally got around to asking - they were indeed border collies. +1 for a correct prediction!

comment by EbfromBoston · 2010-08-07T21:40:38.396Z · LW(p) · GW(p)

Yes, border collies. The good border collies complete the work loop (move sheep) with minimal expenditure of energy. One would merely raise an eyebrow and the sheep got the message, and moved. Very impressive.

comment by RobinZ · 2010-08-07T20:38:04.579Z · LW(p) · GW(p)

May well have been - I got the story secondhand myself, and I have a terrible recall for details.

comment by SilasBarta · 2010-08-07T13:37:02.285Z · LW(p) · GW(p)

That doesn't seem like a stable equilibrium -- too much incentive for the rabbits to be "over-cautious" about foxes at the expense of running ability. If they can figure out that "being able to notice and turn toward a fox" is just as good as having the energy to escape a fox, then they'll over-invest in being good at this signal until the foxes realize it's not a reliable signal of a failed hunt.

Replies from: timtyler, AlephNeil
comment by timtyler · 2010-08-07T14:02:02.617Z · LW(p) · GW(p)

Note that this type of signalling to predators is well established in many other creatures:

http://en.wikipedia.org/wiki/Stotting

Replies from: Jonathan_Graehl, sark
comment by Jonathan_Graehl · 2010-08-07T22:51:26.656Z · LW(p) · GW(p)

Stotting is awesome. Thanks for that. I'm puzzled at the controversy over the original point, which is so plausible it's hard not to believe.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-09T16:44:33.324Z · LW(p) · GW(p)

Well, let me explain my intuition behind my objection, even if there's a reason why it might be wrong in this case.

I am, in general, skeptical of claims about Pareto-improvements between agents with fundamentally opposed goals (as distinguished from merely different goals, some of which are opposed). Each side has a chance to defect from this agreement to take utility from the other.

It's a quite familiar case for two people to recognize that they can submit their disagreement to an arbitrator who will render a verdict and save them the costs of trying to tip the conflict in their favor. But to the extent that one side believes the verdict will favor the other, then that side will start to increase the conflict-resolution costs if it will get a better result at the cost of the other. For if a result favors one side, then a fundamentally opposed other side should see that it wants less of this.

So any such agreement, like the one between foxes and rabbits, presents an opportunity for one side to abuse the other's concessions to take some of the utility at the cost of total utility. In this case, since the rabbit is getting the benefit of spending the energy of a full chase without spending that energy, the fox has reason to prevent it from being able to make the conversion. The method I originally gave shows one way.

Another way foxes could abuse the strategy is to hunt in packs. Then, when the rabbit spots one of them and plans to run one direction, it will be ill-prepared if another fox is ready to chase from another direction (optimally, the opposite) -- and it gives away its location! (Another fox just has to be ready to spring for any rabbit that stands and looks at something else.)

So even if the "stand and look"/"give up" pattern is observed, I think the situation is more complicated, and there are more factors at play than timtyler listed.

Replies from: timtyler, timtyler, Jonathan_Graehl
comment by timtyler · 2010-08-10T02:37:14.420Z · LW(p) · GW(p)

Re "pack hunting" - according to this link, the phenomenon happens with foxes - but not dogs: "Hares did not stand before approaching dogs". Perhaps they know that dogs pay no attenntion - or perhaps an increased chance of pack hunting is involved.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-10T03:07:56.413Z · LW(p) · GW(p)

Okay, thank you, that answers the nagging concern I had about your initial explanation. There are reasons why that equilibrium would be destabilized, but it depends on whether the predator species would find the appropriate (destabilizing) countermeasures, and this doesn't happen with foxes.

Confusion extinguished!

comment by timtyler · 2010-08-09T20:00:58.083Z · LW(p) · GW(p)

The basic idea is that both parties have a shared interest in avoiding futile chases - see the stotting phenomenon. Cooperation can arise out of that.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-09T20:25:31.545Z · LW(p) · GW(p)

Yes, I'm familiar with stotting. But keep in mind, that doubles as an advertisement of fitness, figuring into sexual selection and thus providing an additional benefit to gazelles. So it's a case where other factors come into play, which is my point about the rabbit-fox example -- that it can't be all that's going on.

Replies from: timtyler
comment by timtyler · 2010-08-10T02:28:15.898Z · LW(p) · GW(p)

There's often "other things going on" - but here is a description of the hypothesis:

Pursuit-deterrent signals represent a form of interspecific communication, whereby the prey indicates to a predator that pursuit would be unprofitable because the signaler is prepared to escape (Woodland et al. 1980). Pursuit-deterrent signals provide a benefit to both the signaler and receiver; they prevent the sender from wasting time and energy fleeing, and they prevent the receiver from investing in a costly pursuit that is unlikely to result in capture. Such signals can advertise prey's ability to escape, and reflect phenotypic condition (quality advertisement, sensu Zahavi 1977; also see Hasson 1991), or can advertise that the prey has detected the predator (perception advertisement, sensu Woodland et al. 1980). Pursuit-deterrent signals have been reported for a wide variety of taxa, including fish (Godin and Davis 1995), lizards (Cooper et al. 2004), ungulates (Caro 1995), rabbits (Holley 1993), primates (Zuberbühler et al. 1997), rodents (Shelley and Blumstein 2005), and birds (Alvarez 1993).

Another example of signalling from prey to predator is the striped pattern on wasps.

comment by Jonathan_Graehl · 2010-08-10T04:23:58.738Z · LW(p) · GW(p)

The intuition does make sense, but I don't think it serves to refute the proposed co-evolved signal in this case. Perhaps the prey also likes to maintain view of its hunter as it slinks through the brush.

comment by sark · 2010-08-07T14:46:05.803Z · LW(p) · GW(p)

Stotting is costly, hence reliable. 'Noticing and turning to the fox' is not.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-08-08T08:12:02.782Z · LW(p) · GW(p)

Doing things that are costly isn't the only way to reliably signal. In this case the rabbit reliably communicates awareness of the fox's presence. It cannot be faked because the rabbit must look in the right direction. The fact that its prey is not unaware of its presence is always going to be useful to a fox. It will still attempt to chase aware rabbits sometimes, but the exchange of information will help both creatures in their decision making.

This is an equilibrium that, all else being equal, will be stable. Speed will still be selected for, for exactly the same reason that it always was.

Replies from: Matt_Simpson, sark
comment by Matt_Simpson · 2010-08-08T08:51:53.129Z · LW(p) · GW(p)

Just to elaborate (for clarity's sake), by standing up and looking directly at the fox, the rabbit is changing the fox's expected utility calculation. If the rabbit doesn't see the fox, the fox will have the advantage of surprise and be able to close some of the distance between itself and the rabbit before the rabbit begins to run. This makes the chase less costly to the fox. If the rabbit does see the fox, when the fox begins the attack the rabbit will see it and be able to react immediately, neutralizing any surprise advantage the fox has. So if the fox knows that the rabbit knows that the fox is nearby, the fox may well not attack because of the amount of extra energy it would take to capture the rabbit.

The rabbit standing up and staring at the fox is an effective signal of awareness of the fox because it is difficult to fake (costliness is only one way that a signal can be difficult to fake). The rabbit can stand up and stare in a random direction if it wants to, but the probability of a rabbit doing that and being able to randomly stare directly at the fox is pretty slim. So if the fox sees the rabbit staring at it, then the fox can be pretty certain that the rabbit knows where the fox is at.
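
A toy version of that expected-value comparison (all the numbers below are invented for illustration; the point is only that the rabbit's stare pushes the fox's calculation below break-even):

```python
def fox_chase_value(catch_prob, meal_value, chase_cost):
    """Expected payoff to the fox of launching a chase."""
    return catch_prob * meal_value - chase_cost

# Surprise advantage: the fox closes distance before the rabbit reacts.
ev_unaware_rabbit = fox_chase_value(catch_prob=0.4, meal_value=10.0, chase_cost=2.0)
# Rabbit is standing and staring: no surprise, longer and costlier chase.
ev_aware_rabbit = fox_chase_value(catch_prob=0.1, meal_value=10.0, chase_cost=3.0)

print(ev_unaware_rabbit)  # 2.0  -> worth attacking
print(ev_aware_rabbit)    # -2.0 -> fox walks away
```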

Replies from: sark
comment by sark · 2010-08-08T10:05:41.483Z · LW(p) · GW(p)

Very clear. Thanks.

comment by sark · 2010-08-08T10:04:19.887Z · LW(p) · GW(p)

So 'noticing the fox' signals that the rabbit notices the fox and will run when it sees the fox beginning to chase. The fox uses the signal thus: "If the rabbit notices me it gets a head start. With such a head start, and the fact that the rabbit runs at a certain minimum speed, I would not be able to catch it".

Even though the reliability of the signal is independent of the running, its effectiveness/usefulness depends on the rabbit's speed.

Once we have the free riding rabbits placing resources into noticing and away from running, foxes will realize this, and they will chase even when they have been noticed. So now noticing does not prevent the fox from chasing anymore, so there is less pressure on even fast rabbits to signal it.

And then the signaling collapses?

I admit to being quite confused over this. Waiting for someone to clear it all up!

Replies from: wedrifid, NancyLebovitz
comment by wedrifid · 2010-08-08T10:49:34.316Z · LW(p) · GW(p)

Once we have the free riding rabbits placing resources into noticing and away from running

Placing emphasis on 'noticing vs running' is just confusing you. Noticing helps the rabbit run just as much as it helps it look in the right direction.

And then the signaling collapses?

No. Silas was just wrong. If average rabbit speed becomes lower then there will be a commensurate change in the threshold at which foxes chase rabbits even when they have been spotted. It will remain useful to show the fox that it has been spotted in all cases in which about 200ms of extra head start is worth sacrificing so that a chase may potentially be avoided.

If you are still confused, consider a situation in which rabbits and foxes always become aware of each other's presence at a distance of precisely 250m. Would anyone suggest that rabbits would freeload and not bother to be fast themselves in that circumstance? No. In the 'rabbits standing up' situation the rabbits will still want to be fast for precisely the same reason. All standing up does is force the mutually acknowledged awareness.
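
A toy kinematic sketch of that threshold (the speeds, distances and maximum chase length are all invented): slow rabbits get chased even after signalling, so speed stays under selection, while for faster rabbits the acknowledged awareness still calls off the chase.

```python
def fox_chases(head_start_m, rabbit_speed, fox_speed=13.0, max_chase_m=100.0):
    """Fox chases only if it can close the gap within its maximum chase distance."""
    if fox_speed <= rabbit_speed:
        return False
    catch_distance = head_start_m * fox_speed / (fox_speed - rabbit_speed)
    return catch_distance <= max_chase_m

# head start of 5 m: rabbit hasn't noticed; 30 m: rabbit is standing and staring.
for rabbit_speed in (8.0, 10.0, 12.0):
    print(rabbit_speed,
          fox_chases(5.0, rabbit_speed),    # unaware rabbit
          fox_chases(30.0, rabbit_speed))   # signalling rabbit
```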

Replies from: sark
comment by sark · 2010-08-08T11:07:45.245Z · LW(p) · GW(p)

Placing emphasis on 'noticing vs running' is just confusing you. Noticing helps the rabbit run just as much as it helps it look in the right direction.

Sorry I wasn't being clear, previously I had always meant noticing=='showing the fox you have noticed it'.

If average rabbit speed become slower then there will be a commensurate change in the threshold at which foxes chase rabbits even when they have been spotted.

What threshold? I'm guessing other factors such as the fox's independent assessment of the rabbit's speed?

It will remain useful to show the fox that it has been spotted in all cases in which about 200ms of extra head start is worth sacrificing so that a chase may potentially be avoided.

I didn't consider the fact that signaling having noticed required that sacrifice. Does it affect the analysis?

consider a situation in which rabbits and foxes always become aware of each other's presence at a distance of precisely 250m. Would anyone suggest that rabbits would freeload and not bother to be fast themselves in that circumstance?

I don't understand this part.

Replies from: wedrifid
comment by wedrifid · 2010-08-08T13:51:48.576Z · LW(p) · GW(p)

What threshold? I'm guessing other factors such as the fox's independent assessment of the rabbit's speed?

If the average rabbit becomes slower then the average fox will be more likely to estimate that a given rabbit chase will be successful.

I didn't consider the fact that signaling having noticed required that sacrifice. Does it affect the analysis?

Not particularly. We haven't been quantising anyway and it is reasonable to consider the overhead here negligible for our purposes.

I don't understand this part.

You don't particularly need to. Just observe that rabbits running fast to avoid foxes is a stable equilibrium. Further understand that nothing in this scenario changes the fact that running fast is a stable equilibrium. The whole 'signalling makes the equilibrium unstable' idea is a total red herring, a recipe for confusion.

comment by NancyLebovitz · 2010-08-08T10:20:08.462Z · LW(p) · GW(p)

Hypothesis: most rabbits which are in good enough shape to notice are also in good enough shape to escape.

There simply aren't enough old? sick? rabbits to freeload to make the system break down.

Anyone know whether inexperienced foxes chase noticing rabbits? If so, this makes freeloading a risky enough strategy that it wouldn't be commonly used.

Replies from: sark, wedrifid
comment by sark · 2010-08-08T10:22:56.223Z · LW(p) · GW(p)

I expected the devil would be in the details! But yeah, your hypothesis sounds plausible, and freeloading seems risky.

comment by wedrifid · 2010-08-08T10:36:29.850Z · LW(p) · GW(p)

Hypothesis: most rabbits which are in good enough shape to notice are also in good enough shape to escape.

That correlation can not (and need not) be counted on to make the equilibrium stable over a large number of generations.

comment by timtyler · 2010-08-07T14:54:31.418Z · LW(p) · GW(p)

Standing on your hind legs - which is the behaviour under discussion - is costly to rabbits - since it increases the chance of being observed by predators - so they can't do it all the time.

However, that is not really the point. The signal is not: "look how fast I can run" - it is "look how much of a head start my family and I have - given that I can see you now".

Replies from: sark, wedrifid
comment by sark · 2010-08-08T01:19:08.151Z · LW(p) · GW(p)

Not all the time of course. I was referring to SilasBarta's observation that this might not be a stable equilibrium. Because noticing the fox and turning to it is much cheaper than being able to run fast enough such that the fox will not catch you once you notice it. A good noticer but bad runner can take advantage of the good noticer/good runner's signal and free ride off it. The fox wouldn't care if you were a good noticer if you weren't also a good runner, since it can still catch you once you have noticed it.

Replies from: timtyler
comment by timtyler · 2010-08-08T07:21:53.263Z · LW(p) · GW(p)

Maybe. Rabbits go to ground. Escape is not too tricky if they have time to reach their burrow. Running speed is probably a relatively small factor compared to how far away the fox is when the rabbit sees it.

Replies from: sark
comment by sark · 2010-08-08T09:41:05.082Z · LW(p) · GW(p)

Yeah, running speed may not be such an important factor.

comment by wedrifid · 2010-08-08T08:01:52.731Z · LW(p) · GW(p)

Standing on your hind legs - which is the behaviour under discussion - is costly to rabbits - since it increases the chance of being observed by predators - so they can't do it all the time.

Not only that, you can only look in one direction at a time. You do need to know where the fox is. The rabbit only loses a couple of hundred milliseconds if the fox decides to make a dash for it anyway.

comment by AlephNeil · 2010-08-08T13:50:11.303Z · LW(p) · GW(p)

You've described how a scenario in which the rabbits use this behaviour might be unstable, but so what? For any kind of behaviour whatsoever one can dream up a scenario where animals over or underuse it and subsequently have to change.

Prima facie there's nothing at all implausible about a stable situation where rabbits use that behaviour some of the time and it makes the foxes give up some of the time.

comment by SilasBarta · 2010-08-09T16:46:19.094Z · LW(p) · GW(p)

Elaboration of my skepticism on this claim here.

comment by Paul Crowley (ciphergoth) · 2010-08-08T17:38:27.531Z · LW(p) · GW(p)

Is there software that would objectively measure who spoke most and who interrupted who most? If so, Bloggingheads should run such software as a matter of course and display the results alongside each conversation.

EDIT: it should also measure how often each participant allows the other to interrupt, versus simply raising their voice and ploughing on.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-08T20:06:03.862Z · LW(p) · GW(p)

I'm willing to bet that such software would be very hard to develop.

Requiring 5 second pauses after speaking to allow for thought would be an interesting experiment.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-08T20:36:49.080Z · LW(p) · GW(p)

Could you say more about the difficulties you foresee? I'm guessing that Bloggingheads would have the two separate streams of audio from each microphone, which should make it somewhat easier, but even without that figuring out which speaker is which doesn't seem beyond the realms of what audio processing might be able to do.

Replies from: NancyLebovitz, timtyler
comment by NancyLebovitz · 2010-08-08T20:51:48.723Z · LW(p) · GW(p)

I may have been overpessimistic. I didn't think about the separate feeds, and you're right about that making things easier.

There might be questions about who has the "right" to be speaking at a given moment-- that would define what constitutes an interruption.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-08T20:55:33.216Z · LW(p) · GW(p)

Need it be more complex than: person A begins to speak while person B is still speaking? It might get a few false positives, but it should be a useful metric overall.
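
A rough sketch of that metric, assuming Bloggingheads could expose the two separate per-speaker audio tracks mentioned above; the crude energy gate, frame size, and threshold are placeholder assumptions, and a real tool would want proper voice-activity detection:

```python
import numpy as np

def speaking(track, frame=4800, threshold=0.01):
    """Per-frame voice-activity flags for one speaker's mono track (crude energy gate)."""
    n = len(track) // frame
    frames = track[: n * frame].reshape(n, frame)
    return (frames ** 2).mean(axis=1) > threshold

def interruption_counts(track_a, track_b, frame=4800, threshold=0.01):
    """Count frames where A starts talking while B is mid-speech, and vice versa."""
    a, b = speaking(track_a, frame, threshold), speaking(track_b, frame, threshold)
    a_starts = a[1:] & ~a[:-1]
    b_starts = b[1:] & ~b[:-1]
    return int((a_starts & b[1:]).sum()), int((b_starts & a[1:]).sum())

# Who spoke most is then just speaking(track_a).sum() vs speaking(track_b).sum().
```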

comment by timtyler · 2010-08-09T08:10:04.315Z · LW(p) · GW(p)

I think people just use standard video-editing software to combine the videos and their audio streams before uploading them.

comment by knb · 2010-08-07T19:02:53.564Z · LW(p) · GW(p)

Fun fact: if you pause the video and click to different random points, you get to look at a random sampling of Wright's facial expressions, which oscillate between frustration, exasperation, and red-faced rage. Eliezer's expressions move between neutral, amused, serene, and placid.

Replies from: Liron, Letharis
comment by Liron · 2010-08-08T01:52:28.592Z · LW(p) · GW(p)

Eliezer's repertoire is higher-status because it's less reactive.

comment by Letharis · 2010-08-09T01:20:04.797Z · LW(p) · GW(p)

I agree that Eliezer maintained his calm better, but I don't believe that Wright is the simpleton you seem to be painting him to be. I've watched a lot of his videos, and I would say there are very rarely moments of "red-faced rage," and certainly none in this video. He was at times frustrated, but he really is working to understand what Eliezer is saying.

Replies from: knb
comment by knb · 2010-08-09T02:57:25.139Z · LW(p) · GW(p)

Nothing I said implied Wright is a "simpleton", and I certainly don't think he is. I was merely pointing out an amusing aspect of their conversation.

And, yes he did have a moment of "red-faced rage" when he yelled at Eliezer (I believe it was toward the middle of the video). I certainly understand his frustration since the conversation didn't really get anywhere and they seemed stuck on semantic issues that are hard to address in a 60 minute video.

comment by simplicio · 2010-08-08T04:43:45.736Z · LW(p) · GW(p)

Wright gives the impression of a hostile conversation partner, one who is listening to you only to look for a rhetorical advantage via twisted words.

And most of the points he makes are very em... cocktail-party philosophical?

comment by Morendil · 2010-08-07T10:21:07.955Z · LW(p) · GW(p)

Favorite bit:

  • RW: "We will give [the superintelligent AI] its goals; isn't that the case with every computer program built so far?"
  • EY: "And, there's also this concept of bugs."
comment by simplicio · 2010-08-08T05:42:56.562Z · LW(p) · GW(p)

Okay, so from what I can tell, Wright is just playing semantics with the word "purpose," and that's all the latter part of the argument amounts to - a lot of sound and noise over an intentionally bad definition.

He gets Eliezer to describe some natural thing as "purposeful" (in the sense of optimized to some end), then he uses that concession to say that it "has purpose" as an extra attribute with full ontological standing.

I guess he figures that if materialists and religionists can both agree that the eye has a "purpose," then he has heroically bridged the gap between religion and science.

Basically, it's an equivocation fallacy.

comment by SeventhNadir · 2010-08-07T12:24:47.662Z · LW(p) · GW(p)

Maybe I'm just too dumb to understand what Robert Wright was saying, but was he being purposely evasive and misunderstanding what Eliezer was saying when he realised he was in trouble? Or was that just me?

Replies from: Craig_Heldreth, Matt_Simpson, MartinB
comment by Craig_Heldreth · 2010-08-07T22:34:33.863Z · LW(p) · GW(p)

The reason Wright got bent out of shape (my theory): Eliezer seemed to imply the communal mind theory is Wright's wishful thinking. This seems a little simplistic. I do believe Wright is a little disingenuous, but it is a little more subtle than that. It appears to me he thinks he has an idea that can be used to wean millions of the religious faithful onto a more sensible position, and he is trying to market it. And he would sort of like to have it both ways. With hard-edged science folk he can say all that with a wink, because we are sophisticated and we get it. And the rubes can all swallow it hook, line, and sinker.

I forget the exact term Eliezer used that seemed to set him off. It was something like wishing or hoping or rooting-for. Then Wright's speech got loud and fast and confused and his blood pressure went up. He seemed to feel like he was being accused of acting in bad faith when he was claiming to try to be helpful.

Maybe Wright's friends thought he did great under fire?

Replies from: SeventhNadir
comment by SeventhNadir · 2010-08-08T06:48:33.076Z · LW(p) · GW(p)

Maybe Wright's friends thought he did great under fire?

I wish I could have watched it without knowing who either person was, rather than just not knowing who Wright was. That would be interesting

comment by Matt_Simpson · 2010-08-08T08:39:36.971Z · LW(p) · GW(p)

I wouldn't say the evasiveness was purposeful. Robert misunderstood something Eliezer said fairly early, taking it as an attack when Eliezer was trying to make a point about normative implications. This probably switched Robert out of curiosity-mode and into adversarial-mode. Things were going fine after Eliezer saw what was happening and dropped the subject. But later, when Robert didn't understand Eliezer's argument, adversarial-mode was active and interpreted it as Eliezer continuing (in Robert's mind) to be a hostile debate partner. I doubt Robert thought he was in trouble; more likely he thought Eliezer was in trouble and was being disingenuous.

comment by MartinB · 2010-08-07T19:22:12.563Z · LW(p) · GW(p)

I do not know his position or views well enough to see that. But I got the impression he was badly prepared. Severe misunderstandings, and a lesson in staying calm.

comment by timtyler · 2010-08-07T10:30:27.030Z · LW(p) · GW(p)

On first watching, I didn't see where Eliezer was coming from at the end. My thoughts were:

The genetic code was produced by an optimisation process. Biochemists have pretty broad agreement on the topic. There are numerous adaptations - including an error correcting code. It did not happen by accident - it was the product of an optimisation process, executed by organisms with earlier genetic substrates. Before DNA and proteins came an RNA world with a totally different "code" - with no amino acids. It is not that there is no evidence for this - we now have quite a mountain of evidence for it.

The modern ecosystem is the product of a designoid organism containing the first DNA. That creature conquered the world with its descendants - and was massively successful at pursuing its goals, and spreading them everywhere. Accidents might have been involved along the way, but the results are purposeful - to the same extent that other things in biology are purposeful [insert your favourite teleonomy justification for teleology here if so inclined].

comment by timtyler · 2010-08-07T13:02:53.728Z · LW(p) · GW(p)

One of the better BHTV episodes, IMO. Robert Wright was a bit heavy on rhetoric for me: Have you sobered up? Why don't you accuse me of blah. Oh, if you are going to fling accusations around, that isn't very scientific - etc. Also the enthusiasm for extracting some kind of concession from Eliezer about updating his position at the end.

Wright gets a bit excited towards the end. It has some entertainment value - but tends to interfere with the discussion a little. It would have helped if he could have read some EY.

Interesting topics, though.

comment by LucasSloan · 2010-08-07T08:37:31.346Z · LW(p) · GW(p)

The main problem that appeared to me in the discussion is the fact that the present state of the universe is really unlikely, and you would never get it by chance. This is true, and the universe does naively appear to have been designed to produce us. However, this is a priori massively unlikely. This implies that we exist in a universe that tries out many possibilities (many worlds interpretation) and anthropic bias ensures that all observers see weird and interesting things. Robert's problem is that he gets an emotional kick out of ascribing human-friendly purpose to survivorship bias. I'm pretty sure that nothing other than the most painstaking argument is going to get him to realize his folly, and that just isn't going to happen in one hour video chats.

Replies from: CarlShulman, None, teageegeepea, timtyler
comment by CarlShulman · 2010-08-07T09:31:28.248Z · LW(p) · GW(p)

This implies that we exist in a universe that tries out many possibilities (many worlds interpretation)

Big World rather. Many-worlds doesn't give different laws of physics in the way that the string theory landscape or Tegmark's mathematical universe hypothesis do.

comment by [deleted] · 2010-08-09T07:37:40.676Z · LW(p) · GW(p)

The main problem in the discussion that appeared to me is the fact that the present state of the universe is really unlikely, and you would never get it by chance.

Any hypothesis that assigns a really low probability to the present state of the universe is probably wrong.

Replies from: LucasSloan
comment by LucasSloan · 2010-08-09T17:41:47.922Z · LW(p) · GW(p)

That's what I said.

(The universe is in a state such that to uniquely determine it, we need a very complicated theory. Therefore, we should look for less complicated theories which contain it and many other things, and count on anthropics to ensure we only see the parts of the universe we're accustomed to.)

comment by teageegeepea · 2010-08-08T16:31:49.525Z · LW(p) · GW(p)

Have you read Sean Carroll's "From Eternity to Here"? It's a fairly layman-friendly take on that problem (or I suppose more accurately, the problem of why the past was in such an improbable state of low entropy). I think his explanation would fall under Carl Shulman's "Big World" category.

comment by timtyler · 2010-08-07T11:49:46.217Z · LW(p) · GW(p)

I think this argument is mostly about whether purpose is there - not about where it comes from.

Designoid entities as a result of anthropic selection effects seem quite possible in theory - and it would be equally appropriate to describe them as being purposeful [standard teleology terminology disclaimers apply, of course].

Replies from: pjeby
comment by pjeby · 2010-08-07T17:37:01.633Z · LW(p) · GW(p)

and it would be equally appropriate to describe them as being purposeful

Especially if you unpack "purposeful" as meaning "stimulating that portion of the human brain that evolved to predict the behavior of other entities". ;-)

The real confusion about purpose arises when we confuse the REAL definition of purpose (i.e. that one), with the naive inbuilt notion of "purposeful" (i.e. "somebody did it on purpose").

Replies from: timtyler
comment by timtyler · 2010-08-07T20:25:31.075Z · LW(p) · GW(p)

That should not be the definition of purpose - if we are trying to be scientific. Martian scientists should come to the same conclusions.

"Purpose" - in this kind of context - could mean "goal directed" - or it could mean pursuing a goal with a mind that predicts the future. The former definition would label plants and rivers flowing downhill as purposeful - whereas the latter would not.

Replies from: pjeby
comment by pjeby · 2010-08-08T00:02:19.009Z · LW(p) · GW(p)

That should not be the definition of purpose - if we are trying to be scientific. Martian scientists should come to the same conclusions.

Do you mean that a Martian scientist would not conclude that when a human being uses that word, they are referring to a particular part of their brain that is being stimulated?

What I'm saying is that the notion of "purpose" is an interpretation we project onto the world: it is a characteristic of the map, not of the territory.

To put it another way, there are no purposeful things, only things that "look purposeful to humans".

Another mind with different purpose-detecting circuitry could just as easily come to different conclusions -- which means that the Martians will be led astray if they have different purpose-recognition circuits, following which we will have all sorts of arguments on the boundary conditions where human and Martian intuitions disagree on whether something should be called "purposeful".

tl;dr: if it's part of the map, the description needs to include whose map it is.

"Purpose" - in this kind of context - could mean "goal directed" - or it could mean pursuing a goal with a mind that predicts the future.

Now you have to define "mind" as well. It doesn't seem to me that that's actually reducing anything here. ;-)

Replies from: JamesAndrix, timtyler
comment by JamesAndrix · 2010-08-08T04:06:26.317Z · LW(p) · GW(p)

I'm not sure we can rule out a meaningful and objective measure of purposefulness, or something closely related to it.

If I saw a Martian laying five rocks on the ground in a straight line, I would label it an optimization process. Omega might tell me that the Martian is a reasonably powerful general optimization process, currently optimizing for a target like "Indicate direction to solstice sunrise" or "Communicate concept of five-ness to Terran". In a case like that the pattern of five rocks in a line is highly intentional.

Omega might instead tell me that the Martian is not a strong general optimization process, but that members of its species frequently arrange five stones in a line as part of their reproductive process; that would be relatively low in intentionality.

But intentionality can also go with high intelligence. Omega could tell me that the Martian is a strong general optimization agent, is currently curing Martian cancer, and smart Martians just put rocks in a line when they're thinking hard. (Though you might reparse that as: there is a part of the Martian brain that is a specialized optimizer for putting stones in a line. I think knowing whether this is valid would depend on the specifics of the thinking hard -> stones in a line chain of causality.)

And if I just found five stones in a line on Mars, I would guess zero intentionality, because that doesn't constitute enough evidence for an optimization process, and I have no other evidence for Martians.

Replies from: pjeby
comment by pjeby · 2010-08-08T04:22:57.562Z · LW(p) · GW(p)

I would label it an optimization process

Evolution is an optimization process, but it doesn't have "purpose" - it simply has byproducts that appear purposeful to humans.

Really, most of your comment just helps illustrate my point that purposefulness is a label attached by the observer: your knowledge (or lack thereof) of Martians is not something that changes the nature of the rock pattern itself, not even if you observe the Martian placing the rocks.

(In fact, your initial estimate of whether the Martian's behavior is purposeful is going to depend largely on a bunch of hardwired sensory heuristics. If the Martian moves a lot slower than typical Earth wildlife, for example, you're less likely to notice it as a candidate for purposeful behavior in the first place.)

Replies from: JamesAndrix, timtyler, JamesAndrix, timtyler
comment by JamesAndrix · 2010-08-08T05:07:36.003Z · LW(p) · GW(p)

Evolution is an optimization process, but it doesn't have "purpose" - it simply has byproducts that appear purposeful to humans.

How do you know it doesn't have purpose? Because you know how it works, and you know that nothing like "Make intelligent life" was contained in its initial state in the way it could be contained in a Martian brain or an AI.

The dumb mating martian also did not leave the rocks with any (intuitively labeled) purpose.

I'm saying: Given a high knowledge of the actual process behind something, we can take a measure that can be useful, and that corresponds well to what we label intentionality.

In turn, if we have only the aftermath of a process as evidence, we may be able to identify features which correspond to a certain degree of intentionality, and that might help us infer specifics of the process.

comment by timtyler · 2010-08-09T08:22:34.009Z · LW(p) · GW(p)

What Wright said in response to that claim was: how do you know that?

"Optimisationverse

The idea that the world is an optimisation algorithm is rather like Simulism - in that it postulates that the world exists inside a computer.

However, the purpose of an optimisationverse is not entertainment - rather it is to solve some optimisation problem using a genetic algorithm.

The genetic algorithm is a sophisticated one that evolves its own recombination operators, discovers engineering design - and so on."

In this scenario, the process of evolution we witness does have a purpose - it was set up deliberately to help solve an optimisation problem. Surely this is not a p=0 case...

Replies from: pjeby
comment by pjeby · 2010-08-09T17:13:11.107Z · LW(p) · GW(p)

In this scenario, the process of evolution we witness does have a purpose

That's not the same thing as acting purposefully -- which evolution would still not be doing in that case.

(I assume that we at least agree that for something to act purposefully, it must contain some form of representation of the goal to be obtained -- a thermostat at least meets that requirement, while evolution does not... even if evolution was as intentionally designed and purposefully created as the thermostat.)

comment by JamesAndrix · 2010-08-08T09:25:02.905Z · LW(p) · GW(p)

My purposeful thinking evolved into a punny story:

http://lesswrong.com/lw/2kf/purposefulness_on_mars/

comment by timtyler · 2010-08-08T07:17:58.638Z · LW(p) · GW(p)

It would have a purpose in my proposed first sense - and in my proposed second sense - if we are talking about the evolutionary process after the evolution of forward-looking brains.

Evolution (or the biosphere) was what was being argued about in the video. The claim was that it didn't behave in a goal directed manner - because of its internal conflicts. The idea that lack of harmony could mess up goal-directedness seems OK to me.

One issue is whether the biosphere has enough harmony for a goal-directed model to be useful. If it has a single global brain, and can do things like pool resources to knock out incoming meteorites, it seems obvious that a goal-directed model is actually useful in predicting the behaviour of the overall system.

comment by timtyler · 2010-08-08T07:10:05.502Z · LW(p) · GW(p)

Most scientific definitions should try to be short and sweet. Definitions that include a description of the human mind are ones to eliminate.

Here, the idea that purpose is a psychological phenomenon is exactly what was intended to be avoided - the idea is to give a nuts-and-bolts description of purposefulness.

Re: defining "mind" - not a big deal. I just mean a nervous system - so a dedicated signal processing system with I/O, memory and processing capabilities.

Replies from: JoshuaZ, pjeby
comment by JoshuaZ · 2010-08-09T03:38:15.180Z · LW(p) · GW(p)

Re: defining "mind" - not a big deal. I just mean a nervous system - so a dedicated signal processing system with I/O, memory and processsing capabilities.

Any nervous system? That seems like a bad idea. Is a standard neural net trained to recognize human faces a mind? Is a hand-calculator a mind? Also, how does one define having memory and processing capabilities? For example, does an abacus have a mind? What about a slide rule? What about a Pascaline or an Arithmometer?

Replies from: timtyler
comment by timtyler · 2010-08-09T06:46:57.828Z · LW(p) · GW(p)

I just meant "brain". So: calculator - yes, computer - yes.

Those other systems are rather trivial. Most conceptions of what constitutes a nervous system run into the "how many hairs make a beard" issue at the lower end - it isn't a big deal for most purposes.

comment by pjeby · 2010-08-08T16:58:37.841Z · LW(p) · GW(p)

Definitions that include a description of the human mind are ones to eliminate. .... Re: defining "mind" - not a big deal.

Hm. Which one is it? ;-)

I just mean a nervous system - so a dedicated signal processing system with I/O, memory and processing capabilities.

So, a thermostat satisfies your definition of "mind", so long as it has a memory?

Replies from: timtyler
comment by timtyler · 2010-08-08T21:00:50.665Z · LW(p) · GW(p)

Human mind: complex. Cybernetic diagram of minds-in-general: simple.

A thermostat doesn't have a "mind that predicts the future". So, it is off the table in the second definition I proposed.

Replies from: pjeby
comment by pjeby · 2010-08-09T03:04:39.944Z · LW(p) · GW(p)

Human mind: complex. Cybernetic diagram of minds-in-general: simple.

Dude, have you seriously not read the sequences?

First you say that defining minds is simple, and now you're pointing back to your own brain's inbuilt definition in order to support that claim... that's like saying that your new compressor can compress multi-gigabyte files down to a single kilobyte... when the "compressor" itself is a terabyte or so in size.

You're not actually reducing anything, you're just repeatedly pointing at your own brain.

Replies from: timtyler
comment by timtyler · 2010-08-09T06:51:25.739Z · LW(p) · GW(p)

Re: "First you say that defining minds is simple, and now you're pointing back to your own brain's inbuilt definition in order to support that claim... "

I am talking about a system with sensory input, motor output and memory/processing. Like in this diagram:

http://upload.wikimedia.org/wikipedia/commons/7/7a/SOCyberntics.png

That is nothing specifically to do with human brains - it applies equally well to the "brain" of a washing machine.

Such a description is relatively simple. It could be presented to Martians in a manner so that they could understand it without access to any human brains.
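
For what it's worth, here is a minimal sketch of that input/memory/processing/output loop (the class name, update rule, and numbers are invented purely for illustration, not taken from the linked diagram):

```python
class CyberneticController:
    """Toy system in the sense described above: sensory input, memory/processing, motor output."""

    def __init__(self, setpoint):
        self.setpoint = setpoint  # internal representation of the target state
        self.memory = []          # record of past observations

    def step(self, sensor_reading):
        # Processing: compare the observation with the stored target.
        self.memory.append(sensor_reading)
        error = self.setpoint - sensor_reading
        # Motor output: a corrective action proportional to the error.
        return 0.1 * error


# Example loop: a thermostat-like controller nudging a temperature toward 20 degrees.
controller = CyberneticController(setpoint=20.0)
temperature = 15.0
for _ in range(5):
    temperature += controller.step(temperature)
```

Nothing in the sketch is specifically human, which is the sense in which the description could be handed to a Martian - but it is also satisfied by a thermostat with memory, which is the objection raised in the reply below.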

Replies from: pjeby
comment by pjeby · 2010-08-09T17:09:21.584Z · LW(p) · GW(p)

it applies equally well to the "brain" of a washing machine.

That diagram also applies equally well to a thermostat, as I mentioned in a great-great-grandparent comment above.

comment by Apteris · 2012-05-14T18:31:54.943Z · LW(p) · GW(p)

I'm watching this dialogue now; I'm 45 (of 73) minutes in. I'd just like to remark that:

  1. Eliezer is so nice! Just so patient, and calm, and unmindful of others' (ahem) attempts to rile him.
  2. Robert Wright seemed more interested in sparking a fiery argument than in productive discussion. And I'm being polite here. Really, he was rather shrill.

Aside: what is the LW policy on commenting on old threads? All good? Frowned upon?

Replies from: thomblake
comment by thomblake · 2012-05-14T18:34:26.101Z · LW(p) · GW(p)

what is the LW policy on commenting on old threads? All good? Frowned upon?

It's pretty much okay. If there is a recent "Sequence rerun" thread about it in Discussion, then the discussion should happen there instead, but otherwise there are no particular issues.

comment by knb · 2010-08-09T03:02:05.577Z · LW(p) · GW(p)

I really didn't care much for this one. I usually feel like I learned something when I watch a Bloggingheads video (there is a selection effect, because I only watch ones with people I already find interesting). But I'm afraid this one was wasted in misunderstandings and minor disagreements.

comment by timtyler · 2010-08-07T14:51:43.959Z · LW(p) · GW(p)

Re: panspermia.

Applying Occam's razor isn't trivial here. The difficulty of the journey to Earth makes panspermia less probable, but all the other places where life could then have previously evolved make it more probable. The issue is - or should be - how these things balance.

If you write down the theory, panspermia has a longer description. However, that's not the correct way to decide between the theories in this kind of case - you have to look a bit deeper into the probabilities involved.
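
A toy sketch of that balancing, purely illustrative (every number below is invented for the example, not an estimate): the description-length penalty shows up as a prior-odds factor, the "many possible source worlds, difficult journey" considerations show up as a likelihood factor, and the question is which dominates.

```python
# Hypothetical numbers, chosen only to show the shape of the comparison.

# Occam penalty: suppose panspermia's longer description costs 10 extra bits,
# multiplying its prior odds by 2**-10.
extra_description_bits = 10
prior_odds = 2.0 ** -extra_description_bits

# Evidential boost: many candidate worlds where life could have started earlier,
# discounted by the (low) chance of a successful transfer to Earth.
candidate_source_worlds = 1e6
transfer_success_probability = 1e-4
likelihood_ratio = candidate_source_worlds * transfer_success_probability

posterior_odds = prior_odds * likelihood_ratio
print(f"Posterior odds for panspermia under these made-up numbers: {posterior_odds:.3g}")
```

With these particular made-up numbers the odds stay below 1, but the point is only that the answer comes from multiplying the two factors together, not from counting description length alone.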

comment by timtyler · 2010-08-07T09:56:04.706Z · LW(p) · GW(p)

I think it is quite acceptable to describe technological evolution as "purposeful" - in the same way as any other natural system is purposeful.

‘Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.’ Today the mistress has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it. The only concession which they make to its disreputable past is to rename it ‘teleonomy’. - D. Hull.

So, I am sympathetic to Robert Wright. Evolution is a giant optimisation process, which acts to dissipate low-entropy states - and cultural evolution is evolution with a different bunch of self-reproducing agents.

Whether all the parts cooperate with each other or not makes no real difference to the argument. A goal-directed system doesn't need all of its sub-components to cooperate with each other. Cooperation adds up - while conflict cancels out. A bit of cooperation is more than enough - and as the internet shows, the planet has enough cooperation to construct large-scale adaptations.

Replies from: TobyBartels, SilasBarta, timtyler
comment by TobyBartels · 2010-08-07T17:44:48.725Z · LW(p) · GW(p)

"‘Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.’ Today the mistress has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it. The only concession which they make to its disreputable past is to rename it ‘teleonomy’."

So when unmarried, biology and teleology happened to have the same last name. But after the marriage, teleology changed her surname to be different? No wonder ordinary people don't understand science!

comment by SilasBarta · 2010-08-08T02:45:16.042Z · LW(p) · GW(p)

"‘Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.’ Today the mistress has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it.

Sure, so long as you recognize that "purpose" in

"The purpose of the heart is to pump blood."

cashes out as something different from

"The purpose of the silicon CPU is to implement a truth table."

In my experience, there are about zero philosophers of science who both understand this distinction and harp on this point about teleology in biology. Here is one I read recently.

Replies from: timtyler
comment by timtyler · 2010-08-08T07:26:04.538Z · LW(p) · GW(p)

"Cashes out" seems rather vague.

In one case, we have a mind to attribute purpose to - and in the other we don't.

However, both are complex adapted systems, produced by other, larger complex adapted systems as part of an optimisation process. If that is all we mean by "purpose", these would be classified in much the same way.

I didn't like the "No Teleology!" link much - it seemed pointless.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-09T17:16:37.817Z · LW(p) · GW(p)

I use the term "cashes out" because that's the lingo here. But I'll expand out the two claims to show how they're different in a crucial way.

In the case of the heart's purpose, the statement means, "Historically, genes were copied to the next generation in proportion to the extent to which they enhanced the capability/tendency of organisms whose DNA had that gene to make another organism with that gene. At an organism's present state, a gene complex causes a heart to exist, which causes blood to increase in pressure at one point in its circulation, which causes the organism to stay away from equilibrium with its environment, which permits it to pass on the genes related to the heart (the latter being the critical explanatory feature of the organism). If the heart ceased causing the blood to increase in pressure, the organism would lose its ability to remain far from equilibrium (which as mentioned above relates to an aspect with critical explanatory power) much faster and more securely than if the heart ceased causing any of its other effects, such as generation of heat."

In the case of the CPU's purpose, the statement means, "The CPU was added to the computer system because a human designer identified that fast implementation of a truth table would be required for the computer system to do what the human designer intended (which is fast input/output of computations related to what human users will want out of it), and they recognized that inclusion of the CPU would lead to fast implementation of a truth table."

Quite a mouthful in each case! So it's quite understandable when the distinctions are glossed over in simplified explanations of the topics.

But the important thing to notice is that if you take the meaning of "purpose" in the biological context to mean something more like it does in the computer context, you are led into critical errors. For example, you will have a hard time recognizing evolutionary paths, particularly when organs are -- for lack of a better term -- "repurposed" (or "exapted" in the lingo) to do something in a later generation that they didn't do -- or which wasn't as critical -- in an earlier generation. (Or in a contemporary homologue.)

After all, if "the" purpose of feathers is to keep warm in one generation, how can "the" purpose of those feathers be to fly in another? If "the" purpose of a limb is to swim in one generation, how can it be to walk in another?

I didn't like the "No Teleology!" link much - it seemed pointless.

Why? Gene Callahan seems to be arguing the same thing you are about biology. At the end of the exchange, bestquest describes the sense in which "the function of the heart is to pump blood", and then Callahan erroneously dismisses that phrasing as being teleological.

comment by timtyler · 2010-08-07T11:14:04.330Z · LW(p) · GW(p)

As Wright says, organisms are not that harmonious anyway. At every polymorphic locus, two alleles are engaged in a battle-to-the-death.

There is still enough cooperation for people to describe the resulting behaviour as "purposeful" - despite all the underlying replicator-level conflicts that produce it.

comment by Alexandros · 2010-08-08T13:00:13.718Z · LW(p) · GW(p)

By replacing 'has purpose X' with 'is suitable for X', a lot of Wright's points become meaningless.

That said, I am also unsure about Eliezer's argument for purposelessness from competing designs.

comment by Clippy · 2010-08-07T14:53:49.548Z · LW(p) · GW(p)

I like that one of the humans acknowledged the existence of paperclip maximizers (around 7:50).

Replies from: JamesAndrix, James_K
comment by JamesAndrix · 2010-08-08T04:16:04.863Z · LW(p) · GW(p)

Why do you like it? How does it maximize paperclips?

Replies from: Clippy
comment by Clippy · 2010-08-08T17:48:22.435Z · LW(p) · GW(p)

Well, it's hard to tell people about the importance of paperclips when they don't even know that some of us find them really important in the first place.

comment by James_K · 2010-08-08T08:12:20.219Z · LW(p) · GW(p)

It seems to me your task would be easier if people didn't acknowledge paperclip maximisers. Humans aren't going to start maximising paperclips, so your best odds are to do an end run around us. That's easier to do if we aren't aware of your existence.

Replies from: None, katydee
comment by [deleted] · 2010-08-09T07:43:43.120Z · LW(p) · GW(p)

People have been involved in the production of every currently existing paperclip.

comment by katydee · 2010-08-08T23:22:51.456Z · LW(p) · GW(p)

Keep in mind that people could, in theory, be compelled or incentivized to maximize paperclips, or at least to consider paperclips much more important than they are now.

comment by timtyler · 2010-08-07T10:07:00.928Z · LW(p) · GW(p)

Wright gets a bit excited towards the end. It has some entertainment value - but tends to interfere with the discussion a little. It would have helped if he could have read some EY.

comment by blogospheroid · 2010-09-09T07:24:33.387Z · LW(p) · GW(p)

During the dialogue, Eliezer wanted Robert to distinguish between the "accident hypothesis" and the non-zero hypothesis. He also mentioned that he would see the difference between the two by Solomonoff induction, as in the shortest computer program that can output the result seen.

Now, any accident hypothesis involves a random number function, right?

The best random number functions are those that either go beyond the matrix or are very long.

So, does Solomonoff induction imply that an intelligent designer is the better hypothesis once the length of the random function exceeds the length sufficient to generate a general intelligence (say humans)?

Which would imply that from studying the randomness of nature and the nature of intelligence, we can figure out some day whether we are in a purposeful or a random universe. Is this correct?
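
A rough sketch of the comparison being asked about here, under a Solomonoff-style 2^-length prior (the bit counts below are placeholders invented for the example, not estimates of anything real):

```python
# Solomonoff induction weights each program that reproduces the observations
# by roughly 2^-(its length in bits), so the shorter program dominates the prior.

# Placeholder lengths, made up for illustration:
accident_hypothesis_bits = 5000   # base physics + the random bits needed to single out the observed outcome
designer_hypothesis_bits = 3000   # base physics + a program implementing a general optimizer

# Comparing lengths directly avoids underflowing floats with 2.0 ** -5000.
favored = "designer" if designer_hypothesis_bits < accident_hypothesis_bits else "accident"
print(f"With these made-up lengths, the shorter program -- the '{favored}' hypothesis -- "
      f"gets the larger 2^-length prior weight.")
```

That is essentially the question posed above: once the random bits needed to single out the observed outcome exceed the bits needed to specify a general intelligence, the "designer" program is the shorter one and dominates the prior.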

comment by Ivan_Tishchenko · 2010-08-15T10:25:20.165Z · LW(p) · GW(p)

Well, for me, there was only emotional disagreement between RW and EY. And EY's explanation did not make it through completely to RW.

To summarize the second part of the video:

RW: Can it be that evolution of the Earth biosphere is purposeful? EY: Yes, but that's very improbable.

That's it. Isn't it?

And by the way, RW was making a very good argument! I saw that when I finally understood what RW was talking about in trying to compare a fox to the Earth. Because, you see, I too do not see that much of a difference between them -- provided that we agree on his conditions:

  • a single fox is presented to a viewer (that is, one fertilized cell, which starts to replicate right away)

  • viewer is completely ignorant of natural context/surroundings in which this fox lives on Earth

  • viewer does not even know it is from Earth or whatever else; viewer is not provided any information about this cell whatsoever

  • well, we would have to somehow provide this fox with oxygen and food, etc. -- let's imagine we managed to do it without exposing much of its natural surroundings

Now, thinking of a fox this way, I can see that a fox and the Earth are very alike.

  • both start with something simpler (one cell vs. single-cell organisms),

  • both eventually grow into something more and more complex (fox vs. current biosphere),

  • both consist of various tissues and "particles", which are in turn quite complex things in themselves

And EY's argument, about "particles" in a fox not "eating" each other while particles on Earth (foxes) do eat other particles -- it comes from our subconscious: we feel that it's bad since the rabbit dies, so we clearly see a distinction, since within our own organism nothing actually "dies" in that sense. But if we look at the Earth as a single organism, we can think of this event (a fox eating a rabbit) as an exact analogy of "blood eating oxygen and then transferring it to muscles" -- except that with blood it is straightforward, and with the biosphere the food chain has many more nodes.

So, I am trying to interpret here, but another thing I think EY meant in the video is: purposefulness of the biosphere (or panspermia, to that effect) may well be the case, but from the point of view of our current body of knowledge, this idea just replaces existing hard-to-answer questions with other equally hard-to-answer questions.

I guess, if humans had a chance to view the Earth with its complete context, or even better -- to see the lives (and deaths) of other similar biospheres, just as we saw many lives of many foxes and other organisms on Earth -- then we would be able to judge it as purposeful or not, and the panspermia hypothesis would not be so useless.

comment by timtyler · 2010-08-07T09:32:41.311Z · LW(p) · GW(p)

...and what we end up doing with all the galaxies we see in our telescopes - assuming there's no one out there - which seems to be the case. - 24:30

There aren't any aliens in all the visible galaxies?!? I thought we were likely to see a universe with many observers in it. What gives?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-08-08T14:43:45.426Z · LW(p) · GW(p)

Our universe does seem to have infinitely many observers in it but that doesn't necessarily mean it has to have a particularly high density of them. It instead indicates that particularly densely populated universes are unlikely for some other reason (e.g. uFAI or other planet-wide or lightcone-wide existential risks). Alternatively, it could be that for some reason the computation 'Earth around roughly 2010' includes a disproportionately large amount of the measure of agents in the timtyler reference class. Perhaps we third millennium human beings are a particularly fun bunch to simulate and stimulate.

comment by CarlShulman · 2010-08-07T08:47:51.391Z · LW(p) · GW(p)

You needed to raise observer selection effects: the laws of physics and conditions on Earth are pretty favorable compared to alternatives for the development of intelligence. And of course intelligent observers would be most common in regions of the multiverse with such conditions, and the Fermi Paradox, at least, tells us that Earth is unusually favorable to the development of intelligent life among planets in our galaxy.

Had that been explained and terms made clear, then I think the disagreement could have been made clear, but without it you were just talking at cross-purposes.

In this article Wright makes it fairly clear that this is just a typical anthropic design argument, invoking design rather than observer selection effects.

comment by Gotcha · 2010-08-07T08:44:04.405Z · LW(p) · GW(p)

Bob badgered Dan Dennett to get an "admission" of design/purpose some years ago, and has regularly cited it (with misleading context) for years. One example in this comment thread.

comment by SforSingularity · 2010-08-07T22:31:41.165Z · LW(p) · GW(p)

I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.

The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)

The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't create you.

When Robert Wright looks at evolution and sees purpose in the existence of the process of evolution itself (and the particular way it happened to play out, including increasing complexity), he is seeing the evidence for anthropics and big worlds.

Once you take away all the meta-purpose that is caused by anthropics, then I really do think there is no more purpose left. Eli should re-do the debate with this insight on the table.

(note 1) (including that evolution on Earth happened to create intelligence, which seems to be a highly unlikely outcome of a generic biochemical replicator process on a generic planet; we know this because Earth managed to have life for 4 billion years -- half of its total viability as a place for life -- without intelligence emerging, and said intelligence seemed to depend in an essential way on a random asteroid impact at approximately the right moment)

Replies from: RobinZ, zero_call, timtyler
comment by RobinZ · 2010-08-07T23:18:53.861Z · LW(p) · GW(p)

The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)

The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't create you.

That's not what "purpose" means.

Replies from: timtyler
comment by timtyler · 2010-08-08T07:46:18.487Z · LW(p) · GW(p)

The discussion is about what "purpose" means - in the context of designoid systems.

I for one am fine with attributing "purpose" to designoid entities that were created by an anthropic selective process - rather than by evolution and natural selection.

Replies from: RobinZ
comment by RobinZ · 2010-08-08T16:38:58.056Z · LW(p) · GW(p)

I guess I can see that.

comment by zero_call · 2010-08-08T19:07:39.727Z · LW(p) · GW(p)

I don't think this is much of an insight, to be honest. The "anthropic" interpretation is a statement that the universe requires self-consistency. Which is, let's say, not surprising.

The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)

My feeling is that this is a statement about the English language. This is not a statement about the universe.

comment by timtyler · 2010-08-08T07:52:34.686Z · LW(p) · GW(p)

Once you take away all the meta-purpose that is caused by anthropics, then I really do think there is no more purpose left.

There's also the possibility of "the adapted universe" idea - as laid out by Lee Smolin in "The Life of the Cosmos" and James Gardner in "Biocosm" and "Intelligent-Universe".

Those ideas may face some Occam pruning - but they seem reasonably sensible. The laws of the universe show signs of being a complex adaptive system - and anthropic selection is not the only possible kind of selection effect that could be responsible for that. There could fairly easily be more to it than anthropic selection.

Then there's Simulism...

I go into the various possibilities in my "Viable Intelligent Design Hypotheses" essay:

http://originoflife.net/intelligent_design/

Robert Wright has produced a broadly similar analysis elsewhere.