Observing Optimization
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-21T05:39:25.000Z · 28 comments
Followup to: Optimization and the Singularity
In "Optimization and the Singularity" I pointed out that history since the first replicator, including human history to date, has mostly been a case of nonrecursive optimization - where you've got one thingy doing the optimizing, and another thingy getting optimized. When evolution builds a better amoeba, that doesn't change the structure of evolution - the mutate-reproduce-select cycle.
But there are exceptions to this rule, such as the invention of sex, which affected the structure of natural selection itself - transforming it to mutate-recombine-mate-reproduce-select.
I was surprised when Robin, in "Eliezer's Meta-Level Determinism", took that idea, ran with it, and said:
...his view does seem to make testable predictions about history. It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases. It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science. And it suggests other rate increases were substantially smaller.
It hadn't occurred to me to try to derive that kind of testable prediction. Why? Well, partially because I'm not an economist. (Don't get me wrong, it was a virtuous step to try.) But also because the whole issue looked to me like it was a lot more complicated than that, so it hadn't occurred to me to try to directly extract predictions.
What is this "capability growth rate" of which you speak, Robin? There are old, old controversies in evolutionary biology involved here.
Just to start by pointing out the obvious - if there are fixed resources available, only so much grass to be eaten or so many rabbits to consume, then any evolutionary "progress" that we would recognize as producing a better-designed organism, may just result in the displacement of the old allele by the new allele - not any increase in the population as a whole. It's quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old.
If I were going to talk about the effect that a meta-level change might have on the "optimization velocity" of natural selection, I would talk about the time for a new adaptation to replace an old adaptation after a shift in selection pressures - not the total population or total biomass or total morphological complexity (see below).
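(As a minimal sketch of what "time to replace an old adaptation" can mean quantitatively: in the standard deterministic haploid selection model, where an allele at frequency p obeys dp/dt = s·p(1 − p), the replacement time depends only on the selection coefficient s and on how rare and how common you require the allele to be at the endpoints. The cutoff frequencies below are arbitrary illustrative choices.)

```python
import math

def substitution_time(s, p0=0.01, p1=0.99):
    """Generations for an allele to rise from frequency p0 to p1 in the
    deterministic haploid selection model dp/dt = s * p * (1 - p)."""
    return (1.0 / s) * math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))

print(round(substitution_time(0.01)))   # ~919 generations at a 1% advantage
print(round(substitution_time(0.10)))   # ~92 generations at a 10% advantage
```

Note that this measures the sweep itself, not the waiting time for the new mutation to show up in the first place.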
Likewise in human history - farming was an important innovation for purposes of optimization, not because it changed the human brain all that much, but because it meant that there were a hundred times as many brains around; and even more importantly, that there were surpluses that could support specialized professions. But many innovations in human history may have consisted of new, improved, more harmful weapons - which would, if anything, have decreased the sustainable population size (though "no effect" is more likely - fewer people means more food means more people).
Or similarly: there's a talk somewhere where either Warren Buffett or Charles Munger mentions how they hate to hear about technological improvements in certain industries - because even if investing a few million can cut the cost of production by 30% or whatever, the barriers to competition are so low that the consumer captures all the gain. So they have to invest to keep up with competitors, and the investor doesn't get much return.
I'm trying to measure the optimization velocity of information, not production or growth rates. At the tail end of a very long process, knowledge finally does translate into power - guns or nanotechnology or whatever. But along that long way, if you're measuring the number of material copies of the same stuff (how many wolves, how many people, how much grain), you may not be getting much of a glimpse at optimization velocity. Too many complications along the causal chain.
And this is not just my problem.
Back in the bad old days of pre-1960s evolutionary biology, it was widely taken for granted that there was such a thing as progress, that it proceeded forward over time, and that modern human beings were at the apex.
George Williams's Adaptation and Natural Selection, marking the so-called "Williams Revolution" in ev-bio that flushed out a lot of the romanticism and anthropomorphism, spent most of one chapter questioning the seemingly common-sensical metrics of "progress".
Biologists sometimes spoke of "morphological complexity" increasing over time. But how do you measure that, exactly? And at what point in life do you measure it if the organism goes through multiple stages? Is an amphibian more advanced than a mammal, since its genome has to store the information for multiple stages of life?
"There are life cycles enormously more complex than that of a frog," Williams wrote. "The lowly and 'simple' liver fluke..." goes through stages that include a waterborne stage that swims using cilia; finds and burrows into a snail and then transforms into a sporocyst; that reproduces by budding to produce redia; that migrate in the snail and reproduce asexually; then transform into cercaria, that, by wiggling a tail, burrows out of the snail and swims to a blade of grass; where they transform into dormant metacercaria; that are eaten by sheep and then hatch into a young fluke inside the sheep; then transform into adult flukes; which spawn fluke zygotes... So how "advanced" is that?
Williams also pointed out that there would be a limit to how much information evolution could maintain in the genome against degenerative pressures - which seems like a good principle in practice, though I made some mistakes on OB in trying to describe the theory. Taxonomists often take a current form and call the historical trend toward it "progress", but is that upward motion, or just substitution of some adaptations for other adaptations in response to changing selection pressures?
"Today the fishery biologists greatly fear such archaic fishes as the bowfin, garpikes , and lamprey, because they are such outstandingly effective competitors," Williams noted.
So if I were talking about the effect of e.g. sex as a meta-level innovation, then I would expect e.g. an increase in the total biochemical and morphological complexity that could be maintained - the lifting of a previous upper bound, followed by an accretion of information. And I might expect a change in the velocity of new adaptations replacing old adaptations.
But to get from there, to something that shows up in the fossil record - that's not a trivial step.
I recall reading, somewhere or other, about an ev-bio controversy that ensued when one party spoke of the "sudden burst of creativity" represented by the Cambrian explosion, and wondered why evolution was proceeding so much more slowly nowadays. And another party responded that the Cambrian differentiation was mainly visible post hoc - that the groups of animals we have now first differentiated from one another then, but that at the time the differences were not as large as they loom nowadays. That is, the actual velocity of adaptational change wasn't remarkable by comparison to modern times, and only hindsight causes us to see those changes as "staking out" the ancestry of the major animal groups.
I'd be surprised to learn that sex had no effect on the velocity of evolution. It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation. But to go from there, to just looking at the fossil record and seeing faster progress - it's not just me who thinks that this jump to phenomenology is tentative, difficult, and controversial.
Should you expect more speciation after the invention of sex, or less? The first impulse is to say "more", because sex seems like it should increase the optimization velocity and speed up time. But sex also creates mutually reproducing populations, that share genes among themselves, as opposed to asexual lineages - so might that act as a centripetal force?
I don't even propose to answer this question, just point out that it is actually quite standard for the phenomenology of evolutionary theories - the question of which observables are predicted - to be a major difficulty. Unless you're dealing with really easy qualitative questions like "Should I find rabbit fossils in the pre-Cambrian?" (I try to only make predictions about AI, using my theory of optimization, when it looks like an easy question.)
Yes, it's more convenient for scientists when theories make easily testable, readily observable predictions. But when I look back at the history of life, and the history of humanity, my first priority is to ask "What's going on here?", and only afterward see if I can manage to make non-obvious retrodictions. I can't just start with the goal of having a convenient phenomenology.
Or similarly: the theories I use to organize my understanding of the history of optimization to date have lots of parameters, e.g. the optimization-efficiency curve that describes optimization output as a function of resource input, or the question of how many low-hanging fruit exist in the neighborhood of a given search point. Does a larger population of wolves increase the velocity of natural selection, by covering more of the search neighborhood for possible mutations? If so, is that a logarithmic increase with population size, or what? - But I can't just wish my theories into being simpler.
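(One way to poke at that last question - purely as a toy sketch, not anything load-bearing - is to simulate an asexual Wright-Fisher population on an additive fitness landscape and watch how the per-generation fitness gain scales as the population grows by factors of ten. Everything below, from the genome length to the selection coefficient, is an invented parameter chosen only to make the run fast.)

```python
import numpy as np

rng = np.random.default_rng(0)

def bits_gained_per_gen(pop_size, genome_len=1000, mut_rate=1e-4,
                        s=0.05, gens=120):
    """Toy asexual Wright-Fisher run on an additive landscape where fitness
    is 1 + s * (number of '1' bits).  Returns mean bits gained per generation."""
    pop = np.zeros((pop_size, genome_len), dtype=bool)
    for _ in range(gens):
        fit = 1.0 + s * pop.sum(axis=1)
        parents = rng.choice(pop_size, size=pop_size, p=fit / fit.sum())
        pop = pop[parents]                           # selection + drift
        pop ^= rng.random(pop.shape) < mut_rate      # point mutations
    return pop.sum(axis=1).mean() / gens

for n in (50, 500, 5000):
    print(f"N = {n:>4}: ~{bits_gained_per_gen(n):.2f} bits per generation")
```

In runs of this kind the returns to sheer population size tend to be strongly sublinear - more wolves do cover more of the mutational neighborhood, but the marginal wolf buys less and less - which is exactly the shape of question the "logarithmic or what?" aside is pointing at.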
If Robin has a simpler causal model, with fewer parameters, that stands directly behind observables and easily coughs up testable predictions, which fits the data well, and obviates the need for my own abstractions like "optimization efficiency" -
- then I may have to discard my own attempts at theorizing. But observing a series of material growth modes doesn't contradict a causal model of optimization behind the scenes, because such a series is a pure phenomenology, not itself a causal model - it doesn't say whether a given innovation had any effect on the optimization velocity of the process that produced future object-level innovations that actually changed growth modes, etcetera.
28 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Will_Pearson · 2008-11-21T10:37:57.000Z · LW(p) · GW(p)
Should you expect more speciation after the invention of sex, or less?
What meaning of species are you using? Under some definitions speciation doesn't make sense without sex... as a species is defined as a group of organisms capable of interbreeding.
comment by Eric5 · 2008-11-21T13:18:14.000Z · LW(p) · GW(p)
I kinda like this whole back and forth between Robin and Eliezer. At the same time... The blog post v. blog post style almost feels like being a child stuck between two divorced parents who are ignoring each other but keep saying things very loudly so the other one can hear ;-)
comment by Will_Pearson · 2008-11-21T14:43:21.000Z · LW(p) · GW(p)
Bacteria have a form of sex. It is horizontal rather than vertical biological information transfer though. I don't know what the limits of the transfer are, but it might make sense to talk about groups of bacteria that can transfer genes between them as a species. Evolution is not a simple thing.
comment by Vladimir_Slepnev · 2008-11-21T15:08:11.000Z · LW(p) · GW(p)
Eric, it's more amusing that both often cite a theorem that agreeing to disagree is impossible. And even more amusing that in "Nature of Logic" Eliezer practically explained agreeing to disagree: our mind is more cognition than logic. Eliezer and Robin generalize from facts to concepts differently, which leads them to different predictions. When they try using logic to reconcile, logic kinda bottoms out at the concepts and there doesn't seem to be any way out except to test both theories. The argument goes on because both are polite and respectful, but it doesn't seem to shed any light.
(I apologize to the hosts for harping on the same topic repeatedly.)
comment by RobinHanson · 2008-11-21T15:42:23.000Z · LW(p) · GW(p)
If you can't usefully connect your abstractions to the historical record, I sure hope you have some data you can connect them to. Otherwise I can't imagine how you could have much confidence in them.
comment by Jeff2 · 2008-11-21T19:43:29.000Z · LW(p) · GW(p)
Eli: "I'd be surprised to learn that sex had no effect on the velocity of evolution. It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation."
Without crossover, the average rate of fitness gain with optimal mutation rates is 1/2 bit per genome per generation, and the maximum tolerable error rate is one error per genome per generation. For a fixed error probability m of each bit being flipped in reproduction, the largest possible genome size is of order 1/m.
With crossover, both the average rate of fitness gain and the tolerable number of errors per generation are of order the square root of the genome size in bits; the largest possible genome size is of order 1/(m^2).
So yes, sex has an effect, and it's enormous. Rather than work out the math here and probably get it wrong, I'll point to an excellent (and downloadable) textbook that discusses the issue: MacKay's "Information Theory, Inference, and Learning Algorithms" at http://www.inference.phy.cam.ac.uk/mackay/itila/ . Chapter 19 contains the discussion of sexual vs. asexual evolution.
--Jeff
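(A toy run in the spirit of the model MacKay analyzes - the genome length, population size, and mutation rate below are invented for illustration rather than taken from the book - makes the gap Jeff describes easy to see: on the same additive fitness function, the recombining population climbs far faster than the mutation-only one.)

```python
import numpy as np

# Toy comparison of asexual vs. recombining evolution on an additive
# "fitness = number of 1 bits" landscape.  All parameters are illustrative.
rng = np.random.default_rng(0)
G, N, GENS = 1024, 200, 80

def truncation_select(parents, kids):
    """Keep the fitter half of parents + offspring."""
    both = np.vstack([parents, kids])
    keep = np.argsort(both.sum(axis=1))[-len(parents):]
    return both[keep]

def asexual_gen(pop, mut_rate):
    kids = pop ^ (rng.random(pop.shape) < mut_rate)   # mutation only
    return truncation_select(pop, kids)

def sexual_gen(pop, mut_rate):
    n = len(pop)
    moms = pop[rng.integers(0, n, n)]
    dads = pop[rng.integers(0, n, n)]
    mask = rng.random(pop.shape) < 0.5                # uniform crossover
    kids = np.where(mask, moms, dads)
    kids ^= rng.random(pop.shape) < mut_rate
    return truncation_select(pop, kids)

pop_a = pop_s = rng.random((N, G)) < 0.5              # start near 50% fitness
for _ in range(GENS):
    pop_a = asexual_gen(pop_a, 1.0 / G)               # ~1 mutation per genome
    pop_s = sexual_gen(pop_s, 1.0 / G)
print("asexual mean fitness:", pop_a.sum(axis=1).mean(), "/", G)
print("sexual  mean fitness:", pop_s.sum(axis=1).mean(), "/", G)
```

With crossover the offspring fitness spread stays wide, so truncation selection can harvest many bits per generation; without it, the population quickly collapses onto a few lineages and progress is limited to roughly one mutation's worth per generation.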
comment by Tim_Tyler · 2008-11-21T19:51:30.000Z · LW(p) · GW(p)
It might be difficult to measure evolutionary progress, but that is no excuse for ditching the whole concept:
Evolution is an optimisation process - similar to a genetic algorithm. We even know what the fitness function is - and so can see what is being optimised. Optimisation processes are - by their very nature - powerfully directional processes. There is a way in which such a process could fail to be directional - if it had too large a mutation rate. That might happen if our planet were extremely radioactive, or if it were frequently bombarded by large meteorites. However, in practice, neither of these things applies.
A brief examination of the history of life shows that it is characterised by an accumulation of "survival technology" - i.e. either adaptations or technology. In particular, the biosphere is accumulating "natural technology" that helps it more rapidly identify sources of potential energy - and degrade them in the process of constructing offspring. The accumulation is cumulative and inexorable - and the resulting ratchet mechanism gives evolution a powerfully progressive, directional character.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-21T19:52:31.000Z · LW(p) · GW(p)
If you can't usefully connect your abstractions to the historical record, I sure hope you have some data you can connect them to. Otherwise I can't imagine how you could have much confidence in them.
Depends on how much stress I want to put on them, doesn't it? If I want to predict that the next growth curve will be an exponential and put bounds around its doubling time, I need a much finer fit to the data than if I only want to ask obvious questions like "Should I find rabbit fossils in the pre-Cambrian?" or "Do the optimization curves fall into the narrow range that would permit a smooth soft takeoff?"
comment by Caledonian2 · 2008-11-21T20:06:14.000Z · LW(p) · GW(p)
Eric, it's more amusing that both often cite a theorem that agreeing to disagree is impossible.
It's only impossible for rational Bayesians, which neither Hanson nor Yudkowsky are. Or any other human beings, for that matter.
comment by RobinHanson · 2008-11-21T20:11:55.000Z · LW(p) · GW(p)
Eliezer, it seems to me that we can't really debate much more until you actually directly make your key argument. If, as it seems to me, you are still in the process of laying out your views tutorial-style, then let's pause until you feel ready.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-21T20:31:40.000Z · LW(p) · GW(p)
I think we ran into this same clash of styles last time (i.e. back at Oxford). I try to go through things systematically, locate any possible points of disagreement, resolve them, and continue. You seem to want to jump directly to the disagreement and then work backward to find the differing premises. I worry that this puts things in a more disagreeable state of mind, as it were - conducive to feed-backward reasoning (rationalization) instead of feed-forward reasoning.
It's probably also worth bearing in mind that these kinds of meta-discussions are important, since this is something of a trailblazing case here. And that if we really want to set up conditions where we can't agree to disagree, that might imply setting up things in a different fashion than the usual Internet debates.
comment by RobinHanson · 2008-11-21T20:52:04.000Z · LW(p) · GW(p)
When I attend a talk, I don't immediately jump on anything a speaker says that sounds questionable. I wait until they actually make a main point of their talk, and then I only jump on points that seem to matter for that main point. Since most things people say actually don't matter for their main point, I find this to be a very useful strategy. I will be very surprised indeed if everything you've said mattered regarding our main point of disagreement.
comment by HalFinney · 2008-11-21T20:53:31.000Z · LW(p) · GW(p)
You guys should keep in mind the nature of the disagreement results. You don't have to go through all these details to reach agreement. The mere fact that each of you holds to his view even in light of the full knowledge that the other not only disagrees, but disagrees knowing that your own position is very different, should send you a message that you are very possibly - one might even say very probably - wrong. You can reach agreement very quickly once you acknowledge this fact. I hope you are both willing at least to say that the other's contrary stance tells you that there is a good likelihood that you are wrong.
Now, the procedure you are following, of laying out your arguments and premises and thinking processes, is likely to lead to a higher quality agreement than merely agreeing to agree. But there's no reason you can't agree first, and then mutually explore the issues second. Agreement does not have to be the end point of the discussion - I think that is something which Robin has emphasized many times.
The only explanation I can see for your having failed to agree is that you each suspect that the other is being irrational. It might be impolite, but if that is your reason, perhaps you could be open about those judgements and make progress from there. Or if there is some other reason why the other person's stubbornness is not as convincing as it should be, it might help to explain why. But I don't see much point in continuing to go over the reasons for your beliefs; those are irrelevant. The mere fact of contrary belief held by a rational individual should be enough.
comment by RobinHanson · 2008-11-21T21:15:10.000Z · LW(p) · GW(p)
Hal, I must consider the opinions of everyone on this subject, weighed by their expertise. I'd love to see more concrete data, but it seems to me that Eliezer is a bit of an outlier on this, and robustly so for a variety of ways to weigh expertise.
comment by Phil_Goetz5 · 2008-11-21T21:27:08.000Z · LW(p) · GW(p)
If I want to predict that the next growth curve will be an exponential and put bounds around its doubling time, I need a much finer fit to the data than if I only want to ask obvious questions like..."Do the optimization curves fall into the narrow range that would permit a smooth soft takeoff?"
This implies that you have done some quantitative analysis giving a probability distribution of possible optimization curves, and finding that only a low-probability subset of that distribution allows for soft takeoff.
Presenting that analysis would be an excellent place to start.
comment by Caledonian2 · 2008-11-21T21:34:42.000Z · LW(p) · GW(p)
I hope you are both willing at least to say that the other's contrary stance tells you that there is a good likelihood that you are wrong.
If Robin knows that Eliezer believes there is a good likelihood that Eliezer's position is wrong, why would Robin then conclude that his own position is likely to be wrong? And vice versa?
The fact that Eliezer and Robin disagree indicates one of two things: either one possesses crucial information that the other does not, or at least one of the two has made a fatal error.
The disagreement stems from the fact that each believes the other to have made the fatal error, and that their own position is fundamentally sound.
comment by HalFinney · 2008-11-21T23:04:57.000Z · LW(p) · GW(p)
Robin, I agree that everyone's opinion on this matter is certainly relevant information which you should incorporate. And arguably Eliezer's position is an outlier. But you are not actively interacting this week with the AI expert community (I assume). You are interacting with Eliezer. He is the one who knows your opinion and knows you have not changed it despite knowing his opinion, and so on ad infinitum. It is the two of you who are interacting and the two of you who must agree.
Everyone else's opinions are relevant data which will influence both of your initial estimates, to be sure. But that is the extent of that influence. The fact that everyone else holds a different position than one of you does not prevent you from coming to agreement. Assuming that Eliezer is aware of his differences with other experts, the fact that he still holds to his view ought to be even more persuasive to you that he has good grounds for his position. And if somehow you thought that Eliezer was not aware of his outlying position, your own stubbornness on these grounds would have communicated to Eliezer that you have persuasive private information that he is wrong. And again in that case, the fact that he has not changed his mind despite your persistence must again imply that his own reasons for his belief are robust and powerful.
I can understand why you might be puzzled as to how Eliezer can reach such a different view than the AI expert community, and with such confidence, but fundamentally that is irrelevant. The fact remains that regardless of how reliable you think Eliezer's conclusions are a priori, once you interact with him and he remains set on his position, as long as you see each other as rational (and meta-rational), you must come to agree.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-21T23:07:40.000Z · LW(p) · GW(p)
Hal, I don't think my belief in simply the intelligence explosion is an outlier in the AI community. My detailed explanation is my own; my beliefs about theory-driven AGI are if anything more unusual than my belief in simply the intelligence explosion; my concern for Friendliness is not so much outlier as something that others haven't thought much about.
comment by HalFinney · 2008-11-22T00:00:01.000Z · LW(p) · GW(p)
Eliezer, that's actually good news, as it sounds like whether you are an outlier or not is something else that you and Robin disagree on. Now it's not 100% clear which exact position it is that you are potentially an outlier on, but I gather that it would include whether the chance that initially hand-coded AI may reach a dangerous potentiality within this century is closer to 70% or 1%. The good part is that your outlier-hood is probably something that could be empirically checked pretty easily, by asking a bunch of AI researchers.
Now, from the point of view of exploring how to overcome disagreement this is a bit of a cheat, like looking up the answer in the back of the book. It shouldn't be necessary to be bothered by anything as mundane as the facts, in order to reach agreement on this issue or any other. But if it happens that facts are available, maybe you should use them. And since one of you would be surprised by the result, it might lead to progress on your larger and more important issue.
comment by RobinHanson · 2008-11-22T01:53:08.000Z · LW(p) · GW(p)
Hal, you do not know that I have not changed my opinion since learning of Eliezer's opinion, and I do not assume that he has not changed his opinion. A large fraction of all those other people with opinions are not changing them even though they are aware of others' opinions. I do give Eliezer's opinion more weight but even so he seems, according to my informal survey, to be an outlier on the need for friendliness.
We can practice disagreeing and understanding its rationality on most any topic. But since this topic is important, my focus here is on gaining a better understanding of it.
comment by HalFinney · 2008-11-22T06:32:25.000Z · LW(p) · GW(p)
That's true, Robin, I did not consider that you guys might be updating your opinions as you go along.
I can understand your interest in pursuing the truth of the matter rather than just rushing to a declaration of agreement. Still I think there is a problem when you, a prominent advocate of the no-disagreement results, appear to have a persistent disagreement with a seemingly intelligent and rational person like Eliezer. It tends to discredit the disagreement theorems. If the truth is that your disagreement is more apparent than actual, and is in the process of being resolved, I think it would be beneficial to make note of that.
It is bad luck that this one issue where you and Eliezer have at least the appearance of persistent disagreement is also one which you both view as important, hence a matter where you prefer to devote your energy to airing the issues rather than working through the mechanics of agreement. It means that the dialog ends up being not much different from that of two people who are unfamiliar with the disagreement results. I was hoping, given the history of this forum, that you two might choose to model how agreement can be reached quickly, while maintaining vigorous discussion of the issues.
comment by Tim_Tyler · 2008-11-22T13:05:20.000Z · LW(p) · GW(p)
The only explanation I can see for your having failed to agree is that you each suspect that the other is being irrational.
IMO, a more obvious explanation is that one or both parties does not have believing the truth (and helping others to believe it) as their primary aim.
Robin seems to have said his aim is to believe the truth. The last I heard, Eliezer's aim was to reach something called "the singularity" as fast as possible.
So, Robin claims to be a pure truth seeker - though AFAIK, he hasn't explicitly stated that he is interested in helping others sort out their views. Eliezer doesn't claim to be a pure truth seeker in the first place.
I suspect that Robin does not actually value the truth that much. Four billion years of his ancestors valuing other things probably bears on him substantially. For example, Robin has stated: "I think about sex an awful lot." I don't see what that has got to do with believing the truth. I suspect that believing the truth is something Robin consciously aspires to do - and isn't what an unbiased economic analysis of him would uncover as his actual aim.
So: the premises of the disagreement theorems seem unlikely to be satisfied.
Replies from: wallowinmaya, Luke_A_Somers
↑ comment by David Althaus (wallowinmaya) · 2011-05-22T11:44:30.120Z · LW(p) · GW(p)
Eliezer doesn't claim to be a pure truth seeker in the first place.
Are you kidding? If you go to his website the first thing you'll see is a big, fat quote: "That which can be destroyed by the truth should be." Come on, if you read only a few lines of Eliezer you'll see that he is a "pure truth seeker", or at least he claims to be one. From that I have to infer that you are not a "pure truth seeker" or you are simply not interested in Eliezer's opinions.
↑ comment by Luke_A_Somers · 2012-11-12T15:19:11.403Z · LW(p) · GW(p)
IMO, a more obvious explanation is that one or both parties does not have believing the truth (and helping others to believe it) as their primary aim.
... you take this over the idea that with their vastly different stores of theory and practical knowledge, they can reach drastically different conclusions on something outside of both of their domains of direct experience, while still processing as something reasonably approximating rationality?
comment by HalFinney · 2008-11-22T18:50:18.000Z · LW(p) · GW(p)
Tim, I've asked Eliezer about this now and then in various comments, and he has always maintained that he is being sincere and honest in his writings on the singularity. Recently he had a series of postings on the importance of honesty, such as Protected from Myself. The mere fact that he has put so much time and energy into working on this issue over many years is strong evidence that he sincerely believes that it is a real possibility, for good or evil. So I don't see any particular reason to discount his recent estimate of at least a 70% chance that AI could become super-intelligent within 100 years. Likewise with Robin's initial estimate of less than 1% for this event; I don't see any reason why he wouldn't be reporting that honestly.
For the disagreement results to hold, the participants don't have to be perfect truth-tellers; they just need to honestly give their opinions on the issue in question. Now Robin hints above that he may be revising his estimate, and says he considers it possible that Eliezer's position is also shifting. Maybe 1% vs 70% is no longer the state of play. But if they do choose to offer their revised estimates at some time in the future, and if they were to continue to do so, the disagreement theorems would pretty well force them to agree within a few rounds, I think. If that didn't happen then yes, maybe they would be lying; more likely IMO is that they suspect that the other is lying; and most likely I still think is that they suspect the other is simply being unreasonable.
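(For what it's worth, here is a deliberately easy toy case of that convergence: a binary hypothesis, a common prior, and conditionally independent private evidence, so a single honest exchange of posteriors already forces agreement. The setup and all the numbers are invented for illustration; in the general Aumann/Geanakoplos setting the back-and-forth can take more rounds, and an announced posterior need not reveal this much about the underlying evidence.)

```python
# Toy Aumann-style exchange: two agents share a prior over a coin's bias,
# see private flips, and exchange only their posteriors (never the raw data).
# Because a posterior here pins down the agent's likelihood ratio under the
# common prior, one round of honest exchange suffices.  Numbers are made up.
PRIOR = 0.5                      # P(bias = 0.75); the alternative is 0.25

def likelihood_ratio(heads, flips):
    """P(data | bias=0.75) / P(data | bias=0.25) for i.i.d. coin flips."""
    return ((0.75 ** heads) * (0.25 ** (flips - heads))) / (
        (0.25 ** heads) * (0.75 ** (flips - heads)))

def posterior(lr):
    odds = PRIOR / (1 - PRIOR) * lr
    return odds / (1 + odds)

def implied_lr(post):
    """Recover the likelihood ratio an announced posterior implies."""
    return (post / (1 - post)) / (PRIOR / (1 - PRIOR))

lr_a = likelihood_ratio(heads=8, flips=10)   # agent A saw mostly heads
lr_b = likelihood_ratio(heads=3, flips=10)   # agent B saw mostly tails

post_a, post_b = posterior(lr_a), posterior(lr_b)
print("before exchange:", round(post_a, 3), "vs", round(post_b, 3))

# Each agent hears the other's posterior, backs out the implied evidence,
# and combines it with its own.  Both land on the same number.
print("after exchange: ",
      round(posterior(lr_a * implied_lr(post_b)), 3), "vs",
      round(posterior(lr_b * implied_lr(post_a)), 3))
```

The point being illustrated is just Hal's: with honesty and common priors, announcing updated estimates back and forth does the work, without anyone having to share the underlying arguments.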
comment by Caledonian2 · 2008-11-22T19:13:48.000Z · LW(p) · GW(p)
The mere fact that he has put so much time and energy into working on this issue over many years is strong evidence that he sincerely believes that it is a real possibility
Only if there are no other consequences of his actions that he desires. People working to forward an ideology don't necessarily believe the ideology they're selling - they only need to value some of the consequences of spreading it.
comment by Tim_Tyler · 2008-11-22T20:37:43.000Z · LW(p) · GW(p)
Eliezer Yudkowsky gave > 70% to "a key event occurring sometime in the next century".
Robin Hanson gave < 1% to "a hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful" and "a friendliness-needing event by 2108".
So far, ISTM that the participants haven't publicly established that they are talking about the same thing. Also, what they do seem to be talking about is rather vague. How key is "key"? What does "friendliness-needing" actually mean? How suddenly is "suddenly"? - and so on.
Robin seems to visualise uploaded humans running over the planet and taking everyone's jobs in the none-too-distant future. Does he think that is "friendly"? Does Eliezer agree about that?
As far as disagreement goes, what is the evidence that the participants are seeking the truth on this topic - or any other? Are we supposed to ignore the counsel of sociobiologists who tell us that such debates frequently revolve around social factors such as status, reputation, money, signalling - and so on?