Contra Yudkowsky on Epistemic Conduct for Author Criticism
post by Zack_M_Davis · 2023-09-13T15:33:14.987Z · LW · GW · 38 comments
In a comment on the Effective Altruism Forum [EA(p) · GW(p)] responding to Omnizoid's "Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong" [LW · GW], Eliezer Yudkowsky writes:
You will mark that in this comment I first respond to a substantive point and show it to be mistaken before I make any general criticism of the author; which can then be supported by that previously shown, initial, first-thing, object-level point. You will find every post of the Less Wrong sequences written the same way.
As the entire post violates basic rules of epistemic conduct by opening with a series of not-yet-supported personal attacks, I will not be responding to the rest in detail. I'm sad about how anything containing such an egregious violation of basic epistemic conduct got this upvoted, and wonder about sockpuppet accounts or alternatively a downfall of EA. The relevant principle of epistemic good conduct seems to me straightforward: if you've got to make personal attacks (and sometimes you do), make them after presenting your object-level points that support those personal attacks. This shouldn't be a difficult rule to follow, or follow much better than this; and violating it this hugely and explicitly is sufficiently bad news that people should've been wary about this post and hesitated to upvote it for that reason alone.
I agree that the dictum to refute an author's arguments before commenting on their character or authority is good writing advice, which I generally endeavor to follow. However, I argue that Yudkowsky errs in characterizing it as a "basic rule[ ] of epistemic conduct."
It seems to me that the reason "refutation first, character attacks only afterwards (if at all)" is good writing advice is that it guards against the all-too-human failure mode of previously intellectually fruitful conversations degenerating into ad hominem and name-calling, which are not intellectually fruitful.
When I'm debating someone about some subject of broader interest to the world—for example, stamp collecting—I want to keep the conversation's focus on the subject of interest [LW · GW]. If my interlocutor is untrustworthy, it might be worth arguing that to the audience in order to help them not be misled by my interlocutor's false claims about the subject of interest. But the relevance of the character claim to the debate needs to be clearly established. The mere truth of the claim "My interlocutor is untrustworthy" is no defense if the claim is off-topic (because argument screens off authority [LW · GW]). The audience doesn't care about either of us. They want to hear about the stamps!
(This is probably not the only reason to avoid personal attacks, but I think it's the important one.)
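For readers who want the "screens off" claim made concrete: it's a conditional-independence statement. Below is a minimal Python sketch of a toy chain model with made-up numbers (my illustration, not Yudkowsky's exact diagram): if an authority's endorsement reaches you only by way of the arguments the authority has seen, then once you have inspected the argument yourself, learning that the authority also endorses the claim should not move your probability at all.

```python
# Toy chain model: T (claim is true) -> V (a valid supporting argument can be
# exhibited) -> A (an authority endorses the claim, having seen such arguments).
# All numbers are made up purely for illustration.
from itertools import product

P_T = 0.5                      # prior probability that the claim is true
P_V = {True: 0.9, False: 0.2}  # P(valid argument exists | T)
P_A = {True: 0.8, False: 0.3}  # P(authority endorses | V)

def joint(t, v, a):
    """Joint probability of one assignment under the chain T -> V -> A."""
    p = P_T if t else 1 - P_T
    p *= P_V[t] if v else 1 - P_V[t]
    p *= P_A[v] if a else 1 - P_A[v]
    return p

def p_true(observed):
    """P(T=True | observed), where `observed` fixes some subset of V and A."""
    consistent = lambda v, a: all(observed.get(name, val) == val
                                  for name, val in (("V", v), ("A", a)))
    num = sum(joint(True, v, a) for v, a in product([True, False], repeat=2)
              if consistent(v, a))
    den = sum(joint(t, v, a) for t, v, a in product([True, False], repeat=3)
              if consistent(v, a))
    return num / den

print(p_true({"V": True}))             # ~0.82: you saw a valid argument
print(p_true({"V": True, "A": True}))  # ~0.82: endorsement adds nothing further
print(p_true({"A": True}))             # ~0.65: endorsement alone does move you
```

The particular figures are arbitrary, but the equality of the first two outputs is forced by the chain structure: conditional on the argument itself, the endorsement carries no additional information about the claim.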
However, sometimes the character or authority of an author is the subject of interest. This is clearly the case for Omnizoid's post. The post is not a derailment of already ongoing discussions of epiphenomenalism, decision theory, and animal consciousness. Rather, the central thesis that Omnizoid is trying to convince readers of is that Yudkowsky is frequently, confidently, egregiously wrong. The aim of the article (as Omnizoid explains in the paragraphs beginning with "The aim of this article [...]") is to discourage readers from deferring to Yudkowsky as an authority figure.
"Eliezer Yudkowsky is frequently, confidently, egregiously wrong" is a testable claim about the real world. It might be a claim of less broad interest to Society than the questions debated by students of decision theory, animal consciousness, or stamp collecting. (If someone told you that Mortimer Q. Snodgrass is frequently, confidently, egregiously wrong, you would ask, "Who is that? Why should I care?" I don't know, either.) Nevertheless, it is a claim that someone apparently found worthwhile to write a blog post about, and fair-minded readers should hold that post to the same standards as they would a post on any other testable claim about the real world.
Yudkowsky condemns Omnizoid's post as "violat[ing] basic rules of epistemic conduct by opening with a series of not-yet-supported personal attacks", citing an alleged "principle of epistemic good conduct" that personal attacks must be made "after presenting your object-level points that support those personal attacks." The conduct complaint seems to be not that Omnizoid fails to argue for their thesis, nor that the arguments are bad, but merely that the arguments appear later in the post. Yudkowsky seems pretty unambiguous in his choice of words on this point, writing "not-yet-supported", rather than "unsupported" or "poorly supported".
Frankly, this is bizarre. It's pretty common for authors to put the thesis statement first! If I wrote a blog post that said, "Gummed stamps are better than self-adhesive stamps; this is because licking things is fun", I doubt Yudkowsky would object and insist that I should have written, "Licking things is fun; therefore, gummed stamps are better than self-adhesive stamps." But why would the rules be different when the thesis statement happens to be a claim about an author rather than a claim about stamps?
(I could understand why humans might want rules that treat claims about humans differently from claims about other things [LW · GW]. So to clarify, when I ask, "Why would the rules be different?", I'm talking about the real rules—the epistemic rules [LW · GW].)
"You will find every post of the Less Wrong sequences written the same way," Yudkowsky writes, claiming to have observed his stated principle of good conduct. But this claim is potentially falsified by a November 2007 post by Yudkowsky titled "Beware of Stephen J. Gould" [LW · GW],[1] which opens with
If you've read anything Stephen J. Gould has ever said about evolutionary biology, I have some bad news for you. In the field of evolutionary biology at large, Gould's reputation is mud.
before describing Gould's alleged errors.
Indeed, "Beware of Stephen J. Gould" would seem to have essentially the same argument structure as "Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong".
In the former, Yudkowsky argues that readers should distrust Stephen Jay Gould on the grounds that Gould is not only wrong, but misrepresents the consensus of relevant academic experts. ("Gould systematically misrepresented what other scientists thought; he deluded the public as to what evolutionary biologists were thinking.")
In the latter, Omnizoid argues that readers should distrust Eliezer Yudkowsky on the grounds that Yudkowsky is not only wrong, but misrepresents the consensus of relevant academic experts. ("Eliezer's own source that he links to to describe how unstrawmanny it is shows that it is a strawman" [...] "Eliezer admits that he has not so much as read the arguments people give" [...] "If you're reading something by Eliezer and it seems too obvious, on a controversial issue, there's a decent chance you are being duped.")
Thus, it's hard to see how "Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong" could be in "egregious violation of basic epistemic conduct" while "Beware of Stephen J. Gould" is not. If anything, Omnizoid does a better job of showing their work than Yudkowsky (Omnizoid taking a negative view of Yudkowsky's predictive track record and three alleged "critical errors", in contrast to Yudkowsky only showing the rebuttal to Gould's thesis in Full House and merely quoting evolutionary biologists in support of the claim that such misrepresentations were "a common pattern throughout Gould's 'work'"). One might, of course, find Yudkowsky's arguments contra Gould more compelling on the object level than Omnizoid's arguments contra Yudkowsky, but that would seem to be irrelevant to [LW · GW] the "epistemic conduct" allegation insofar as the conduct complaint is about form rather than correctness.
To be clear, this is not necessarily to accuse Yudkowsky of hypocrisy. "Beware of Stephen J. Gould" was written over fifteen years ago. People can change a lot over fifteen years! It's plausible that Yudkowsky now regrets that post as failing to live up to his current, improved conception of epistemic conduct. It would certainly be possible to write a similar post that complied with the defer-personal-attacks rule. Instead of "Beware of Stephen J. Gould", it could be titled "Evolutionary Complexity Is Not a Random Walk" or (to get the criticism-target's name in the title) "Contra Gould on Evolutionary Complexity", and rebut the Full House argument first, only remarking on Gould's untrustworthiness as an afterthought, rather than centering it as the main thesis and putting it in the title.
But would that post be better at communicating what the author really had to say? The Yudkowsky of 2007 wasn't writing to an audience that already believed Gould's ideas about the complexity of evolved organisms and needed to be set straight on that technical point. Rather, he specifically wanted to warn his audience not to trust Stephen Jay Gould in general. An alleged "basic rule of epistemic conduct" that prevented him from focusing on his actual concern would be obfuscatory, not clarifying.
Perhaps there's something to be said for norms that call for obfuscation in certain circumstances. If you don't trust your psychiatric patients not to harm themselves, take away their pens and shoelaces; if you don't trust your all-too-human forum participants not to succumb to ad hominem and name-calling, don't let them talk about people's motivations at all.
What is less defensible is meta-obfuscation about which norms achieve their function via obfuscation. If Yudkowsky had merely condemned Omnizoid's post as violating norms of Less Wrong or the Effective Altruism Forum, I would not perceive an interest in replying; the internal politics of someone's internet fanfiction cult are of little interest to me (except insofar as I am unfortunate or foolish enough to still live here).
But Yudkowsky specifically used the phrases "epistemic conduct" and "epistemic good conduct". (Three times in one paragraph!) The word epistemic is not just a verbal tic you can throw in front of anything to indicate approval of your internet cult. It means something. ("Of or relating to cognition or knowledge, its scope, or how it is acquired.")
I think someone who wants to enforce an alleged "basic rule of epistemic conduct" of the form "if you've got to [say X] [...] [do so] after presenting your [...] points that support [X]" should be able to explain why such a rule systematically produces maps that reflect the territory when X happens to be "negative assessments of an author's character or authority" (what are called "personal attacks"), but not for other values of X (given how commonplace it is to make a thesis statement before laying out all the supporting arguments).
I don't think Yudkowsky can do that, because I don't think any such rule exists. I think the actual rules of basic epistemic conduct are things like "You can't make a fixed conclusion become more correct by arguing for it" [LW · GW] or "More complicated hypotheses [LW · GW] are less probable ex ante [LW · GW] and therefore require more evidence [LW · GW] to single out for consideration [LW · GW]"—and that the actual rules don't themselves dictate any particular rhetorical style [LW · GW].
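One standard way to cash out that last rule (assuming, as is conventional, that "more complicated" means "takes more bits to specify") is Bayes' theorem in odds form:

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)}.$$

If $H_1$ takes $n$ more bits to specify than $H_2$, its prior odds start out on the order of $2^{-n}$, so it needs a likelihood ratio of roughly $2^n$ in its favor just to pull even; that is what "requires more evidence" cashes out to. Nothing in that arithmetic dictates which paragraph of a blog post has to come first.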
Having written this post criticizing Yudkowsky's claim about what constitutes a basic rule of epistemic conduct, I suppose I might as well note in passing that if you find my arguments convincing, you should to some degree [LW · GW] be less inclined to defer [LW · GW] to Yudkowsky as an expert on human rationality. But I don't think that should be the main thing readers take away from this post in the absence of answers to the obvious followup questions: Yudkowsky? Who is that? Why should you care?
I don't know, either.
I say only "potentially" falsified, because "Beware of Stephen J. Gould" was not included in a later collection of Yudkowsky's writings from this period and is not part of a "sequence" in the current lesswrong.com software; one could perhaps argue on those grounds that it should not be construed as part of "the Less Wrong sequences" for the purposes of evaluating the claim that "You will find every post of the Less Wrong sequences written the same way." ↩︎
38 comments
comment by Raemon · 2023-09-15T18:46:21.185Z · LW(p) · GW(p)
I just wanted to add some context (that I thought of as "obvious background context", but probably not everyone is tracking), that Eliezer wrote more about the "rule" here in the 8th post [? · GW] of the Inadequate Equilibria sequence:
I’ve now given my critique of modesty as a set of explicit doctrines. I’ve tried to give the background theory, which I believe is nothing more than conventional cynical economics, that explains why so many aspects of the world are not optimized to the limits of human intelligence in the manner of financial prices. I have argued that the essence of rationality is to adapt to whatever world you find yourself in, rather than to be “humble” or “arrogant” a priori. I’ve tried to give some preliminary examples of how we really, really don’t live in the Adequate World where constant self-questioning would be appropriate, the way it is appropriate when second-guessing equity prices. I’ve tried to systematize modest epistemology into a semiformal rule, and I’ve argued that the rule yields absurd consequences.
I was careful to say all this first, because there’s a strict order to debate. If you’re going to argue against an idea, it’s bad form to start off by arguing that the idea was generated by a flawed thought process, before you’ve explained why you think the idea itself is wrong. Even if we’re refuting geocentrism, we should first say how we know that the Sun does not orbit the Earth, and only then pontificate about what cognitive biases might have afflicted geocentrists. As a rule, an idea should initially be discussed as though it had descended from the heavens on a USB stick spontaneously generated by an evaporating black hole, before any word is said psychoanalyzing the people who believe it. Otherwise I’d be guilty of poisoning the well, also known as Bulverism.
But I’ve now said quite a few words about modest epistemology as a pure idea. I feel comfortable at this stage saying that I think modest epistemology’s popularity owes something to its emotional appeal, as opposed to being strictly derived from epistemic considerations. In particular: emotions related to social status and self-doubt.
↑ comment by Martin Randall (martin-randall) · 2023-09-18T13:06:39.948Z · LW(p) · GW(p)
I agree that "poisoning the well" and "Bulverism" are bad ideas when arguing for or against ideas. If someone wrote a post "Animals are Conscious" then it would be bad form to spend the first section arguing that Yudkowsky is frequently, confidently, egregiously wrong. However, that is not the post that omnizoid wrote, so it is misdirected criticism. Omnizoid's post is a (failed) attempt at status regulation.
comment by jimrandomh · 2023-09-13T18:54:22.938Z · LW(p) · GW(p)
Expressing negative judgments of someone's intellectual output could be an honest report, generated by looking at the output itself and extrapolating a pattern. Epistemically speaking, this is fine. Alternatively, it could be motivated by something more like politics; someone gets offended, or has a conflict of interest, then evaluates things in a biased way. Epistemically speaking, this is not fine.
So, if I were to take a stab at what the true rule of epistemic conduct here is, the primary rule would be that you ought to evaluate the ideas first before evaluating the person, in your own thinking. There are also reasons why the order of evaluations should be ideas-before-people in the written product: it sets a better example of what thought processes are supposed to look like, and it's less likely to mislead people into biased evaluations of the ideas; but this is less fundamental and less absolute than the ordering of the thinking.
But.
Having the order-of-evaluations wrong in a piece of writing is evidence, in a Bayesian sense, of having also had the order-of-evaluations wrong in the thinking that generated it. Based on the totality of omnizoid's post, I think that in this case it was an accurate heuristic. The post is full of overreaches and hyperbolic language. It presents each disagreement as though Eliezer were going against an expert consensus, when in fact each position mentioned is one where he sided with a camp in an extant expert divide.
And...
Over in the legal profession, they have a concept called "appearance of impropriety", which is that, for some types of misconduct, they consider it not only important to avoid the misconduct itself but also to avoid doing things that look too similar to misconduct.
If I translate that into something that could be the true rule, it would be something like: If an epistemic failure mode looks especially likely, both in the sense of a view-from-nowhere risk analysis and in the sense that your audience will think you've fallen into the failure mode, then some things that would normally be epistemically supererogatory become mandatory instead.
Eliezer's criticism of Stephen J. Gould does not follow the stated rule, of responding to a substantive point before making any general criticism of the author. I lean towards modus tollens over modus ponens: that this makes the criticism of Stephen J. Gould worse. But how much worse depends on whether that's a reflection of an inverted generative process, or an artifact of how he wrote it up. I think it was probably the latter.
↑ comment by Martin Randall (martin-randall) · 2023-09-14T03:29:01.762Z · LW(p) · GW(p)
I expect that most people (with an opinion) evaluated Yudkowsky's ideas prior to evaluating him as a person. After all, Yudkowsky is an author, and almost all of his writing is intended to convey his ideas. His writing has a broader reach, and most of his readers have never met him. I think the linked post is evidence that omnizoid in particular evaluated Yudkowsky's ideas first, and that he initially liked them.
It's not clear to me what your hypothesis is. Does omnizoid have a conflict of interest? Were they offended by something? Are they lying about being a big fan for two years? Do they have some other bias?
Even if someone is motivated by an epistemic failure mode, I would still like to see the bottom line up front, so I can decide whether to read, and whether to continue reading. Hopefully the failure mode will be obvious and I can stop reading sooner. I don't want a norm where authors have to guess whether the audience will accuse them of bias in order to decide what order to write their posts in.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-13T20:59:53.901Z · LW(p) · GW(p)
Having the order-of-evaluations wrong in a piece of writing is evidence, in a Bayesian sense, of having also had the order-of-evaluations wrong in the thinking that generated it.
As I understand it, this is an accusation of an author having written the bottom line first—yes?
If so, it would be good to be clear on that point. In other words, we should be clear that the problem isn’t anything about ordering, but that the author’s stated reasons, and reasoning, were actually not what led them to their stated conclusion.
And there is another point. Nobody[1] starts disliking someone for no reason, as a wholly uncaused act of hate-thought. So there must’ve been some other reason why the author of a post attacking someone (e.g. Eliezer, S. J. Gould, etc.) decided that said person was bad/wrong/whatever. But since the stated reason isn’t (we claim) the real reason, therefore the real reason must be something else which we are not being told.
So the other half of the accusation is that the post author is hiding from us the real reasons why they believe their conclusion (while bamboozling us with fake reasons).
This again has nothing to do with any questions of ordering. Clarity of complaints is paramount here.
Exceptions might be caused by mental illness or some such, which is irrelevant here. ↩︎
↑ comment by omnizoid · 2023-09-18T02:18:14.355Z · LW(p) · GW(p)
//It presents each disagreement as though Eliezer were going against an expert consensus, when in fact each position mentioned is one where he sided with a camp in an extant expert divide.//
Nope false. There are no academic decision theorists I know of who endorse FDT, no philosophers of mind who agree with Eliezer's assessment that epiphenomenalism is the term for those who accept zombies, and no relevant experts about consciousness who think that animals aren't conscious with Eliezer's confidence--that I know of.
comment by Thomas Sepulchre · 2023-09-13T18:43:08.049Z · LW(p) · GW(p)
I think one confusing aspect is that the person criticizing the structure of the post is also the target of the post; therefore it is difficult to assume good intent.
If another well respected user had written a similar comment about why the post should have been written differently, then it would be a much cleaner discussion about writing standards and similar considerations. Actually, a lot of people did, not really about the structure (at least I don't think so), but mostly about the tone of the post.
As for EY, it is difficult not to suspect that this criticism isn't completely genuine, and is in some way an attempt to attack the author. That being said, maybe we should evaluate arguments for what they are, regardless of why they were stated in the first place (or is that too naive?)
In that regard, your post is very interesting because it addresses both questions: showing that EY hasn't always followed this stated basic standard (i.e. claiming that the criticism is not genuine), and discussing the merit of this rule/good practice (i.e. whether it is a good basis for criticism).
Anyway, interesting post, thanks for writing it!
comment by habryka (habryka4) · 2023-09-13T17:55:40.088Z · LW(p) · GW(p)
I think this post makes a few good points, but I think the norm of "before you claim that someone is overconfident or generally untrustworthy, start by actually showing that any of their object-level points are inaccurate" seems pretty reasonable to me, and seems more like what Eliezer was talking about.
Like, your post here seems to create a strong distinction between "arguing against Eliezer on the issues of FDT" and "arguing that Eliezer is untrustworthy based on his opinion on FDT", but like, I do think that the first step to either should be to actually make object-level arguments (omnizoid's post did that a bit, but as I commented on the post, the ratio of snark to object-level content was really quite bad).
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-13T18:13:29.860Z · LW(p) · GW(p)
The relevant point here, I think, is that there’s nothing wrong with stating the general characterization first, if that’s the point of your post. Of course support your claims, of course make sure most of your post is actual substance and not empty snark; but the ordering should follow the rules of clear writing (which may dictate one or another order, as befits the case), not some purported (and, as Zack says, in truth nonexistent) epistemic rule that the object level must come first.
This is really nothing more than the perfectly ordinary “tell them what you’re going to tell them, then tell them, then tell them what you’ve told them” sort of thing. Obviously you must not skip the middle part, but neither is there any law that says that actually, the middle part must come first.
comment by Holly_Elmore · 2023-09-13T19:42:54.611Z · LW(p) · GW(p)
Yeah, it felt like Eliezer was rounding off all of the bad faith in the post to this one stylistic/etiquette breach, but he didn't properly formulate the one rule that was supposedly violated.
comment by trevor (TrevorWiesinger) · 2023-09-15T18:31:27.213Z · LW(p) · GW(p)
The context matters here.
The original post by Omnizoid spent the first third ranting about their journey discovering that Yud was a liar and a fraud, very carefully worded to optimize for appeal to ordinary EAforum users, and didn't back up any of their claims until the latter two-thirds, which were mostly esoteric consciousness arguments and wrong decision theory. What ultimately happened was that 95% of readers only read the slander that made up the first third, and not any of the difficult-to-read arguments that Omnizoid routinely implied would back them up. People line up to do things like that.
That was what happened, and the impression I got from Yud's response was that he wasn't really sure whether to engage with it at all, since it might incentivize more people to take a similar strategy in the future. I also was confused by Yud calling it a "basic rule of epistemic conduct", but if that seemed like a good way to mitigate the harm while it was ongoing, then that was his call.
From my perspective, when Yud's reputation is bad, it's basically impossible to convince important and valuable people to read The Sequences and HPMOR and inoculate themselves against a changing world of cognitive attacks [LW · GW], whereas it's a downhill battle to convince them to read those texts when Yud's reputation is good. So if he fumbled on the task of mitigating the damage from Omnizoid's bad faith attacks, then yes, that's too bad. But he was also under time constraints, since it was an ongoing situation where it was getting tons of attention on EAforum and the harm increased with every passing hour, requiring a speedy response; if you want to judge how bad/careless the fumble was, then you have to understand the full context of the situation that was actually taking place.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-15T19:47:34.911Z · LW(p) · GW(p)
Certainly omnizoid’s post was bad—there’s hardly any disputing that. The ratio of snark to content was egregious, the snark itself was not justified by the number and type of the examples, and the actual examples were highly questionable at best. I think that most folks in this discussion basically agree on this.
(I, personally, also think that omnizoid’s general claim—that Eliezer is “frequently, confidently, egregiously wrong”—is false, regardless of how good or bad are any particular arguments for said claim. On this there may be less general agreement, I am not sure.)
The question before us right now concerns, specifically, whether “put the object-level refutation first, then make comments about the target’s character” is, or is not, a “basic rule of epistemic conduct”. This is a narrow point—an element of “local validity” [LW · GW]. As such, the concerns you mention do not bear on it.
↑ comment by omnizoid · 2023-09-18T02:21:01.558Z · LW(p) · GW(p)
I dispute that . . .
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-18T02:26:17.288Z · LW(p) · GW(p)
I think that “the author of the post does not think the post he wrote was bad” is quite sufficiently covered by “hardly any”.
↑ comment by Zack_M_Davis · 2023-09-15T21:51:24.621Z · LW(p) · GW(p)
This is an amazingly clarifying comment!! Thanks for writing it!
I also was confused by Yud calling it a "basic rule of epistemic conduct", but if that seemed like a good way to mitigate the harm while it was ongoing, then that was his call. [...] when Yud's reputation is bad, it's basically impossible to convince important and valuable people to read The Sequences and HPMOR and inoculate themselves against a changing world of cognitive attacks, whereas it's a downhill battle to convince them to read those texts when Yud's reputation is good
I want people to read the Sequences because they teach what kinds of thinking systematically lead to beliefs that reflect reality. I'm opposed to people making misleading claims about what kinds of thinking systematically lead to beliefs that reflect reality (in this case, positing a "basic rule of epistemic conduct" that isn't one) just because it seems like a good way to mitigate ongoing harm to their reputation. That's not what the Sequences say to do!
Omnizoid's bad faith attacks
What do you mean by "bad faith" in this context? Following Wikipedia, I understand bad faith to mean "a sustained form of deception which consists of entertaining or pretending to entertain one set of feelings while acting as if influenced by another"—basically, when the stated reasons aren't the real reasons. (I recently wrote a post about why I don't find the term that useful, because I think it's common for the stated reasons to not be the real reasons, rather than a rare deviation. [LW · GW])
I agree that Omnizoid's post was bad. (At a minimum, the author failed to understand what problems timeless decision theory is trying to solve.) But I don't want to call it "bad faith" without simultaneously positing that the author has some particular motive or hidden agenda that they're not being forthcoming about.
What motive would that be, specifically? I think the introduction was pretty forthcoming about why Omnizoid wanted to damage Yudkowsky's reputation ("Part of this is caused by personal irritation", "But a lot of it is that Yudkowsky has the ear of many influential people", "Eliezer's influence is responsible for a narrow, insular way of speaking among effective altruists", "Eliezer's views have undermined widespread trust in experts"). I don't see any particular reason to doubt that story.
comment by mrfox · 2023-09-13T17:42:27.703Z · LW(p) · GW(p)
I'm conflicted.
Upvoted because I strongly agree, downvoted because it would've been fine as a comment and appears to me as too much drama as a post (in its current form). You do seem to acknowledge that in the end? Perhaps I would've appreciated it more if you focused on this:
The word epistemic is not just a verbal tic you can throw in front of anything to indicate approval of your internet cult. It means something.
as a refresher of sorts, used Yudkowsky's comment as an example, and kept it shorter.
comment by Max H (Maxc) · 2023-09-14T02:45:00.560Z · LW(p) · GW(p)
Indeed, "Beware of Stephen J. Gould" would seem to have essentially the same argument structure as "Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong".
The 2007 post isn't an argument aimed at the ghost of Gould or his followers; it's a bunch of claims about the intellectual history of the field of evolutionary biology, and Gould's stature within it. Lots of commenters showed up to dispute or add color to Eliezer's claims about how other eminent scientists viewed Gould, but both the post and comment section are light on object-level arguments because there simply isn't that much to argue about, nor anyone interested in arguing about it.
(Also, Gould died in 2002. Eliezer's comment on epistemic conduct seems more applicable to discourse between contemporaries.)
Omnizoid's post, by contrast, opened with a bunch of inflammatory remarks about a still-living person, aimed directly at an audience with strong preexisting awareness of and opinions about that person. It then goes on to make a bunch of (wrong, laughably overconfident [EA(p) · GW(p)]) object-level arguments about very obviously non-settled topics in science and philosophy.
I think someone who wants to enforce an alleged "basic rule of epistemic conduct" of the form "if you've got to [say X] [...] [do so] after presenting your [...] points that support [X]" should be able to explain why such a rule systematically produces maps that reflect the territory when X happens to be "negative assessments of an author's character or authority" (what are called "personal attacks"), but not for other values of X (given how commonplace it is to make a thesis statement before laying out all the supporting arguments).
If more people read the beginning of an argument than the end, putting the personal attacks at the beginning will predictably lead to more people seeing the attacks than the arguments that support them. Even if such readers are not consciously convinced by attacks without argument, it seems implausible that their beliefs or impressions will not be moved at all.
↑ comment by jefftk (jkaufman) · 2023-09-14T14:27:08.532Z · LW(p) · GW(p)
It then goes on to make a bunch of (wrong, laughably overconfident) object-level arguments about very obviously non-settled topics in science and philosophy.
But that's the real problem, no? If it had opened with a "bunch of inflammatory remarks" and then solidly backed them up you'd be fine with it, right?
↑ comment by Max H (Maxc) · 2023-09-14T16:27:46.653Z · LW(p) · GW(p)
No, it's about whether such backup is needed in the first place.
If you're making claims which are likely to be widely contested at the object level, it's better to leave the inflammatory remarks to the end or not make them at all. Conversely, if you're not claiming to make original or controversial object-level claims of your own, it's fine to dive straight into the negative character assessments, though preferably supported with evidence and citations rather than personal attacks and charged rhetoric.
Note that you can often judge how much object-level support is required without knowing whether your object-level arguments are solid or even how your audience will react to them - based on the comments section of their respective posts, both Eliezer and Omnizoid made this judgement correctly.
Or, to break it down a different way, consider the following possible claims an author can make, where X is an object-level statement:
- X is false.
- Among some particular group (e.g. academic philosophers), it is well-known and uncontroversial that X is false.
If you're going to write a post arguing that the second claim is true, and use that as a justification to attack someone who believes X, you better be very careful not to get sidetracked arguing about X, even if you're correct that X is false! (Because if an academic philosopher happens to wander into your object-level argument on the wrong side, that falsifies your second claim, independent of the truth value of X.)
comment by frontier64 · 2023-09-14T20:41:18.820Z · LW(p) · GW(p)
My opinion is that Eliezer thought he needed a more technical rebuttal of Omnizoid's post than he actually did. Omnizoid was wrong, pointlessly mean, and had terrible support for most of the object-level points that he did make. In general the post was just bad, and that's probably why it got no play on LessWrong. That's all the criticism needed. I was expecting a well-written, justifiably rude, and factually-supported takedown of Eliezer. I didn't get it and I was disappointed. The top comment at the EA Forum, however, directed me to that great takedown I was looking for, and the takedown was even better for not being rude at all.
It's like trying to dispute a math proof that's written illegibly and is missing the whole middle part of the proof. Eliezer wanted to find a fatal flaw and refute the central point. But he probably couldn't find that fatal flaw. And even if he did find one, the real issue with the proof is the bit about it being terrible in general. Finding the fatal flaw is kind of a bonus at best, and at worst it's finding something that isn't even there!
comment by RussellThor · 2023-09-16T05:30:06.766Z · LW(p) · GW(p)
As a semi-newbie (read the major arguments, thought about them but not followed the detailed maths) and someone who has been following the fields of AI and then alignment for 15-20 years, it isn't just EY that I now feel is clearly incorrect. For example, it sure seemed, from reading the likes of Superintelligence, that a major reason for a paperclip optimizer would be that the AI would do what you say but not know what you mean. That seems pretty much impossible now - GPT4 understands what I mean better than many people, but has no ability to take over the world. More generally, I feel that the alignment literature would not have benefitted from more time before GPT4; it would just have become more convinced of incorrect conclusions.
It is also believable that there are several other important conclusions from the literature that are not correct, and we don't know what they are yet. I used to believe that a fast take-off was inevitable; now, after reading Jacob Cannell etc., I think it is very unlikely. EY was very good at raising awareness, but that does not mean he should somehow represent the AI safety field.
On a personal note, I distinctly remember EY being negative about deep learning, to my surprise at the time (that is, before AlphaGo etc.), because for the entire time I studied the field I felt it was inevitable that deep learning/neuromorphic systems would win. (Unfortunately I didn't keep the reference to his comment, so I don't have proof of that to anyone else.)
I have deployed GOFAI signal processing systems I have written from scratch and studied psychology etc., which led me to the conclusion that deep learning would be the way to go. GOFAI is hopelessly brittle; NNs are not.
I also strongly disagree with EY about the ethics of not valuing the feelings of non-reflective mammals etc.
↑ comment by quetzal_rainbow · 2023-09-16T08:31:09.125Z · LW(p) · GW(p)
The problem was always not about "knowing what you mean" but about "caring about what you mean".
↑ comment by RussellThor · 2023-09-16T09:52:40.806Z · LW(p) · GW(p)
Well that certainly wasn't the impression I got - some texts explicitly made that clear.
↑ comment by quetzal_rainbow · 2023-09-16T10:32:03.186Z · LW(p) · GW(p)
Genie knows, but doesn't care [LW · GW], for example.
↑ comment by RussellThor · 2023-09-16T20:43:57.315Z · LW(p) · GW(p)
OK, do you disagree with Nora's assessment of how Superintelligence has aged?
https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire#fntcbltyk9tdq [EA(p) · GW(p)]
The genie you have there seems to require a very fast takeoff to be real and overwhelmingly powerful compared to other systems.
↑ comment by quetzal_rainbow · 2023-09-17T15:15:57.213Z · LW(p) · GW(p)
I honestly think that many such opinions come from overupdates/overgeneralizations on ChatGPT.
↑ comment by TAG · 2023-09-16T14:55:10.287Z · LW(p) · GW(p)
Yeah, but that argument was wrong, too [LW(p) · GW(p)]
↑ comment by quetzal_rainbow · 2023-09-16T15:34:04.452Z · LW(p) · GW(p)
Making one system change another is easy; making one system change another into an aligned superintelligence is hard.
comment by green_leaf · 2023-09-13T21:29:09.521Z · LW(p) · GW(p)
Omnizoid's post was a poorly chosen example of where not to take EY's side. He two-boxes on Newcomb's problem, and any confident statements he makes about rationality or decision theory should be, for that reason, ignored entirely.
Of course, you go meta, without claiming that he's object-level right, but I'm not sure using an obviously wrong post to take his side on the meta level is a good idea.
↑ comment by Vladimir_Nesov · 2023-09-13T22:34:06.137Z · LW(p) · GW(p)
Digging into local issues for their own sake keeps arguments sane [LW · GW]. There are norms that oppose this, putting preconditions of context [LW · GW] on it. A norm is weakened by not being fed [LW(p) · GW(p)] with enactment of its endorsed pattern.
Of course, you go meta, without claiming that he's object-level right, but I'm not sure using an obviously wrong post to take his side on the meta level is a good idea.
So it's precisely situations where contextualizing norms would oppose engagement in discussion of local validity where that kind of discussion promotes its own kind. Framing this activity as "taking a side" seems like the opposite of what's going on.
↑ comment by omnizoid · 2023-09-18T02:23:11.140Z · LW(p) · GW(p)
About three quarters of academic decision theorists two-box on Newcomb's problem. So this standard seems nuts. Only 20% one-box. https://survey2020.philpeople.org/survey/results/4886?aos=1399
↑ comment by green_leaf · 2023-09-29T01:54:25.149Z · LW(p) · GW(p)
That's irrelevant. To see why one-boxing is important, we need to realize the general principle - that we can only impose a boundary condition on all computations-which-are-us (i.e. we can choose how both we and all perfect predictions of us choose, and both we and all the predictions have to choose the same). We can't impose a boundary condition only on our brain (i.e. we can't only choose how our brain decides while keeping everything else the same). This is necessarily true.
Without seeing this (and therefore knowing we should one-box), or even while being unaware of this principle altogether, there is no point in trying to have a "debate" about it.