Eliezer's Sequences and Mainstream Academia
post by lukeprog · 2012-09-15T00:32:03.383Z · LW · GW · Legacy · 148 comments
Due in part to Eliezer's writing style (e.g. not many citations), and in part to Eliezer's scholarship preferences (e.g. his preference to figure out much of philosophy on his own), Eliezer's Sequences don't accurately reflect their close agreement with work previously done in mainstream academia.
I predict several effects from this:
- Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
- Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
- If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.
I'd like to counteract these effects by connecting the Sequences to the professional literature. (Note: I sort of doubt it would have been a good idea for Eliezer to spend his time tracking down more references and so on, but I realized a few weeks ago that it wouldn't take me much effort to list some of those references.)
I don't mean to minimize the awesomeness of the Sequences. There is much original content in them (edit: probably most of their content is original), they are engagingly written, and they often have a more transformative effect on readers than the corresponding academic literature.
I'll break my list of references into sections based on how likely I think it is that a reader will have missed the agreement between Eliezer's articles and mainstream academic work.
(This is only a preliminary list of connections.)
Obviously connected to mainstream academic work
- Eliezer's posts on evolution mostly cover material you can find in any good evolutionary biology textbook, e.g. Freeman & Herron (2007).
- Likewise, much of the Quantum Physics sequence can be found in quantum physics textbooks, e.g. Sakurai & Napolitano (2010).
- An Intuitive Explanation of Bayes' Theorem, How Much Evidence Does it Take, Probability is in the Mind, Absence of Evidence Is Evidence of Absence, Conservation of Expected Evidence, Trust in Bayes: see any textbook on Bayesian probability theory, e.g. Jaynes (2003) or Friedman & Koller (2009).
- What's a Bias, again?, Hindsight Bias, Correspondence Bias; Positive Bias: Look into the Dark, Doublethink: Choosing to be Biased, Rationalization, Motivated Stopping and Motivated Continuation, We Change Our Minds Less Often Than We Think, Knowing About Biases Can Hurt People, Asch's Conformity Experiment, The Affect Heuristic, The Halo Effect, Anchoring and Adjustment, Priming and Contamination, Do We Believe Everything We're Told, Scope Insensitivity: see standard works in the heuristics & biases tradition, e.g. Kahneman et al. (1982), Gilovich et al. (2002), Kahneman (2011).
- According to Eliezer, The Simple Truth is Tarskian and Making Beliefs Pay Rent is Peircean.
- The notion of Belief in Belief comes from Dennett (2007).
- Fake Causality and Timeless Causality report on work summarized in Pearl (2000).
- Fake Selfishness argues that humans aren't purely selfish, a point argued more forcefully in Batson (2011).
Less obviously connected to mainstream academic work
- Eliezer's metaethics sequence includes dozens of lemmas previously discussed by philosophers (see Miller 2003 for an overview), and the resulting metaethical theory shares much in common with the metaethical theories of Jackson (1998) and Railton (2003), and must face some of the same critiques as those theories do (e.g. Sobel 1994).
- Eliezer's free will mini-sequence includes coverage of topics not usually mentioned when philosophers discuss free will (e.g. Judea Pearl's work on causality), but the conclusion is standard compatibilism.
- How an Algorithm Feels From Inside and Dissolving the Question suggest that many philosophical problems can be dissolved into inquiries into the cognitive mechanisms that produce them, as also discussed in, for example, Shafir (1998) and Talbot (2009).
- Thou Art Godshatter, Not for the Sake of Happiness Alone, and Fake Utility Functions make the point that value is complex, a topic explored in more detail in affective neuroscience (Kringelbach & Berridge 2009), neuroeconomics (Glimcher 2010; Dolan & Sharot 2011), and other fields.
- Newcomb's Problem and the Regret of Rationality repeats a common debate among philosophers. Thinking that CDT must be right even though it "loses" to EDT on Newcomb's Problem, one group asks, "What can we do, if irrationality is rewarded?" The other group replies, "If you're so smart, why aren't you rich? What kind of rationality complains about the reward for irrationality?" For example, see Lewis (1981).
I don't think Eliezer had encountered this mainstream work when he wrote his articles
- Eliezer's TDT decision algorithm (2009, 2010) had been previously discovered as a variant of CDT by Wolfgang Spohn (2003, 2005, 2012). Both TDT and Spohn-CDT (a) use Pearl's causal graphs to describe Newcomblike problems, then add nodes to those graphs to represent the deterministic decision process the agent goes through (Spohn calls them "intention nodes," Yudkowsky calls them "logical nodes"), (b) represent interventions at these nodes by severing (edit: or screening off) the causal connections upstream, and (c) propose to maximize expected utility by summing over possible values of the decision node (or "intention node" / "logical node"); see the sketch after this list. (Beyond this, of course, there are major differences in the motivations behind and further development of Spohn-CDT and TDT.)
- Many of Eliezer's points about intelligence explosion and machine ethics had been made in earlier writings Eliezer did cite, e.g. Williamson (1947), Good (1965), and Vinge (1993). Others of Eliezer's points appear in earlier writings he did not cite but probably had read: e.g. Minsky (1984), Schmidhuber (1987), Bostrom (1997), Moravec (1999). Others of Eliezer's points appear in earlier writings he probably hadn't read: e.g. Cade (1966), Good (1970), Versenyi (1974), Lukasiewicz (1974), Lampson (1979), Clarke (1993, 1994), Sobel (1999), Allen et al. (2000). (For a brief history of these ideas, see here and here.)
- A Technical Explanation of Technical Explanation retreads much ground from the field of Bayesian epistemology, surveyed for example in Niiniluoto (2004) and Howson & Urbach (2005).
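For readers unfamiliar with the structure described in the TDT item above, here is a minimal illustrative sketch of the shared schema. The payoffs and variable names are mine, chosen for illustration; this is not Spohn's or Yudkowsky's actual formalism, and it glosses over the differences between the two theories.

```python
# Newcomb's problem with a "logical node": both the predictor's forecast
# and the agent's action are downstream of the same deterministic
# algorithm, so an intervention that sets the node's value fixes both.

def expected_utility(logical_node_value):
    prediction = logical_node_value  # the predictor reads the algorithm
    action = logical_node_value      # the agent runs the algorithm
    box_b = 1_000_000 if prediction == "one-box" else 0
    box_a = 1_000 if action == "two-box" else 0
    return box_a + box_b

# Step (c): maximize expected utility over possible values of the node.
best = max(["one-box", "two-box"], key=expected_utility)
print(best)  # -> "one-box"
```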
148 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2012-09-15T04:57:22.120Z · LW(p) · GW(p)
Wow this is awesome. Some comments and questions:
- Spohn's decision theory does look very similar to Eliezer's, but Spohn couldn't give a good argument for the plausibility of rational cooperation in one-shot PD (he tried in the 2003 paper) because his theory didn't have the concepts of decision-making as an algorithm, and of logical correlation between instances of such algorithms.
- The kind of AI cooperation discussed by Eliezer is not the type discussed as "program equilibrium". Instead, "program equilibrium" is very similar to (essentially the same as?) cousin_it's initial approach to AI cooperation, which he came up with in part due to dissatisfaction with Eliezer's approach. (cousin_it later moved on to "Löbian cooperation", which is closer to Eliezer's idea, and as far as anyone knows those results weren't previously discovered in academia.)
- In your research, did you fail to find previous academic work for some elements of the sequences? In other words, which other elements are not (known to be) expositions or reinventions of previous academic work?
↑ comment by lukeprog · 2012-09-15T21:13:46.831Z · LW(p) · GW(p)
The kind of AI cooperation discussed by Eliezer is not the type discussed as "program equilibrium".
Fixed, thanks.
In your research, did you fail to find previous academic work for some elements of the sequences?
I didn't look very hard. I merely thought about the stuff I already knew about, and then picked a subset of those things to list here.
comment by [deleted] · 2012-09-15T01:00:38.454Z · LW(p) · GW(p)
Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
This one is probably important. With non-LW newcomers to my meetup, I find explaining that we draw most of our ideas from LW kind of weird in the sense that 50% of my simulations of them conclude "these guys are some wacko internet cult". Only some of them come back.
I'd like to see at least some work on how to talk about LW without implying insularity. We can't just drop the LW affiliation, because nowhere else really compares (even if everything on LW exists somewhere else, it doesn't exist anywhere else all in the same place).
↑ comment by RomeoStevens · 2012-09-15T05:12:10.408Z · LW(p) · GW(p)
HI WE'RE AN INTERNET MEETUP GROUP!
The internet is low status due to the low barriers to entry. Mention higher status things than the internet.
Future of Humanity Institute at Oxford (Hey I've heard of Oxford)
Vinge (a published author many have heard of)
Center for Applied Rationality which does real things in real life with real people
Talk about cognitive science. Talk about economics. Talk about anything but the internet. LessWrong? Oh it's just for coordinating all the interesting people who are interested in these interesting things.
↑ comment by lukeprog · 2012-09-16T14:52:02.718Z · LW(p) · GW(p)
I'd like to see at least some work on how to talk about LW without implying insularity.
Name-drop like a motha...
"Did you read Predictably Irrational by Dan Ariely or Thinking Fast and Slow by Nobel laureate Daniel Kahneman? We study their field of predictable human thinking errors and try to figure out how best to apply those lessons to everyday human life so that we can learn how to make decisions that are more likely to achieve our goals.
"We talk some about Alan Turing's idea that machines could one day become smarter than humans, and how shortly thereafter we might expect them to become more powerful than humans. One of the mathematicians who worked with him to crack the German Enigma Code, I.J. Good, explained that a smarter-than-human machine could use its intelligence to improve its own inteligence. And since neuroscientists like Paul Glimcher at NYU and Kent Berridge at U Michigan are learning that what humans care about is incredibly complex, it's unlikely that we'll be able to figure out how to program smarter-than-human machines to respect every little detail of what we care about."
↑ comment by siodine · 2012-09-16T16:02:58.512Z · LW(p) · GW(p)
Or, more meta-ly, you're not going to be very persuasive if you ignore pathos and ethos. I think this might be a common failure mode of aspiring rationalists because we feel we shouldn't have to worry about such things, but then we're living in the should-world rather than the real-world.
comment by Thrasymachus · 2012-09-18T08:13:54.653Z · LW(p) · GW(p)
Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
I think the parochialism comes from high-handed smack-talk like "The obvious answer to this philosophically recondite issue is X, and all you need to see that it's obvious is our superior rationality". Best example here.
One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.
I get a similar vibe regarding QM (obviously many worlds), religion (obviously atheism), phil of mind (obviously reductionism), and (most worryingly) ethics and meta-ethics.
The fact that the candidate views espoused are part of the academic mainstream doesn't deflect the charge of parochialism, given the tub-thumping, uncharitable-to-opponents, and generally under-argued way these views are asserted. Worse, it signals a lack of competence on the part of LW: given the views of virtually all domain experts on any of these things, your degree of confidence is better explained by inferior, not superior, knowledge, and even if you happen to get the right answer, I doubt you're p-reliable or tracking.
↑ comment by ArisKatsaris · 2012-09-18T08:31:34.063Z · LW(p) · GW(p)
I don't think there's much value in pretending that issues like God (and the absence thereof) or the compatibility between determinism and (any logically coherent view of) free will haven't been decisively answered.
Seriously now, the compatibility between free will and determinism is something that I was figuring out by myself back in junior high. Eliezer with his "Thou Art Physics" expressed it better and more compactly than I ever did to myself (I was instead using imagery of the style "we're the stories that write themselves", and this was largely inspired by Tolkien's Ainulindalë, where the various gods sing a creation song that predicts all their future behaviour), but the gist is really obvious once you get rid of the assumption that determinism and free will must somehow be opposed.
In every discussion I've had since, in any forum, nobody who thinks them to be incompatible can describe even vaguely what "free will" would be supposed to look like if it does not contain determinism inside it.
↑ comment by Thrasymachus · 2012-09-21T14:58:07.569Z · LW(p) · GW(p)
I think this is a case of exactly the problem I diagnosed above.
Compatibilism (and related views) has been discussed at least since Hume, and extensively in modern analytic philosophy. Although it commands a slender majority among philosophers of action, it is not as though the entire philosophical community considers compatibilism obviously or decisively the 'right answer' (see here, and here for a long index of reasons/objections etc.). You'd be pretty hard-pressed to find a single philosopher of action who considers free will a 'solved problem'.
Yet it seems the Less Wrong community considers it solved on the basis of a sequence of blog posts which merely explicates compatibilism: I couldn't find any discussion of compatibilism which goes beyond undergrad-philosophy level, no discussion of common objections to compatibilism, no engagement with any thinkers arguing against it, nothing.
The two best explanations I have for this are either that compatibilism is just obvious and people of sufficient rationality can be confident that domain experts on free will who don't buy compatibilism are wrong, or that the LW 'solution' is frankly philosophically primitive but LWers are generally too far on the wrong side of the Dunning-Kruger effect to appreciate why it isn't the decisive answer to a 'millennia-old philosophical dilemma' they think it is.
Surely the outside view would find the latter account much more plausible?
↑ comment by ArisKatsaris · 2012-09-21T15:43:46.362Z · LW(p) · GW(p)
Although it commands a slender majority of philosophers of action, it is not like the entire philosophical community considers compatibilism obviously or decisively the 'right answer'
Thanks for that poll. It's a slender majority, but a very strong plurality, since the next most popular option is less than half as popular, and if you examine only the 'Accept' answers instead of the 'lean towards' answers, the compatibilists are also much more certain in their belief, while the libertarians and no-free-willers tend to be uncertain much more often.
And the faculty are more decidedly compatibilist than the students, which seems to indicate that education correlates with acceptance of compatibilism.
But more importantly: these people also seem to prefer to two-box in Newcomb's problem. So why should I put much weight on their opinion?
↑ comment by Thrasymachus · 2012-09-21T16:15:14.135Z · LW(p) · GW(p)
A weak majority/strong plurality of relevant domain experts does not make the question decisively answered. I don't have survey data on this, but I'm pretty sure none of the compatibilists (even those who 'accept' it) take the question to be obviously answered etc. etc.
But more importantly: these people also seem to prefer to two-box in Newcomb's problem. So why should I put much weight in their opinion?
The majority of decision theory specialists two-box. I'm sure you can guess what I'm going to say about domain expertise and the Dunning-Kruger effect here, too.
↑ comment by ArisKatsaris · 2012-09-21T16:24:30.720Z · LW(p) · GW(p)
A weak majority/strong plurality of relevant domain experts does not make the question decisively answered.
Tell me, do you have any criterion over whether something is "decisively answered" other than how many "relevant domain experts" agree with it? If your definition of "decisively answered" is solely dependent on this, then we can just agree that we were using different definitions for the term.
The majority of decision theory specialists two-box.
So much for the decision theory specialists. Implement a real-life version of Newcomb's box, where you fill in the opaque box based on whether they said they'll one-box or two-box. Assuming everyone follows what they said they should do, the one-boxers will just win, and the two-boxers will be weeping.
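A minimal sketch of that proposed demonstration (the payoff values and function names are mine, for illustration; here the "predictor" is simply each participant's own stated answer):

```python
# Fill the opaque box according to the participant's stated policy,
# then pay out whatever that policy actually collects.

TRANSPARENT = 1_000       # the always-visible box
OPAQUE_PRIZE = 1_000_000  # filled only for declared one-boxers

def payout(stated_policy):
    opaque = OPAQUE_PRIZE if stated_policy == "one-box" else 0
    if stated_policy == "one-box":
        return opaque              # takes only the opaque box
    return opaque + TRANSPARENT    # takes both boxes

print(payout("one-box"))  # 1000000 -- the one-boxers just win
print(payout("two-box"))  # 1000    -- the two-boxers are left weeping
```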
↑ comment by Thrasymachus · 2012-09-22T15:52:29.560Z · LW(p) · GW(p)
Tell me, do you have any criterion over whether something is "decisively answered" other than how many "relevant domain experts" agree with it? If your definition of "decisively answered" is solely dependent on this, then we can just agree that we were using different definitions for the term.
I take 'decisively answered' to mean something along the lines of "here is an account which, properly understood, solves this problem to the satisfaction of reasonable people". So (near) unanimity among relevant domain experts is necessary but not sufficient for this. I can't think of anything in natural language we would call a 'decisive answer' or similar with which 40% or so of relevant domain experts disagree.
So much for the decision theory specialists. Implement a real-life version of Newcomb's box, where you fill in the opaque box based on whether they said they'll one-box or two-box. Assuming everyone follows what they said they should do, the one-boxers will just win, and the two-boxers will be weeping.
This is recapitulating a standard argument for one-boxing, and it is well discussed in the literature. The fact that the bulk of people who spend their time studying this issue don't find this consideration decisive should make you think it is less of a silver bullet than you think it is.
↑ comment by ArisKatsaris · 2012-09-22T23:20:22.721Z · LW(p) · GW(p)
This is recapitulating a standard argument for one-boxing, and it is well discussed in the literature. The fact that the bulk of people who spend their time studying this issue don't find this consideration decisive should make you think it is less of a silver bullet than you think it is.
I should update slightly in that direction, yes, but I have to note that the poll you gave me is not just of people who study the issue, but of people who seem to have made a career out of discussing it, and who therefore (I would cynically suggest) perhaps wouldn't like the discussion to be definitively over.
e.g. Theologians and Priests are perhaps not the best people to poll if you want to determine the existence of God.
Ah, but I just remembered: atheism was one of the things you complained about being treated as obviously correct by most of us here. Because the domain experts about God (Theologians and Priests) haven't come to the same conclusion?
This is recapitulating a standard argument for one-boxing,
I don't feel a pressing need to be non-standard: One-boxing wins, two-boxing loses -- that's all one needs to know for the purpose of choosing between them.
↑ comment by Thrasymachus · 2012-09-24T21:19:17.236Z · LW(p) · GW(p)
I should update slightly in that direction, yes, but I have to note that the poll you gave me is not just of people who study the issue, but of people who seem to have made a career out of discussing it, and who therefore (I would cynically suggest) perhaps wouldn't like the discussion to be definitively over.
Sure, but I gather there are other things you can discuss in decision theory besides Newcomb's problem, so it isn't like the decision theorists need an artificial controversy about this to keep their jobs.
There are dissimilarities between decision theorists and (say) theologians, priests, etc. Decision theorists are unlikely to have prior convictions about decision theory before starting to study it, unlike folks who discuss religion. The relevant domain experts on 'Does God exist?' would likely be philosophers of religion, although there is a similar selection effect. However, for what it's worth, I doubt atheist philosophers of religion would consider the LW case for atheism remotely credible.
↑ comment by Peterdjones · 2012-09-21T12:11:57.998Z · LW(p) · GW(p)
I don't think there's much value in pretending that issues like God (and the absence thereof) or the compatibility between determinism and (any logically coherent view of) free will haven't been decisively answered.
There are plenty of reasons for putting forward your conclusions as non-decisive: (edited)
Not sounding as though you are suffering from the Dunning-Kruger effect
Academic Modesty.
You might actually be wrong. No one who calls themselves a rationalist should confuse "Seems true to me" with "is true".
↑ comment by ArisKatsaris · 2012-09-21T12:29:40.519Z · LW(p) · GW(p)
Not sounding as though you are suffering from the Dunning-Kruger effect
Academic Modesty.
Are those separate points?
- You might actually be wrong.
I 'might' also be wrong about the Earth not being flat. That still doesn't mean that we shouldn't consider the shape of the earth decisively answered.
↑ comment by Peterdjones · 2012-09-21T12:34:05.873Z · LW(p) · GW(p)
They may overlap. Are they bad points?
I 'might' also be wrong about the Earth not being flat. That still doesn't mean that we shouldn't consider the shape of the earth decisively answered.
The pertinent point is that all informed opinion considers it decisively answered. That is not the case with the two issues you cited as having been decisively answered by EY.
↑ comment by ArisKatsaris · 2012-09-21T14:09:56.597Z · LW(p) · GW(p)
Are they bad points?
They're insufficient for me. Other people may find them sufficient.
The pertinent point is that all informed opinion considers it decisively answered
So, according to you, it seems I shouldn't pronounce something decisively answered unless "all informed opinion" considers it decisively answered.
Don't you see the paradox in this? How is the first person to consider it 'decisively answered' supposed to call it 'decisively answered', if he/she must first wait for all other people to call it 'decisively answered' first?
↑ comment by Peterdjones · 2012-09-21T14:42:40.535Z · LW(p) · GW(p)
he/she must first wait for all other people to call it 'decisively answered' first?
No, they needn't. They only need wait for the point to be reached where an overwhelming majority agree with an answer. Having noted that, they can correctly state that it has been decisively answered. They only need others to agree with the answer, not for others to agree that the question has been decisively answered.
↑ comment by ArisKatsaris · 2012-09-21T15:02:31.339Z · LW(p) · GW(p)
They only need wait for the point to be reached where an overwhelming majority agree with an answer
I don't think that "decisively answered" need have anything to do with democracy -- for example I'm sure that if you poll Czech scientists about the existence of God, you'll get a different distribution than if you ask Iranian scientists. Even if they're equally informed, political considerations will make them voice different things.
The policy you suggest seems designed to minimize conflict with your academic peers, not designed to maximize effectiveness in the pursuit of understanding the universe.
↑ comment by Peterdjones · 2012-09-21T15:34:15.114Z · LW(p) · GW(p)
Churchill said democracy was the worst system apart from all the others. Do you have an alternative way of establishing Decisiveness that improves on the Majority of Informed Opinion?
I'm sure that if you poll Czech scientists about the existence of God, you'll get a different distribution than if you ask Iranian scientists
Neither of those subsets would get me the majority of informed opinion. I believe I have already solved that problem.
↑ comment by ArisKatsaris · 2012-09-21T16:01:23.794Z · LW(p) · GW(p)
Churchill said democracy was the worst system apart from all the others.
Churchill's exact quote was "Democracy is the worst form of government, except for all those other forms that have been tried from time to time." He was talking about forms of government, not methods of understanding the universe.
Do you have an alternative way of establishing Decisiveness that improves on the Majority of Informed Opinion?
As a sidenote, let me note here that even on the issue you argued about, this "majority" seems to actually exist. The majority of philosophers are compatibilists, according to Thrasymachus's linked poll above.
And there seems to be a >80% majority (an overwhelming one) against libertarian free will. According to your own argument, then, even if you don't find compatibilism a "decisive answer", you should find libertarianism a "decisive failure of an answer".
But getting back to your question: "Do you have an alternative way of establishing Decisiveness that improves on the Majority of Informed Opinion?"
Well, even if we don't speak about things like "Science" or "Testing" or "Occam's Razor properly utilized", I think I'll prefer the "Majority of Informed Opinion that Also Has IQ > 130 And Also One-Boxes in Newcomb's Dilemma".
↑ comment by Peterdjones · 2012-09-21T16:36:22.785Z · LW(p) · GW(p)
Churchill's exact quote was "Democracy is the worst form of government, except for all those other forms that have been tried from time to time." He was talking about forms of government, not methods of understanding the universe.
I was only drawing a loose analogy.
The majority of philosophers are compatibilists, according to Thrasymachus's linked poll above.
Then Hobbes decisively solved it, not EY. OTOH, if you are talking about EY's specific form of compatibilism... then he has no majority on his side.
Well, even if we don't speak about things like "Science" or "Testing" or "Occam's Razor properly utilized", I think I'll prefer the "Majority of Informed Opinion that Also Has IQ > 130 And Also One-Boxes in Newcomb's Dilemma".
Why is it an improvement to make it parochial? Can't you see that it trivialises the claim "EY has decisively solved FW" to add the rider "...by the LW/EY definition of decisiveness"? I could also claim to have solved it by my definition. Parochialism devalues the currency.
↑ comment by ArisKatsaris · 2012-09-21T16:44:00.230Z · LW(p) · GW(p)
Can't you see that it trivialises the claim "EY has decisively solved FW"
Downvoted, because I never made that claim, and nobody has made that claim. I said FW/determinism has been solved, I didn't present EY as the originator of compatibilism, any more than I would have assigned the invention of atheism to him.
I may have tapped out, but don't you dare make this into an opportunity to misrepresent me. I will still disavow any false statement you assign to me. I'm very territorial about what I have actually said, vs what people attempt to falsely assign to me.
↑ comment by Peterdjones · 2012-09-21T16:48:09.668Z · LW(p) · GW(p)
Do you think there is any novelty to EY's compatibilism?
↑ comment by Peterdjones · 2012-09-21T12:24:02.213Z · LW(p) · GW(p)
In every discussion I've had since, in any forum, nobody who thinks them to be incompatible can describe even vaguely what "free will" would be supposed to look like if it does not contain determinism inside it.
We haven't met, then. I'm a naturalistic libertarian and therefore an incompatibilist. I may not be right, but so long as I am not completely wrong, EY does not have "the" answer.
Anyway, on to the refutation of "FW requires determinism"...
II.1.i Introduction
Compatibilism comes in a stronger form, which does not have a traditional name; we will call it supercompatibilism. According to supercompatibilism, free will is not only capable of existing alongside causal determinism, it cannot exist without determinism.
Here is a case that free will requires causal indeterminism:
PRO: "Free will requires the ability to have done otherwise under the very same circumstances, which is only possible in a universe with some degree of indeterminism."
The following argument goes against the usefulness of indeterminism to free will:-
ANTI: "Freedom of a kind worth wanting requires rationality, and rationality mean following rules. Causal Determinism would ensure that the rules are followed. Indeterminism would disrupt the process of rational thought, and result in a capricious, irrational kind of freedom not worth having. An individual cannot call an action his or her own unless they can account rationally account for it, and unless it was caused by their own intentions. Reasons are causes, so to be rational is to be determined.
We will now argue against the ANTI argument, taking it a sentence at a time.
II.1.ii Objection 1: "Freedom of a kind worth wanting requires rationality"
Yes, but it is by no means limited to rationality. Free will of a kind worth wanting must facilitate the whole gamut of human behaviour, including creativity, imagination, inventiveness, etc.
II.1.iii Objection 2: "Causal Determinism would ensure that the rules are followed."
Causal Determinism only guarantees that everything follows the laws of nature, not that anything follows the laws of thought. If you believe the universe is deterministic, you have to admit that any lapse of rationality is just as determined as everything else. Indeed, we are more likely to look for a deterministic — in the sense of an external cause — explanation for uncharacteristically irrational behaviour than for rational behaviour — so-and-so was drunk, drugged, etc.
II.1.iv Objection 3: "Indeterminism would disrupt the process of rational thought, and result in a capricious, irrational kind of freedom not worth having."
Is that so? Computer programmes can consult random-number generators where needed, including 'real' ones implemented in hardware (like this). The rest of their operation is perfectly deterministic. Why should the brain not be able to call on indeterminism as and when required, and exclude it the rest of the time? And if random numbers are useful for computers, why should indeterministic input be useless for brains? Is human rationality that much more hidebound than a computer's? Even including such faculties as creativity and imagination? Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random. But it does not have to be, and if we assume it is not, we can explain realistically why we have the sense of being able to have done otherwise.
And is it so great to be compelled into rule-following rationality? If you asked someone what 5 and 7 make, you would expect the answer 12. But if you asked them the same question ten times in a row, you would expect them to object at some stage, and stop answering, unlike a pocket calculator, which will spit out the same answer ad infinitum. Surely the choice of whether or not to follow a set of rules is part of rationality?
II.1.v Objection 4: "An individual cannot call an action his or her own unless they can rationally account for it and unless it was caused by their own intentions"
It is true that we would not consider an individual to 'own' an action or decision if it had nothing to do with his beliefs and aims at the time he made it — that is, if we assume that indeterminism erupts in between everything that happened to make him the individual he is, and the act itself.
But, we libertarians claim, an act is also not an individual's own if it is entirely attributable to causes lying outside him, ultimately traceable to circumstances before he came into existence.
There is no need to be disheartened. If the causal origins of our actions cannot lie before our births or after our decisions have been made, they can still lie, just where they should: during our lifetimes.
Our actions can be determined by our preceding mental state, providing that our mental state is not itself entirely attributable to causes outside of ourselves. This means that, although we can pin actions to immediate purposes, we cannot trace back a chain of purposes-for-purposes ad infinitum.
I do not think that is any loss, since determinism fares no better. All purposes may be causes, but not all causes are purposes. The deterministic causal chain, if traced back, is bound to encounter factors which do not rationally explain, any more than a random occurrence does. Moreover, this process of looking for ultimate rational explanations is unusual to say the least. Our normal attitude is that John and Mary have their reasons, which are very much part of who they are, and that's that.
II.1.vi Objection 5: "Reasons are causes, so to be rational is to be determined."
Is it the case that our actions are determined by our reasons, so that indeterminism must entail irrationality?
The reasons we have for actions might be said to cause those actions, but there are a number of differences between reasons and causes, and in any case reasons are not determining causes, as will be argued.
Whereas any event can involve causation, only a small subset of entities in the universe, rational agents, can base their actions on reasons. Causes lie in the past, whereas reasons are generally directed towards some future state of affairs. A causal statement is not a direct substitute for a rational justification. If I kick someone and am asked why I did it, it is no justification — for all that it is true — to say that nerve signals from my brain caused my leg to flex. A causal chain can continue back to the Big Bang, but a rational explanation for behaviour cannot. An individual must explain their behaviour in terms of their own aims and desires. Once they start claiming they were caused to behave in a certain way, by their environment, genes, etc., they are no longer offering reasoned explanation. Causal explanation is 'classical' — everything is brought under a uniform, impersonal set of laws. Rational explanation is 'romantic' — people have their own unaccountably individual reasons for doing things. There is no single right answer to "what should I do?", as it depends on what you as an individual want to do — nor is there any single right answer to "what should I want?".
That reasons are not determining causes is established by the fact that one can have multiple reasons for multiple courses of action. If one accepts the slice of cake, one pleases the hostess; if one rejects it, one sticks to one's diet. If reasons and actions are chosen in pairs, there will always be a reason for one's action irrespective of what is actually chosen. This is still true if a choice is made randomly. If one does choose a reason-action pair randomly, it is admittedly true that a reason-for-the-reason cannot be given. But it is always the case that chains of reasoning cannot be pursued to infinity; they have to stop somewhere (or be circular). Since real-world reasoning is constrained in this way, limited amounts of indeterminism would not render people any less able to provide chains of reasoning than they are anyway, even though it remains the case that complete indeterminism entails complete irrationality.
If reasons are not determining causes, are they causes at all? It would certainly be a peculiar situation if reasons were causally completely detached from the actions they explain. At least the neural correlates of reasons need to be causes. Reasons themselves arguably belong to a different language game, so that it is a category error to substitute talk of reasons directly for talk of causes and vice versa. This need not imply any ontological dualism, but rather a kind of anomalous monism.
It is sometimes said that we are free to do what we want, but not to choose what we want. The approach sketched so far is pretty much the opposite of this. If we have clear reasons for doing something and are in a rational frame of mind, our actions follow almost inevitably. Freedom lies in the fact that our basic aims and goals, our basic nature, are not inevitable. We might not have done as we did, because we might not have been that kind of person. Freedom is not mere caprice, nor does it lie in being the puppet of circumstances; it is self-determination, a gradual evolution of selfhood.
↑ comment by ArisKatsaris · 2012-09-21T12:44:36.812Z · LW(p) · GW(p)
Downvoted for being at least twenty times more long-winded than necessary, and still failing to describe what a "free will" without determinism would look like.
Computer programmes can consult random-number generators where needed, including 'real' ones implemented in hardware.
Pseudo-random, but I'll let that trivial point slide. The more significant point is that those random numbers are utterly meaningless in themselves -- the meaning and worth of a program lies in those aspects that are not random, or in how it deterministically uses a random variation. We can use a pseudo-random generator to use in a cryptographical program, or in an artistic program, or in a mutation-simulation program -- but that pseudo-randomness is only meaningful in how it's deterministically used.
↑ comment by Peterdjones · 2012-09-21T12:55:41.465Z · LW(p) · GW(p)
(Edit)
Downvoted for being atleast twenty times more long-winded than necessary, and still failing to describe what a "free will" without determinism would look like.
Hmmm. Well, that's in the full-length version, of which this is an extract. I notice that EY's disquisition, which is probably longer, doesn't suffer from the problem of being "too long".
Pseudo-random, but I'll let that trivial point slide
Which instance of "random" do you think should have been pseudo random? Note that there are devices commercially marketed as supplying "real" randomness based on quantum physics.
The more significant point is that those random numbers are utterly meaningless in themselves -- the meaning and worth of a program lies in those aspects that are not random,
Says who? Are you saying that the use of randomisation in software is always a mistake, and that programmers who feel it is necessary are just incompetent?
that pseudo-randomness is only meaningful in how it's deterministically used.
It is true that a random number is no good in itself, but equally you can't solve every problem with pure determinism. So the value of a deterministic+random algorithm is in its determinism+randomness.
Replies from: ArisKatsaris, ArisKatsaris↑ comment by ArisKatsaris · 2012-09-21T14:04:06.372Z · LW(p) · GW(p)
I notice that EY's disquisition, which is probably longer, doesn't suffer from the problem of being "too long".
According to a word-count tool:
"Thou Art Physics" article: 1032 words
Your comment: 1495 words
But I'm not thinking of the number of words alone but the number of words per point of communicated meaning.
↑ comment by ArisKatsaris · 2012-09-21T13:55:00.923Z · LW(p) · GW(p)
Which instance of "random" do you think should have been pseudo random?
The instance I quoted. But as I said the point is trivial.
Are you saying that the use of randomisation in software is always a mistake
I don't think I used the word "mistake" at all. I didn't even imply that it's sometimes a mistake, let alone always.
It is true that a random number is no good in itself, but equally you can't solve every problem with pure determinism.
Please name three problems you can't solve with determinism but can solve with random-number generators. Besides encryption, which depends on secrecy and therefore on not knowing what will come out, I can't think of any.
↑ comment by bogus · 2012-09-21T14:34:43.240Z · LW(p) · GW(p)
Since quantum algorithms are inherently random, these three problems qualify:
- Solve the Deutsch-Jozsa problem in constant time.
- Search an unstructured database in O(sqrt(n)) time.
- Factorize integers in polynomial time.
Moreover, randomized algorithms are occasionally useful in a classical computer, since they give good expected performance even for some classes of degenerate inputs.
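To illustrate that classical point, here is a sketch of one standard trick; randomized quicksort is my example, not one named in this thread. A fixed pivot rule has specific inputs that force worst-case behaviour, while a random pivot gives expected O(n log n) time on every input, including degenerate ones.

```python
import random

def quicksort(xs):
    """Randomized quicksort: choosing the pivot at random means no fixed
    input can reliably trigger the O(n^2) worst case, giving expected
    O(n log n) time even on adversarial or degenerate inputs."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```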
↑ comment by Peterdjones · 2012-09-21T15:06:12.692Z · LW(p) · GW(p)
Which instance of "random" do you think should have been pseudo random? The instance I quoted. But as I said the point is trivial.
So you are saying that the sentence "Computer programmes can consult random-number generators where needed, including 'real' ones implemented in hardware" should have read "Computer programmes can consult pseudo-random-number generators where needed, including 'real' ones implemented in hardware". Are you aware that your change renders the sentence contradictory? The point of real randomness generators is that, given certain assumptions about physics, they are not pseudo.
I don't think I used the word "mistake",
If you had used it, I would have had no need to ask the question. I was trying to put your vaguely negative comments about the use of randomness in software on a more precise basis.
Besides encryption which depends on secrecy and therefore depends on not knowing what will come out,
I don't see why an example that works should be excluded because it works.
Another example I like is the way Ethernet works: when two MACs try to send simultaneously, the result is garbled and they need to back off and retry. However, backing off according to a deterministic algorithm would lead to another collision on the retry, ad infinitum. Backing off for a random time solves that simply.
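A minimal sketch of that mechanism (simplified, CSMA/CD-style binary exponential backoff; the slot time and retry cap below are illustrative rather than quoted from the IEEE 802.3 standard):

```python
import random

def backoff_delay(collisions, slot_time=51.2e-6):
    """After the n-th collision, wait a random number of slot times
    drawn from [0, 2^n - 1]. Two colliding senders will almost surely
    pick different delays, whereas a shared deterministic rule would
    make them collide again on every retry."""
    max_slots = 2 ** min(collisions, 10) - 1  # cap the growth of the window
    return random.randint(0, max_slots) * slot_time

# Two stations that just collided choose (probably different) delays:
print(backoff_delay(1), backoff_delay(1))
```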
↑ comment by ArisKatsaris · 2012-09-21T16:12:49.950Z · LW(p) · GW(p)
I don't see why an example that works should be excluded because it works.
The reason encryption requires randomness is not relevant to free will. The reason MACs need to back off for a random time likewise does not seem relevant to free will either.
I think I'll tap out at this point. I don't think there's anything I can contribute to this discussion beyond what I've already said.
↑ comment by Peterdjones · 2012-09-21T16:28:39.517Z · LW(p) · GW(p)
The reason encryption requires randomness is not relevant to free will.
I don't see what you mean. Randomness is relevant to FW because determinism is, prima facie. (Compatibilists feel the need to argue that it in fact isn't, rather than taking it as obvious.) Randomness is relevant to solving problems. A kind of FW that allows you to solve problems is worth having. If you want a more obviously relevant example, consider that evading a predator with random moves is more effective than adopting a potentially predictable "evasive pattern delta".
↑ comment by [deleted] · 2012-10-12T10:02:00.792Z · LW(p) · GW(p)
To have an opinion about free will, you must first observe the existence of the issue.
Most people do this with introspection: the world outside you seems to conform to lawful physics, while inside you, it seems that indeed you control every movement and thought.
Lord Kelvin has voiced the above statement quite poetically.
Now, the keyword here is 'seem'. Your argument hinges on an anecdote from your own, non-optimal cognitive machinery.
What EY did was point at this 'seem' and explain it. He did not point at free will and explain it; he explained why the cognitive machinery hands you the anecdote. And then from there you can crank the handle of modus ponens and conclude that 'free will' goes in the same category as 'redness of red'.
Also, on a technical note, you overlook that you live 80 milliseconds in the past (sensory lag to sync toe-tips to retinas) and that you have more subconscious processes than conscious ones, processes you can only rarely consciously affect. This gives nondeterminism a low prior.
↑ comment by Peterdjones · 2012-10-12T11:32:23.081Z · LW(p) · GW(p)
To have an opinion about free will, you must first observe the existence of the issue.
There needs to be a prima facie case. I don't think it is restricted to introspection, though.
Most people do this with introspection: the world outside you seems to conform to lawful physics
Which could include nondeterminism. It is not as though anybody can predict every physical occurrence.
while inside you, it seems that indeed you control every movement and thought.
Movements occur on the outside.
Lord Kelvin has voiced the above statement quite poetically.
Where?
Now, the keyword here is 'seem'. Your argument hitches on an anecdote from your own, non-optimal cognitive machinery.
I didn't base my argument solely on introspection. In fact, very little of the quoted passage leans on introspective evidence. And everything hinges on non-optimal cognitive machinery, including what you are saying.
What EY did was point at this 'seem' and explain it.
Things would seem the way they seem if what he says is correct, and they would seem the way they seem if what I say is correct. You have no grounds for saying that he has the explanation other than that you happen to like it.
He did not point at free will and explain it; he explained why the cognitive machinery hands you the anecdote. And then from there you can crank the handle of modus ponens and conclude that 'free will' goes in the same category as 'redness of red'.
Whatever that is. I'm a qualiaphile, BTW.
Also, on a technical note, you overlook that you live 80 milliseconds in the past (sensory lag to sync toe-tips to retinas) and that you have more subconscious processes than conscious ones, processes you can only rarely consciously affect.
I need an argument against that, and I have one.
comment by ChrisHallquist · 2012-09-18T00:45:11.089Z · LW(p) · GW(p)
Just want to give one piece of positive feedback: I've been meaning to get some recommendations for reading on many of these topics, and these citations are way awesomer than what I would've hoped to get before. Thanks, Luke!
comment by Filipe · 2012-09-15T10:45:04.829Z · LW(p) · GW(p)
What about Drescher's Good and Real: Demystifying Paradoxes from Physics to Ethics? Eliezer said it's "practically Less Wrong in book form."
↑ comment by betterthanwell · 2012-09-20T00:09:33.465Z · LW(p) · GW(p)
Yep. Gloriously lucid and quite readable book.
Encapsulates good chunks of the sequences.
Much more accessible than I had anticipated.
comment by Morendil · 2012-09-15T07:51:41.152Z · LW(p) · GW(p)
I'd also mention
- Hayakawa's Language In Thought and Action
- Axelrod's The Evolution of Cooperation
- Rawls' Theory of Justice (though the Sequences don't discuss CEV much)
- maybe Peter Singer?
↑ comment by lukeprog · 2012-09-15T21:16:00.170Z · LW(p) · GW(p)
Hayakawa's Language In Thought and Action
...is not what I'd call "mainstream academia." Its program of "general semantics" is instead what Martin Gardner labeled as "cultism and pseudo-science" in one chapter of Fads and Fallacies in the Name of Science. Despite this, Language in Thought and Action is pretty good.
↑ comment by Manfred · 2012-09-15T08:09:00.697Z · LW(p) · GW(p)
Could you connect them to the sequences like Luke did please? To the extent that I am familiar with your list, I'm having a hard time seeing it.
↑ comment by Morendil · 2012-09-15T08:59:19.143Z · LW(p) · GW(p)
Eliezer has explicitly mentioned Hayakawa in Extensions and Intensions. Axelrod is important to understanding the iterated Prisoner's Dilemma. Singer is one of the better-known proponents of a "shut up and multiply" approach to utilitarianism, agreeing with Eliezer's conclusions in various places (eg). Rawls' notions of the "veil of ignorance" and "reflective equilibrium" have been mentioned in connection with Coherent Extrapolated Volition - when I first came across CEV the similarities with Rawls stuck out like a sore thumb.
↑ comment by wedrifid · 2012-09-21T11:34:48.546Z · LW(p) · GW(p)
And remember, billionaire friend Peter Thiel financed this immense production, which E.Y. wrote while on Thiel's payroll at SIAI. Its immensity isn't a product of original thought but, in the end, of a pampered parasite with too much time on his hands.
Seriously? DevilWorm is criticizing Eliezer for managing to find financial support while producing his intellectual output? That's how most intellectual work gets done---both within and without formal academic establishments and in various forms across time. This criticism is utterly pathetic.
I don't understand how DevilWorm has been upvoted here (+3 prior to my vote). The remainder of the comment is almost as bad---as is every other comment he has written. Moreover a brief glance at the user's comment history identifies the DevilWorm account as rather obviously yet another sock-puppet created for the purpose of acting out his personal animosity.
How the heck are our new anti-trolling measures going to work if people go around upvoting blatant trolling like this? I mean sure, this guy seems to have a basic grasp of grammar and punctuation but apart from that he doesn't seem to have contributed anything but some filler in which to embed some unnecessary insults.
DevilWorm's comments (and all other slander-sockpuppets) should not be upvoted. They should be downvoted to invisibility until someone has a chance to ban the comments---and that only because a 'ban user by name and IP' feature is missing.
↑ comment by Vladimir_Nesov · 2012-09-21T11:40:47.504Z · LW(p) · GW(p)
I'm actually moderately alarmed by the level at which their comments are upvoted (with all the bold face and whatnot). This event potentially adds motivation to the "Don't let users with low Karma upvote things" change.
↑ comment by mrglwrf · 2012-09-21T16:34:55.248Z · LW(p) · GW(p)
If a non-negligible number of people upvote comments expressing negative opinions of Eliezer Yudkowsky or the Sequences, what leads you to the conclusion that the best response is to label these comments "slander" and cast for roundabout ways to suppress them? If you want an echo-chamber (a reasonable thing to want), that can be easily and non-disingenuously accomplished, for instance by making it explicit policy that disagreement with local authority figures is not permitted.
↑ comment by Vladimir_Nesov · 2012-09-21T16:53:10.603Z · LW(p) · GW(p)
I'm afraid of the acceptance and approval of low quality comments, irrespective of the positions they express.
↑ comment by mrglwrf · 2012-09-21T17:50:18.746Z · LW(p) · GW(p)
I am not calling you a liar, because I accept that you are sincere, but I don't believe you. The claim that you determine the quality of comments without regard to the positions they express is outlandish, for at least two reasons. One, that you are human, and therefore subject to the same biases as every other human known to have ever existed, meaning that you will inevitably tend to appraise posts that agree with your views more favorably than those that disagree. Two, that if you aren't judging comments' quality by the positions they express, there's little of substance left by which you could judge them. The vast majority of comments and posts are neither formal nor rigorous enough for their reasoning, when considered solely on the comments' own merits, to hold up to any serious scrutiny. So that leaves presentation, and...?
↑ comment by Vladimir_Nesov · 2012-09-21T17:58:25.641Z · LW(p) · GW(p)
The claim that you determine the quality of comments without regard to the positions they express
...is not one I've made. Caring about quality of comments doesn't require having perfect perception of this quality that is unaffected by other things. But I'm quite confident in my judgment of the quality of the comments that started this conversation.
The vast majority of comments and posts are neither formal nor rigorous enough for their reasoning, when considered solely on the comments' own merits, to hold up to any serious scrutiny
They should be amenable to steel-manning, not impossible to criticize (though this is not what I was talking about).
↑ comment by metatroll · 2012-09-21T23:37:16.132Z · LW(p) · GW(p)
Owing to my low karma, I can't respond directly to DevilWorm's comment. But this is for you, my friend.
DevilWorm, I don't like to judge a person too harshly. I acknowledge that in principle your trolling has the potential to make a positive contribution to the community. It would be cruel to just say "go back to troll school until you measure up". But to be honest, I do have my doubts about whether you have what it takes. To be brutally honest, the world only needs good trolls and bad trolls, and you are neither.
Bad trolls (in the good sense of bad) are the ones who are born to trolldom. They are the griefers, the naturally vindictive spirits. But among genuinely evil trolls, you wouldn't last a day. If you enrolled in the Sith Academy's trolling masterclass, they would end up using you for target practice.
Good trolls have trolldom thrust upon them. Through a mysterious mutant strain in their makeup, they see something that others cannot, and they are driven to the status of troll just by bearing witness to truth. But you are that sad case of someone who mistakes their own blind spot for superior vision: you think you can see that there is nothing to see here.
Finally we have the ugly trolls, the wannabes, the ones who aspire to trolldom. My advice is, give it up. Don't use boldface, don't use italics, don't call people semiliterate when you don't know where to place an apostrophe yourself. Be humble, accept your station in life as just another commenter who occasionally makes a valid point. It's better to be an ordinary person who makes a small but genuine contribution, than a blowhard who contributes nothing at all.
Your concerned friend, metatroll
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T14:04:28.693Z · LW(p) · GW(p)
And lo, people began tweeting:
Which is false. This pushes as far in the opposite wrong direction as the viewpoint it means to criticize.
Evolutionary biology, the non-epistemological part of the exposition of quantum mechanics, and of course heuristics and biases, are all not original. They don't look deceptively original either; they cite or attributed-quote the sources from which they're taken. I have yet to encounter anyone who thinks the Sequences are more original than they are.
When it comes to the part that isn't reporting on standard science, the parts that are mostly dealt with by modern "philosophers" rather than experimental scientists of one kind or another, the OP is vastly overstating how much of the Sequences are similar to the standard stuff out there. There is such a vast variety of philosophy that you can often find a conclusion similar to anything, to around the same degree that Leibniz's monadology anticipated timeless quantum mechanics, i.e., not very much. The motivations, the arguments by which things are pinned down, the exact form of the conclusions, and what is done with those conclusions, is most of the substance - finding a conclusion that happens to look vaguely similar does not mean that I was reporting someone else's academic work and failing to cite it, or reinventing work that had already been done. It is not understating any sort of "close agreement" with even those particular concluders, let alone the field as a whole within which those are small isolated voices. Hofstadter's superrationality is an acknowledged informal forerunner of TDT. But finding other people who think you ought to cooperate in the PD, but can't quite formalize why, is not the same as TDT being preinvented. (Also TDT doesn't artificially sever decision nodes from anything upstream; the idea is that observing your algorithm, but not its output, is supposed to screen off things upstream. This is "similar" to some attempts to rescue evidential decision theory by e.g. Eells, but not quite the same thing when it comes to important details like not two-boxing on Newcomb's Problem.) And claiming that in principle philosophical intuitions arise within the brain is not the same as performing any particular dissolution of a confused question, or even the general methodology of dissolution as practiced and described by Yudkowsky or Drescher (who actually does agree and demonstrate the method in detail within "Good and Real").
I'm also still not sure that Luke quite understands what the metaethics sequence is trying to say, but then I consider that sequence to have basically failed at exposition anyway. Unfortunately, there's nothing I can point Luke or anyone else at which says the same thing in more academic language.
Several of these citations are from after the originals were written! Why not (falsely) claim that academia is just agreeing with the Sequences, instead?
I don't understand what the purpose of this post was supposed to be - what positive consequence it was supposed to have. Lots of the Sequences are better exposition of existing ideas about evolutionary biology or cognitive biases or probability theory or whatever, which are appropriately quoted or cited within them? Yes, they are. People introducing Less Wrong should try to refer to those sources as much as possible when it comes to things like heuristics and biases, rather than talking like Eliezer Yudkowsky somehow invented the idea of scope insensitivity, so that they don't sound like phyg victims? Double yes. But writing something that predictably causes some readers to get the impression that ideas presented within the Sequences are just redoing the work of other academics, so that they predictably tweet "Eliezer Yudkowsky's Sequences are mostly not original",
...I do not think the creation of this misunderstanding benefits anyone. It is also a grave sin to make it sound like you're speaking for a standard academic position when you're not!
And I think Luke is being extremely charitable in his construal of what's "already" been done in academia. If some future anti-Luke is this charitable in construing how much of future work in epistemology and decision theory was "really" all done within the Sequences back in 2008, they will claim that everything was just invented by Eliezer Yudkowsky way back then - and they will be wrong - and I hope somebody argues with that anti-Luke too, and doesn't let any good feeling for ol' E. Y. stand in their way, just as we shouldn't be prejudiced here by wanting to affiliate with academia or something.
I get what this is trying to do. There's a spirit in LW which really is a spirit that exists in many other places; you can get it from Feynman, Hofstadter, the better class of science fiction, Tooby and Cosmides, many beautiful papers that were truly written to explain things as simply as possible - the same place I got it. (Interesting side note: John Tooby is apparently an SF fan who grew up reading van Vogt and Null-A, so he got some of his spirit from the same sources I did! There really is an ancient and honorable tradition out there.) If someone encounters that spirit in LW for the first time, they'll think I invented it. Which I most certainly did not. If LW is your first introduction to these things, then you really aren't going to know how much of the spirit I learned from the ancient masters... because just reading a citation, or even a paragraph-long quote, isn't going to convey that at all. The only real way for people to learn better is to go out and read Language in Thought and Action or The Psychological Foundations of Culture. Doing this, I would guess, gave Luke an epiphany he's trying to share - there's a whole world out there, not just LW, the way I first thought. But the OP doesn't do that. It doesn't get people to read the literature. Why should they? From what they can see, it's already been presented to them on LW, after all. So they won't actually read the literature and find out for themselves that it's not what they've already read.
There's literature out there which is written in the same spirit as LW, but with different content. Now that's an exciting message. It might even get people to read things.
Replies from: lukeprog, chaosmosis, Thrasymachus, ciphergoth, Wei_Dai, lukeprog, buybuydandavis, lukeprog, Morendil, wedrifid, Epiphany, Emile, CronoDAS
↑ comment by lukeprog · 2012-09-15T22:35:41.414Z · LW(p) · GW(p)
the OP is vastly overstating how much of the Sequences are similar to the standard stuff out there... I think Luke is being extremely charitable in his construal of what's "already" been done in academia
Do you have a Greasemonkey script that rips all the qualifying words out of my post, or something? I said things like:
- "Eliezer's posts on evolution mostly cover material you can find in any good evolutionary biology textbook"
- "much of the Quantum Physics sequence can be found in quantum physics textbooks"
- "Eliezer's metaethics sequences includes dozens of lemmas previously discussed by philosophers"
- "Eliezer's free will mini-sequence includes coverage of topics not usually mentioned when philosophers discuss free will (e.g. Judea Pearl's work on causality), but the conclusion is standard compatibilism."
- "[Eliezer's posts] suggest that many philosophical problems can be dissolved into inquiries into the cognitive mechanisms that produce them, as also discussed in"
- "[Eliezer's posts] make the point that value is complex, a topic explored in more detail in..."
Your comment above seems to be reacting to a different post that I didn't write, one that includes (false) claims like: "The motivations, the arguments by which things are pinned down, the exact form of the conclusions are mostly the same between The Sequences and previous work in mainstream academia."
I have yet to encounter anyone who thinks the Sequences are more original than they are.
Really? This is the default reaction I encounter. Notice that when the user 'Thomas' below tried to name just two things he thought were original with you, he got both of them wrong.
Here's a report of my experiences:
People have been talking about TDT for years but nobody seems to have noticed Spohn until HamletHenna and I independently stumbled on him this summer.
I do find it hard to interpret the metaethics sequence, so I'm not sure I grok everything you're trying to say there. Maybe you can explain it to me sometime. In any case, when it comes to the pieces of it that can be found elsewhere, I almost never encounter anyone who knows their earlier counterparts in (e.g.) Railton & Jackson — unless I'm speaking to someone who has studied metaethics before, like Carl.
A sizable minority of people I talk to about dissolving questions are familiar with the logical positivists, but almost none of them are familiar with the recent cogsci-informed stuff, like Shafir (1998) or Talbot (2009).
As I recall, Less Wrong had never mentioned the field of "Bayesian epistemology" until my first post, The Neglected Virtue of Scholarship.
Here's a specific story. I once told Anna that once I read about intelligence explosion I understood right away that it would be disastrous by default, because human values are incredibly complex. She seemed surprised and a bit suspicious and said "Why, had you read Joshua Greene?" I said "Sure, but he's just one tip of a very large iceberg of philosophical and scientific work demonstrating the complexity of value. I was convinced of the complexity of value long ago by metaethics and moral psychology in general."
Several of these citations are from after the originals were written! Why not (falsely) claim that academia is just agreeing with the Sequences, instead?
Let's look at them more closely:
Lots of cited textbooks were written after the Sequences, because I wanted to point people to up-to-date sources, but of course they mostly summarize results that are a decade old or older. This includes books like Glimcher (2010) and Dolan & Sharot (2011).
Batson (2011) is a summary of Batson's life's work on altruism in humans, almost all of which was published prior to the Sequences.
Spohn (2012) is just an update to Spohn's pre-Sequences work on his TDT-ish decision theory, included for completeness.
Talbot (2009) is the only one I see that is almost entirely composed of content originating after the Sequences, and it too was included for completeness, immediately after another work written before the Sequences: Shafir (1998).
I don't understand what the purpose of this post was supposed to be - what positive consequence it was supposed to have.
That's too bad, since I answered this question at the top of the post. I am trying to counteract these three effects:
- Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
- Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
- If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.
I find problem #1 to be very common, and a contributor to the harmful, false, and popular idea that Less Wrong is a phyg. I've been in many conversations in which (1) someone starts out talking as though Less Wrong views are parochial and weird, and then (2) I explain the mainstream work behind or similar to every point they raise as parochial and weird, and then (3) after this happens 5 times in a row they seem kind of embarrassed and try to pretend like they never said things suggesting that Less Wrong views are parochial and weird, and ask me to email them some non-LW works on these subjects.
Problem #2 is common (see the first part of this comment), and seems to lead to phygish hero worship, as has been pointed out before.
Problem #3, I should think, is uncontroversial. Many of your posts have citations to related work, but most of them do not (as is standard practice in the blogosphere), and like I said, I don't think it would have been a good idea for you to spend time digging up citations instead of writing the next blog post.
writing something that predictably causes some readers to get the impression that ideas presented within the Sequences are just redoing the work of other academics, so that they predictably tweet ...I do not think the creation of this misunderstanding benefits anyone
Predictable misunderstandings are the default outcome of almost anything 100+ people read. There's always a trade-off between maximal clarity, readability, and other factors. But, I'm happy to tweak my original post to try to counteract this specific misunderstanding. I've added the line: "(edit: probably most of their content is original)".
[Further reading, I would guess] gave Luke an epiphany he's trying to share - there's a whole world out there, not just LW the way I first thought.
Remember that I came to LW with a philosophy and cogsci (especially rationality) background, and had been blogging about biases and metaethics and probability theory and so on at CommonSenseAtheism.com for years prior to encountering LW.
I get what this is trying to do. There's a spirit in LW which really is a spirit that exists in many other places, you can get it from Feynman, Hofstadter, the better class of science fiction, Tooby and Cosmides, many beautiful papers that were truly written to explain things as simply as possible, the same place I got it.
That is definitely not the spirit of my post. If you'll recall, I once told you that if all human writing were about to be destroyed except for one book of our choosing, I'd go with The Sequences. You can't get the kind of thing that CFAR is doing solely from Feynman, Kahneman, Stanovich, etc. And you can't get FAI solely from Good, Minsky, and Wallach — not even close. Again, I get the sense you're reacting to a post with different phrasing than the one I actually wrote.
So they won't actually read the literature and find out for themselves that it's not what they've already read.
Most people won't read the literature either you or I link to. But many people will, like Wei Dai.
Case in point: Remember Benja's recent post on UDT that you praised as "Original scientific research on saving the world"? Benja himself wrote that the idea for that post clicked for him as a result of reading one of the papers on logical uncertainty I linked to from So You Want to Save the World.
Most people won't read my references. But some of those who do will go on to make a sizable difference as a result. And that is one of the reasons I cite so many related works, even if they're not perfectly identical to the thing I or somebody else is doing.
Replies from: None, Wei_Dai, CarlShulman, Eliezer_Yudkowsky
↑ comment by [deleted] · 2012-09-16T16:14:59.391Z · LW(p) · GW(p)
Most people won't read my references. But some of those who do will go on to make a sizable difference as a result. And that is one of the reasons I cite so many related works, even if they're not perfectly identical to the thing I or somebody else is doing.
FWIW, Luke's rigorous citation of references has been absurdly useful to me when doing my research. It's one of the aspects of reading LW that makes it worthwhile and productive.
Luke is already aware that I've utilized his citations to a great extent, but I wanted to publicly thank him for all that awesome work. I'd also like to thank others who have done similar things, such as Klevador. We need more of this.
↑ comment by Wei Dai (Wei_Dai) · 2012-09-17T12:06:04.726Z · LW(p) · GW(p)
I am trying to counteract these three effects
I think a valid criticism can be made that while you were trying to counteract these three effects (which is clearly an important and useful effort), you didn't take enough care to avoid introducing a new effect, of making some people think the Sequences are less original than they actually are. (For example you didn't ask Eliezer to double check your descriptions of how the Sequences posts relate to the academic works, and you didn't give some examples of where the Sequences are original.)
This is bad because in addition to communicating various ideas, the Sequences also serve as evidence of Eliezer's philosophy and rationality talents/skills, which is useful for potential donors/supporters to judge the likely future effectiveness of the Singularity Institute in achieving its goals.
Replies from: lukeprog, TimS
↑ comment by lukeprog · 2012-09-18T02:54:50.463Z · LW(p) · GW(p)
I agree I could have spent a paragraph reinforcing the originality of The Sequences.
As for asking Eliezer to check the article before posting: I've sent Eliezer things for feedback before, and he usually doesn't give feedback on them until after I stop waiting and post them to LW. But as a result of this post, we've arranged a new heuristic: If I think Eliezer plausibly disagrees with a thing I'm going to post to LW, I'll give him a chance to give feedback on it before I post it.
↑ comment by TimS · 2012-09-17T13:17:36.041Z · LW(p) · GW(p)
From a donor point of view, the question is as much whether Eliezer has made relevant lessons a true part of him as whether he has done original work.
The Sequences are neither necessary nor sufficient to get funding to do actual research (although I hope they are helpful in obtaining funding for research).
↑ comment by CarlShulman · 2012-09-16T07:33:34.087Z · LW(p) · GW(p)
On complexity of value, I didn't see anyone talking about the details of neuroeconomics nor the neuroscientific distinction between "pleasure" and "desire" until I started posting about them
Yvain has posted more than once on this, although with less detail and referencing.
Replies from: lukeprog
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-16T04:18:49.062Z · LW(p) · GW(p)
Do you have a Greasemonkey script that rips all the qualifying words out of my post, or something?
All readers have a Greasemonkey script that rips all the qualifying words out of a post. This is a natural fact of writing and reading.
Your comment above seems to be reacting to a different post that I didn't write
Not the post you wrote - the post that the long-time LWer who Twittered "Eliezer Yudkowsky's Sequences are mostly not original" read. The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable - like you're living in an authorial should-universe. Of course somebody's going to read that post and think "Eliezer Yudkowsky's Sequences are mostly not original"! Of course that's going to be the consequence of writing it! And maybe it's just because I was reading it instead of writing it myself, without having all of your intentions so prominently in my mind, but I don't see why on Earth you'd expect any other message to come across than that. A few qualifying words don't have the kind of power it takes to stop that from happening!
Replies from: lukeprog, TimS, army1987, None
↑ comment by lukeprog · 2012-09-16T04:58:37.997Z · LW(p) · GW(p)
All readers have a Greasemonkey script that rips all the qualifying words out of a post... I don't see why on Earth you'd expect any other message to come across than ["Eliezer's Sequences are mostly not original"].
Do you think most readers misinterpreted my post in that way? I doubt it. It looks to me like one person tweeted "Eliezer's Sequences mostly not original" — a misinterpretation of my post which I've now explicitly denied near the top of the post.
My guess now would be that I probably underestimate the degree to which readers misinterpreted my post (because my own intentions were clear in my mind, illusion of transparency), and that you probably overestimate the degree to which readers misinterpreted my post (because you seem to have initially misinterpreted it, and that misinterpretation diminishes several years of cognitive work that you are justly proud of).
Also: you seem to be focusing on the one tweeted misinterpretation and not taking into account that we have evidence that the post is also achieving its explicitly stated goals, as evidenced by many of the comments on this thread: 1, 2, 3, 4, 5.
↑ comment by TimS · 2012-09-16T04:54:37.384Z · LW(p) · GW(p)
It is very easy to read the sequences and come away thinking that you believe the philosophical thought is original to you. Other than the FAI stuff and decision theory stuff, is that true?
What exactly is wrong with being thought of as a very high-end popularizer? That material is incredibly well presented.
Additionally, people who disagree with your philosophical positions ought not be put in the (EDIT: position) of needing to reinvent the philosophical wheel to engage critically with your essays.
Replies from: wedrifid
↑ comment by wedrifid · 2012-09-16T05:00:02.238Z · LW(p) · GW(p)
Additionally, people who disagree with your philosophical positions ought not be put in the power of needing to reinvent the philosophical wheel to engage critically with your essays.
Put in the position of?
Replies from: TimS
↑ comment by TimS · 2012-09-16T15:58:29.924Z · LW(p) · GW(p)
Yes, thanks.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-09-18T12:58:19.818Z · LW(p) · GW(p)
I'd take out the EDIT - people can see from the comment below that you edited in response to a comment.
↑ comment by A1987dM (army1987) · 2012-09-16T18:44:51.970Z · LW(p) · GW(p)
All readers have a Greasemonkey script that rips all the qualifying words out of a post. This is a natural fact of writing and reading.
I don't. In fact, I sometimes insert such words.
↑ comment by [deleted] · 2012-09-21T07:45:14.903Z · LW(p) · GW(p)
Of course somebody's going to read that post and think "Eliezer Yudkowsky's Sequences are mostly not original"! Of course that's going to be the consequence of writing it!
Only a single conclusion is possible: LukeProg is a TRAITOR!
Replies from: wedrifid
↑ comment by wedrifid · 2012-09-21T11:13:19.427Z · LW(p) · GW(p)
Only a single conclusion is possible: LukeProg is a TRAITOR!
I can understand why this would be negatively received by some - it is clearly hyperbole with a degree of silliness involved. That said - and possibly coincidentally - there is a serious point here. In fact it is the most salient point I noticed when reading the post and initial responses.
In most social hierarchies this post would be seen as a betrayal. An unusually overt and public political move against Eliezer. Not necessarily treason - betrayal of the tribe - but a move against a rival. Of course it would certainly be in the interest of the targeted rival to try to portray the move as treason (or heresy, or whatever other kind of betrayal of the tribe, rather than mere personal conflict).
The above consideration is why I initially expected Eliezer to agree to a larger extent than he did (which evidently wasn't very much!). Before making public statements of a highly status-sensitive nature regarding an ally, the typical political actor will make sure they aren't offending them - they don't take even the small risk of establishing an active rivalry unless they are certain the payoffs are worth it.
This (definitely!) isn't to say that any of the above applies to this situation. Rationalists are weird and in particular can have an unusual relationship between their intellectual and political expression. I.e., they sometimes go around saying what they think.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-21T13:45:59.594Z · LW(p) · GW(p)
The thought that Luke was trying to sabotage my position, consciously or unconsciously, honestly never crossed my mind until I read this comment. Having now considered the hypothesis rather briefly, I assign it a rather low probability. Luke's not like that.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-09-21T19:37:47.763Z · LW(p) · GW(p)
It is perhaps worth noting that wedrifid didn't say anything about motives (conscious or otherwise).
Whether I believe someone is trying to sabotage my position (consciously or unconsciously) is a different question from whether I believe they are making a move against me in a shared social hierarchy. (Although each is evidence for the other, of course.)
↑ comment by chaosmosis · 2012-09-16T04:04:53.234Z · LW(p) · GW(p)
With both your comment here and your comments on the troll-fee issue I've found you coming across as arrogant. This perception seems to roughly match the response that other people have had to those comments as well, since most people disagreed with you in both areas (judging by number of upvotes). I hadn't perceived you that way before now, so I'm wondering if something happened to you recently that's altered the way you post or the way you think. This change is for the worse; I want my old model of Eliezer Yudkowsky back!
Frankly, I have found the sequences to be primarily useful for condensing concepts that I already had inside my head. The ideas expressed in almost all of the sequences are blatantly obvious, but they come across as catchy and often are reducible to a quick phrase. Their value lies in the fact that they make it easy to internalize certain ideas so that they're more readily accessible to me. They also helped clarify the boundaries of some concepts, to a certain extent. The sequences have provided me with a useful terminology, but I don't think they've offered me much else.
What ideas do you believe to be original that you've produced?
Is there a reason that defending the originality of the sequences is so important to you?
Replies from: IlyaShpitser, BayesLives, atorm
↑ comment by IlyaShpitser · 2012-09-18T00:22:40.728Z · LW(p) · GW(p)
With both your comment here and your comments on the troll-fee issue I've found you coming across as arrogant.
You only got this now?
Replies from: Randaly
↑ comment by Randaly · 2012-09-22T22:52:07.254Z · LW(p) · GW(p)
While it wasn't perfectly phrased, I understand where chaosmosis is coming from: I too get the sense that Eliezer is responding significantly less well to criticism, both by misinterpreting or straw-manning what other people have written and by letting negative emotions influence what he writes. However, I don't think that one can draw a line through two data points: after all, what I regard as Eliezer's best response to criticism, Reply to Holden on 'Tool AI', was written well after the Sequences.
↑ comment by BayesLives · 2012-09-16T15:38:16.499Z · LW(p) · GW(p)
"Is there a reason that defending the originality of the sequences is so important to you?"
Yudkowsky may need to begin reviewing the literature on cognitive biases for his own sake at this point.
↑ comment by atorm · 2012-09-17T04:59:06.745Z · LW(p) · GW(p)
I want my old model of Eliezer Yudkowsky back!
Eliezer Yudkowsky is the supreme being to whom it is up to all of us to become superior!
Replies from: wedrifid
↑ comment by wedrifid · 2012-09-17T05:48:09.250Z · LW(p) · GW(p)
I want my old model of Eliezer Yudkowsky back!
Eliezer Yudkowsky is the supreme being to whom it is up to all of us to become superior!
I think chaosmosis would prefer to perceive this as occurring through a change in chaosmosis rather than a change in chaosmosis's evidence about Eliezer.
Replies from: chaosmosis
↑ comment by chaosmosis · 2012-09-17T13:09:31.611Z · LW(p) · GW(p)
No preference.
I don't understand how your comment is responsive to atorm's though, so I might be missing something here.
Replies from: wedrifid
↑ comment by wedrifid · 2012-09-17T13:42:49.425Z · LW(p) · GW(p)
I don't understand how your comment is responsive to atorm's though, so I might be missing something here.
It responds to the disconnect between the quote and the quoted quote, in particular the implication of the latter regarding the former.
↑ comment by Thrasymachus · 2012-09-18T08:30:55.118Z · LW(p) · GW(p)
One anecdote given the 'PR' worries raised:
I have never read the sequences. After reading Luke's post, I am much less likely to: the impression given is that the sequences are generally idiosyncratic takes which recapitulate an already existing and better-organized literature. I also think it is more likely that the sequences are overrated, either through readers being unaware that their (or similar) insights have already been made, or through readers lacking the technical background to critique them.
It also downgraded my estimate of the value of EY's work. Although I was pretty sceptical, I knew there was at least some chance that the sequences really were bursting with new insights and that LW really was streets ahead of mainstream academia. This now seems much less likely - although I don't think EY is a plagiarist, it seems most of the sequences aren't breaking new ground, but are summarizing or unwittingly recapitulating insights that have already been made and taken further elsewhere.
So I can see the motivation for EY to defend their originality: his stock goes down if the sequences are neat summaries but nothing that new, rather than bursting with new and important insights, and EY's stock is important for things like donations and public perception of him and the SI. (Both my likelihood of donating and my regard for SI have been lowered a bit by this post and comments.) However, EY's way of responding to (weakly implied) criticism with catty arrogance compounds the harm.
Replies from: Yvain, Peterdjones
↑ comment by Scott Alexander (Yvain) · 2012-09-18T10:28:23.002Z · LW(p) · GW(p)
If you are at all interested in rationality it would be a huge shame for you to skip the Sequences.
Yes, a lot of the material in the Sequences could also be obtained by reading very very carefully a few hundred impenetrable scholarly books that most people have never heard of in five or ten different disciplines, supplemented by a few journal articles, plus some additional insights by "reading between the lines", plus drawing all the necessary connections between them. But you will not do this.
The Sequences condense all that information, put it in a really fun, really fascinating format, and transfer all of it into the deepest levels of your brain in a way that those hundred books wouldn't. And then there's some really valuable new material. Luke and Eliezer can argue whether the new material is 30% of the Sequences or 60% of the Sequences, but either number is still way more output than most people will produce over their entire lives.
If your worry is that they will just be recapitulating things you already know, I am pretty doubtful; I don't know your exact knowledge level, but they were pretty exciting for me when I first read them and I had college degrees in philosophy and psychology which are pretty much the subjects covered. And if they are new to you, then from a "whether you should read them" point of view it doesn't matter if Eliezer copied them verbatim off Wikipedia.
Seriously. Read the Sequences. Luke, who is the one arguing against their originality above, says that they are the one book he would like to save if there was an apocalypse. I would have to think a long time before saying the same but they're certainly up there.
Also, as a fellow doctor interested in utilitarianism/efficient charity, I enjoyed your blog and associated links.
Replies from: lukeprog, Thrasymachus, army1987, Epiphany
↑ comment by lukeprog · 2012-09-23T18:21:11.801Z · LW(p) · GW(p)
Luke and Eliezer can argue whether the new material is 30% of the Sequences or 60% of the Sequences...
For the record, when I read Eliezer's comments about the originality of The Sequences, it sounds to me like he and I have pretty much the same estimate of how original The Sequences are.
↑ comment by Thrasymachus · 2012-09-21T15:08:35.725Z · LW(p) · GW(p)
Fair enough. Your and Luke's recommendations are enough for me to read at least some, to see if I have got the wrong impression.
↑ comment by A1987dM (army1987) · 2012-09-18T17:37:52.900Z · LW(p) · GW(p)
If you are at all interested in rationality it would be a huge shame for you to skip the Sequences.
You might want to link to "Yes, a blog" by Academian.
↑ comment by Epiphany · 2012-09-21T04:25:26.969Z · LW(p) · GW(p)
The sequences need a summary like the one you just wrote, the way books have a summary on the cover. Maybe this should be taken as a hint that you'd get more mileage out of the sequences with a really good description placed prominently in front of them. That could quickly re-frame non-originality claims as being irrelevant by plainly stating that they're an accessible and entertaining way to learn about logic and bias (implying that the presentation is valuable even if some of the content can be found elsewhere), with (whatever amount) of new content on X, Y, Z topics. If you choose to write such a description, I'd really like to know what you got out of them that your philosophy and psychology degrees didn't give you.
Replies from: None
↑ comment by [deleted] · 2012-09-21T04:49:57.935Z · LW(p) · GW(p)
The sequences need a second edition. It's sheer hubris to think that nothing has changed in four years.
Replies from: wedrifid
↑ comment by wedrifid · 2012-09-21T11:40:14.754Z · LW(p) · GW(p)
The sequences need a second edition. It's sheer hubris to think that nothing has changed in four years.
There would be room for improvement even without anything changing. They were produced as daily blog posts for the purpose of forcing Eliezer to get his thoughts down on a page.
↑ comment by Peterdjones · 2012-09-21T17:04:47.337Z · LW(p) · GW(p)
Actually I think the sequences are worth reading even though I deplore the tub-thumping, lack of informedness, etc.
What would you expect if someone bright but uninformed about philosophy invented their own philosophy?
Lots of ground re-covered. Lots of avoidable errors. Some novel insights.
↑ comment by Paul Crowley (ciphergoth) · 2012-09-16T10:01:46.105Z · LW(p) · GW(p)
I hadn't expected you to disagree with that tweet, so I'm clearly getting something wrong. I wrote that in the hope that it would encourage people to read the Sequences, not put them off - I think people imagine it as this million-word work of revelation, but a very large part of what it is is a work of popular science - turning people on to good existing ideas in psychology and philosophy and biology and physics and suchlike. There is a great deal that is original and valuable in there, but I don't think of it as the majority of the material.
Replies from: army1987, ArisKatsaris, ciphergoth
↑ comment by A1987dM (army1987) · 2012-09-16T18:42:09.003Z · LW(p) · GW(p)
I get your point, but to lots of people the wording of that tweet would have the connotation ‘EY is a plagiarist’, not ‘EY is not a crackpot’.
Replies from: CronoDAS
↑ comment by ArisKatsaris · 2012-09-18T09:25:27.196Z · LW(p) · GW(p)
The word "original" has positive connotations. And therefore the words "unoriginal" or "not original" have negative connotations.
So, yeah, I don't think you'd encourage anyone to read anything by calling it "not original".
Replies from: army1987, ciphergoth, Richard_Kennaway
↑ comment by A1987dM (army1987) · 2012-09-18T17:45:02.155Z · LW(p) · GW(p)
The word "original" has positive connotations.
Except on Wikipedia (where it's usually a euphemism for ‘crackpottish'). ;-)
(As someone on a Wikipedia talk page once said -- quoting from memory, “if we aren't allowed to [do X] the allowed band between original research and plagiarism becomes dangerously narrow”.)
↑ comment by Paul Crowley (ciphergoth) · 2012-09-18T12:24:02.640Z · LW(p) · GW(p)
Right, but I had hoped that the result would be that someone would follow the link in the tweet, after which they find out some things that may cause them to feel more positively.
↑ comment by Richard_Kennaway · 2012-09-18T10:46:04.215Z · LW(p) · GW(p)
So, yeah, I don't think you'd encourage anyone to read anything by calling it "not original".
I don't know anything about the friends ciphergoth is attempting to reach, but I observe that in religion, "original" would be the greater turn-off. In religion, every innovation is heretical, because it is an innovation. To be accepted it must be presented as "not original", either because it is exactly in accordance with official doctrine, or because it is a return to the true religion that the official doctrine has departed from. It is rare for a religion to successfully introduce a new prophet with the power to sweep away the old, and even then ("I am not come to destroy, but to fulfil") the pretence is maintained that no such thing has happened.
Replies from: ryjm
↑ comment by ryjm · 2012-09-18T14:20:30.569Z · LW(p) · GW(p)
Someone who doesn't want to read science-y stuff because they have that kind of mindset is not going to suddenly become curious when someone tells them it's based on science-y stuff from less than 30 years ago.
I like to think of it temporally; that religion is much like rationalists facing the wrong direction. Both occasionally look over their shoulders to confirm their beliefs (although with theists it's more like throwing a homunculus into the distant past and using that for eyes), while most of the time the things we really care about and find exciting are in front of us. Original vs unoriginal with respect to modern thought is of no practical interest to someone with the "every innovation is heretical" mindset unless it is completely within their usual line of sight - heretical is code for "I don't want to keep looking over my shoulder", not "I hate the original on principle". So unless you put that "original" encouragement thousands of years ago where they can see it, where it's a matter of one in front and one behind, the distinction between which is the greater turn-off is not going to matter, or bait anyone into turning around - there is nothing in their usually observed world to relate it to.
↑ comment by Paul Crowley (ciphergoth) · 2012-09-16T10:40:10.809Z · LW(p) · GW(p)
Thinking about it further though, this makes something of a nonsense of the original tweet, since it's hard to think what would count as "mostly original" by this standard. You might as well describe e.g. The Better Angels of Our Nature as "mostly not original" since it contains no original research but presents a synthesis of the research of others, building up to a common theme.
The problem I have is that if I say something that sounds positive about the Sequences, that's going to turn my friends off, since they already know I think well of them. By saying something that on first reading sounds negative, I might get their interest, but that only works if they go on to follow the link.
Replies from: wedrifid, Richard_Kennaway
↑ comment by wedrifid · 2012-09-16T10:53:17.832Z · LW(p) · GW(p)
The problem I have is that if I say something that sounds positive about the Sequences, that's going to turn my friends off, since they already know I think well of them.
For example, they may be turned off if you came out and said "The sequences really aren't the parochial ramblings of an intellectual outcast, they are totally in accord with mainstream scientific thinking". But "mostly not original" conveys much of the same message by making a concession to the orthodoxy.
↑ comment by Richard_Kennaway · 2012-09-16T11:20:48.432Z · LW(p) · GW(p)
The problem I have is that if I say something that sounds positive about the Sequences, that's going to turn my friends off, since they already know I think well of them.
I do not understand this. What planet are your friends from? If you're tweeting to your friends, and they already know what you think of the Sequences, why are you tweeting about them to them?
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-09-16T11:34:26.146Z · LW(p) · GW(p)
They are from Earth. Because it would be great for me and for the world if more of my friends took an interest in this sort of thing, and if they have misconceptions that stand in the way of that I'd like to clear up those misconceptions.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2012-09-17T08:01:17.398Z · LW(p) · GW(p)
Because it would be great for me and for the world if more of my friends took an interest in this sort of thing, and if they have misconceptions that stand in the way of that I'd like to clear up those misconceptions.
I understand the goal; but not the action taken to achieve it. Negging the Sequences will get them to take more of an interest?
↑ comment by Wei Dai (Wei_Dai) · 2012-09-15T19:41:25.909Z · LW(p) · GW(p)
I agree that Luke's post might cause some people to update too much in the direction of "the Sequences aren't original". He was wrong or overstated things in the couple of bullet points that I checked out (and pointed out in my earlier comment). He probably should have showed it to you for error-checking and making sure it's being fair before posting it.
I do think having an index of related works is very valuable, for people wanting to do further readings, or figuring out exactly which parts of the Sequences are original.
So they won't actually read the literature and find out for themselves that it's not what they've already read.
I read Spohn right away, and I'm at least planning to read some of the other references. But I'm not sure how typical I am in this regard.
Replies from: None
↑ comment by [deleted] · 2012-09-21T07:50:59.317Z · LW(p) · GW(p)
I do think having an index of related works is very valuable, for people wanting to do further readings, or figuring out exactly which parts of the Sequences are original.
The reader really shouldn't have to figure it out; it's a bit intellectually dishonest to impose that burden on the reader - to the author's reputational benefit.
Replies from: Randaly
↑ comment by Randaly · 2012-09-22T02:18:33.405Z · LW(p) · GW(p)
In general, Eliezer did a fairly good job of citing things that he actually was drawing from, i.e. he didn't plagiarize often. Much of LukeProg's post was simply providing references to similar or independently invented ideas in academia, which were not directly relevant and would have been somewhat inappropriate to put in the posts.
↑ comment by lukeprog · 2012-09-16T07:19:39.604Z · LW(p) · GW(p)
There's literature out there which is written in the same spirit as LW, but with different content. Now that's an exciting message. It might even get people to read things.
Maybe we can start to build up a repository of those things, too. So far, you've recommended:
- Language in Thought and Action
- Psychological Foundations of Culture
- Good and Real
- Rational Choice in an Uncertain World
Unfortunately, those works seem incredibly different to me, so it's hard for me to guess which other works you would also endorse as being in the "LW spirit." I'll try anyway:
- Many of the readings for Aaronson's class Philosophy and Theoretical Computer Science
- Several works by Paul Almond
- Much of Overcoming Bias
- Several blogs of other LWers, e.g. Yvain's blog
↑ comment by pjeby · 2012-09-17T19:25:07.047Z · LW(p) · GW(p)
How about:
Written by a psychologist-philosopher (literally), it reads exactly like a Sequence on five-second approaches to a wide array of thinking errors, carefully cataloged and taxonomized with the information needed to get out of them... and most of them are not thinking errors that have previously been cataloged on LW.
(Even what we commonly refer to here under the heading of "sunk-cost fallacy" is given a much more rigorous, "five-second level" analysis, showing how we get stuck in that fallacy all day long doing ordinary things. Forget sticking with a big multi-year project, he shows how we can get skewered by this fallacy in doing things that take five minutes.)
Replies from: curiousepic
↑ comment by curiousepic · 2012-09-19T17:56:23.200Z · LW(p) · GW(p)
Written by a psychologist-philosopher (literally)
There should be more (literal) philosopher-psychologists.
↑ comment by Randaly · 2012-09-22T02:27:44.485Z · LW(p) · GW(p)
Several essays by Paul Graham (e.g. Keep Your Identity Small or How to Disagree).
↑ comment by buybuydandavis · 2012-09-15T20:48:51.303Z · LW(p) · GW(p)
There's a spirit in LW which really is a spirit that exists in many other places,
Yes, and pointing out those other places here serves two purposes.
It serves to brand LW, so that people passing by can quickly see the kind of spirit here. Yes, there's a whole world out there, and many of us have spent some time in it, so seeing references to that world here serves to quickly communicate some of what LW is about.
References also serve to point people here to other expositions of similar material.
For example, you say:
The only real way for people to learn better is to go out and read Language in Thought and Action
I'd recommend people at some point move on from Hayakawa to Korzybski's Science and Sanity and the whole General Semantics literature. People have spent decades discussing these issues and organizing around them.
It doesn't get people to read the literature. Why should they? From what they can see, it's already been presented to them on LW, after all.
That's not my reaction to references. When I first came here, the references to Jaynes didn't make me think "I've already covered this stuff, no need to read this web site." On the contrary, it made me want to read more. Similarly, seeing a reference to other work associated with a sequence wouldn't make me think "no need to bother reading that, EY has already regurgitated it for me", it would make me want to read the original.
Yes, there's a whole world out there; making it easier to navigate that world with links is a good thing.
↑ comment by lukeprog · 2012-09-15T23:15:10.321Z · LW(p) · GW(p)
TDT doesn't artifically sever decision nodes from anything upstream; the idea is that observing your algorithm, but not its output, is supposed to screen off things upstream.
Pardon me; I'm not yet much of an expert with LW decision theories. When you explained TDT on the whiteboard to Alex (with me listening), you kept talking about "severing" rather than "screening off." I'll try to find a way to modify the OP.
ETA: I remembered I have a recording of that tutorial, and when I checked the recording it turned out my memory was wrong. You did talk about how TDT "screens off" the information whereas CDT "severs" the causal diagram.
↑ comment by Morendil · 2012-09-15T15:07:06.691Z · LW(p) · GW(p)
I don't understand what the purpose of this post was supposed to be - what positive consequence it was supposed to have.
I took the post to be Luke writing notes to himself, in public so as to recruit others' help, toward the kind of bibliography that might be included in an academically acceptable version of the Sequences, or of some parts of them.
The intention being, I gathered, to publish these bibliographies as an adjunct to the Sequences - perhaps in the "wall of references" style of Luke's early posts. (If so, I hope a more user-friendly way of displaying those is worked out first!)
(ETA: the specific positive consequence of that would be to help the reader "find the related works in academia" as per Luke's third numbered point in the OP.)
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T15:40:37.121Z · LW(p) · GW(p)
Why would that actually be a consequence of the OP as written?
Replies from: Morendil
↑ comment by Morendil · 2012-09-15T16:37:06.021Z · LW(p) · GW(p)
I'm stating what I discerned of the intention - I won't presume to judge the OP either as a plan of action, or as a first step in its execution.
Completely agree with your latter addendum that people should read Hofstadter, Hayakawa etc. not as footnotes to your work but for their own merits. Hofstadter I discovered in childhood and I wouldn't be the same person if I hadn't; I read Hayakawa on your recommendation, and am glad I did. Yay to more discussion of works that have the LW-nature, but are not otherwise alluded to in the Sequences. :)
↑ comment by wedrifid · 2012-09-15T15:53:21.031Z · LW(p) · GW(p)
Thank you for clearing that up. Given your occupational affiliation with Luke, I had been overestimating the extent to which you would endorse his position. That is, I wouldn't have expected Luke to write this without checking with you first, so I thought you must have agreed.
↑ comment by Epiphany · 2012-09-21T04:07:06.624Z · LW(p) · GW(p)
Regardless of whether it's original, you're the one making rationality popular. Inspiring this many people to take more interest in rationality is a profoundly worthwhile accomplishment. The world needs teachers who can motivate them to think more clearly. I'm heartened to see your progress.
↑ comment by Emile · 2012-09-15T15:57:07.972Z · LW(p) · GW(p)
The only real way for people to learn better is to go out and read Language in Thought and Action or Rational Choice in an Uncertain World.
Added to my list!
Do you have any more reading suggestions for people who have read the sequences? I read a few books recommended in the book recommendation open threads (or on irc), but was sometimes disappointed ("Thinking in Systems" is not very rigorous and formal, Nassim Taleb's "Fooled by Randomness" takes too much liberty interpreting various concepts, and I'm not a fan of books that start by telling me "don't worry I won't hurt your little brain with equations").
comment by brilee · 2012-09-15T13:02:29.495Z · LW(p) · GW(p)
It's posts like this that remind me that the sequences are vast, excellent, and most importantly of all, not particularly organized at the moment.
Every so often, Lukeprog or others will make a small effort towards collating the sequences, but the resulting product disappears into the ether of Discussion archives.
Talk is cheap, but somebody really needs to do something about the sequences to make them more accessible and visible to a newcomer. The LW wiki index of the sequences is incomplete, and it seems like it hasn't been changed since 'Tetronian' created it six months ago.
Replies from: Rain
comment by Manfred · 2012-09-15T01:20:46.994Z · LW(p) · GW(p)
Likewise, much of the Quantum Physics sequence can be found in quantum physics textbooks, e.g. Sakurai & Napolitano (2010).
I don't think Sakurai is the best reference here - most of an introductory QM book will be about what particles do in the presence of forces, and it treats identical particles in a more complicated language because they can be either fermions or bosons.
A better text would be an introduction for people who want to do quantum computing - those people get to use all the nice abstractions and let the physicists worry about the particles in the presence of forces behind those abstractions :P An example I was able to dig up from a course syllabus was (Robert, not David) Griffiths' Consistent Quantum Theory.
EDIT: Ah, of course the best reference is Feynman's QED.
comment by Risto_Saarelma · 2012-09-16T15:03:20.342Z · LW(p) · GW(p)
This is good. Getting people to read lots of quality stuff by a wide variety of authors can put them on the path of being able to produce high quality output themselves, after overlearning the concepts and ways to present them from many viewpoints.
People who just stick with the sequences can end up parroting the surface jargon and alienating people who expect familiarity with a somewhat wider range of literature from someone whose opinion they would value.
↑ comment by wedrifid · 2012-10-05T18:15:29.877Z · LW(p) · GW(p)
So, you approve of the practice of disappearing comments without any notice of the fact or the reason?
Not as such, but I approve of disappearing anything and everything from known trolling sockpuppet accounts.
(I feel like I should be paying a 5 karma troll-feeding-toll to write this but for some reason there are upvotes where I expected downvotes. I'll wait a day to see how things stabilize then consider if my model of lesswrong users needs to be updated.)
comment by John_Maxwell (John_Maxwell_IV) · 2012-09-17T01:26:22.406Z · LW(p) · GW(p)
I edited the sequences page to add a link to this post.
↑ comment by J_Taylor · 2012-09-25T03:29:06.181Z · LW(p) · GW(p)
If you were an honest person (not one feeding at the trough of Thiel/Yudkowsky), you would vote yourself down.
Assuming we disregard ownership of multiple accounts, the consequent of your conditional is impossible.
Replies from: wedrifid
comment by timtyler · 2012-09-15T00:52:27.499Z · LW(p) · GW(p)
It looks as though there is also earlier work on cooperation in one-shot prisoners dilemmas - e.g.:
- Harrington, Joseph E. Jr. (1995). "Cooperation in a One-Shot Prisoners' Dilemma."
- Heiner, Ronald Asher (2002). "Robust Evolution of Contingent Cooperation in Pure One-Shot Prisoners' Dilemmas."
↑ comment by wedrifid · 2012-10-05T17:43:06.162Z · LW(p) · GW(p)
The sheer magnitude is what impresses gullible readers of the Sequences.
Wow. Deja vu. I actually had to follow this link and double-check the date to see if this was the same comment we dealt with before or just a repetition of the same agenda by the same sockpuppet. If you check DevilWorm's user page you will see that this comment is a copy-and-paste clone of one he previously made that has now been deleted or banned (5 comments below on that page, to be precise). Once again it has received initial upvotes - either from his other accounts or from users who are vulnerable to persuasion on DevilWorm's only topic of discussion (the worthlessness of Eliezer Yudkowsky).
Come on, when I want to harp on about one issue repetitively I at least either make up new speech every time or make an explicit link to the previous one.
comment by Thomas · 2012-09-15T07:30:44.850Z · LW(p) · GW(p)
What is originally his? AFAIK the FOOM and the Friendliness are his.
I am just curious.
Replies from: CarlShulman, knb
↑ comment by CarlShulman · 2012-09-15T09:20:57.030Z · LW(p) · GW(p)
Eliezer credits Nick Bostrom with coming up with the idea of Friendly AI first (and indeed while Eliezer was indifferent to AI risk on the assumption that either superintelligences would be automatically supermoral or it didn't matter what happened). FOOM probably goes to I.J. Good, or SF (Eliezer found out about the idea of a technological singularity by reading Vernor Vinge's science fiction, and closely related ideas are decades older in SF).
↑ comment by knb · 2012-09-15T09:16:22.024Z · LW(p) · GW(p)
FOOM (AKA Intelligence Explosion) was formulated by I.J. Good about 50 years ago.
Replies from: Eliezer_Yudkowsky, Thomas
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T14:12:54.339Z · LW(p) · GW(p)
...and pre-formulated by John W. Campbell, a famous science-fiction editor.
↑ comment by Thomas · 2012-09-15T10:12:36.827Z · LW(p) · GW(p)
Maybe it is just me. But as I understood it, I.J. Good's intelligence explosion is much more "Kurzweilian" - it happens as a consequence of some large improvement all over the place. While for Yudkowsky's FOOM, the right binary string in the RAM of a PC from 2000 would suffice to blow us away.
I think the computer may need to be from today, or even from tomorrow, but this does not change much.
Replies from: timtyler
↑ comment by timtyler · 2012-09-16T16:10:12.907Z · LW(p) · GW(p)
You would need improvements in both software and hardware to compete with natural nanotechnology at its best.
Improvements in software would catalyse improvements in hardware - and vice versa. I think most of the parties involved are on the same page about all this.
comment by theduffman · 2012-10-31T10:23:37.090Z · LW(p) · GW(p)
http://lesswrong.com/lw/va/measuring_optimization_power/ and a couple of posts before and after are variations on the ideas of Daniel Dennett's The Intentional Stance. I loved both versions.
↑ comment by pragmatist · 2012-10-05T17:30:23.187Z · LW(p) · GW(p)
That's why he always refuses to summarize his conclusions.
It seems like his latest sequence is offering summarized versions of at least some of the previous sequences.
comment by CarlShulman · 2012-09-15T02:28:39.023Z · LW(p) · GW(p)
Looking at the PhilPapers.org survey of philosophers, the pattern of views in the main sequences can be predicted rather well with one heuristic: select the option that (religious) philosophers of religion disproportionately reject. Or, alternatively, the option most compatible with Dennett-style naturalism. Exceptions include one-boxing on Newcomb, and perhaps personal identity (which is somewhat tied up with transhumanism, but judged differently by philosophers of computing vs biology).
However, the conjunction of all of these naturalist-favored views has to be less plausible than any of its components. How much less plausible depends on how tightly interconnected these views are, and how much of our evidence pushes on that common core. I.e. to what extent do libertarian free will, moral realism with supernatural moral properties, non-physicalist accounts of consciousness, and so forth stand or fall together?
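As a toy illustration of how much the conjunction's plausibility depends on that interconnection (the numbers below are made-up assumptions, not PhilPapers data):

```python
# Illustrative only: assumed plausibilities, not survey results.
p_each = 0.8   # assumed plausibility of each individual naturalist-favored view
n_views = 5    # number of views conjoined

p_independent = p_each ** n_views  # fully independent views: 0.8^5 ~ 0.33
p_common_core = p_each             # views that stand or fall together: 0.8

print(f"independent: {p_independent:.2f}, common core: {p_common_core:.2f}")
```

Real interconnection presumably lies somewhere between these two extremes, which is exactly the open question here.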
↑ comment by metatroll · 2012-09-25T04:05:18.912Z · LW(p) · GW(p)
limp on for years
More like a few weeks.
All I can do now is be silent. I shall sink into the divine shadow, and will not know the troll or the metatroll, or anything else. Numquam novi nomen eius, nomen troglodytam. ("I never knew its name, the name of the troll.")
comment by Bruno_Coelho · 2012-09-15T18:24:39.329Z · LW(p) · GW(p)
Even when Eliezer's ideas are extremely similar to some SEP article, he assumes: "If the words are not the same, then there are differences." This sentiment converges with point 2.
I suppose this has to do with possible linguistic traps. If we use terms that serve as focal points as a premise, these words probably come with hidden inferences associated with specific groups. Avoiding academic parlance whenever possible is good, but obvious improvements could be made in the writings.