I Stand by the Sequences
post by Grognor · 2012-05-15T10:21:26.469Z · LW · GW · Legacy · 250 comments
Edit, May 21, 2012: Read this comment by Yvain.
Forming your own opinion is no more necessary than building your own furniture.
There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:
- I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
- I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
- I think mainstream science is too slow and we mere mortals can do better with Bayes.
- I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
- I completely accept the conclusion that it is worse to put dust specks in 3^^^3 people's eyes than to torture one person for fifty years. I came up with it independently, so maybe it doesn't count; whatever.
- I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?)
- "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
- Edit, May 27, 2012: You know what? I forgot one: Gödel, Escher, Bach is the best.
There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. One is that I'm not so sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy, and two is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the sequences are badly-organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
I write this because I'm feeling more and more lonely, in this regard. If you also stand by the sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Holden Karnofsky said:
I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.
I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.
So I say this, hoping others will as well:
I stand by the sequences.
And with that, I tap out. I have found the answer, so I am leaving the conversation.
Even though I am not important here, I don't want you to interpret my silence from now on as indicating compliance.
After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.
250 comments
Comments sorted by top scores.
comment by JoshuaZ · 2012-05-15T15:00:56.575Z · LW(p) · GW(p)
and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai
This confuses me since these people are not in agreement on some issues.
Replies from: Wei_Dai, Randaly↑ comment by Wei Dai (Wei_Dai) · 2012-05-15T19:40:31.644Z · LW(p) · GW(p)
This confuses me as well, especially since I was a major contributor to "talk here lately about how we need better contrarians" which the OP specifically disagreed with.
Replies from: Grognor, JoshuaZ↑ comment by Grognor · 2012-05-15T23:09:28.022Z · LW(p) · GW(p)
Where you and those other fellows disagree is typically on policy questions, where I tend not to have any strong opinions at all. (Thus, "don't disagree".) If you will point to a specific example where you and one of those other fellows explicitly disagree on a factual question (or a metaethical question, if you don't consider that a subset of factual questions), I will edit my comment.
Addendum: I agree more with you than Eliezer about what to do, especially re: Some Thoughts and Wanted: Backup Plans.
↑ comment by JoshuaZ · 2012-05-15T19:48:38.622Z · LW(p) · GW(p)
There's a comparison in the back of my mind that may be pattern matching to religious behavior too closely: when one talks to some Orthodox Jews, some of them will claim that they believe everything that some set of major historical rabbis said (say Maimonides and Rashi) when one can easily point to issues where those rabbis disagreed. Moreover, they often use examples like Maimonides or Ibn Ezra who had positions that are in fact considered outright heretical by much of Orthodox Judaism today. I've seen a similar result with Catholics and their theologians. In both cases, the more educated members of the religion seem less inclined to do so, but even the educated members sometimes make such claims albeit with interesting doublethink to justify how the positions really aren't contradictory.
In those cases, what may be going on is that saying "I agree with this list of people and have never seen them as wrong" is a statement of tribal affiliation, declaring agreement with various high-status people in the tribe. It is possible that something similar is happening in this context. Alternatively, it may just indicate that Grognor hasn't read that much by you or by some of the other people on the list.
comment by Vladimir_Nesov · 2012-05-15T21:03:07.535Z · LW(p) · GW(p)
"Think for yourself" sounds vaguely reasonable only because of the abominable incompetence of those tasked with thinking for us.
-- Steven Kaas
comment by Jayson_Virissimo · 2012-05-15T11:37:47.962Z · LW(p) · GW(p)
I write this because I'm feeling more and more lonely, in this regard. If you also stand by the sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Why must we stand-by or stand-away-from? Personally, I lean towards the Sequences. Do you really need to feel lonely unless others affirm every single doctrine?
I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
I accept the MWI of QM as "empirically adequate"; no more, no less.
I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
Cryonics is interesting and worth considering, but the probabilities involved are so low that it is not at all obvious it is a net win after factoring in signalling costs.
I think mainstream science is too slow and we mere mortals can do better with Bayes.
"Science" is so many different things that I think it is much more responsible to divide it up into smaller sets (some of which could really use some help from LessWrongists and others which are doing just fine, thank-you-very-much) before making such blanket generalizations.
I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
This is a point on which I side with the mathematical economists (and not with the ethicists) and just say that there is no good way to make interpersonal utility comparisons when you are considering large diverse populations (or, for that matter, the "easy" case of a genetically related nuclear family).
I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?)
I am confused about Eliezer's metaethics. If you ask 10 LessWrongers what Eliezer's metaethical theory is, you get approximately 10 distinct positions. In other words, I don't know how high a probability to assign to it, because I'm very unsure of what it even means.
"People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
I agree. The world really is mad. I seriously considered the hypothesis that it was I who am mad, but rejected this proposition, partly because my belief-calibration seems to be better than average (precisely the opposite of what you would expect of crazy people). Of course, "madness" is relative, not absolute. I am no doubt insane compared to super-human intelligences (God, advanced AI, Omega, etc.).
Replies from: Will_Newsome, buybuydandavis, Manfred↑ comment by Will_Newsome · 2012-05-15T20:01:38.667Z · LW(p) · GW(p)
You seem to mostly disagree in spirit with all Grognor's points but the last, though on that point you didn't share your impression of the H&B literature.
I'll chime in and say that at some point about two years ago I would have more or less agreed with all six points. These days I disagree in spirit with all six points and with the approach to rationality that they represent. I've learned a lot in the meantime, and various people, including Anna Salamon, have said that I seem like I've gained fifteen or twenty IQ points. I've read all of Eliezer's posts maybe three times over and I've read many of the cited papers and a few books, so my disagreement likely doesn't stem from not having sufficiently appreciated Eliezer's sundry cases. Many times when I studied the issues myself and looked at a broader set of opinions in the literature, or looked for justifications of the unstated assumptions I found, I came away feeling stupid for having been confident of Eliezer's position: often Eliezer had very much overstated the case for his positions, and very much ignored or fought straw men of alternative positions.
His arguments and their distorted echoes lead one to think that various people or conclusions are obviously wrong and thus worth ignoring: that philosophers mostly just try to be clever and that their conclusions are worth taking seriously more-or-less only insofar as they mirror or glorify science; that supernaturalism, p-zombie-ism, theism, and other philosophical positions are clearly wrong, absurd, or incoherent; that quantum physicists who don't accept MWI just don't understand Occam's razor or are making some similarly simple error; that normal people are clearly biased in all sorts of ways, and that this has been convincingly demonstrated such that you can easily explain away any popular beliefs if necessary; that religion is bad because it's one of the biggest impediments to a bright, Enlightened future; and so on. It seems to me that many LW folk end up thinking they're right about contentious issues where many people disagree with them, even when they haven't looked at their opponents' best arguments, and even when they don't have a coherent understanding of their opponents' position or their own position. Sometimes they don't even seem to realize that there are important people who disagree with them, like in the case of heuristics and biases. Such unjustified confidence and self-reinforcing ignorance is a glaring, serious, fundamental, and dangerous problem with any epistemology that wishes to lay claim to rationality.
Replies from: Emile, CuSithBell, siodine↑ comment by Emile · 2012-05-15T20:14:09.756Z · LW(p) · GW(p)
normal people are clearly biased in all sorts of ways
Does anybody actually dispute that?
religion is bad because it's one of the biggest impediments to a bright, Enlightened future;
For what it's worth, I don't hold that position, and it seems much more prevalent in atheist forums than on LessWrong.
Replies from: JoshuaZ, Will_Newsome↑ comment by JoshuaZ · 2012-05-15T20:25:22.649Z · LW(p) · GW(p)
it seems much more prevalent in atheist forums than on LessWrong.
Is it less prevalent here or is it simply less vocal because people here aren't spending their time on that particularly tribal demonstration? After all, when you've got Bayesianism, AI risk, and cognitive biases, you have a lot more effective methods of signaling allegiance to this narrow crowd.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-16T04:31:19.641Z · LW(p) · GW(p)
Well we have openly religious members of our 'tribe'.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-16T04:36:33.460Z · LW(p) · GW(p)
Clear minority, and most comments defending such views are voted down. With the exception of Will, no one in that category is what would probably be classified as high status here, and even Will's status is... complicated.
Replies from: Will_Newsome, Eugine_Nier↑ comment by Will_Newsome · 2012-05-16T19:34:04.471Z · LW(p) · GW(p)
Also I'm not religious in the seemingly relevant sense.
↑ comment by Eugine_Nier · 2012-05-16T05:32:30.860Z · LW(p) · GW(p)
Well this post is currently at +6.
↑ comment by Will_Newsome · 2012-05-15T20:38:03.127Z · LW(p) · GW(p)
Does anybody actually dispute that?
Depends on what connotations are implied. There are certainly people who dispute, e.g., the (practical relevance of the) H&B results on confirmation bias, overconfidence, and so on that LessWrong often brings up in support of the "the world is mad" narrative. There are also people like Chesterton who placed much faith in the common sense of the average man. But anyway I think the rest of the sentence needs to be included to give that fragment proper context.
For what it's worth, I don't hold that position, and it seems much more prevalent in atheist forums than on LessWrong.
Granted.
↑ comment by CuSithBell · 2012-05-15T21:25:17.888Z · LW(p) · GW(p)
Could you point towards some good, coherent arguments for supernatural phenomena or the like?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-15T21:53:07.685Z · LW(p) · GW(p)
Analyzing the sun miracle at Fatima seems to be a good starting point. This post has been linked from LessWrong before. Not an argument for the supernatural, but a nexus for arguments: it shows what needs to be explained, by whatever means. Also worth keeping in mind is the "capricious psi" hypothesis, reasonably well-explicated by J. E. Kennedy in a few papers and essays. Kennedy's experience is mostly in parapsychology. He has many indicators in favor of his credibility: he has a good understanding of the relevant statistics, he exposed some fraud going on in a lab where he was working, he doesn't try to hide that psi if it exists would seem to have weird and seemingly unlikely properties, et cetera.
But I don't know of any arguments that really go meta and take into account how the game theory and psychology of credibility might be expected to affect the debate, e.g., emotional reactions to people who look like they're trying to play psi-of-the-gaps, both sides' frustration with incommunicable evidence or even the concept of incommunicable evidence, and things like that.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-15T23:27:51.744Z · LW(p) · GW(p)
Hm. This... doesn't seem particularly convincing. So it sounds like whatever convinced you is incommunicable - something that you know would be unconvincing to anyone else, but which is still enough to convince you despite knowing the alternate conclusions others would come to if informed of it?
Replies from: Will_Newsome, Eugine_Nier↑ comment by Will_Newsome · 2012-05-15T23:56:35.304Z · LW(p) · GW(p)
Hm. This... doesn't seem particularly convincing.
Agreed. The actually-written-up-somewhere arguments that I know of can at most move supernaturalism from "only crazy or overly impressionable people would treat it as a live hypothesis" to "otherwise reasonable people who don't obviously appear to have a bottom line could defensibly treat it as a Jamesian live hypothesis". There are arguments that could easily be made that would fix specific failure modes, e.g. some LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism, and Randi-style skeptics seem to like fully general explanations/counterarguments too much. But once those basic hurdles are overcome there still seems to be a wide spread of defensible probabilities for supernaturalism based off of solely communicable evidence.
So it sounds like whatever convinced you is incommunicable - something that you know would be unconvincing to anyone else, but which is still enough to convince you despite knowing the alternate conclusions others would come to if informed of it?
Essentially, yes.
Replies from: steven0461, CuSithBell, r_claypool↑ comment by steven0461 · 2012-05-16T00:47:22.779Z · LW(p) · GW(p)
some LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism
Is the point here that supernatural entities that would be too complex to specify into the universe from scratch may have been produced through some indirect process logically prior to the physics we know, sort of like humans were produced by evolution? Or is it something different?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-16T01:09:49.935Z · LW(p) · GW(p)
Alien superintelligences are less speculative and emerge naturally from a simple universe program. More fundamentally the notion of simplicity that Eliezer and Luke are using is entirely based off of their assessments of which kinds of hypotheses have historically been more or less fruitful. Coming up with a notion of "simplicity" after the fact based on past observations is coding theory and has nothing to do with the universal prior, which mortals simply don't have access to. Arguments should be about evidence, not "priors".
Replies from: siodine↑ comment by siodine · 2012-05-16T04:00:44.393Z · LW(p) · GW(p)
More fundamentally the notion of simplicity that Eliezer and Luke are using is entirely based off of their assessments of which kinds of hypotheses have historically been more or less fruitful
...
Coming up with a notion of "simplicity" after the fact based on past observations is coding theory and has nothing to do with the universal prior. Arguments should be about evidence, not "priors".
It isn't technically a universal prior, but it counts as evidence because it's historically fruitful. That leaves you with a nitpick rather than showing "LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism."
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-16T04:17:07.650Z · LW(p) · GW(p)
I don't think it's nitpicking as such to point out that the probability of supernaturalism is unrelated to algorithmic probability. Bringing in Kolmogorov complexity is needlessly confusing, and even Bayesian probability isn't necessary because all we're really concerned with is the likelihood ratio. The error I want to discourage is bringing in confusing uncomputable mathematics for no reason and then asserting that said mathematics somehow justify a position one holds for what are actually entirely unrelated reasons. Such errors harm group epistemology.
Replies from: siodine↑ comment by siodine · 2012-05-16T12:35:20.267Z · LW(p) · GW(p)
I don't think it's nitpicking as such to point out that the probability of supernaturalism is unrelated to algorithmic probability.
I don't see how you've done that. If KC isn't technically a universal prior (in the way that "objective" isn't technically objective but inter-subjective), you can still use KC as evidence for a class of propositions (and probably the only meaningful class of propositions). For that class of propositions you have automatic evidence for or against them (in the form of KC), and so it's basically a ready-made prior, because it passes from the posterior to the prior immediately anyway.
The error I want to discourage is bringing in confusing uncomputable mathematics for no reason and then asserting that said mathematics somehow justify a position one holds for what are actually entirely unrelated reasons.
So a) you think LWers' reasons for not believing in supernaturalism have nothing to do with KC, and b) you think supernaturalism exists outside the class of propositions KC can count as evidence for or against?
I don't care about (a), but if (b) is your position I wonder: why?
↑ comment by CuSithBell · 2012-05-16T00:15:17.049Z · LW(p) · GW(p)
That's a shame. Any chance you might have suggestions on how to go about obtaining such evidence for oneself? Possibly via PM if you'd be more comfortable with that.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-16T01:37:52.766Z · LW(p) · GW(p)
I have advice. First off, if psi's real then I think it's clearly an intelligent agent-like or agent-caused process. In general you'd be stupid to mess around with agents with unknown preferences. That's why witchcraft was considered serious business: messing with demons is very much like building mini uFAIs. Just say no. So I don't recommend messing around with psi, especially if you haven't seriously considered what the implications of the existence of agent-like psi would be. This is why I like the Catholics: they take things seriously, it's not fun and games. "Thou shalt not tempt the Lord thy God." If you do experiment, pre-commit not to tell anyone about at least some predetermined subset of the results. Various parapsychology experiments indicate that psi effects can be retrocausal, so experimental results can be determined by whether or not you would in the future talk about them. If psi's capricious then pre-committing not to blab increases the likelihood of significant effects.
Replies from: Eugine_Nier, CuSithBell↑ comment by Eugine_Nier · 2012-05-16T04:45:18.702Z · LW(p) · GW(p)
Various parapsychology experiments indicate that psi effects can be retrocausal, so experimental results can be determined by whether or not you would in the future talk about them. If psi's capricious then pre-commiting not to blab increases likelihood of significant effects.
I just thought of something. What you're saying is that psi effects are anti-inductive.
Replies from: bogus, Will_Newsome↑ comment by bogus · 2012-05-16T20:52:56.145Z · LW(p) · GW(p)
The capricious-psi literature actually includes several proposed mechanisms which could lead to "anti-inductive" psi. Some of these mechanisms are amenable to mitigation strategies (such as not trying to use psi effects for material advantage, and keeping one's experiments confidential); others are not.
↑ comment by Will_Newsome · 2012-05-16T04:59:21.492Z · LW(p) · GW(p)
Indeed.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-16T05:34:49.909Z · LW(p) · GW(p)
Ok, I feel like we should now attempt to work out a theory of psi caused by some kind of market-like game theory among entities.
↑ comment by CuSithBell · 2012-05-18T03:01:51.987Z · LW(p) · GW(p)
Thanks for the advice! Though I suppose I won't tell you if it turns out to have been helpful?
↑ comment by r_claypool · 2012-05-16T22:15:10.599Z · LW(p) · GW(p)
LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism
As lukeprog says here.
↑ comment by Eugine_Nier · 2012-05-16T04:39:35.009Z · LW(p) · GW(p)
I don't entirely agree with Will here. My issue is that there seem to be some events, e.g., Fatima, where the best "scientific explanation" is little better than the supernatural wearing a lab-coat.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-18T03:01:13.412Z · LW(p) · GW(p)
Are there any good supernatural explanations for that one?! Because "Catholicism" seems like a pretty terrible explanation here.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-18T05:57:26.512Z · LW(p) · GW(p)
Because "Catholicism" seems like a pretty terrible explanation here.
Why? Do you have a better one? (Note: I agree "Catholicism" isn't a particularly good explanation, it's just that it's not noticeably worse than any other.)
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-19T01:17:32.906Z · LW(p) · GW(p)
I mentioned Catholicism only because it seems like the "obvious" supernatural answer, given that it's supposed to be a Marian apparition. Though, I do think of Catholicism proper as pretty incoherent, so it'd rank fairly low on my supernatural explanation list, and well below the "scientific explanation" of "maybe some sort of weird mundane light effect, plus human psychology, plus a hundred years". I haven't really investigated the phenomenon myself, but I think, say, "the ghost-emperor played a trick" or "mass hypnosis to cover up UFO experiments by the lizard people" rank fairly well compared to Catholicism.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-19T03:51:30.101Z · LW(p) · GW(p)
"maybe some sort of weird mundane light effect, plus human psychology, plus a hundred years".
This isn't really an explanation so much as clothing our ignorance in a lab coat.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-19T03:53:46.596Z · LW(p) · GW(p)
It does a little more than that. It points to a specific class of hypotheses where we have evidence that in similar contexts such mechanisms can have an impact. The real problem here is that without any ability to replicate the event, we're not going to be able to get substantially farther than that.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-19T04:15:34.037Z · LW(p) · GW(p)
Yeah, it's not really an explanation so much as an expression of where we'd look if we could. Presumably the way to figure it out is to either induce repeat performances (difficult to get funding and review board approval, though) or to study those mechanisms further. I suspect that'd be more likely to help than reading about ghost-emperors, at least.
Replies from: Nornagest↑ comment by Nornagest · 2012-05-19T04:32:36.956Z · LW(p) · GW(p)
Quite. Seems to me that if we're going to hold science to that standard, we should be equally or more critical of ignorance in a cassock; we should view religion as a competing hypothesis that needs to be pointed to specifically, not as a reassuring fallback whenever conventional investigation fails for whatever reason. That's a pretty common flaw of theological explanations, actually.
↑ comment by siodine · 2012-05-15T21:45:16.297Z · LW(p) · GW(p)
Disagree in spirit? What exactly does that mean?
(I happen to mostly agree with your comment while mostly agreeing with Grognor's points--hence my confusion in what you mean, exactly.)
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-15T22:27:48.358Z · LW(p) · GW(p)
Hard to explain. I'll briefly go over my agreement/disagreement status on each point.
MWI: Mixed opinion. MWI is a decent bet, but then again that's a pretty standard opinion among quantum physicists. Eliezer's insistence that MWI is obviously correct is not justified given his arguments: he doesn't address the most credible alternatives to MWI, and doesn't seem to be cognizant of much of the relevant work. I think I disagree in spirit here even though I sort of agree at face value.
Cryonics: Disagree, nothing about cryonics is "obvious".
Meh science, Yay Bayes!: Mostly disagree, too vague, little supporting evidence for face value interpretation. I agree that Bayes is cool.
Utilitarianism: Disagree, utilitarianism is retarded. Consequentialism is fine, but often very naively applied in practice, e.g. utilitarianism.
Eliezer's metaethics: Disagree, especially considering Eliezer's said he thinks he's solved meta-ethics, which is outright crazy, though hopefully he was exaggerating.
"'People are crazy, the world is mad' is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature": Mostly disagree, LW is much too confident in the heuristics and biases literature and it's not nearly a sufficient explanation for lots of things that are commonly alleged to be irrational.
Replies from: steven0461, steven0461↑ comment by steven0461 · 2012-05-15T23:51:31.331Z · LW(p) · GW(p)
Disagree, utilitarianism is retarded.
When making claims like this, you need to do something to distinguish yourself from most people who make such claims, who tend to harbor basic misunderstandings, such as an assumption that preference utilitarianism is the only utilitarianism.
Utilitarianism has a number of different features, and a helpful comment would spell out which of the features, specifically, is retarded. Is it retarded to attach value to people's welfare? Is it retarded to quantify people's welfare? Is it retarded to add people's welfare linearly once quantified? Is it retarded to assume that the value of structures containing more than one person depends on no features other than the welfare of those persons? And so on.
Replies from: Will_Newsome, Will_Newsome, Will_Newsome↑ comment by Will_Newsome · 2012-05-16T05:10:04.929Z · LW(p) · GW(p)
I suppose it's easiest for me to just make the blanket metaphilosophical claim that normative ethics without well-justified meta-ethics just aren't a real contender for the position of actual morality. So I'm unsatisfied with all normative ethics. I just think that utilitarianism is an especially ugly hack. I dislike fake non-arbitrariness.
↑ comment by Will_Newsome · 2012-05-26T11:02:57.687Z · LW(p) · GW(p)
You went into the kitchen cupboard
Got yourself another hour, and you gave
Half of it to me
We sat there looking at the faces
Of the strangers in the pages
'til we knew 'em mathematically

They were in our minds
Until forever
But we didn't mind
We didn't know better

So we made our own computer
Out of macaroni pieces
And it did our thinking
While we lived our lives
It counted up our feelings
And divided them up even
And it called our calculation
Perfect love [lives?]

Didn't even know
That love was bigger
Didn't even know
That love was so, so
Hey hey hey

Hey this fire, this fire
It's burning us up
Hey this fire, It's burning us
Oh, oo oo oo, oo oo oo oo

So we made the hard decision
And we each made an incision
Past our muscles and our bones
Saw our hearts were little stones

Pulled 'em out, they weren't beating
And we weren't even bleeding
As we laid them on the granite counter top

We beat 'em up against each other
We beat 'em up against each other
We struck 'em hard against each other
We struck 'em so hard, so hard until they sparked

Hey this fire, this fire
It's burning us up
Hey this fire
It's burning us up
Hey this fire It's burning us
Oh, oo oo oo, oo oo oo oo
Oo oo oo oo oo oo
— Regina Spektor, The Calculation
↑ comment by Will_Newsome · 2012-05-16T00:49:10.032Z · LW(p) · GW(p)
Perhaps I show my ignorance. Pleasure-happiness and preference fulfillment are the only maximands I've seen suggested by utilitarians. A quick Google search hasn't revealed any others. What are the alternatives?
I'm unfortunately too lazy to make my case for retardedness: I disagree with enough of its features and motivations that I don't know where to begin, and I wouldn't know where to end.
Replies from: steven0461↑ comment by steven0461 · 2012-05-16T00:55:01.829Z · LW(p) · GW(p)
What are the alternatives?
Eudaimonia. "Thousand-shardedness". Whatever humans' complex values decide constitutes an intrinsically good life for an individual.
It's possible that I've been mistaken in claiming that, as a matter of standard definition, any maximization of linearly summed "welfare" or "happiness" counts as utilitarianism. But it seems like a more natural place to draw the boundary than "maximization of either linearly summed preference satisfaction or linearly summed pleasure indicators in the brain but not linearly summed eudaimonia".
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-16T01:58:34.955Z · LW(p) · GW(p)
That sounds basically the same as was what I'd been thinking of as preference utilitarianism. Maybe I should actually read Hare.
What's your general approach to utilitarianism's myriad paradoxes and mathematical difficulties?
↑ comment by steven0461 · 2012-05-16T00:16:12.759Z · LW(p) · GW(p)
he doesn't address the most credible alternatives to MWI
I don't think you need to explicitly address the alternatives to MWI to decide in favor of MWI. You can simply note that all interpretations of quantum mechanics either 1) fail to specify which worlds exist, 2) specify which worlds exist but do so through a burdensomely detailed mechanism, 3) admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics. Am I missing something?
Replies from: TAG, Will_Newsome↑ comment by TAG · 2023-08-29T15:00:53.684Z · LW(p) · GW(p)
admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics.
If "all the worlds" includes the non classical worlds, MWI is observationally false. Whether and how decoherence produces classical worlds is a topic of ongoing research.
↑ comment by Will_Newsome · 2012-05-16T01:52:07.853Z · LW(p) · GW(p)
Is that a response to my point specifically or a general observation? I don't think "simply noting" is nearly enough justification to decide strongly in favor of MWI—maybe it's enough to decide in favor of MWI, but it's not enough to justify confident MWI evangelism nor enough to make bold claims about the failures of science and so forth. You have to show that various specific popular interpretations fail tests 1 and 2.
ETA: Tapping out because I think this thread is too noisy.
Replies from: steven0461↑ comment by steven0461 · 2012-05-16T01:58:52.814Z · LW(p) · GW(p)
I suppose? It's hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome. But maybe you have something in mind?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-16T04:57:15.224Z · LW(p) · GW(p)
It's hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome.
It always seems that way until someone proposes a new theoretical framework; afterwards it seems like people were insane for not coming up with said framework sooner.
But maybe you have something in mind?
Well the Transactional Interpretation for example.
Replies from: steven0461↑ comment by steven0461 · 2012-05-16T05:19:02.960Z · LW(p) · GW(p)
That would have been my guess. I don't really understand the transactional interpretation; how does it pick out a single world without using a burdensomely detailed mechanism to do so?
↑ comment by buybuydandavis · 2012-05-16T04:55:34.978Z · LW(p) · GW(p)
I am confused about Eliezer's metaethics. If you ask 10 LessWrongers what Eliezer's metaethical theory is, you get approximately 10 distinct positions. In other words, I don't know how high a probability to assign to it, because I'm very unsure of what it even means.
I'm even more confused that people seem to think it quite natural to spend years debating the ethical positions of someone watching the debate.
I agree. The world really is mad.
A little creative editing with Stirner makes for a catchy line in this regard:
Do not think that I am jesting or speaking figuratively when I regard almost the whole world of men as veritable fools, fools in a madhouse, who only seem to go about free because the madhouse in which they walk takes in so broad a space.
↑ comment by Manfred · 2012-05-15T20:25:13.187Z · LW(p) · GW(p)
I am confused about Eliezer's metaethics. If you ask 10 LessWrongers what Eliezer's metaethical theory is, you get approximately 10 distinct positions. In other words, I don't know how high a probability to assign to it, because I'm very unsure of what it even means.
Luke's sequence of posts on this may help. Worth a shot at least :)
Replies from: bryjnar
comment by [deleted] · 2012-05-15T10:48:42.438Z · LW(p) · GW(p)
I think the Sequences got everything right
That is quite a bit of conjunction you've got going on there. Rather extraordinary if it is true; I've yet to see appropriately compelling evidence of this. Based on what evidence I do see, I think the sequences, at least the ones I've read so far, are probably "mostly right", interesting, and perhaps marginally useful to very peculiar kinds of people for ordering their lives.
I also think the sequences are badly-organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
I think I agree with this.
Replies from: Grognor, Grif↑ comment by Grognor · 2012-05-16T14:00:44.434Z · LW(p) · GW(p)
The error in your comment is that the sequences were all created by a few reliable processes, so it's as much of a conjunction fallacy as "My entire leg will function." Note that this also means that if one of the articles in the Sequences is wrong, it doesn't even mean Eliezer has made a grievous mistake. I have Nick Tarleton to thank for this insight, but I can't find where he originally said it.
Replies from: JoshuaZ, Will_Newsome↑ comment by JoshuaZ · 2012-05-17T17:48:09.237Z · LW(p) · GW(p)
The error in your comment is that the sequences were all created by a few reliable processes, so it's as much of a conjunction fallacy as "My entire leg will function."
Even professional runners will occasionally trip. Even Terry Tao occasionally makes a math error.
The point is that even highly reliable processes are likely to output some bad ideas over the long term.
↑ comment by Will_Newsome · 2012-05-16T19:44:26.092Z · LW(p) · GW(p)
I think it's a Kaasism.
comment by fubarobfusco · 2012-05-15T17:25:49.204Z · LW(p) · GW(p)
Many-worlds is a clearly explicable interpretation of quantum mechanics and dramatically simpler than the Copenhagen interpretation revered in the mainstream. It rules out a lot of the abnormal conclusions that people draw from Copenhagen, e.g. ascribing mystical powers to consciousness, senses, or instruments. It is true enough to use as a model for what goes on in the world; but it is not true enough to lead us to any abnormal beliefs about, e.g., morality, "quantum immortality", or "possible worlds" in the philosophers' sense.
Cryonics is worth developing. The whole technology does not exist yet; and ambitions to create it should not be mistaken for an existing technology. That said, as far as I can tell, people who advocate cryonic preservation aren't deluded about this.
Mainstream science is a social institution commonly mistaken for an epistemology. (We need both. Epistemologies, being abstractions, have a notorious inability to provide funding.) It is an imperfect social institution; reforms to it are likely to come from within, not by abolishing it and replacing it with some unspecified Bayesian upgrade. Reforms worth supporting include performing and publishing more replications of studies; open-access publishing; and registration of trials as a means to fight publication bias. Oh, and better training in probability, too, but everyone can use that. However, cursing "mainstream science" is a way to lose.
Consequentialism is the ground of morality; in a physical world, what else could it be? However, human reasoning about morality is embodied in cognitive algorithms that focus on things like social rule-following and the cooperation of other agents. This is why it feels like deontological and virtue ethics have something going on. We kinda have to deal with those to get on with others.
I am not sure that my metaethics accord with Eliezer's, because I am not entirely sure what Eliezer's metaethics are. I have my own undeveloped crank theory of ethical claims as observations of symmetry among agents, which accords with Eliezer's comments on fairness and also Hofstadter's superrationality, so I'll give this a pass. It strikes me as deeply unfortunate that game theory came so recently in human history — surprise, it turns out the Golden Rule isn't "just" morality, it's also algebra.
The "people are crazy" maxim is a good warning against rationalization; but there are a lot of rationality-hacks to be found that exploit specific biases, cognitive shortcuts, and other areas of improvability in human reasoning. It's probably more useful as a warning against looking for complex explanations of social behaviors which arise from historical processes rather than reasoned ones.
Replies from: army1987, Logos01, Grognor↑ comment by A1987dM (army1987) · 2012-05-15T21:02:57.420Z · LW(p) · GW(p)
the Copenhagen interpretation revered in the mainstream
Is it? I think that the most widely accepted interpretation among physicists is the shut-up-and-calculate interpretation.
Replies from: Cthulhoo, Thomas, Normal_Anomaly↑ comment by Cthulhoo · 2012-05-16T11:14:36.134Z · LW(p) · GW(p)
Is it? I think that the most widely accepted interpretation among physicists is the shut-up-and-calculate interpretation.
There are quite a few people that actively do research and debate on QM foundations, and, among that group, there's honestly no preferred interpretation. People are even looking for alternatives that bypass the problem entirely (e.g. GRW). The debate is fully open at the moment.
Outside of this specific field, yes, it's pretty much shut-up-and-calculate.
↑ comment by Thomas · 2012-05-15T21:16:53.935Z · LW(p) · GW(p)
Yes, doing the calculation and getting the right result is worth many interpretations, if not all of them together.
Besides, interpretations usually give you more than the truth. Which is awkward.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2012-05-17T08:27:56.756Z · LW(p) · GW(p)
Unfortunately, in some cases it is not clear what exactly you should calculate to make a good prediction. Penrose interpretation and MWI can be used to decide - at least sometimes. Nobody has (yet) reached the scale where the difference would be easily testable, though.
↑ comment by Normal_Anomaly · 2012-05-18T17:43:23.356Z · LW(p) · GW(p)
The wikipedia page on the Copenhagen Interpretation says:
According to a poll at a Quantum Mechanics workshop in 1997,[13] the Copenhagen interpretation is the most widely-accepted specific interpretation of quantum mechanics, followed by the many-worlds interpretation.[14] Although current trends show substantial competition from alternative interpretations, throughout much of the twentieth century the Copenhagen interpretation had strong acceptance among physicists. Astrophysicist and science writer John Gribbin describes it as having fallen from primacy after the 1980s.[15]
↑ comment by Logos01 · 2012-05-15T19:51:38.262Z · LW(p) · GW(p)
and dramatically simpler than the Copenhagen interpretation
No, it is exactly as complicated. As demonstrated by its utilization of exactly the same mathematics.
. It rules out a lot of the abnormal conclusions that people draw from Copenhagen, e.g. ascribing mystical powers to consciousness, senses, or instruments.
It is not without its own extra entities of an equally enormously additive nature, however; and even those abnormal conclusions are as valid from the CI as quantum immortality is from MWI.
-- I speak as someone who rejects both.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-15T20:13:55.521Z · LW(p) · GW(p)
No, it is exactly as complicated. As demonstrated by its utilization of exactly the same mathematics.
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity. For example, consider a computer program that when given a positive integer n, outputs the nth prime number. One simple thing it could do is simply use trial division. But another could use some more complicated process, like say brute force searching for a generator of (Z/pZ)*.
In this case, the math being used is pretty similar, so the complexity shouldn't be that different. But that's a more subtle and weaker claim.
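To make JoshuaZ's prime example concrete, here is a minimal illustrative sketch (in Python; the code and function names are mine, not from the thread): two programs that produce exactly the same observable output, the nth prime, but differ in length and internal structure. (Kolmogorov complexity proper is defined by the shortest program producing the output, so this only illustrates that one output admits descriptions of very different lengths.)

```python
def nth_prime_trial_division(n):
    """Return the nth prime using simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes[-1]

def nth_prime_growing_sieve(n):
    """Return the nth prime via a more elaborate growing sieve of Eratosthenes."""
    limit = 16
    while True:
        sieve = [True] * limit
        sieve[0] = sieve[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        primes = [i for i, is_p in enumerate(sieve) if is_p]
        if len(primes) >= n:
            return primes[n - 1]
        limit *= 2  # not enough primes below the limit yet; enlarge and retry

# Identical observable outputs from two differently sized and structured programs.
assert all(nth_prime_trial_division(k) == nth_prime_growing_sieve(k)
           for k in range(1, 100))
```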
Replies from: dlthomas, Logos01↑ comment by dlthomas · 2012-05-15T21:43:56.238Z · LW(p) · GW(p)
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity[.]
Is that true? I thought Kolmogorov complexity was "the length of the shortest program that produces the observations" - how can that not be a one place function of the observations?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-15T23:25:56.820Z · LW(p) · GW(p)
Yes. In so far as the output is larger than the set of observations. Take MWI for example- the output includes all the parts of the wavebranch that we can't see. In contrast, Copenhagen only has outputs that we by and large do see. So the key issue here is that outputs and observable outputs aren't the same thing.
Replies from: dlthomas↑ comment by Logos01 · 2012-05-16T06:20:21.663Z · LW(p) · GW(p)
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity.
Sure. But MWI and CI use the same formulae. They take the same inputs and produce the same outputs.
Everything else is just that -- interpretation.
One simple thing it could do is simply use trial division. But another could use some more complicated process, like say brute force searching for a generator of (Z/pZ)*.
And those would be different calculations.
In this case, the math being used is pretty similar,
No, it's the same math.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-16T15:28:09.755Z · LW(p) · GW(p)
The interpretation in this context can imply unobserved output. See the discussion with dlthomas below. Part of the issue is that the interpretation isn't separate from the math.
Replies from: Logos01↑ comment by Logos01 · 2012-05-17T13:10:32.193Z · LW(p) · GW(p)
"Entities must not be replicated beyond necessity". Both interpretations violate this rule. The only question is which violates it more. And the answer to that seems to one purely of opinion.
So throwing out the extra stuff -- they're using exactly the same math.
comment by komponisto · 2012-05-16T06:31:53.959Z · LW(p) · GW(p)
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai;
Wow. One of these is not like the others! (Hint: all but one have karma > 10,000.)
In all seriousness, being placed in that group has to count as one of the greatest honors of my internet life.
So I suppose I can't be totally objective when I sing the praises of this post. Nonetheless, it is a fact that I was planning to voice my agreement well before I reached the passage quoted above. So, let me confirm that I, too, "stand by" the Sequences (excepting various quibbles which are of scant relevance in this context).
I'll go further and note that I am significantly less impressed than most of LW by Holden Karnofsky's critique of SI, and suspect that the extraordinary affective glow being showered upon it is mostly the result of Holden's affiliation with GiveWell. Of course, that affective glow is so luminous (the post is at, what, like 200 now?) that to say I'm less impressed than everyone else isn't really to say much at all, and indeed I agree that Holden's critique was constructive and thoughtful (certainly by the standards of "the outside world", i.e. people who aren't LW regulars or otherwise thoroughly "infected" by the memes here). I just don't think it was particularly original -- similar points were made in the past by people like multifoliaterose and XiXiDu (not to mention Wei Dai, etc.) -- and nor do I think it is particularly correct.
(To give one example, it's pretty clear to me that "Tool AI" is Oracle AI for relevant purposes, and I don't understand why this isn't clear to Holden also. One of the key AI-relevant lessons from the Sequences is that an AI should be thought of as an efficient cross-domain optimization process, and that the danger is inherent in the notion of "efficient optimization" itself, rather than residing in any anthropomorphic "agency" properties that the AI may or may not have.)
By the way, for all that I may increasingly sound like a Yudkowsky/SI "cultist" (which may perhaps have contributed to my inclusion in the distinguished list referenced above!), I still have a very hard time thinking of myself that way. In fact, I still feel like something of an outsider, because I didn't grow up on science fiction, was never on the SL4 mailing list, and indeed had never even heard of the "technological singularity" before I started reading Overcoming Bias sometime around 2006-07.
(Of course, given that Luke went from being a fundamentalist Christian to running the Singularity Institute in less time than I've been reading Yudkowsky, perhaps it's time for me to finally admit that I too have joined the club.)
Replies from: Grognor↑ comment by Grognor · 2012-05-16T11:34:02.167Z · LW(p) · GW(p)
Wow. One of these is not like the others! (Hint: all but one have karma > 10,000.)
There are many others, as well, but a full list seemed like an extremely terrible idea.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-05-16T19:54:21.384Z · LW(p) · GW(p)
Though I'd like a post that encouraged people to make such lists in the comments, so I could figure out who the people I like like.
Replies from: TheOtherDave, khafra↑ comment by TheOtherDave · 2012-05-16T20:50:26.824Z · LW(p) · GW(p)
You could create such a post.
Alternatively, you could PM the people you like and ask them whom they like.
↑ comment by Grognor · 2012-05-17T19:33:22.648Z · LW(p) · GW(p)
Extreme nitpick: "PM the people you like" is not the converse of "create a post".
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-17T20:31:30.782Z · LW(p) · GW(p)
You are entirely correct. Edited.
comment by prase · 2012-05-15T20:17:16.587Z · LW(p) · GW(p)
Once we get into the habit of openly professing our beliefs for the sake of our micropolitics, we are losing a large portion of our bandwidth to pure noise. I realise some amount of group allegiance signalling is inevitable whenever an opinion is expressed, even when questions are asked, but the recent post by Dmytry and this post are too explicit in this manner. The latter is at least honest, but I have voted both down nevertheless.
comment by FiftyTwo · 2012-05-15T17:06:43.463Z · LW(p) · GW(p)
My "ick" sense is being set off pretty heavily by the idea of people publicly professing their faith in a shared set of beliefs, so this post makes me deeply uncomfortable. At best something like this is a self congratulatory applause light which doesn't add anything, at worst it makes us less critical and leads us further towards the dreaded c word.
Replies from: wgd, None↑ comment by wgd · 2012-05-15T18:47:11.151Z · LW(p) · GW(p)
I disagree. From checking "recent comments" a couple times a day as is my habit, I feel like the past few days have seen an outpouring of criticism of Eliezer and the sequences by a small handful of people who don't appear to have put in the effort to actually read and understand them, and I am thankful to the OP for providing a counterbalance to that.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-16T06:39:17.235Z · LW(p) · GW(p)
It degrades the quality of discussion to profess agreement or disagreement with such a large cluster of ideas simultaneously. Kind of like saying "I'm Republican and the Republican platform is what our country needs", without going into specifics on why the Republican platform is so cool.
I think it would have been fine for Grognor to summarize arguments that have been made in the past for more controversial points in the Sequences. But as it stands, this post looks like pure politics to me.
↑ comment by [deleted] · 2012-05-16T16:56:17.028Z · LW(p) · GW(p)
This is a good feeling. It means we are doing something right.
If we are well-calibrated wrt phyg-risk, you should see large proclamations of agreement about as often as criticisms.
Rationalists should study marching in lockstep and agreeing with the group as much as they should practice lonely dissent.
comment by dlthomas · 2012-05-15T21:44:52.805Z · LW(p) · GW(p)
I think this might be the most strongly contrarian post here in a while...
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-05-16T01:17:13.435Z · LW(p) · GW(p)
And look! It's been upvoted! We are neither an Eliezer-following cult nor an Eliezer-hating cult! :)
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-16T02:09:15.517Z · LW(p) · GW(p)
Actually, if you apply the metric of #votes/#comments, the low vote for a large number of comments suggests that the post has been both heavily upvoted and heavily downvoted.
comment by Jack · 2012-05-15T20:45:48.727Z · LW(p) · GW(p)
I'm pretty sad a post that consists entirely of tribal signaling has been voted up like this. This post is literally asking people to publicly endorse the Sequences because they're somehow under-attack (by whom?!) and no one agrees with them any more. Because some of us think having more smart people who disagree with us would improve Less Wrong? I find glib notions of a "singularity religion" obnoxious but what exactly is the point of asking people here to announce their allegiance to the site founder's collection of blog posts?
Replies from: Viliam_Bur, RolfAndreassen, Grognor↑ comment by Viliam_Bur · 2012-05-16T07:41:14.929Z · LW(p) · GW(p)
This post is literally asking people to publicly endorse the Sequences because they're somehow under-attack (by whom?!) and no one agrees with them any more.
To me it seems more like a spreading attitude of -- "The Sequences are not that important. Honestly, almost nobody reads all of them; they are just too long (and their value does not correspond to their length). Telling someone to 'read the Sequences' is just a polite way to say 'fuck you'; obviously nobody could mean that literally."
I have read most of the Sequences. In my opinion it is material really worth reading; not only better than 99% of the internet, but if it became a book, it would also be better than 99% of books. (Making it a book would be a significant improvement, because there would be an unambiguous ordering of the chapters.) It contains a lot of information, and the information makes sense. And unlike many interesting books, it is not "repeating the basic idea over and over, using different words, adding a few details". It talks about truth, then about mind, then about biases, then about language, then about quantum physics, etc. For me, reading the Sequences was very much worth my time, and I would honestly recommend it to everyone interested in these topics.
What is then wrong with the attitude described above? New members, who didn't read the Sequences yet, are making a decision whether reading the Sequences is worth their time, or whether the information can be learned as a side effect or reading LW forum. As a social species, we often make our decision by what other people are doing. So if it seems that nobody is reading the Sequences, why should I? This is why it is useful that someone makes a public announcement of "I have read the Sequences and it was worth it". It needs to be said explicitly, because many people who didn't read the Sequences are saying that explicitly. -- This is not an attack on people who don't read the Sequences and admit it openly. Speaking the truth is the right thing to do. But if we don't want to expose the filtered evidence, the people who did read the Sequences should admit it openly too. And because it seems that we have "Sequences as an attire" on this website, it is necessary to make it rather explicit.
Also, it would be silly for rationalists to recommend that everyone read the Sequences if we were not really reading them ourselves. We should either read them, or stop suggesting that everyone else do so. So this post is saying that some people actually do read the Sequences; therefore suggesting that others read them is not shorthand for 'fuck you', but is meant seriously.
Related, from another website:
If you indicate your disagreement with the local belief clusters without at least using their jargon, it used to be common for someone to helpfully suggest that "you should try reading the sequences" before attempting to talk to them. The "sequences" [contain] over a hundred and fifty 2,000-3,000-word blog posts. That's [...] around a million words [...] With a few million more words of often-relevant comments. For comparison, the Lord Of The Rings trilogy is 473,000 words. As such, "You should try reading the sequences" is LessWrong for "fuck you." This seems to have stopped since it was called to their attention.
I think that people should read the Sequences, and I mean it literally. (I suggest skipping the comments below the articles. Sometimes they are interesting, but their signal-to-noise ratio is much lower.) If you have already decided to spend a lot of your time on LW, this will prevent you from discussing the same mistakes again and again and again, so in the long term it saves you time. -- Fellow procrastinators, if you have already read 300 LW posts, or 50 posts with all their comments, then you have already read the same amount of text as the Sequences!
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-16T15:51:20.537Z · LW(p) · GW(p)
FWIW, when I read them, I often found the comment threads valuable as well. Admittedly, that might relate to the fact that I read them chronologically, which allowed me to see how the community's thoughts were evolving and influencing the flow of posts.
↑ comment by RolfAndreassen · 2012-05-15T21:54:31.150Z · LW(p) · GW(p)
There's such a thing as a false dissent effect. (In fact I think it's mentioned in the Sequences, although not by that name. Right now I'm too lazy to go find it.) If only those who disagree with some part or another of the Sequences post on the subject, we will get the apparent effect that "Nobody takes Yudkowsky seriously - not even his own cultists!" This in spite of the fact that everyone here (I assume) agrees with some part of the Sequences, perhaps as much as 80%. Our differences are in which 20% we disagree with. I don't think there's anything wrong with making that clear.
For myself, I agree that MWI seems obvious in retrospect, that morality is defined by the algorithm of human thought (whatever it is), and that building programs whose output affects anything but computer screens without understanding all the details of the output is a Bad Idea.
Replies from: prase↑ comment by prase · 2012-05-15T22:34:22.817Z · LW(p) · GW(p)
To counter the effect it would be enough if those who agree with a criticised position supported it by pointing out the errors in the arguments of the critics, or by offering better counterarguments themselves. Staying silent while a belief is being disputed, only to later declare allegiance to it and leave the conversation, seems to be the wrong approach.
↑ comment by Grognor · 2012-05-15T23:27:13.451Z · LW(p) · GW(p)
I was waiting for someone to make this accusation. The only thing missing is the part where I only agree with Yudkowsky because he's high status and I wish to affiliate with him!
Replies from: John_Maxwell_IV, Jack↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-16T06:44:42.199Z · LW(p) · GW(p)
I know that I have observed and corrected a pattern like that in my own thinking in the past. Studying biases is useless if you don't adjust your own thinking when you identify a bias that may be affecting it.
I think your prior for the comment you describe being true should be very high, and I'd like to know what additional evidence you had that brought that probability down. You've mentioned in the past that the high status tone of the sequences caught your attention and appealed to you. Isn't it possible that you are flawed? I know I am.
Replies from: Grognor↑ comment by Grognor · 2012-05-16T07:46:38.273Z · LW(p) · GW(p)
Actually, I think what Jack said is true (the part about this post being signaling, but nothing else in his comment), and my reply is also quite possibly true. But I don't know how I should act differently in light of that possibility. It's a case of epistemic luck if it is true. (As an aside, I think this is Alicorn's best article on this website by leaps and bounds; I recommend it, &c.)
You've mentioned in the past that the high status tone of the sequences caught your attention and appealed to you.
What?
↑ comment by Jack · 2012-05-15T23:38:29.805Z · LW(p) · GW(p)
What accusation? I'm just describing your post and asking questions about it.
Edit: I mean, obviously I have a point of view. But it's not like it was much of a stretch to say what I did. I just paraphrased. How is my characterization of your post flawed?
comment by cousin_it · 2012-05-15T14:44:45.235Z · LW(p) · GW(p)
I think mainstream science is too slow and we mere mortals can do better with Bayes.
Then why aren't we doing better already?
Replies from: orthonormal, army1987, Oscar_Cunningham↑ comment by orthonormal · 2012-05-15T15:10:54.729Z · LW(p) · GW(p)
The institutional advantages of the current scientific community are so apparent that it seems more feasible to reform it than to supplant it. It's worth thinking about how we could achieve either.
↑ comment by A1987dM (army1987) · 2012-05-15T20:56:10.162Z · LW(p) · GW(p)
The Hansonian answer would be “science is not about knowledge”, I guess. (I don't think it's that simple, anyway.)
Replies from: cousin_it↑ comment by cousin_it · 2012-05-15T21:15:20.450Z · LW(p) · GW(p)
I don't understand your comment, could you explain? Science seems to generate more knowledge than LW-style rationality. Maybe you meant to say "LW-style rationality is not about knowledge"?
Replies from: Desrtopa, Xachariah↑ comment by Desrtopa · 2012-05-16T00:53:17.425Z · LW(p) · GW(p)
Science has a large scale academic infrastructure to draw on, wherein people can propose research that they want to get done, and those who argue sufficiently persuasively that their research is solid and productive receive money and resources to conduct it.
You could make a system that produces more knowledge than modern science simply by diverting a substantial portion of the national budget to fund it, so that everyone except the people proposing experiments too poorly designed to be useful gets funding.
Besides which, improved rationality can't simply replace entire bodies of domain specific knowledge.
There are plenty of ways, though, in which mainstream science is inefficient at producing knowledge, such as improper use of statistics, publication bias, and biased interpretation of results. There are ways to do better, and most scientists (at least those I've spoken to about it) acknowledge this, but science is very significantly a social process which individual scientists have neither the power nor the social incentives to change.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2012-05-17T21:15:02.528Z · LW(p) · GW(p)
"There are plenty of ways though, in which mainstream science is inefficient at producing knowledge, such as improper use of statistics, publication bias, and biased interpretation of results. There are ways to do better, and most scientists (at least those I've spoken to about it,) acknowledge this, but science is very significantly a social process which individual scientists have neither the power nor the social incentives to change."
I am an academic. Can you suggest three concrete ways for me to improve my knowledge production, which will not leave me worse off?
Replies from: Rhwawn, Desrtopa↑ comment by Desrtopa · 2012-05-19T22:02:34.457Z · LW(p) · GW(p)
Ways for science as an academic institution, or for you personally? For the latter, Luke has already done the work of creating a post on that. For the former, it's more difficult, since a lot of productivity in science requires cooperation with the existing institution. At the least, I would suggest registering your experiments in a public database before conducting them, if any such database exists within your field, and using Bayesian probability software to process the statistics of your experiments (this will probably not help you at all in getting published, but if you do it in addition to regular significance testing, it should hopefully not inhibit it either).
↑ comment by Xachariah · 2012-05-16T00:25:16.373Z · LW(p) · GW(p)
My default answer for anything regarding Hanson is 'signaling'. How to fix science is a good start.
Science isn't just about getting things wrong or right, but an intricate signaling game. This is why most of what comes out of journals is wrong. Scientists are rewarded for publishing results, right or wrong, so they comb data for correlations which may or may not be relevant. (Statistically speaking, if you comb data 20 different ways, odds are you'll get at least one result that shows a statistically significant correlation from sheer random chance; see the sketch below.) Journals are rewarded for publishing sensational results, and not confirmations or even refutations (especially refutations of things they published in the first place). The reward system is not set up to favor coming up with right answers, but coming up with answers that are sensational and cannot be easily refuted. Being right does make them hard to refute, which is why science is useful at all, but that's not the only way things are made hard to refute.
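As a quick sketch of that multiple-comparisons arithmetic (a minimal illustration; the 20 tests and the 0.05 threshold are just the conventional assumptions, and the simulated data is pure noise):

```python
import random

# Chance of at least one "significant" finding when running k independent
# tests on pure noise, each at significance threshold alpha.
alpha, k = 0.05, 20
print(1 - (1 - alpha) ** k)  # ~0.64: likely, though not guaranteed

# The same thing by simulation: each "test" draws a uniform p-value.
trials = 100_000
hits = sum(any(random.random() < alpha for _ in range(k)) for _ in range(trials))
print(hits / trials)  # also ~0.64
```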
An ideal Bayesian unconstrained by signaling could completely outdo our current scientific system (as it could in all other spheres of life). Even shifting our current system to be more Bayesian, by abandoning the journal system and creating pre-registration of scientific studies, would be a huge upgrade. But science isn't about knowledge; knowledge is just a very useful byproduct we get from it.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-16T01:01:59.297Z · LW(p) · GW(p)
But Grognor (not, as this comment read earlier, army1987) said that "we mere mortals can do better with Bayes", not that "an ideal bayesian unconstrained with signaling could completely outdo our current scientific system". Arguing, in response to cousin_it, that scientists are concerned with signalling makes the claim even stronger, and the question more compelling - "why aren't we doing better already?"
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-05-16T14:24:06.716Z · LW(p) · GW(p)
I had taken “we” to mean 21st-century civilization in general rather than just Bayesians, and the question to mean “why is science doing so badly, if it could do much better just by using Bayes?”
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-16T15:14:19.861Z · LW(p) · GW(p)
I'm fairly confident that "we" refers to LW / Bayesians, especially given the response to your comment earlier. Unfortunately we've got a bunch of comments addressing a different question, and some providing reasons why we shouldn't expect to be "doing better", all of which strengthen cousin_it's question, as Grognor claims we can. Though naturally Grognor's intended meanings are up for grabs.
↑ comment by Oscar_Cunningham · 2012-05-15T15:11:19.922Z · LW(p) · GW(p)
To which gold standard are you comparing us and Science to determine that we are not doing better?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-15T15:13:53.936Z · LW(p) · GW(p)
There's a problem here in that Bayesian reasoning has become quite common in the last 20 years in many sciences, so it isn't clear who "we" should be in this sort of context.
Replies from: moridinamael↑ comment by moridinamael · 2012-05-15T17:04:18.944Z · LW(p) · GW(p)
Indeed. For anyone who has worked at all in oil & gas exploration, the LW treatment of Bayesian inference and decision theories as secret superpowers will seem perplexing. Oil companies have been basing billion dollar decisions on these methods for years, maybe decades.
I am also confused about what exactly we are supposed to be doing. If we had the choice of simply becoming ideal Bayesian reasoners then we would do that, but we don't have that option. "Debiasing" is really just "installing a new, imperfect heuristic as a patch for existing and even more imperfect hardware-based heuristics."
I know a lot of scientists - I am a scientist - and I guess if we were capable of choosing to be Bayesian superintelligences we might be progressing a bit faster, but as it stands I think we're doing okay with the cognitive resources at our disposal.
Not to say we shouldn't try to be more rational. It's just that you can't actually decide to be Einstein.
Replies from: None↑ comment by [deleted] · 2012-05-16T15:26:37.157Z · LW(p) · GW(p)
I think 'being a better Bayesian' isn't about deciding to be Einstein. I think it's about being willing to believe things that aren't 'settled science', where 'settled science' is the replicated and established knowledge of humanity as a whole. See Science Doesn't Trust Your Rationality.
The true art is being able to do this without ending up a New Ager, or something. The virtue isn't believing non-settled things. The virtue is being willing to go beyond what science currently believes, if that's where the properly adjusted evidence actually points you. (I say 'beyond' because I mean to refer to scope. If science believes something, you had better believe it - but if science doesn't have a strong opinion about something you have no choice but to use your rationality).
comment by John_Maxwell (John_Maxwell_IV) · 2012-05-16T06:29:23.038Z · LW(p) · GW(p)
I don't see what this adds beyond making LW more political. Let's discuss ideas, not affiliations!
If you agree with everything Eliezer wrote, you remember him writing about how every cause wants to be a cult. This post looks exactly like the sort of cultish entropy that he advised guarding against to me. Can you imagine a similar post on any run-of-the-mill, non-cultish online forum?
It worries me a lot that you relate ideas so strongly to the people who say them, especially since most of the people you refer to are so high status. Perhaps you could experimentally start using the less wrong anti-kibitzer feature to see if your perception of LW changes?
Replies from: Desrtopa, Grognor↑ comment by Desrtopa · 2012-05-16T13:21:49.324Z · LW(p) · GW(p)
If you agree with everything Eliezer wrote, you remember him writing about how every cause wants to be a cult.
But also what he said about swinging too far in the opposite direction. It's bad for a community to reach a point where it's taboo to profess dissent, but also for it to reach a point where it's taboo to profess wholehearted agreement.
Replies from: prase, TheOtherDave, John_Maxwell_IV↑ comment by prase · 2012-05-16T19:38:35.109Z · LW(p) · GW(p)
Wouldn't it be better if the professed agreement was agreement with ideas rather than with people? The dissent counterpart of this post would say "I disagree with everything this person says". That's clearly pathological.
Replies from: Viliam_Bur, Desrtopa, Grognor↑ comment by Viliam_Bur · 2012-05-17T09:03:47.093Z · LW(p) · GW(p)
This may sound wrong, but "who said that" is Bayesian evidence, sometimes rather strong evidence. If your experience tells you that a given person is right about 95% of the things they say, it is rational to give a 95% prior probability to other things they say.
It is said that we should judge ideas by the ideas alone. In Bayes-speak this means that if you update correctly, enough evidence can fix a wrong prior (how much evidence is needed depends on how wrong the prior was). But gathering evidence is costly, and we cannot pay that cost for every idea around us. Why use a worse prior if a better one is available?
Back to human-speak: if person A is a notorious liar (or a mindkilled person who repeats someone else's lies), person B is careless about their beliefs, and person C examines every idea carefully before telling it to others, then it is rational to react differently to ideas spoken by these three people. The word "everything" is too strong, but saying "if person C said that, I believe it" is OK (assuming that if there is enough counter-evidence, the person will update: both on the idea and on the credibility of C).
Disagreeing with everything one says would be trying to reverse stupidity. There are people who do "worse than random", so doing the opposite of what they say could be a good heuristic; but even for most of them, assigning a 95% probability that they are wrong would be too much.
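A minimal sketch of the arithmetic behind the 95% point above; the piece of counter-evidence and its strength (a 1:4 likelihood ratio) are invented numbers for illustration, not anything claimed in the comment:

```python
# Prior from the source's track record: they have been right about
# 95% of comparable claims, so P(claim is true) starts at 0.95.
prior_true = 0.95

# Hypothetical counter-evidence that is 4x more likely if the claim
# is false than if it is true (likelihood ratio 1:4 against).
lh_if_true, lh_if_false = 1.0, 4.0

posterior_true = (prior_true * lh_if_true) / (
    prior_true * lh_if_true + (1 - prior_true) * lh_if_false
)
print(round(posterior_true, 3))  # ~0.826: the strong prior survives, but it has moved
```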
Replies from: prase↑ comment by prase · 2012-05-17T18:07:27.939Z · LW(p) · GW(p)
You are right, but there is probably some misunderstanding. That personal considerations should be ignored when assessing the probability of an idea, and that one shouldn't express collective agreement with ideas based on their author, are two different suggestions. You argue against the former, while I was stating the latter.
It's important to take into account the context. When an idea X is being questioned, saying "I agree with X, because a very trustworthy person Y agrees with X" is fine with me, although it isn't the best sort of argument one could provide. Starting the discussion "I agree with X1, X2, X3, ... Xn", on the other hand, makes any reasonable debate almost impossible, since it is not practical to argue n distinct ideas at once.
↑ comment by Desrtopa · 2012-05-19T22:18:07.383Z · LW(p) · GW(p)
Well, the professed agreement of this post is with the Sequences, which are a set of ideas rather than a person, even if they were all written by one person. The dissent counterpart of this post would be "I disagree with the entire content of the sequences."
Am I misunderstanding you about something?
Replies from: prase↑ comment by prase · 2012-05-20T19:00:06.315Z · LW(p) · GW(p)
Professing agreement or disagreement with a broad set of rather unrelated ideas is not conducive to productive discussion, because there is no single topic to concentrate on with object-level arguments. Having the set of ideas defined by their author brings in tribal political instincts, which too is not helpful. You are right that the post was formulated as agreement with the Sequences rather than with everything Yudkowsky ever said, but I don't see how this distinction is important. "Everything which Yudkowsky ever said" would also denote a set of ideas, after all.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-20T19:18:36.217Z · LW(p) · GW(p)
"Everything which Yudkowsky ever said" would also denote a set of ideas, after all.
Albeit an internally inconsistent set, given that Yudkowsky has occasionally changed his mind about things.
↑ comment by TheOtherDave · 2012-05-16T15:45:38.187Z · LW(p) · GW(p)
"I think the Sequences are right about everything" is a pro-Sequences position of roughly the same extremity as "I think the Sequences are wrong about everything."
As far as I know, nobody claims the latter, and it seems kind of absurd on the face of it. The closest I've seen anyone come to it is something like "everything true in the Sequences is unoriginal, everything original in them is false, and they are badly written"... which is a pretty damning criticism, but still allows for the idea that quite a lot of what the text expresses is true.
Its equally extreme counterpart on the positive axis should, then, allow for the idea that quite a lot of what the text expresses is false.
To reject overly-extreme positive statements more strongly than less-extreme negative ones is not necessarily expressing a rejection of positivity; it might be a rejection of extremism instead.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-05-16T16:19:32.176Z · LW(p) · GW(p)
"I think the Sequences are right about everything" is a pro-Sequences position of roughly the same extremity as "I think the Sequences are wrong about everything."
As far as I know, nobody claims the latter, and it seems kind of absurd on the face of it.
I wouldn't say that they're comparably extreme. A whole lot of the content of the sequences is completely uncontroversial, and even if you think Eliezer is a total lunatic, it's unlikely that they wouldn't contain any substantively true claims. I'd bet heavily against "The Biggest Secret" by David Icke containing no substantively true claims at all.
I would have some points of disagreement with someone who thinks that everything in the sequences is correct (personally, I doubt that MWI is a slam dunk, because I'm not convinced Eliezer is accurately framing the implications of collapse, and I think CEV is probably a dead end when it comes to programming an FAI, although I don't know of any team which I think has better odds of developing an FAI than the SIAI.) But I think someone who agrees with the entirety of the contents is being far more reasonable than someone who disagrees with the entirety.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-16T17:07:25.387Z · LW(p) · GW(p)
I agree that "I think the Sequences are wrong about everything" would be an absurdly extreme claim, for the reasons you point out.
We disagree about the extremity of "I think the Sequences are right about everything".
I'm not sure where to go from there, though.
Replies from: Kindly↑ comment by Kindly · 2012-05-16T17:52:14.215Z · LW(p) · GW(p)
Suppose you separate the Sequences into "original" and "unoriginal".
The "unoriginal" segment is very likely to be true: agreeing with all of it is fairly straightforward, and disagreeing with all of it is ridiculously extreme.
To a first approximation, we can say that the middle-ground stance on any given point in the "original" segment is uncertainty. That is, accepting that point and rejecting it are equally extreme. If we use the general population for reference, of course, that is nowhere near correct: even considering the possibility that cryonics might work is a fairly extreme stance, for instance.
But taking the approximation at face value tells us that agreeing with every "original" claim, and disagreeing with every "original" claim, are equally extreme positions. If we now add the further stipulation that both positions agree with every "unoriginal" claim, they both move slightly toward the Sequences, but not by much.
So actually (1) "I agree with everything in the sequences" and (2) "Everything true in the Sequences is unoriginal, everything original in them is false" are roughly equally extreme. If anything, we have made an error in favor of (1). On the other hand, (3) "Everything in the Sequences ever is false" is much more extreme because it also rejects the "unoriginal" claims, each of which is almost certainly true.
P.S. If you are like me, you are now wondering what "extreme" means. To be extremely technical (ha), I am interpreting it as measuring the probability of a position re: the Sequences that you expect a reasonable, boundedly-rational person to have. For instance, a post that says "Confirmation bias is a thing" is uncontroversial, and you expect that reasonable people will believe it with probability close to 1. A post that says "MWI is obviously true" is controversial, and if you are generous you will say that there is a probability of 0.5 that someone will agree with it. This might be higher or lower for other posts in the "original" category, but on the whole the approximation of 0.5 is probably favorable to the person that agrees with everything.
So when I conclude that (1) and (2) are roughly equally extreme, I am saying that a "reasonable person" is roughly equally likely to end up at either one of them. This is an approximation, of course, but they are certainly both closer to each other than they are to (3).
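As a toy sketch of that comparison (the claim counts and the 0.99 are invented for illustration; only the 0.5-per-"original"-claim approximation comes from the comment above):

```python
# Assumed (hypothetical) numbers: m "unoriginal" claims that a reasonable
# person accepts with probability 0.99 each, and n "original" claims they
# accept with probability 0.5 each, treated as independent.
m, n = 100, 30
p_unoriginal, p_original = 0.99, 0.5

pos1 = p_unoriginal**m * p_original**n              # (1) agree with everything
pos2 = p_unoriginal**m * (1 - p_original)**n        # (2) accept unoriginal, reject all original
pos3 = (1 - p_unoriginal)**m * (1 - p_original)**n  # (3) reject everything

print(pos1, pos2, pos3)
# Under this approximation (1) and (2) come out identical,
# while (3) is astronomically smaller.
```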
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-16T18:48:12.263Z · LW(p) · GW(p)
Yeah, I think I agree with everything here as far as it goes, though I haven't looked at it carefully. I'm not sure originality is as crisp a concept as you want it to be, but I can imagine us both coming up with a list of propositions that we believe captures everything in the Sequences that some reasonable person somewhere might conceivably disagree with, weighted by how reasonable we think a person could be and still disagree with that proposition, and that we'd end up with very similar lists (perhaps with fairly different weights).
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-16T17:15:40.572Z · LW(p) · GW(p)
I just read that essay and I disagree with it. Stating one's points of disagreement amounts to giving the diffs between your mind and that of an author. What's good practice for scientific papers (in terms of remaining dispassionate) is probably good practice in general. The way to solve the cooperation problem is not to cancel out professing disagreement with professing agreement; it's to track group members' beliefs (e.g. by polling them) and act as a group on whatever the group consensus happens to be. In other words, teach people the value of majoritarianism and its ilk and tell them to use this outside view when making decisions.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-05-16T18:15:59.861Z · LW(p) · GW(p)
What's good practice for scientific papers (in terms of remaining dispassionate) is probably good practice in general.
In terms of epistemic rationality, you can get by fine by raising only points of disagreement and keeping it implicit that you accept everything you do not dispute. But in terms of creating effective group cooperation, which has instrumental value, this strategy performs poorly.
↑ comment by Grognor · 2012-05-16T11:30:18.719Z · LW(p) · GW(p)
This post looks exactly like
Oho! But it is not. You know, the nervousness associated with wanting to not be part of a cult is also a cult attractor. Once again I must point out that you are conveying only connotations, not denotations.
Let's discuss ideas, not affiliations!
No slogans!
Perhaps you could experimentally start using the less wrong anti-kibitzer feature to see if your perception of LW changes?
I tried this for a few weeks; it didn't change anything.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-16T18:01:35.430Z · LW(p) · GW(p)
You know, the nervousness associated with wanting to not be part of a cult is also a cult attractor.
That sounds wrong to me.
I'm more motivated by making Less Wrong a good place to discuss ideas than any kind of nervousness.
Replies from: Viliam_Bur, Grognor↑ comment by Viliam_Bur · 2012-05-17T09:24:41.796Z · LW(p) · GW(p)
I'm more motivated by making Less Wrong a good place to discuss ideas
Meta-ideas are ideas too, for example:
An idea: "Many-Worlds Interpretation of quantum physics is correct, because it's mathematically correct and simplest according to Occam's Razor (if Occam's Razor means selecting the interpretation with greatest Solomonoff Prior)." -- agree or disagree.
A meta-idea: "Here is this smart guy called Eliezer. He wrote a series of articles about Occam's razor, Solomonoff prior, and quantum physics; those articles are relatively easy to read for a layman, and they also explain frequently made mistakes when discussing these topic. Reading those articles before you start discussing your opinions (with high probability repeating the frequently made mistakes explained in those articles) is a good idea to make the conversation efficient." -- agree or disagree.
This topic is about the meta-idea.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-17T17:28:40.193Z · LW(p) · GW(p)
Yes, but that was not what Grognor wrote... He professed a bunch of beliefs and complained that he felt in the minority for having them.
He even explicitly discouraged discussing individual beliefs:
If you also stand by the sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
In other words, he prefers to discuss high-level concerns like whether you are with him or against him over low-level nuts-and-bolts details.
Edit: I see that Grognor has added a statement of regret at the end of his post. I'm willing to give him some of his credibility back.
Replies from: Grognor↑ comment by Grognor · 2012-05-17T19:53:27.852Z · LW(p) · GW(p)
He professed a bunch of beliefs and complained that he felt in the minority for having them.
I don't like your tone. Anyway, this is wrong; I suspected I was part of a silent majority. Judging by the voting patterns (every comment indicating disagreement is above 15, every comment indicating agreement is below 15 and half are even negative) and the replies themselves, I was wrong and the silence is because this actually is a minority position.
In other words, he prefers to discuss high-level concerns like whether you are with him or against him over low-level nuts-and-bolts details.
No! Just in this thread! There are all the other threads on the entire website to debate at the object-level. I am tempted to say that fifteen more times, if you do not believe it.
I'm willing to give him some of his credibility back.
O frabjous day, JMIV does not consider me to be completely ridiculous anymore. Could you be more patronizing?
Edit, in response to reply: In retrospect, a poll would have been better than what I ended up doing. But doing nothing would have been better still. At least we agree on that.
Replies from: John_Maxwell_IV, JoshuaZ↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-17T20:08:33.334Z · LW(p) · GW(p)
Hm. Perhaps you could have created an anonymous poll if you wished to measure opinions? Anonymity means people are less likely to form affiliations.
Just one thread devoted to politics is probably okay, but I would prefer zero.
↑ comment by JoshuaZ · 2012-05-17T20:13:46.970Z · LW(p) · GW(p)
(every comment indicating disagreement is above 15, every comment indicating agreement is below 15 and half are even negative)
This may not indicate what you think it indicates. In particular, I (and I suspect other people) try to vote up comments that make interesting points even if we disagree with them. In this context, some upvoting may be due to the interestingness of the remarks, which in some contexts is inversely correlated with agreement. I don't think that this accounts for the entire disparity, but it likely accounts for some of it. This, combined with a deliberate desire to be non-cultish in voting patterns, may account for much of the difference.
↑ comment by Grognor · 2012-05-17T19:54:46.925Z · LW(p) · GW(p)
You know, the nervousness associated with wanting to not be part of a cult is also a cult attractor.
That sounds wrong to me.
See Cultish Countercultishness, unless of course you disagree with that too.
comment by Furcas · 2012-05-15T14:56:48.032Z · LW(p) · GW(p)
I stand by the sequences too. Except I'm agnostic about MWI, not because it makes no new predictions or another silly reason like that, but because I'm not smart and conscientious enough to read that particular sequence + a textbook about QM. And unlike you I'm sure that Eliezer's answer to the metaethics problem is correct, to the point that I can't imagine how it could be otherwise.
It's true that one possible reason why good contrarians are hard to find is that the group is starting to be cult-like, but another such reason is that the contrarians are just wrong.
comment by David_Gerard · 2012-05-15T11:55:04.319Z · LW(p) · GW(p)
I think the Sequences got everything right and I agree with them completely.
Even Eliezer considers them first drafts (which is, after all, what they were: a two-year exercise in writing raw material for the forthcoming book), not things to be treated as received wisdom.
Replies from: khafra↑ comment by khafra · 2012-05-15T12:17:48.727Z · LW(p) · GW(p)
But, hey, maybe he's wrong about that.
I think it was cool of Grognor to make a meta+ contrarian post like this one, and it's a good reminder that our kind have trouble expressing assent. As for me, I see some of the sequence posts as lies-to-children, but that's different from disagreeing.
Replies from: beriukay, Dorikka↑ comment by beriukay · 2012-05-15T13:55:40.119Z · LW(p) · GW(p)
But apparently we can hack it by expressing dissent of dissent.
Replies from: faul_sname↑ comment by faul_sname · 2012-05-15T19:19:50.306Z · LW(p) · GW(p)
How many levels of meta can we go?
Replies from: Rain↑ comment by Dorikka · 2012-05-15T15:31:39.835Z · LW(p) · GW(p)
As for me, I see some of the sequence posts as lies-to-children, but that's different from disagreeing.
Could you elaborate on this? I interpret 'lies-to-children' as meaning that a model is too basic and is wrong in some places because there are applicable details which it does not take into account -- would you not disagree with such things, if you don't think that such models actually form a correct map of reality?
Replies from: None, khafra↑ comment by [deleted] · 2012-05-15T17:17:50.264Z · LW(p) · GW(p)
The entire Quantum Physics sequence is a protracted lie to people who can't or don't want to do the math.
This was explicit in the introduction to the sequence.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-05-15T19:14:38.176Z · LW(p) · GW(p)
Well, I see it more like teaching someone about addition, and only covering the whole numbers, with an offhand mention that there are more numbers out there that can be added.
Drastic simplification, yes. Lie, no.
↑ comment by khafra · 2012-05-15T16:31:34.304Z · LW(p) · GW(p)
I meant in the sense of lies-to-children, or Wittgenstein's Ladder. I cannot remember the primary posts that gave me that impression, and I know they were better than this concrete example; but it shows what I was talking about. At a sufficiently granular level, technically incorrect; but inarguably useful.
comment by Viliam_Bur · 2012-05-15T21:12:30.798Z · LW(p) · GW(p)
My opinions:
MWI seems obvious in hindsight.
Cryonics today has an epsilon probability of success; maybe it is not worth its costs. If we disregard the costs, it is a good idea. (We could still subscribe to cryonics as an altruistic act -- even if our chances of being successfully revived are epsilon, our contributions and example might support the development of cryonics, and the next generations may have chances higher than epsilon.)
Mainstream science is slow, but I doubt people will generally be able to do better with Bayes. Pressures to publish, dishonesty, cognitive biases, politics etc. will make people choose wrong priors, filter evidence, etc., and then use Bayes to support their bottom line. But it could be a good idea for a group of x-rationalists to take scientific results and improve on them with Bayes.
I think our intuitions about morality don't scale well. Above some scale, it is all far mode, so I am not even sure there is a right answer. I think consequentialism is right, but computing all the consequences of a single act is almost impossible, so we have to use heuristics.
People in general are crazy, that's for sure. But maybe rationality does not have enough benefit to make one better off on average, especially in a world full of crazy people. Maybe, after we cross the valley of bad rationality as a community of rationalists, this could change.
Generally, Eliezer's opinions are mostly correct, and considering the difficulty of the topics he chose, this is very impressive. He is also very good at explaining them. Mostly I like his attempt at creating a rationalist community, and the related articles -- in the long term, this might be his greatest achievement.
EDIT: The chances of being revived in a biological body frozen by today's cryonics are epsilon. (Seriously, pump your body with poison to prevent frost damage?) The chance of being uploaded could be greater than epsilon.
comment by [deleted] · 2012-05-15T20:13:47.253Z · LW(p) · GW(p)
This is only tangentially related, but:
Having Eliezer's large corpus of posts as the foundation for LW may have been helpful back in 2009, but at this point it's starting to become a hindrance. One problem is that the content is not being updated with the (often correct) critiques of Sequence posts that are continually being made in the comments and in the comments of the Sequence reruns. As a result, newcomers often bring up previously-mentioned arguments, which in turn get downvoted because of LW's policy against rehashing discussions. Additionally, the fact that important posts by other authors aren't added to the Sequences is part of what gives people the (incorrect) impression that the community revolves around Eliezer Yudkowsky. Having a static body of knowledge as introductory material also makes it look like the community consensus is tied to Eliezer's beliefs circa 2007-2009, which also promotes "phyg"-ish misconceptions about LW.
An alternative would be for LW to expect newcomers to read a variety of LW posts that the community thinks are important rather than just the Sequences. This would show newcomers the diversity of opinion on LW and allow the community's introductory material to be dynamic rather than static.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-05-16T01:24:20.186Z · LW(p) · GW(p)
the fact that important posts by other authors aren't added to the Sequences
As a matter of fact, the "Sequences" page contains the following as about 1/4 to 1/3 of its table of contents.
Replies from: None, Eugine_Nier
4 Sequences by Others
4.1 Positivism, Self Deception, and Neuroscience by Yvain
4.2 Priming and Implicit Association by Yvain
4.3 Decision Theory of Newcomblike Problems by AnnaSalamon
4.4 Living Luminously by Alicorn
4.5 The Science of Winning at Life by lukeprog
4.6 Rationality and Philosophy by lukeprog
4.7 No-Nonsense Metaethics by lukeprog
4.8 What Intelligence Tests Miss by Kaj_Sotala
4.9 Why Everyone (Else) Is a Hypocrite by Kaj_Sotala
↑ comment by [deleted] · 2012-05-16T02:36:57.928Z · LW(p) · GW(p)
Yes, but newcomers aren't expected/instructed to read those. They are expected to be familiar with the Core/Major Sequences, which are all by Eliezer Yudkowsky and are not incrementally updated.
Another way of putting it: When an LWer tells a newbie to "read the Sequences," that list is not the Sequences they are talking about.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-05-16T07:56:01.905Z · LW(p) · GW(p)
The Sequences should be reorganized. It would be rational to admit that most new readers will not read all the articles (at least not now; later they may change their minds), so we could have a short list, a medium list, and a long list of articles. And even the short list should contain the best articles written by other authors.
Well, it's a wiki; anyone can edit it. Also, anyone can make their own suggested list of "Short Sequences" and post it in Discussion for comments.
↑ comment by Eugine_Nier · 2012-05-16T05:22:11.814Z · LW(p) · GW(p)
There are also many useful posts by others that aren't part of any sequence.
Replies from: steven0461↑ comment by steven0461 · 2012-05-16T05:26:12.716Z · LW(p) · GW(p)
Yes, and adding posts to the official canon only if they're presented as part of a sequence creates a perverse incentive to write long-windedly.
comment by IlyaShpitser · 2012-05-15T15:10:01.537Z · LW(p) · GW(p)
"I think mainstream science is too slow and we mere mortals can do better with Bayes."
I never understood this particular article of LW faith. It reminds me of that old saying "It's too bad all the people who know how to run the country are too busy driving taxicabs and cutting hair."
I agree that there is quite a bit of useful material in the stuff Eliezer wrote.
comment by scientism · 2012-05-15T13:06:38.999Z · LW(p) · GW(p)
I reject MWI, reject consequentialism/utilitarianism, reject reductionism, reject computationalism, reject Eliezer's metaethics. There's probably more. I think most of the core sequences are wrong/wrongheaded. Large parts of it trade in nonsense.
I appreciate the scope of Eliezer's ambition though, and enjoy Less Wrong.
Replies from: Luke_A_Somers, JoshuaZ↑ comment by Luke_A_Somers · 2012-05-15T19:11:33.859Z · LW(p) · GW(p)
Can you explain all that to someone who largely agrees with EY?
↑ comment by JoshuaZ · 2012-05-15T16:07:33.982Z · LW(p) · GW(p)
reject reductionism,
This one I'm curious to hear about. Of everything on that list, this is generally pretty uncontroversial.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-15T17:10:16.396Z · LW(p) · GW(p)
One silly assumption is that consciousness is reducible to quarks and leptons, whereas it is pretty clear by now that many different substrates can be used to run a Turing machine and hence a human mind.
Replies from: Desrtopa, JoshuaZ↑ comment by Desrtopa · 2012-05-16T01:01:47.844Z · LW(p) · GW(p)
I would be surprised if this is an assumption that Eliezer is actually making. My understanding, and my interpretation of his, is that consciousness doesn't work because it's made of quarks and leptons, but you can make a consciousness using nothing but quarks and leptons, and the consciousness won't be a result of anything else entering in on some other level.
If you want to build a consciousness in our universe, quarks and leptons are the building blocks you've got.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-16T01:41:54.201Z · LW(p) · GW(p)
that consciousness doesn't work because it's made of quarks and leptons, but you can make a consciousness using nothing but quarks and leptons, and the consciousness won't be a result of anything else entering in on some other level.
This is quite true, but accidental and so irrelevant. Indeed, the intermediate levels between consciousness and quarks can vary wildly: it can be built on neurons or, potentially, on silicon gates. Worse yet, if you get a sandboxed emulated mind, the lowest level it has access to is whatever the host system decides to provide. Such a sandboxed EY would argue that everything is reducible to API calls, which for him would be the fundamental building blocks of matter.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-05-16T01:47:49.743Z · LW(p) · GW(p)
I don't think he actually holds the position you're attributing to him. I don't know what probability he assigns to the possibility that our universe is a simulation, but I'm confident that he does not believe that as a matter of logical necessity quarks and leptons are the only things out of which one could build a consciousness, just that in our universe, these are the things that there are to build consciousnesses out of.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-16T02:27:55.309Z · LW(p) · GW(p)
Hmm, I suppose his position is not that everything is reducible to quarks and leptons, but that everything is reducible to something basic, in a sense that there are no magical "qualia" preventing one from building consciousness from whatever building blocks are available. This is certainly quite reasonable, and all the currently available evidence points that way.
↑ comment by JoshuaZ · 2012-05-15T17:29:18.754Z · LW(p) · GW(p)
That would be an interesting way of interpreting "reject reductionism" but the next step on scientism's list is "reject computationalism."
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-15T18:08:15.638Z · LW(p) · GW(p)
EY says:
So is the 747 made of something other than quarks? No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.
To me it is an irrelevant accident that physical objects, unlike informational objects, happen to be reducible to quarks. After all, if you accept the possibility that we live in a simulation, I see no reason other than laziness to use the same substrate for different entities. But I agree that the best examples come from "computationalism": the same FAT file system can be implemented in multiple ways, some reducible to magnetic domains on a floppy disk, others only to the API calls from a sandboxed virtual machine.
Replies from: Thomas↑ comment by Thomas · 2012-05-15T20:53:38.665Z · LW(p) · GW(p)
So is the 747 made of something other than quarks?
Not entirely. Electrons are not made of quarks. You can't have a 747 made from quarks only. [Nitpicking.]
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-15T21:06:04.875Z · LW(p) · GW(p)
I'm guessing that this reply is to the EY's original post and is here by mistake.
Replies from: Thomas
comment by [deleted] · 2012-05-15T12:47:50.852Z · LW(p) · GW(p)
"There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely." But presumably theres room for you to be incorrect? Surely good contrarians would help clarify your views?
Also, when people talk about contrarians they talk of specific conclusions present in the Sequences and on Less Wrong in general -- MWI, cryonics, specific ethical conclusions, and the necessity of FAI. I suspect most visitors to this website would agree with much of the Sequences; the best posts (to my mind) are clearly expressed arguments for rational thinking.
comment by Thomas · 2012-05-15T10:34:24.032Z · LW(p) · GW(p)
You are stating 7 points. That gives at least 128 different world views, depending on which ones one agrees with.
I am a member of school number 25, since I agree with the last point and two others.
I doubt that there are many school number 127 members here; you may be the only one.
Replies from: dbaupp, khafra↑ comment by dbaupp · 2012-05-15T12:37:49.635Z · LW(p) · GW(p)
(This isn't addressed at you, Thomas.)
For those who might not understand, Thomas is treating agreement-or-not on each bullet point as a 1 or 0, and stringing them together as a binary number to create a bitmask.
(I'm using the 0b prefix to represent a number written in its binary format.)
This means that 127 = 0b1111111 corresponds to agreeing with all seven bullet points, and 25 = 0b0011001 corresponds to agreeing with only the 3rd, 4th and 7th bullet points.
(Note that the binary number is read from left-to-right in this case, so bullet point 1 corresponds to the "most-significant" (left-most) bit.)
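A minimal sketch of this encoding in Python (the function and the example answers are hypothetical illustrations; only the convention of treating bullet point 1 as the most-significant of seven bits comes from the explanation above):

```python
def agreement_to_number(agreements):
    """Pack a list of True/False answers into an integer,
    with the first answer as the most-significant bit."""
    n = 0
    for agrees in agreements:
        n = (n << 1) | int(agrees)
    return n

# Agreeing with only the 3rd, 4th and 7th of the seven bullet points:
print(agreement_to_number([False, False, True, True, False, False, True]))  # 25
# Agreeing with all seven:
print(agreement_to_number([True] * 7))  # 127
```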
Replies from: Thomas↑ comment by Thomas · 2012-05-15T13:57:24.020Z · LW(p) · GW(p)
Excellently explained. Now, do we have 127?
Replies from: drnickbone↑ comment by drnickbone · 2012-05-15T23:05:38.322Z · LW(p) · GW(p)
Anyone else voting for 0?
Most of the opinions in the list sound so whacky that 0 is likely the default position of someone outside Less Wrong. I've been here a few months, and read most of the Sequences, but none of the bits in my own bitmap has flipped. Sorry Eliezer!
The odd thing is that I find myself understanding almost exactly why Eliezer holds these opinions, and the perfectly lucid reasoning leading to them, and yet I still don't agree with them. A number of them are opinions I'd already considered myself or held myself at some point, but then later rejected. Or I hold a rather more nuanced or agnostic position than I used to.
Replies from: Thomas↑ comment by Thomas · 2012-05-16T07:45:47.506Z · LW(p) · GW(p)
What would be a better numbering of the most relevant points from the Sequences? Grognor's selection of those 7 may not be the best. Let me try:
Bit -- Statement
0 -- Intelligence explosion is likely in the (near) future
1 -- FOOM may occur
2 -- Total reductionism
3 -- Bayesianism is greater than science
4 -- Action to save the world is a must
5 -- No (near) aliens
6 -- FAI or die
7 -- CEV is the way to go
8 -- MWI
9 -- Evolution is stupid and slow
Now, I agree with those from 0 to 5 (the first six) in this list I've selected. The binary number would be "111111", or 63 in decimal notation. None of the 10 points were new to me.
Yudkowsky's fiction is just great, BTW. "Three Worlds Collide" may be the best story I have ever read.
Replies from: Grognor, prase↑ comment by prase · 2012-05-16T20:08:46.446Z · LW(p) · GW(p)
If I intended to encode my beliefs (which I don't), I couldn't, because I don't:
- know the precise difference between 0 and 1
- understand 2 -- what total reductionism is, especially in contrast to ordinary reductionism
- see any novel insight in 9, which leads me to suspect I am missing the point
↑ comment by khafra · 2012-05-15T12:17:25.627Z · LW(p) · GW(p)
Cryonics is good, and Bayes is better than science? An agreement bitmask is a fun perspective; I dunno why you got downvoted.
Replies from: Thomas↑ comment by Thomas · 2012-05-15T13:56:31.864Z · LW(p) · GW(p)
Bayes is better than science, yes. But it's not the cryonics point that I like; rather:
I am a utilitarian consequentialist and think that if allow someone to die through inaction, you're just as culpable as a murderer.
As dbaupp explained.
Replies from: khafra↑ comment by khafra · 2012-05-15T14:13:02.383Z · LW(p) · GW(p)
Whoops. I wasn't counting the sub-bullet as a power-of-two position; gotcha. FWIW, I still think the agreement bitmask is a fun perspective, even though I got it wrong (and there's the whole big-endian/little-endian question).
Replies from: dlthomas
comment by JGWeissman · 2012-05-15T17:59:28.136Z · LW(p) · GW(p)
I largely agree with the Sequences, and I also don't care for "low-level rehashes of tired debates", but I am wary of dismissing all disagreement as "low-level rehashes of tired debates".
I think people participating on LW should be familiar with the arguments presented in the Sequences, and if they disagree, they should demonstrate that they disagree despite knowing those arguments. When people fail to do this, we should point it out, and people who repeatedly fail to do this should not be taken seriously.
Replies from: fortyeridania↑ comment by fortyeridania · 2012-05-16T13:53:01.331Z · LW(p) · GW(p)
I too agree with pretty much everything in the Sequences that I've read (which is nearly all of them except the quantum bits), and I share your antipathy toward summary dismissal of disagreement. I was especially offended at the OP's request that commenters refrain from "substantiating" their objections.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-16T15:36:19.289Z · LW(p) · GW(p)
So, I agree that a request like that regarding some idea X contributes to the silence of those who disagree with X.
That said, if there are a hundred people, and each person is (on average) ten times more likely to devote a unit of effort to talking about why X is wrong than to talking about why X is right even in situations where they are unsure whether X is wrong or right, then without some request along those lines there will effectively never be a discussion of why X is right.
That isn't necessarily a problem; maybe we're OK with never having a discussion of why X is right.
But it does mean that "summary dismissal" isn't a unilateral thing in cases like that. Yes, in such a case, if I make such a request, I am contributing to the silencing of the anti-X side as above... but if I fail to make such a request, I am contributing to the silencing of the pro-X side (though of course I can't be held accountable for it, since the responsibility for that silencing is distributed).
I try to stay aware that the end result sometimes matters more than whether anyone can hold me accountable for it.
Replies from: fortyeridania↑ comment by fortyeridania · 2012-05-17T15:18:45.108Z · LW(p) · GW(p)
without some request along those lines there will effectively never be a discussion of why X is right
This is true. Good point.
This isn't necessarily a problem
Also true.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-20T23:51:55.185Z · LW(p) · GW(p)
Thank you for saying this, Grognor! As you say, being willing to come out and say such is an important antidote to that phyg-ish nervous expression.
comment by Richard_Kennaway · 2012-05-15T11:42:04.422Z · LW(p) · GW(p)
That is pretty much my view as well. The only substantial disagreements I have with Eliezer are the imminence of AGI (I think it's not imminent at all) and the concept of a "Bayesian" superintelligence (Bayesian reasoning being nothing more than the latest model of thinking to be taken as being the key to the whole thing, the latest in a long line of fizzles).
I think criticisms of the OP on the grounds of conjunction improbability are unfounded. The components are not independent, and no one, including the OP, is saying it is all correct in every detail.
ETA: And I'm not any sort of utilitarian. I don't know what I am, but I know I'm not that, and I don't feel a pressing need to have a completely thought out position.
comment by bryjnar · 2012-05-15T15:09:40.660Z · LW(p) · GW(p)
I regard the Sequences as a huge great slab of pretty decent popular philosophy. They don't really tread much ground that isn't covered in modern philosophy somewhere: for me personally, I don't think the sequences influenced my thinking much on anything except MWI and the import of Bayes' theorem; I had already independently come to/found most of the ideas Eliezer puts forward. But then again, I hadn't ever studied philosophy of physics or probability before, so I don't know whether the same would have happened in those areas as well.
The novel stuff in the sequences seems to be:
- The MWI stuff
- The focus on probabilistic reasoning and Bayesianism
- The decision theory/AI stuff
- The cohesiveness of it.
I think the last point is the most important: the particular cluster of LW philosophical positions occupies a quite natural position, but it hasn't had a good systematic champion yet. I'm thinking of someone who could write LW's Language, Truth and Logic. The Sequences go some way towards that (indeed, they are similar in a number of ways: Ayer wrote LTL in one go straight through, much as the Sequences, being a set of blog posts, were written).
So I'll be interested to read Eliezer's book; but I suspect that it won't quite make it to "towering classic" in my books, probably due to lack of integration with modern philosophy. We should learn from the mistakes of philosophers!
EDIT: To avoid joining the profession-of-faith-club: I have plenty of significant points of disagreement as well! For example, Eliezer's metaethics is both wrong and deeply confused.
Replies from: thomblake↑ comment by thomblake · 2012-05-15T20:40:13.132Z · LW(p) · GW(p)
We should learn from the mistakes of philosophers!
But there are so many, and so little time!
I also found most of Eliezer's posts pretty obvious, also having read towering stacks of philosophy before-hand. But the signal-to-noise ratio is much higher for Eliezer than for most philosophers, or for philosophy in general.
comment by [deleted] · 2012-05-16T20:15:53.869Z · LW(p) · GW(p)
I agree with everything in this article, except for one thing where I am undecided:
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muelhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
But that is only because I have yet to read anything big not by EY on this site.
I think the sequences are amazing, entertaining, helpful and accurate. The only downside is that EY's writing style can seem condescending to some.
comment by [deleted] · 2012-05-16T16:52:26.890Z · LW(p) · GW(p)
Yes.
I am also in this boat. Well said.
comment by CarlShulman · 2012-05-16T00:39:51.482Z · LW(p) · GW(p)
(or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
How much physics do you know?
I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?)
What about this thread?
Replies from: Grognor↑ comment by Grognor · 2012-05-16T01:36:28.848Z · LW(p) · GW(p)
You are attacking my weak points, which is right and proper.
I've taken introductory college physics and read the quantum bits in Motion Mountain and watched some Khan Academy videos and read the Everett FAQ and this page and the answer to your question is "not very much, really". The reason I think it's unlikely that I don't know enough physics to know whether Eliezer is right is because I don't think Eliezer doesn't know enough physics to know whether he is right. Scott Aaronson wiggled his eyebrows at that possibility without asserting it, which is why I have it under consideration.
What about this thread?
I don't see an alternative to Yudkowsky's metaethics in there, except for error theory, which looks to me more like the assertion "there is no metaethics" (Edit: more precisely, no correct metaethics) than like a metaethics. I'm fairly ignorant about this, though, which is another reason for my somewhat wide confidence interval.
Replies from: bryjnar↑ comment by bryjnar · 2012-05-16T10:36:06.272Z · LW(p) · GW(p)
Nitpick: error theory is more like saying "there is no ethics". In that sense it's a first-order claim; the metaethical part is where you claim that ethical terms actually do have some problematic meaning.
As for alternative metaethics; have a read of this if you're interested in the kind of background from which Richard is arguing.
comment by Tuxedage · 2012-05-15T14:38:23.364Z · LW(p) · GW(p)
I agree. I think the reason it seems as though people who think the sequences are awesome don't exist is partly selection bias -- those who disagree with the sequences are the most likely to comment about disagreeing, which lends disproportionate weight to people who disagree with certain parts of them.
I found the sequences to be a life-changing piece of literature, and I usually consider myself fairly well read, especially compared with the general population. The sequences changed my worldview and forced me to reevaluate my entire inner philosophy. Suffice it to say, had I not stumbled upon the sequences, my life would have taken an entirely different path.
I think that for every person who dismisses the sequences as "unimportant" or "arrogant", there are many more lurkers out there who have found them amazing for improving their instrumental rationality and vastly improving the quality of their lives.
That said, I agree with almost all of the points regarding the sequences, except the one you mentioned about Science vs. Bayes.
I think mainstream science is too slow and we mere mortals can do better with Bayes.
I disagree with this aspect because I think scientists cannot and should not be trusted with Bayesian inference -- mostly because, first, science should be sure of its conclusions rather than assigning probability estimates to different interpretations. The potential for human judgement to be wrong, coupled with the fact that wrongness often has large consequences in science, leads me to believe that the risk is not worth the possible speed of progress we would gain.
Other than that, I wholeheartedly agree with almost all of the points in the Sequences, and I'd like to say that "I stand by the sequences", too.
Replies from: JoshuaZ, Thomas↑ comment by JoshuaZ · 2012-05-15T14:57:05.822Z · LW(p) · GW(p)
I disagree with this aspect because I think scientists cannot and should not be trusted with Bayesian inference -- mostly because, first, science should be sure of its conclusions rather than assigning probability estimates to different interpretations. The potential for human judgement to be wrong, coupled with the fact that wrongness often has large consequences in science, leads me to believe that the risk is not worth the possible speed of progress we would gain.
In practice, scientists often look at multiple hypotheses at once and assign them different likelihoods. Moreover, the dangers of human judgement exist whether one feels "certain" or is more willing to admit uncertainty.
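(To make that concrete, here is a minimal sketch -- with priors and likelihoods I made up for illustration, nothing specified in this thread -- of holding several hypotheses at once and updating their weights on a piece of evidence.)

```python
# Minimal sketch: weighing several hypotheses at once via Bayesian updating.
# The priors and likelihoods below are made-up illustrative numbers.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}          # prior credence in each hypothesis
likelihoods = {"H1": 0.10, "H2": 0.40, "H3": 0.05}  # P(observed data | hypothesis)

# Bayes: posterior is proportional to prior * likelihood, then renormalized.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: w / total for h, w in unnormalized.items()}

print(posteriors)  # H2 gains weight because it predicted the data best
```

No hypothesis has to be declared "certain"; each just gains or loses weight as the evidence comes in.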
comment by Emile · 2012-05-15T12:23:26.869Z · LW(p) · GW(p)
I write this because I'm feeling more and more lonely, in this regard. If you also stand by the sequences, feel free to say that.
Most of your post describes my position as well, i.e. I consider the sequences (plus a lot of posts by Yvain, Lukeprog, etc.) to be closer to the truth than what I could work out by myself; though that only applies to stuff I'm sure I understand (i.e. not the Metaethics and QM).
comment by Armok_GoB · 2012-05-21T02:51:05.118Z · LW(p) · GW(p)
I don't technically believe the literal word of all the sequences, but I do follow your general policy that if you find yourself disagreeing with someone smarter than you, you should just believe what they do; I also think this community way overvalues contrarians, and that the sequences are mostly right about everything that matters.
Replies from: None↑ comment by [deleted] · 2012-05-27T00:00:10.745Z · LW(p) · GW(p)
I don't technically believe the literal word of all the sequences, but I do follow your general policy that if you find yourself disagreeing with someone smarter than you, you should just believe what they do
I agree, but I often find it hard to quantify people's insight into specific areas, not knowing how much my judgment might be halo effect; and if I don't understand a problem properly, I have a hard time telling how much of what the person is saying is correct or coherent. Mastery in knowing when to rely on others' understanding would indeed be an invaluable skill.
Edit: I just realized a prediction market would give you that, if you tracked statistics on users.
comment by Shmi (shminux) · 2012-05-15T19:50:30.410Z · LW(p) · GW(p)
To me personally, the most useful part of the Sequences is learning to ask the question "How does it feel from the inside?" This dissolves a whole whack of questions, like the proverbial [lack of] free will, the reality of belief, and many other contentious issues. It ties into the essential skill of constructing a mental model of the person asking a question or advancing an argument. I find that most participants on this forum fail miserably in this regard (I am no exception). I wonder if this is on the mini-camp's agenda?
comment by wedrifid · 2012-05-26T04:47:08.467Z · LW(p) · GW(p)
(In the spirit of making forceful declarations of stances.)
I am a utilitarian consequentialist
Is this prescribed by the sequences? If so (and, I suppose, if not) I wholeheartedly reject it. All utilitarian value systems are both crazy and abhorrent to me.
and think that if allow someone to die through inaction, you're just as culpable as a murderer.
Your judgement is both distasteful to me as a way of evaluating the desirability of aspects of the universal wave function and highly suspect as a way of interacting practically with reality. Further, this is a position that I consider naive and offensive when implemented via social moves against others. If Eliezer actually said that people should have this belief he is wrong.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muelhauser, komponisto, or even Wei Dai
I have seen disagreements among people on that list. In fact, I've seen Vladimir openly disagree with his past self. Good for him. Bad for you.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-05-27T09:14:24.944Z · LW(p) · GW(p)
All utilitarian value systems are both crazy and abhorrent to me.
Would you be willing to expand on this? (Or have you done so elsewhere that I could read?)
comment by Eugine_Nier · 2012-05-16T04:24:32.862Z · LW(p) · GW(p)
Since you agree with all of Eliezer's posts, I recommend that you reread this post right now.
comment by falenas108 · 2012-05-15T17:52:30.911Z · LW(p) · GW(p)
I mostly agree with this, with one reservation about the "mainstream science is too slow" argument.
If we understood as much about a field as scientists do, then we could do better using Bayes. But I think it's very rare that a layman could out-predict a scientist just using Bayes, without studying the field to the point where they could write a paper in it.
comment by Bugmaster · 2012-05-15T17:30:36.347Z · LW(p) · GW(p)
I think mainstream science is too slow and we mere mortals can do better with Bayes.
I don't understand what this means at all. Who are these "mere mortals" ? What are they doing, exactly, and how can they do it "better with Bayes" ? If mainstream science is too slow, what will you replace it with ?
Replies from: JGWeissman, Grognor↑ comment by JGWeissman · 2012-05-15T18:13:25.300Z · LW(p) · GW(p)
I don't see any available path to replacing the current system of mainstream science, as performed by trained scientists, with a system of Bayesian science performed by trained Bayesian masters. However, comparing mainstream science to normative Bayesian epistemology suggests modifications to mainstream science worth advocating for. These include reporting likelihood ratios instead of p-values, pre-registering experiments, and giving replication attempts similar prominence to the novel results they attempt to replicate. For any of these changes, the "mere mortals" doing the science will be the same trained scientists who now perform mainstream science.
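For a concrete illustration of the difference between the two summaries (the numbers and hypotheses below are hypothetical ones of my own, not anything proposed above; assumes SciPy is available):

```python
# Minimal sketch: a p-value versus a likelihood ratio for the same data.
# Data and hypotheses are illustrative: 60 heads in 100 flips, testing a
# fair coin against a hypothetical biased alternative with p = 0.6.
from scipy.stats import binom

n, heads = 100, 60

# Frequentist summary: probability of seeing 60 or more heads if the coin is fair.
p_value = binom.sf(heads - 1, n, 0.5)

# Likelihood ratio: how much better the biased hypothesis explains the data
# than the fair-coin hypothesis.
likelihood_ratio = binom.pmf(heads, n, 0.6) / binom.pmf(heads, n, 0.5)

print(f"one-sided p-value under the fair coin: {p_value:.3f}")
print(f"likelihood ratio (p=0.6 vs p=0.5): {likelihood_ratio:.1f}")
```

The p-value only says how surprising the data are under the null, while the likelihood ratio directly compares how well two stated hypotheses predicted what was actually observed.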
It is important to remember that though there is much that Bayes says science is doing wrong, there is also much that Bayes says science is doing right.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-15T18:15:51.421Z · LW(p) · GW(p)
These include reporting likelihood ratios instead of p-values, pre-registering experiments, and giving replication attempts similar prominence to the novel results they attempt to replicate.
It is possibly noteworthy that all of these are fairly mainstream positions about what sort of changes should occur. Pre-registration is already practiced in some subfields.
Replies from: JGWeissman↑ comment by JGWeissman · 2012-05-15T18:22:47.247Z · LW(p) · GW(p)
I suspect that pre-registering experiments and giving replication attempts similar prominence to the novel results they attempt to replicate are things that scientists who take "traditional rationality" seriously would advocate for, while lamenting the social structures that prevent science from reaching its ideal form. But is the use of likelihood ratios instead of p-values also fairly mainstream?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-15T18:24:49.465Z · LW(p) · GW(p)
Not as mainstream, but it has been discussed, particularly in the context that likelihood ratios are easier to intuitively understand what they mean. I don't have a reference off-hand, but most versions of the argument I've seen advocate including both.
↑ comment by Grognor · 2012-05-15T23:19:19.961Z · LW(p) · GW(p)
See http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/ and http://lesswrong.com/lw/ajj/how_to_fix_science/
Replies from: Bugmaster↑ comment by Bugmaster · 2012-05-15T23:28:57.349Z · LW(p) · GW(p)
These are good articles, and I agree with most of what they say. However, if your thesis is that "mainstream science works too slowly, let's throw it all out and replace it with the Bayes Rule", then your plan is deeply flawed. There's more to science than just p-values.
comment by MarkusRamikin · 2012-05-27T09:59:01.831Z · LW(p) · GW(p)
Uh... This "phyg" thing has not died yet?
I wonder if anyone actually thinks this is clever or something. I mean, inventing ways to communicate that are obscure to the public is one of the signs of a cult. And acting like you have something to hide just makes people think you do. So while this might give us a smaller signature on Google's radar, I cringe to think of actual human newcomers to the site, and their impression.
Also, a simple call to sanity: the blog owner himself has written posts with cult in the very title. I just put "cult" and "rationality" into google and I see 2 references to LW in the first 10 (unpersonalised) results. It's kinda late to start being sneaky...
My weak prediction hasn't been entirely on the mark... The word did migrate to RationalWiki, but other than being in the "The Bad" section, I don't see it expressly mocked. This is the one time where - LW-supporter though I most of the time am, and one who agrees with most of what Grognor said - I'm going to say "too bad".
(I apologise about being kinda off topic. This was prompted by a bunch of comments on this page and elsewhere more than Grognor's post, except the part where he makes "phyg" a tag.)
Replies from: wedrifid↑ comment by wedrifid · 2012-05-27T13:38:51.358Z · LW(p) · GW(p)
I wonder if anyone actually thinks this is clever or something. I mean, inventing ways to communicate that are obscure to the public is one of the signs of a cult. And acting like you have something to hide just makes people think you do. So while this might give us a smaller signature on Google's radar, I cringe to think of actual human newcomers to the site, and their impression.
Disagree and strongly oppose your influence. Obfuscating an unnecessary google keyword is not the sign of a cult. That word has an actual real world meaning and it evidently isn't the one you seem to think it is. It means this.
This is the one time where - LW-supporter though I most of the time am, and one who agrees with most of what Grognor said - I'm going to say "too bad".
I hope your needless defection is suppressed beyond the threshold for visibility.
Mind you, I've never used the term "phyg" (well, except in the quote here) and don't plan to. Putting it as an actual tag is ridiculous. If you don't want to associate with a concept that is already irrelevant, just ignore it. Ironically, this means that the only part of your comment I can agree with is the initial "Uh... This 'phyg' thing has not died yet?" The justification of the aversion is muddled thinking.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-05-27T14:59:10.766Z · LW(p) · GW(p)
this means that the only part of your comment I can agree with is the initial "Uh... This 'phyg' thing has not died yet?" The justification of the aversion is muddled thinking.
What's your unmuddled justification? (not sarcasm, interested what you think)
Replies from: wedrifid↑ comment by wedrifid · 2012-05-27T15:11:41.248Z · LW(p) · GW(p)
What's your unmuddled justification? (not sarcasm, interested what you think)
If you'll pardon my literal (and so crude) description of my reaction, "Ick. That word sucks dried monkey balls. You guys sound like total dorks if you say that all the time."
i.e. I probably share approximately your sentiment, just sans any "that means you're cultish" part.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-05-27T16:00:23.999Z · LW(p) · GW(p)
Pardoned. ;)
Sorry, I didn't mean "that means you're cultish", more like "that will look cultish/fishy to some newcomers". (And also "I find it embarrassingly silly".)
comment by Leonhart · 2012-05-15T12:08:48.406Z · LW(p) · GW(p)
Dropping out of lurk-mode to express support and agreement with your tone and aim, though not necessarily all of your points separately.
ETA: Argh, that came out clunky. Hopefully you grok the intent. Regardless: I know to whom I owe much of my hope, kindness, and courage.
Replies from: Dorikka
comment by Normal_Anomaly · 2012-05-15T11:41:48.715Z · LW(p) · GW(p)
I agree with your first 2 bullet points. I agree with the third, with the caveat that doing so leads to a greater risk of error. I'm a consequentialist with utilitarianism as a subset of my values. I think "culpable" refers to how society should treat people, and treating people who fail to save lives as murderers is infeasible and unproductive. I choose TORTURE over SPECKS in the relevant thought experiment, if we stipulate that there are 3^^^3 distinct possible people who can be specked, which in reality there aren't. I want to sign up for cryonics. I like EY's metaethics, and agree that lots of people are crazy.
In addition to that: I think UFAI is a risk worth worrying about and working to mitigate, and that FAI is worth pursuing. I think EY's sequence on words is true, and makes conversations much easier when both people have read it.
comment by Salemicus · 2012-05-15T19:41:53.778Z · LW(p) · GW(p)
I enjoyed reading (most of) the Sequences, but I'm stunned by this level of agreement. I doubt even Eliezer agrees with them completely. My father always told me that if I agreed with him on everything, at least one of us was an idiot. It's not that I have a sense of "ick" at your public profession of faith, but it makes me feel that you were too easily persuaded. Unless, of course, you already agreed with the sequences before reading them.
Like Jayson_Virissimo, I lean towards the Sequences. Parts I reject include MWI and consequentialism. In some cases, reading them has caused me to update away from the positions they argue for. For instance, before reading them, I leaned towards materialism, but they helped persuade me to become a substance dualist, despite the author's intention.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-05-16T00:32:50.584Z · LW(p) · GW(p)
For instance, before reading them, I leaned towards materialism, but they helped persuade me to become a substance dualist, despite the author's intention.
Could you explain why?
Replies from: Salemicus↑ comment by Salemicus · 2012-05-16T08:50:44.005Z · LW(p) · GW(p)
EY convinced me that consciousness is causally active within the physical universe, and I have yet to find any good argument against the notion - just equivocations about the word "meaning." At the same time, I do accept the argument that no amount of third-person description sums to first-person experience. Hence, substance dualism.
I am aware that this is not a full response, but I don't want to sidetrack the thread with an off-topic conversation.
comment by Risto_Saarelma · 2012-05-22T15:49:28.301Z · LW(p) · GW(p)
My sense is that the attitude presented in the article and in Yvain's linked comment is problematic in somewhat the same way that asexual reproduction is problematic as an evolutionary strategy.
comment by vi21maobk9vp · 2012-05-17T08:54:35.667Z · LW(p) · GW(p)
Agreeing with something so big (but internally coherent) wholesale is a normal reaction if you consume it over a short time and it doesn't contain anything you feel is a red flag. That agreement can persist for a long while, but it can easily be either genuine agreement or agreement "with an object", without integrating and cross-linking the ideas into your bigger map.
Fortunately, it looks like the "taboo" technique is quite effective on a deeper level, too. After avoiding the terms, notions, and arguments from some body of knowledge in your thinking, you start to agree with it point by point, and stop thinking of it as an indivisible whole. You also start to find common ideas and clear disagreements with other sources when you revisit the tabooed object. Usually, you start to disagree with a few minor details at that point, unless experimental evidence is abundant and not many inferences are drawn.
Oh, and while my priors are such that I don't agree with much of the Sequences, they are a nice body of work and explain many ideas and their implications extremely well.
comment by Oscar_Cunningham · 2012-05-15T15:18:56.027Z · LW(p) · GW(p)
I agree.
comment by RomeoStevens · 2012-05-15T11:12:37.820Z · LW(p) · GW(p)
Any belief that I can be argued into by a blog post I can also probably be argued out of by a blog post.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-05-15T11:35:30.934Z · LW(p) · GW(p)
That sounds like you're saying that what beliefs you can be argued into is uncorrelated with what's true. I don't think you mean that.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-15T15:02:50.772Z · LW(p) · GW(p)
That doesn't necessarily follow from RomeoStevens's remark. It may take more subtle or careful arguing to persuade someone into a belief that is false. So in practice, what beliefs one can be argued into will be correlated with being true.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-05-15T15:13:08.086Z · LW(p) · GW(p)
Oh, so you think RomeoStevens was allowing for varying argument skill in the blog posts in question. That makes sense. Anything I can be argued into by a blog post, I can be argued out of by a blog post, if Omega takes up contrarian blogging. :)
comment by Shmi (shminux) · 2012-05-15T17:15:46.110Z · LW(p) · GW(p)
My current most serious quibble with the Sequences is that they do not make it clear that the map/territory meme is just a model. A very useful one in many contexts, but still only a model. One case where it fails miserably is QM, leading to the wasteful discussions of MWI vs collapse (which one is "the territory"?).
The other, minor one, is that they make amateurs honestly think that they can do better than domain experts, just by reading the Sequences:
I think mainstream science is too slow and we mere mortals can do better with Bayes.
There are so many fallacies in this statement, I don't even know where to start.
Replies from: fubarobfusco, thomblake↑ comment by fubarobfusco · 2012-05-15T19:01:12.504Z · LW(p) · GW(p)
My current most serious quibble with the Sequences is that they do not make it clear that the map/territory meme is just a model. A very useful one in many contexts, but still only a model.
Are you claiming that the map/territory distinction is in the map, not in the territory?
One case where it fails miserably is QM, leading to the wasteful discussions of MWI vs collapse (which one is "the territory"?).
Neither one. They're both maps. The territory is that stuff out there that surprises us sometimes.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-15T19:32:27.256Z · LW(p) · GW(p)
Are you claiming that the map/territory distinction is in the map, not in the territory?
No, I don't presume the existence of something you call the territory (an absolute immutable and unobservable entity, of which only some weak glimpses can be experienced). I'm pretty happy with a hierarchy of models as ontologically fundamental, and map/territory being one of those models.
Neither one. They're both maps. The territory is that stuff out there that surprises us sometimes.
My issue is people arguing which of the two identically powerful models corresponds to "that stuff out there". Once you realize that the "stuff out there" is a [meta-]model, such an argument loses its appeal.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-05-15T22:46:11.700Z · LW(p) · GW(p)
I don't think you and I are talking about the same thing when we refer to "the map/territory distinction".
The way I use the "map/territory" expression, it doesn't make sense to ask which of MWI or collapse "is the territory". Both are relatively high-level analogies for explaining (in English) the mathematical models (in algebra) that describe the results (numerical measurements) of physics experiments. In other words, they are both "maps of maps of maps of the territory."
We can ask which map introduces fewer extraneous terms, or leads to less confusion, or is more internally consistent; or which is more "physical" and less "magical"; but neither one "is the territory".
I don't presume the existence of something you call the territory (an absolute immutable and unobservable entity, of which only some weak glimpses can be experienced). I'm pretty happy with a hierarchy of models as ontologically fundamental, and map/territory being one of those models.
It seems to me that this sort of subjectivism runs into difficulty when we notice that sometimes our models are wrong; sometimes the map has a river on it that the territory doesn't, and that as result if we dive into it we go crash instead of splash. See the ending of "The Simple Truth". But perhaps I have misconstrued what you're getting at here?
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-15T23:02:19.054Z · LW(p) · GW(p)
We can ask which map introduces fewer extraneous terms, or leads to less confusion, or is more internally consistent; or which is more "physical" and less "magical"; but neither one "is the territory".
If you do not assign any ontology to your mathematically identical maps, then there is no way to rate them in any objective way. (I don't consider "feel good" an objective way, since people disagree on what feels good.)
It seems to me that this sort of subjectivism runs into difficulty when we notice that sometimes our models are wrong; sometimes the map has a river on it that the territory doesn't, and that as result if we dive into it we go crash instead of splash. See the ending of "The Simple Truth". But perhaps I have misconstrued what you're getting at here?
Yeah, I expected that this cute story would come up. First, my approach can be classified as instrumentalism or even anti-realism, but not subjectivism. The difference is that when models do not match experience, they are adjusted or discarded. In the case of "The Simple Truth", there is overwhelming experimental evidence that jumping off a cliff does not let one fly, so Mark's model would be discarded as failed.
↑ comment by thomblake · 2012-05-15T20:47:31.922Z · LW(p) · GW(p)
I think mainstream science is too slow and we mere mortals can do better with Bayes.
There are so many fallacies in this statement, I don't even know where to start.
Start anywhere. I'd be interested in seeing a list.
Even in extreme cases I can only count 1 fallacy per statement, and I can only parse the above into at most 2 propositions. In normal cases I only expect to see 1 per argument, and what you quoted does not appear to be an argument at all. And despite being something of an expert on logic, I don't see a single fallacy in that.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-05-15T22:22:50.947Z · LW(p) · GW(p)
Start anywhere. I'd be interested in seeing a list.
If you insist...
- Implication that reading a pop-psych forum on the net can replace postgrad degrees and the related experience.
- Assuming that Bayesian logic somehow replaces extensive experimental testing (to be fair, EY proclaims this kind of nonsense as well).
- Rushing to conclusions based on very limited second-hand information.
- Not noticing that the author of the Sequences, while very articulate, is not, by any objective metric, a domain expert in any of the areas covered.
There is more, but this is a start.
Replies from: thomblake