The Ten Commandments of Rationality
post by Sophronius · 2014-03-30T16:36:50.092Z · LW · GW · Legacy · 73 comments
(Disclaimer/TL;DR: This article, much like Camelot, is a silly place/post. Nonetheless I think it presents a pretty solid list of 10 rationality lessons to take away from Less Wrong which must not be forgotten upon pain of eternal damnation/irrationality.)
In a realm not far from here, somewhere within a bustling metropolis, there lies an old and dusty book. It is placed in a most conspicuous location; in the middle of a busy street where countless citizens walk by it every day. Yet none pick it up, for it is placed on a pedestal just high enough that it cannot be reached or seen easily, and the slight inconvenience of standing on one’s toes to reach for it is sufficient to deter most. Yet if a traveller were sufficiently aware to look up and see the book, and curious enough to reach for it, and willing to suffer the slight discomfort of having to touch its muddy cover to open and read its ancient pages, that one would find within a wealth of wisdom and rationality that would transform the reader’s life forever. For this is the most holy Book of Bayes, and its first and last pages both read thusly:
The Ten Commandments of Rationality
1) Thou shalt never conflate the truth or falsehood of a proposition with any other characteristic, be it the consequences of the proposition if it be true, or the consequences of believing it for thyself personally, or the pleasing or unpleasant aesthetics of the belief itself. Furthermore, thou shalt never let thy feelings regarding the matter overrule what thy critical faculties tell thee, or in any other way act as if reality might adjust itself in accordance with thine own wishes.
2) Thou shalt not accept any imperfect situation if it may be optimized, nor shalt thou forgo improving upon a situation by merely imagining ever better options without acting on any of them, nor must thou allow thyself to be paralyzed with fear or apathy or indecision when any action is still superior to doing nothing at all. Thus let it be said: Thou shalt not allow thyself to be beaten by a random number generator.
3) Thou shalt not declare any matter to be unscientific, or inherently irrational, or a false question, or with any other excuse wilfully close thine own eyes and expel all curiosity regarding the matter before thou hast even asked thyself whether the question is worth answering. To transgress thusly is to forfeit any chance to update thine own beliefs on a matter that is truly unusual to thee.
4) Thou shalt not hold goals or beliefs which conflict with each other, in such a manner as to violate most divine transitivity, and thereby set thyself up for most ignominious defeat, nor rest easy in the knowledge thereof. Rather shalt thou engage in mindfulness and self-reflection, and in doing so find thine own true priorities, and resolve any inconsistencies in a utility maximising manner, so that thou may not fall prey to the wrath of the most holy Dutch Book, which is merciless but just.
5) Thou shalt never engage in defeatism, nor wallow in ennui or existential angst, or in any other way declare that thy efforts are pointless and that exerting thyself is entirely without merit. For just as it is true that matters may never get to the point where they cannot possibly get any worse, so is it true that no situation is impossible to improve upon.
6) Thou shalt never judge a real or proposed action by any metric other than this: The expected consequences of the action, both direct and indirect, be they subtle or blatant, taking into account all relevant information available at the time of deciding and no more or less than this.
7) Thou shalt never sit back on thy lazy laurels and wait for rationality to come to thee, nor shalt thou declare that thy beliefs must be correct as all others have failed to convince thee of the contrary: The cultivation of thy rationality and the falsification of thy beliefs is thine own most sacred task, which is eternal and never finished, and to leave it to others is to invite doom upon the validity of thine own beliefs and actions, for in this case others will never serve thee as well as thou might serve thyself.
8) Thou shalt never let argumentation stand in the way of knowledge, nor let knowledge stand in the way of wisdom, nor let wisdom stand in the way of victory, no matter how wise or clever it makes thee feel. Also shalt thou never mistake exceptions for rules or rules for exceptions when arguing any issue, nor bring up minutiae as if they were crucial issues, nor allow thyself to be swept away in arguing for the sake of argumentation, nor act to score cheap and yea also easy points, nor present thy learnings in a needlessly ambiguous manner such as this if it can be helped, or in any other way allow thyself to lose sight of thy most sacred goal, which is victory.
9) Thou shalt never assign a probability exactly equal to 0 or 1 to any proposition, nor declare to the skies that thy certainty regarding any matter is absolute, nor any derivation of such, for to do so is to declare thyself infallible and to place thyself above thy most holy lord, Bayes.
10) Thou shalt never curse thy rationality, nor wish for immediate satisfaction over thy eventual victory, all for the sake of base emotion, which is transient whereas victory is transcendent. Let it be known that it is an unspoken truth amongst rationalists (indeed it is the first and most elementary rule of rationality, and yet oft forgotten by those practiced in the art) that base impulse and most holy reason are as a general rule incompatible, as there cannot be two skies.
Such are the Ten Commandments of Rationality. And Lo! If one abides by these rules, then let it be said that they act virtuously, and the heavens shall reward them with the splendour of higher expected utility relative to the counterfactual wherein they did not act virtuously. But to those who do not act virtuously, but rather act with irrationality in their minds and biases in their thinking, and who in doing so break any of the Commandments of Rationality, to them let it be said that they have transgressed against their lord Bayes, and they shall be smitten by the twin gods of Cause and yea also Effect as surely as if they had smitten themselves. For let it be said: The gods of causality may be blind, but their aim is excellent regardless.
(All silliness aside, what do you all think? Is this a good list of 10 things to take away from Less Wrong? Do you have a better list? Are posts like these a waste of time? Or, Bayes forbid, did I get my thees and thous wrong somewhere? Let me know in the comments.)
73 comments
Comments sorted by top scores.
comment by IlyaShpitser · 2014-03-31T13:24:13.998Z · LW(p) · GW(p)
Here, let me take a stab:
1) Don't confuse beliefs and values.
2) Be agenty.
3) Never leave information on the table.
4) Strive for consistent beliefs.
5) There is actually a Christian formulation of this one: "thou shalt not blaspheme against the Holy Spirit" (Aquinas interpretation). Judaism and Catholicism (perhaps Sufism also, but I am not very familiar with the Sufi tradition so will not comment) have many elements of "proto-rationality", for a number of reasons, one of them being that at one point studying religion was "academia" -- where smart people went.
6) Use CDT :).
7) Having accurate beliefs and completed goals takes work. Remember to work for what you want.
8) Argue collaboratively.
9) Never be certain of anything.
10) Remember to integrate your utility with respect to time.
You know, the difference between people like Dennett or Dawkins and the LW crowd is that while all are atheists, Dennett and Dawkins genuinely do not miss God or religion. I get the feeling you guys do, with your commandments, and virtues, and solstices, and wedding ceremonies.
I disagree with 4). I think our cognitive architecture is not consistent, and I think wishing it were so is not really very productive. "Man, to thyself be true."
Replies from: Richard_Kennaway, ChristianKl↑ comment by Richard_Kennaway · 2014-03-31T16:01:13.216Z · LW(p) · GW(p)
Thank you for that brevity. It makes clear, what can then also be seen in the original, a striking omission: any injunction to pursue the truth, to make one's beliefs correspond with reality. Which highlights the problem with (4): updating towards consistency -- also called decompartmentalising -- while neglecting to update towards reality is a short road to crackpottery.
↑ comment by ChristianKl · 2014-04-04T19:15:46.017Z · LW(p) · GW(p)
You know, the difference between people like Dennett or Dawkins and the LW crowd is that while all are atheists, Dennett and Dawkins genuinely do not miss God or religion.
Dawkins, who wants us to call ourselves brights and who preaches militant atheism, isn't that far from religion either.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-07T14:28:13.811Z · LW(p) · GW(p)
The Brights movement explicitly rejected priests, gurus, and ritual. I sometimes wonder why lesswrong cannot seem to let these things go, as well. Some of this attachment is disguised as humor, but we homo sapiens love to use humor as a cover. Personally, I find this attachment one of the creepier features of the lesswrong community.
Aggressively competing in the marketplace of ideas is not the same thing as religion; many clearly unreligious sets of ideas compete quite aggressively.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-04-07T15:45:52.315Z · LW(p) · GW(p)
What do you mean by "priest" and "guru"? For what definition of those terms do they fit someone in the LW community but not fit Dawkins or the other horsemen?
As far as rituals go, you might have a point. I think it comes out of lesswrong's "rationalism is about winning" idea. Rituals are simply very useful tools. Not using them means choosing a suboptimal strategy.
Policy Debates Should Not Appear One-Sided. The argument for or against using rituals shouldn't appear one-sided.
Aggressively competing in the marketplace of ideas is not the same thing as religion; many clearly unreligious sets of ideas compete quite aggressively.
Competing in the marketplace of ideas doesn't mean that you have to self-identify with a label that signals group loyalty. There are a bunch of people in the new atheist community who say things like the purpose of life is to spread one's genes.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-07T15:53:33.120Z · LW(p) · GW(p)
Sorry for not being clear -- I am not saying that lesswrong should stop using rituals. That is, this is not a policy debate. I consider myself an outsider to this community, and it would be rude for me to impose. I may find it creepy, but who cares what I think?
If lesswrong wants to use rituals, for whatever reason, it should. But I think it is rather curious in the sense of being an irreligious org that uses religious trappings. This is what I mean when I say that lesswrong misses religion.
I mean "guru" in the sense that EY is considered a guru (and, I believe, deliberately cultivates an air of a guru).
Replies from: ChristianKl↑ comment by ChristianKl · 2014-04-07T16:37:00.392Z · LW(p) · GW(p)
I consider myself an outsider to this community
Given that you have 2000 karma I don't think you are truly an outsider.
But I think it is rather curious in the sense of being an irreligious org that uses religious trappings.
The Jusos are the youth organisation of Germany's SPD, which is the left party that's currently part of the government.
At one Jusos meeting I attended we sang the Internationale. It's a ritual. It's useful for group bonding. That doesn't make it religious.
I mean "guru" in the sense that EY is considered a guru (and, I believe, deliberately cultivates an air of a guru).
Could you be more specific? What behaviors are you talking about?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-07T17:36:33.630Z · LW(p) · GW(p)
Given that you have 2000 karma I don't think you are truly an outsider.
You know, Israel defines a Jew to be someone who considers themselves a Jew.
I ignore karma. I am not convinced by the idea of rationality or consistent beliefs. I am not a Bayesian with a capital B. I don't subscribe to the Everett interpretation of quantum mechanics. I am not an atheist. I don't believe UFAI is an issue worth spending a lot of resources on at the moment. I attended a total of 1 LW meetup (in Boston -- I think Scott Aaronson and Michael Vassar were there).
I do think LW is a good and valuable community, and I think there are many very useful concepts in circulation here (for example tabooing and steelmanning -- these are useful enough to have been reinvented elsewhere), which is why I participate here. Also some folks connected to LW think about and write about interesting things.
Could you be more specific? What behaviors are you talking about?
Things like point 1 in this post:
http://lesswrong.com/lw/jne/a_fervent_defense_of_frequentist_statistics/ajwa
A guru speaks from a position of authority, a scientist communicates/argues with peers. I think the modern academic approach has far fewer failure modes than the guru approach (which has been tried extensively in the past of our species).
edit: To clarify my thinking a bit. A "guru" is a kind of memetic feudal lord. I am suspicious of attempts to revert to feudalism, our species does it far too easily. I think we can do better than feudal forms.
Replies from: ChristianKl, XiXiDu↑ comment by ChristianKl · 2014-04-07T22:00:48.457Z · LW(p) · GW(p)
Being an outsider is more than just not subscribing to the label of belonging to an ingroup.
I don't think disagreeing on things like consistent beliefs makes you an outsider. I think the last longer post on the topic even argued against having consistent beliefs.
I don't think what makes a true citizen of Lesswrong is that a person treats Eliezer as his guru and simply copies his beliefs.
If you decide that this community is good, find participation valuable, and think for yourself, that makes you a perfect member.
I am not an atheist.
Then what are you, and if you already like religion, why do you see a problem with the same pattern appearing in LW?
A guru speaks from a position of authority, a scientist communicates/argues with peers.
I don't think it's useful to pretend that everyone understands what you mean by a concept. It can seem authoritative to say that the person you are talking to just doesn't understand what you mean, but it often directly addresses the core issue of a disagreement.
Communication is also not something where you have to pick one style for all your communication needs. One day you can be more intellectual, and another day you can use simpler language.
↑ comment by XiXiDu · 2014-04-07T18:32:42.658Z · LW(p) · GW(p)
I am not an atheist.
Would you mind elaborating on this?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-07T18:36:24.166Z · LW(p) · GW(p)
Absolutely -- I think things like (a)theism, and things like interpretations of QM, are "questions of taste." I think it is a waste of time to argue about taste. I also think that tolerance of diverse tastes that agree on all empirical predictions (and agree that empirical predictions are how we go about evaluating things) has advantages.
Replies from: XiXiDu↑ comment by XiXiDu · 2014-04-07T19:05:49.068Z · LW(p) · GW(p)
Thanks. Outside of communities that entertain ideas such as acausal trade and ancestor simulations, I mostly interpret "atheism" to be an imprecise but useful term to communicate the beliefs that (a) any given religion has a negligible probability of being true, and that (b) empirical predictions are how we should go about evaluating things.
Replies from: Lumifer↑ comment by Lumifer · 2014-04-07T19:09:25.500Z · LW(p) · GW(p)
Typically, atheism is distinguished from agnosticism, and what you're describing is on the agnosticism side of the spectrum.
Atheism is commonly interpreted as "I know there are no gods".
Replies from: XiXiDu↑ comment by XiXiDu · 2014-04-08T07:44:49.814Z · LW(p) · GW(p)
Atheism is commonly interpreted as "I know there are no gods".
Such a distinction is technically correct and appropriate within communities such as this one. But under most circumstances it amounts to the kind of hairsplitting that the average person does not understand. Yes, it is possible that gods exist, or that Catholicism is true. But these possibilities are unlikely enough, or practically irrelevant enough, that most of the time it is appropriate to communicate "I know there are no gods".
Even here I find it very strange if someone argues that he is not an atheist based on hairsplitting arguments such as that 0 is not a probability or that we might live in a simulation.
Of course I agree that technically atheism is as irrational as believing that Jehovah exists with probability 1.
Replies from: IlyaShpitser, Lumifer↑ comment by IlyaShpitser · 2014-04-09T19:14:43.634Z · LW(p) · GW(p)
Dude the difference between "meaningless question" and "no Gods" is not hairsplitting, it's epistemology vs ontology. Do you really not see the difference?
I am about as interested in what a young earther thinks about God as what Aristotle thinks about acceleration. It is bad hygiene to throw out a concept because someone screwed it up badly.
Replies from: XiXiDu↑ comment by XiXiDu · 2014-04-10T08:06:19.895Z · LW(p) · GW(p)
Dude the difference between "meaningless question" and "no Gods" is not hairsplitting, it's epistemology vs ontology. Do you really not see the difference?
Philosophically it is not hairsplitting. In other words, if you are a philosopher, then in the context of doing philosophy, it is of practical importance to make this distinction. But in most contexts it seems meaningless to make such a distinction. In most contexts it would amount to hairsplitting, because it would make a distinction that's too fine to have practical consequences.
I am about as interested in what a young earther thinks about God as what Aristotle thinks about acceleration. It is bad hygiene to throw out a concept because someone screwed it up badly.
Your resources are limited. You have to constantly choose who you are listening to, and who you should ignore. It is possible that given certain goals (e.g. studying religion or psychology), it would make sense to listen to a young earth creationist.
One of the worst habits that LessWrong features is taking ideas too seriously. Any agent whose resources are limited is forced to use crude heuristics to filter out nonsense (such as basilisks).
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-10T15:28:16.139Z · LW(p) · GW(p)
But in most contexts it seems meaningless to make such a distinction. In most contexts it would amount to hairsplitting, because it would make a distinction that's too fine to have practical consequences.
If the distinction between what's out there, and what your beliefs are is too fine for a person, that person can be put to a better use than talking about God, because talking about God is above their pay grade.
Atheists don't get to appropriate people who disagree with them. It will just annoy people, and end up being counterproductive.
Your resources are limited.
Perhaps my resources are less limited than yours, in the sense that I am perfectly happy to listen to anyone who has something interesting to say, whether they put up a political banner on their beliefs you are happy with, or not. I like history in general, and I have a lot of respect for many religious thinkers, or thinkers who were motivated by religious questions. At one point the vast majority of the world's smart people were affiliated with a religion in some way.
Replies from: XiXiDu↑ comment by XiXiDu · 2014-04-12T15:27:15.366Z · LW(p) · GW(p)
If the distinction between what's out there, and what your beliefs are is too fine for a person, that person can be put to a better use than talking about God, because talking about God is above their pay grade.
Let's say my set of beliefs is exactly the same as yours, except that I also believe in an alien named Bob, which exists outside of the observable universe. Then my set of beliefs is too "fine", in the sense that it makes unnecessarily detailed assumptions about what's out there. I am not able to verify such assumptions in any meaningful way.
Perhaps my resources are less limited than yours, in the sense that I am perfectly happy to listen to anyone who has something interesting to say,...
I should probably have chosen websites instead of people. If you want to learn "what's out there" by browsing webpages, then you need to adopt some sort of heuristic that filters for the most promising results. Simply because you would never be able to read all webpages, as webpages are likely created at a faster pace than you could ever read them.
This means that you can't afford to muse that someone who seems crazy might actually have it all figured out. Talking to the crazy guy would be the last resort, when nothing else worked.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-14T13:08:18.650Z · LW(p) · GW(p)
I am calling for tolerance of anyone who agrees with empiricism as a method for getting things done. That is, say there is a set of people:
Daniel, David, Thomas, Will, Albert, John.
Daniel and David are atheists. Daniel is a hardcore reductionist, David thinks there is a hard problem of consciousness to explain, and so retreats to a version of dualism.
Thomas is agnostic. He is not sure if God or gods exist or not, nor is he willing to take a stance on this issue. He's happy with the scientific approach to exploring the unknown.
Albert, Will and John are theists. Albert thinks there is a creator God, but he left the universe completely alone to run on natural laws. Will thinks there is a God or gods, and moreover they interact with the universe, but not in a way that empiricist methods can catch (for whatever reason -- perhaps caprice or some purpose). John believes in God, and furthermore his religious beliefs cause him to believe that we should not vaccinate people against diseases at a young age. Furthermore, he does not believe in evolution.
The only person I have a problem with in this set is John. As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges fly and planes stay up, so to speak, I am not sure it is useful or polite to insist on anything else.
In other words, if you want to call the anti-vaccine people out for being idiots, great! That's useful. If you want to push the frontiers of science forward, great! That's useful. If you want to argue with agnostics or theists of the Albert or Will variety, well, I think you need a better hobby.
If you like, you can justify this call for tolerance as a call for "maintaining the fidelity of the posterior distribution."
Replies from: Lumifer, XiXiDu↑ comment by Lumifer · 2014-04-14T16:51:05.825Z · LW(p) · GW(p)
As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges fly and planes stay up, so to speak, I am not sure it is useful or polite to insist on anything else.
Well, what do we do about values, then? Specifically, about society norms which are codified and enforced as laws?
↑ comment by XiXiDu · 2014-04-14T14:48:11.831Z · LW(p) · GW(p)
Where would you fit the typical MIRI donor in here?
As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges fly and planes stay up, so to speak, I am not sure it is useful or polite to insist on anything else.
MIRI's mission to build an FAI is a good way to think about this. Given a singleton, an all powerful machine dictator, would you want it to be like any of the people you described? If some of those people would be better leaders than others, then why wouldn't you, to a lesser extent, insist on them becoming more like someone who you would readily empower to rule you?
Personally I wouldn't feel comfortable entrusting any of the people you describe with unlimited power. Neither would I trust any MIRI staff, or myself. All seem flawed in more or less subtle ways.
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up, and planes fly. Yet what makes LessWrong partly awful is that all logical consequences are taken seriously. I do insist on somehow discounting these consequences, because it is unworkable, and dangerously distracting, to worry about such possibilities as e.g. a simulation shutdown. In other words, I wouldn't entrust an FAI that would give money to a Pascalian mugger, or even one which took basilisks seriously.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-14T18:51:35.112Z · LW(p) · GW(p)
All seem flawed in more or less subtle ways.
I think you are going on a tangent. We are talking about beliefs, not values. I think we can all generally agree on a reasonable set of things we all think is bad, and we should insist people agree to respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up, and planes fly.
Sorry, but no. In order for acausal trade, basilisks, etc. to logically follow from the "reasonable set of things describing modern empirical science + math" it would have to be the case that any model (in the model theoretic sense, that is a universe we construct) consistent with the latter also contains the former. That just isn't so.
We should take all logical consequences we can compute of things we know seriously. The entire trouble with basilisks et al. is precisely that they don't logically follow, but are taken seriously anyways. Only concentrating on one untestable possibility out of great many is precisely what my call for tolerance for views on untestable things is meant to combat. A culture that agrees only on what we can test, and lets your mind wander about other matters will be resistant to things like basilisks simply because most members of such a culture will believe something else, and give you other convincing possibilities (and you will be unable to choose since they are all untestable anyways).
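To spell out that distinction (these are just the standard model-theoretic definitions, not anything specific to this thread):

$$\Gamma \models \varphi \;\iff\; \text{every model of } \Gamma \text{ also satisfies } \varphi, \qquad\quad \Gamma \cup \{\varphi\} \text{ is consistent} \;\iff\; \text{some model satisfies both } \Gamma \text{ and } \varphi.$$

So a proposition can be consistent with "modern empirical science + math" without logically following from it: it only needs to hold in some model, not in all of them.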
Replies from: Lumifer, XiXiDu↑ comment by Lumifer · 2014-04-15T15:14:22.664Z · LW(p) · GW(p)
I think we can all generally agree on a reasonable set of things we all think is bad, and we should insist people agree to respect those things.
We can? That certainly doesn't seem to be so.
Also, can you step back a hundred years or so and repeat that? :-)
↑ comment by XiXiDu · 2014-04-15T08:36:44.028Z · LW(p) · GW(p)
I think we can all generally agree on a reasonable set of things we all think is bad, and we should insist people agree to respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
[...]
We should take all logical consequences we can compute of things we know seriously. The entire trouble with basilisks et al. is precisely that they don't logically follow, but are taken seriously anyways.
I am not sure I understand you here. Should we shun people who believe that the most probable model consistent with "a reasonable set of things describing modern empirical science + math" contains basilisks etc.? Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What reasonable ethical system do you have in mind which could prevent people from taking dangerous actions if they believe Pascal's mugging, or basilisks, to be a logical consequence that is to be taken seriously?
A culture that agrees only on what we can test, and lets your mind wander about other matters will be resistant to things like basilisks simply because most members of such a culture will believe something else, and give you other convincing possibilities (and you will be unable to choose since they are all untestable anyways).
Suppose there exists a highly effective model, which contains basilisks, but which is consistent with "a reasonable set of things describing modern empirical science + math". What if this diverse culture were threatened by the propagation of this model?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-15T14:49:38.675Z · LW(p) · GW(p)
Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What if this diverse culture were threatened by the propagation of this model?
"Consistent" is a much lower bar to meet than "logically must follow." Jehova and your green alien Bob are also consistent. Sensible religions are generally consistent.
I call for the spread of the culture of tolerance rather than the culture of religious war. History shows that the culture of tolerance will serve your goals better here. You can always find a boogieman as an excuse to knock heads -- be it Scientology, Wahabi Islam, Communism or whatever. But will that help you?
↑ comment by Lumifer · 2014-04-08T17:21:30.697Z · LW(p) · GW(p)
A fair point.
But do note that this subthread is about you asking IlyaShpitser to elaborate on what his "I'm not an atheist" means, and within this context the distinction might be relevant.
Replies from: XiXiDu↑ comment by XiXiDu · 2014-04-09T10:49:29.377Z · LW(p) · GW(p)
But do note that this subthread is about you asking IlyaShpitser to elaborate on what his "I'm not an atheist" means...
Yes, and I gave an explanation (without being asked) of why I asked him to elaborate on it in the first place. My guess was that he simply made this technically correct distinction. But I wanted to make sure whether he instead meant that he is a theist, since most of the time, when people say that they do not subscribe to atheism, as opposed to saying that they are agnostics, they mean that they hold certain irrational beliefs.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-09T19:18:26.397Z · LW(p) · GW(p)
If I said I was a theist, would I be run out of town? I already said I wasn't sure about this whole rationality business.
Replies from: XiXiDu, Eugine_Nier↑ comment by XiXiDu · 2014-04-10T07:41:22.414Z · LW(p) · GW(p)
I already said I wasn't sure about this whole rationality business.
I am not sure of "this whole rationality business" either. But I don't know what you mean by it. You listed a bunch of points you disagree with. But there are a lot of ways to disagree with all of these points. Some of those possible "disagreements", such as "but Jehova is the one true god", are rather weird.
If I said I was a theist, would I be run out of town?
You are obviously a really smart fellow. It would have been fascinating to learn that you are a theist. That's all.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-10T13:38:55.115Z · LW(p) · GW(p)
I think I prefer Will Newsome's world to Eliezer Yudkowsky's world. But this is about my preferences, not about ontology.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-04-11T22:33:26.469Z · LW(p) · GW(p)
I think I prefer Will Newsome's world to Eliezer Yudkowsky's world.
I have never been clear what Will Newsome's world is. Is he writing about it more fully somewhere else? But my almost invariable experience is that things of which I hear tantalising hints turn out, when they turn out to be anything, to be merely interesting-if-true, along with alien abductions, the Loch Ness Monster, and interpretations of quantum mechanics.
Eliezer's world is as clear and inviting as a summer day in comparison (although I would not extend that to what all of his admirers make of it). ETA: I'm leaving out his views on fooming AI, which I don't take an interest in even though it's his entire motivation for creating LessWrong, and MWI, which I don't consider myself qualified to have opinions about. I'm not signed up for cryonics either.
Replies from: Will_Newsome, IlyaShpitser↑ comment by Will_Newsome · 2014-05-30T20:25:14.425Z · LW(p) · GW(p)
I have never been clear what Will Newsome's world is.
Me neither man. There are, like, these gods, right? Or one god-like thingy at least. And also I'm supposed to help some humans build their own new god somehow? Except I don't really know how the already-present gods feel about that, and at any rate the humans are all kinda crazy and bizarrely terrible at moral philosophy, I guess because whatever process made them apparently wasn't thinking very far ahead, so instead the humans just sit there metabolizing and ineffectually signaling at each other until they die. It is occasionally beautiful.
↑ comment by IlyaShpitser · 2014-04-12T14:25:57.619Z · LW(p) · GW(p)
Will Newsome's is a demon-haunted world. But I think he's still around, and might pipe up himself.
Perhaps a better known person than Will who wrote more would be Philip K. Dick. Philip K. Dick saw "something" once (perhaps due to temporal lobe epilepsy), and spent the rest of his life trying to come to terms with what he saw. His writing is not very clear at all, but that is because he is tackling a very difficult problem.
Replies from: Risto_Saarelma, Will_Newsome, Richard_Kennaway↑ comment by Risto_Saarelma · 2014-04-13T19:07:39.860Z · LW(p) · GW(p)
I'd think the non-cuddly theism of the Will Newsome or Philip K. Dick sort would be sort of like paranoid schizophrenia, but without the consoling part that it's all just misfirings in your brain and not all actually out there. Not quite sure you'd want to live there, though it might certainly be occasionally more interesting than staid materialism. Muflax used to have a post about something that sounds like that, but it got disappeared.
Replies from: XiXiDu, IlyaShpitser↑ comment by IlyaShpitser · 2014-04-13T21:59:32.204Z · LW(p) · GW(p)
Are there any serious cuddly theists? "He is not a tame lion." -- C.S. Lewis. (I don't like C.S. Lewis).
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2014-04-14T07:45:40.598Z · LW(p) · GW(p)
Pretty much anyone who at some point goes "and therefore it must obviously be that God is benevolent" sounds like a candidate. My vague impression is that a bunch of religious philosophers like Bishop Berkeley and Descartes had arguments you could caricature as "reality might actually be really messed up, so it's a good thing God has to be benevolent then and see that thing stay fixed up". Usually only the "reality might be really messed up" part is what stays in the philosophical canon.
Also there's Raymond Smullyan's Who Knows? which I read and liked some years ago.
↑ comment by Will_Newsome · 2014-05-30T20:36:03.038Z · LW(p) · GW(p)
Will Newsome's is a demon-haunted world.
Perhaps so, but it's not unpleasant, not for that reason anyway.
↑ comment by Richard_Kennaway · 2014-04-12T19:20:09.451Z · LW(p) · GW(p)
I've read a fair amount of Dick, and while the fiction may be entertaining, I can't take the "something" as anything more significant than the crud you get on your screen if your graphics card goes wrong. It may be very entertaining crud, it may even inspire great art, but in itself it's of no significance.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-13T17:51:25.666Z · LW(p) · GW(p)
I find this view somewhat unempathetic: "this impacted tooth pain is not very significant, it is just a cluster of neurons firing here and also here." What he saw was significant to him.
Replies from: XiXiDu, Richard_Kennaway↑ comment by XiXiDu · 2014-04-13T18:55:10.171Z · LW(p) · GW(p)
A few days ago, for the second time in my life, I had a nested dream. In other words, I dreamed that I was dreaming, that I woke up within a dream. Interestingly, the dream within the dream was, from the perspective of this level of reality, completely sane. While the world I woke up to, within the dream, was very different. I dreamed that I dreamed that our neighbours removed some bushes from their garden. Which they didn't do on this level. But everything else was seemingly exactly like it is here. But the world that I dreamed I woke up to was weird (which I was not aware of in there). There was a foggy harbor next to our house, and a big ship was passing through it. Whereas in this level, and the nested level, the sea is far away.
Is this experience significant? Well, it could mean that there are many levels of reality, this just being another one I will wake up from sooner or later. It's possible. But I just don't see how it could be reasonable to take this into account when trying to figure out what is out there, as long as more sensible approaches have not been ruled out. Where sensible stands for concrete, specific, lawful, empirical activities that can be falsified in an intersubjective (objective) manner.
↑ comment by Richard_Kennaway · 2014-04-13T18:15:14.427Z · LW(p) · GW(p)
What he saw was significant to him.
Oh yes, it was very significant to him. Jill Bolte Taylor's stroke was significant to her. Aldous Huxley's drug experiences were significant to him. John C. Wright's heart attack was significant to him.
But none of these are significant to me, and the tales they tell are told by compromised witnesses. If brain damage is the entry price for a glimpse of the interesting-if-true things they saw, I'll pass.
↑ comment by Eugine_Nier · 2014-04-12T04:16:38.625Z · LW(p) · GW(p)
If I said I was a theist, would I be run out of town?
No. We've had open theists hang around in the past.
comment by Zack_M_Davis · 2014-03-30T21:00:41.559Z · LW(p) · GW(p)
This niche has already been filled by the Twelve Virtues.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-03-30T21:57:02.176Z · LW(p) · GW(p)
Twelve Virtues would be a great first chapter in the book. Ten Commandments would be a great last chapter.
The former is about the mindset one needs to start reasoning about rationality; the latter is a list of conclusions to remember.
comment by fezziwig · 2014-03-30T20:02:28.627Z · LW(p) · GW(p)
Ilya's right, it's too long. For example, in Exodus it is written, "Thou shalt not commit adultery." The author doesn't bother defining adultery because his audience shares enough history and culture to know pretty well what's meant. I suspect that you're trying to defend yourself against corner cases and nitpicking. That's a reasonable thing to want (especially in this crowd!) but that's what commentaries are for.
Come to think of it, that division between commandment and commentary might be useful to you. For example, I would rewrite your ninth commandment as, "Thou shalt not assign probability of 0 or 1", and appeal to Rebbe Yudkowsky's writings for questions like "but what about epsilon?"
Replies from: Viliam_Bur, jkadlubo, ThisSpaceAvailable↑ comment by Viliam_Bur · 2014-03-30T21:54:24.779Z · LW(p) · GW(p)
I like that it's self-contained, not a maze of hyperlinks. Could be a bit shorter, though.
Perhaps each point could start with the essence (in bold) and follow with an explanation, like this:
Thou shalt not assign a probability exactly equal to 0 or 1.
Declaring to the skies that thy certainty regarding any matter is absolute is declaring thyself infallible and placing thyself above thine most holy lord, Bayes.
↑ comment by jkadlubo · 2014-03-30T22:57:16.561Z · LW(p) · GW(p)
Actually, the Old Testament has three versions of the commandments, each one of different length (Exodus 20, Deuteronomy 5, and the third one I forgot. Fun fact: I learned that at literature lessons in high school, not at any kind of religious lessons). The shorter commandments are the same, but the longer ones differ - maybe it was too difficult even for ancient Israelites to remember them exactly?
Let's try to make some other points shorter.
Number 10. Thou shalt meekly accept battles lost in pursuit of wars won
Number 7. Thou shalt not cease falsifying thy beliefs
Replies from: fezziwig↑ comment by fezziwig · 2014-03-31T00:00:59.481Z · LW(p) · GW(p)
I'm only familiar with the two versions of the commandments given in Exodus and Deuteronomy: I specified Exodus specifically to clarify that distinction, then wound up using an example that's the same in both of them. Oh well. I've never heard of a third, though; can you remember any other context?
I'd expect there to be exactly two versions, for the same reason that there are two creation stories in Genesis: the early books of the Bible are the first written form of a faith with two competing (though closely related!) oral traditions.
Anyway, now that I've thought about it more I think this concept would work better as a riff on the book of Proverbs.
↑ comment by ThisSpaceAvailable · 2014-03-30T23:20:28.259Z · LW(p) · GW(p)
Of course, that shows that the Ten Commandments cannot possibly be a basis, rather than summary, of morality, since the commandment "Thou shalt not commit adultery" does not put forth a moral rule, but rather reminds the audience to follow a previously existing moral rule.
Replies from: fezziwig
comment by IlyaShpitser · 2014-03-30T17:42:41.146Z · LW(p) · GW(p)
Too long.
Replies from: None↑ comment by [deleted] · 2014-03-31T07:14:37.687Z · LW(p) · GW(p)
And too purple-prosy. Lines like this:
Verily, the most blessed of silver linings is the fact that the inherent incertitude of one’s own beliefs also implies that there is never cause for complete hopelessness and despair.
Need to be rewritten / removed.
comment by Mestroyer · 2014-03-30T16:56:33.278Z · LW(p) · GW(p)
Thou shalt never engage in solipsism or defeatism, nor wallow in ennui or existential angst, or in any other way declare that thy efforts are pointless and that exerting thyself is entirely without merit. For just as it is true that matters may never get to the point where they cannot possibly get any worse, so is it true that no situation is impossible to improve upon. Verily, the most blessed of silver linings is the fact that the inherent incertitude of one’s own beliefs also implies that there is never cause for complete hopelessness and despair.
Absolute-certainty/universal applicability red flag raised.
Silver-lining claim red flag raised.
And by far, most importantly: map-territory conflation red flag raised.
Some possible situations truly can't be improved upon. The fact that you must always be uncertain about whether you are really in one is no help. Just a guarantee that in such a situation a rationalist will always have a little bit of false hope.
Upvoted anyway, most of these are good.
Replies from: Sophronius↑ comment by Sophronius · 2014-03-30T18:13:08.296Z · LW(p) · GW(p)
Okay, I acknowledge that "no situation is impossible to improve upon" is not strictly speaking true for literally every conceivable situation, but if ever there is a time where it's acceptable to leave out the ol' BOCTAOE for the sake of prose, I'd say a post including this many thees and thous would be it.
I don't think I conflated map and territory though. The statement "There is never cause for complete hopelessness and despair" is a policy recommendation (read it as: "complete hopelessness and despair is never useful"), not a statement about the territory.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-03-31T11:13:04.637Z · LW(p) · GW(p)
I think "Grant me the serenity to accept the things I cannot change, the courage to change the things I can, and wisdom to know the difference." is much better than wanting to optimize everything.
comment by Sophronius · 2014-04-04T14:28:06.544Z · LW(p) · GW(p)
Disregarding the style of writing, I would just like to say that I think most people on this website could generally do very well with taking commandment number 8 seriously. Meditating on it every night before going to bed might help, or maybe tattooing it on their foreheads so they see it in the mirror when they wake up... 6 foot flaming letters in the sky spelling it out, maybe.
In all seriousness, I cannot count the number of times I get the urge to bash some heads together and yell "Stop being clever! You're here to learn things, not win arguments or score points!"
If anyone has a good idea for how to get this message across in a way that doesn't seem overly combative, let me know.
Replies from: Lumifer↑ comment by Lumifer · 2014-04-04T15:53:23.088Z · LW(p) · GW(p)
I get the urge to bash some heads together and yell "Stop being clever! You're here to learn things, not win arguments or score points!"
On occasion I do get the urge to bash some heads and yell "Stop telling me what I'm here for! My goals are not necessarily what you think they are or what you think they should be!".
comment by ChristianKl · 2014-03-30T17:28:29.499Z · LW(p) · GW(p)
1) Thou shalt never conflate the truth or falsehood of a proposition with any other characteristic, be it the consequences of the proposition if it be true, or the consequences of believing it for thyself personally, or the pleasing or unpleasant aesthetics of the belief itself. Furthermore, thou shalt never let thy feelings regarding the matter overrule what thy critical faculties tell thee, or in any other way act as if reality might adjust itself in accordance with thine own wishes.
The map is not the territory. Rationality is about making effective decisions.
If you as an American ask me whether I come from Berlin, I'm going to say "Yes." I was born in Berlin. If someone from Berlin asks me I could say: "No. I was born in Spandau." Spandau is a district of Berlin and there's a complex history. Both answers are true because it depends on the context in which the question is asked.
When doing biological modeling there is often a tradeoff between the complexity of the model and its accuracy. Which model you want to use depends on the purpose. If you want to model a whole brain you are going to use a less complex model of a neuron than when you want to model 100 neurons and how those neurons interact with each other.
Beauty is a guiding principle in theoretical physics.
Feelings are a valuable source of information. Shutting down any source of information is not a good idea.
6) Thou shalt never judge a real or proposed action by any metric other than this: The expected consequences of the action, both direct and indirect, be they subtle or blatant, taking into account all relevant information available at the time of deciding and no more or less than this.
Basically you are saying that Eliezer is wrong about Timeless decision theory.
Replies from: Sophronius, ThisSpaceAvailable↑ comment by Sophronius · 2014-03-30T18:00:27.951Z · LW(p) · GW(p)
The map is not the territory. Rationality is about making effective decisions.
I profess I entirely fail to see how your post refutes the quoted paragraph. Yes, using models is useful, but that is in no way the same as falling prey to wishful thinking. I keep trying to re-read that paragraph to see how it might be interpreted in a way that makes your reply seem natural, but my best guess is that you might have read "Do not let feelings overrule critical thinking or in any other way engage in wishful thinking" as "ignore your feelings". And I still don't see how saying models are useful flows from there.
Basically you are saying that Eliezer is wrong about Timeless decision theory.
As far as I know, that sequence is meant to detail ways in which your actions might have indirect/timeless/acausal consequences, and therefore supplements rather than contradicts consequentialism. If I'm wrong, please explain how and why.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-03-30T23:18:57.112Z · LW(p) · GW(p)
Yes, using models is useful, but that is in no way the same as falling prey to wishful thinking.
Your paragraph doesn't mention anything about wishful thinking. Wishful thinking might be the only thing that comes to mind for you if you think about allowing feelings to override critical thinking, but it isn't.
If a sudden feeling of fear triggers in myself and I can't explain with rational thought why a given situation is dangerous or why I would feel fear, I still remove myself from the situation.
There are studies of nurses showing that if a nurse gets a feeling that a patient is in a critical situation, but has no evidence that the patient is in a critical situation, the patient should still get extra supervision. There is good evidence that the nurse should let her intuitive feelings overrule critical thinking if the cost of a false positive is low but the cost of a false negative is high.
In case you want to argue that you can make a rational decision by making a utility calculation in your head, that might work in the case of the nurses, but there are plenty of situations where the time to do that calculation isn't available and it's very useful to respond immediately.
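As a minimal sketch of that cost asymmetry (all numbers below are assumptions for illustration, not figures from the nursing studies):

```python
# Illustrative sketch of the false-positive / false-negative asymmetry described above.
# All numbers are assumed for the example; none come from the studies mentioned.

def expected_cost(p_critical, cost_supervision, cost_missed_critical, supervise):
    """Expected cost of a policy, given a subjective probability that the
    patient really is in a critical state."""
    if supervise:
        return cost_supervision               # paid whether or not the patient is critical
    return p_critical * cost_missed_critical  # paid only if the patient really is critical

p_critical = 0.05        # the nurse's gut feeling, read as a rough probability (assumed)
cost_supervision = 1.0   # extra monitoring is cheap (assumed units)
cost_missed = 1000.0     # missing a real emergency is very expensive (assumed units)

print(expected_cost(p_critical, cost_supervision, cost_missed, supervise=True))   # 1.0
print(expected_cost(p_critical, cost_supervision, cost_missed, supervise=False))  # 50.0
# Even a weak intuitive signal favours supervising when the costs are this lopsided.
```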
If I dance intimately with a woman who's a stranger then it's very important that I immediately act when I get the feeling that something isn't right. When I started dancing I tried to get a rational model of what intimacy is or isn't okay and to act based on mental rules. It doesn't work that way.
That requires that I can tell the feeling of "touching a woman feels good" apart from "this interaction doesn't flow well, it's better to reduce intimacy". Understanding emotions and being able to tell different ones apart is useful. There are feelings that you should allow to override critical analysis in specific situations, there are other feelings that you shouldn't allow to override critical analysis.
In biological modeling the feelings of the person doing the modeling aren't so central that they should override critical thought, but the model still gets optimized for a certain use case, and good models often trade some accuracy for simplicity. Simple models are more beautiful, and beautiful models should be preferred over ugly complicated ones if both models predict reality equally well.
As far as I know, that sequence is meant to detail ways in which your actions might have indirect/timeless/acausal consequences, and therefore supplements rather than contradicts consequentialism. If I'm wrong, please explain how and why.
It's not about the indirect consequences of the action but about the consequences of being the kind of person that engages in specific actions.
↑ comment by ThisSpaceAvailable · 2014-03-30T23:29:29.133Z · LW(p) · GW(p)
Perhaps "consequences" needs to be tabooed. A consequence of something is something that is caused by it, but what does "cause" mean? That's part of what makes Newcomb so paradoxical: it's generally accepted that cause must precede effect, but the hypothetical is set up to treat Omega's actions as depending on a decision after those actions. Are the contents of the boxes included in the category of "consequences" of the choice of how many boxes to take?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-03-31T10:46:34.230Z · LW(p) · GW(p)
Perhaps "consequences" needs to be tabooed.
I think most people actually mean consequence when they say the word. The difference between someone who practices TDT and someone who does CDT is more than a bunch of semantics. The paragraph describes CDT.
Beware of blaming semantics when you should update one of your core beliefs instead.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-04T16:11:54.470Z · LW(p) · GW(p)
Who here actually knows exactly what TDT is? (I am not sure I do -- it was never written down fully -- and I thought about these issues a lot). Are you just assuming people got TDT right? TDT might be "conceptual vaporware". I read an old paper on it, but I didn't like the paper (nor did that paper have a full description).
Replies from: ChristianKl↑ comment by ChristianKl · 2014-04-04T16:53:44.406Z · LW(p) · GW(p)
it was never written down fully
I think the wiki does contain a written down definition:
Timeless decision theory (TDT) is a decision theory, developed by Eliezer Yudkowsky which, in slogan form, says that agents should decide as if they are determining the output of the abstract computation that they implement. This theory was developed in response to the view that rationality should be about winning (that is, about agents achieving their desired ends) rather than about behaving in a manner that we would intuitively label as rational.
I think what Sophronius describes in the paragraph is what's "intuitively labeled as rational".
I think that's sort of the problem with the post. It's a list of 10 things that intuitively feel like they are the things rational people should do.
It's not a list that tries to describe what reasoned principles about rationalism Lesswrong came up with. TDT is sort of the LW house decision theory. It's about moving beyond the intuitive idea of rationalism that's popular out there. LW rationality is, on the other hand, supposed to be about winning.
I think the example of reacting when fear comes up is a good example. A nurse should follow the algorithm that if she feels a given patient is in a critical condition the patient gets extra supervision.
The intuitive rational belief is that the nurse should have good reasons that she can explain to other people about why a patient needs supervision. The intuitive rational belief is that there should be reasons besides the emotions of the nurse to give the patient extra supervision.
We do have studies that validate the abstract heuristic that the nurse should let her feelings overrule her intellectual analysis of the situation.
If you read the original paper from two decades ago that introduced the concept of evidence-based medicine, you find that it's about getting medical professionals to read more scientific papers and deemphasize intuitive decision making.
We learned something in those two decades. We decided that rationality should be about winning. We don't know everything but we can at least make an effort to be less wrong. We know that specific choices are better made with intuition, so it would be stupid not to go the winning way and instead try to analyse the situation intellectually. Of course the nurse should still learn medical science but she should also listen to her intuition.
We are in the 21st century and not anymore in the 20th. Late 20th-century ideology is outdated and it's useful to update. To get less wrong.
Is TDT the best way to think about making decisions? It's still in its infancy and there's still room to refine it. Let's run CFAR workshops to see what heuristics are actually practical when you teach them to humans.
There are a bunch of folk rationality beliefs.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-04-04T17:05:55.019Z · LW(p) · GW(p)
I think the wiki does contain a written down definition:
I am sorry, but that is not specified at all. If I give you a specific problem (I have a list of them right here!), will you be able to tell me what "the TDT answer" should be? The way people seem to use TDT is as a kind of "brand name" for a nebulous cloud of decision theoretic ideas. Until there is a paper and a definition, TDT is not a defensible point. It has to be formally written down in order to have a chance to be wrong (being wrong is how we make progress after all).
If it's a set of related decision theories, fine -- tell me what the set is! Example: "naive EDT" is "choose an action that maximizes utility with respect to the distribution p(outcome | action took place)." This is very clear, I know exactly what this is.
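For what it's worth, here is a minimal sketch of that "naive EDT" rule as stated, run on Newcomb's problem (which comes up elsewhere in this thread); the 99% predictor accuracy and the payoffs are assumptions made purely for illustration:

```python
# Naive EDT as stated above: choose the action that maximizes expected utility
# with respect to p(outcome | action took place). The predictor accuracy and
# payoffs below are illustrative assumptions.

p_full_given_action = {"one_box": 0.99, "two_box": 0.01}  # p(box B is full | action)

def utility(action, box_full):
    """Payoff in dollars: box B holds $1,000,000 when full; box A always holds $1,000."""
    return (1_000_000 if box_full else 0) + (1_000 if action == "two_box" else 0)

def naive_edt_choice(actions):
    def expected_utility(a):
        p = p_full_given_action[a]
        return p * utility(a, True) + (1 - p) * utility(a, False)
    return max(actions, key=expected_utility)

print(naive_edt_choice(["one_box", "two_box"]))  # -> "one_box"
```

A CDT agent, treating the box contents as fixed at decision time, would pick two_box instead; pinning down where TDT lands on cases like this is exactly the kind of specification the comment above is asking for.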
comment by ThisSpaceAvailable · 2014-03-30T23:34:23.540Z · LW(p) · GW(p)
I wonder whether you have an understanding of what "solipsism" means that is different from mine. How can a rationalist commandment include a proscription against considering a particular hypothesis?
comment by [deleted] · 2014-03-30T22:30:21.470Z · LW(p) · GW(p)
A list of exact things at the core of rationality is well-traveled ground; however, if you focus exactly on "things to take away from Less Wrong" I'd write a different list.
I hate flowery language and "commandment" style writing, so these are more along the lines of ten Oblique Strategies than ten commandments.
- Worrying about technology that doesn't exist is a gamble; sometimes it is correct, but oftentimes it can be of no consequence to you.
- It's very easy to claim to be rational; it's much harder to win. Avoid calling yourself a rationalist. Save your breath to win.
- To have a thing to protect is rare indeed. (Aside: If your thing-to-protect is the same as a notable celebrity, or as the person you learned the concept from, it is not your thing-to-protect.)
- The way to get better at doing things normal people are good at is not by doing things other than the way normal people would do them.
- It's always easier to attack than to defend.
- The sharpest eyes have the darkest blind spots.
- The quickest minds get stuck the fastest.
- Whenever it is possible for a human to take a path of great effort, or a path of least effort, they will take the path of least effort, and pretend it was the path of great effort.
- Whenever it is possible for a human to lie to themselves, they will.
- Scavenge. Strip the value from all things you encounter. If the strategy of your enemy will make you win, use it. If the weapon of your enemy makes you win, use it. The scavenger
I would say more than anything else these are the things I've taken from Less Wrong, not from the Sequences, nor from the impulse-towards-rationality within all interactions, but from the veneer of community stretched on top of those things.
Replies from: Mestroyer↑ comment by Mestroyer · 2014-03-31T07:06:30.004Z · LW(p) · GW(p)
To have a thing to protect is rare indeed. (Aside: If your thing-to-protect is the same as a notable celebrity, or as the person you learned the concept from, it is not your thing-to-protect.)
Really? What if the thing you protect is "all sentient beings," and that happens to be the same as the thing the person who introduced it to you or a celebrity protects? There're some pretty big common choices (Edited to remove inflationary language) for what a human would want to protect.
Beware value hipsterism.
Or, if by "thing to protect", you really mean "means to protect", and you're warning against having the same plan to protect the thing as a celebrity or person who introduced the idea to you, this sounds like "Celebrities and people who introduce people to the idea of means to protect things are never correct and telling the truth about the best available means to protect", which is obviously false.
Replies from: None↑ comment by [deleted] · 2014-04-02T19:51:37.300Z · LW(p) · GW(p)
Really? What if the thing you protect is "all sentient beings," and that happens to be the same as the thing the person who introduced it to you or a celebrity protects?
You, personally, probably don't care about all sentient beings. You probably care about other things. It takes a very rare, very special person to truly care about "all sentient beings," and I know of 0 that exist.
I find it very convenient that most of Less Wrong has the same "thing-to-protect" as EY/SigInst, for the following reasons:
- Safe strong AI is something that can only be worked on by very few people, leaving most of LW free to do mostly what they were doing before they adopted that thing-to-protect.
- Taking the same thing-to-protect as the person they learned the concept from prevents them from having to think critically about their own wants, needs, and desires as they relate to their actual life. (This is deceptively hard -- most people do not know what they want, and are very willing to substitute nice-sounding things for what they actually want.)
Taken in concert with this quote from the original article:
Similarly, in Western real life, unhappy people are told that they need a "purpose in life", so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes. You should be careful not to pick something too expensive, though.
...it seems obvious to me that most people on LW are brutally abusing the concept of having a thing-to-protect, and thus have no real test for their rationality, making the entire community an exercise in doing ever-more-elaborate performance forms rather than a sparring ground.
Replies from: Mestroyer↑ comment by Mestroyer · 2014-04-02T23:04:48.761Z · LW(p) · GW(p)
You, personally, probably don't care about all sentient beings. You probably care about other things. It takes a very rare, very special person to truly care about "all sentient beings," and I know of 0 that exist.
I care about other things, yes, but I also care quite a bit about all sentient beings (though not really on the level of "something to protect", I'll admit), and I cared about them before I had even heard of Eliezer Yudkowsky. In fact, when I first encountered EY's writing, I figured he did not care about all sentient beings but rather about all sapient beings, and was misusing the word the way science fiction usually does, rather than holding some weird theory of what consciousness is that I haven't heard of anyone else respectable holding, that the majority of neuroscientists disagree with, and that, unlike tons of his other contrarian positions, he doesn't argue for publicly (I think there might have been one Facebook post where he made an argument about it, but I can't find it now).
Something I neglected in the phrase "all sentient beings" is that I care less about "bad" sentient beings, or sentient beings who deliberately do bad things, than about "good" sentient beings. But even for that classic example of evil, Adolf Hitler, if he were alive, I'd rather he be somehow reformed than killed.
I find it very convenient that most of Less Wrong has the same "thing-to-protect" as EY/SigInst, for the following reasons: Safe strong AI is something that can only be worked on by very few people, leaving most of LW free to do mostly what they were doing before they adopted that thing-to-protect.
I may not be able to do FAI research, but I can do what I'm actually doing, which is donating a significant fraction of my income to people who can. (slightly more than 10% of adjusted gross income last tax year, and I'm still a student, so as they say, "This isn't even my final form").
Taking the same thing-to-protect as the person they learned the concept from prevents them from having to think critically about their own wants, needs, and desires as they relate to their actual life. (This is deceptively hard -- most people do not know what they want, and are very willing to substitute nice-sounding things for what they actually want.)
What I've really taken from the person who taught me the concept of a thing-to-protect, is a means-to-protect. If I hadn't been convinced that FAI was a good plan for achieving my values, I would be pursuing lesser plans to achieve my values. I almost started earning to give to charities spreading vegetarianism/veganism instead of MIRI. And I have thought pretty hard about whether this is a good means-to-protect.
Also, though I may not be "thing-to-protect"-level altruistic yet, I'm working on it. I'm more altruistic than I was a few years ago.
This isn't even my final form.
...it seems obvious to me that most people on LW are brutally abusing the concept of having a thing-to-protect, and thus have no real test for their rationality, making the entire community an exercise in doing ever-more-elaborate performance forms rather than a sparring ground.
Examples?
Replies from: None↑ comment by [deleted] · 2014-04-08T21:12:59.711Z · LW(p) · GW(p)
I'm not going to read all of that. Responding to a series of increasingly long replies is a negative-sum game.
I'll respond to a few choice parts though:
Examples?
Most of this thread: http://lesswrong.com/r/discussion/lw/jyl/two_arguments_for_not_thinking_about_ethics_too/
I may not be able to do FAI research, but I can do what I'm actually doing, which is donating a significant fraction of my income to people who can.
But why do you care about that? It's grossly improbable that, out of the vast space of things-to-protect, you were disposed to care about that particular thing before hearing of the concept. So you're probably just shopping for a cause in exactly the way EY advises against.
To put it another way: for the vast majority of humans, their real thing-to-protect is probably their children, or their lover, or their closest friend. The fact that this is overwhelmingly underrepresented on LW indicates something funny is going on.
Replies from: Mestroyer