Comments
I went from ardently two-boxing to ardently one-boxing when I read that you shouldn't envy someone's choices. More general than that, actually; I had a habit of thinking "alas, if only I could choose otherwise!" about aspects of my identity and personality, and reading that post changed my mind on those aspects pretty rapidly.
An extreme form of brain damage might be destruction of the entire brain. I don't think that someone with their entire brain removed has consciousness but lacks the ability to communicate it; suggesting that consciousness continues after death seems to me to be pushing well beyond what we understand "consciousness" to refer to.
The brain seems to be something that leads to consciousness, but is it the only thing?
Maybe other things can "lead to" consciousness as well, but what makes you suspect that humans have redundant ways of generating consciousness? Brain damage empirically causes damage to consciousness, so that pretty clearly indicates that the brain is where we get our consciousness from.
If we had redundant ways of generating consciousness, we'd expect that brain damage would simply shift the consciousness generation role to our other redundant system, so there wouldn't be consciousness damage from brain damage (in the same way that damage to a car's engine wouldn't damage its ability to accelerate if it had redundant engines). But we don't see this.
we don't really know.
We know there's no afterlife. What work is "really know" doing in this sentence, that is capable of reversing what we know about the afterlife?
Well, in dath ilan, people do still die, even though they're routinely cryonically frozen. I suspect with an intelligence explosion death becomes very rare (or horrifically common, like, extinction).
I'd caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).
Likely a better approach to this concern would be to silently watch for those behaviours developing and worry about it if and when it actually does happen. (Note that refusing to help her with training and diet means she gets this help from someone who is not watching out for the possibility of exercise addiction).
There are a few approaches that might work for different people:
- Talk as though she doesn't have anorexia. Since you are aware, you can tailor your message to avoid saying anything seriously upsetting (i.e. you can present the diet assuming control of diet is easy, or assuming control of diet is hard). I don't recommend this approach.
- Confront the issue directly ("Exercise is what tells your body to grow muscle, but food is what muscles are actually built out of, so without a caloric surplus your progress will be slow. I'm aware that this is probably a much harder challenge for you than most people..."). I don't recommend this approach.
- Ask her how she feels about discussing diet. ("Do you feel comfortable discussing diet with me? Feel free to say no. Also, don't feel constrained by your answer to this question; if later you start wishing you'd said no, just say that, and I'll stop."). I recommend this approach.
In any case, make it clear from the outset you want to be respectful about it.
It seems like the War on Terror, etc, are not actually about prevention, but about "cures".
Some drug addiction epidemic or terrorist attack happens. Instead of it being treated as an isolated disaster like a flood, which we should (but don't) invest in preventing in the future, it gets described as an ongoing War which we need to win. This puts it firmly in the "ongoing disaster we need to cure" camp, and so cost is no object.
I wonder if the reason there appears to be a contradiction is just that some policy-makers take prevention-type measures and create a framing of "ongoing disaster" around it, to make it look like a cure (and also to get it done).
One would be ethical if their actions end up with positive outcomes, disregarding the intentions of those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plan would have done a very ‘morally good’ action.
This seems intuitively strange to many, it definitely is to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others.
If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed - at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).
So we don't know if an action is good or not until it's in the past. But we can only choose future actions! What's a consequentialist to do? (Equivalently, since we don't know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.
The next step is to say that we want as many good things to happen as possible, so "expected value calculations" is a correct way of making decisions (that can sometimes produce bad actions, but less often than others) and "wishful thinking" is an incorrect way of making decisions.
So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.
The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person's decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.
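To make the lottery version of this concrete, here is a minimal sketch of the expected value comparison, with an entirely made-up ticket price, jackpot, and win probability (none of these numbers are claims about any real lottery):

```python
# Toy comparison of two decision procedures for "buy a lottery ticket?".
# All figures are invented for illustration.

TICKET_PRICE = 2.00          # dollars
JACKPOT = 10_000_000.00      # dollars
P_WIN = 1 / 300_000_000      # chance this ticket is the winner

def ev_buy() -> float:
    """Expected change in wealth from buying one ticket."""
    return P_WIN * (JACKPOT - TICKET_PRICE) + (1 - P_WIN) * (-TICKET_PRICE)

def ev_skip() -> float:
    """Expected change in wealth from not buying."""
    return 0.0

# The expected-value procedure picks the higher-EV option before the draw...
print(f"EV(buy)  = {ev_buy():+.4f}")
print(f"EV(skip) = {ev_skip():+.4f}")
print("buy" if ev_buy() > ev_skip() else "don't buy")
# ...even though, after the draw, one particular bought ticket may turn out
# to have been the good action (a winner) and skipping it the bad action.
```

The point of the sketch is only that the decision procedure is evaluated before the outcome is known, while the action is evaluated after.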
--
I think it's very important for consequentialists to always remember that an action's moral worth is evaluated on its consequences, and not on the decision theory that produced it. This means that, despite your best efforts, there will be times when you make the best decision possible and still commit a bad act.
If you let it collapse - if you take the shortcut and say "making the best decision you could is all you can do", then every decision you make is good, except for inattentiveness or laziness, and you lose the chance to find out that expected value calculations or Bayes' theorem needs to go out the window.
There's no other source of morality and there's no other criterion to evaluate a behaviour's moral worth by. (Theorised sources such as "God" or "innate human goodness" or "empathy" are incorrect; criteria like "the golden rule" or "the Kantian imperative" or "utility maximisation" are only correct to the extent that they mirror the game theory evaluation.)
Of course we claim to have other sources and we act according to those sources; the claim is that those moral-according-to-X behaviours are immoral.
what is different about how we value morality based on its origin?
Evolution, either genetic or cultural, doesn't have infinite search capacity. We can evaluate which of our adaptations actually are promoting or enforcing symmetric cooperation in the IPD, and which are still climbing that hill, or are harmless extraneous adaptations generated by the search but not yet optimised away by selection pressures.
Sorry, I was trying to get at 'moral intuitions' by saying fairness, justice, etc. In this view, ethical theories are basically attempts to fit a line to the collection of moral intuitions - to try and come up with a parsimonious theory that would have produced these behaviours - and then the outputs are right or interesting only as far as they approximate game-theoretic-good actions or maxims.
Even given other technological civilisations existing, putting "matter and energy manipulation tops out a little above our current cutting edge" at 5% is way off.
Irrationality game: Humanity's concept of morality (fairness, justice, etc) is just a collection of adaptations or adaptive behaviours that have grown out of game theory; specifically, out of trying to get to symmetrical cooperation in the iterated Prisoner's Dilemma. 85% confident.
so they round me off to the nearest cliche
I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.
For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".
I suspect the real issue is using the "nutrients per calorie" meaning of nutrient dense, rather than interpreting it as "nutrients per some measure of food amount that makes intuitive sense to humans, like what serving size is supposed to be but isn't".
Ideally we would have some way of, for each person, saying "drink some milk" and seeing how much they drank, and "eat some spinach" and seeing how much they ate, then compare the total amount of nutrients in each amount on a person by person basis.
I know this is not the correct meaning of nutrient dense, but I think it's more useful.
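As a sketch of the difference between the two measures, here is a toy calculation with a single made-up "nutrient" score; the gram and calorie figures are invented and the "typical serving" sizes are pure guesses, not claims about real foods:

```python
# Toy comparison of "nutrients per calorie" versus "nutrients per amount a
# person would actually consume". All numbers are invented for illustration.

foods = {
    # food: (nutrient score per 100 g, kcal per 100 g, grams plausibly eaten
    #        in one sitting -- hypothetical figures only)
    "milk":    (120, 60, 250),
    "spinach": (470, 23, 30),
}

for name, (nutrient_per_100g, kcal_per_100g, typical_grams) in foods.items():
    per_calorie = nutrient_per_100g / kcal_per_100g
    per_serving = nutrient_per_100g * typical_grams / 100
    print(f"{name:8s} nutrient/kcal = {per_calorie:6.2f}   "
          f"nutrient/typical serving = {per_serving:7.1f}")
```

With these invented numbers the ranking under the two measures flips, which is the sense in which the "per intuitive serving" reading can be more useful.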
Counterpoint: Beeminder does not play nice with certain types of motivation structures. I advocated it in the past; I do not anymore. It's probably not true for you, the reader (you should still go and use it, the upside is way bigger than the downside), but be aware that it's possible it won't work for you.
As I mentioned on Slate Star Codex as well: if you let consequentialists predict the second-order consequences of their actions, they strike violence and deceit off the list of useful tactics, in much the same way that a consequentialist doctor doesn't slaughter the healthy traveler for the organ transplants to save five patients, because the doctor knows that destroying trust in the medical establishment is the worse consequence.
So, officially there is a battle between X and Y, and secretly there is a battle between X1 and X2 (and Y1 and Y2 on the other side). And people from X1 and X2 keep rationalizing about why their approach is the best strategy for the true victory of X against Y (and vice versa on the other side).
This part doesn't make clear enough the observation that X2 and Y2 are cooperating, across enemy lines, to weaken X1 and Y1 - the 2s being politeness and community, the 1s being psychopathy and violence.
(Rational) Harry
Seemed eminently more readable than rationalist!Harry to me when I first encountered this notation, although now it's sunk in enough that my brain actually generated "that's more keystrokes!" as a reason not to switch style.
I don't subvocalise, and when I learned that other people do I was very surprised. A data point for subvocalisation being a limit on reading speed: I read at ~800wpm.
It was a tongue-in-cheek suggestion to begin with (an amusing contrast to all the others saying 'turn money into time'), but modafinil has a unique claim to "buying time": it lets you function just as well and usually better than average, on less sleep. A more thorough analysis
My sympathies; I find it wonderful.
It's been a while since I watched it, but do you think Ben Affleck's character in Good Will Hunting was rational, but of limited intelligence?
Yep, a pretty good example, I think.
Look, you're my best friend so don't take this the wrong way, but if you're still living here in 20 years, still working construction, I'll fuckin' kill ya. Tomorrow, I'm gonna wake up and I'll be fifty, and I'll still be doing this shit. And that's alright, that's fine. But you're sitting on a winning lottery ticket and you're too scared to cash it in, and that's bullshit. Cause I'd do fucking anything to have what you got. Hanging around here is a waste of your time.
So far, so normal; you don't need to be a rationalist to say these sorts of things to get your friend to start using their talents.
Every day, I come by your house, and I pick you up. We go out, have a few drinks, a few laughs, it's great. You know what the best part of my day is? It's for about ten seconds, from when I pull up at the curb to when I get to your door. Cause I think maybe I'll get up there and I'll knock on the door and you won't be there. No goodbye, no see-ya-later, no nothing. You just left.
Now this is what it looks like when a rationalist actually believes in something. You actively enjoy imagining that your friend has left without a word - a horrible thing for a friend to do - because you know that your friend starting to use their potential is so important as to drown out even being totally abandoned by them.
Turn your money into time; that is, purchase modafinil.
With all capitalized words the list would start like this:
You know that feeling you get when you're coding, and you write something poorly and briefly expect it to Do What You Mean, before being abruptly corrected by the output? I think I just had that feeling at long distance.
From looking at the scripts, it appears first and last names (actually, all capitalised words, I think) were counted separately ("Neal: 11, Stephenson: 11" and "Munroe: 13, Randall: 11", etc.) and first names were hand-edited out (so that's why both Nassim and Taleb are on the list).
The answer is somewhere between "Nassim Taleb was quoted 16 times, and three of those times the attribution was just 'Taleb'" and "Nassim Taleb was quoted 13 times and was mentioned in three other quotes (since he's a controversial figure)".
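For concreteness, here is a minimal sketch of a count-all-capitalised-words approach like the one described; the attribution strings below are invented placeholders, and this is not the actual script:

```python
# Toy version of counting capitalised words in quote attributions, so that
# first names and surnames get separate tallies. Placeholder data only.
import re
from collections import Counter

attributions = [
    "Nassim Taleb",
    "Taleb",
    "Randall Munroe",
    "Neal Stephenson",
]

counts = Counter(
    word
    for line in attributions
    for word in re.findall(r"\b[A-Z][a-z]+\b", line)
)

print(counts)  # e.g. Counter({'Taleb': 2, 'Nassim': 1, 'Randall': 1, ...})
# First names would then be hand-edited out of the published list.
```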
It better be.
I think it's a bit unfair to the average physicist to say that he's closer in intelligence to the village idiot than to Einstein
The average physicist's contribution to physics is closer to the village idiot's contribution than to Einstein's, no?
Excellent in-group signalling but terrible public relations move.
Fair enough; drug use is a lot more damaging to public relations than self-proclaimed high IQ.
And the same goes for recreational drug-use, no? If it's just in the survey like IQ is and we don't have a banner proclaiming it, the argument that it might make us look bad doesn't hold any water.
If you replace "smart" with "used drugs recreationally" you might see my point?
The same problem you presumably have with someone external writing an article about how LW is a group of criminals: it makes us look bad.
You might not agree with self-proclaimed high IQ being a social negative, but most of the world does.
The offence centered on the ableism of the slurs in particular; "You're free to use an insult I can't stand on things I don't respect, but I won't stand for use of it on things I do respect" doesn't sound like a standard policy; otherwise you'd feel comfortable using profanity in front of your parents, but only when talking about a group they don't respect.
They're interested in not gathering data that would cause someone to admit criminal behavior.
As far as I'm aware - and correct me if I'm wrong - drug use is not a crime (and by extension admitting past drug use isn't either). Possession, operating a vehicle under the influence, etc, are all crimes, but actually having used drugs isn't a criminal act.
There's also the issue of possible outsiders being able to say: "30% of LW participants are criminals!"
The current survey (hell, the IQ section alone) gives them more ammunition than they could possibly expend, I feel.
really incredibly blunt
It's possible that it is too blunt. My instinct (calibrated on around half a hundred nights of conversation with Australian LessWrongers in person) says that it's not, though.
Good point. It might not even make sense to ask "Which culture of social interaction do you feel most at home with, Ask or Guess?".
- Are you Ask or Guess culture?
P(Supernatural): 7.7 + 22 (0E-9, .000055, 1) [n = 1484]
P(God): 9.1 + 22.9 (0E-11, .01, 3) [n = 1490]
P(Religion): 5.6 + 19.6 (0E-11, 0E-11, .5) [n = 1497]
I'm extremely surprised and confused. Is there an explanation for how these probabilities are so high?
I hope rationalist culture doesn't head in that direction.
Something like "I'm finding this conversation aversive, and I'm not sure why. Can you help me figure it out?" would be way more preferable. Something in rationalist culture that I actually do like is using "This is a really low-value conversation, are you getting any value? We should stop." to end unproductive arguments.
This is a horrible thing to do to a Guesser. (I agree denotatively, but...)
It took me almost six months from meeting a particular Guess person to realise this: the times I offended them clustered according to whether I was a soldier in their war, not by my actual actions.[0]
Lots of things - maybe most things - you can do in a conversation are horrible things to do to a Guesser. I'm well above average for social skills, plus a few points above LW average IQ, and even I find it hard to navigate conversations with a Guesser (I swear I have better social skills than that previous arrogant statement implies). The way I have found to not constantly insult and offend them is to take a lot of time to learn their particular 'dialect' of Guess.
I didn't grow up in a Guess culture, so at my first exposure to it I was already a mind that could think for itself - and my thought was "Guess culture is manipulative." It stacks up complicated laws, some of which are enforced ridiculously strictly[1] and others that are loosely enforced, if at all[2], so a skilled Guesser has both a minefield of rules, and an arsenal of selectively enforced rules, to use in conversation.
This is scary. If I walk into a conversation with a Guesser and I have something at stake, I am likely to lose that stake. Dealing with them feels like dealing with a negative utility monster; I must sacrifice too much to avoid offending.
(Please don't vote this post up because it bashes the hateful Guess enemy; evaluate it on its merits.)
0: I could use ableist slurs (insane; crazy) freely to deride people, institutions, papers etc that argued for no gendered pay gap, for biological difference between race, etc. But it was a serious transgression to use the same slurs to describe people, institutions, or papers that argued for parapsychology, telepathy, etc. Once I noticed this, I tested it experimentally - even when you know you're doing it for science, it hurts to offend a Guesser.
1: "Giving a negative response when someone asks for evaluations on their appearance / idea / whatever" is banned. (The only way you can provide that information is to guess at their personal evaluation, and then give the least warm approval you think has a plausible interpretation that agrees with their actual personal evaluation, which will be revealed only after you've made your social move. Yech.)
2: Gossip is frowned on. You can gossip all you like until you say something they don't like hearing, at which point you've offended them by gossiping.
I recognise your concern acutely - I've had the same "one of those people who has poor social skills and yet wants me to behave more like them" reaction - and I think stressing the "whenever you suspect you'd both benefit from them knowing" part of rule one much more would help a lot in that direction.
(It's cheap, not cheep)
Tell and Ask seem to be more compatible than Ask and Guess. I have no intuition for how compatible Tell and Guess are. I think Ask is cheaper for the teller than Guess is (in Guess, you have to formulate a plausible sentence that contains a subtle request, unless you want to force the receiver).
I really like the idea of Tell on a date; I think it's already somewhat present in the rationalist meetup I attend.
It's evidence that Guess is the Nash equilibrium that human cultures find. Consider that the Nash equilibrium in the Prisoner's Dilemma (and in the Iterated Prisoner's Dilemma with known fixed length) is both defect. It's a common theme in game theory that the Nash equilibrium is not always the best place to be.
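As a minimal sketch of that claim for the one-shot game, here is a brute-force Nash equilibrium check over a standard (though arbitrarily chosen) Prisoner's Dilemma payoff matrix:

```python
# Brute-force check that mutual defection is the only Nash equilibrium of a
# one-shot Prisoner's Dilemma. Payoff numbers are one conventional choice.
from itertools import product

C, D = "cooperate", "defect"
payoff = {  # payoff[(my_move, their_move)] = my payoff
    (C, C): 3, (C, D): 0,
    (D, C): 5, (D, D): 1,
}

def is_nash(a, b):
    """Neither player can gain by unilaterally changing their own move."""
    a_ok = all(payoff[(a, b)] >= payoff[(alt, b)] for alt in (C, D))
    b_ok = all(payoff[(b, a)] >= payoff[(alt, a)] for alt in (C, D))
    return a_ok and b_ok

print([(a, b) for a, b in product((C, D), repeat=2) if is_nash(a, b)])
# [('defect', 'defect')] -- an equilibrium that is worse for both players
# than mutual cooperation, i.e. not the best place to be.
```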
I am going to attempt to summarise this, hopefully fairly. A warning, for anyone to whom it applies: a cis white male is going to try and say what you said, but better.
I am doing this because I think social justice / equality is 1) important, 2) often written with an extreme inferential distance.
Parentheses with "ed:" are my own addition, usually a steelman of the author's position or an argument they didn't make but could have, although sometimes a critique. They aren't what the author said.
This is inspired by Yvain's writing, in particular a part where he said "I like eugenics". However, it also should explain why I can't join the LessWrong community. I am confident that this explanation generalises to most potential readers who are part of some marginalised group.
The issue is that I have a fight-or-flight response to this community. However, the only other times I have this response to a community are when the community contains someone who repeatedly disrespects my identity (for example, in a particular social group, one person consistently misgendered me; this made me unable to feel comfortable with that entire social group). That I have this response suggests that LessWrong is doing something wrong.
I do want to join the LessWrong community, I think it has value, but that desire seems to be in the category of "fanciful wishes".
-
Certain topics are very sensitive issues for me. LessWrong commenters are almost certainly going to want to discuss these topics. I cannot take part in that discussion unless everyone agrees with my position - or at least, doesn't explicitly disagree. It sounds irrational, but this is because if you disagree on those topics, you're likely to be bad for my health, sanity, and/or safety (ed: many of those who disagree with me would do me harm, even unintentionally. The prior on LW commenters being that kind of person is thus high). Given that disagreeing will cause me to fear potential harm, I think this no-disagreeing rule is necessary.
A concrete example: the argument that in third world countries "people should have heterosexual marriages early, the man provides, the woman does childcare, the family prioritises having many children" is made, on the grounds that this will provide the biggest improvement in their quality of life. I cannot accept any social system that doesn't respect the individual's gender and sexual identity, even if it truly is the best way to improve their quality of life.
A person endorsing these kinds of arguments is evidence that they will disrespect my personhood, either passively or actively. That disrespect is extremely dangerous to me (ed: I would have liked an example for this). So the danger prevents me from intellectually engaging with the argument sans emotion. I could engage emotionally but that would be unproductive and upsetting.
-
I am reasonably confident that this "unable to have a discussion with a disagreeing person" attitude is caused in part by the kind of specific risks experienced by marginalised persons, so I expect this attitude is widespread amongst the marginalised population. Thus, in a community that expects you to evaluate any argument (ed: ignoring social taboos and examining the merits of unpopular positions, let's call it "rational discussion"), those of us who are unable to discuss these topics opt out or avoid the discussion. So the less marginalised you are, the more you are able to talk. The more marginalised you are, the less you are able to talk.
Communities that avoid this rational discussion, in favour of conversations that the more marginalised are able to talk in, are often denigrated by communities that seek it out, and vice versa. Likely this is mostly due to social signalling reasons; each is the other's outgroup, so mock them to fit in! This is another deterrent to crossover.
The main deterrent is simply that the loudest members of the community are unintentionally signalling that marginalised persons will not have many comfortable conversations here. When you push rational discourse to the extreme (ed: as LessWrong arguably is proud of doing), you will signal to all the marginalised groups that they will not have many comfortable conversations here. Then you may wonder why LessWrong is so very homogenous. (ed: Yvain's survey said 90% white male, I think? Correct me if I'm wrong)
-
(Ed note: this section entertains the possibility that we might argue that marginalised people are somehow inherently incapable of rationality. I deem it unworthy of a summary.)
-
Addendum: even with rules against allowing discussion of these sensitive topics, passively allowing slurs from those topics (ed: such as using "insane", "crippled" to describe ideas that are wrong or limited. There is a fuller list in the original if you ctrl-F "ableist"), or even reacting negatively to accusations of sexism, will signal this same "you won't be comfortable with our conversations" to marginalised persons.
Arguing that this is an overreaction to use of these terms is itself a signal of "uncomfortable conversations ahead". When you use these terms in your rational discourse, it sends that scary signal regardless of what you mean by it. So if you want diversity, why are you using it? (ed: the author has a point; saying you mean "insane" as in "bad idea", not "insane" as in "mentally ill", isn't a good argument.)
In conclusion, Yvain, even though you are a fantastic and thought-provoking writer, if you say "I like eugenics" you cause me to fear for my safety and I can't read your blog. (ed: "In conclusion, LessWrong, even though you are a fantastic community with a lot of value, your commenters often say things that frighten marginalised people, so they can't join your community.")
Not necessarily, and in the case of "avowed racists of Less Wrong" almost certainly not. The "biological realism" concept is that there are genetic and physiological differences split so sharply along racial lines ("carves reality at its joints") that it is correct to say that all races are not born equal. Proponents of this concept would claim it is obviously true, and they would also be called racists. These people could donate heavily to African charities out of sympathy for what is, in their eyes, the "bad luck" to be born a certain race, and it would be consistent.
(I believe that biological realism is the main form of racism amongst LW posters, but I have nothing to back this assertion up except that I recall seeing it discussed)
I made a $150 donation. I particularly like that effort has gone into making the workshops more accessible. I'm suggesting to my father that he should apply for the February workshop (I am very surprised to have ended up believing it will be worthwhile for him).
It's unfortunate that "calories in, calories out" and "saturated fats are bad" are both general medical consensuses (wow, that word is actually in dictionaries) - it seems very likely the first is true and the second false, but both issues have the same "medical consensus saying they're true vs fringe expert saying they're all wrong" dynamic.
Was not going to reply until I saw this is actually a month old and not more than three years, so you're in luck.
The Confessor claims to have been a violent criminal, and in Interlude with the Confessor we see the Confessor say this to Akon:
"And faster than you imagine possible, people would adjust to that state of affairs. It would no longer sound quite so shocking as it did at first. Babyeater children are dying horrible, agonizing deaths in their parents' stomachs? Deplorable, of course, but things have always been that way. It would no longer be news. It would all be part of the plan."
Contrast with the Joker in Dark Knight:
You know what I noticed? Nobody panics when things go according to plan. Even when the plan is horrifying. If tomorrow I told the press that, like, a gang-banger would get shot, or a truckload of soldiers will be blown up, nobody panics. Because it's all part of the plan. But when I say that one little old mayor will die, well then everybody loses their minds!
One hundred chapters of HPMoR have taught us that Eliezer is totally okay with throwing these references in. I think it's pretty clearly a deliberate reference (also hilarious, because all of the Joker's plots in Dark Knight were game-theory-esque, tying in with this gigantic Prisoner's Dilemma story).
It seems plausible that Quirrel read the science books and isn't going to tell Harry anything reality-breaking, since he did a similar thing with the library - after telling Harry that Memory Charms are just filed under M, he says he's going to put some of his own special wards on the restricted section.
Using it regularly is the most important thing by far. I don't use it anymore - the costs of starting back up seem too high (in that I try and fail to re-activate that habit) - and I wish I hadn't let that happen. Don't be me; make Anki a hardcore habit.