An Exercise in Applied Rationality: A New Apartment 2018-07-08T21:18:04.518Z
Becoming a Better Community 2017-06-06T07:11:42.367Z
What do you actually do to replenish your willpower? 2016-11-06T00:05:35.048Z
General-Purpose Questions Thread 2016-06-19T07:29:18.949Z
Iterated Gambles and Expected Utility Theory 2016-05-25T21:29:27.645Z
Cross-Cultural maps and Asch's Conformity Experiment 2016-03-09T00:40:58.150Z
Recommended Reading for Evolution? 2015-07-15T18:04:11.852Z
A Challenge: Maps We Take For Granted 2015-05-29T03:50:18.655Z
What Would You Do If You Only Had Six Months To Live? 2015-05-20T00:52:05.706Z
Guidelines for Upvoting and Downvoting? 2015-05-06T11:51:29.837Z


Comment by Sable on Attacking enlightenment · 2018-09-28T16:18:04.292Z · LW · GW

While a focus on the exterior may very well contribute to the high rate of mental health problems in the community, I've always thought it had more to do with selection effects.

A large portion of the thought in the community revolves around how to think, which is something most people never study (and likely never feel the need to). But those who realize they are thinking badly - that is, those who notice that their patterns of thought don't correlate well with reality - have a reason to seek out a better way of thinking.

There's also some evidence to suggest that higher intelligence by itself correlates with mental illness:

Other than that, great post and thank you for assembling that list of resources.

Comment by Sable on Meetup Cookbook · 2018-07-15T08:13:44.207Z · LW · GW

This is excellent. Thank you.

Comment by Sable on What will we do with the free energy? · 2018-07-09T18:56:47.108Z · LW · GW

I'm not 100% sure what you're asking, but from Wikipedia:

 ...current best processes for water electrolysis have an effective electrical efficiency of 70-80%,[38][39][40] so that producing 1 kg of hydrogen (which has a specific energy of 143 MJ/kg or about 40 kWh/kg) requires 50–55 kWh of electricity. At an electricity cost of $0.06/kWh, as set out in the Department of Energy hydrogen production targets for 2015[41], the hydrogen cost is $3/kg.

Some quick googling indicates a kilogram of hydrogen sells for around $14, give or take.
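The arithmetic behind that quote is short enough to check; this sketch uses only the figures from the Wikipedia excerpt above:

```python
# Back-of-the-envelope check of the electrolysis numbers quoted above.
SPECIFIC_ENERGY_MJ_PER_KG = 143  # hydrogen's specific energy
MJ_PER_KWH = 3.6
ELECTRICITY_COST = 0.06          # $/kWh, the DOE 2015 target

for efficiency in (0.70, 0.80):
    kwh_per_kg = SPECIFIC_ENERGY_MJ_PER_KG / MJ_PER_KWH / efficiency
    cost = kwh_per_kg * ELECTRICITY_COST
    print(f"{efficiency:.0%}: {kwh_per_kg:.0f} kWh/kg -> ${cost:.2f}/kg")
```

This reproduces the quoted 50-55 kWh per kilogram and roughly $3/kg, so the $14 retail price implies a large markup over the bare electricity cost.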

There is inefficiency in hydrogen storage, but it should keep longer than a lithium ion battery.

Comment by Sable on An Exercise in Applied Rationality: A New Apartment · 2018-07-09T04:04:11.591Z · LW · GW

Thank you.

I happen to use my phone as my alarm - should I get a different, separate alarm so my phone can charge in another room?

Comment by Sable on Automatic for the people · 2018-07-08T21:05:17.934Z · LW · GW

What do you think the effect of population growth/shrinkage is on this problem? From what I can tell, the population is projected to go up to about 11 billion before it starts declining.

Also, at what point does this become a crisis? How many of the truck drivers will be retired by the time automated trucks become widespread enough to unemploy them? Most seem to be nearing retirement age currently.

Comment by Sable on An optimization process for democratic organizations · 2018-07-05T20:05:18.220Z · LW · GW

A very interesting idea. My thoughts:

1) You mention, as a failure of US democracy, that "At the national level, we also have the Senate which is not democratic in the first place, and the electoral college, which is winner-take-all in most states and warped in favor of low-population states. (22)"

I would argue that this is a feature, not a bug. The US was, to my knowledge, designed to be a union of individual governments bound together by a federal government. Because each state can have its own distinct laws, people can sort themselves across states to a place with laws they like. This fosters competition between states to have the best laws.

The Senate was never meant to democratically represent the people; it was meant to democratically represent the states. If I remember correctly, Senators were originally chosen by state legislatures, not elected by the people.

2) The rest of your points on the failure of US democracy are well-made.

3) How does this democracy serve those too poor to afford the app? Those without internet? Those with mental illnesses that get a vote but are unfit to understand what that vote means?

4) Who runs the app? The federal government? Is the work contracted out to a company? Either option is dangerous.

5) The system you describe is very dependent on user history which must be stored in databases somewhere. In the event of a terrorist attack on those databases (assume the data is lost), how does the democracy continue?

6) At what age does someone get to vote?

7) If a bill was written to redistribute Bill Gates' money to everyone else (via taxation or any other effective means), what would stop it from getting passed? I'd imagine it would be popular enough.

8) Could a company "campaign" to pass a bill limiting/regulating their competition? If the situation was sufficiently complicated, would anyone notice? In a broader sense, how would this democracy interact with capitalism/socialism?

Comment by Sable on What will we do with the free energy? · 2018-07-04T18:46:00.841Z · LW · GW

There are energy storage mechanisms that last longer than batteries. I don't know the exact economics or mechanics, but excess energy could be used to pump water to a higher elevation to store the potential energy.

When the energy is needed, at night or in the winter, the water would be allowed to flow back down through turbines to reclaim some of that energy.

Excess energy could also be used for hydrolysis; the hydrogen could be stored for later use.
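The pumped-storage idea is easy to put rough numbers on with E = mgh. The reservoir size, head, and efficiency below are illustrative assumptions, not data from any real plant:

```python
# Back-of-the-envelope for pumped-hydro storage: E = m * g * h.
# Reservoir volume, head, and the 75% round-trip efficiency are
# made-up illustration numbers (real plants vary widely).
G = 9.81                      # m/s^2
volume_m3 = 1_000_000         # a small reservoir
height_m = 100                # elevation difference ("head")
round_trip_efficiency = 0.75  # pump losses plus turbine losses

mass_kg = volume_m3 * 1000    # water: 1000 kg per cubic meter
stored_j = mass_kg * G * height_m
recovered_kwh = stored_j * round_trip_efficiency / 3.6e6
print(f"{recovered_kwh:,.0f} kWh recovered")  # roughly 200 MWh
```

Unlike a battery, the stored water doesn't self-discharge, which is why this is the usual answer for seasonal-scale storage.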

Comment by Sable on Which areas of rationality are underexplored? - Discussion Thread · 2016-12-03T04:34:44.259Z · LW · GW

Thanks for the info - I'll check out some of the chat channels. I had no idea they existed.

As for the idea, I hadn't thought it through quite that far, but I was picturing something along the lines of your second suggestion. Any publicized and easily accessible way of asking questions that doesn't force newer members to post their own topics would be helpful.

I remember back when I was just starting out on LessWrong, and being terrified to ask really stupid questions, especially when everyone else here was talking about graduate level computer science and medicine. Having someone to ask privately would've sped things up considerably.

Comment by Sable on Which areas of rationality are underexplored? - Discussion Thread · 2016-12-01T23:33:49.439Z · LW · GW

This is more of a practical suggestion than a theoretical one, but what if we had an instant message feature? Some kind of chat box like google hangouts, where we could talk in a more immediate sense to people rather than through comment and reply.

As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members.

A 'little' could ask their big questions as they make their way through the literature, and both Bigs and Littles would gain a chance to practice rationality skills pertaining to discussion (controlling one's emotions, being willing to change one's mind, etc.) in real time. I think this would help reinforce these habits.

The LessWrong study hall on Complice is nice, but it's a place to get work done, not to chat or debate or teach.

Comment by Sable on Stand-up comedy as a way to improve rationality skills · 2016-11-28T02:32:32.984Z · LW · GW

Additionally, humor - especially self-effacing humor - allows one to critique ideas or people held in high esteem without being offensive or inciting anger. It's hard to be mad when you're laughing.

Thought: Humor lowers one's natural barriers to accepting new ideas.

In the context of ideas as memes that undergo Darwinian processes of mutation and natural selection, perhaps humor can be thought of as an immunodeficiency virus? A way to lower an idea's natural defenses against competing ideas, which is why we see Christians willing to listen to Atheist comics, and vice versa. Humor lowers Christianity's natural defenses against Atheism (group consolidation, faith, etc.) and allows new ideas to attack the weakened "body."

Comment by Sable on Synthetic supermicrobe will be resistant to all known viruses · 2016-11-26T03:36:01.254Z · LW · GW

Insanely dangerous, yes, but then again so is all potentially world-changing technology (think AI and nanobots).

In other words I agree with you, but I think that the response to "new technology with potentially horrific consequences or otherwise high risk/reward ratio" should be, "estimate level of caution necessary to reduce risk to manageable levels, double the level of caution, and proceed very, very slowly."

Because it seems to me, bad at biology as I am, that the ability to synthesize arbitrary proteins - which this technology enables, or is at least a stepping stone toward - could be incredibly powerful and life-saving.

Comment by Sable on Mismatched Vocabularies · 2016-11-22T06:51:44.331Z · LW · GW


Comment by Sable on Mismatched Vocabularies · 2016-11-22T00:09:42.441Z · LW · GW

My understanding of #3 is that it comes from a place of insecurity. Someone secure in their own intelligence, or at least of their own self-worth, will either ignore the unknown word/phrase/idea, ask about it, or look it up.

So from the inside, #3 feels something like: "Look, I know you're smart, but you don't have to rub it in, okay? I mean, just 'cause I don't know what 'selective pressures in tribal mechanics' are doesn't make me stupid."

My guess is that it feels as though the other person is using a higher level vocabulary on purpose, rather than incidentally; kind of like the opposite of the fundamental attribution error. Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me").

Also, I think a lot of the reaction you're going to get out of someone when using a word or idea they don't know is going to depend upon your nonverbal signals. Are you saying it like you assume that they know it? I've had professors who talk about really complex subjects I didn't fully understand as though they were obvious, and that tended to make me feel dumb. I doubt they were doing it on purpose - to them it was obvious - but by paying a little bit more attention to the inferential distance between the two of us, they could have moderated their tones and body language a bit to convey something a little less disdainful, even if the disdain itself was accidental.

Lastly, when it comes to communication I tend to favor the direct approach. If at any point I think the other person doesn't understand what I'm saying, I try to back up and explain it better. Sometimes I just flat-out ask if they understood, and if not, try to explain it, all while emphasizing that it isn't a word/phrase/idea that I (or anyone) would expect them to know.

True or not, the above strategy has been effective for me in reducing confrontation when the scenario you're describing happens.

Comment by Sable on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-16T02:54:14.195Z · LW · GW

I was trying to be sincere with 4), although I admit that without tone of voice and body language, that's hard to communicate sometimes. And even if LW hasn't done as good a job as we could have with this topic, from what I've seen we've done far better than just about anyone not in the rationalist community at trying to remain rational.

Glad you agree with 1); when I first heard that argument (I didn't come up with it), I had a massive moment of "that seems really obvious, now that someone said it."

With regards to 2), you're right that we do have information on Trump; I spoke without precision. What I mean is this: beliefs are informed by evidence, and we have little evidence, given the nature of the American election, of what a candidate will behave like when they aren't campaigning. I believe there's a history of presidents-elect moderating their stances once they take office, although I have no direct evidence to support myself there.

When it comes to Islam, I should begin by saying that I'm sure the vast majority of Muslims simply want to live a decent life, just like the rest of us. However, theirs is the only religion active today that currently endorses holy war.

Then observe that MAD only applies to people unwilling to sacrifice their children for their cause, and further observe that Islam, as an idea, a meme, a religion, has successfully been able to make people do exactly that.

An American wouldn't launch a nuke if it would kill their children, and a Russian wouldn't either. But a jihadist? From what I understand (which is admittedly not much on this topic), a jihadist just might. At least, a jihadist has a much higher probability of choosing nuclear war than a nationalist does.

I agree that the West overreacts in terms of Terrorism, in the sense that any given person is more likely to die in a car accident than be killed by a terrorist, but I was referring to existential threats, a common topic on LW and one that Yudkowsky himself seems concerned with regarding this election. Car crashes don't threaten the existence of humanity; nuclear war does.

And because I can't see how either candidate would affect the likelihood of unfriendly AI, a meteor, a plague, or any of the other existential risks, nuclear war becomes the deciding vote in the "who's less likely to get us all killed" competition.

Admittedly, the risk of catastrophic climate change might be higher under Trump, but I've no evidence for that save the very standard left vs. right paradigm, which doesn't seem to apply all that well to Trump anyway.

Thank you for your response.

Comment by Sable on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-14T23:03:15.864Z · LW · GW


Unless I am much mistaken, the reason that no one has yet used Nuclear Weapons is Mutually Assured Destruction, the idea that there can be no victor in a nuclear war. MAD holds so long as the people in control of nuclear weapons have something to lose if everything gets destroyed, and Trump has grandchildren.

Grandchildren who would burn in nuclear fire if he ever started a nuclear war.

So I am in no way sympathetic to any argument that he's stupid enough to start one. He has far too much to lose.
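The deterrence logic here can be written as a one-line expected-utility comparison. The utilities are invented purely to illustrate the argument's shape:

```python
# Toy expected-utility version of the MAD argument above.
# Numbers are invented; only the sign of the comparison matters.
def best_action(value_of_what_you_lose):
    # Launching guarantees retaliation, which destroys what you value.
    u_launch = -value_of_what_you_lose
    u_refrain = 0
    return "launch" if u_launch > u_refrain else "refrain"

print(best_action(100))  # something to lose -> "refrain"
print(best_action(-10))  # values the destruction itself -> "launch"
```

The whole argument turns on the sign of that one term, which is why the question of whether the decision-maker has something to lose dominates everything else.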


I believe that the sets of skills necessary to be a good president, and to be elected president, are two entirely separate things. They may be correlated, but I doubt they're correlated that highly; a popularity contest selects for popularity, after all.

So far, we have information on Trump's skill set as a businessman: immoral and unethical perhaps, but ultimately very successful.

And we have information on Trump's skill set as a Presidential Candidate: bombastic, brash, witty, politically incorrect and able to motivate large numbers of people to vote for him.

We have no information on what Trump will be like as President; that's the gamble. We can guess, but trends don't always continue, and I suspect, based on more recent data, that Trump has an inkling that now is not the time to do anything drastic.


Aside from the usual LW topics concerning existential risk (i.e. AI, Climate Change, etc.), my biggest concern is Islam. Mutually Assured Destruction only works when those with the Nuclear Weapons have something to lose, and if someone with such weapons genuinely believes that they and their family will go to heaven for using them, then MAD no longer applies.

From what meager evidence I can gather, I believe that Trump lowers the chance of such a war breaking out compared to Clinton. We've had a chance to see what Clinton's foreign policy looks like, and so far as I can tell, it isn't lowering the risk of nuclear war. It's heightening it.

Assuming other existential risks would be equal under either administration (which is a very questionable assumption, granted, and I would be happy to discuss it), that makes Trump look at the very least no worse than Clinton when it comes to existential risk.

I'd also like to note that I've been told plenty of people thought that Ronald Reagan would start a nuclear war with Russia, and he did nothing of the sort. Granted, I wasn't around then, so it's secondhand information, but there you go.


I don't know about the rest of you, but I am sick of having to expend copious amounts of mental energy trying to remain as rational as I can throughout this election cycle. I've been glad to see in this thread that we LWers do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller.

If you disagree with anything I have to say, please respond - if my thinking is wrong, I want your help to make it better, to make it closer to correct.

Comment by Sable on What do you actually do to replenish your willpower? · 2016-11-07T22:25:39.156Z · LW · GW

Welcome to lesswrong, and thanks for the advice. I'll take a look at what you suggested.

Comment by Sable on What do you actually do to replenish your willpower? · 2016-11-07T01:03:54.414Z · LW · GW

Thanks, I'll take a look.

Comment by Sable on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-26T10:08:43.419Z · LW · GW

I was at the vet a while back; one of my dogs wasn't well (she's better now). The vet took her back, and after waiting for a few minutes, the vet came back with her.

Apparently there were two possible diagnoses: let's call them x and y, as the specifics aren't important for this anecdote.

The vet specifies that, based on the tests she's run, she cannot tell which diagnosis is accurate.

So I ask the vet: which diagnosis has the higher base rate among dogs of my dog's age and breed?

The vet gives me a funny look.

I rephrase: about how many dogs of my dog's breed and age get diagnosis x versus diagnosis y, without running the tests you did?

The vet gives me another funny look, and eventually replies: that doesn't matter.

My question for Lesswrong: Is there a better way to put this? Because I was kind of speechless after that.
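There may not be a better phrasing, but the underlying math is short: by Bayes' theorem, when a test fits both diagnoses equally well, the posterior is just the base rate. A sketch with invented numbers:

```python
# If the test can't distinguish x from y, the base rate IS the answer.
# All numbers below are invented for illustration.
prior_x, prior_y = 0.9, 0.1  # base rates for this breed and age
lik_x, lik_y = 0.5, 0.5      # an inconclusive test fits both equally

post_x = prior_x * lik_x / (prior_x * lik_x + prior_y * lik_y)
print(round(post_x, 3))  # 0.9 -- the posterior equals the base rate
```

So the vet's "that doesn't matter" has it exactly backwards: with an inconclusive test, the base rate is the only information left.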

Comment by Sable on Learning and Internalizing the Lessons from the Sequences · 2016-09-16T23:19:15.426Z · LW · GW

My experience was that the Sequences, like most pieces of writing densely packed with information, cannot be understood on a first read-through.

Instead, following how memory works by association, the first time you read through them a little will stick, and the next time more, and so on.

To be slightly more clear:

I suggest that the first time you read through them, focus on the bigger picture. Don't worry about any particular piece you don't understand, just keep going until you finish it. A decent metaphor for this might be how buildings are constructed: during your first reading, you are laying the foundations and creating the skeleton of steel girders.

Your next read-through will help to flesh out more of the meat, and so on.

I stress that it's important to keep going; Rationality is long, and a slog the first time through. If you have to skip ahead, skip.

Hope that helps.

Comment by Sable on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T15:54:59.302Z · LW · GW

I went to a party recently, and the host provided the food. At the end of the party, there was an awful lot left over, and my understanding is that most of it went to waste.

I had a thought when this was happening: if I was the host, why not keep track of how much food my guests actually ate, and try adjusting the amount of food at my next party to match?

The host was not a rationalist, as I suspect most hosts aren't, but upon researching the issue, it doesn't seem as if there's a widespread solution.

There are charities that focus on "recycling" food waste, and there are plenty of suggestions for how much food to bring to parties of various size, and yet I still have the experience of purchasing/preparing far too much food for parties, and almost every party I go to has far too much food available.

What exactly is going on, and how can it be made better? It seems to me as if this is a reasonably low-hanging fruit - getting people to properly estimate how much food people actually consume at parties in order to reduce food waste. It's the sort of calculation any restaurant with an all-you-can-eat buffet has clearly made in order to determine their price point.

Is this a publicity issue, that people don't realize they can optimize the amount of food they purchase and prepare? Or is it psychological, related to akrasia or a bias? I've been told that a host's greatest fear is that they run out of food, but why? Is the way to attack this problem through exposing that fear as unfounded?

This is one of the first external questions I've considered, since committing fully to instrumental rationality.

I'd like to hear everyone's thoughts on the matter.



Why do people waste food at parties? Is this a solvable problem?
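The "track and adjust" idea from the comment above could be as simple as keeping a running estimate across parties. The observations and the safety margin here are made-up illustration numbers:

```python
# A host's track-and-adjust loop: keep an exponentially weighted estimate
# of food eaten per guest, and buy for the next party with a margin against
# running out. All numbers are invented for illustration.
def update(estimate, observed, alpha=0.5):
    return (1 - alpha) * estimate + alpha * observed

per_guest = 1.5                # initial guess: plates of food per guest
for eaten in (0.8, 1.0, 0.9):  # measured at three successive parties
    per_guest = update(per_guest, eaten)

guests = 20
to_buy = guests * per_guest * 1.2  # 20% margin against running out
print(round(to_buy, 1))  # 23.7
```

Even with the margin, three parties of measurement cuts the purchase well below the initial guess of 30 plates, which is the low-hanging fruit the comment is pointing at.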

Comment by Sable on Open thread, June 27 - July 3, 2016 · 2016-06-28T11:37:25.580Z · LW · GW

That's probably it; I read it recently. Thanks!

Comment by Sable on Open thread, June 27 - July 3, 2016 · 2016-06-27T22:41:47.363Z · LW · GW

Addressing 1) "Learning when you're wrong" (in a more general sense):

Absolutely a good thing to do, but the problem is that you're still losing time making the mistakes. We're rationalists; we can do better.

I can't remember what book I read it in, but I read about a practice used in projects called a "pre-mortem." In contrast to a post-mortem, in which the cause of death is found after the death, a pre-mortem assumes that the project/effort/whatever has already failed, and forces the people involved to think about why.

Taking it as a given that the project has failed forces people to be realistic about the possible causes of failures. I think.

In any case, this struck me as a really good idea.

Overwatch example: If you know the enemy team is running a McCree, stay away from him to begin with. That flashbang is dangerous.

Real life example: Assume that you haven't met your goal of writing x pages or amassing y wealth or reaching z people with your message. Why didn't you?

Comment by Sable on Open thread, June 20 - June 26, 2016 · 2016-06-23T12:31:36.286Z · LW · GW

I've skimmed them, but I don't remember seeing these kinds of statistics. I'll take another look though. Thanks.

Comment by Sable on Open thread, June 20 - June 26, 2016 · 2016-06-23T00:35:23.697Z · LW · GW

Out of curiosity: because rationalists are supposed to win, are we (on average) below our respective national averages for things which are obviously bad (the low hanging fruits)?

In other words, are there statistics somewhere on rationalist or LessWrong fitness/weight, smoking/drinking, credit card debt, etc.?

I'd be curious to know how well the higher-level training affects these common failure modes.

Comment by Sable on General-Purpose Questions Thread · 2016-06-20T00:09:03.578Z · LW · GW

I don't know if this is what you meant, but here goes:

This is less a single piece of advice from someone than an attitude I've tried to adopt from places like LessWrong and CollegeInfoGeek.

  • Everything in your life is optimizable.
  • Doing better is less a matter of changing yourself than it is of implementing systems to help yourself overcome your failings.
Comment by Sable on General-Purpose Questions Thread · 2016-06-19T07:30:50.073Z · LW · GW

I'll go first. I'm in the process of applying for jobs in software. Furthermore, it'll be my first job out of college.

Any advice? What will I, five/ten years from now, wish that I had known now?

Should I take a job in a topic that I don't see myself in long-term?

Comment by Sable on How my something to protect just coalesced into being · 2016-05-29T02:16:38.050Z · LW · GW

Thank you for sharing; I agree with your conclusions about education in general.

With regards to having something to protect, I still haven't figured out what mine is, so I can't answer your final question.

I can, however, observe that many important discoveries and business ventures seem to result from two factors:

1) Having a prepared mind (be looking for opportunity, have the wealth/intelligence/influence to leverage the new information).

2) Complete chance.

Observe that Fleming's discovery of Penicillin started with him discovering some mold; Percy Spencer discovered microwave cooking when he was working with microwave emitters and noticed a candy bar melting; Viagra was originally investigated for high blood pressure, until doctors started getting awkward reports from their patients...

The list goes on.

My point is that it seems like an established pattern that "smart people in the right places at the right times noticing things" is a way people find out what they want to do, and it sounds like you experienced a similar situation.

I think this quote applies beyond just science:

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny …” — Isaac Asimov

Comment by Sable on Cross-Cultural maps and Asch's Conformity Experiment · 2016-03-10T14:05:12.213Z · LW · GW

 One question that you may ask is whether the bias (the difference between the territory and the map) is a function of the territory: do people in collectivist cultures mis-estimate the prevalent conformity in a different way from people in individualist cultures?

Thank you for putting that so clearly.

Comment by Sable on Cross-Cultural maps and Asch's Conformity Experiment · 2016-03-10T14:02:22.564Z · LW · GW

There are studies on hindsight bias, which is what I think you're talking about.

In 1983, researcher Daphna Baratz asked undergraduates to read 16 pairs of statements describing psychological findings and their opposites; they were told to evaluate how likely they would have been to predict each finding. So, for example, they read: “People who go to church regularly tend to have more children than people who go to church infrequently.” They also read, “People who go to church infrequently tend to have more children than people who go to church regularly.” Whether rating the truth or its opposite, most students said the supposed finding was what they would have predicted.

From her dissertation.

(I couldn't find a pdf of the dissertation, but that's its page on worldcat).

As for your specific question:

 Have there really been no studies of when people say they think studies are surprising, comparing the results to what people actually predicted beforehand?

I have no idea, but I want them.

Comment by Sable on Cross-Cultural maps and Asch's Conformity Experiment · 2016-03-10T13:58:04.161Z · LW · GW

Is there any evidence to support this in general?

Also, a dissenter in one area (religion, for example) might be a conformer in another. I think it's worth looking at whether someone who actively protests racial discrimination (in a non-conforming way, so maybe someone from the early civil rights movement) would dissent in Asch's experiment. Does willingness to dissent in one area of your life transfer over to a larger willingness to dissent in other areas of your life?

Comment by Sable on Cross-Cultural maps and Asch's Conformity Experiment · 2016-03-10T13:12:02.370Z · LW · GW

Barring a fault in our visual cortex or optical systems - an optical illusion, in other words - how is determining that Black is Black or that two lines are the same length any different from mathematical statements? There's a bit in the Sequences on why 2+2=4 isn't exactly an unconditional truth. The thought processes that go into both include checking your perceptions, checking your memory, and checking reality.

Maybe 2+2=4 is too simple an example, though; it would be downright Orwellian to stand in a room and listen to a group of people declare that 2+2=5. On the other hand, imagine standing in a room with a bunch of people claiming that there isn't an infinite number of primes - it might be easier to doubt your own perceptions.

Anyone else want to weigh in on this? Does Asch's methodology affect conformity?

Comment by Sable on Purposeful Anti-Rush · 2016-03-09T00:46:58.500Z · LW · GW

In the American Military, they have a saying when dealing with firearms:

Slow is smooth, and smooth is fast.

Comment by Sable on Recommended Reading for Evolution? · 2015-07-15T21:00:34.676Z · LW · GW

I'm looking it up on Amazon now. Thanks.

Comment by Sable on Recommended Reading for Evolution? · 2015-07-15T20:25:04.474Z · LW · GW

I'll try to summarize:

1) I want to know enough about the low-level mechanics of gene transfer to be able to model it accurately enough (not necessarily for a scientific paper) with mathematics. This has to have been done before - links to how would be appreciated, or I could start from scratch.

2) I want to know enough about how it works on the macro level to simulate that too, perhaps with the lower level mechanics working behind the scenes.

3) I am very interested in how evolution started - Dawkins references a soup of chemicals, and then the creation of the first replicator mainly by chance over a very long period of time. Is that accurate?

How did evolution work in the beginning? Dawkins mentioned that there were other explanations than the one he gave - what are they? How do I find them?

My training is in engineering/programming, and my genetics knowledge doesn't much exceed anything taught at the high school level. I am, however, prepared to read college-level textbooks on the subject.
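As a pointer for (1) and (2): the usual mathematical starting point for allele frequencies is a Wright-Fisher-style model, where selection reweights the frequency and binomial resampling supplies drift. A minimal, purely illustrative sketch:

```python
import random

# Minimal Wright-Fisher-style simulation of one locus in a haploid
# population of n individuals. Allele A has fitness advantage s over
# allele a. A toy sketch of the kind of model meant in (1), not research code.
def simulate(n=1000, s=0.05, p0=0.1, generations=500, seed=1):
    random.seed(seed)
    p = p0  # frequency of the advantageous allele A
    for _ in range(generations):
        # Selection: A's relative fitness is (1 + s), so reweight its share.
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: draw the next generation of n individuals at frequency p.
        p = sum(random.random() < p for _ in range(n)) / n
        if p in (0.0, 1.0):
            break  # the allele has fixed or been lost
    return p

print(simulate())
```

With a positive s the advantageous allele usually sweeps to fixation, but for small starting frequencies drift can still lose it - which is exactly the interplay a population-genetics textbook formalizes.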


Comment by Sable on Which ideas from LW would you most like to see spread? · 2015-05-22T18:32:53.089Z · LW · GW

I think if you go to CFAR's webpage, and (I think) look at one of Michael Smith's interviews, he says that that's the one thing he wants people to take away from CFAR.

Comment by Sable on Which ideas from LW would you most like to see spread? · 2015-05-22T18:31:02.010Z · LW · GW

An idea that I think would be very helpful to people - and relatively simple to grasp - is the idea of tribalism, and how much it really motivates us, even to this day. Not just that politics is the mindkiller, but why. I think if more people were able to take a step back every once in a while and think, "Hey, I don't even care about or like this idea...why am I defending it? Because it's an idea that I think a group I consider myself a part of holds, and by attacking one idea of my tribe, it seems like you're attacking every idea of my tribe? Does this make sense?" then the world would be a much more friendly place, at least.

Comment by Sable on Leaving LessWrong for a more rational life · 2015-05-22T01:55:58.690Z · LW · GW

I'm relatively new here, so I have trouble seeing the same kinds of problems you do.

However, I can say that LessWrong does help me remember to apply the principles of rationality I've been trying to learn.

I'd also like to add that - much like writing a novel - the first draft rarely addresses all of the possible faults. LessWrong is one of the first (if not the first) community blogs devoted to "refining the art of human rationality." Of course we're going to get some things wrong.

What I really admire about this site, though, is that contrarian viewpoints end up being some of the most highly upvoted - people admire and discourse with dissenters here. So if you truly believe that LessWrong isn't the best use of your time, then I wish you the best with whatever efforts you pursue. But I think if you wrote a bit more on this subject and found a way to add it to the sequences, everyone would only thank you.

Comment by Sable on Leaving LessWrong for a more rational life · 2015-05-22T01:46:12.458Z · LW · GW

Isn't using a laptop as a metaphor exactly an example of

 Most often reasoning by analogy?

I think one of the points being made was that because we have this uncertainty about how a superintelligence would work, we can't accurately predict anything without more data.

So maybe the next step in AI should be to create an "Aquarium," a self-contained network with no actuators and no way to access the internet, but enough processing power to support a superintelligence. We then observe what that superintelligence does in the aquarium before deciding how to resolve further uncertainties.

Comment by Sable on Brainstorming new senses · 2015-05-21T02:10:10.936Z · LW · GW

Being able to "feel" electric/magnetic fields with your hands would be great. Not dissimilar to wifi sensing, but enough to be able to intuit what a circuit is doing just by observing/feeling it.

I also don't think that anyone's mentioned having a true internal clock. Some people can already wake up at a specific time of day just by wanting to; making that reliable would be useful, as would the ability to time things precisely.

Lastly, while being able to detect neurotransmitter levels in your own brain would be great, being able to detect them in the brains of others would be even better. Kind of a toned-down empathic ability - you could tell who was stressed, who was happy, and so on by the amount of cortisol or dopamine in their brain.

Comment by Sable on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T22:16:55.950Z · LW · GW

Out of curiosity, can you name any such activities? The first thing I thought of was donating your organs (whichever ones were healthy enough to donate). Especially if you could arrange to have them all taken at once when you die, and then put the money into a college fund for your kids or whatever.

To be honest, if I'd known one of my parents' kidneys had gone into paying for my chemistry class, I probably would have attended more.

Comment by Sable on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T22:07:52.760Z · LW · GW

Apparently, retiring professors traditionally give a lecture entitled, "The Last Lecture," during which they talk about what wisdom they want to leave behind. This particular book is the lecture Randy Pausch gave after being diagnosed with terminal cancer.

Comment by Sable on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T22:01:04.648Z · LW · GW

All utilitarian calculations, to my knowledge, have to start with an examination of one's goals. If your primary goal is to enjoy life (nothing wrong with that), then that approach is fine. If your goal is to help the world, then I'm arguing there are things you can do in your six months that others can't or won't because the behaviors are too dangerous.

Comment by Sable on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T21:58:30.604Z · LW · GW

I think you've mostly got it right, although I didn't do the best job of communicating it. At first I was just pondering the idea that utilitarianism seems to support martyrdom if you're going to die anyway, but I realized that the theory actually applies to any finite lifespan. If we lived forever, the marginal utility of our lives would theoretically increase ad infinitum - we could bring more and more learning and experience to each problem we face.

However, until that happy day when nobody has to die anymore, thinking of death as an act (something you can use) instead of an act of god (something you take out insurance for) might help altruistic causes. This also might be a good attitude for soldiers to take (or one they already take, but not being a soldier, I wouldn't know).

It also, I think, helps me as a person to think of dying as something I do - it makes me feel more in control of my life, rather than just living with a scythe over my head that may come falling down at any moment.

Comment by Sable on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T02:59:05.293Z · LW · GW

I think one of the points I didn't quite manage to make is that, in this situation, there isn't really a cure (and you can't find one in six months). I'm reminded of Bean from Ender's Shadow, who finds out that he's only going to live to about 20. The medical team researching him wishes that he could help, genius that he is, but Bean was taught warfare, not biology.

You raise a good point, though, that if you are really good at something, you should probably keep doing it - much like Bean helps Peter Wiggin wage his war.

Comment by Sable on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T02:55:40.829Z · LW · GW

Yeah, paperwork isn't what life is about.

Caring for and supporting the people who you love and who love you, on the other hand, is a large part of what life is about. At least for most people.

Comment by Sable on The File Drawer Effect and Conformity Bias (Election Edition) · 2015-05-11T22:35:01.010Z · LW · GW

I wonder if there's some element of cause and effect at work here.

Let's say that I'm a British citizen who supported the Labour party before the election. As I watch BBC, I see that all the polls show that the Labour party will do well.

Does this affect my choice of whether or not to vote?

Personally, I live in a (very) Democratic state in the US, to the point where I don't even bother voting for state officials. The "one person can make a difference" argument doesn't seem to hold up for me in the voting booth.

In short: how much does what the polls say affect the actual voting? Is there some way to measure this?
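As a back-of-the-envelope sketch of one way to measure this: correlate final poll leads with turnout across constituencies. The numbers below are entirely made up for illustration; a real analysis would need actual polling and turnout data, and correlation alone wouldn't settle the cause-and-effect question raised above.

```python
# Toy sketch: does a bigger published poll lead go with lower turnout?
# All data is hypothetical; this only shows the shape of the measurement.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical constituencies: (final poll lead in points, turnout %)
data = [(1, 71), (3, 68), (8, 63), (12, 60), (15, 58)]
leads, turnout = zip(*data)

print(round(pearson(leads, turnout), 3))  # strongly negative in this toy data
```

A strongly negative coefficient in real data would at least be consistent with "safe-seat polls depress turnout," though untangling it from confounders (safe seats differ in many ways) would take more than a correlation.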

Comment by Sable on Guidelines for Upvoting and Downvoting? · 2015-05-06T12:21:26.024Z · LW · GW

I've mostly tried to avoid upvoting so far, and I've completely avoided downvoting.

My model for upvoting right now is:

  • If I've commented on a post, I should upvote it, because if it was good enough to comment on, then it was good enough to upvote.

  • If a post or comment is particularly well thought-out, well-reasoned, or otherwise showing an understandable mastery of the issue at hand, it's worth considering upvoting it.

  • Don't upvote unless I'm absolutely confident, because I don't want to go skewing the statistics here, and I'm also pretty new at this.

My model for downvoting has been:

  • Don't do it until you know why other people do it (hence this post).

I've also been trying to understand why posts get comments and up/downvotes, but the two don't seem to correlate well. So are there different rules for upvoting comments versus posts?

Comment by Sable on When does technological enhancement feel natural and acceptable? · 2015-05-03T20:09:45.824Z · LW · GW

I'm no expert in the field, but I'd like to bring up neuroplasticity. Our brains are constantly rewiring themselves as they process input, and they gradually adjust to change. My point is that I believe any enhancement could come to feel natural (although some would certainly have a higher learning curve).

Other thoughts:

  • Ever read Uglies, Pretties, and Specials by Scott Westerfeld? It's set in a utopia/dystopia where massive plastic surgery is the norm - at 16, everyone chooses what they will look like (going from "Ugly" to "Pretty"), similar changes occur at middle age, and so on. One of the points made is that there will always be something to envy - if it stops being looks it'll become something else.

  • I'd take some kind of physical enhancement that removes most bodily needs - sleeping, bathroom, eating, etc. - although this is a symptom of the more general "anything that gives me more free time is good" heuristic.

  • I can imagine some kind of gene sequencing becoming a regular medical practice - stripping people of bad genes, or enhancing good ones.

Comment by Sable on A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats · 2015-04-26T22:50:56.865Z · LW · GW

Why can't it be both? I think that you're right, the technique you describe is good for exploring your own maps, but I also think it seems to work for figuring out where the territory continues but your maps end.

I also think I didn't do a sufficient job of explaining that my "exploring the depths of knowledge" take pertains more to your "The sky is blue" example than your "This book is awful" example (i.e., one that can be answered with fact rather than opinion).

Comment by Sable on A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats · 2015-04-26T21:53:13.196Z · LW · GW

This almost seems to be the grown-up form of the childhood game of Why?

"Mom, why is ice cold?"

"Because its temperature is lower than your skin temperature, love."

"But why?"

"Because its molecules are moving slower than yours."

"Why?"

And so on.

I like the idea, but with our current imperfect understanding of the universe, such questions addressing facts must inevitably end with some variation of "Because there are turtles all the way down, love."

I'm not saying that this should be discouraging - rather, that it is good to know where your knowledge ends. Furthermore, each generation (and I use the word "generation" loosely) has succeeded in pushing the turtles down one level more. Maybe one day the game of Why? will actually come to an end...