Comment by yvain on Tales From the American Medical System · 2019-05-10T16:46:07.157Z · score: 46 (18 votes) · LW · GW

I don’t think it’s necessarily greed.

Your doctor may be on a system where they are responsible for doing work for you (e.g. refilling your prescriptions, doing whatever insurance paperwork it takes to make your prescriptions go through, keeping track of when you need to get certain tests, etc) without receiving any compensation except when you come in for office visits. One patient like this isn’t so bad. Half your caseload like this means potentially hours of unpaid labor every day. Even if an individual doctor is willing to do this, high-level decision-makers like clinics and hospitals will realize this is a bad deal, make policies to avoid it, and pressure individual doctors to conform to the policies.

Also, your doctor remains legally liable for anything bad that happens to you while you’re technically under their care, even if you never see them. If you’re very confused and injecting your insulin into your toenails every day, and then you get hyperglycemic, and your doctor never catches this because you never come into the office, you could sue them. So first of all, that means they’re carrying a legal risk for a patient they’re not getting any money from. And second of all, at the trial, your lawyer will ask “How often did you see so-and-so?” and the doctor will say “I haven’t seen them in years, I just kept refilling their prescription without asking any questions because they sent me an email saying I should”. And then they will lose, because being seen every three months is standard of care. Again, even if an individual doctor is overly altruistic and willing to accept this risk, savvier high-level entities like clinics and hospitals will institute and enforce policies against it. The clinic I work at automatically closes your chart and sends you a letter saying you are no longer our patient if you haven’t seen us in X months (I can’t remember what X is off the top of my head). This sounds harsh, but if we didn't do it, then if you ever got sick after having seen us even once, it would legally be our fault. Every lawyer in the world agrees you should do this, it's not some particular doctor being a jerk.

Also, a lot of people really do need scheduled appointments. You would be shocked how many people get much worse, end up on death’s door, and I only see them when their scheduled three-monthly appointment rolls around, and I ask them “Why didn’t you come in earlier?!” and they just say something like they didn’t want to bother me, or didn’t realize it was so bad, or some other excuse I can’t possibly fathom (to be fair, many of these people are depressed or psychotic). This real medical necessity meshes with (or, more cynically, provides a fig leaf for, though not a fake one) the financial and legal necessity.

I’m not trying to justify what your doctor did to you. If it were me, I would have refilled your insulin, then sent you a message saying in the future I needed to see you every three months. But I’ve seen patients try to get out of this. They’ll wait until the last possible moment, then send an email saying “I am out of my life-saving medication, you must refill now!” If I send a message saying we should have an appointment on the books before I fill it, they’ll pretend they didn’t see that and just resend “I need my life-saving medication now!” If my receptionist tries to call, they’ll hang up. At some point I start feeling like I’m being held hostage. I really only have one patient who is definitely doing this, but it’s enough that I can understand why some doctors don’t want to have to have this fight and institute a stricter “no refill until appointment is on the books” policy.

I do think there are problems with the system, but they’re more like:

- A legal system that keeps all doctors perpetually afraid of malpractice if they’re not doing this (but what is the alternative?)

- An insurance system that doesn’t let doctors get money except through appointments. If the doctor just charged you a flat fee per year for being their patient, that would remove the financial aspect of the problem. Some concierge doctors do this, but insurers don’t work that way (but insurers are pretty savvy, so are they afraid doctors would cheat?)

- The whole idea that you can’t access life-saving medication until an official gives you permission (but what would be the effects of making potentially dangerous medications freely available?)

Comment by yvain on 1960: The Year The Singularity Was Cancelled · 2019-04-23T09:51:02.975Z · score: 12 (7 votes) · LW · GW

I showed it that way because it made more sense to me. But if you want, see https://docs.google.com/spreadsheets/d/1xEkh4jhUup0qlG6EzBct6igvLPeRH4avpM5nZQ-dgek/edit#gid=478995971 for a graph by Paul where the horizontal axis is log(GDP); it is year-agnostic and shows the same pattern.

1960: The Year The Singularity Was Cancelled

2019-04-23T01:30:01.224Z · score: 59 (18 votes)
Comment by yvain on On the Regulation of Perception · 2019-03-10T19:53:48.282Z · score: 17 (5 votes) · LW · GW

You may be interested in "Behavior: The Control Of Perception" by William T. Powers, which has been discussed here a few times.

Comment by yvain on Rule Thinkers In, Not Out · 2019-03-02T05:36:10.298Z · score: 39 (14 votes) · LW · GW

Thanks for this response.

I mostly agree with everything you've said.

While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people's filters are less functional (and charisma is more powerful) in the real-life medium.

On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also part of how I first came across "The Nurture Assumption", which was very important for my intellectual growth and which I think has held up well. I've seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn't a necessary source of this, but I think earlier for me he was).

There have been some books, including "Inadequate Equilibria" and "Zero To One", that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition - it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can't identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there's no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.

I still think on net it's worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.

Rule Thinkers In, Not Out

2019-02-27T02:40:05.133Z · score: 99 (35 votes)

Book Review: The Structure Of Scientific Revolutions

2019-01-09T07:10:02.152Z · score: 75 (17 votes)

Bay Area SSC Meetup (special guest Steve Hsu)

2019-01-03T03:02:05.532Z · score: 30 (4 votes)
Comment by yvain on January 2019 Nashville SSC Meetup · 2018-12-25T10:23:09.263Z · score: 3 (1 votes) · LW · GW

I notice this isn't showing up on the sidebar of SSC; if you want it to, consider tagging this as SSC here.

Is Science Slowing Down?

2018-11-27T03:30:01.516Z · score: 108 (42 votes)
Comment by yvain on No Really, Why Aren't Rationalists Winning? · 2018-11-06T20:29:54.019Z · score: 81 (33 votes) · LW · GW

I support the opposite perspective - it was wrong to ever focus on individual winning and we should drop the slogan.

"Rationalists should win" was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more "rational".

But this got caught up in excitement around "instrumental rationality" - the idea that the "epistemic rationality" skills of figuring out what was true were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.

I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can't deny this makes sense. I can just point out that it doesn't resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke. I think it's possible (and important) to analyze this phenomenon and see what's going on. But the point is that this will involve analyzing a phenomenon - ie truth-seeking, ie epistemic rationality, ie the thing we're good at and which is our comparative advantage - and not winning immediately.

Remember the history of medicine, which started with wise women unreflectingly using traditional herbs to cure conditions. Some very smart people like Hippocrates came up with reasonable proposals for better ideas, and it turned out they were much worse than the wise women. After a lot of foundational work they eventually became better than the wise women, but it took two thousand years, and a lot of people died in the meantime. I'm not sure you can short-circuit the "spend two thousand years flailing around and being terrible" step. It doesn't seem like this community has.

And I'm worried about the effects of trying. People in the community are pushing a thousand different kinds of woo now, in exactly the way "Schools Proliferating Without Evidence" condemned. This is not the fault of their individual irrationality. My guess is that pushing woo is an almost inevitable consequence of taking self-help seriously. There are lots of things that sound like they should work, and that probably work for certain individual people, and it's almost impossible to get the funding or rigor or sample size that you would need to study it at any reasonable level. I know a bunch of people who say that learning about chakras has done really interesting and beneficial things for them. I don't want to say with certainty that they aren't right - some of the chakras have a suspicious correspondence to certain glands or bundles of nerves in the body, and for all I know maybe it's just a very strange way of understanding and visualizing those nerves' behavior. But there's a big difference between me saying "for all I know maybe..." and a community where people are going around saying "do chakras! they really work!" But if you want to be a self-help community, you don't have a lot of other options.

I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says "Hey, are we sure we shouldn't go back to being pure truth-seekers?", it's going to be a very different community that discusses the answer to that question.

We were doing very well before, and could continue to do very well, as a community about epistemic truth-seeking mixed with a little practical strategy. All the great ideas the community has contributed to, like effective altruism or friendly AI, are things people got by thinking, by trying to understand the world and avoid bias. I don't think the rationalist community's contribution to EA has been the production of unusually effective people to man its organizations (EA should focus on "winning" to be more effective, but no more so than any other movement or corporation, and they should try to go about it in the same way). I think rationality's contribution has been helping carve out the philosophy and convince people that it was true, after which those people manned its organizations at a usual level of effectiveness. Maybe rationality also helped develop a practical path forward for those organizations, which is fine and a more limited and relevant domain than "self-help".

Cognitive Enhancers: Mechanisms And Tradeoffs

2018-10-23T18:40:03.112Z · score: 43 (16 votes)
Comment by yvain on Anti-social Punishment · 2018-10-01T05:30:03.518Z · score: 14 (4 votes) · LW · GW

I'm a little confused. The explanation you give would explain why people might punish pro-social punishers, but it doesn't really give insight into why they would punish cooperators. Is the argument that cooperators are likely to also be pro-social punishers? Or am I misunderstanding the structure of the game?

The Tails Coming Apart As Metaphor For Life

2018-09-25T19:10:02.410Z · score: 92 (33 votes)
Comment by yvain on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T07:57:51.723Z · score: 22 (9 votes) · LW · GW

I agree Evan's intentions are good, and I'm glad that someone interesting who wants to criticize my writing is getting a chance to speak. I'm surprised this is downvoted as much as it has been, and I haven't downvoted it myself.

My main concern is with the hyperbolic way this was pitched and the name of the meetup, which I understand were intended kind of as jokes but which sound kind of creepy to me when I am the person being joked about. I don't think Evan needs to change these if he doesn't want to, but I do just want to register the concern.

Comment by yvain on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T01:49:25.788Z · score: 82 (28 votes) · LW · GW

I think it's good and important to criticize things, and I don't consider myself above criticism.

On the other hand, it's also kind of freaking me out to hear that a bunch of people in a city I've been in for like an hour tops are organizing an event using a derisive nickname for me and calling me a pseudointellectual, especially since I just sort of stumbled across it by coincidence.

I'm not sure how to balance these different considerations, and probably my feelings aren't as important as moving the engine of intellectual progress, but for the record I'm not really happy with the attempt made to balance them here.

I don't know if I am supposed to defend myself, but I will just say that I am particularly tired of criticism of the Dark Ages post. In my experience, this criticism has been a bunch of Redditors talking about how a freshman history student would have been ashamed to make so many howling mistakes, and then a bunch of trained historians telling me they thought it was basically fine (for example, here's a professional medieval historian saying he agrees with it entirely, here's a Renaissance historian who thinks it's fine, here's a historian of early modern science who says the same - also, I got an email from a Dominican friar who liked it, which is especially neat because it's like my post on the Middle Ages getting approval from the Middle Ages). I'm not saying this to make an argument from authority, I'm saying it because the people who disagree with me keep trying to make an argument from authority, and I don't want people to fall for it.

And, okay, one more thing. My Piketty review begins "I am not an economist. Many people who are economists have reviewed this book already. I review it only because if I had to slog through reading this thing I at least want to get a blog post out of it. If anything in my review contradicts that of real economists, trust them instead of me." If you're using errors in it to call me a pseudo-intellectual, I feel like you're just being a jerk at this point. Commenters did find several ways I was insufficiently critical of Piketty's claims, which I describe here; I also added a correction note to that effect to the original post. The post was nevertheless recommended by an economist who said it was "the best summary I've ever read from a non-economist". Again, I'm not saying this as an argument from authority, I'm saying it because I know from experience that the criticism is going to involve a claim that "it's so bad that no knowledgeable person would ever take it seriously", and now you'll know that's not true.

Melatonin: Much More Than You Wanted To Know

2018-07-11T17:40:06.069Z · score: 92 (33 votes)
Comment by yvain on Last Chance to Fund the Berkeley REACH · 2018-06-28T18:02:08.555Z · score: 33 (12 votes) · LW · GW

I've increased my monthly donation to $600. Thanks again to Sarah and everyone else who works on this.

Comment by yvain on SSC South Bay Meetup · 2018-05-10T05:16:48.345Z · score: 5 (1 votes) · LW · GW

The 5 AM time looks like a mistake. David told me it's at 2 PM.

Varieties Of Argumentative Experience

2018-05-08T08:20:02.913Z · score: 118 (41 votes)

Recommendations vs. Guidelines

2018-04-13T04:10:01.328Z · score: 129 (36 votes)

Adult Neurogenesis – A Pointed Review

2018-04-05T04:50:03.107Z · score: 102 (32 votes)

God Help Us, Let’s Try To Understand Friston On Free Energy

2018-03-05T06:00:01.132Z · score: 95 (31 votes)

Does Age Bring Wisdom?

2017-11-08T07:20:00.376Z · score: 59 (21 votes)

SSC Meetup: Bay Area 10/14

2017-10-13T03:30:00.269Z · score: 4 (0 votes)

SSC Survey Results On Trust

2017-10-06T05:40:00.269Z · score: 13 (5 votes)

Different Worlds

2017-10-03T04:10:00.321Z · score: 92 (47 votes)

Against Individual IQ Worries

2017-09-28T17:12:19.553Z · score: 60 (33 votes)

My IRB Nightmare

2017-09-28T16:47:54.661Z · score: 24 (15 votes)

If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics

2017-09-03T20:56:25.373Z · score: 19 (10 votes)

Beware Isolated Demands For Rigor

2017-09-02T19:50:00.365Z · score: 42 (28 votes)

The Case Of The Suffocating Woman

2017-09-02T19:42:31.833Z · score: 5 (3 votes)

Learning To Love Scientific Consensus

2017-09-02T08:44:12.184Z · score: 8 (5 votes)

I Can Tolerate Anything Except The Outgroup

2017-09-02T08:22:19.612Z · score: 9 (8 votes)

The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible

2017-08-10T00:33:54.000Z · score: 2 (2 votes)

Where The Falling Einstein Meets The Rising Mouse

2017-08-03T00:54:28.000Z · score: 6 (3 votes)

Why Are Transgender People Immune To Optical Illusions?

2017-06-28T19:00:00.000Z · score: 10 (3 votes)

SSC Journal Club: AI Timelines

2017-06-08T19:00:00.000Z · score: 2 (1 votes)

The Atomic Bomb Considered As Hungarian High School Science Fair Project

2017-05-26T09:45:22.000Z · score: 15 (7 votes)

G.K. Chesterton On AI Risk

2017-04-01T19:00:43.865Z · score: 2 (2 votes)

Guided By The Beauty Of Our Weapons

2017-03-24T04:33:12.000Z · score: 11 (9 votes)

[REPOST] The Demiurge’s Older Brother

2017-03-22T02:03:51.000Z · score: 6 (5 votes)
Comment by yvain on I Want To Live In A Baugruppe · 2017-03-18T05:52:12.305Z · score: 4 (4 votes) · LW · GW

Interested in some vague possible future.

Antidepressant Pharmacogenomics: Much More Than You Wanted To Know

2017-03-06T05:38:42.000Z · score: 2 (2 votes)

A Modern Myth

2017-02-27T17:29:17.000Z · score: 6 (4 votes)

Highlights From The Comments On Cost Disease

2017-02-17T07:28:52.000Z · score: 1 (0 votes)

Considerations On Cost Disease

2017-02-10T04:33:36.000Z · score: 6 (3 votes)

Albion’s Seed, Genotyped

2017-02-09T02:15:03.000Z · score: 4 (1 votes)
Comment by yvain on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-24T04:17:20.336Z · score: 22 (15 votes) · LW · GW

I attended in the Bay last year and also had a bad time because of the screaming child. Thanks for being willing to complain about this in the face of social pressure not to.

Discussion of LW in Ezra Klein podcast [starts 47:40]

2016-12-07T23:22:10.079Z · score: 9 (10 votes)
Comment by yvain on Fact Posts: How and Why · 2016-12-05T06:12:12.432Z · score: 18 (17 votes) · LW · GW

Some additional thoughts:

  • Don't underestimate Wikipedia as a really good place to get a (usually) unbiased overview of things and links to more in-depth sources.

  • The warning against biased sources is well-taken, but if you're looking into something controversial, you might have to just read the biased sources on both sides, then try to reconcile them. I've found it helpful to find a seemingly compelling argument, type something like "why X is wrong" or "X debunked" into Google, and see what the other side has to say about it. Then repeat until you feel like both sides are talking past each other or disagreeing on minutiae. This is important to do even with published papers!

  • Success often feels like realizing that a topic you thought would have one clear answer actually has a million different answers depending on how you ask the question. You start with something like "did the economy do better or worse this year?", find that it's actually a thousand different questions like "did unemployment get better or worse this year?" vs. "did the stock market get better or worse this year?", and end up with things even more complicated like "did employment as measured in percentage of job-seekers finding a job within six months get better?" vs. "did employment as measured in total percent of workforce working get better?". Then finally once you've disentangled all that and realized that the people saying "employment is getting better" or "employment is getting worse" are using statistics about subtly different things and talking past each other, you use all of the specific things you've discovered to reconstruct a picture of whether, in the ways important to you, the economy really is getting better or worse.

Comment by yvain on Fact Posts: How and Why · 2016-12-05T06:04:21.853Z · score: 10 (9 votes) · LW · GW

I have about six of these floating around in my drafts. This makes me think that maybe I should post them; I didn't think they were that interesting to anyone but me.

Please!

Expert Prediction Of Experiments

2016-11-29T02:47:47.276Z · score: 10 (11 votes)

The Pyramid And The Garden

2016-11-05T06:03:06.000Z · score: 27 (14 votes)

The Moral Of The Story

2016-10-18T02:22:45.000Z · score: 6 (5 votes)

Superintelligence FAQ

2016-09-20T19:00:00.000Z · score: 18 (7 votes)

It’s Bayes All The Way Up

2016-09-12T13:35:46.000Z · score: 3 (2 votes)

How The West Was Won

2016-07-26T03:05:17.000Z · score: 14 (5 votes)

Things Probably Matter

2016-07-21T02:03:46.000Z · score: 5 (2 votes)

Ascended Economy?

2016-05-30T21:06:07.000Z · score: 4 (1 votes)

Book Review: Age of Em

2016-05-28T20:42:17.000Z · score: 9 (4 votes)

Teachers: Much More Than You Wanted To Know

2016-05-19T06:13:32.000Z · score: 2 (2 votes)
Comment by yvain on 2016 LessWrong Diaspora Survey Results · 2016-05-01T17:11:57.840Z · score: 23 (21 votes) · LW · GW

Nice work.

If possible, please do a formal writeup like this: http://lesswrong.com/lw/lhg/2014_survey_results/

If possible, please change the data on your PDF file to include an option to have it without nonresponders. For example, right now sex is 66% male, 12% female, unknown 22%, which makes it hard to intuitively tell what the actual sex ratio is. If you remove the unknowns you see that the knowns are 85% male 15% female, which is a much more useful result. This is especially true since up to 50% of people are unknowns on some questions.
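For concreteness, the renormalization I mean is just this (a minimal sketch using the example numbers above, nothing from the actual survey code):

```python
# Drop the unknowns and renormalize the remaining categories.
responses = {"male": 66, "female": 12, "unknown": 22}  # percent of all takers

known = {k: v for k, v in responses.items() if k != "unknown"}
total_known = sum(known.values())
renormalized = {k: round(100 * v / total_known) for k, v in known.items()}
print(renormalized)  # {'male': 85, 'female': 15}
```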

If possible, please include averages for numerical questions. For example, there's no data about age on the PDF file because it just says everybody was a "responder" but doesn't list numbers.

Book Review: Albion’s Seed

2016-04-27T08:40:44.000Z · score: 9 (5 votes)

The Ideology Is Not The Movement

2016-04-05T01:52:38.000Z · score: 12 (6 votes)
Comment by yvain on Lesswrong 2016 Survey · 2016-03-26T02:49:37.788Z · score: 1 (1 votes) · LW · GW

If you throw out the data, I request you keep the thrown-out data somewhere else so I can see how people responded to the issue.

Comment by yvain on Lesswrong 2016 Survey · 2016-03-26T02:42:51.779Z · score: 6 (6 votes) · LW · GW

"In general I planned to handle the "within 10 cm" thing during analysis. Try to fermi estimate the value and give your closest answer, then the probability you got it right. We can look at how close your confidence was to a sane range of values for the answer."

But unless I'm misunderstanding you, the size of the unspoken "sane range" is the entire determinant of how you should calibrate yourself.

Suppose you ask me when Genghis Khan was born, and all I know is "sometime between 1100 and 1200, with certainty". Suppose I choose 1150. If you require the exact year, then I'm only right if it was exactly 1150, and since it could be any of 100 years my probability is 1%. If you require within five years, then I'm right if it was any time between 1145 and 1155, so my probability is 10%. If you require within fifty years, then my probability is effectively 100%. All of those are potential "sane ranges", but depending on which one you choose, the correctly calibrated estimate could be anywhere from 1% to 100%.
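To make the dependence explicit, here is a minimal sketch of that arithmetic, assuming my belief is uniform over the 100-year window (the function and numbers are just my illustration, not anything from the survey):

```python
# Illustrative sketch: chance of being counted correct when my belief is
# uniform over a 100-year window and the grader accepts answers within t years.

window = 100.0  # years, 1100-1200

def p_correct(t):
    # t = 0 means the exact year is required (one year out of the window);
    # otherwise the accepted interval is about 2*t years wide, capped at 1.
    width = 1.0 if t == 0 else 2.0 * t
    return min(1.0, width / window)

for t in (0, 5, 50):
    print(t, p_correct(t))  # 0.01, 0.10, 1.00
```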

Unless I am very confused, you might want to change the questions and hand-throw-out all the answers you received before now, since I don't think they're meaningful (except if interpreted as probability of being exactly right).

(Actually, it might be interesting to see how many people figure this out, in a train wreck sort of way.)

PS: I admit this is totally 100% my fault for not getting around to looking at it the five times you asked me to before this.

Comment by yvain on Lesswrong 2016 Survey · 2016-03-26T02:04:36.220Z · score: 24 (24 votes) · LW · GW

Elo, thanks a lot for doing this.

(for the record, Elo tried really hard to get me involved and I procrastinated helping and forgot about it. I 100% endorse this.)

My only suggestion is to create a margin of error on the calibration questions, eg "How big is the soccer ball, to within 10 cm?". Otherwise people are guessing whether they got the exact centimeter right, which is pretty hard.

The Price Of Glee In China

2016-03-24T00:12:28.000Z · score: 8 (6 votes)
Comment by yvain on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-18T20:51:25.081Z · score: 1 (1 votes) · LW · GW

This idea of having more "bandwidth" is tempting, but not really scientifically supported as far as I can tell, unless he just means autists have more free time/energy than neurotypicals.

Comment by yvain on Genetic "Nature" is cultural too · 2016-03-18T20:46:44.350Z · score: 7 (7 votes) · LW · GW

If race were a factor in twin studies, I think it would show up only in shared environment, since it differs between families but never within families (and the within-pair difference is the same - zero - for MZ and DZ twins). That means it would not show up in "heritability", unless we're talking about interracial couples with two children, each of whom by coincidence inherited a very different mix of genes from the parents' two races - I think this is rare enough not to matter in real-life studies.

Your point stands about the general role of these kinds of things, I just don't think it's counted that way in the twin studies we actually have.

You're right about beauty etc, though. Genetic studies are most informative about interventions to change individuals' standings relative to other individuals, not about interventions to completely change the nature of the playing field.

Comment by yvain on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging · 2015-09-17T05:33:50.402Z · score: 14 (14 votes) · LW · GW

I don't know if this solves very much. As you say, if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings. But if we start picking numbers lower than one, then we're just defining away Pascal's Mugging by fiat, saying "this is the level at which I am willing to stop worrying about this".

Also, as some people elsewhere in the comments have pointed out, this makes probability non-additive in an awkward sort of way. Suppose that if you eat unhealthy, you increase your risk of one million different diseases by a one-in-a-million chance each. Suppose also that eating healthy is a mildly unpleasant sacrifice, but getting a disease is much worse. If we calculate this out disease-by-disease, each disease is a Pascal's Mugging and we should choose to eat unhealthy. But if we calculate this out in the broad category of "getting some disease or other", then our chances are quite high and we should eat healthy. But it's very strange that our ontology/categorization scheme should affect our decision-making. This becomes much more dangerous when we start talking about AIs.
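As a quick illustration of how far the two framings diverge, here's a sketch under the (assumed) simplification that the million disease risks are independent:

```python
# Illustrative only: per-disease risk vs. the aggregate risk of
# "getting some disease or other", assuming the risks are independent.

n_diseases = 1_000_000
per_disease_risk = 1e-6   # each one looks ignorable on its own

aggregate_risk = 1 - (1 - per_disease_risk) ** n_diseases
print(per_disease_risk, aggregate_risk)  # 1e-06 vs. ~0.632
```

The per-disease number falls below any plausible ignore-threshold, while the category-level number is nowhere near ignorable, which is the ontology-dependence I mean.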

Also, does this create weird nonlinear thresholds? For example, suppose that you live on average 80 years. If some event which causes you near-infinite disutility happens every 80.01 years, you should ignore it; if it happens every 79.99 years, then preventing it becomes the entire focus of your existence. But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Also, a world where people follow this plan is a world where I make a killing on the Inverse Lottery (rules: 10,000 people take tickets; each ticket holder gets paid $1, except a randomly chosen "winner" who must pay $20,000).

Comment by yvain on Open Thread, Jul. 6 - Jul. 12, 2015 · 2015-07-16T14:09:49.751Z · score: 1 (1 votes) · LW · GW

There are meetings in the area every couple of months. There's no specific group to link to, but if you give me your email address I will add you to the list.

If you tell me where exactly in Michigan you are, I can try to put you in touch with other Michigan LW/SSC readers. Most are in Ann Arbor, but there are several in the Detroit metro area and at least one in Grand Rapids.

Comment by yvain on Open Thread, May 4 - May 10, 2015 · 2015-05-10T05:08:30.850Z · score: 3 (3 votes) · LW · GW

I liked the sound of "The Future Is Pipes" and saved that sentence structure in case I needed it.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-26T20:48:19.110Z · score: 10 (10 votes) · LW · GW

The Mirror did not touch the ground; the golden frame had no feet. It didn't look like it was hovering; it looked like it was fixed in place, more solid and more motionless than the walls themselves, like it was nailed to the reference frame of the Earth's motion.

The Mirror is in the fourth wall. Now that we-the-readers have seen the mirror, we have to consider that our seeing Eliezer saying this isn't in the mirror might just be part of our coherent extrapolated volition.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T21:46:24.086Z · score: 1 (1 votes) · LW · GW

But that suggests that you can resurrect someone non-permanently without the Stone - and possibly keep them alive indefinitely by expending constant magic on it like Harry does with his father's rock.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-25T21:44:40.553Z · score: 9 (9 votes) · LW · GW

Why is Voldemort not getting rid of Harry in some more final way?

Even if he's worried killing Harry will rebound against him because of the prophecy somehow, he can, I don't know, freeze Harry? Stick Harry in the mirror using whatever happened to Dumbledore? Destroy Harry's brain and memories and leave him an idiot? Shoot Harry into space?

Why is "resurrect Harry's best friend to give him good counsel" a winning move here?

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:26:18.368Z · score: 3 (3 votes) · LW · GW

How is Voldemort resurrecting Hermione?

With his own resurrection, he's transfiguring stuff into his body, using the Stone to make the transfiguration permanent, then having his spirit repossess the body.

With Hermione, the body was never the problem. Where's he getting the spirit? How does the Stone help?

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:08:27.854Z · score: 24 (24 votes) · LW · GW

So Harry gets his wand back, gets his pouch back, Voldemort resurrects Hermione with superpowers, then Voldemort becomes super-weak, his horcruxes mysteriously stop working, and he mentions this is happening loudly enough for Harry to hear and kill him?

Either this is still all in the mirror, or Harry needs to buy lottery tickets right away.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110 · 2015-02-24T23:01:12.281Z · score: 12 (12 votes) · LW · GW

"Well," said Albus Dumbledore. "I do feel stupid."

"I should hope so," Professor Quirrell said easily; if he had been at all shocked himself at being caught, it did not show. A casual wave of his hand changed his robes back to a Professor's clothing.

Dumbledore's grimness had returned and redoubled. "There I am, searching so hard for Voldemort's shade, never noticing that the Defense Professor of Hogwarts is a sickly, half-dead victim possessed by a spirit far more powerful than himself. I would call it senility, if so many others had not missed it as well."

This is Dumbledore admitting he held the Idiot Ball. We've been promised nobody's holding the Idiot Ball. So something's up. I like the theory that the mirror is operating as intended and showing Voldemort what he wants to see, that being Dumbledore making a mistake and losing. Alternately, it could be that Dumbledore was somehow able to make a version of himself inside the mirror (maybe in his CEV he saw himself inside the mirror protecting the Stone, and that reflection-Dumbledore gained independent existence?). Or he could just have something else up his sleeve.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T17:22:54.058Z · score: 24 (26 votes) · LW · GW

Prediction:

Harry gets the Snitch eliminated from Quidditch. Not just in Hogwarts, but in the big leagues as well - they don't want a Germany vs. Austria on their hands.

All of the celebrity Quidditch players of the world - Viktor Krum, Ludo Bagman, Finbar Quigley - are distraught by these sudden and drastic changes to a traditional game they've loved for many years. At the ceremony marking the changes, some of them tear up.

The Daily Prophet headline is "BOY WHO LIVED TEARS UP THE STARS"

Eliezer gives all of us a long lecture about how the prior for somebody making celebrities cry is so much higher than the prior for someone literally ripping the Sun apart that the latter hypothesis should never even have entered our consideration, regardless of how much more natural an interpretation of the prophecy it is.

Comment by yvain on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-16T01:30:44.591Z · score: 4 (4 votes) · LW · GW

I can't find the exact post you're talking about, but the book involved was probably The Adapted Mind, since Eliezer often praises it in terms like those.

Comment by yvain on Velocity of behavioral evolution · 2014-12-20T04:41:12.031Z · score: 11 (11 votes) · LW · GW

Smell is interesting because it's way overrepresented genetically. Something like 5% of most animals' genes are just a whole bunch of olfactory receptor genes, each for a different individual smell. So it should be unusually easy to do epigenetics with it. Just say "Express the gene for cherry smell more" and then the mice have a stronger reaction to it.

This doesn't mean that any more complex behaviors can be inherited epigenetically. In fact, it might be that nothing else is as suitable to epigenetic transmission as olfaction.

Comment by yvain on PSA: Eugine_Nier evading ban? · 2014-12-08T01:26:56.991Z · score: 76 (82 votes) · LW · GW

Both accounts' SSC comments come from the same IP.

Comment by yvain on xkcd on the AI box experiment · 2014-11-22T02:39:54.372Z · score: 51 (55 votes) · LW · GW

At this point I think the winning move is rolling with it and selling little plush basilisks as a MIRI fundraiser. It's our involuntary mascot, and we might as well 'reclaim' it in the social justice sense.

Then every time someone brings up "Less Wrong is terrified of the basilisk" we can just be like "Yes! Yes we are! Would you like to buy a plush one?" and everyone will appreciate our ability to laugh at ourselves, and they'll go back to whatever they were doing.

Comment by yvain on xkcd on the AI box experiment · 2014-11-22T02:37:11.843Z · score: 30 (28 votes) · LW · GW

When I visited MIRI's headquarters, they were trying to set up a video link to the Future of Humanity Institute. Somebody had put up a monitor in a prominent place and there was a sticky note saying something like "Connects to FHI - do not touch".

Except that the H was kind of sloppy and bent upward so it looked like an A.

I was really careful not to touch that monitor.

Comment by yvain on Neo-reactionaries, why are you neo-reactionary? · 2014-11-21T08:02:34.760Z · score: 22 (22 votes) · LW · GW

I agree with Toggle that this might not have been the best place for this question.

The Circle of Life goes like this. Somebody associates Less Wrong with neoreactionaries, even though there are like ten of them here total. They start discussing neoreaction here, or asking their questions for neoreactionaries here. The discussion is high profile and leads more people to associate Less Wrong with neoreactionaries. That causes more people to discuss it and ask questions here, which causes more people to make the association, and it ends with everybody certain that we're full of neoreactionaries, and that ends with bad people who want to hurt us putting "LESS WRONG IS A RACIST NEOREACTIONARY WEBSITE" in big bold letters over everything.

If you really want to discuss neoreaction, I'd suggest you do it in a Slate Star Codex open thread, since apparently I'm way too tarnished by association with them to ever escape. Or you can go to a Xenosystems open thread and get it straight from the horse's mouth.

Comment by yvain on Neo-reactionaries, why are you neo-reactionary? · 2014-11-21T07:09:01.738Z · score: 49 (49 votes) · LW · GW

I've been advised to come here and defend myself.

If you haven't been watching closely, David Gerard has been spreading these same smears about me on RationalWiki, on Twitter, and now here. His tweets accuse me of treating the Left in general and the social justice movement in particular with "frothing" and as "ordure". And now he comes here and adds Tumblr to the list of victims, and "actual disgust" to the list of adjectives.

I resent this because it is a complete fabrication.

I resent it because, far from a frothing hatred of Tumblr, I myself have a Tumblr account which I use almost every day and which I've made three hundred posts on. Sure, I've gently mocked Tumblr (as has every Tumblr user) but I've also very publicly praised it for hosting some very interesting and enlightening conversations.

I resent it because I've posted a bunch of long defenses and steelmannings of social justice ideas like Social Justice For The Highly Demanding Of Rigor and The Wonderful Thing About Triggers, some of which have gone mildly viral in the social justice blogosphere, and some of which have led to people emailing me or commenting saying they've changed their minds and become less hostile to social justice as a result.

I resent it because, far from failing to intellectually engage with the Left, in the past couple of months I've read, reviewed, and enjoyed left-leaning books on Marx, the Soviet economy, and market socialism.

I resent it because the time I most remember someone trying to engage me about social justice, Apophemi, I wrote a seven thousand word response which I consider excruciatingly polite, which started with a careful justification for why writing it would be more productive and respectful than not writing it, and which ended with a heartfelt apology for the couple of things I had gotten wrong on my last post on the subject.

(Disgust! Frothing! Ordure!)

I resent it because I happily hosted Ozy's social justice blogging for several months, giving them an audience for posts like their takedown of Heartiste, which was also very well-received and got social justice ideas to people who otherwise wouldn't have seen them.

I resent it because about a fifth of my blogroll is social justice or social justice-aligned blogs, each of which gets a couple dozen hits from me a day.

I resent it because even in my most impassioned posts about social justice, I try to make it very clear that there are parts of the movement which make excellent points, and figures in the movement I highly respect. Even in what I think everyone here will agree is my meanest post on the subject, Radicalizing the Romanceless, I stop to say the following about the social justice blogger I am arguing against:

[He] is a neat guy. He draws amazing comics and he runs one of the most popular, most intellectual, and longest-standing feminist blogs on the Internet. I have debated him several times, and although he can be enragingly persistent he has always been reasonable...He cares deeply about a lot of things, works hard for those things, and has supported my friends when they have most needed support.

(DISGUST! FROTHING! ORDURE!)

I resent it because it trivializes all of my sick burns against neoreactionaries, like the time I accused them of worshipping Kim Jong-un as a god, and the time I said they were obsessed with "precious, precious, white people", and the time I mocked Jim for thinking Eugene V. Debs was a Supreme Court case.

I resent this because anyone who looks at my posts tagged with social justice can see that almost as many are in favor as against.

And I resent this because I'm being taken to task about charity by somebody whose own concept of a balanced and reasonable debate is retweeting stuff like this -- and again and again calling the people he disagrees with "shitlords"

(which puts his faux-horror that I treat people I disagree with 'like ordure' in a pretty interesting new light)

No matter how many pro-social-justice posts I write, how fair and nice I am, or what I do, David Gerard is going to keep spreading these smears about me until I refuse to ever engage with anyone who disagrees with him about anything at all. As long as I'm saying anything other than "every view held by David Gerard is perfect and flawless and everyone who disagrees with David Gerard is a shitlord who deserves to die", he is going to keep concern-trolling you guys that I am "biased" or "unfair".

Please give his continued campaigning along these lines the total lack of attention it richly deserves.

Comment by yvain on 2014 Less Wrong Census/Survey · 2014-11-02T01:52:17.198Z · score: 2 (2 votes) · LW · GW

I stated that all disputes would be resolved by Wikipedia, and here is Wikipedia's verdict on the matter: http://en.wikipedia.org/wiki/List_of_best-selling_PC_games

Comment by yvain on 2014 Less Wrong Census/Survey · 2014-10-26T20:07:59.062Z · score: 9 (9 votes) · LW · GW

I'm confused. If you were male at birth and transitioned to female, can't you just answer the "sex assigned at birth" question male, and the gender question with "transgender m -> f" ?

Comment by yvain on 2014 Less Wrong Census/Survey · 2014-10-26T20:06:02.865Z · score: 9 (9 votes) · LW · GW

I can already tell you that...well, you remember the preview thread. The one where I posted a version of the survey saying in big letters on the top "DO NOT TAKE THIS, IT IS NOT OPEN" and the first question was "You are not supposed to take the survey now" and the only answer was "Okay, I'll stop"?

Four people took it. Obviously they won't be counted.

Comment by yvain on 2014 Less Wrong Census/Survey · 2014-10-26T20:03:58.463Z · score: 6 (6 votes) · LW · GW

Yes, but I keep my data private because I'd be easy to find otherwise and I don't want everyone knowing my income and politics et cetera.

Comment by yvain on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-15T07:30:05.690Z · score: 18 (18 votes) · LW · GW

Should effective altruists donate to fighting Ebola?

Argument against: usually very famous things that make the news are terrible effective altruist causes and you should stick to well-studied things like malaria.

Argument for: Ebola is very underfunded compared to sexier disasters. And it is a disease in the Third World, a category which has brought us most of the best-known effective altruism interventions.

Thoughts: The CDC estimates a best-case scenario of 20,000 cases by January and a worst-case scenario of about 1.5 million cases by January. They do not estimate risks past January. There are also black swan risks in which Ebola spreads to the entire Third World (eg India) and kills tens of millions of people there. However, on the margin individual donations are unlikely to shift the virus from one of these scenarios to another, so it's probably more worth considering how much good the marginal donation does.

Doctors Without Borders is a very well known, GiveWell-approved charity. They are running clinics in the country, but it's hard to tell how much more clinic they can run per dollar. On the other hand, they are also giving out home infection prevention kits by the tens of thousands. Other charities price these at about ten dollars per kit, although I've seen estimates that differ by an order of magnitude. I don't think anybody knows how effective the kits are going to be, although everyone agrees they are a vastly inferior option to sufficient space in hospitals, which at the moment does not exist.

If we estimate 100,000 Liberians eventually infected (the geometric mean of the estimates), that's about 2% of the population, so $1000 buys 100 kits, of which about 2 go to people likely to be infected.

$1000 for malaria bed nets supposedly gives something like 20 to 100 DALYs, depending on whose estimate you trust.

Ebola death rate is about 50%. Suppose the average infected person has 30 DALYs left to lose. So each case of Ebola costs 15 DALYs directly. But it probably ends up costing more like 30, because I think on average each case infects one other person (I don't think this is meant to be iterated, or else the estimate quickly goes to infinity). So if every Ebola kit was 100% effective, we would expect distributing the kits to save 60 DALYs.

That means in order for kits to be as good as the bottom range of estimates for bed nets, they would have to be at least 33% effective in preventing Ebola among people who get them, which they probably aren't.

On the other hand, every number in this estimate is a total wild guess, and I don't trust that I'm within two orders of magnitude of anything approaching reality. Kits likely cost more when including distribution (I expect charities to underreport costs to make people feel good about giving them), there's no guarantee that there's room for more kits, and my rate of how many subsequent cases are caused by each case is from a half-remembered news article. Does anyone have better ideas for how to figure this out?
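For what it's worth, here is the same back-of-envelope calculation written out, using the guesses above; every input is a wild estimate rather than data, so treat it as a sketch of the reasoning and not a result:

```python
# Sketch of the rough comparison above; all inputs are guesses from the text.

donation = 1000           # dollars
kit_cost = 10             # dollars per home infection-prevention kit (guessed)
infection_rate = 0.02     # ~100,000 eventual cases out of Liberia's population (guessed)
death_rate = 0.5          # approximate Ebola case fatality rate
dalys_per_death = 30      # DALYs the average infected person has left (guessed)
secondary_cases = 1       # further cases per case, not iterated (guessed)

kits = donation / kit_cost                                               # 100 kits
kits_to_likely_cases = kits * infection_rate                             # ~2 kits
dalys_per_case = death_rate * dalys_per_death * (1 + secondary_cases)    # ~30
dalys_if_kits_perfect = kits_to_likely_cases * dalys_per_case            # ~60

bednet_dalys_low = 20     # low-end estimate for $1000 of malaria bed nets

# Effectiveness the kits would need just to match the low-end bed net figure:
breakeven_effectiveness = bednet_dalys_low / dalys_if_kits_perfect       # ~0.33
print(dalys_if_kits_perfect, breakeven_effectiveness)
```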

Comment by yvain on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-12T03:37:24.936Z · score: 3 (3 votes) · LW · GW

There was a Schrodinger atom question a couple years ago. I'm trying not to keep all questions lest the survey just grow and grow forever. Any particular reason you want to know whether the Schrodinger solving percent has changed since last time?

Comment by yvain on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-12T03:36:09.232Z · score: 3 (3 votes) · LW · GW

How would you handle this?

Comment by yvain on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-12T03:35:52.452Z · score: 8 (8 votes) · LW · GW

Anything I do with gender and sex is going to have lots of people yell at me. But if I keep it the same, it will be the same people as last year and I won't make new enemies.

Comment by yvain on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-12T03:35:01.666Z · score: 7 (7 votes) · LW · GW

I had this last time, and several people told me to take it off because it was bad to make people admit to illegal activities.

Also, for complicated reasons I can't do "Check as many as apply" questions, so this would take forever.

Comment by yvain on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-12T03:34:04.202Z · score: 4 (4 votes) · LW · GW

Any particular implementation details on OCEAN? Exact same as last time?

Comment by yvain on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-11T17:24:57.560Z · score: 5 (5 votes) · LW · GW

See for example http://www.livescience.com/13958-birth-month-health-effects.html

Comment by yvain on Questions on Theism · 2014-10-09T04:25:06.766Z · score: 42 (42 votes) · LW · GW

The Roman Catholic Church has a process -- to its credit, not a completely ridiculous one -- by which it certifies some healings as miraculous. Although the process isn't completely ridiculous, it's far from obviously bulletproof; the main requirement is that a bunch of Roman Catholic doctors declare that the alleged cure is inexplicable according to current medical knowledge.

I went to medical school in Ireland and briefly rotated under a neurologist there. One time he received a very nice letter from the Catholic Church, saying that one of his patients had gotten much better after praying to a certain holy figure, and the Church was trying to canonize (or beatify, or whatever) the figure, so if the doctor could just certify that the patient's recovery was medically impossible, that would be really helpful and make everyone very happy.

The neurologist wrote back that the patient had multiple sclerosis, a disease which remits for long periods on its own all the time and so there was nothing medically impossible about the incident at all.

I have only vague memories of this, but I think the Church kept pushing it, asking whether maybe it was at least a little medically impossible, because they really wanted to saint this guy.

(the neurologist was an atheist and gleefully refused as colorfully as he could)

This left me less confident in accounts of medical miracles.