Posts

Autism And Intelligence: Much More Than You Wanted To Know 2019-11-14T05:30:02.643Z · score: 56 (15 votes)
Building Intuitions On Non-Empirical Arguments In Science 2019-11-07T06:50:02.354Z · score: 62 (23 votes)
Book Review: Ages Of Discord 2019-09-03T06:30:01.543Z · score: 36 (11 votes)
Book Review: Secular Cycles 2019-08-13T04:10:01.201Z · score: 62 (29 votes)
Book Review: The Secret Of Our Success 2019-06-05T06:50:01.267Z · score: 136 (43 votes)
1960: The Year The Singularity Was Cancelled 2019-04-23T01:30:01.224Z · score: 65 (22 votes)
Rule Thinkers In, Not Out 2019-02-27T02:40:05.133Z · score: 101 (37 votes)
Book Review: The Structure Of Scientific Revolutions 2019-01-09T07:10:02.152Z · score: 75 (17 votes)
Bay Area SSC Meetup (special guest Steve Hsu) 2019-01-03T03:02:05.532Z · score: 30 (4 votes)
Is Science Slowing Down? 2018-11-27T03:30:01.516Z · score: 108 (42 votes)
Cognitive Enhancers: Mechanisms And Tradeoffs 2018-10-23T18:40:03.112Z · score: 44 (17 votes)
The Tails Coming Apart As Metaphor For Life 2018-09-25T19:10:02.410Z · score: 95 (35 votes)
Melatonin: Much More Than You Wanted To Know 2018-07-11T17:40:06.069Z · score: 92 (33 votes)
Varieties Of Argumentative Experience 2018-05-08T08:20:02.913Z · score: 118 (41 votes)
Recommendations vs. Guidelines 2018-04-13T04:10:01.328Z · score: 130 (37 votes)
Adult Neurogenesis – A Pointed Review 2018-04-05T04:50:03.107Z · score: 102 (32 votes)
God Help Us, Let’s Try To Understand Friston On Free Energy 2018-03-05T06:00:01.132Z · score: 96 (32 votes)
Does Age Bring Wisdom? 2017-11-08T07:20:00.376Z · score: 60 (22 votes)
SSC Meetup: Bay Area 10/14 2017-10-13T03:30:00.269Z · score: 4 (0 votes)
SSC Survey Results On Trust 2017-10-06T05:40:00.269Z · score: 13 (5 votes)
Different Worlds 2017-10-03T04:10:00.321Z · score: 92 (47 votes)
Against Individual IQ Worries 2017-09-28T17:12:19.553Z · score: 69 (38 votes)
My IRB Nightmare 2017-09-28T16:47:54.661Z · score: 24 (15 votes)
If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics 2017-09-03T20:56:25.373Z · score: 32 (14 votes)
Beware Isolated Demands For Rigor 2017-09-02T19:50:00.365Z · score: 46 (31 votes)
The Case Of The Suffocating Woman 2017-09-02T19:42:31.833Z · score: 6 (4 votes)
Learning To Love Scientific Consensus 2017-09-02T08:44:12.184Z · score: 9 (7 votes)
I Can Tolerate Anything Except The Outgroup 2017-09-02T08:22:19.612Z · score: 14 (12 votes)
The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible 2017-08-10T00:33:54.000Z · score: 10 (6 votes)
Where The Falling Einstein Meets The Rising Mouse 2017-08-03T00:54:28.000Z · score: 7 (4 votes)
Why Are Transgender People Immune To Optical Illusions? 2017-06-28T19:00:00.000Z · score: 13 (5 votes)
SSC Journal Club: AI Timelines 2017-06-08T19:00:00.000Z · score: 2 (2 votes)
The Atomic Bomb Considered As Hungarian High School Science Fair Project 2017-05-26T09:45:22.000Z · score: 24 (14 votes)
G.K. Chesterton On AI Risk 2017-04-01T19:00:43.865Z · score: 3 (3 votes)
Guided By The Beauty Of Our Weapons 2017-03-24T04:33:12.000Z · score: 11 (9 votes)
[REPOST] The Demiurge’s Older Brother 2017-03-22T02:03:51.000Z · score: 8 (7 votes)
Antidepressant Pharmacogenomics: Much More Than You Wanted To Know 2017-03-06T05:38:42.000Z · score: 2 (2 votes)
A Modern Myth 2017-02-27T17:29:17.000Z · score: 10 (6 votes)
Highlights From The Comments On Cost Disease 2017-02-17T07:28:52.000Z · score: 2 (2 votes)
Considerations On Cost Disease 2017-02-10T04:33:36.000Z · score: 8 (5 votes)
Albion’s Seed, Genotyped 2017-02-09T02:15:03.000Z · score: 4 (2 votes)
Discussion of LW in Ezra Klein podcast [starts 47:40] 2016-12-07T23:22:10.079Z · score: 9 (10 votes)
Expert Prediction Of Experiments 2016-11-29T02:47:47.276Z · score: 10 (11 votes)
The Pyramid And The Garden 2016-11-05T06:03:06.000Z · score: 31 (17 votes)
The Moral Of The Story 2016-10-18T02:22:45.000Z · score: 19 (9 votes)
Superintelligence FAQ 2016-09-20T19:00:00.000Z · score: 19 (8 votes)
It’s Bayes All The Way Up 2016-09-12T13:35:46.000Z · score: 14 (6 votes)
How The West Was Won 2016-07-26T03:05:17.000Z · score: 14 (5 votes)
Things Probably Matter 2016-07-21T02:03:46.000Z · score: 5 (3 votes)
Ascended Economy? 2016-05-30T21:06:07.000Z · score: 4 (2 votes)

Comments

Comment by yvain on bgaesop's Shortform · 2019-10-27T09:24:29.090Z · score: 12 (4 votes) · LW · GW

I'd assumed what I posted was the LW meditator consensus, or at least compatible with it.

Comment by yvain on Free Money at PredictIt? · 2019-09-26T17:43:46.914Z · score: 25 (12 votes) · LW · GW
In prediction markets, the cost of capital needed to make trades is a major distorting factor, as are fees, taxes, and other frictional costs, and participants are much less certain of correct prices and much more worried about price impact and about how many others are in the same trade. Almost everyone looking to correct inefficiencies will only fade very large and very obvious ones, given all the costs.

https://blog.rossry.net/predictit/ has a really good discussion of how this works, with associated numbers showing how you will probably outright lose money even on apparently ironclad trades like the candidates-summing-to-112 example above.
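
(For intuition, here's a minimal sketch, with illustrative prices I've made up, of how PredictIt's fee schedule - 10% of profit per contract and 5% of withdrawals - can turn a buy-NO-on-every-candidate "arbitrage" into a loss; the linked post works through the real numbers.)

```python
# Sketch: buy one NO share on each candidate in a market whose YES
# prices sum to 1.12 (the "112-total" situation). Exactly one candidate
# wins, so every other NO share pays out $1.
# Prices are illustrative; fees assumed to be PredictIt's 10%-of-profit
# and 5%-of-withdrawal schedule.

yes_prices = [0.40, 0.40, 0.32]            # sums to 1.12
cost = sum(1 - p for p in yes_prices)      # cost of the full NO basket

for winner in range(len(yes_prices)):
    gross = len(yes_prices) - 1            # losing candidates' NO shares pay $1
    # Profit on each winning NO contract equals its YES price; 10% fee on that.
    fee = 0.10 * sum(p for i, p in enumerate(yes_prices) if i != winner)
    net = 0.95 * (gross - fee) - cost      # 5% fee when withdrawing the payout
    print(f"if candidate {winner} wins: net {net:+.4f} per basket")
# Every branch prints a negative number, despite the 12-cent "free money".
```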

Comment by yvain on Could we solve this email mess if we all moved to paid emails? · 2019-08-13T04:09:28.693Z · score: 5 (2 votes) · LW · GW

I'm sorry, I didn't understand that. Yes, this answers my objection (although it might cause other problems, like making me less likely to answer "sorry, I can't do that" compared to just ghosting someone).

Comment by yvain on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T02:03:53.283Z · score: 12 (8 votes) · LW · GW

I think it's great that you're trying this and I hope it succeeds.

But I won't be using it. For me, the biggest problem is lowering the sense of obligation I feel to answer other people's emails. Without a sense of obligation, there's no problem - I just delete it and move on. But part of me feels like I'm incurring a social cost by doing this, so it's harder than it sounds.

I feel like using a service like this would make the problem worse, not better. It would make me feel a strong sense of obligation to answer someone's email if they had paid $5 to send it. What sort of monster deletes an email they know the other person had to pay money to send?

In the same way, I would feel nervous sending someone else a paid email, because I would feel like I was imposing a stronger sense of obligation on them to respond to my request, rather than it being a harmless ask they can either answer or not. This would be true regardless of how important my email was. Meanwhile, people who don't care about other people's feelings won't really be held back, since $5 is not a lot of money for most people in this community.

I think the increased obligation would dominate any tendency for me to get fewer emails, and make this a net negative in my case. I still hope other people try it and report back.

Comment by yvain on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-04T01:48:57.528Z · score: 23 (14 votes) · LW · GW

What would you recommend to people who are doing this (or to people who aren't sure whether they're doing it)?

Comment by yvain on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-02T05:53:28.207Z · score: 22 (6 votes) · LW · GW

I'm a little confused, and I think it might be because you're using "conflict theorist" differently from how I do.

For me, a conflict theorist is someone who thinks the main driver of disagreement is self-interest rather than honest mistakes. There can be mistake theorists and conflict theorists on both sides of the "is billionaire philanthropy good?" question, and on the "are individual actions acceptable even though they're nondemocratic?" question.

It sounds like you're using it differently, so I want to make sure I know exactly what you mean before replying.

You say you've given up on understanding the number of basically good people who disagree with things you think are obvious and morally obligatory. I suspect there's a big confusion about what 'basically good' means here; I'm making a note of it for future posting, but moving past that for now: when you examine specific cases of such disagreements happening, what do you find, and how often? (I keep writing possible answers, but on reflection it's better to avoid anchoring you.)

I think I usually find we're working off different paradigms, in the really strong Kuhnian sense of paradigm.

Comment by yvain on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T17:34:11.881Z · score: 108 (37 votes) · LW · GW

Rob Reich is a former board member of GiveWell and Good Ventures (i.e. Moskovitz and Tuna), and the people at OpenPhil seem to have a huge amount of respect for him. He responded to my article by tweeting "Really grateful to have my writing taken seriously by someone whose blog I've long enjoyed and learned from" and promising to write a reply soon.

Dylan Matthews, who wrote the Vox article I linked (I don’t know if he is against billionaire philanthropy, but he seems to hold some sympathy for the position), self-describes as EA, has donated a kidney, and switched from opposing work on AI risk to supporting it after reading arguments on the topic.

And here's someone on the subreddit saying that they previously had some sympathy for anti-billionaire-philanthropy arguments but are now more convinced that it's net positive.

I don’t think any of these people fit your description of “people opposed to nerds or to thinking”, “people opposed to all private actions not under ‘democratic control’”, or “people opposed to action of any kind.” They seem like basically good people who I disagree with. I am constantly surprised by how many things that seem obvious and morally obligatory to me can have basically good people disagree with them, and I have kind of given up on trying to understand it, but there we go.

Even if there are much worse people in the movement, I think getting Reich and Matthews alone to dial it down 10% would be very net positive, since they're among the most prominent opponents.

I was concerned about backlash and ran the post by a couple of people I trusted to see if they thought it was net positive, and they all said it was. If you want I'll run future posts I have those concerns about by you too.

Comment by yvain on Dialogue on Appeals to Consequences · 2019-07-19T20:28:53.423Z · score: 25 (8 votes) · LW · GW

Instead of Quinn admitting lying is sometimes good, I wish he had said something like:

“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where you’re wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP is dishonest’. If this doesn’t work and lots of other people confirm your intuitions of distrust, then your confidence reaches 90% and you start doing things more like shouting ‘PADP is dishonest’ from the rooftops.
Or maybe you’ll never reach 90% confidence. Many people think that climate science is dishonest. I don’t doubt many of them are reporting their beliefs honestly - that they’ve done a deep investigation and that’s what they’ve concluded. It’s just that they’re not smart, informed, or rational enough to understand what’s going on, or to process it in an unbiased way. What advice would you give these people about calling scientists out on dishonesty - again given that rumors are powerful things and can ruin important work? My advice to them would be to consider that they may be overconfident, and that there needs to be some intermediate ‘consider my own limitations and the consequences of my irreversible actions’ step in between ‘this looks dishonest to me’ and ‘I will publicly declare it dishonest’. And that step is going to look like an appeal to consequences, especially if the climate deniers are so caught up in their own biases that they can't imagine they might be wrong.
I don’t want to deny that calling out apparent dishonesty when you’re pretty sure of it, or when you’ve gone through every effort you can to check it and it still seems bad, will sometimes (maybe usually) be the best course, but I don’t think it’s as simple as you think.”

...and seen what Carter answered.
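
(A minimal sketch of the prior-plus-evidence arithmetic Quinn describes above, in odds form, with made-up numbers: a prior that a well-regarded charity is bad, and a likelihood ratio saying how much more likely the discovered evidence would be if the charity really were bad.)

```python
# Made-up numbers, for illustration only.

def posterior(prior_bad: float, likelihood_ratio: float) -> float:
    """Combine a prior with a likelihood ratio via odds-form Bayes."""
    prior_odds = prior_bad / (1 - prior_bad)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(posterior(0.05, 20))    # ~0.51: the "tough call" regime
print(posterior(0.05, 200))   # ~0.91: closer to rooftop-shouting territory
```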

Comment by yvain on The AI Timelines Scam · 2019-07-11T07:30:47.811Z · score: 48 (23 votes) · LW · GW

1. It sounds like we have a pretty deep disagreement here, so I'll write an SSC post explaining my opinion in depth sometime.

2. Sorry, it seems I misunderstood you. What did you mean by mentioning business's very short timelines and all of the biases that might make them have those?

3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they'd seen campaign ads, etc. These biases could certainly exist. But if I didn't even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I'm not sure this would qualify as sociopolitical analysis.

4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I'm not sure what you mean by "concrete criticisms". You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are "duped". What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn't been proven.

I'm not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with "at this level of resolution". I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don't know if this is true or not, but I feel like you haven't supported it.

Bayes' theorem says that we shouldn't update on evidence we would expect to see whether or not a hypothesis were true. I feel like you could have written an equally compelling essay "proving" bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn't, I feel like the post didn't explain why you felt that way. So I don't think we should update on the information in this, and I think the intensity of your language ("scam", "lie", "dupe") is incongruous with the lack of update-able information.
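
(In odds form - added here for illustration - the point is that evidence you would expect to see whether or not the hypothesis is true has a likelihood ratio of one and moves nothing:)

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
  \cdot
  \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}},
\qquad
P(E \mid H) = P(E \mid \neg H) \implies \text{posterior odds} = \text{prior odds}.
```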

Comment by yvain on The AI Timelines Scam · 2019-07-11T05:46:44.883Z · score: 186 (70 votes) · LW · GW

1. For reasons discussed on comments to previous posts here, I'm wary of using words like "lie" or "scam" to mean "honest reporting of unconsciously biased reasoning". If I criticized this post by calling you a liar trying to scam us, and then backed down to "I'm sure you believe this, but you probably have some bias, just like all of us", I expect you would be offended. But I feel like you're making this equivocation throughout this post.

2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you're criticizing in this post are AI professors. Unless you got your timelines from industry, which I don't think many people here did, them being stupid isn't especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they're wrong doesn't change anything.

3. I've frequently heard people who believe AI might be near say that their side can't publicly voice their opinions, because they'll get branded as loonies and alarmists, and therefore we should adjust in favor of near-termism because long-timelinists get to unfairly dominate the debate. I think it's natural for people on all sides of an issue to feel like their side is uniquely silenced by a conspiracy of people biased towards the other side. See Against Bravery Debates for evidence of this.

4. I'm not familiar with the politics in AI research. But in medicine, I've noticed that doctors who go straight to the public with their controversial medical theory are usually pretty bad, for one of a couple of reasons. Number one, they're usually wrong, people in the field know they're wrong, and they're trying to bamboozle a reading public who aren't smart enough to figure out that they're wrong (but who are hungry for a "Galileo stands up to hidebound medical establishment" narrative). Number two, there's a thing they can do where they say some well-known fact in a breathless tone, and then get credit for having blown the cover of the establishment's lie. You can always get a New Yorker story by writing "Did you know that, contrary to what the psychiatric establishment wants you to believe, SOME DRUGS MAY HAVE SIDE EFFECTS OR WITHDRAWAL SYNDROMES?" Then the public gets up in arms, and the psychiatric establishment has to go on damage control for the next few months and strike an awkward balance between correcting the inevitable massive misrepresentations in the article while also saying the basic premise is !@#$ing obvious and was never in doubt. When I hear people say something like "You're not presenting an alternative solution" in these cases, they mean something like "You don't have some alternate way of treating diseases that has no side effects, so stop pretending you're Galileo for pointing out a problem everyone was already aware of." See Beware Stephen Jay Gould for Eliezer giving an example of this, and Chemical Imbalance and the followup post for me giving an example of this. I don't know for sure that this is what's going on in AI, but it would make sense.

I'm not against modeling sociopolitical dynamics. But I think you're doing it badly, by taking some things that people on both sides feel, applying it to only one side, and concluding that means the other is involved in lies and scams and conspiracies of silence (while later disclaiming these terms in a disclaimer, after they've had their intended shocking effect).

I think this is one of the cases where we should use our basic rationality tools like probability estimates. Just from reading this post, I have no idea what probability Gary Marcus, Yann LeCun, or Steven Hansen has on AGI in ten years (or fifty years, or one hundred years). For all I know all of them (and you, and me) have exactly the same probability and their argument is completely political about which side is dominant vs. oppressed and who should gain or lose status (remember the issue where everyone assumes LWers are overly certain cryonics will work, whereas in fact they're less sure of this than the general population and just describe their beliefs differently). As long as we keep engaging on that relatively superficial monkey-politics "The other side are liars who are silencing my side!" level, we're just going to be drawn into tribalism around the near-timeline and far-timeline tribes, and our ability to make accurate predictions is going to suffer. I think this is worse than any improvement we could get by making sociopolitical adjustments at this level of resolution.

Comment by yvain on What are the open problems in Human Rationality? · 2019-05-26T05:50:59.428Z · score: 58 (17 votes) · LW · GW

I've actually been thinking about this for a while, here's a very rough draft outline of what I've got:

1. Which questions are important?
a. How should we practice cause prioritization in effective altruism?
b. How should we think about long shots at very large effects? (Pascal's Mugging)
c. How much should we be focusing on the global level, vs. our own happiness and ability to lead a normal life?
d. How do we identify gaps in our knowledge that might be wrong and need further evaluation?
e. How do we identify unexamined areas of our lives or decisions we make automatically? Should we examine those areas and make those decisions less automatically?

2. How do we determine whether we are operating in the right paradigm?
a. What are paradigms? Are they useful to think about?
b. If we were using the wrong paradigm, how would we know? How could we change it?
c. How do we learn new paradigms well enough to judge them at all?

3. How do we determine what the possible hypotheses are?
a. Are we unreasonably bad at generating new hypotheses once we have one, due to confirmation bias? How do we solve this?
b. Are there surprising techniques that can help us with this problem?

4. Which of the possible hypotheses is true?
a. How do we make accurate predictions?
b. How do we calibrate our probabilities?

5. How do we balance our explicit reasoning vs. that of other people and society?
a. Inside vs. outside view?
b. How do we identify experts? How much should we trust them?
c. Does cultural evolution produce accurate beliefs? How willing should we be to break tradition?
d. How much should the replication crisis affect our trust in science?
e. How well does good judgment travel across domains?

6. How do we go from accurate beliefs to accurate aliefs and effective action?
a. Akrasia and procrastination
b. Do different parts of the brain have different agendas? How can they all get on the same page?

7. How do we create an internal environment conducive to getting these questions right?
a. Do strong emotions help or hinder rationality?
b. Do meditation and related practices help or hinder rationality?
c. Do psychedelic drugs help or hinder rationality?

8. How do we create a community conducive to getting these questions right?
a. Is having "a rationalist community" useful?
b. How do strong communities arise and maintain themselves?
c. Should a community be organically grown or carefully structured?
d. How do we balance the conflicting desires for an accepting community where everyone can bring their friends and have fun, vs. a high-standards devotion to a serious mission?
e. How do we prevent a rationalist community from becoming insular / echo chambery / cultish?
f. ...without also admitting every homeopath who wants to convince us that "homeopathy is rational"?
g. How do we balance the need for a strong community hub with the need for strong communities on the rim?
h. Can these problems be solved by having many overlapping communities with slightly different standards?

9. How does this community maintain its existence in the face of outside pressure?

Comment by yvain on Tales From the American Medical System · 2019-05-10T16:46:07.157Z · score: 49 (21 votes) · LW · GW

I don’t think it’s necessarily greed.

Your doctor may be on a system where they are responsible for doing work for you (e.g. refilling your prescriptions, doing whatever insurance paperwork it takes to make your prescriptions go through, keeping track of when you need to get certain tests, etc) without receiving any compensation except when you come in for office visits. One patient like this isn’t so bad. Half your caseload like this means potentially hours of unpaid labor every day. Even if an individual doctor is willing to do this, high-level decision-makers like clinics and hospitals will realize this is a bad deal, make policies to avoid it, and pressure individual doctors to conform to the policies.

Also, your doctor remains very legally liable for anything bad that happens to you while you’re technically under their care, even if you never see them. If you’re very confused and injecting your insulin into your toenails every day, and then you get hyperglycemic, and your doctor never catches this because you never come into the office, you could sue them. So first of all, that means they’re carrying a legal risk for a patient they’re not getting any money from. And second of all, at the trial, your lawyer will ask “How often did you see so-and-so?” and the doctor will say “I haven’t seen them in years, I just kept refilling their prescription without asking any questions because they sent me an email saying I should”. And then they will lose, because being seen every three months is standard of care. Again, even if an individual doctor is overly altruistic and willing to accept this risk, high-level savvier entities like clinics and hospitals will institute and enforce policies against it. The clinic I work at automatically closes your chart and sends you a letter saying you are no longer our patient if you haven't seen us in X months (I can't remember what X is off the top of my head). This sounds harsh, but if we didn't do it, then if you ever got sick after having seen us even once, it would legally be our fault. Every lawyer in the world agrees you should do this, it's not some particular doctor being a jerk.

Also, a lot of people really do need scheduled appointments. You would be shocked how many people get much worse, are on death’s door, and I only see them when their scheduled three-monthly appointment rolls around, and I ask them “Why didn’t you come in earlier?!” and they just say something like they didn’t want to bother me, or didn’t realize it was so bad, or some other excuse I can’t possibly fathom (to be fair, many of these people are depressed or psychotic). This real medical necessity meshes with (more cynically provides a fig leaf for, but it's not a fake fig leaf) the financial and legal necessity.

I’m not trying to justify what your doctor did to you. If it were me, I would have refilled your insulin, then sent you a message saying in the future I needed to see you every three months. But I’ve seen patients try to get out of this. They’ll wait until the last possible moment, then send an email saying “I am out of my life-saving medication, you must refill now!” If I send a message saying we should have an appointment on the books before I fill it, they’ll pretend they didn’t see that and just resend “I need my life-saving medication now!” If my receptionist tries to call, they’ll hang up. At some point I start feeling like I’m being held hostage. I really only have one patient who is definitely doing this, but it’s enough that I can understand why some doctors don’t want to have to have this fight and institute a stricter “no refill until appointment is on the books” policy.

I do think there are problems with the system, but they’re more like:

- A legal system that keeps all doctors perpetually afraid of malpractice if they’re not doing this (but what is the alternative?)

- An insurance system that doesn’t let doctors get money except through appointments. If the doctor just charged you a flat fee per year for being their patient, that would remove the financial aspect of the problem. Some concierge doctors do this, but insurances don’t work that way (but insurances are pretty savvy, are they afraid doctors would cheat?)

- The whole idea that you can’t access life-saving medication until an official gives you permission (but what would be the effects of making potentially dangerous medications freely available?)

Comment by yvain on 1960: The Year The Singularity Was Cancelled · 2019-04-23T09:51:02.975Z · score: 12 (7 votes) · LW · GW

I showed it that way because it made more sense to me. But if you want, see https://docs.google.com/spreadsheets/d/1xEkh4jhUup0qlG6EzBct6igvLPeRH4avpM5nZQ-dgek/edit#gid=478995971 for a graph by Paul where the horizontal axis is log(GDP); it is year-agnostic and shows the same pattern.

Comment by yvain on On the Regulation of Perception · 2019-03-10T19:53:48.282Z · score: 17 (5 votes) · LW · GW

You may be interested in "Behavior: The Control Of Perception" by William T. Powers, which has been discussed here a few times.

Comment by yvain on Rule Thinkers In, Not Out · 2019-03-02T05:36:10.298Z · score: 46 (17 votes) · LW · GW

Thanks for this response.

I mostly agree with everything you've said.

While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people's filters are less functional (and charisma is more powerful) in the real-life medium.

On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also involved in my first coming across "The Nurture Assumption", which was very important for my intellectual growth and which I think has held up well. I've seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn't a necessary source of this, but I think earlier for me he was).

There have been some books, including "Inadequate Equilibria" and "Zero To One", that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition - it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can't identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there's no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.

I still think on net it's worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.

Comment by yvain on January 2019 Nashville SSC Meetup · 2018-12-25T10:23:09.263Z · score: 3 (1 votes) · LW · GW

I notice this isn't showing up on the sidebar of SSC; if you want it to, consider tagging this as SSC here.

Comment by yvain on No Really, Why Aren't Rationalists Winning? · 2018-11-06T20:29:54.019Z · score: 83 (36 votes) · LW · GW

I support the opposite perspective - it was wrong to ever focus on individual winning and we should drop the slogan.

"Rationalists should win" was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more "rational".

But this got caught up in excitement around "instrumental rationality" - the idea that the "epistemic rationality" skills of figuring out what was true, were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.

I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can't deny this makes sense. I can just point out that it doesn't resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke. I think it's possible (and important) to analyze this phenomenon and see what's going on. But the point is that this will involve analyzing a phenomenon - ie truth-seeking, ie epistemic rationality, ie the thing we're good at and which is our comparative advantage - and not winning immediately.

Remember the history of medicine, which started with wise women unreflectingly using traditional herbs to cure conditions. Some very smart people like Hippocrates came up with reasonable proposals for better ideas, and it turned out they were much worse than the wise women. After a lot of foundational work they eventually became better than the wise women, but it took two thousand years, and a lot of people died in the meantime. I'm not sure you can short-circuit the "spend two thousand years flailing around and being terrible" step. It doesn't seem like this community has.

And I'm worried about the effects of trying. People in the community are pushing a thousand different kinds of woo now, in exactly the way "Schools Proliferating Without Evidence" condemned. This is not the fault of their individual irrationality. My guess is that pushing woo is an almost inevitable consequence of taking self-help seriously. There are lots of things that sound like they should work, and that probably work for certain individual people, and it's almost impossible to get the funding or rigor or sample size that you would need to study it at any reasonable level. I know a bunch of people who say that learning about chakras has done really interesting and beneficial things for them. I don't want to say with certainty that they aren't right - some of the chakras have a suspicious correspondence to certain glands or bundles of nerves in the body, and for all I know maybe it's just a very strange way of understanding and visualizing those nerves' behavior. But there's a big difference between me saying "for all I know maybe..." and a community where people are going around saying "do chakras! they really work!" But if you want to be a self-help community, you don't have a lot of other options.

I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says "Hey, are we sure we shouldn't go back to being pure truth-seekers?", it's going to be a very different community that discusses the answer to that question.

We were doing very well before, and could continue to do very well, as a community about epistemic truth-seeking mixed with a little practical strategy. All of these great ideas like effective altruism or friendly AI that the community has contributed to, are all things that people got by thinking about, by trying to understand the world and avoid bias. I don't think the rationalist community's contribution to EA has been the production of unusually effective people to man its organizations (EA should focus on "winning" to be more effective, but no more so than any other movement or corporation, and they should try to go about it in the same way). I think rationality's contribution has been helping carve out the philosophy and convince people that it was true, after which those people manned its organizations at a usual level of effectiveness. Maybe rationality also helped develop a practical path forward for those organizations, which is fine and a more limited and relevant domain than "self-help".

Comment by yvain on Anti-social Punishment · 2018-10-01T05:30:03.518Z · score: 15 (5 votes) · LW · GW

I'm a little confused. The explanation you give would explain why people might punish pro-social punishers, but it doesn't really give insight into why they would punish cooperators. Is the argument that cooperators are likely to also be pro-social punishers? Or am I misunderstanding the structure of the game?

Comment by yvain on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T07:57:51.723Z · score: 22 (9 votes) · LW · GW

I agree Evan's intentions are good, and I'm glad that someone interesting who wants to criticize my writing is getting a chance to speak. I'm surprised this is downvoted as much as it has been, and I haven't downvoted it myself.

My main concern is with the hyperbolic way this was pitched and the name of the meetup, which I understand were intended kind of as jokes but which sound kind of creepy to me when I am the person being joked about. I don't think Evan needs to change these if he doesn't want to, but I do just want to register the concern.

Comment by yvain on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T01:49:25.788Z · score: 80 (29 votes) · LW · GW

I think it's good and important to criticize things, and I don't consider myself above criticism.

On the other hand, it's also kind of freaking me out to hear that a bunch of people in a city I've been in for like an hour tops are organizing an event using a derisive nickname for me and calling me a pseudointellectual, especially since I just sort of stumbled across it by coincidence.

I'm not sure how to balance these different considerations, and probably my feelings aren't as important as moving the engine of intellectual progress, but for the record I'm not really happy with the attempt made to balance them here.

I don't know if I am supposed to defend myself, but I will just say that I am particularly tired of criticism of the Dark Ages post. What I've found is a bunch of Redditors talking about how a freshman history student would have been ashamed to make so many howling mistakes, and then a bunch of trained historians telling me they thought it was basically fine (for example, here's a professional medieval historian saying he agrees with it entirely, here's a Renaissance historian who thinks it's fine, here's a historian of early modern science who says the same - also, I got an email from a Dominican friar who liked it, which is especially neat because it's like my post on the Middle Ages getting approval from the Middle Ages). I'm not saying this to make an argument from authority, I'm saying it because the people who disagree with me keep trying to make an argument from authority, and I don't want people to fall for it.

And, okay, one more thing. My Piketty review begins "I am not an economist. Many people who are economists have reviewed this book already. I review it only because if I had to slog through reading this thing I at least want to get a blog post out of it. If anything in my review contradicts that of real economists, trust them instead of me." If you're using errors in it to call me a pseudo-intellectual, I feel like you're just being a jerk at this point. Commenters did find several ways I was insufficiently critical of Piketty's claims, which I describe here; I also added a correction note to that effect to the original post. The post was nevertheless recommended by an economist who said it was "the best summary I've ever read from a non-economist". Again, I'm not saying this as an argument from authority, I'm saying it because I know from experience that the criticism is going to involve a claim that "it's so bad that no knowledgeable person would ever take it seriously", and now you'll know that's not true.

Comment by yvain on Last Chance to Fund the Berkeley REACH · 2018-06-28T18:02:08.555Z · score: 33 (12 votes) · LW · GW

I've increased my monthly donation to $600. Thanks again to Sarah and everyone else who works on this.

Comment by yvain on SSC South Bay Meetup · 2018-05-10T05:16:48.345Z · score: 5 (1 votes) · LW · GW

The 5 AM time looks like a mistake. David told me it's at 2 PM.

Comment by yvain on I Want To Live In A Baugruppe · 2017-03-18T05:52:12.305Z · score: 4 (4 votes) · LW · GW

Interested in some vague possible future.

Comment by yvain on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-24T04:17:20.336Z · score: 22 (15 votes) · LW · GW

I attended in the Bay last year and also had a bad time because of the screaming child. Thanks for being willing to complain about this in the face of social pressure not to.

Comment by yvain on Fact Posts: How and Why · 2016-12-05T06:12:12.432Z · score: 18 (17 votes) · LW · GW

Some additional thoughts:

  • Don't underestimate Wikipedia as a really good place to get a (usually) unbiased overview of things and links to more in-depth sources.

  • The warning against biased sources is well-taken, but if you're looking into something controversial, you might have to just read the biased sources on both sides, then try to reconcile them. I've found it helpful to find a seemingly compelling argument, type something like "why X is wrong" or "X debunked" into Google, and see what the other side has to say about it. Then repeat until you feel like both sides are talking past each other or disagreeing on minutiae. This is important to do even with published papers!

  • Success often feels like realizing that a topic you thought would have one clear answer actually has a million different answers depending on how you ask the question. You start with something like "did the economy do better or worse this year?", you find that it's actually a thousand different questions like "did unemployment get better or worse this year?" vs. "did the stock market get better or worse this year?" and end up with things even more complicated like "did employment as measured in percentage of job-seekers finding a job within six months get better" vs. "did employment as measured in total percent of workforce working get better?". Then finally once you've disentangled all that and realized that the people saying "employment is getting better" or "employment is getting worse" are using statistics about subtly different things and talking past each other, you use all of the specific things you've discovered to reconstruct a picture of whether, in the ways important to you, the economy really is getting better or worse.

Comment by yvain on Fact Posts: How and Why · 2016-12-05T06:04:21.853Z · score: 10 (9 votes) · LW · GW

I have about six of these floating around in my drafts. This makes me think that maybe I should post them; I didn't think they were that interesting to anyone but me.

Please!

Comment by yvain on 2016 LessWrong Diaspora Survey Results · 2016-05-01T17:11:57.840Z · score: 23 (21 votes) · LW · GW

Nice work.

If possible, please do a formal writeup like this: http://lesswrong.com/lw/lhg/2014_survey_results/

If possible, please change the data on your PDF file to include an option to have it without nonresponders. For example, right now sex is 66% male, 12% female, unknown 22%, which makes it hard to intuitively tell what the actual sex ratio is. If you remove the unknowns you see that the knowns are 85% male 15% female, which is a much more useful result. This is especially true since up to 50% of people are unknowns on some questions.
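
(A minimal sketch of the renormalization being requested, using the numbers above:)

```python
responses = {"male": 0.66, "female": 0.12, "unknown": 0.22}

# Drop nonresponders and renormalize the remainder to sum to 1.
known = {k: v for k, v in responses.items() if k != "unknown"}
total = sum(known.values())
print({k: round(v / total, 3) for k, v in known.items()})
# {'male': 0.846, 'female': 0.154} -- i.e. roughly 85% / 15%
```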

If possible, please include averages for numerical questions. For example, there's no data about age on the PDF file because it just says everybody was a "responder" but doesn't list numbers.

Comment by yvain on Lesswrong 2016 Survey · 2016-03-26T02:49:37.788Z · score: 1 (1 votes) · LW · GW

If you throw out the data, I request you keep the thrown-out data somewhere else so I can see how people responded to the issue.

Comment by yvain on Lesswrong 2016 Survey · 2016-03-26T02:42:51.779Z · score: 6 (6 votes) · LW · GW

"In general I planned to handle the "within 10 cm" thing during analysis. Try to fermi estimate the value and give your closest answer, then the probability you got it right. We can look at how close your confidence was to a sane range of values for the answer."

But unless I'm misunderstanding you, the size of the unspoken "sane range" is the entire determinant of how you should calibrate yourself.

Suppose you ask me when Genghis Khan was born, and all I know is "sometime between 1100 and 1200, with certainty". Suppose I choose 1150. If you require the exact year, then I'm only right if it was exactly 1150, and since it could be any of 100 years my probability is 1%. If you require within five years, then I'm right if it was any time between 1145 and 1155, so my probability is about 10%. If you require within fifty years, then my probability is effectively 100%. All of those are potential "sane ranges", but depending on which one you choose, the correctly calibrated estimate could be anywhere from 1% to 100%.
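
(A minimal sketch of that dependence, assuming a uniform belief over a 100-year window and counting whole years within a tolerance of plus or minus t:)

```python
def p_correct(width_years: int, tolerance: int) -> float:
    """Chance the true year lands within +/- tolerance of my guess,
    given a uniform belief over width_years possible years."""
    return min(1.0, (2 * tolerance + 1) / width_years)

for t in (0, 5, 50):
    print(t, p_correct(100, t))
# 0 -> 0.01, 5 -> 0.11, 50 -> 1.0: calibration is determined by the tolerance
```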

Unless I am very confused, you might want to change the questions and hand-throw-out all the answers you received before now, since I don't think they're meaningful (except if interpreted as probability of being exactly right).

(Actually, it might be interesting to see how many people figure this out, in a train wreck sort of way.)

PS: I admit this is totally 100% my fault for not getting around to looking at it the five times you asked me to before this.

Comment by yvain on Lesswrong 2016 Survey · 2016-03-26T02:04:36.220Z · score: 24 (24 votes) · LW · GW

Elo, thanks a lot for doing this.

(for the record, Elo tried really hard to get me involved and I procrastinated helping and forgot about it. I 100% endorse this.)

My only suggestion is to create a margin of error on the calibration questions, eg "How big is the soccer ball, to within 10 cm?". Otherwise people are guessing whether they got the exact centimeter right, which is pretty hard.

Comment by yvain on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-18T20:51:25.081Z · score: 1 (1 votes) · LW · GW

This idea of having more "bandwidth" is tempting, but not really scientifically supported as far as I can tell, unless he just means autists have more free time/energy than neurotypicals.

Comment by yvain on Genetic "Nature" is cultural too · 2016-03-18T20:46:44.350Z · score: 7 (7 votes) · LW · GW

If race were a factor in twin studies, I think it would show up only in shared environment, since it differs between families but never within families (and is shared no differently by MZ twins than by DZ twins). That means it would not show up in "heredity", unless we're talking about interracial couples with two children, each of whom by coincidence inherited a very different mix of genes from the parents' two races - I think this is rare enough not to matter in real-life studies.
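
(For illustration, this falls out of the standard Falconer decomposition of twin correlations: a factor shared completely by both members of every pair, MZ or DZ alike, raises r_MZ and r_DZ by the same amount s, so it cancels out of the heritability estimate and lands entirely in shared environment:)

```latex
h^2 = 2\,(r_{MZ} - r_{DZ}), \qquad c^2 = 2\,r_{DZ} - r_{MZ};
\quad
r_{MZ} \to r_{MZ} + s,\; r_{DZ} \to r_{DZ} + s
\;\Longrightarrow\;
h^2 \text{ unchanged},\quad c^2 \to c^2 + s.
```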

Your point stands about the general role of these kinds of things, I just don't think it's counted that way in the twin studies we actually have.

You're right about beauty etc, though. Genetic studies are most informative about interventions to change individuals' standings relative to other individuals, not about interventions to completely change the nature of the playing field.

Comment by yvain on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging · 2015-09-17T05:33:50.402Z · score: 14 (14 votes) · LW · GW

I don't know if this solves very much. As you say, if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings. But if we start going for less than one, then we're just defining away Pascal's Mugging by fiat, saying "this is the level at which I am willing to stop worrying about this".

Also, as some people elsewhere in the comments have pointed out, this makes probability non-additive in an awkward sort of way. Suppose that if you eat unhealthy, you increase your risk of each of one million different diseases by one in a million. Suppose also that eating healthy is a mildly unpleasant sacrifice, but getting a disease is much worse. If we calculate this out disease-by-disease, each disease is a Pascal's Mugging and we should choose to eat unhealthy. But if we calculate this out in the broad category of "getting some disease or other", then our chances are quite high and we should eat healthy. But it's very strange that our ontology/categorization scheme should affect our decision-making. This becomes much more dangerous when we start talking about AIs.
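
(The aggregation arithmetic, as a sketch, assuming the disease risks are independent:)

```python
p_each = 1e-6        # added risk per disease
n = 10**6            # number of diseases

p_any = 1 - (1 - p_each) ** n
print(p_any)         # ~0.632: negligible one-by-one, large in aggregate
```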

Also, does this create weird nonlinear thresholds? For example, suppose that you live on average 80 years. If some event which causes you near-infinite disutility happens every 80.01 years, you should ignore it; if it happens every 79.99 years, then preventing it becomes the entire focus of your existence. But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Also, a world where people follow this plan is a world where I make a killing on the Inverse Lottery (rules: 10,000 people take tickets; each ticket holder gets paid $1, except a randomly chosen "winner", who must pay $20,000).
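
(The arithmetic behind the joke, as a sketch: each ticket looks like a near-certain $1 gain with a 1-in-10,000 catastrophe, so any rule that ignores sub-1-in-10,000 risks happily takes a ticket whose expected value is about -$1:)

```python
n = 10_000
ev = ((n - 1) * 1 + (-20_000)) / n
print(ev)                    # -1.0001 dollars per ticket
print((n - 1) * 1 - 20_000)  # players' combined net: -$10,001, pocketed by me
```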

Comment by yvain on Open Thread, Jul. 6 - Jul. 12, 2015 · 2015-07-16T14:09:49.751Z · score: 1 (1 votes) · LW · GW

There are meetings in the area every couple of months. There's no specific group to link to, but if you give me your email address I will add you to the list.

If you tell me where exactly in Michigan you are, I can try to put you in touch with other Michigan LW/SSC readers. Most are in Ann Arbor, but there are several in the Detroit metro area and at least one in Grand Rapids.

Comment by yvain on Open Thread, May 4 - May 10, 2015 · 2015-05-10T05:08:30.850Z · score: 3 (3 votes) · LW · GW

I liked the sound of "The Future Is Pipes" and saved that sentence structure in case I needed it.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-26T20:48:19.110Z · score: 10 (10 votes) · LW · GW

The Mirror did not touch the ground; the golden frame had no feet. It didn't look like it was hovering; it looked like it was fixed in place, more solid and more motionless than the walls themselves, like it was nailed to the reference frame of the Earth's motion.

The Mirror is in the fourth wall. Now that we-the-readers have seen the mirror, we have to consider that our seeing Eliezer saying this isn't in the mirror might just be part of our coherent extrapolated volition.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T21:46:24.086Z · score: 1 (1 votes) · LW · GW

But that suggests that you can resurrect someone non-permanently without the Stone - and possibly keep them alive indefinitely by expending constant magic on it like Harry does with his father's rock.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-25T21:44:40.553Z · score: 9 (9 votes) · LW · GW

Why is Voldemort not getting rid of Harry in some more final way?

Even if he's worried killing Harry will rebound against him because of the prophecy somehow, he can, I don't know, freeze Harry? Stick Harry in the mirror using whatever happened to Dumbledore? Destroy Harry's brain and memories and leave him an idiot? Shoot Harry into space?

Why is "resurrect Harry's best friend to give him good counsel" a winning move here?

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:26:18.368Z · score: 3 (3 votes) · LW · GW

How is Voldemort resurrecting Hermione?

With his own resurrection, he's transfiguring stuff into his body, using the Stone to make the transfiguration permanent, then having his spirit repossess the body.

With Hermione, the body was never the problem. Where's he getting the spirit? How does the Stone help?

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:08:27.854Z · score: 24 (24 votes) · LW · GW

So Harry gets his wand back, gets his pouch back, Voldemort resurrects Hermione with superpowers, then Voldemort becomes super-weak, his horcruxes mysteriously stop working, and he mentions this is happening loudly enough for Harry to hear and kill him?

Either this is still all in the mirror, or Harry needs to buy lottery tickets right away.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110 · 2015-02-24T23:01:12.281Z · score: 12 (12 votes) · LW · GW

"Well," said Albus Dumbledore. "I do feel stupid."

"I should hope so," Professor Quirrell said easily; if he had been at all shocked himself at being caught, it did not show. A casual wave of his hand changed his robes back to a Professor's clothing.

Dumbledore's grimness had returned and redoubled. "There I am, searching so hard for Voldemort's shade, never noticing that the Defense Professor of Hogwarts is a sickly, half-dead victim possessed by a spirit far more powerful than himself. I would call it senility, if so many others had not missed it as well."

This is Dumbledore admitting he held the Idiot Ball. We've been promised nobody's holding the Idiot Ball. So something's up. I like the theory that the mirror is operating as intended and showing Voldemort what he wants to see, that being Dumbledore making a mistake and losing. Alternately, it could be that Dumbledore was somehow able to make a version of himself inside the mirror (maybe in his CEV he saw himself inside the mirror protecting the Stone, and that reflection-Dumbledore gained independent existence?). Or he could just have something else up his sleeve.

Comment by yvain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T17:22:54.058Z · score: 24 (26 votes) · LW · GW

Prediction:

Harry gets the Snitch eliminated from Quidditch. Not just in Hogwarts, but in the big leagues as well - they don't want a Germany vs. Austria on their hands.

All of the celebrity Quidditch players of the world - Viktor Krum, Ludo Bagman, Finbar Quigley - are distraught by these sudden and drastic changes to a traditional game they've loved for many years. At the ceremony marking the changes, some of them tear up.

The Daily Prophet headline is "BOY WHO LIVED TEARS UP THE STARS"

Eliezer gives all of us a long lecture about how the prior for somebody making celebrities cry is so much higher than the prior for someone literally ripping the Sun apart that the latter hypothesis should never even have entered our consideration, regardless of how much more natural an interpretation of the prophecy it is.

Comment by yvain on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-16T01:30:44.591Z · score: 4 (4 votes) · LW · GW

I can't find the exact post you're talking about, but the book involved was probably The Adapted Mind, since Eliezer often praises it in terms like those.

Comment by yvain on Velocity of behavioral evolution · 2014-12-20T04:41:12.031Z · score: 11 (11 votes) · LW · GW

Smell is interesting because it's way overrepresented genetically. Something like 5% of a typical animal's genes are olfactory receptor genes, each for a different individual smell. So it should be unusually easy to do epigenetics with it. Just say "express the gene for cherry smell more" and then the mice have a stronger reaction to it.

This doesn't mean that any more complex behaviors can be inherited epigenetically. In fact, it might be that nothing else is as suitable to epigenetic transmission as olfaction.

Comment by yvain on PSA: Eugine_Nier evading ban? · 2014-12-08T01:26:56.991Z · score: 76 (82 votes) · LW · GW

Both accounts' SSC comments come from the same IP.

Comment by yvain on xkcd on the AI box experiment · 2014-11-22T02:39:54.372Z · score: 51 (55 votes) · LW · GW

At this point I think the winning move is rolling with it and selling little plush basilisks as a MIRI fundraiser. It's our involuntary mascot, and we might as well 'reclaim' it in the social justice sense.

Then every time someone brings up "Less Wrong is terrified of the basilisk" we can just be like "Yes! Yes we are! Would you like to buy a plush one?" and everyone will appreciate our ability to laugh at ourselves, and they'll go back to whatever they were doing.

Comment by yvain on xkcd on the AI box experiment · 2014-11-22T02:37:11.843Z · score: 30 (28 votes) · LW · GW

When I visited MIRI's headquarters, they were trying to set up a video link to the Future of Humanity Institute. Somebody had put up a monitor in a prominent place and there was a sticky note saying something like "Connects to FHI - do not touch".

Except that the H was kind of sloppy and bent upward so it looked like an A.

I was really careful not to touch that monitor.

Comment by yvain on Neo-reactionaries, why are you neo-reactionary? · 2014-11-21T08:02:34.760Z · score: 22 (22 votes) · LW · GW

I agree with Toggle that this might not have been the best place for this question.

The Circle of Life goes like this. Somebody associates Less Wrong with neoreactionaries, even though there are like ten of them here total. They start discussing neoreaction here, or asking their questions for neoreactionaries here. The discussion is high profile and leads more people to associate Less Wrong with neoreactionaries. That causes more people to discuss it and ask questions here, which causes more people to make the association, until everybody is certain that we're full of neoreactionaries, and it ends with bad people who want to hurt us putting "LESS WRONG IS A RACIST NEOREACTIONARY WEBSITE" in big bold letters over everything.

If you really want to discuss neoreaction, I'd suggest you do it in a Slate Star Codex open thread, since apparently I'm way too tarnished by association with them to ever escape. Or you can go to a Xenosystems open thread and get it straight from the horse's mouth.

Comment by yvain on Neo-reactionaries, why are you neo-reactionary? · 2014-11-21T07:09:01.738Z · score: 49 (49 votes) · LW · GW

I've been advised to come here and defend myself.

If you haven't been watching closely, David Gerard has been spreading these same smears about me on RationalWiki, on Twitter, and now here. His tweets accuse me of treating the Left in general and the social justice movement in particular with "frothing" and as "ordure". And now he comes here and adds Tumblr to the list of victims, and "actual disgust" to the list of adjectives.

I resent this because it is a complete fabrication.

I resent it because, far from a frothing hatred of Tumblr, I myself have a Tumblr account which I use almost every day and which I've made three hundred posts on. Sure, I've gently mocked Tumblr (as has every Tumblr user) but I've also very publicly praised it for hosting some very interesting and enlightening conversations.

I resent it because I've posted a bunch of long defenses and steelmannings of social justice ideas like Social Justice For The Highly Demanding Of Rigor and The Wonderful Thing About Triggers, some of which have gone mildly viral in the social justice blogosphere, and some of which have led to people emailing me or commenting saying they've changed their minds and become less hostile to social justice as a result.

I resent it because, far from failing to intellectually engage with the Left, in the past couple of months I've read, reviewed, and enjoyed left-leaning books on Marx, the Soviet economy, and market socialism.

I resent it because the time I most remember someone trying to engage me about social justice, Apophemi, I wrote a seven thousand word response which I consider excruciatingly polite, which started with a careful justification for why writing it would be more productive and respectful than not writing it, and which ended with a heartfelt apology for the couple of things I had gotten wrong on my last post on the subject.

(Disgust! Frothing! Ordure!)

I resent it because I happily hosted Ozy's social justice blogging for several months, giving them an audience for posts like their takedown of Heartiste, which was also very well-received and got social justice ideas to people who otherwise wouldn't have seen them.

I resent it because about a fifth of my blogroll is social justice or social justice-aligned blogs, each of which get a couple dozen hits from me a day.

I resent it because even in my most impassioned posts about social justice, I try to make it very clear that there are parts of the movement which make excellent points, and figures in the movement I highly respect. Even in what I think everyone here will agree is my meanest post on the subject, Radicalizing the Romanceless, I stop to say the following about the social justice blogger I am arguing against:

[He] is a neat guy. He draws amazing comics and he runs one of the most popular, most intellectual, and longest-standing feminist blogs on the Internet. I have debated him several times, and although he can be enragingly persistent he has always been reasonable...He cares deeply about a lot of things, works hard for those things, and has supported my friends when they have most needed support.

(DISGUST! FROTHING! ORDURE!)

I resent it because it trivializes all of my sick burns against neoreactionaries, like the time I accused them of worshipping Kim Jong-un as a god, and the time I said they were obsessed with "precious, precious, white people", and the time I mocked Jim for thinking Eugene V. Debs was a Supreme Court case.

I resent this because anyone who looks at my posts tagged with social justice can see that almost as many are in favor as against.

And I resent this because I'm being taken to task about charity by somebody whose own concept of a balanced and reasonable debate is retweeting stuff like this -- and again and again calling the people he disagrees with "shitlords"

(which puts his faux-horror that I treat people I disagree with 'like ordure' in a pretty interesting new light)

No matter how many pro-social-justice posts I write, how fair and nice I am, or what I do, David Gerard is going to keep spreading these smears about me until I refuse to ever engage with anyone who disagrees with him about anything at all. As long as I'm saying anything other than "every view held by David Gerard is perfect and flawless and everyone who disagrees with David Gerard is a shitlord who deserves to die", he is going to keep concern-trolling you guys that I am "biased" or "unfair".

Please give his continued campaigning along these lines the total lack of attention it richly deserves.

Comment by yvain on 2014 Less Wrong Census/Survey · 2014-11-02T01:52:17.198Z · score: 2 (2 votes) · LW · GW

I stated that all disputes would be resolved by Wikipedia, and here is Wikipedia's verdict on the matter: http://en.wikipedia.org/wiki/List_of_best-selling_PC_games