Posts

Willpower Thermodynamics 2016-08-16T03:00:58.263Z
Dry Ice Cryonics - Preliminary Thoughts 2015-09-28T07:00:03.440Z
Effects of Castration on the Life Expectancy of Contemporary Men 2015-08-08T04:37:52.592Z
Efficient Food 2015-04-06T05:20:11.307Z
Tentative Thoughts on the Cost Effectiveness of the SENS Foundation 2015-01-04T02:58:53.627Z
Expansion on A Previous Cost-Benefit Analysis of Vaccinating Healthy Adults Against Flu 2014-11-12T04:36:50.139Z
A Cost-Benefit Analysis of Immunizing Healthy Adults Against Influenza 2014-11-11T04:10:27.554Z

Comments

Comment by Fluttershy on The Power to Judge Startup Ideas · 2019-09-06T18:32:40.878Z · LW · GW

What the hell? It's just a more specific version of the point in Inadequate Equilibria, and don't you want to know if you can do something better?

Comment by Fluttershy on The Power to Judge Startup Ideas · 2019-09-05T18:32:13.066Z · LW · GW

Presumably the reason why people are roleplaying everything in the first place is that you'll be seen badly if you stop roleplaying, and being seen badly hurts if you don't have enough emotional resilience. Here's my best attempt at how to break people out of this.

Comment by Fluttershy on The Power to Judge Startup Ideas · 2019-09-05T18:19:30.557Z · LW · GW

Man, most people are roleplaying everything. It's not fixable by just telling them what concrete stuff they're doing wrong, because they're still running on the algorithm of roleplaying things. Which is why rationality, an attempted account of how to not do stuff wrong, ended up as a social club: it didn't directly address that people are roleplaying everything anyways.

Comment by Fluttershy on What is the state of the ego depletion field? · 2019-08-11T11:58:31.387Z · LW · GW

Nice, but the second paper is less on track, as the idea is more "people, society etc. coerce you to do things you don't want" than "long vs short term preferences".

Comment by Fluttershy on What is the state of the ego depletion field? · 2019-08-11T00:08:56.304Z · LW · GW

Not something you'll see in papers, but the point of willpower is to limit the amount of time you spend doing stuff you don't want to do. So, your community has some morality that isn't convenient for you? That's why it costs willpower to follow that morality. Your job is tiring? Maybe deep down you don't believe it's serving your interests.

If you have a false belief about what you want, e.g. "I actually want to keep this prestigious position because yay prestige, even though I get tired all the time at work", well, that's a thing a lot of people end up believing, because nobody told them to use "things that make you tired" as a proxy for "things you don't want".

Obviously this has nothing to do with e.g. blood glucose levels.

Comment by Fluttershy on Diana Fleischman and Geoffrey Miller - Audience Q&A · 2019-08-10T23:50:07.342Z · LW · GW
Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-09T05:13:59.655Z · LW · GW

If you want to spend time predictably spinning in circles in your analysis because you can't bring yourself to believe someone is lying, be my guest.

As for the specific authors: the individual reports written seem fine in themselves, and as for the geoengineering one, I know a guy who did a PhD under the author and said he's generally trustworthy (I recall Vaniver was in his PhD program too). Like what I'm saying is the specific reports, e.g. Bickel's report on geoengineering, seem fine, but Lomborg's synthesis of them is shit, and you're obscuring things with your niceness-and-good-faith approach.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-09T01:15:56.136Z · LW · GW

b/c of doing the analysis and then not ranking shit in order.

Further down the list, we find a very controversial project, that is geo-engineering to reduce the intensity of incoming solar radiation to counteract global warming. According to a background paper, such investments would give a return rate of about 1,000. In spite of this enormous return rate, this is given moderate priority, apparently because it is deemed rather uncertain if this will actually work as intended.
The lowest ranking accepted project, project no. 16, is called "Borehole and public hand pump intervention". This has an estimated benefit-cost-ratio of less than 3.4.
Next, we come to priority no. 17, the highest ranking not-accepted project. This is "Increased funding for green energy research and development". According to the authors of the background paper, this has benefit-cost-ratios of 10 or more if the time horizon is slightly more than 1 decade. It is therefore a bit strange that this is placed below a project with a clearly less favourable benefit-cost-ratio.

do your own research if you disagree, but if you use "apparently because it is deemed rather uncertain if this will actually work as intended." as an excuse to rate something poorly because you wanted to anyways rather than either do more research and update it, or even just make a guess, then wtf?

We are not playing, "is this plausibly defensible", we are playing, "what was this person's algorithm and are they systematically lying".

Comment by Fluttershy on Subagents, neural Turing machines, thought selection, and blindspots · 2019-08-07T05:32:46.824Z · LW · GW

Responding to your Dehaene book review and IFS thoughts as well as this:

On Dehaene: I read the 2018 version of Dehaene's Consciousness and the Brain a while ago and would recommend it as a good intro to cognitive neurosci, your summary looks correct.

On meditation: it's been said before, but >90% of people reading this are going to be high on "having models of how their brain works", and low on "having actually sat down and processed their emotions through meditation or IFS or whatevs". Double especially true for all the depressed Berkeley rationalists.

Oh, and fun thing: surely you've heard the idea that "pretty much all effective therapy and meditation and shit is just helping people sit down until they process their emotions instead of running from them like usual". Well, here's IFS being used that way, see from 4:51-5:32.

Comment by Fluttershy on How would a person go about starting a geoengineering startup? · 2019-08-06T23:58:09.335Z · LW · GW

For the love of the spark, fucking don't. At least separate yourself from the social ladder of EA and learn the real version of rationality first.

Or: ignore that advice, but at least don't do the actual MCB implementation worldwide that costs a billion a year; talk with the scientists who worked on it and figure out the way that MCB could be done most efficiently. And then get things to the point of having a written plan, like, "hey government, here's exactly how you can do MCB if you want, now you can execute this plan as written if/when you choose". Do a test run over a small area, iterate and improve on the technology. B/c governments or big NGOs are more likely to do it if it's fleshed out, since it's lower risk from their POV.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T23:38:57.485Z · LW · GW

Thanks! This all sounds right. "CCC has interesting heresies"--was there stuff other than MCB and global poverty? It's an interesting parallel to EA--that they have interesting heresies, but are ultimately wrong about some key assumptions (that there's room for more funding, and that MCB is sufficient to stop all climate change, respectively). And they both have a fetish for working within systems rather than trying to change them at all.

Kinda a shame that leftists are mostly not coming to the "how can we change systems that will undo any progress we make" thing with an effectiveness mindset, though at least these people are.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T13:27:56.285Z · LW · GW

I'll give an answer that considers the details of the Copenhagen Consensus Center (CCC) and geoengineering, rather than being primarily a priori. I've spent a day and a half digging around, starting from zero prior knowledge. In retrospect I spent too much time reading Lomborg and the CCC, so I mention him disproportionately relative to other sources.

Cross-posted to my blog.

Here's what I notice:

1. Lomborg and his CCC seem very cost-benefit focused in their analysis. A few others are too, but see point 4. Basically, it's easy to compare climate interventions to other interventions, but hard to figure out how much damage warming and other climate change will cause, so you can't really figure out the benefit part of the cost-benefit analysis.

2. Lomborg and his CCC have received a ton of criticism for systematically making errors that underestimate the effect of climate change, and never making errors that overestimate it. One detailed account of him making such an error that could not have been made in good faith is given here. He also literally lies in his cost-benefit analysis (by more than 10x).

There are a lot of articles about Lomborg, e.g. taking a 700k salary and getting donations from Koch and Exxon, which showed up before I found the above examples of him lying about data. I reacted to the info on his salary/funding by saying, "this is indicative of him being sketchy, but instead of just changing my estimate of how likely he is to be sketch by however much ("updating") and calling it a day, I'm going to take this as a cue to dig into things until I have a firm understanding of whether this guy is systematically lying or not". Turns out he's a liar (see previous paragraph).

3. Page 33 of this CCC paper notes that,

To place SRM 1 [a plan which, by definition, reduces the amount of heat from the sun that stays in the earth's atmosphere by 1 watt per square meter] in perspective, 1 W m-2 is about 0.3% of the [average over the earth] incoming solar radiation of 341 W m-2 (Kiehl and Trenberth, 1997; Trenberth et al., 2009).

Which corresponds to a 0.6 C average temperature change. This should put in perspective what a huge effect a 0.3% change in how much heat stays in earth's atmosphere has on global temperatures. If all the glaciers melted, the earth's temperature would rise 10 C, because glaciers are good at reflecting heat back away from the earth (back of envelope, me); if the earth became totally frozen, the temperature would drop by 55 C (see this page), which is why the thing this post is about, MCB, can realistically change global temperatures by a couple degrees C. Note that 1/2 of the global temperature change that already happened was because some glaciers and snow already melted and stopped reflecting heat away from the earth. Basically, the science behind MCB is pretty solid and I'd expect it to basically work.
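
For concreteness, here's a minimal back-of-envelope sketch of that conversion, assuming the ~0.6 C per W/m^2 sensitivity implied by the quote above (an illustrative assumption on my part, not a number taken from the CCC paper):

```python
# Back-of-envelope: turn a fractional change in retained solar
# radiation into a rough temperature change. The 0.6 C per W/m^2
# sensitivity is an assumption read off the "1 W/m^2 ~ 0.6 C"
# figure above, not a value from the CCC paper.

AVG_INCOMING_SOLAR = 341.0   # W/m^2, global average
SENSITIVITY = 0.6            # assumed degrees C of warming per W/m^2 of forcing

def warming_from_fractional_change(fraction):
    """Warming from changing the fraction of incoming sunlight that
    stays in the atmosphere (e.g. 0.003 for a 0.3% change)."""
    forcing = fraction * AVG_INCOMING_SOLAR   # W/m^2
    return forcing * SENSITIVITY              # degrees C

print(warming_from_fractional_change(0.003))  # ~0.6 C, matching the quote
```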

AFAICT the IPCC estimates of how much warming there will be in the future seem to take into account the fact that the melting of the glaciers will further speed warming in itself, in addition to the warming you get from rising CO2 levels. Except they don't ever explicitly say whether that was considered as a factor, so I can't be totally sure they took that into account, even though it sounds like the most obvious thing to consider. (I skimmed this whole damn thing, and it wasn't said either way!) I guess my next course of action could be to annoy an author about it, though I think I'll be lazy and not.

4. We have little enough data on "how much economic damage has global warming done so far" that we can't make decent extrapolations to "how much economic damage will global warming do later". Like, you have papers saying that the economic damage from 3 C of warming could be 1%, 5-20%, 23%, or 35% of GDP. When you have zip for data, you fall back on your politics.

5. The obvious game theory consideration of, "it's better if someone other than you spends money on global warming". The normal lefty position of, "our institutions aren't set up to coordinate well on this sort of problem, and every action against climate change, until we change them, will predictably be a stopgap measure". The unusual conservative position of, "just do the cost-benefit analysis for MCB". How much damn energy I've spent filtering out the selectivity in what scraps of data scientists and economists want to show me. /rant

Here's what I'm taking away from all that:

CCC isn't reliable in general, but others have made estimates of the cost of worldwide MCB. I'm inclined to believe CCC about MCB in particular, as their numbers match up with others'. MCB is the most cost-effective climate intervention by a ~50x margin, and the estimated cost of worldwide MCB is 750M-1.5B USD annually. The exact technology needed to do MCB hasn't been fleshed out yet, but could be engineered in a straightforward way.

By CCC's own analysis, deploying worldwide MCB is >10x more cost-effective than standard global poverty interventions, and the fact that OPP and GiveWell have far more funding than they know what to do with (even though they're lying and saying they don't) makes MCB even more attractive than this in practice.

Personally, I suspect that fleshing out the details of how MCB could be done in practice would be more cost-effective than instituting a full-blown implementation of MCB, as having a well-defined way to implement it would reduce the friction for others to implement it. Once I have hella money, it's something I'd fund (the research on how to do it, but certainly not the actual MCB). Like, to get things to the point of having a written plan: "hey government, here's exactly how you can do MCB if you want, now you can execute this plan as written if/when you choose". I expect other interventions (re: factory farming) to be more effective than the actual MCB at preventing suffering.

Thanks for reading, and thanks for bringing MCB to my attention. Stay awesome.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T09:31:08.289Z · LW · GW

I'd say: stop wanting MCB to work out so much. Don't just hope that it's gonna get approved, mate. Convincing people of stuff is fricking impossible. I think you're seriously overestimating how likely this is.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T12:19:19.505Z · LW · GW

It's 750m/yr, and that's including air capture costs as well. See p3 here.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T07:47:44.550Z · LW · GW
Instead we just have a bunch of moderate liberal democracies who are institutionally incapable of doing anything significant.

Awesome burn! :D

a group of nations can do it without needing very much political energy.

I mean, if your plan is "convince people or governments to do a thing" rather than "do this thing myself", you're gonna have a bad time. It's probably within the scope of an individual NGO or maybe a hella determined individual to pull this sort of thing off, no? I guess you'd have to try, and see if anyone decided it was illegal after you started!

Hey, important question: I liked your first two links at the top of this post, were there any others you found helpful in your own research? I've been meaning to do my own research on what geoengineering stuff would be effective.

Added: Ok, I spent a few hours actually reading science and looking into it. So this says the "make clouds over the ocean, so light + warmth gets reflected back into space" strategy has "the capacity to balance global warming up to the carbon dioxide-doubling point". Which is like two to fourish degrees C. I can't find a figure for how long that doubling is expected to take, except we went from like 355 to 415 ppm from 1991 to 2019. So this is roughly a century of warming you'd be undoing.
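
Here's a rough sketch of where that "roughly a century" figure could come from; it rests on assumptions added purely for illustration (a 280 ppm preindustrial baseline and a constant emissions rate), since the paragraph above doesn't pin down a baseline:

```python
# Rough reconstruction of the "roughly a century" figure. Assumptions
# (mine, added for illustration): CO2 keeps rising at the 1991-2019
# rate, and "doubling" means one preindustrial doubling's worth of
# extra CO2 (280 ppm on top of a 280 ppm baseline).

ppm_1991, ppm_2019 = 355.0, 415.0
rate = (ppm_2019 - ppm_1991) / (2019 - 1991)   # ~2.1 ppm per year

extra_ppm_for_doubling = 280.0                 # 560 ppm total vs. the 280 ppm baseline
years_of_emissions = extra_ppm_for_doubling / rate
print(rate, years_of_emissions)                # ~2.1 ppm/yr, ~130 years
```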

Further, the MCB seems like a very solid approach. I didn't get a good quantified feeling for how big of a deal various types of non-warming climate are though. Any info there?

Note that you could (maybe) just do a fifth of the full version of Marine Cloud Brightening (MCB): spend a bit less and do it over less of the ocean, and then be like " 'oops' I'm done funding this, but wow it lowered global temperatures by 0.4 C (hopefully a statistically significant difference?), guess someone else better fund it now", and then see if anyone takes the bait, and then use the rest of your money for something else.

But overall, MCB seems... like the effect size might be enough to justify unilaterally doing it even though it's not a great game theoretic idea. I'd have to think more about that part of it, but unless I come up with something better, I'll fund it once I have a spare couple billion.

Comment by Fluttershy on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T04:19:17.283Z · LW · GW

Edit: I ended up spending a bit over a day looking into geoengineering and the Copenhagen Consensus Center after writing this, so go look at my answer for a more informed take that includes what I learned from doing that. My two long-form comments below are not exactly wrong, but they're more poorly informed than that answer.

---

Awesome! I'd wanted to know what the actually useful geoengineering stuff was.

I do buy the claim that public support for any sort of emission control will evaporate the moment geoengineering is realised as a tolerable alternative... Major emitters have already signalled a clear lack of any real will to change. The humans will not repent. Move on. Stop waiting for humanity to be punished for its sin, act, do something that has some chance of solving the problem.

From a game theory POV, "don't pressure emitters" is basically just "surrender". In theory, "emitters who don't want to change" can and should be coerced by force, whether that's within a nation by laws (which will ultimately be enforced with force if broken), or internationally, by threat of military force that's willing to follow through. Like, that's how you'd game theoretically not lose.

In practice, fuck me if you're able to get any coordination to work.

There's a case for, "don't do geoengineering until you have an actual solid international power alliance capable of doing regulation". Because then the emissions agreements are set.

In practice, what's the actual utilitarian thing to do? Well, the main unanswered question is, how much can cloud brightening be scaled? Can it keep temperatures constant if emissions levels go up 5x, even 20x? Secondly, what can be done about e.g. ocean acidification and other non-warming issues? I have zero knowledge here. But if it scales that well, then throw out the game theory and just do the geoengineering.

If you're a lone EA and you're trying to use this information, presumably your options are, "do startup and try to get >$10B", and "gain control of a tiny country, boost military, start threatening emitters".

added: or "do startup, make money, then fund research".

Comment by Fluttershy on Writing children's picture books · 2019-08-05T03:20:22.761Z · LW · GW
because such discussion would make it harder to morally pressure people into reducing carbon emissions. I don’t know how to see this as anything other than an adversarial action against reasonable discourse

ffs, because incentives. You're playing tragedy of the commons, and your best move is to make there be more shared resources people can just take?

Comment by Fluttershy on The AI Timelines Scam · 2019-07-12T07:24:11.126Z · LW · GW

Basically, don't let your thinking on what is useful affect your thinking on what's likely.

Comment by Fluttershy on The AI Timelines Scam · 2019-07-12T07:21:26.796Z · LW · GW

It's a pretty clear way of endorsing something to call it "honest reporting".

Comment by Fluttershy on The AI Timelines Scam · 2019-07-12T03:23:01.566Z · LW · GW
It also seems like there's an argument for weighting urgency in planning that could lead to 'distorted' timelines while being a rational response to uncertainty.

It's important to do the "what are all the possible outcomes and what are the probabilities of each" calculation before you start thinking about weightings of how bad/good various outcomes are.

Comment by Fluttershy on The AI Timelines Scam · 2019-07-12T03:16:17.138Z · LW · GW
I'm wary of using words like "lie" or "scam" to mean "honest reporting of unconsciously biased reasoning"

When someone is systematically trying to convince you of a thing, do not be like, "nice honest report", but be like, "let me think for myself whether that is correct".

Comment by Fluttershy on The AI Timelines Scam · 2019-07-12T03:03:12.359Z · LW · GW

Yeah, 10/10 agreement on this. Like it'd be great if you could "just" donate to some AI risk org and get the promised altruistic benefits, but if you actually care about "stop all the fucking suffering I can", then you should want to believe AI risk research is a scam if it is a scam.

At which point you go oh fuck, I don't have a good plan to save the world anymore. But not having a better plan shouldn't change your beliefs on whether AI risk research is effective.

Comment by Fluttershy on [deleted post] 2017-05-30T05:36:48.806Z

Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term.

As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than ones where they aren't. Look at what happened to LW.

Moreover, the cost is not the same for everyone

It's fairly common for this cost to go down with practice. Moreover, it seems like there's an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.

I'm not necessarily claiming that you or any specific person is acting this way; I'm just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it.

communicative clarity and so-called "niceness"

That's a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (when suggesting that some EA organizations are less effective than previously thought), and sometimes that involves, say, penalizing people for acting on claims they've made to others' emotional resources (reprimanding someone for being rude when that rudeness could have reasonably been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, and we tend to get them both mostly wrong.

Comment by Fluttershy on [deleted post] 2017-05-30T04:54:27.386Z

I appreciate your offer to talk things out together! To the extent that I'm feeling bad and would feel better after talking things out, I'm inclined to say that my current feelings are serving a purpose, i.e. to encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed, though that wouldn't have been at all true of the old version of myself. This algorithm is a bit new to me, and I'm not sure if it'll stick.

Overall, I'm not aware that I've caused the balance of the discussion (i.e. pro immediate abrasive truthseeking vs. pro incentives that encourage later collaborative truthseeking & prosociality) to shift noticeably in either direction, though I might have made it sound like I made less progress than I did, since I was sort of ranting/acting like I was looking for support above.

Comment by Fluttershy on [deleted post] 2017-05-28T20:21:05.628Z

Your comment was perfectly fine, and you don't need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there's a strong chance I'll be without internet for several days and likely won't be able to further engage with this topic.

Comment by Fluttershy on [deleted post] 2017-05-28T20:16:11.640Z

Duncan's original wording here was fine. The phrase "telling the humans I know that they're dumb or wrong or sick or confused" is meant in the sense of "socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect".

To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that's a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip in propositions about values into their claims.

I'm frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a large number of more subtle negative implications about what they've written are all ways of socially discouraging someone from doing something. I think that Duncan's comment was fine, I certainly think that he didn't need to apologize for it, and I'm fucking appalled that this conversation as a whole has managed to simultaneously promote slipping value propositions into factual claims, and promote indirectly encouraging social rudeness, and then successfully assert in social reality that a certain type of overtly abrasive value-loaded proposition making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition making, all without anyone actually saying something about this.

Comment by Fluttershy on [deleted post] 2017-05-28T19:03:09.551Z

assess why the community has not yet shunned them

Hi! I believe I'm the only person to try shunning them, which happened on Facebook a month ago (since Zack named himself in the comments, see here, and here). The effort more or less blew up in my face: it got a few people to publicly say they were going to exclude me, or try to get others to exclude me, from future community events, and it was also a large (but not the only) factor in getting me to step down from a leadership position in a project I'm spending about half of my time on. To be fair, there are a couple of places where Zack is less welcome now also (I don't think either of us have been successfully excluded from anything other than privately hosted events we weren't likely to go to anyways), and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance. So, I guess we're in a stalemate-like de facto ceasefire, though I'd be happy to pick up the issue again.

I still stand by my response to Zack. It would have been better if I'd been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself; that's an area where I'm still trying to grow. I think that collaborative truthseeking is aided rather than hindered by shunning people who call others "delusional perverts" because of their gender. This is, at least in part, because keeping discussions focused on truthseeking, impact, etc. is easier when there are social incentives (i.e. small social nudges that can later escalate to shunning) in place that disincentivize people from acting in ways that predictably push others into a state where they're hurt enough that they're unable to collaborate with you, such as by calling them delusional perverts. I know that the process of applying said social incentives (i.e. shunning) doesn't look like truthseeking, but it's instrumental to truthseeking (when done with specificity and sensitivity/by people with a well-calibrated set of certain common social skills).

Comment by Fluttershy on Bad intent is a disposition, not a feeling · 2017-05-02T21:03:48.419Z · LW · GW

This all sounds right, but the reasoning behind using the wording of "bad faith" is explained in the second bullet point of this comment.

Tl;dr: the module your brain has for detecting things that feel like "bad faith" is good at detecting when someone is acting in ways that cause bad consequences in expectation but don't feel like "bad faith" to the other person on the inside. If people could learn to correct a subset of these actions by learning, say, common social skills, then treating those actions like they're taken in "bad faith" incentivizes them to learn those skills, which results in you having to live with fewer negative consequences from dealing with that person. I'd say that this is part of why our minds often read well-intentioned-but-harmful-in-expectation behaviors as "bad faith"; it's a way of correcting them.

Comment by Fluttershy on Bad intent is a disposition, not a feeling · 2017-05-02T09:34:47.796Z · LW · GW

nod. This does seem like it should be a continuous thing, rather than System 1 solely figuring things out in some cases and System 2 figuring it out alone in others.

Comment by Fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T19:48:44.326Z · LW · GW

Good observation.

Amusingly, one possible explanation is that the people who gave Gleb pushback on here were operating on bad-faith-detecting intuitions--this is supported by the quick reaction time. I'd say that those intuitions were good ones, if they lead to those folks giving Gleb pushback on a quick timescale, and I'd also say that those intuitions shaped healthy norms to the extent that they nudged us towards establishing a quick reality-grounded social feedback loop.

But the people who did give Gleb pushback framed things in terms other than bad-faith-detecting intuitions more often than you'd have guessed if they were actually concluding, based on those intuitions, that giving Gleb pushback was worth their time--they pointed to specific behaviors, and so on, when calling him out. But how many of these people actually decided to give Gleb feedback because they System-2-noticed that he was implementing a specific behavior, and how many of us decided to give Gleb feedback because our bad-faith-detecting intuitions noticed something was up, which led us to fish around for a specific bad behavior that Gleb was doing?

If more of us did the latter, this suggests that we have social incentives in place that reward fishing around and finding specific bad behaviors. To me, fishing around for bad behaviors (i.e. fishing through data) like this doesn't seem too much different from p-hacking, except that fishing around for social data is way harder to call people out on. And if our real reasons for reaching the correct conclusion that Gleb needed to get pushback were based in bad-faith-detecting intuitions, and not in System 2 noticing bad behaviors, then maybe it would be a good idea to give the mechanism that actually led some of us to detect Gleb a bit earlier social allowance to do its work on its own in the future, rather than requiring its use to be backed up by evidence of bad behaviors (junk data) that can be p-hacked by those who want to criticize independently of what was true, or hidden by those with more skill than Gleb.

At a minimum, being honest with ourselves about what our real reasons are ought to help us understand our minds a bit better.

Comment by Fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T19:01:53.342Z · LW · GW

I'm very glad that you asked this! I think we can come up with some decent heuristics:

  • If you start out with some sort of inbuilt bad faith detector, try to see when, in retrospect, it's given you accurate readings, false positives, and false negatives. I catch myself doing this without having planned to on a System 1 level from time to time. It may be possible, if harder, to do this sort of intuition reshaping in response to evidence with System 2. Note that it sometimes takes a long time, and that sometimes you never figure out, whether or not your bad-faith-detecting intuitions were correct.
  • There's debate about whether a bad-faith-detecting intuition that fires when someone "has good intentions" but ends up predictably acting in ways that hurt you (especially to their own benefit) is "correct". My view is that the intuition is correct; defining it as incorrect and then acting in social accordance with it being incorrect incentivizes others to manipulate you by being/becoming good at making themselves believe they have good intentions when they don't, which is a way of destroying information in itself. Hence why allowing people to get away with too many plausibly deniable things destroys information: if plausible deniability is a socially acceptable defense when it's obvious someone has hurt you in a way that benefits them, they'll want to blind themselves to information about how their own brains work. (This is a reason to disagree with many suggestions made in Nate's post. If treating people like they generally have positive intentions reduces your ability to do collaborative truth-seeking with others on how their minds can fail in ways that let you down--planning fallacy is one example--then maybe it would be helpful to socially disincentivize people from misleading themselves this way by giving them critical feedback, or at least not tearing people down for being ostracizers when they do the same).
  • Try to evaluate other's bad faith detectors by the same mechanism as in the first point; if they give lots of correct readings and not many false ones (especially if they share their intuitions with you before it becomes obvious to you whether or not they're correct), this is some sort of evidence that they have strong and accurate bad-faith-detecting intuitions.
  • The above requires that you know someone well enough for them to trust you with this data, so a quicker way to evaluate other's bad-faith-detecting intuitions is to look at who they give feedback to, criticize, praise, etc. If they end up attacking or socially qualifying popular people who are later revealed to have been acting in bad faith, or if they end up praising or supporting ones who are socially suspected of being up to something who are later revealed to have been acting in good faith, these are strong signals of them having accurate bad-faith-detecting intuitions.
  • Done right, bad-faith-detecting intuitions should let you make testable predictions about who will impose costs or provide benefits to you and your friends/cause; these intuitions become more valuable as you become more accurate at evaluating them. Bad-faith-detecting intuitions might not "taste" like Officially Approved Scientific Evidence, and we might not respect them much around here, but they should tie back into reality, and be usable to help you make better decisions than you'd been able to make without using them.
Comment by Fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T10:45:20.249Z · LW · GW

I think the burden of evidence is on the side disagreeing with the intuitions behind this extremely common defensive response

Note also that most groups treat their intuitions about whether or not someone is acting in bad faith as evidence worth taking seriously, and that we're remarkable in how rarely we tend to allow our bad-faith-detecting intuitions to lead us to reach the positive conclusion that someone is acting in bad faith. Note also that we have a serious problem with not being able to effectively deal with Gleb-like people, sexual predators, etc, and that these sorts of people reliably provoke person-acting-in-bad-faith-intuitions in people with (both) strong and accurate bad-faith-sensing intuitions. (Note that having strong bad-faith-detecting intuitions correlates somewhat with having accurate ones, since having strong intuitions here makes it easier to pay attention to your training data, and thus build better intuitions with time). Anyways, as a community, taking intuitions about when someone's acting in bad faith more seriously on the margin could help with this.

Now, one problem with this strategy is that many of us are out of practice at using these intuitions! It also doesn't help that people without accurate bad-faith-detecting intuitions often typical-mind fallacy their way into believing that there aren't people who have exceptionally accurate bad-faith-detecting intuitions. Sometimes this gets baked into social norms, such that criticism becomes more heavily taxed, partly because people with weak bad-faith-detecting intuitions don't trust others to direct their criticism at people who are actually acting in bad faith.

Of course, we currently don't accept person-acting-in-bad-faith-intuitions as useful evidence in the EA/LW community, so people who provoke more of these intuitions are relatively more welcome here than in other groups. Also, for people with both strong and accurate bad-faith-detecting intuitions, being around people who set off their bad-faith-sensing intuitions isn't fun, so such people feel less welcome here, especially since a form of evidence they're good at acquiring isn't socially acknowledged or rewarded, while it is acknowledged and rewarded elsewhere. And when you look around, you see that we in fact don't have many people with strong and accurate bad-faith-detecting intuitions; having more of these people around would have been a good way to detect Gleb-like folks much earlier than we tend to.

How acceptable bad-faith-detecting intuitions are in decision-making is also highly relevant to the gender balance of our community, but that's a topic for another post. The tl;dr of it is that, when bad-faith-detecting intuitions are viewed as providing valid evidence, it's easier to make people who are acting creepy change how they're acting or leave, since "creepiness" is a non-objective thing that nevertheless has a real, strong impact on who shows up at your events.

Anyhow, I'm incredibly self-interested in pointing all of this out, because I have very strong (and, as of course I will claim, very accurate) bad-faith-detecting intuitions. If people with stronger bad-faith-detecting intuitions are undervalued because our skill at detecting bad actors isn't recognized, then, well, this implies people should listen to us more. :P

Comment by Fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T09:40:37.577Z · LW · GW

For more explanation on how incentive gradients interact with and allow the creation of mental modules that can systematically mislead people without intent to mislead, see False Faces.

Comment by Fluttershy on Effective altruism is self-recommending · 2017-04-22T00:16:35.516Z · LW · GW

Well, that's embarrassing for me. You're entirely right; it does become visible again when I log out, and I hadn't even considered that as a possibility. I guess I'll amend the paragraph of my above comment that incorrectly stated that the thread had been hidden on the EA Forum; at least I didn't accuse anyone of anything in that part of my reply. I do still stand by my criticisms, though knowing what I do now, I would say that it wasn't necessary of me to post this here if my original comment and the original post on the EA Forum are still publicly visible.

Comment by Fluttershy on Effective altruism is self-recommending · 2017-04-21T22:32:03.987Z · LW · GW

Some troubling relevant updates on EA Funds from the past few hours:

  • On April 20th, Kerry Vaughan from CEA published an update on EA Funds on the EA Forum. His post quotes the previous post in which he introduced the launch of EA Funds, which said:

We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should continue and if so, in what form.

  • In short, it was promised that a certain level of community support would be required to justify the continuation of EA Funds beyond the first three months of the project. In an effort to communicate that such a level of support existed, Kerry commented:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

  • Around 11 hours ago, I pointed out that this claim was patently false.
  • (I stand corrected by the reply to this comment which addressed this bullet point: the original post on which I had commented wasn't hidden from the EA Forum; I just needed to log out of my account on the EA Forum to see it after having downvoted it.)
  • Between the fact that the EA Funds project has taken significant criticism, failed to implement a plan to address it, acted as if its continuation was justified on the basis of having not received any such criticism, and signaled its openness to being deceptive in the future by doing all of this in a way that wasn't plausibly deniable, my personal opinion is that there is not sufficient reason to allow the EA Funds to continue to operate past their three-month trial period, and additionally, that I have less reason to trust other projects run by CEA in light of this debacle.
Comment by Fluttershy on Effective altruism is self-recommending · 2017-04-21T21:39:24.121Z · LW · GW

GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach's scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program.

This seems particularly horrifying; if everyone already knows that you're incentivized to play up the effectiveness of the charities you're recommending, then deciding not to check back on a charity you've recommended, for the explicit reason that you know you're unable to show that something went well when you predicted it would, is a very bad sign; that should be a reason to do the exact opposite thing, i.e. to go back and actually publish an after-the-fact retrospective of long-run results. If anyone was looking for more evidence on whether or not they should take GiveWell's recommendations seriously, then, well, here it is.

Comment by Fluttershy on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-21T06:59:52.307Z · LW · GW

Ok, thank you, this helps a lot and I feel better after reading this, and if I do start crying in a minute it'll be because you're being very nice and not because I'm sad. So, um, thanks. :)

Comment by Fluttershy on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-20T15:15:14.669Z · LW · GW

Second edit: Dagon is very kind and I feel ok; for posterity, my original comment was basically a link to the last paragraph of this comment, which talked about helping depressed EAs as some sort of silly hypothetical cause area.

Edit: since someone wants to emphasize how much they would "enjoy watching [my] evaluation contortions" of EA ideas, I elect to delete what I've written here.

I'm not crying.

Comment by Fluttershy on "The unrecognised simplicities of effective action #2: 'Systems engineering’ and 'systems management' - ideas from the Apollo programme for a 'systems politics'", Cummings 2017 · 2017-02-17T18:19:24.902Z · LW · GW

There's actually a noteworthy passage on how prediction markets could fail in one of Dominic's other recent blog posts I've been wanting to get a second opinion on for a while:

NB. Something to ponder: a) hedge funds were betting heavily on the basis of private polling [for Brexit] and b) I know at least two ‘quant’ funds had accurate data (they had said throughout the last fortnight their data showed it between 50-50 and 52-48 for Leave and their last polls were just a point off), and therefore c) they, and others in a similar position, had a strong incentive to game betting markets to increase their chances of large gains from inside knowledge. If you know the probability of X happening is much higher than markets are pricing, partly because financial markets are looking at betting markets, then there is a strong incentive to use betting markets to send false signals and give competitors an inaccurate picture. I have no idea if this happened, and nobody even hinted to me that it had, but it is worth asking: given the huge rewards to be made and the relatively trivial amounts of money needed to distort betting markets, why would intelligent well-resourced agents not do this, and therefore how much confidence should we have in betting markets as accurate signals about political events with big effects on financial markets?

Comment by Fluttershy on "The unrecognised simplicities of effective action #2: 'Systems engineering’ and 'systems management' - ideas from the Apollo programme for a 'systems politics'", Cummings 2017 · 2017-02-17T18:13:44.977Z · LW · GW

The idea that there's much to be gained by crafting institutions, organizations, and teams which can train and direct people better seems like it could flower into an EA cause, if someone wanted it to. From reading the first post in the series, I think that that's a core part of what Dominic is getting at:

We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.

Comment by Fluttershy on Metrics to evaluate a Presidency · 2017-01-25T01:18:45.584Z · LW · GW

Regarding tone specifically, you have two strong options: one would be to send strong "I am playing" signals, such as by dropping the points which men's rights people might make, and, say, parodying feminist points. Another would be to keep the tone as serious as it currently is, but qualify things more; in some other contexts, qualifying your arguments sounds low-status, but in discussions of contentious topics on a public forum, it can nudge participants towards cooperative truth-seeking mode.

Amusingly, I emphasized the points of your comment that I found agreeable in my first reply, both since you're pretty cool, and also since I didn't want the fact that I'm a hardcore feminist to be obvious enough to affect the discourse. However, to the extent which my reply was more serious than your comment, this could have made me look like the less feminist one out of the two of us :D

Comment by Fluttershy on Metrics to evaluate a Presidency · 2017-01-25T00:45:33.201Z · LW · GW

Fair enough! I am readily willing to believe your statement that that was your intent. It wasn't possible to tell from the comment itself, since the metric regarding sexual harassment report handling is much more serious than the other metrics.

Comment by Fluttershy on Metrics to evaluate a Presidency · 2017-01-24T23:56:10.232Z · LW · GW

(This used to be a gentle comment which tried to very indirectly defend feminism while treating James_Miller kindly, but I've taken it down for my own health)

Comment by Fluttershy on Polling Thread January 2017 · 2017-01-23T08:29:23.267Z · LW · GW

Let's find out how contentious a few claims about status are.

  1. Lowering your status can be simultaneously cooperative and self-beneficial. [pollid:1186]

  2. Conditional on status games being zero-sum in terms of status, it’s possible/common for the people participating in or affected by a status game to end up much happier or much worse off, on average, than they were before the status game. [pollid:1187]

  3. Instinctive trust of high status people regularly obstructs epistemic cleanliness outside of the EA and rationalist communities. [pollid:1188]

  4. Instinctive trust of high status people regularly obstructs epistemic cleanliness within the EA and rationalist communities. [pollid:1189]

Comment by Fluttershy on Rationality Considered Harmful (In Politics) · 2017-01-09T05:07:56.844Z · LW · GW

Most of my friends can immediately smell when a writer using a truth-oriented approach to politics has a strong hidden agenda, and will respond much differently than they would to truth-oriented writers with weaker agendas. Some of them would even say that, conditional on you having an agenda, it's dishonest to note that you believe that you're using a truth-oriented approach; in this case, claiming that you're using a truth-oriented approach reads as an attempt to hide the fact that you have an agenda. This holds regardless of whether your argument is correct, or whether you have good intentions.

There's a wide existing literature on concepts which are related to (but don't directly address) how to best engage in truth-seeking on politically charged topics. The books Nonviolent Communication, HtWFaIP, and Impro are all non-obvious examples. I posit that promoting this literature might be one of the best uses of our time, if our strongest desire is to make political discourse more truth-oriented.

One central theme to all of these works is that putting effort into being agreeable and listening to your discussion partners will make them more receptive to evaluating your own claims based on how factual they are. I'm likely to condense most of the relevant insights into a couple posts once I'm in an emotional state amenable to doing so.

Comment by Fluttershy on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-05T00:20:00.341Z · LW · GW

It helps that you shared the dialogue. I predict that Jane doesn't System-2-believe that Trump is trying to legalize rape; she's just offering the other conversation participants a chance to connect over how much they don't like Trump. This may sound dishonest to rationalists, but normal people don't frown upon this behavior as often, so I can't tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do in this situation, if she's looking to strengthen bonds with others.

Jane's System 1 is a good bayesian, and knows that Trump supporters are more likely to rebuff her, and that Trump supporters aren't social allies. She's testing the waters, albeit clumsily, to see who her social allies are.

Jane could have put more effort into her thoughts, and chosen a factually correct insult to throw at Trump. You could have said that even if he doesn't try to legalize rape, he'll do some other specific thing that you don't approve of (and you'd have gotten bonus points for proactively thinking of a bad thing to say about him). The implementation of either of these changes would have had a roughly similar effect on the levels of nonviolence and agreeability of the conversation.

This generalizes to most conversations about social support. When looking for support, many people switch effortlessly between making low effort claims they don't believe, and making claims that they System-2-endorse. Agreeing with their sensible claims, and offering supportive alternative claims to their preposterous claims, can mark you as a social ally while letting you gently, nonviolently nudge them away from making preposterous claims.

Comment by Fluttershy on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-23T10:42:36.192Z · LW · GW

I think that Merlin and Alicorn should be praised for Merlin's good behavior. :)

I was happy with the Berkeley event overall.

Next year, I suspect that it would be easier for someone to talk to the guardian of a misbehaving child if there was a person specifically tasked to do so. This could be one of the main event organizers, or perhaps someone directly designated by them. Diffusion of responsibility is a strong force.

Comment by Fluttershy on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T09:47:26.061Z · LW · GW

I've noticed that sometimes, my System 2 starts falsely believing there are fewer buckets when I'm being socially confronted about a crony belief I hold, and that my System 2 will snap back to believing that there are more buckets once the confrontation is over. I'd normally expect my System 1 to make this flavor of error, but whenever my brain has done this sort of thing during the past few years, it's actually been my gut that has told me that I'm engaging in motivated reasoning.

Comment by Fluttershy on Epistemic Effort · 2016-11-30T23:22:17.962Z · LW · GW

"Epistemic status" metadata plays two roles: first, it can be used to suggest to a reader how seriously they should consider a set of ideas. Second, though, it can have an effect on signalling games, as you suggest. Those who lack social confidence can find it harder to contribute to discussions, and having the ability to qualify statements with tags like "epistemic status: not confident" makes it easier for them to contribute without feeling like they're trying to be the center of attention.

"Epistemic effort" metadata fulfills the first of these roles, but not the second; if you're having a slow day and take longer to figure something out or write something than normal, then it might make you feel bad to admit that it took you as much effort as it did to produce said content. Nudging social norms towards use of "epistemic effort" over "epistemic status" provides readers with the benefit of having more information, at the potential cost of discouraging some posters.

Comment by Fluttershy on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T09:40:27.951Z · LW · GW

It was good of you to write this post out of a sense of civic virtue, Anna. I'd like to share a few thoughts on the incentives of potential content creators.

Most humans, and most of us, appreciate being associated with prestigious groups, and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or about LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of remarks with discourse that affirmed the worth of LessWrong posters would incentivize more collaboration on this site.

I'm not sure if this implies that we should shift to a platform that doesn't have the taint of "LessWrong is dead" associated with it. Maybe we'll be ok if a selection of contributors who are highly regarded in the community begin or resume posting on the site. Or, perhaps this implies that the content creators who come to whatever locus of discussion is chosen should be praised for being virtuous by contributing directly to a central hub of knowledge. I'm sure that you all can think of even better ideas along these lines.