Comment by Fluttershy on [deleted post] 2017-05-30T05:36:48.806Z

Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term.

As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than ones where they aren't. Look at what happened to LW.

Moreover, the cost is not the same for everyone

It's fairly common for this cost to go down with practice. Moreover, there's an incentive gradient at work here: the only way to gauge how costly it is for someone to act decently is to ask them, and the more costly they claim it to be, the more the balance of discussion rewards them by letting them impose costs on others via nastiness while reaping the rewards of achieving their political and interpersonal goals with that nastiness.

I'm not necessarily claiming that you or any specific person is acting this way; I'm just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it.

communicative clarity and so-called "niceness"

That's a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (when suggesting that some EA organizations are less effective than previously thought), and sometimes that involves, say, penalizing people for acting on claims they've made to others' emotional resources (reprimanding someone for being rude when that rudeness could reasonably have been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, while we tend to get them both mostly wrong.

Comment by Fluttershy on [deleted post] 2017-05-30T04:54:27.386Z

I appreciate your offer to talk things out together! To the extent that I'm feeling bad and would feel better after talking things out, I'm inclined to say that my current feelings are serving a purpose, i.e. to encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed, though that wouldn't have been at all true of the old version of myself. This algorithm is a bit new to me, and I'm not sure if it'll stick.

Overall, I'm not aware that I've caused the balance of the discussion (i.e. pro immediate abrasive truthseeking vs. pro incentives that encourage later collaborative truthseeking & prosociality) to shift noticeably in either direction, though I might have made it sound like I made less progress than I did, since I was sort of ranting/acting like I was looking for support above.

Comment by Fluttershy on [deleted post] 2017-05-28T20:21:05.628Z

Your comment was perfectly fine, and you don't need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there's a strong chance I'll be without internet for several days and likely won't be able to further engage with this topic.

Comment by Fluttershy on [deleted post] 2017-05-28T20:16:11.640Z

Duncan's original wording here was fine. The phrase "telling the humans I know that they're dumb or wrong or sick or confused" is meant in the sense of "socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect".

To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that's a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip value propositions into their claims.

I'm frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a large number of more subtle negative implications about what someone has written are all ways of socially discouraging them from doing something. I think that Duncan's comment was fine, and I certainly think that he didn't need to apologize for it. I'm fucking appalled that this conversation as a whole has managed to simultaneously promote slipping value propositions into factual claims, indirectly encourage social rudeness, and then successfully assert in social reality that a certain type of overtly abrasive value-loaded proposition making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition making, all without anyone actually saying anything about this.

Comment by Fluttershy on [deleted post] 2017-05-28T19:03:09.551Z

assess why the community has not yet shunned them

Hi! I believe I'm the only person to try shunning them, which happened on Facebook a month ago (since Zack named himself in the comments, see here, and here). The effort more or less blew up in my face: it got a few people to publicly say they were going to exclude me, or try to get others to exclude me, from future community events, and was also a large (but not the only) factor in getting me to step down from a leadership position in a project I'm spending about half of my time on. To be fair, there are a couple of places where Zack is less welcome now also (I don't think either of us have been successfully excluded from anything other than privately hosted events we weren't likely to go to anyways), and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance. So, I guess we're in a stalemate-like de facto ceasefire, though I'd be happy to pick up the issue again.

I still stand by my response to Zack. It would have been better if I'd been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself; that's an area where I'm still trying to grow. I think that collaborative truthseeking is aided rather than hindered by shunning people who call others "delusional perverts" because of their gender. This is, at least in part, because keeping discussions focused on truthseeking, impact, etc. is easier when there are social incentives (i.e. small social nudges that can later escalate to shunning) in place that disincentivize people from acting in ways that predictably push others into a state where they're hurt enough that they're unable to collaborate with you, such as by calling them delusional perverts. I know that the process of applying said social incentives (i.e. shunning) doesn't look like truthseeking, but it's instrumental to truthseeking, when done with specificity and sensitivity by people with a well-calibrated set of certain common social skills.

Comment by fluttershy on Bad intent is a disposition, not a feeling · 2017-05-02T21:03:48.419Z · score: 0 (0 votes) · LW · GW

This all sounds right, but the reasoning behind using the wording of "bad faith" is explained in the second bullet point of this comment.

Tl;dr: the module your brain has for detecting things that feel like "bad faith" is good at detecting when someone is acting in ways that cause bad consequences in expectation but don't feel like "bad faith" to that person on the inside. If people could learn to correct a subset of these actions by learning, say, common social skills, then treating those actions like they're taken in "bad faith" incentivizes them to learn those skills, which means you have to live with fewer negative consequences from dealing with that person. I'd say that this is part of why our minds often read well-intentioned-but-harmful-in-expectation behaviors as "bad faith"; it's a way of correcting them.

Comment by fluttershy on Bad intent is a disposition, not a feeling · 2017-05-02T09:34:47.796Z · score: 1 (1 votes) · LW · GW

nod. This does seem like it should be a continuous thing, rather than System 1 solely figuring things out in some cases and System 2 figuring it out alone in others.

Comment by fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T19:48:44.326Z · score: 2 (2 votes) · LW · GW

Good observation.

Amusingly, one possible explanation is that the people who gave Gleb pushback on here were operating on bad-faith-detecting intuitions--this is supported by the quick reaction time. I'd say that those intuitions were good ones, if they led those folks to give Gleb pushback on a quick timescale, and I'd also say that those intuitions shaped healthy norms to the extent that they nudged us towards establishing a quick reality-grounded social feedback loop.

But the people who did give Gleb pushback framed things in terms other than bad-faith-detecting intuitions more often than you'd have guessed if those intuitions were what actually convinced them that giving Gleb pushback was worth their time--they pointed to specific behaviors, and so on, when calling him out. But how many of these people actually decided to give Gleb feedback because they System-2-noticed that he was implementing a specific behavior, and how many of us decided to give Gleb feedback because our bad-faith-detecting intuitions noticed something was up, which led us to fish around for a specific bad behavior that Gleb was doing?

If more of us did the latter, this suggests that we have social incentives in place that reward fishing around for and finding specific bad behaviors. To me, fishing around for bad behaviors (i.e. fishing through data) like this doesn't seem too different from p-hacking, except that fishing through social data is way harder to call people out on. And if our real reasons for reaching the correct conclusion that Gleb needed pushback were based in bad-faith-detecting intuitions, and not in System 2 noticing bad behaviors, then maybe it would be a good idea to give the mechanism that actually led some of us to detect Gleb a bit earlier the social allowance to do its work on its own in the future, rather than requiring its use to be backed up by evidence of bad behaviors. Such evidence is junk data: it can be p-hacked by those who want to criticize independently of what's true, and hidden by those with more skill than Gleb.

At a minimum, being honest with ourselves about what our real reasons are ought to help us understand our minds a bit better.

Comment by fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T19:01:53.342Z · score: 2 (2 votes) · LW · GW

I'm very glad that you asked this! I think we can come up with some decent heuristics:

  • If you start out with some sort of inbuilt bad faith detector, try to see when, in retrospect, it's given you accurate readings, false positives, and false negatives. From time to time, I catch myself doing this on a System 1 level without having planned to. It may be possible, if harder, to do this sort of intuition reshaping in response to evidence with System 2. Note that it sometimes takes a long time to figure out whether your bad-faith-detecting intuitions were correct, and sometimes you never do.
  • There's debate about whether a bad-faith-detecting intuition that fires when someone "has good intentions" but ends up predictably acting in ways that hurt you (especially to their own benefit) is "correct". My view is that the intuition is correct; defining it as incorrect, and then acting in social accordance with that, incentivizes others to manipulate you by being or becoming good at making themselves believe they have good intentions when they don't, which is itself a way of destroying information. This is why allowing people to get away with too many plausibly deniable things destroys information: if plausible deniability is a socially acceptable defense when it's obvious someone has hurt you in a way that benefits them, they'll want to blind themselves to information about how their own brains work. (This is a reason to disagree with many suggestions made in Nate's post. If treating people like they generally have positive intentions reduces your ability to do collaborative truth-seeking with others on how their minds can fail in ways that let you down--planning fallacy is one example--then maybe it would be helpful to socially disincentivize people from misleading themselves this way by giving them critical feedback, or at least by not tearing people down as ostracizers when they do the same.)
  • Try to evaluate others' bad faith detectors by the same mechanism as in the first point; if they give lots of correct readings and not many false ones (especially if they share their intuitions with you before it becomes obvious to you whether or not they're correct), this is some sort of evidence that they have strong and accurate bad-faith-detecting intuitions.
  • The above requires that you know someone well enough for them to trust you with this data, so a quicker way to evaluate others' bad-faith-detecting intuitions is to look at who they give feedback to, criticize, praise, etc. If they end up attacking or socially qualifying popular people who are later revealed to have been acting in bad faith, or praising or supporting people who are socially suspected of being up to something but are later revealed to have been acting in good faith, these are strong signals that they have accurate bad-faith-detecting intuitions.
  • Done right, bad-faith-detecting intuitions should let you make testable predictions about who will impose costs on or provide benefits to you and your friends/cause; these intuitions become more valuable as you become more accurate at evaluating them. Bad-faith-detecting intuitions might not "taste" like Officially Approved Scientific Evidence, and we might not respect them much around here, but they should tie back into reality, and be usable to help you make better decisions than you could make without them. (A toy sketch of this sort of bookkeeping follows below.)
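To make the "testable predictions" bullet concrete, here's a minimal sketch of the kind of record-keeping the first and last bullets describe. This is just an illustration with hypothetical names, not a claim about how anyone actually does this; the point is that once cases resolve, an intuition that's pulling its weight should show decent precision and recall.

```python
# Toy calibration log for bad-faith-detecting intuitions.
# All names are illustrative; only the bookkeeping matters.
from dataclasses import dataclass

@dataclass
class Prediction:
    subject: str     # who the intuition fired (or didn't fire) about
    flagged: bool    # did your gut flag them as acting in bad faith?
    confirmed: bool  # once the case eventually resolved: were they?

def score(log: list[Prediction]) -> dict[str, float]:
    """Summarize how the intuition performed on resolved cases."""
    true_pos = sum(p.flagged and p.confirmed for p in log)
    false_pos = sum(p.flagged and not p.confirmed for p in log)
    false_neg = sum(not p.flagged and p.confirmed for p in log)
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else float("nan")
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else float("nan")
    return {"precision": precision, "recall": recall}

log = [
    Prediction("A", flagged=True, confirmed=True),   # accurate reading
    Prediction("B", flagged=True, confirmed=False),  # false positive
    Prediction("C", flagged=False, confirmed=True),  # false negative
]
print(score(log))  # {'precision': 0.5, 'recall': 0.5}
```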
Comment by fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T10:45:20.249Z · score: 4 (4 votes) · LW · GW

I think the burden of evidence is on the side disagreeing with the intuitions behind this extremely common defensive response

Note also that most groups treat their intuitions about whether or not someone is acting in bad faith as evidence worth taking seriously, and that we're remarkable in how rarely we allow our bad-faith-detecting intuitions to lead us to the positive conclusion that someone is acting in bad faith. Note also that we have a serious problem with not being able to effectively deal with Gleb-like people, sexual predators, etc., and that these sorts of people reliably provoke person-acting-in-bad-faith intuitions in people with both strong and accurate bad-faith-sensing intuitions. (Having strong bad-faith-detecting intuitions correlates somewhat with having accurate ones, since having strong intuitions here makes it easier to pay attention to your training data, and thus build better intuitions with time.) Anyways, as a community, taking intuitions about when someone's acting in bad faith more seriously on the margin could help with this.

Now, one problem with this strategy is that many of us are out of practice at using these intuitions! It also doesn't help that people without accurate bad-faith-detecting intuitions often typical-mind fallacy their way into believing that there aren't people who have exceptionally accurate bad-faith-detecting intuitions. Sometimes this gets baked into social norms, such that criticism becomes more heavily taxed, partly because people with weak bad-faith-detecting intuitions don't trust others to direct their criticism at people who are actually acting in bad faith.

Of course, we currently don't accept person-acting-in-bad-faith-intuitions as useful evidence in the EA/LW community, so people who provoke more of these intuitions are relatively more welcome here than in other groups. Also, for people with both strong and accurate bad-faith-detecting intuitions, being around people who set off their bad-faith-sensing intuitions isn't fun, so such people feel less welcome here, especially since a form of evidence they're good at acquiring isn't socially acknowledged or rewarded, while it is acknowledged and rewarded elsewhere. And when you look around, you see that we in fact don't have many people with strong and accurate bad-faith-detecting intuitions; having more of these people around would have been a good way to detect Gleb-like folks much earlier than we tend to.

How acceptable bad-faith-detecting intuitions are in decision-making is also highly relevant to the gender balance of our community, but that's a topic for another post. The tl;dr of it is that, when bad-faith-detecting intuitions are viewed as providing valid evidence, it's easier to make people who are acting creepy change how they're acting or leave, since "creepiness" is a non-objective thing that nevertheless has a real, strong impact on who shows up at your events.

Anyhow, I'm incredibly self-interested in pointing all of this out, because I have very strong (and, as of course I will claim, very accurate) bad-faith-detecting intuitions. If people with stronger bad-faith-detecting intuitions are undervalued because our skill at detecting bad actors isn't recognized, then, well, this implies people should listen to us more. :P

Comment by fluttershy on Bad intent is a disposition, not a feeling · 2017-05-01T09:40:37.577Z · score: 1 (1 votes) · LW · GW

For more explanation on how incentive gradients interact with and allow the creation of mental modules that can systematically mislead people without intent to mislead, see False Faces.

Comment by fluttershy on Effective altruism is self-recommending · 2017-04-22T00:16:35.516Z · score: 2 (2 votes) · LW · GW

Well, that's embarrassing for me. You're entirely right; it does become visible again when I log out, and I hadn't even considered that as a possibility. I guess I'll amend the paragraph of my above comment that incorrectly stated that the thread had been hidden on the EA Forum; at least I didn't accuse anyone of anything in that part of my reply. I do still stand by my criticisms, though knowing what I do now, I would say that it wasn't necessary of me to post this here if my original comment and the original post on the EA Forum are still publicly visible.

Comment by fluttershy on Effective altruism is self-recommending · 2017-04-21T22:32:03.987Z · score: 2 (2 votes) · LW · GW

Some troubling relevant updates on EA Funds from the past few hours:

  • On April 20th, Kerry Vaughan from CEA published an update on EA Funds on the EA Forum. His post quotes the previous post in which he introduced the launch of EA Funds, which said:

We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should continue and if so, in what form.

  • In short, it was promised that a certain level of community support would be required to justify the continuation of EA Funds beyond the first three months of the project. In an effort to communicate that such a level of support existed, Kerry commented:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

  • Around 11 hours ago, I pointed out that this claim was patently false.
  • (I stand corrected by the reply to this comment which addressed this bullet point: the original post on which I had commented wasn't hidden from the EA Forum; I just needed to log out of my account on the EA Forum to see it after having downvoted it.)
  • Given that the EA Funds project has taken significant criticism, failed to implement a plan to address it, acted as if its continuation were justified on the basis of having not received any such criticism, and signaled its openness to being deceptive in the future by doing all of this in a way that wasn't plausibly deniable, my personal opinion is that there is not sufficient reason to allow EA Funds to continue to operate past its three-month trial period, and additionally, that I have less reason to trust other projects run by CEA in light of this debacle.
Comment by fluttershy on Effective altruism is self-recommending · 2017-04-21T21:39:24.121Z · score: 15 (11 votes) · LW · GW

GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach's scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program.

This seems particularly horrifying. If everyone already knows that you're incentivized to play up the effectiveness of the charities you recommend, then declining to check back on a charity you've recommended, for the explicit reason that you know you're unable to show that something went well when you predicted it would, is a very bad sign; that should be a reason to do the exact opposite, i.e. to go back and actually publish an after-the-fact retrospective of long-run results. If anyone was looking for more evidence on whether or not they should take GiveWell's recommendations seriously, then, well, here it is.

Comment by fluttershy on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-21T06:59:52.307Z · score: 1 (1 votes) · LW · GW

Ok, thank you, this helps a lot and I feel better after reading this, and if I do start crying in a minute it'll be because you're being very nice and not because I'm sad. So, um, thanks. :)

Comment by fluttershy on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-20T15:15:14.669Z · score: 0 (0 votes) · LW · GW

Second edit: Dagon is very kind and I feel ok; for posterity, my original comment was basically a link to the last paragraph of this comment, which talked about helping depressed EAs as some sort of silly hypothetical cause area.

Edit: since someone wants to emphasize how much they would "enjoy watching [my] evaluation contortions" of EA ideas, I elect to delete what I've written here.

I'm not crying.

Comment by fluttershy on "The unrecognised simplicities of effective action #2: 'Systems engineering’ and 'systems management' - ideas from the Apollo programme for a 'systems politics'", Cummings 2017 · 2017-02-17T18:19:24.902Z · score: 7 (7 votes) · LW · GW

There's actually a noteworthy passage on how prediction markets could fail in one of Dominic's other recent blog posts, which I've been wanting to get a second opinion on for a while:

NB. Something to ponder: a) hedge funds were betting heavily on the basis of private polling [for Brexit] and b) I know at least two ‘quant’ funds had accurate data (they had said throughout the last fortnight their data showed it between 50-50 and 52-48 for Leave and their last polls were just a point off), and therefore c) they, and others in a similar position, had a strong incentive to game betting markets to increase their chances of large gains from inside knowledge. If you know the probability of X happening is much higher than markets are pricing, partly because financial markets are looking at betting markets, then there is a strong incentive to use betting markets to send false signals and give competitors an inaccurate picture. I have no idea if this happened, and nobody even hinted to me that it had, but it is worth asking: given the huge rewards to be made and the relatively trivial amounts of money needed to distort betting markets, why would intelligent well-resourced agents not do this, and therefore how much confidence should we have in betting markets as accurate signals about political events with big effects on financial markets?

Comment by fluttershy on "The unrecognised simplicities of effective action #2: 'Systems engineering’ and 'systems management' - ideas from the Apollo programme for a 'systems politics'", Cummings 2017 · 2017-02-17T18:13:44.977Z · score: 2 (2 votes) · LW · GW

The idea that there's much to be gained by crafting institutions, organizations, and teams which can train and direct people better seems like it could flower into an EA cause, if someone wanted it to. From reading the first post in the series, I think that that's a core part of what Dominic is getting at:

We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.

Comment by fluttershy on Metrics to evaluate a Presidency · 2017-01-25T01:18:45.584Z · score: 0 (0 votes) · LW · GW

Regarding tone specifically, you have two strong options: one would be to send strong "I am playing" signals, such as by dropping the points which men's rights people might make, and, say, parodying feminist points. Another would be to keep the tone as serious as it currently is, but qualify things more; in some other contexts, qualifying your arguments sounds low-status, but in discussions of contentious topics on a public forum, it can nudge participants towards cooperative truth-seeking mode.

Amusingly, I emphasized the points of your comment that I found agreeable in my first reply, both since you're pretty cool, and also since I didn't want the fact that I'm a hardcore feminist to be obvious enough to affect the discourse. However, to the extent that my reply was more serious than your comment, this could have made me look like the less feminist one of the two of us :D

Comment by fluttershy on Metrics to evaluate a Presidency · 2017-01-25T00:45:33.201Z · score: 0 (0 votes) · LW · GW

Fair enough! I am readily willing to believe your statement that that was your intent. It wasn't possible to tell from the comment itself, since the metric regarding sexual harassment report handling is much more serious than the other metrics.

Comment by fluttershy on Metrics to evaluate a Presidency · 2017-01-24T23:56:10.232Z · score: 0 (0 votes) · LW · GW

(This used to be a gentle comment which tried to very indirectly defend feminism while treating James_Miller kindly, but I've taken it down for my own health)

Comment by fluttershy on Polling Thread January 2017 · 2017-01-23T08:29:23.267Z · score: 1 (1 votes) · LW · GW

Let's find out how contentious a few claims about status are.

  1. Lowering your status can be simultaneously cooperative and self-beneficial. [pollid:1186]

  2. Conditional on status games being zero-sum in terms of status, it’s possible/common for the people participating in or affected by a status game to end up much happier or much worse off, on average, than they were before the status game. [pollid:1187]

  3. Instinctive trust of high status people regularly obstructs epistemic cleanliness outside of the EA and rationalist communities. [pollid:1188]

  4. Instinctive trust of high status people regularly obstructs epistemic cleanliness within the EA and rationalist communities. [pollid:1189]

Comment by fluttershy on Rationality Considered Harmful (In Politics) · 2017-01-09T05:07:56.844Z · score: 3 (3 votes) · LW · GW

Most of my friends can immediately smell when a writer using a truth-oriented approach to politics has a strong hidden agenda, and will respond much differently than they would to truth-oriented writers with weaker agendas. Some of them would even say that, conditional on you having an agenda, it's dishonest to note that you believe that you're using a truth-oriented approach; in this case, claiming that you're using a truth-oriented approach reads as an attempt to hide the fact that you have an agenda. This holds regardless of whether your argument is correct, or whether you have good intentions.

There's a wide existing literature on concepts which are related to (but don't directly address) how best to engage in truth seeking on politically charged topics. The books Nonviolent Communication, HtWFaIP, and Impro are all non-obvious examples. I posit that promoting this literature might be one of the best uses of our time, if our strongest desire is to make political discourse more truth-oriented.

One central theme to all of these works is that putting effort into being agreeable and listening to your discussion partners will make them more receptive to evaluating your own claims based on how factual they are. I'm likely to condense most of the relevant insights into a couple posts once I'm in an emotional state amenable to doing so.

Comment by fluttershy on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-05T00:20:00.341Z · score: 5 (5 votes) · LW · GW

It helps that you shared the dialogue. I predict that Jane doesn't System-2-believe that Trump is trying to legalize rape; she's just offering the other conversation participants a chance to connect over how much they don't like Trump. This may sound dishonest to rationalists, but normal people don't frown upon this behavior as often, so I can't tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do in this situation, if she's looking to strengthen bonds with others.

Jane's System 1 is a good Bayesian, and knows that Trump supporters are more likely to rebuff her, and that Trump supporters aren't social allies. She's testing the waters, albeit clumsily, to see who her social allies are.

Jane could have put more effort into her thoughts, and chosen a factually correct insult to throw at Trump. Alternatively, you could have said that even if he doesn't try to legalize rape, he'll do some other specific thing that you don't approve of (and you'd have gotten bonus points for proactively thinking of a bad thing to say about him). Either of these changes would have had a roughly similar effect on the levels of nonviolence and agreeability of the conversation.

This generalizes to most conversations about social support. When looking for support, many people switch effortlessly between making low effort claims they don't believe, and making claims that they System-2-endorse. Agreeing with their sensible claims, and offering supportive alternative claims to their preposterous claims, can mark you as a social ally while letting you gently, nonviolently nudge them away from making preposterous claims.

Comment by fluttershy on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-23T10:42:36.192Z · score: 10 (9 votes) · LW · GW

I think that Merlin and Alicorn should be praised for Merlin's good behavior. :)

I was happy with the Berkeley event overall.

Next year, I suspect that it would be easier for someone to talk to the guardian of a misbehaving child if there was a person specifically tasked to do so. This could be one of the main event organizers, or perhaps someone directly designated by them. Diffusion of responsibility is a strong force.

Comment by fluttershy on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T09:47:26.061Z · score: 3 (3 votes) · LW · GW

I've noticed that sometimes, my System 2 starts falsely believing there are fewer buckets when I'm being socially confronted about a crony belief I hold, and that my System 2 will snap back to believing that there are more buckets once the confrontation is over. I'd normally expect my System 1 to make this flavor of error, but whenever my brain has done this sort of thing during the past few years, it's actually been my gut that has told me that I'm engaging in motivated reasoning.

Comment by fluttershy on Epistemic Effort · 2016-11-30T23:22:17.962Z · score: 3 (3 votes) · LW · GW

"Epistemic status" metadata plays two roles: first, it can be used to suggest to a reader how seriously they should consider a set of ideas. Second, though, it can have an effect on signalling games, as you suggest. Those who lack social confidence can find it harder to contribute to discussions, and having the ability to qualify statements with tags like "epistemic status: not confident" makes it easier for them to contribute without feeling like they're trying to be the center of attention.

"Epistemic effort" metadata fulfills the first of these roles, but not the second; if you're having a slow day and take longer to figure something out or write something than normal, then it might make you feel bad to admit that it took you as much effort as it did to produce said content. Nudging social norms towards use of "epistemic effort" over "epistemic status" provides readers with the benefit of having more information, at the potential cost of discouraging some posters.

Comment by fluttershy on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T09:40:27.951Z · score: 7 (7 votes) · LW · GW

It was good of you to write this post out of a sense of civic virtue, Anna. I'd like to share a few thoughts on the incentives of potential content creators.

Most humans, and most of us, appreciate being associated with prestigious groups, and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of remarks with discourse that affirmed the worth of LessWrong posters would incentivize more collaboration on this site.

I'm not sure if this implies that we should shift to a platform that doesn't have the taint of "LessWrong is dead" associated with it. Maybe we'll be ok if a selection of contributors who are highly regarded in the community begin or resume posting on the site. Or, perhaps this implies that the content creators who come to whatever locus of discussion is chosen should be praised for being virtuous by contributing directly to a central hub of knowledge. I'm sure that you all can think of even better ideas along these lines.

Comment by fluttershy on Voting is like donating hundreds of thousands to charity · 2016-11-03T10:11:37.661Z · score: 2 (6 votes) · LW · GW

Gleb, given the recent criticisms of your work on the EA forum, it would be better for your mental health, and less wasteful of our time, if you stopped posting this sort of thing here. Please do take care of yourself, but don't expect the average rationalist to be more sympathetic to you than the average EA.

Comment by fluttershy on Cryo with magnetics added · 2016-10-04T01:43:49.557Z · score: 3 (3 votes) · LW · GW

I'm sorry! Um, it probably doesn't help that much of the relevant info hasn't been published yet; this patent is the best description that will be publicly available until the inventors get more funding. From the patent:

By replacing the volume of the vasculature (from 5 to 10 percent of the volume of tissues, organs, or whole organisms) with a gas, the vasculature itself becomes a “crush space” that allows stresses to be relieved by plastic deformation at a very small scale. This reduces the domain size of fracturing...

So, pumping the organ full of cool gas (not necessarily oxygen) is done to cool the entire tissue at the same time, and to prevent fracturing, rather than for biological reasons.

ETA: To answer your last question, persufflation would be done during both cooling and rewarming.

Comment by fluttershy on Cryo with magnetics added · 2016-10-02T01:09:38.064Z · score: 4 (4 votes) · LW · GW

OTOH it's plausible they don't have much compelling evidence mainly because they were resource-constrained. I'm still not expecting this to go anywhere, though.

Whole kidneys can already be stored and brought back up from liquid nitrogen temps via persufflation well enough to properly filter waste and produce urine, and possibly well enough to be transplanted (research pending), though this may or may not go anywhere, depending on the funding environment.

Comment by fluttershy on Cryo with magnetics added · 2016-10-02T00:40:42.838Z · score: 6 (6 votes) · LW · GW

The most striking problem with this paper is how easy all of the tests of viability they used are to game. There are a bunch of simple tests you can do to check for viability, and it's fairly common for non-viable tissue to produce decent-looking results on at least a couple, if you do enough. (A couple of weeks ago, I was reading a paper by Fahy which described the presence of this effect in tissue slices.)

It may be worth pointing out that they only cooled the hearts to -3 °C, as well.

Comment by fluttershy on Open Thread, Sept 5. - Sept 11. 2016 · 2016-09-05T09:11:51.603Z · score: 2 (2 votes) · LW · GW

Has anyone else tried the new Soylent bars? Does anyone who has also tried MealSquares/Ensure/Joylent/etc. have an opinion on how they compare with other products?

My first impression is that they're comparable to MealSquares in tastiness. Since they're a bit smaller and more homogeneous than MealSquares (they don't have sunflower seeds or bits of chocolate sticking out of them), it's much easier to finish a whole one in one sitting, but more boring to make a large meal out of them.

Admittedly, eating MealSquares may have a bit more signalling value among rationalists, and MealSquares cost around a dollar less per 2000 kcal than the Soylent bars do. I'll probably stick with the Soylent bars, though; they're vegan, and I care about animals enough for that to be the deciding factor for me.

Comment by fluttershy on Hedging · 2016-08-26T13:45:18.885Z · score: 1 (1 votes) · LW · GW

For groups that care much more about efficient communication than pleasantness, and groups made up of people who don't view behaviors like not hedging bold statements as being hurtful, the sort of policy I'm weakly hinting at adopting above would be suboptimal, and a potential waste of everyone's time and energy.

Comment by fluttershy on Hedging · 2016-08-26T13:27:36.118Z · score: 3 (3 votes) · LW · GW

Which is to say - be confident of weak effects, rather than unconfident of strong effects.

This suggestion feels incredibly icky to me, and I think I know why.

Claims hedged with "some/most/many" tend to be both higher status and meaner than claims hedged with "I think" when "some/most/many" and "I think" are fully interchangeable. Not hedging claims at all is even meaner and even higher status than hedging with "some/most/many". This is especially true with claims that are likely to be disputed, claims that are likely to trigger someone, etc.

Making sufficiently bold statements without hedging appropriately (and many similar behaviors) can result in tragedy of the commons-like scenarios in which people grab status in ways that make others feel uncomfortable. Most of the social groups I've been involved in allow some zero-sum status seeking, but punish these sorts of negative-sum status grabs via e.g. weak forms of ostracization.

Of course, if the number of people in a group who play negative-sum social games passes a certain point, this can de facto force more cooperative members out of the group via e.g. unpleasantness. Note that this can happen in the absence of ill will, especially if group members aren't socially aware that most people view certain behaviors as being negative sum.

Comment by fluttershy on Open Thread, Aug. 22 - 28, 2016 · 2016-08-23T08:09:20.019Z · score: 6 (8 votes) · LW · GW

Several months ago, Ozy wrote a wonderful post on weaponized kindness over at Thing of Things. The principal benefit of weaponized kindness is that you can have more pleasant and useful conversations with would-be adversaries by acknowledging correct points they make, and actively listening to them. The technique sounds like exactly the sort of thing I'd expect Dale Carnegie to write about in How to Win Friends and Influence People.

I think, though, that there's another benefit to both weaponized kindness, and more general extreme kindness. To generalize from my own experience, it seems that people's responses to even single episodes of extreme kindness can tell you a lot about how you'll get along with them, if you're the type of person who enjoys being extremely kind. Specifically, people who reciprocate extreme kindness tend to get along well with people who give extreme kindness, as do people who socially or emotionally acknowledge that an act of kindness has been done, even without reciprocating. On the other hoof, the sort of people who have a habit of using extreme kindness don't tend to get along with the (say) half of the population consisting of people who are most likely to ignore or discredit extreme kindness.

In some sense, this is fairly obvious. The most surprising-for-me thing about using the reaction-to-extreme-kindness heuristic for predicting who I'll be good friends with, though, is how incredibly strong and accurate the heuristic is for me. It seems like 5 of the 6 individuals I feel closest to are in the top ~1% of people I've met at giving and receiving extreme kindness.

(Partial caveat: this heuristic doesn't work as well when another party strongly wants something from you, e.g. in some types of unhealthy dating contexts).

Comment by fluttershy on Open Thread, Aug. 22 - 28, 2016 · 2016-08-23T06:43:02.560Z · score: 4 (4 votes) · LW · GW

There was a lengthy and informative discussion of why many EA/LW/diaspora folks don't like Gleb's work on Facebook last week. I found both Owen Cotton-Barratt's mention of the unilateralist's curse, and Oliver Habryka's statement that people dislike what Gleb is doing largely because of how much he's done to associate himself with rationality and EA, to be informative and tactful.

Willpower Thermodynamics

2016-08-16T03:00:58.263Z · score: 5 (6 votes)
Comment by fluttershy on Open Thread, Aug. 8 - Aug 14. 2016 · 2016-08-09T03:04:22.255Z · score: 1 (1 votes) · LW · GW

I've noticed that old-money types will tend to cooperate in this sort of publication-based dilemma more frequently, for cultural reasons: to them, not cooperating would be a failure to show off their generosity.

To give a real-life example, I've often seen my parents' friends "fighting over the check" when they all eat together, while I've never seen new-money types of similar net worth do this outside of romantic contexts.

Comment by fluttershy on Open thread, Jun. 13 - Jun. 19, 2016 · 2016-06-13T20:35:53.071Z · score: 2 (4 votes) · LW · GW

I've noticed that my System 1 discounts arguments for points that benefit the speaker even more heavily when the speaker sounds prideful, or like they're trying to grab status that isn't due to them, than when the speaker sounds humble.

I've also noticed that my System 1 has stopped liking the idea of donating to certain areas of EA quite as much after people who exclusively champion those causes have somehow been abrasive during a conversation I've listened to.

Comment by fluttershy on Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas · 2016-06-12T11:06:18.895Z · score: 2 (4 votes) · LW · GW

I believe that you'll need to attend a CFAR workshop ($3,900 without a scholarship) to receive a subscription to the CFAR mailing list. I'd be willing to pay some amount just to get added to it, since I already have a CFAR workbook, and am relatively familiar with the material taught during the workshops.

Comment by fluttershy on 2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics) · 2016-05-18T19:41:40.694Z · score: 1 (1 votes) · LW · GW

Good job on catching that, and thank you for mentioning it :)

Comment by fluttershy on Open Thread May 16 - May 22, 2016 · 2016-05-18T04:34:39.500Z · score: 1 (1 votes) · LW · GW

I'm curious that you know others with rationality-themed tattoos, too; do they either live in your area, or work for Intentional Insights? I hadn't been aware that people had these sorts of tattoos at all.

Comment by fluttershy on 2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics) · 2016-05-17T17:48:40.667Z · score: 1 (1 votes) · LW · GW

Hat tip goes to an anonymous friend of mine who had been playing around with the survey data, and noticed that all MtF and FtM trans survey respondents reported being bisexual.

Comment by fluttershy on Open Thread May 2 - May 8, 2016 · 2016-05-05T17:36:13.963Z · score: 0 (0 votes) · LW · GW

Neither would necessarily the handling of some kind of lab equipment, if there was some clear documentation available for you

In practice, learning to handle certain lab equipment outside of an institutional context is sometimes hard because it's much easier to break expensive stuff if you don't have someone looking over your work the first few times you do something. Of course, you qualified your above statement quite well, so you haven't said anything incorrect. :)

Comment by fluttershy on Open Thread April 25 - May 1, 2016 · 2016-04-28T02:12:00.863Z · score: 3 (3 votes) · LW · GW

You're very good at using ponies for that purpose, and have a strong track record to prove it. <3

Comment by fluttershy on Open Thread April 25 - May 1, 2016 · 2016-04-27T18:00:43.307Z · score: 2 (2 votes) · LW · GW

You took that criticism quite well.

This comment was quite funny, because of the mental picture it evoked; using ponies can sometimes be a high variance strategy (which is sometimes a reason to not use ponies, sadly). ;)

Comment by fluttershy on Gratitude Thread :-) · 2016-04-19T23:28:46.800Z · score: 5 (5 votes) · LW · GW

Perhaps the way in which you helped this person would have been valued more highly in one of the monthly bragging threads? If you had posted there, getting the upvotes you deserve for helping a friend make a tough choice would merely have been a matter of wording.

Also, many of us who read Hanson will have developed the intuition that gratitude is about giving others status, while bragging is about giving yourself status.

Comment by fluttershy on An update on Signal Data Science (an intensive data science training program) · 2016-04-12T02:22:53.243Z · score: 1 (1 votes) · LW · GW

I've already had versions of this conversation with Robert and Jonah in person, but I'll reiterate a few things I shared with them here, since you asked politely. Also, this conversation is becoming aversive to me, so it will become increasingly difficult for me to respond to your comments as we get farther and farther down this comment chain.

specific examples of times when Jonah's explanations were too abstract and not sufficiently practical?

There were actually multiple times during the first couple of weeks when I (or my partner and I) would spend 4+ hours trying to fix one particular line of code, and Jonah would give big-picture answers about e.g. how linear regression worked in theory, when what I'd asked for were specific suggestions on how to fix that line of code. This eventually led me to give up on asking Jonah for help.

what are some specific topics that you think were neglected in favor of more abstract but less applicable material?

Intermediate and advanced SQL, practice of certain social skills (e.g. handshakes, being interested in your interviewer, and other interview-relevant social skills), and possibly nonlinear models.

Comment by fluttershy on An update on Signal Data Science (an intensive data science training program) · 2016-04-10T05:23:41.623Z · score: 2 (2 votes) · LW · GW

I think it is better to assess personal fit for the bootcamp.

Yes, this is correct.

Pair programming was not always optimal due to the large degree of differences between students.

You're good at socializing and very pleasant to be around, and didn't generally have problems finding pair programming partners when you wanted to work with someone. I'm shy, and couldn't even find anyone who wanted to pair program with me most days, even though I was generally interested in working with others, and often asked Jonah or other students if anyone wanted to work together.

Comment by fluttershy on An update on Signal Data Science (an intensive data science training program) · 2016-04-10T05:05:42.835Z · score: 3 (3 votes) · LW · GW

Again, your perception of the instructors' competencies may have been the result of a mismatch between the sort of environment the program was trying to offer and the sort of environment you were hoping for.

This actually sounds about right.

I think that I care more about job-preparedness, potential for impact, and preparing people for being able to earn-to-give or do direct EA work. I think that Robert also cares about those things, which is why I liked his weekly interview sessions, as I mentioned above.

However, I didn't get the sense that Jonah, the instructor for the first cohort, really cared about these things quite as much. Jonah strikes me as an intelligent individual whose heart is in academia, rather than in data science or industry. This was quite problematic, because, among other reasons, it meant that even his explanations of grittier things were too focused on the big picture, and too spare on details for some people to figure out how to actually do the thing at all. It also skewed the distribution of topics taught away from things relevant to industry.

Dry Ice Cryonics- Preliminary Thoughts

2015-09-28T07:00:03.440Z · score: 8 (9 votes)

Effects of Castration on the Life Expectancy of Contemporary Men

2015-08-08T04:37:52.592Z · score: 2 (28 votes)

Efficient Food

2015-04-06T05:20:11.307Z · score: 4 (5 votes)

Tentative Thoughts on the Cost Effectiveness of the SENS Foundation

2015-01-04T02:58:53.627Z · score: 4 (7 votes)

Expansion on A Previous Cost-Benefit Analysis of Vaccinating Healthy Adults Against Flu

2014-11-12T04:36:50.139Z · score: 3 (6 votes)

A Cost- Benefit Analysis of Immunizing Healthy Adults Against Influenza

2014-11-11T04:10:27.554Z · score: 16 (24 votes)