Comments
To clarify, I also don't think EA has much potential as a social movement even if marketed properly. Specific EA beliefs are much more spreadable memes down the line IMO.
Yup - although in the case of EA, that's still likely to be a very slow process. This isn't the sort of thing that can go viral. It takes months or years of cultivation before someone goes from complete outsider to core member.
If you're talking about recruiting new EAs, it sounds like you mean people who agree enough with the entire meme set that they identify as EAs. Have there been any polls on what percentage of self-identifying EAs hold which beliefs? It seems like the type of low-hanging fruit .impact could pick. That poll would give you an idea of how common it is for EAs to believe only small portions of the meme set. I expect that people agree with the majority of the meme set before identifying as EA. I believe a lot more of it than most people do, and I only borderline identify as EA.
So you expect movement building / outreach to be a lot less successful than community building ("inreach", if you will)?
Yes, especially if the same strategies are expected to accomplish both. They're two very different tasks.
Some of this comes down to what counts as an "EA". What kind of conversion do we need to do, and how much? I also think I'll be pretty unsuccessful at getting new core EAs, but what can I get? How hard is it? These are things I'd like to know, and things I believe would be valuable to know.
I think you can convince people to give more of their money away, you can convince people to take the effectiveness of a charity into account, you can convince people to care more about animals or to stop eating meat, and possibly that there are technological risks greater than climate change and nuclear war. I don't think you'll convince the same person of all of these things. Rather, there will be individuals who are on board with specific parts and who may or may not identify with EA.
I'm saying it helps with retention but barely at all with recruitment - and that it may even get in the way of recruitment of casual EAs. I don't think Skillshare favours will make people want to self-identify as EA. Only a minority of people even require the sorts of favours being offered.
"A stronger community for the effective altruist movement should better encourage existing EAs to contribute more and better attract new people to consider becoming EA. By building the EA Community, we hope to indirectly improve recruitment and retention in the effective altruist movement, which in turn indirectly results in more total altruistic effort, in turn resulting in more reduced suffering and increased happiness."
I'm going to predict that .impact struggles to meet this objective.
I think you're taking a naive view of how movement building works.
I think you need to see the distinction between retaining and recruiting members as analogous to the tension between a core and a casual fan base. In order to recruit new EAs, your pitch will almost definitely have to downplay certain areas that many core EAs spend lots of time thinking about. That way, you'll bring in a lot of new people who, for example, buy the argument that you should donate to the charity that provides the most bang for your buck and yet still, for example, have zero interest in AI or animals. If you refuse to downplay core EA member values in order to get more casual EAs (e.g. people who donate to GiveWell's top charities and give a bit more than average) then, well, that's admirable, I guess, but your movement building won't go anywhere. There's a reason why for-profit organizations do this - it actually works.
The number of people who share most EA values is going to remain low for a very long time. Increasing that number wouldn't involve "recruitment" so much as full-on conversion. As long as your goal is to increase that number, you're going to see very low recruitment rates. Most people aren't in the market for new worldviews - though maybe for individual new beliefs or values. And if you don't agree with a worldview, you aren't going to join the community just because it's active.
If you want more "total altruistic effort," go convince people to show more altruistic effort. Trying to movement build a group as complex and alienating as EA by strengthening its internal ties will dissuade most outsiders from wanting to join you. Pre-existing communities can be scary things to self-identify with.
You know how some parents make their kids try cigarettes at a young age so that they'll hate it and then not want to smoke when they're older? Well, a website like Brian Tomasik's is like that for most potential EAs. Way too much, too soon.
Cool. Will there be a lot of overlap with Intuition Pumps and Other Tools for Thinking? Based on your description, it sounds like Dennett just wrote this book for you.
You're right. I think scientific thinkers can sometimes misinterpret skepticism as meaning that nothing short of peer-reviewed, well-executed experiments can be considered evidence. I think sometimes anecdotal evidence is worth taking seriously. It isn't the best kind of evidence, but it falls above 0 on the continuum.
The good news is that our higher cognitive abilities also allow us to overcome depression in many situations. In Stumbling on Happiness, Daniel Gilbert explains how useful it is that we can rationalize away bad events in our lives (such as rejection). This capability, which Gilbert refers to as our psychological immune system, explains why people are able to bounce back from negative events much more quickly than they expect to.
I think speaking in terms of probabilities also clears up a lot of epistemological confusion. "Magical" thinkers tend to believe that a lack of absolute certainty is more or less equivalent to total uncertainty (I know I did). At the same time, they'll understand that a 50% chance is not a 99% chance even though neither of them is 100% certain. It might also be helpful to point out all the things they are intuitively very certain of (that the sun will rise, that the floor will not cave in, that the carrot they put in their mouth will taste like carrots always do) but don't have absolute certainty of. I think it's important to make clear that you agree with them that we don't have absolute certainty of anything and instead shift the focus toward whether absolute certainty is really necessary in order to make decisions or claim that we "know" things.
"Keeping busy" is based mainly on my personal experience and from what I've heard other people say. But in the book Flow: The Psychology of Optimal Experience (which I didn't cite because I assume you're familiar with it), it's suggested based on self-reports on subjective wellbeing that people are, on average, happier while at work than they are in their leisure time - even though they don't feel as if this is the case.
In Stumbling on Happiness, Daniel Gilbert also suggests that when making decisions about the future, we should rely less on our own speculations about how we'll feel and more on the reports of those with experience. This isn't a way of treating depression so much as a way of making decisions that better keep our future selves happy.
If you start a PR campaign about AI risk that results in bringing a lot of luddites into the AGI debate, it might become harder, not easier, for MIRI to convince AI researchers to treat UFAI as a serious risk, because the average AI person might think the luddites oppose AGI for all the wrong reasons. He's not a luddite, so why should he worry about UFAI?
Fair enough. I still believe there could be benefits to gaining wider support but I agree that this is an area that will be mainly determined by the actions of elite specialized thinkers and the very powerful.
I also see something that contradicts the goal you laid out above. You said you wanted to spread the meme "Belief without evidence is bad." If you start pushing memes because you like their effects and not because they are supported by good evidence, you don't get "Belief without evidence is bad."
I'm not sure I see a contradiction there. I can see that if I say things that aren't true and people believe them just because I said them, that would be believing without evidence. But "belief without evidence is bad" doesn't have to be true 100% of the time in order for it to be a good, safe meme to spread. If your argument is that the spreading of "Utility > Truth" interferes with "Belief without evidence is bad" so that the two will largely cancel out, then (1) I didn't include "Utility > Truth" on my incomplete list of safe memes precisely because I don't think it's safe and (2) the argument would only be persuasive if the two memes usually interfered with each other, which I don't think is the case. In most situations, people knowing the truth is a really desirable thing. Journalism and marketing are exceptions where it could make sense to oversimplify a message in order for laypeople to understand it, hence making the meme less accurate but more effective at getting people interested (in which case, they'll hopefully continue researching until they have a more accurate understanding). Also, (3) even if two memes contradict each other, using both in tandem could theoretically yield more utilons than using either one alone (or neither), though I'd expect examples to be rare.
By the way, I emailed Adbusters about if/how they measure the effectiveness of their culture jamming campaigns. I'll let you know when I get a response.
Finding flow activities seems to be a good one.
Also - keeping busy?
Oprah doesn't need everyone to like her. She wants the largest viewership possible. MIRI doesn't need everyone to support it. It wants the most supporters possible.
They don't need to appeal to everyone, but they probably should appeal to a wider audience than they currently do (as evidenced by there being only ~10 FAI researchers in the world) - and a different audience requires a different presentation of the ideas in order to be optimally effective.
I don't think pointing new people toward Less Wrong would be as effective as just creating a new pitch just for "ordinary people." Luke's Reddit AMA, Singularity 1-on-1 interview, and Facing the Singularity ebook were pretty good for this but it doesn't seem like many x-risk researchers have put much energy into marketing themselves to the broader public. (To be fair, in doing so, they might do more harm than good.)
The kinds of memes we want to push are more complex. I also don't know whether we have actually decided which memes we want to push. I personally don't know enough about FAI to be confident in deciding which memes benefit the agendas of MIRI and FHI. If MIRI wants more PR, the first step would be to articulate what kinds of memes it actually wants to transmit to a broader public.
This was one of the suggestions in my post. :) Though I'm not sure it's possible to communicate about AI and only spread "complex" memes. I think about memes more in terms of positive and negative effects rather than in terms of their accuracy.
"Nudge" says more people means more eating.
http://ajcn.nutrition.org/content/90/2/282.full.pdf
http://europepmc.org/articles/PMC2276563/
This article suggests it's more complicated:
http://www.ncbi.nlm.nih.gov/pubmed/14599286
It's also believed that obesity is contagious across social networks:
http://dash.harvard.edu/bitstream/handle/1/3710802/Christakis_SpreadofObesity.pdf?sequence=2
But this article suggests otherwise:
I think we might be using different definitions of "radical" and "moderate, socially acceptable." I'm not referring to things that massively impact society, but to things that clash with widely held values and attitudes.
3D printing doesn't strike me as an idea most people negatively associate with "radical." More importantly, even if it was, it is possible to present a "radical" or "weird" or "unfamiliar" idea in a way that it appears not to clash with people's values and attitudes.
That's what I suggest people be cautious to do. When you tell the average American that you fear an artificial agent will destroy humanity in this century, you are going to get mainly aversive reactions - in a way that you won't get by telling people you think 3D printing will revolutionize our socioeconomic structure. Do you disagree with that?
That doesn't doom FAI research to eternal neglect, it just means FAI outreach people need to be cognizant of the fact that they're fighting an uphill battle toward persuasion that most outreach and marketing campaigns don't have to face. As a result, it's important to frame FAI as something consistent with most people's attitudes. That probably involves leaving out certain details. As Bre Pettis demonstrates with the school dropout point, there can be socially acceptable ways to express minority views. He "smuggled" that idea into his talk about a non-offensive subject.
I should also add that I wrote that topic with the idea of a media platform in mind (that's why I made the comparison to LW and not to individual posts on LW). So if you ran your own TV or radio station, I think it would be a better idea to use "Compromise and Smuggle" than to cover only subjects such as cryonics, transhumanism, wild animal suffering, etc. In the latter case, you may cover topics you believe to be more important, but your station will be too easy for people uninterested in those topics to ignore. If you include some status quo material, you can lure in some unsuspecting listeners who will also catch the less conventional stuff. I think it can be compared to a pharmacy taking a loss on a product ("Toothpaste, $0.25 a tube!") just to get customers into the store, where they'll likely buy other things off the shelves.
If you're just submitting an article to a pre-existing news source then, as you say, you don't really need to consider this. The mainstream content is probably the bulk of what they cover, so they'll welcome your unconventional post.
I have no idea about culture jamming's effectiveness. I read the book "Culture Jam" by Kalle Lasn, head of Adbusters, and it was pretty horrible. My impression is that it fuels cynicism and dissent. I support its existence because I think different tactics work on different people.
That sounds naive. If you ask yourself whether there is disinformation in the coverage of topic X in the mainstream media, the answer is "yes" no matter the issue. Journalists write stories under tight timetables without much time to fact-check. They are also under all sorts of other pressures that aren't about telling the truth as it happens to be.
Yes, it's ubiquitous, but some fields and issues are more affected than others, usually due to politicization. Tight timetables may apply to all stories but not all pressures do.
From what I personally experienced doing Quantified Self press interviews, I don't think that's the case. I see no reason why a journalist wouldn't want to report about effective altruism.
You're right that effective altruism isn't so radical that a broader public wouldn't take interest in it. I probably shouldn't have included it alongside existential risks and AGI. I'm editing my post to remove it from that sentence.
AGI is more complicated than being for or against it. We have specific objectives such as increasing FAI research that are complex issues. Making people associate AGI with depictions of ugly humanoids makes them model the problems of AGI wrong.
As you already suggested, oversimplification and distortion are routine parts of journalism. Limiting yourself to coverage that appropriately models the problems of AGI essentially means exiling yourself from the news sources that people unlike yourself want to read. My suggestion is also more of a cheap marketing trick or flourish than a full-on FAI outreach campaign. I'm not all that confident this trick would accomplish anything.
I don't see the point of Medium.com. Why should you focus effort on it? On the other hand, I agree that the Guardian is a good place to publish an article on a topic. Editing Wikipedia so that important topics get proper representation also seems effective.
Medium.com wasn't selected for being optimal - it's just a random example of a website you could post to that has a very different viewership. I agree that The Guardian and Wikipedia are better bets.
I can, however, point you to Ryan Holiday's book "Trust Me, I'm Lying: Confessions of a Media Manipulator".
Thanks. I'll check this out.
I believe this is pretty standard in media, communications, and journalism-ish programs. They don't call it "critical thinking" but they are definitely clear about the bias and tricks.
I think the reason for the downvotes is that people on LW have generally already formulated their ethical views past the point of wanting to speculate about entirely new normative theories.
Your post probably would have received a better reaction had you framed it as a question ("What flaws can you guys find in a utilitarian theory that values the maximization of the amount of computation energy causes before dissolving into high entropy?") rather than as some great breakthrough in moral reasoning.
As for constructive feedback, I think Creutzer's response was pretty much spot on. There are already mainstream normative theories like preference utilitarianism that don't directly value pain and pleasure and yet seem to make more sense than the alternatives you offered.
Also, your post is specifically about ethics in the age of superintelligence, but doesn't mention CEV. If you're going to offer a completely new theory in a field as well-trod as normative ethics, you need to spend more time debunking alternative popular theories and explaining the advantages yours has over them.
Jonah, do you think uncertainty about how to prioritize charities and causes is an argument for centralizing or for diversifying your donations?
Upvoted.
I meant to say that if you believe a scientific claim to be legitimate, there are (and should be) implications for other parts of your worldview. When we misjudge what the implications of a belief are, we can believe it while simultaneously rejecting something it implies. (That's what reductio ad absurdum arguments are for.)
I was under the impression that GPS was such a technology. I also don't see much room for reasonably believing in evolutionary medicine without accepting macro-evolution - but that's a bit of a stretch from my original point. After struggling to find examples, I'm going to downshift my probability of there being many around.
Right, tougher debate moderators could make it clearer what each candidate really believes by reducing deception and vagueness, but probably wouldn't have any effect on making straightforward dumb-but-popular views any less popular.
Good point, thanks. Skepticism of specific scientific claims is fully consistent with a "pro-science" outlook. I would maintain that people rejecting legitimate scientific claims often are inconsistent, though. Case in point: Young Earth Creationists who completely trust technology and medication that could only work if the scientific case for YEC is false.
Calling anti-vaccination people "anti-science" is a transparently bad persuasion tactic. Leave a social line of retreat.
Also, it probably isn't even true that they're anti-science. It's more likely their stances on science are inconsistent, trusting it to varying degrees in different situations depending on the political and social implications of declaring belief.
I'm not sure it would lead to better politicians so much as it would lead to politicians adapting their bullshit skills to better fit the new interview setup.
Many of the bullshit explanations politicians give are perceived as perfectly acceptable to the wider public.
MODERATOR: Should gay marriage be legal?
POLITICIAN: Nope.
MODERATOR: Why not?
POLITICIAN: It goes against the teachings of my religion. It says in passage X:YZ of the Bible that homosexuality is a sin. I refuse to go against the command of God in my time in office.
That answer is fine to many, maybe most, Americans. If the moderator presses the politician on his religious beliefs at this point, he comes off as biased, far too biased to be interviewing presidential candidates.
In general, though, I do think demanding more of politicians is a safe bet to be a Good Thing.
"In general, this suggests that we should give relatively more weight to tastes and values that we expect to be more universal among civilizations across the multiverse."
This is a pretty interesting idea to me, Brian. It makes intuitive sense but when would we apply it? Can it only be used as a tiebreaker? It's difficult for me to imagine scenarios where this consideration would sway my decision.
Spend it on other people.
http://www.uvm.edu/~pdodds/teaching/courses/2009-08UVM-300/docs/others/everything/dunn2008a.pdf