I have plenty of social status, and sufficient money, as a professor - I don't need more of either. In fact, I've donated about $38K to charity over the last two years. My goal is advancing EA ends. You can choose to believe me or not :-)
Never claimed to be - I have long argued for the most effective communication techniques to promote EA ends.
I don't believe I am wrong here. My rich uncle doesn't read Less Wrong, but people who have rich uncles do. If I can sway even a single individual to communicate effectively, rather than maximize transparency, when persuading people to give money effectively, I'll be glad to have done so.
You seem to be suggesting that I had previously advocated being as transparent as possible. On the contrary - I have long advocated for the most effective communication techniques to achieve EA ends.
Sarah's post highlights some of the essential tensions at the heart of Effective Altruism.
Do we care about "doing the most good that we can" or "being as transparent and honest as we can"? These are two different value sets. They will sometimes overlap, and in other cases will not.
And please don't say that "we do the most good that we can by being as transparent and honest as we can," or that being as transparent and honest as we can is best in the long term. Just don't. You're simply lying to yourself and to everyone else if you say that. If you can't imagine a scenario where "doing the most good that we can" and "being as transparent and honest as we can" are opposed, you're suffering from a failure mode: flinching away from the truth.
So when push comes to shove, which one do we prioritize? When we have to throw the switch and have the trolley crush either "doing the most good" or "being as transparent and honest as we can," which do we choose?
For a toy example, say you are talking to your billionaire uncle on his deathbed and trying to convince him to leave money to AMF instead of his current favorite charity, the local art museum. You know he would respond better if you exaggerate the impact of AMF. Would you do so, whether lying by omission or in any other way, in order to get much more money for AMF, given that no one else would find out about this situation? What about if you know that other family members are standing in the wings and ready to use all sorts of lies to advocate for their favorite charities?
If you do not lie, that's fine, but please don't pretend that doing the most good is your top priority. Just don't. You care about being as transparent and honest as possible more than about doing the most good.
If you do lie to your uncle, then you do care about doing the most good. You should, however, consider at what price point you would refuse to lie - at that point, we're just haggling.
The people quoted in Sarah's post (myself included) all highlight how doing the most good sometimes involves not being as transparent and honest as we can. Different people have different price points, that's all. We're all willing to bite the bullet and sometimes send the trolley over transparency and honesty - whether by questioning the value of public criticism, as Ben does, appealing to emotions, as Rob does, or using intuition as evidence, as Jacy does - for the sake of what we believe is the most good.
As a movement, EA has a big problem with believing that the ends never justify the means. Sometimes the ends do justify the means - at least if we care about doing the most good. We can debate whether we are mistaken about whether a given end justifies a given means, but using insufficient means to accomplish the ends is just as bad as using excessive means. If we are truly serious about doing the most good possible, we should let our end goal be the North Star and work backward from there, rather than hobbling ourselves with preconceived notions of "intellectual rigor" at the cost of doing the most good.
Thank you!
This is probably too complex to hash out in comments - lots of semantics issues and some strategic/tactical information that might be best to avoid discussing publicly. If you're interested in getting involved in the project and want to chat on Skype, email me at gleb [at] intentionalinsights [dot] org
We chose the issue of lies specifically because it is something a bunch of people can get behind opposing, across the political spectrum. Otherwise, we have to choose political virtues, and it's always a trade-off. So the two fundamental orientations of this project are utilitarianism and anti-lies.
FYI, we plan to tackle sloppy thinking too, as I did in this piece, but that's more complex, and it's important to start with simple messages first. Heck, if we can get people to realize the simple difference between truth and comfort, I'd be happy.
Agreed on the issues around measuring lies, and noted regarding the concession of the point - LW gold to you for highlighting it.
I hear you about "rationalism in politics." The public-facing aspect of this project will use terms like "post-lies movement" and so on. We're using "Rational Politics" as the internal, provisional name for now, while we gather allies and spread the word about the project rather than doing much public outreach.
I'm talking about prioritizing the good of the country as a whole, not necessarily distant strangers - although in my personal value stance, that would be nice. Like I said, it's an EA project :-)
At this point, I'm finished engaging with you, since you're clearly not making statements based on reality. Good luck with growing more rational!
I'm going with the official definition of post-truth here, and am comfortable standing by it.
Nice, didn't know that - thanks for pointing it out! I've updated slightly on the credibility of the NYTimes on this basis.
I see the current situation as one where liberals are, on the whole, closer to rational thinking than conservatives, but that hasn't always been the case. I don't know how this document would read if conservatives were the ones closer to rational thinking.
Regarding the Muslim issue, you might want to check out the radio interview I linked in the document. It shows very clearly how I got a conservative talk show host to update toward being nicer to Muslims.
If you're interested in participating in this project, email me at gleb [at] intentionalinsights [dot] org
Agree that the attempts to rid academia of conservatives are bad.
Can you be comfortable saying that Trump lies more often, and more blatantly, than prominent liberal politicians; usually does not back away from lies when called out; attacks the credibility of those who call him out on lies; focuses on appealing to emotions over facts; and tends to avoid providing evidence for assertions (such as that Russia was not behind the hack)? This is what is meant by post-truth in the Oxford Dictionaries definition of the term.
Yup, agreed that it may well not be wise for those who have racist beliefs to be open about them. The same applies to the global warming stuff.
This is why I say this is a project informed by EA values - it comes from the perspective that voting is like donating thousands of dollars to charity and that voters care about the public good. It's not meant to target those who don't care about the public good - just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and some of those folks would genuinely want the most good. The project is not meant to reach all, in other words - just that select slice.
Yup, agreed that it may well not be worthwhile for voters whose reasons for voting are not oriented toward the most social good to vote rationally. This is why I say this is a project informed by EA values - it comes from the perspective that voting is like donating thousands of dollars to charity. For those who are purely self-interested, it's really not rational to vote.
I am comfortable saying that my post is anti-post-truth politics. I think most LWers would agree that Trump relies more on post-truth tactics than other politicians. Note that I also called out Democrats for doing so.
Um, Breitbart News is hardly a credible site to use to attack PolitiFact. Besides, that citation also included the Washington Post and The New York Times - do you call them fake news as well?
This is described in the "How Is This Project Different From Others Trying To Do Somewhat Similar Things?" and "Do You Have Any Evidence That This Will Work?" sections in the document linked above - here's the link for convenience.
FYI: http://lesswrong.com/r/discussion/lw/ofi/rational_politics_project/
I hear you about the interesting articles.
This piece was not aimed at folks who want interesting articles, but to the smaller proportion of folks who are concerned about the election outcome and want to do something to help out.
I'm very comfortable with people downvoting my posts, if they reach the minority of folks receptive to them.
I was invited on a radio show to talk further about this piece: https://www.youtube.com/watch?v=RNXw6ifqcNg
A number of other venues republished this piece as well, showing general interest in making politics less irrational:
Salon
Fact-checking doesn’t matter: Human biases control whether or not we’re going to believe politicians
The Dallas Morning News in Dallas, Texas
It's not what Trump and Clinton say, but how they say it
http://www.dallasnews.com/opinion/commentary/2016/10/24/trup-clinton-say-say
Psychology Today
How Our Biases Cause Us To Misinterpret Politics
The Huffington Post
Fact-Checking Clinton And Trump Is Not Enough
Patheos
How Our Thinking Errors Cause Us To Misinterpret Politics
Globe Gazette in Mason City, Iowa
How thinking errors affect our views of candidates' statements
The Daily World in Aberdeen, Washington
How thinking errors affect our views of candidates’ statements
The Intelligencer / Wheeling News-Register in Wheeling, West Virginia
How thinking errors affect our views of candidates' statements
Thanks for your kind words about my insights on EA marketing - really appreciate it!
Regarding having InIn in the video, the goal is not to establish any sort of equivalence. In fact, it would be hard to compare the other organizations with each other as well: GiveWell, for instance, has a huge budget and vastly more staff than any of the other organizations mentioned in the video. The goal is to point people to various venues offering different types of information. For example, ACE is there for people who care about animal rights, and GWWC is there for people who want a community. InIn is there for people who want easy content to inform themselves about effective giving. This is why InIn is specifically discussed as a venue for getting content, not recommendations on effective charities and the like.
Also, please remember people's priors. This video is not aimed at EAs; the people who watch it will have no idea of the relative popularity of the various organizations. InIn would get fine credit within the EA community even if we had produced the video without including InIn itself. The goal is to provide a broad audience with a variety of sources of information about effective giving. We included InIn because it provides some types of content - such as this video - that other orgs do not; as you say, they have a different target group :-)
I like those other examples for labeling others, though - might be a nice general strategy to employ.
I agree that it does produce dissociation, but I don't think, for me, it's about dissociating from emotions. It's dissociation from an identity label. It helps keep my identity small in a way that speaks well to my System 1.
Weird works for me, and I actually associate positive value with weirdness, but of course your mileage may vary. Any term that viscerally signals distance from an identity label to one's System 1 will do, as Gram_Stone pointed out.
Agreed - to me it also makes no sense to do cash transfers to people with above-average income. I see basic income as mainly about a social safety net.
Here's my piece in Salon about updating my beliefs about basic income. The goal of the piece was to demonstrate the rationality technique of updating beliefs in the hard mode of politics. Another goal was to promote GiveDirectly, a highly effective charity, and its basic income experiment. Since it had over 1K shares in less than 24 hours and the comment section is surprisingly decent, I'm cautiously optimistic about the outcome.
Applying probabilistic thinking to fears about terrorism in this piece for the 16th-largest newspaper in the US, which reaches over 320K readers with its print edition and gets over 5 million hits per month on its website. The title was chosen by the newspaper and somewhat obscures the point. The article is written from a liberal perspective to fit the newspaper's general bent; its main aim is to convey the benefits of applying probabilistic thinking to evaluating political reality.
Edit: Updated somewhat based on conversation with James Miller here.
:-(
Consider reposting this on the EA Forum, might get more hits that way.
Speed Giving Games have people choose between two charities. Participants who come to the table get a 1-minute introduction to the concept of effective giving and to the two charities involved, and are then invited to vote for which of the two to support. Each vote sends a dollar to the chosen charity, sponsored by an outside party, usually The Life You Can Save. For this SGG, we chose GiveDirectly as the effective charity and the Mid-Ohio Food Bank as a local, less effective charity.
Will keep in mind about the photo, thanks for the feedback.
A videotaped virtual meeting on effective ways of marketing EA to a broad audience.
Yeah, I totally hear you about the file drawer effect, which is why I found two separate citations besides the Center for Policing Equity study cited in the piece - this one, and this one. One is a poll, and the other is a government statistical analysis of traffic stops that includes race information. Neither is the kind of source to which the file drawer effect (publication bias) would apply.
An article based on rationality-informed strategies of probabilistic thinking and de-anchoring to deal with police racial profiling. Note that the data on racial profiling is corrected for the higher rate of crimes committed by black people. This is a very by-the-numbers piece.
Eugine strikes again - this is creating a great deal of noise and erasing any signal of which posts are salient. Previously he mainly gave a single downvote; now he's doing ten at a time, if the -20 karma that appeared in the last hour on my two comments is anything to judge by. He also seems to be targeting not only posts he dislikes but specific people he dislikes, such as Elo and me. It makes it really hard to judge the quality of my posts, since who knows who is actually downvoting them. Frustrating.
I figured :-(
Also good to keep in mind this article by Danny Kahneman: "Why Moving to California Won’t Make You Happy".
BTW, sad to see this post downvoted, pretty good post.
This video discusses the most effective science-based strategies for communicating AI risk to a broad audience. It covers minimizing the inference gap, using emotional engagement, and avoiding pattern-matching to sci-fi narratives in favor of pattern-matching to unemployment narratives and other framings the audience would find realistic. It's unlisted, so it can be watched and shared only via the link. Feel free to pass it on to anyone you think might benefit from it.
Did some rationality-informed commenting for my university television about guns and racism.
An article in Psychology Today on map and territory and the fundamental attribution error, and another on the false consensus effect.
Agreed on the benefits of trying things, such as links and an additional Open section. That will give us additional data to go on.
For those interested in longevity research: on the Intentional Insights videocast, we interviewed the project leader and the outreach coordinator of the Major Mouse Testing Project, which focuses on advancing the science of longevity.
We also published a blog on strategies to resist impulsive temptations, which I think some here might find interesting.
Nice ideas! I think you highlighted well the fundamental problem: there are few social rewards for writing content for LW, and strong criticism awaits those who do.
Regarding changing things, I think it makes sense to work with people like Scott who have a lot of credibility, and figure out what would work for them.
However, LW itself also has a certain brand and attracts a sizable community. I would like to see a version of the voting system you described implemented here, with the votes of people who have more karma weighing more. I'd also like to see some cross-posting of content from Scott and others on LW itself.
So: not doing away with LW as it exists, but expanding it in collaboration with others interested in revitalizing a different form of LW - one where authors get appropriate credit for posting, and where credible people (those with lots of karma) can upvote them more heavily.
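To make the proposal concrete, here's a minimal sketch of what karma-weighted voting might look like, assuming a logarithmic weighting so high-karma users get more influence without dominating outright. The function names and the specific formula are hypothetical illustrations, not a spec for any actual LW system:

```python
import math

def vote_weight(karma: int) -> float:
    # Hypothetical weighting: influence grows with karma, but only
    # logarithmically, so high-karma users can't dominate outright.
    return 1.0 + math.log10(1 + max(karma, 0))

def post_score(votes: list[tuple[int, int]]) -> float:
    # votes: (voter_karma, direction) pairs, where direction is +1 or -1.
    # The post's score is the weight-adjusted sum of its votes.
    return sum(direction * vote_weight(karma) for karma, direction in votes)

# Example: two high-karma upvotes outweigh three low-karma downvotes.
print(post_score([(5000, +1), (2000, +1), (10, -1), (5, -1), (0, -1)]))  # ~4.18
```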
Interesting, I hadn't thought of it that way. The purpose of the threads is to gather in one place the things we do to advance rationality. I can see how it might pattern-match to bragging. What would be a better alternative for organizing rationality outreach efforts in one venue?
Perhaps this is something best for CFAR staff to determine rather than yourself - they have certain standards for scholarships.
Yeah, one of the big failure modes is that people think attending the workshop will magically result in internalizing all the benefits of the CFAR materials. It's vital to keep working on them afterward, as I described in my post. For instance, in about an hour I will attend a weekly Google hangout with CFAR staff following up on some of the materials from the workshop. I'm not sure how many others from the workshop will be there; we'll see. Besides, as Kaj_Sotala noted here, you can get your money back as well.