Against responsibility
post by Benquo · 2017-03-31T21:12:12.718Z · LW · GW · Legacy · 21 comments
I am surrounded by well-meaning people trying to take responsibility for the future of the universe. I think that this attitude – prominent among Effective Altruists – is causing great harm. I noticed this as part of a broader change in outlook, which I've been trying to describe on this blog in manageable pieces (and sometimes failing at the "manageable" part).
I'm going to try to contextualize this by outlining the structure of my overall argument.
Why I am worried
Effective Altruists often say they're motivated by utilitarianism. At its best, this leads to things like Katja Grace's excellent analysis of when to be a vegetarian. We need more of this kind of principled reasoning about tradeoffs.
At its worst, this leads to some people angsting over whether it's ethical to spend money on a cup of coffee when they might have saved a life, and others using the greater good as license to say things that are not quite true, socially pressure others into bearing inappropriate burdens, and make ever-increasing claims on resources without a correspondingly strong verified track record of improving people's lives. I claim that these actions are not in fact morally correct, and that people keep winding up endorsing those conclusions because they are using the wrong cognitive approximations to reason about morality.
Summary of the argument
- When people take responsibility for something, they try to control it. So, universal responsibility implies an attempt at universal control.
- Maximizing control has destructive effects:
  - An adversarial stance towards other agents.
  - Decision paralysis.
- These failures are not accidental, but baked into the structure of control-seeking. We need a practical moral philosophy to describe strategies that generalize better, and benefit from the existence of other benevolent agents, rather than treating them primarily as threats.
Responsibility implies control
In practice, utilitarianism as I see the people around me applying it seems to make two important moral claims:
- You - you, personally - are responsible for everything that happens.
- No one is allowed their own private perspective - everyone must take the public, common perspective.
The first principle is almost but not quite simple consequentialism. But note that it doesn't generalize: it's massive double-counting if each individual person is responsible for everything that happens. I worked through an example of the double-counting problem in my post on matching donations.
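To make the double-counting concrete, here's a toy sketch (the numbers and symbols are my own illustration, not taken from that post): suppose n donors jointly bring about an outcome worth V, and each donor, taking responsibility for everything, credits themselves with the whole thing. Then the total responsibility claimed is

\[
\underbrace{V + V + \cdots + V}_{n \text{ donors, each claiming the whole outcome}} \;=\; nV ,
\]

which overstates the value actually produced, V, by a factor of n; any consistent accounting has to divide credit so that the individual shares sum to V.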
The second principle follows from the first one. If you think you're personally responsible for everything that happens, and obliged to do something about that rather than weigh your taste accordingly – and you also believe that there are ways to have an outsized impact (e.g. that you can reliably save a life for a few thousand dollars) – then in some sense nothing is yours. The money you spent on that cup of coffee could have fed a poor family for a day in the developing world. It's only justified if the few minutes you save somehow produce more value.
One way of resolving this is simply to decide that you're entitled to only as much as the global poor, and try to do without the rest to improve their lot. This is the reasoning behind the notorious demandingness of utilitarianism.
But of course, other people are also making suboptimal uses of resources. So if you can change that, then it becomes your responsibility to do so.
In general, if Alice and Bob both have some money, and Alice is making poor use of money by giving to the Society to Cure Rare Diseases in Cute Puppies, and Bob is giving money to comparatively effective charities like the Against Malaria Foundation, then if you can cause one of them to have access to more money, you'd rather help Bob than Alice.
There's no reason for this to be different if you happen to be one of Alice and Bob yourself. And since you've already rejected your own private right to hold onto things when there are stronger global claims on them, there's no principled reason not to try to reallocate resources from the other person to you.
What you're willing to do to yourself, you'll be willing to do to others. Respecting their autonomy becomes a mere matter of either selfishly indulging your personal taste for "deontological principles," or a concession made because they won't accept your leadership if you're too demanding - not a principled way to cooperate with them. You end up trying to force yourself and others to obey your judgment about what actions are best.
If you think of yourself as a benevolent agent, and think of the rest of the world and all the people in it as objects with regular, predictable behaviors you can use to improve outcomes, then you'll feel morally obliged - and therefore morally sanctioned - to shift as much of the locus of control as possible to yourself, for the greater good.
If someone else seems like a better candidate, then the right thing to do seems like throwing your lot in with them, and transferring as much as you can to them rather than to yourself. So this attitude towards doing good leads either to personal control-seeking, or support of someone else's bid for the same.
I think that this reasoning is tacitly accepted by many Effective Altruists, and explains two seemingly opposite things:
- Some EAs get their act together and make power plays, implicitly claiming the right to deceive and manipulate to implement their plan.
- Some EAs are paralyzed by the impossibility of weighing the consequences for the universe of every act, and collapse into perpetual scrupulosity and anxiety, mitigated only by someone else claiming legitimacy, telling them what to do, and telling them how much is enough.
Interestingly, people in the second category are somewhat useful for people following the strategy of the first category, as they demonstrate demand for the service of telling other people what to do. (I think the right thing to do is largely to decline to meet this demand.)
Objectivists sometimes criticize "altruistic" ventures by insisting on Ayn Rand's definition of altruism as the drive to self-abnegation, rather than benevolence. I used to think that this was obnoxiously missing the point, but now I think this might be a fair description of a large part of what I actually see. (I'm very much not sure I'm right. I am sure I'm not describing all of Effective Altruism – many people are doing good work for good reasons.)
Control-seeking is harmful
You have to interact with other people somehow, since they're where most of the value is in our world, and they have a lot of causal influence on the things you care about. If you don't treat them as independent agents, and you don't already rule over them, you will default to going to war against them (and more generally trying to attain control and then make all the decisions) rather than trading with them (or letting them take care of a lot of the decisionmaking). This is bad because it destroys potential gains from trade and division of labor, because you win conflicts by destroying things of value, and because even when you win you unnecessarily become a bottleneck.
People who think that control-seeking is the best strategy for benevolence tend to adopt plans like this:
Step 1 – acquire control over everything.
Step 2 – optimize it for the good of all sentient beings.
The problem with this is that step 1 does not generalize well. There are lots of different goals for which step 1 might seem like an appealing first step, so you should expect lots of other people to be trying it, and their interests will all be directly opposed to yours. Your methods will be nearly the same as those of someone with a different step 2. You'll never get to step 2 of this plan; it's been tried many times before, and it has failed every time.
Lots of different types of people want more resources. Many of them are very talented. You should be skeptical about your ability to win without some massive advantage. So, what you're left with are your proximate goals. Your impact on the world will be determined by your means, not your ends.
What are your means?
Even though you value others' well-being intrinsically, when pursuing your proximate goals, their agency mostly threatens to muck up your plans. Consequently, it will seem like a bad idea to give them info or leave them resources that they might misuse.
You will want to make their behavior more predictable to you, so you can influence it better. That means telling simplified stories designed to cause good actions, rather than to directly transmit relevant information. Withholding, rather than sharing, information. Message discipline. I wrote about this problem in my post on the humility argument for honesty.
And if the words you say are tools for causing others to take specific actions, then you're corroding their usefulness for literally true descriptions of things far away or too large or small to see. Peter Singer's claim that you can save a life for hundreds of dollars by giving to developing-world charities no longer means that you can save a life for hundreds of dollars by giving to developing-world charities. It simply means that Peter Singer wants to motivate you to give to developing-world charities. I wrote about this problem in my post on bindings and assurances.
More generally, you will try to minimize others' agency. If you believe that other people are moral agents with common values, then e.g. withholding information means that the friendly agents around you are more poorly informed, which is obviously bad, even before taking into account trust considerations! This plan only makes sense if you basically believe that other people are moral patients, but that independent, friendly agents do not exist - that you are the only person in the world who can be responsible for anything.
Another specific behavioral consequence is that you'll try to acquire resources even when you have no specific plan for them. For instance, GiveWell's impact page tracks costs they've imposed on others – money moved, and attention in the form of visits to their website – but not independent measures of outcomes improved, or the opportunity cost of people who made a GiveWell-influenced donation. The implication is that people weren't doing much good with their money or time anyway, so it's a "free lunch" to gain control over these. (Their annual metrics report goes into more detail and does track this, finding that about a quarter of GiveWell-influenced donations were reallocated from other developing-world charities, and another quarter from developed-world charities.) By contrast, the Gates Foundation's Valentine's Day report to Warren Buffett tracks nothing but developing-world outcomes (but then absurdly takes credit for 100% of the improvement).
As usual, I'm not picking on GiveWell because they're unusually bad – I'm picking on GiveWell because they're unusually open. You should assume that similar but more secretive organizations are worse by default, not better.
This kind of divergent strategy doesn't just directly inflict harms on other agents. It takes resources away from other agents that aren't defending themselves, which forces them into a more adversarial stance. It also earns justified mistrust, which means that if you follow this strategy, you burn cooperative bridges, forcing yourself farther down the adversarial path.
I've written more about the choice between convergent and divergent strategies in my post about the neglectedness consideration.
Simple patches don't undo the harms from adversarial strategies
Since you're benevolent, you have the advantage of a goal in common with many other people. Without abandoning your basic acquisitive strategy, you could try to have a secret handshake among people trying to take over the world for good reasons rather than bad. Ideally, this would let the benevolent people take over the world, cooperating among themselves. But, in practice, any simple shibboleth can be faked; anyone can say they're acquiring power for the greater good.
It's commonplace in discussions among Effective Altruists, when someone identifies an individual or organization doing important work, to suggest that we "persuade them to become an EA" or "get an EA into the organization", rather than to talk directly about ways to open up a dialogue and cooperate. This is straightforwardly an attempt to get them to agree to the same shibboleths in order to coordinate on a power-grabbing strategy. And yet, the standard of evidence we're using is mostly "identifies as an EA".
When Gleb Tsipursky tried to extract resources from the Effective Altruism movement with straightforward low-quality mimesis, mouthing the words but not really adding value, and grossly misrepresenting what he was doing and his level of success, it took EAs a long time to notice the pattern of misbehavior. I don't think this is because Gleb is especially clever, or because EAs are especially bad at noticing things. I think this is because EAs identify each other by easy-to-mimic shibboleths rather than meaningful standards of behavior.
Nor is Effective Altruism unique in suffering from this problem. When the Roman Empire became too big to govern, emperors gradually hit upon the solution of dividing the empire in two and picking someone to govern the other half. This occasionally worked very well, when the two emperors had a strong preexisting bond, but generally they distrusted each other enough that the two empires behaved like rival states as often as they behaved like allies. Even though both emperors were Romans, and often close relatives!
Using "believe me" as our standard of evidence will not work out well for us. The President of the United States seems to have followed the strategy of saying the thing that's most convenient, whether or not it happens to be true, and won an election based on this. Others can and will use this strategy against us.
We can do better
The above is all a symptom of not including other moral agents in your model of the world. We need a moral theory that takes this into account in its descriptions (rather than having to do a detailed calculation each time), and yet is scope-sensitive and consequentialist the way EAs want to be.
There are two important desiderata for such a theory:
- It needs to take into account the fact that there are other agents who also have moral reasoning. We shouldn't be sad to learn that others reason the way we do.
- Graceful degradation. We can't be so trusting that we can be defrauded by anyone willing to say they're one of us. Our moral theory has to work even if not everyone follows it. It should also degrade gracefully within an individual – you shouldn't have to be perfect to see benefits.
One thing we can do now is stop using wrong moral reasoning to excuse destructive behavior. Until we have a good theory, the answer is we don't know if your clever argument is valid.
On the explicit and systematic level, the divergent force is so dominant in our world that sincere benevolent people simply assume, when they see someone overtly optimizing for an outcome, that this person is optimizing for evil. This leads perceptive people who don't like doing harm, like Venkatesh Rao, to explicitly advise others to minimize their measurable impact on the world.
I don't think this impact-minimization is right, but on current margins it's probably a good corrective.
One encouraging thing is that many people using common-sense moral reasoning already behave according to norms that respect and try to cooperate with the moral agency of others. I wrote about this in Humble Charlie.
I've also begun to try to live up to cooperative heuristics even if I don't have all the details worked out, and to help my friends do the same. For instance, I'm happy to talk to people making giving decisions, but usually I don't go any farther than connecting them with people they might be interested in, or coaching them through heuristics. Doing more would be harmful: it would destroy information, and I'm not omniscient – otherwise I'd be richer.
A movement like Effective Altruism, explicitly built around overt optimization, can only succeed in the long run at actually doing good with (a) a clear understanding of this problem, (b) a social environment engineered to robustly reject cost-maximization, and (c) an intellectual tradition of optimizing only for actually good things that people can anchor on and learn from.
This was only a summary. I don't expect many people to be persuaded by this alone; I'm going to fill in the details in future posts. If you want to help me write things that are relevant, you can respond to this (preferably publicly), letting me know:
- What seems clearly true?
- Which parts seem most surprising and in need of justification or explanation?
(Cross-posted at my personal blog.)
21 comments
comment by RomeoStevens · 2017-04-02T22:18:55.249Z · LW(p) · GW(p)
After digesting for a few days, my intuitive response is to add the handle 'virtue fatigue' to this concept cluster. Virtues are a means by which the commons are policed. When you have runaway virtue signaling, this is essentially defecting against the commons. You get what you want from scrupulous people who take public virtues seriously in the short term, but create virtue fatigue in the long run as more and more gets piled onto this working behavioral-modification channel. Eventually the channel fails. This might turn ugly.
↑ comment by Benquo · 2017-04-03T03:01:32.498Z · LW(p) · GW(p)
This seems closely related to the controversy around the GWWC pledge, and more generally to nerds' complaints about social rules that are phrased as absolutes, with a claim of 100% applicability, but are not actually meant to be taken literally.
comment by gjm · 2017-04-01T12:44:50.531Z · LW(p) · GW(p)
This is utterly tangential to the main point, but:
When Gleb Tsipursky tried to extract resources from the Effective Altruism movement with straightforward low-quality mimesis, mouthing the words but not really adding value, and grossly misrepresenting what he was doing and his level of success, it took EAs a long time to notice the pattern of misbehavior.
Is this really true? What I saw of Gleb was mostly his appearances here on LW, where so far as I can tell he was widely mistrusted (and specific misbehaviours identified and called out) from very early on. Did he have more success in the EA community at large? Or was it more that it took a long time to gather the willingness to be rude enough about what he was doing?
↑ comment by Benquo · 2017-04-02T21:31:26.432Z · LW(p) · GW(p)
He was accepted into the most recent EA Global conference and tried to raise funds for Intentional Insights there, with the benefit of the implied social proof. That's what inspired Jeff Kaufman's first post about him. In the resulting controversy, multiple people told me that he'd been showing up in various EA/Rationality venues, pushily trying to get people to do his thing, and that no one had bothered to try to create common knowledge that there was a pattern of problematic behavior. But in the same controversy, some EA leaders argued that we were rushing to judgment too fast.
I think he got some volunteer hours out of the EA community, and he certainly got a lot of time from prominent EAs.
Likewise with ACE: some individuals knew that they had misleading stuff up, and kept pointing it out to ACE staff in semi-public internet venues, but the organization's estimates and recommendations continued to be taken seriously in public for quite a while, the org got invited to EA leadership events, etc.
(Double counting caveat: one of the EA leaders defending Gleb was part of ACE at the time.)
↑ comment by bogus · 2017-04-02T22:18:55.824Z · LW(p) · GW(p)
He was accepted into the most recent EA Global conference and tried to raise funds for Intentional Insights there, with the benefit of the implied social proof.
I'm not sure what makes this such a pressing issue. While InIn may not be, strictly speaking, an "EA" organization, they do self-identify as promoters of "effective giving". Many EA organizations are in fact expending resources on 'outreach' goals that seemingly do not differ in any meaningful way from InIn's broad mission. Where InIn differ most markedly is in their methods; such as focusing much of their outreach on third-world countries where messages can be delivered most cheaply, and where the need for people to make effective career choices is that much starker, given the opportunity for "doing good" locally by addressing a vast amount of currently-neglected issues.
↑ comment by Benquo · 2017-04-03T05:59:44.863Z · LW(p) · GW(p)
Did you follow the link to the roundup of Gleb's and InIn's pattern of deception?
↑ comment by bogus · 2017-04-03T19:28:14.201Z · LW(p) · GW(p)
The article you link to doesn't mention either "pattern" or "deception" in its text - at least, not in any relevant sense. It looks more like a laundry list of purported concerns; I'm ready to believe that some of these concerns might be more justified than others - but that still does not convince me that there's a relevant 'pattern' here.
↑ comment by bogus · 2017-04-01T14:41:05.324Z · LW(p) · GW(p)
Is this really true?
Not really. Gleb has plenty of unconventional ideas, so it's all too easy to make outlandish claims about the nefarious things he's supposedly doing. But I see no evidence at all that he's been "trying to extract resources from the Effective Altruism movement". If anything, much of his more contentious activity focuses on expanding the effective-giving movement (and thus, its resource base) well beyond its traditional audience of WEIRD folks with a narrow focus on STEM or rationality.
comment by PhilGoetz · 2017-04-04T04:34:03.947Z · LW(p) · GW(p)
Great post, and especially appropriate for LW. I add the proviso that you may in some cases be making the most-favorable interpretation rather than the correct interpretation.
I know one person on LessWrong who has talked himself into overwriting his natural morality with his interpretation of rational utilitarianism. This ended up giving him worse-than-human morality, because he assumes that humans are not actually moral--that humans don't derive utility from helping others. He ended up convincing himself to do the selfish things that he thinks are "in his own best interests" in order to be a good rationalist, even in cases where he didn't really want to be selfish--or wouldn't have, before rewriting his goals.
↑ comment by MrCogmor · 2017-04-04T11:58:03.562Z · LW(p) · GW(p)
It sounds less like he rewrote his natural morality and more like he engaged in a lot of motivated reasoning to justify his selfish behaviour. Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains. The idea that other people don't have worth because they aren't as prosocial as you is not Rational Utilitarianism (especially when you aren't actually prosocial because you don't value other people).
If whoever it is can't feel much sympathy for people in distant countries then that is fine; plenty of people are like that. The good thing about consequentialism is that it doesn't care about why. You could do it for self-esteem, social status, empathy or whatever, but you still save lives either way. Declaring yourself a Rational Utilitarian and then not contributing is just a dishonest way of making yourself feel superior. To be a Rational Utilitarian you need to be a rationalist first, and that means examining your beliefs even when they are pleasant.
↑ comment by PhilGoetz · 2017-04-04T19:46:10.089Z · LW(p) · GW(p)
Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains.
No; I object to your claiming the term "rational" for that usage. That's just plain-old Utilitarianism 1.0 anyway; it doesn't take a modifier.
Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals. You can't call trying to achieve the greatest good for the greatest number of people "rational" for an evolved organism.
↑ comment by MrCogmor · 2017-04-04T22:20:31.567Z · LW(p) · GW(p)
Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains.
Rationality is the art of making better decisions in service to a goal, taking into account imperfect information and the constraints of our mental hardware. When applied to utilitarianism you get posts like this: Nobody is perfect, everything is commensurable
Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals.
I don't see how this follows. Evolutionary psychology provides some explanations for our intuitions and instincts that the majority of humans share, but that doesn't really say anything about morality, as Is Cannot Imply Ought. Some quotes from the wiki page on evolutionary psychology:
We are optimized for an "ancestral environment" (often referred to as EEA, for "environment of evolutionary adaptedness") that differs significantly from the environments in which most of us live. In the ancestral environment, calories were the limiting resource, so our tastebuds are built to like sugar and fat.
Evolution's purposes also differ from our own purposes. We are built to deceive ourselves because self-deceivers were more effective liars in ancestral political disputes; and this fact about our underlying brain design doesn't change when we try to make a moral commitment to truth and rationality.
↑ comment by PhilGoetz · 2017-04-07T18:21:51.780Z · LW(p) · GW(p)
I don't see how this follows. Evolutionary psychology provides some explanations for our intuitions and instincts that the majority of humans share but that doesn't really say anything about morality as Is Cannot Imply Ought.
Start by saying "rationality" means satisficing your goals and values. The issue is what values you have. You certainly have selfish values. A human also has values that lead to optimizing group survival. Behavior oriented primarily towards those goals is called altruistic.
The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals.
As to old Utilitarianism 1.0, where somebody just declares by fiat that we are all interested in the greatest good for the greatest number of people--that isn't on the table anymore. People don't do that. Anyone who brings that up is the one asserting an "ought" with no justification. There is no need to talk about "oughts" yet.
↑ comment by MrCogmor · 2017-04-11T10:26:03.627Z · LW(p) · GW(p)
Rationality means achieving your goals and values efficiently and effectively.
The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals.
This is a false dichotomy. Just because a value is not of negative utility doesn't mean it is optimized to benefit the genes. Scott Alexander for example is asexual and there are plenty of gay people.
As to old Utilitarianism 1.0, where somebody just declares by fiat that we are all interested in the greatest good for the greatest number of people--that isn't on the table anymore. People don't do that.
GiveWell exists, Peter Singer exists, the Effective Altruism movement exists. They may not be perfect utilitarians, but most rationalists aren't perfect either; neither are most Christians, and they still exist.
This ended up giving him worse-than-human morality, because he assumes that humans are not actually moral--that humans don't derive utility from helping others. He ended up convincing himself to do the selfish things that he thinks are "in his own best interests" in order to be a good rationalist, even in cases where he didn't really want to be selfish
I finally remembered the Less Wrong meta-ethics sequence which you should read. This in particular.
comment by der (DustinWehr) · 2017-04-03T20:56:47.425Z · LW(p) · GW(p)
Love this. The Rationalist community hasn't made any progress on the problem of controlling, overconfident, non-self-critical people rising to the top in any sufficiently large organization. Reading more of your posts now.
comment by TedSanders · 2017-04-01T01:53:49.370Z · LW(p) · GW(p)
Thanks for the long and thoughtful post.
My main question: Who are these 'people' that you seem to be arguing against?
It sounds like you're seeing people who believe:
"You - you, personally - are responsible for everything that happens."
"No one is allowed their own private perspective - everyone must take the public, common perspective."
Other humans are not independent and therefore warring with them is better than trading with them ("If you don't treat them as independent... you will default to going to war against them... rather than trading with them")
To do good, "you will try to minimize others' agency"
And the people who hold the aforementioned beliefs are:
"the people around me applying utilitarianism"
"many effective altruists"
people with ideas "commonplace in discussions with effective altruists"
I guess I struggled to engage with the piece because my experiences with 'people' are very different from your experiences with 'people.' I don't think anyone I know would claim to believe the things that you attribute to many effective altruists. I loosely consider myself an effective altruist and I certainly don't hold those beliefs.
I think one way to get more engagement would be to argue against specific claims that specific people have spoken or written. It would feel more concrete and less strawmanny, I think. That's a general principle of good writing that I'm trying to employ more myself.
Anyway, great work writing this post and thinking through these issues!
↑ comment by Benquo · 2017-04-01T08:05:08.769Z · LW(p) · GW(p)
Thanks for the clear criticism! I do plan to try to write more on exactly where I see people making this and related errors. It's helpful to know that's a point on which some readers don't already share my sense.
I'm not saying that people explicitly state that you ought to be in a state of war against everyone else - I'm instead saying that it's implied by some other things people in EA often believe. For instance, the idea that it's good for GiveWell to recommend one set of charities to the public, but advise Good Ventures to fund a different set of charities, because the public isn't smart enough to go for the real best giving opportunities. Or that you should try to get people to give more by running a matching donations fundraiser. Or that you can and should estimate the value of an intervention by assuming it's equal to the cost. Or that it's good to exaggerate the effect of an intervention you like because then more people will give to the best charities.
The thing all these have in common is that they ignore the opportunity cost of assuming control of other people's actions.
comment by Darklight · 2017-04-01T01:35:12.911Z · LW(p) · GW(p)
I think you're misunderstanding the notion of responsibility that consequentialist reasoning theories such as Utilitarianism argue for. The nuance here is that responsibility does not entail that you must control everything. That is fundamentally unrealistic and goes against the practical nature of consequentialism. Rather, the notion of responsibility would be better expressed as:
- An agent is personally responsible for everything that is reasonably within their power to control.
This coincides with the notion of there being a locus of control, which is to say that there are some things we can directly affect in the universe, and other things (most things) that are beyond our capacity to influence, and therefore beyond our personal responsibility.
Secondly, I take issue with the idea that this notion of responsibility is somehow inherently adversarial. On the contrary, I think it encourages agents to cooperate and form alliances for the purposes of achieving common goals such as the greatest good. This naturally tends to be associated with granting other agents as much autonomy as possible because this usually enables them to maximize their happiness, because a rational Utilitarian will understand that individuals tend to understand their own preferences and what makes them happy, better than anyone else. This is arguably why John Stuart Mill and many modern day Utilitarians are also principled liberals.
Only someone suffering from delusions of grandeur would be so paternalistic as to assume they know better than the people themselves what is good for them and try to take away their control and resources in the way that you describe. I personally tend towards something I call Non-Interference Code, as a heuristic for practical ethical decision making.
↑ comment by PhilGoetz · 2017-04-04T04:42:41.077Z · LW(p) · GW(p)
Benquo isn't saying that these attitudes necessarily follow, but that in practice he's seen it happen. There is a lot of unspoken LessWrong / SIAI history here. Eliezer Yudkowsky and many others "at the top" of SIAI felt personally responsible for the fate of the human race. EY believed he needed to develop an AI to save humanity, but for many years he would only discuss his thoughts on AI with one other person, not trusting even the other people in SIAI, and requiring them to leave the area when the two of them talked about AI. (For all I know, he still does that.) And his plans basically involve creating an AI to become world dictator and stop anybody else from making an AI. All of that is reducing the agency of others "for their own good."
This secrecy was endemic at SIAI; when I've walked around NYC with their senior members, sometimes 2 or 3 people would gather together and whisper, and would ask anyone who got too close to please walk further away, because the ideas they were discussing were "too dangerous" to share with the rest of the group.