Comments

Comment by AlexanderRM on Rationality Quotes April 2016 · 2016-04-15T21:54:13.337Z · LW · GW

Interesting: He makes the argument that progress in physical areas of technology (transportation, chemistry, etc.) has slowed in part due to government regulation (which would explain why computers and the internet have been the one thing progressing drastically). But the United States has never been the source of all or even the majority of the world's new inventions, so an explanation focused on the U.S. government can't fill that large a gap (although, I suppose, a slowdown of a third or even more would be explained).

Any information on what the situation has been in other countries? I wouldn't be surprised if the entire First World has trended towards drastically more regulation, which would indeed leave only the places with fewer inventors and little capital to invest or consumer money to spend able to experiment with technologies in those fields (if true, the implications for the chance of changing the situation aren't as bright as if it were just the United States). Still, this is something that has to be remembered in any discussion of technology, or for that matter any question of this type. More generally, there seems to be a lack of tendency (among Americans at least) to check on or be aware of other countries in all sorts of questions, and the few times they are brought up it's usually as a single anecdote to reinforce the speaker's point (but even these are less common than one would expect). That seems to be a serious impediment to actually figuring out problems.

Comment by AlexanderRM on Rationality Quotes April 2016 · 2016-04-15T21:22:23.533Z · LW · GW

If I were to steelman the usefulness of the argument, I'd say the conclusion is that positions on economics shouldn't be indispensable parts of a political movement, because that makes it impossible to reason about economics and check whether that position is wrong. Which is just a specific form of the general argument against identifying with object-level beliefs*.

*For that matter, one should perhaps be careful about identifying with meta-level beliefs as well, although I don't know if that's entirely possible for a human to do, even discounting the argument that there might be conservation of tribalism. It might be possible to reduce one's identity down to a general framework for coming up with good meta-level beliefs, and avoid identifying with object-level ones.

Comment by AlexanderRM on Rationality Quotes April 2016 · 2016-04-15T21:15:54.026Z · LW · GW

"He who builds his cart behind closed gates will find it not suited to the tracks outside the gates."

-Unattributed Chinese proverb, quoted by Chen Duxiu in "Call to Youth" (1915).

Comment by AlexanderRM on Rationality Quotes Thread October 2015 · 2015-12-02T20:34:15.796Z · LW · GW

The way to signal LW ingroupness would be to say "signaling progressiveness", but that does cover it fairly well. I suspect the logic is roughly that our current prison system (imprisoning people for 12 years for a first-time drug offense) is bad in the direction of imprisoning far too many people, so opposing our current prison system is good, so opposing the current prison system more is even better, and the most you can oppose the prison system is to support abolishing all prisons.

(Actually, there might be something of an argument to be made that in order to fight a policy that's way too far to one side of good policy, it can be useful in some cases to overcompensate and bring a policy too far to the other side into the discussion. I think in a politically polarized environment like the US that's bad overall, though: the overwhelming majority of people who hear such an argument will be people who were already convinced of a decent policy and will be sent too far to one side by it, while the people who actually would have their beliefs brought closer to a good policy by hearing the counter-narrative either won't hear it, or will use it to strawman the opposition.)

Comment by AlexanderRM on Shut Up and Divide? · 2015-11-14T22:31:38.550Z · LW · GW

I know I'm 5 years late on this, but on the off chance someone sees this, I just want to mention I found Yvain's/Scott Alexander's essay on the subject incredibly useful*.

The tl;dr: Use universalizability for your actions more so than direct utilitarianism. His suggestion is 10% of income, for various reasons, mainly that it's a round number that's easy to coordinate around and have people give that exact amount. Once you've done that, the problems that would be solved by everyone donating 10% of their income to efficient charities are the responsibility of the other people who are donating less than that amount (I'd also suggest trying to spread the message as much as possible, as I'm doing here).

Of course it'd be better to donate more of your income. I would say that if feeling bad about donating 10% causes you to donate more, then... donate more. If it just causes you to feel like you'll never be good enough so you don't even try, it's useless and you'd do more good by considering yourself completely absolved. 10% is also incredibly useful for convincing people who aren't already convinced of unlimited utilitarian duty to donate to efficient charity.

*http://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

Comment by AlexanderRM on Ethical Injunctions · 2015-11-10T18:52:28.622Z · LW · GW

It's also worth noting that "I would set off a bomb if it would avert or shorten the Holocaust even if it would kill a bunch of babies" would still answer the question... or maybe it wouldn't, because the whole point of the question is that you might be wrong that it would end the war. See for comparison "I would set off a bomb and kill a bunch of innocent Americans if it would end American imperialism", which has a surprising tendency to not end American imperialism and in fact make it worse.

Overall I think that if everyone followed a heuristic of "never kill babies", the world would be better on average. However, you could get a problem if only the carefully moral people follow that rule and the less careful don't and end up winning. For a consequentialist, a good rule would be "any ethical injunction which causes itself to be defeated cannot be used". At the very least, the heuristic of "don't violate Geneva Convention-like agreements that restrict war to make it less horrible, so long as the other side has stuck to them" seems reasonable, although it's less clear in cases where a few enemy soldiers individually violate it, or where being the first to violate it gives a major advantage and you're worried the other side might do so.

Comment by AlexanderRM on Things You Can't Countersignal · 2015-11-03T00:00:37.050Z · LW · GW

I think the first two of those at least can be read in any combination of sarcastic/sincere*, which IMO is the best way to read them. I need to take a screenshot of those two and share them on some internet site somewhere.

Comment by AlexanderRM on Failed Utopia #4-2 · 2015-10-08T21:24:32.436Z · LW · GW

I assume what Will_Pearson meant to say was "would not regret making this wish", which fits with the specification of "I is the entity standing here right now". Basically such that: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built the AI. (And it's supposed to find, out of that set of possible worlds, the one you would most like, or... something along those lines.) I'm not sure that would rule out every bad outcome, but... I think it probably would. Besides the obvious "other humans have different preferences from the guy building the AI"- maybe the AI is ordered to do a similar thing for each human individually- can anyone think of ways this would go badly?

Comment by AlexanderRM on Approaching rationality via a slippery slope · 2015-10-05T02:02:41.151Z · LW · GW

A more practical and simple (and possibly legal) idea for abusing knowledge of irrational charity: Instead of asking for money to save countless children, ask for money to save one, specific child.

If one circulated a message on the internet saying that donations could save the life of a specific child, obviously if you then used the money for something unrelated there would be laws against that. But if you simply, say, A: lied about why they were in danger of dying, B: overstated the amount of money needed, C: left out the nationality of the child, and D: used the money to save a large number of children, do you think a court would convict you for that?

Getting the money towards some cause where the child-saving is a lot less direct, like technological research or SIAI, would probably still get hit for lying, but for something like fighting malaria or the like that might be incredibly useful.

Comment by AlexanderRM on Approaching rationality via a slippery slope · 2015-10-05T01:55:03.018Z · LW · GW

This is probably a bit late, but in a general sense Effective Altruism sounds like what you're looking for, although the main emphasis there is the "helping others as much as possible" rather than the "rationalists" part; there's still a significant overlap in the communities. If both LW and EA are too general for you and you want something with both rationality and utilitarian altruism right in its mission statement... I'm sure there's some blog somewhere in the rationalist blogosphere which is devoted to that specifically, although it might be just a single person's blog rather than a community forum.

Incidentally, if you did find- or found- a specific community along those lines I'd be interested in joining it myself.

Comment by AlexanderRM on Approaching rationality via a slippery slope · 2015-10-05T01:46:36.688Z · LW · GW

Just want to mention, @ #8: After a year and a half of reading LW and the like I still haven't accomplished this one. Admittedly this is more of a willpower/challenge thing (similar to a "rationality technique") than just an idea I dispute, and there might be cases where simply convincing someone to agree that it's important would get them past the point of what you term "philosophical garbage", where they go "huh, that's interesting", but it's still hard.

Granted, I should mention that I at least hope that LW stuff will affect how I act once I graduate college, get a job and start earning money beyond what I need to survive. I was already convinced that I ought to donate as much as possible to various causes, but LW has probably affected which causes I'll choose.

Comment by AlexanderRM on Shit Rationalists Say? · 2015-10-02T23:17:46.789Z · LW · GW

I would be amazed if Scott Alexander has not used "I won't socially kill you" at some point. Certainly he's used some phrase along the lines of "people who won't socially kill me".

...and in fact, I checked, and the original article has basically the meaning I would have expected: "knowing that even if you make a mistake, it won't socially kill you". That particular phrase was pretty much lifted, just with the object changed.

Comment by AlexanderRM on Sympathetic Minds · 2015-10-02T22:54:43.068Z · LW · GW

The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human or a human tribe or human state would- although I think part of the constraints on that in human evolution are 1. it being difficult to constantly check whether it's worth it to betray your allies, and 2. it being risky to try when you're just barely past the point where you think it's worth it. Also there are the other humans/other nations around, which might or might not apply in interstellar politics.

...although I've just reminded myself that this discussion is largely pointless anyway, since the chance of encountering aliens close enough to play politics with is really tiny, and so is the chance of inventing an AI we could play politics with. The closest things we have a significant chance of encountering are a first-strike-wins situation, or a MAD situation (which I define as "first strike would win but the other side can see it coming and retaliate"), both of which change the dynamics drastically. (I suppose it's valid in first-strike-wins, except in that situation the other side will never tell you their opinion on morality, and you're unlikely to know with certainty that the other side is an optimizer without them telling you)

Comment by AlexanderRM on Variables in Arguments as a Source of Confusion · 2015-09-23T01:49:00.274Z · LW · GW

It seems like the Linux user (and possibly the Soviet citizen example, but I'm not sure) is... in a broader category than the equal treatment fallacy, because homosexuality and poverty are things one can't change (or, at least, that's the assumption on which criticizing the equal treatment fallacy is based).

Although, I suppose my interpretation may have been different from the intended one, since I read it as "the OSX user has the freedom to switch to Linux and modify the source code of Linux", i.e. both the Linux and OSX users have the choice of either OS. Obviously the freedom to modify Linux while still using OSX would be the equal treatment fallacy.

Comment by AlexanderRM on How Many LHC Failures Is Too Many? · 2015-09-07T19:30:17.366Z · LW · GW

Some of the factors leading to a terrorist attack succeeding or failing would already be determined, beyond the level of quantum uncertainty, before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's cat, the branches would have split already before the attack happened.

Comment by AlexanderRM on How Many LHC Failures Is Too Many? · 2015-09-07T19:24:16.860Z · LW · GW

I wouldn't describe a result that eliminated the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective use of quantum suicide (two species which want the same resources meet, they flip a coin, and the loser kills themselves- might have problems with enforcement) if every species invariably experiments with them before leaving their home planet.

On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theory, that seems to indicate that applying it isn't optimal (if your goal is to maximize the number of world-branches with good outcomes), but realized that humans are able to observe other humans and what sort of things tend to kill them, along with hearing about those things from other humans when we grow up, so we're almost never having close calls with death frequently enough to need to apply the anthropic principle. If a human were exploring an unknown environment with unknown dangers by themselves, and tried to consider the anthropic principle... that would be pretty terrifying.

Comment by AlexanderRM on Insufficiently Awesome · 2015-09-03T15:40:16.794Z · LW · GW

I'd be interested to hear from other LessWrongians whether anyone has bought this and whether it lives up to the description (and also whether this model produces a faint noise constantly audible to others nearby, like the test belt). I'm the sort of person who measures everything in dead African children, so at $149... I'm a bit hesitant even if it is exactly as awesome as the article implied.

On the other hand, the "glasses that turn everything upside down" interest me somewhat; my perspective on that is rather odd- I'm wondering how they would interact with my mental maps of places. Specifically because I'm a massive geography buff and have an absurdly detailed mental map of the whole world, which I've noticed has a specific north=up direction. Obviously those glasses probably won't help shake the built-in direction (if I just get used to them), but I'd still be interested to see what they do.

Comment by AlexanderRM on Dead Child Currency · 2015-09-03T01:23:59.559Z · LW · GW

The specific story described is perfectly plausible, because it involves political pressure rather than social, and (due to the technology level and the like) the emperor's guards can't kill everybody in the crowd, so once everyone starts laughing they're safe. However, as a metaphor for social pressure it certainly is overly optimistic by a long shot.

Comment by AlexanderRM on Dead Child Currency · 2015-09-03T01:21:36.459Z · LW · GW

I would really like to know the name for that dynamic if it has one, because that's very useful.

Comment by AlexanderRM on A variant on the trolley problem and babies as unit of currency · 2015-09-02T23:25:51.869Z · LW · GW

It seems like, in the event that (for example) such buttons that paid out money exclusively to the person pushing them became widespread and easily available, governments ought to band together to prevent the pressing of those buttons, and the only reason they might fail to do so would be coordination problems (or possibly the difficulty of proving that the buttons kill people), not primarily objections that button-pushing is OK. If they failed to do so (keeping in mind these are buttons that don't also do the charity thing), that would inevitably result in the total extermination of the human race (assuming that the buttons paid out goods with inherent value, so that the collapse of society and shortage of other humans doesn't interfere with pressing them).

However I agree with your point that this is about ethics, not law.

Comment by AlexanderRM on Dead Child Currency · 2015-09-02T23:09:00.201Z · LW · GW

I just want to say that even though I generally disagree with these objections to donation*, I really love the "You can't just throw nutrients into [an] ecosystem and expect a morally good outcome." bit and will try to remember/save that in the future. It's rather interesting that Malthusianism is completely accepted without comment in ecology and evolution, but seems to be widely hated when brought up in political or social spheres, so maybe phrasing it in ecosystem terms will make people more liable to accept it. It would probably be best to introduce the concept that way first before suggesting any policies derived from it.

*Not the objection, but the bit where people conclude "So I'm going to keep my money for myself" rather than "So I'm going to give to a charity to distribute birth control instead". Which to be fair you don't seem to be entirely doing, so you're not actually one of those people.

Comment by AlexanderRM on Dead Child Currency · 2015-09-02T23:01:33.179Z · LW · GW

Worth noting that the dead-baby value is very different from the amount at which most Westerners actually value the lives of white, middle-class people from their own country. In fact, pretty much the whole point of the statistic is that it's SHOCKINGLY low. I suppose we could hope that Dead Baby currency would result in a reduction of that discrepancy... although I think in the case of the actual example given, the Malthusians* have a point, in that it would dramatically increase access to life-prolonging things without increasing access to birth control much, resulting in more population and thus more people to save.

*To clarify: I actually agree with the Malthusian ecology- it's just a basic fact of ecology, and I'm amazed that anyone seriously disagrees with it- but not with the objection to charitable donations on that basis; anyone who actually thinks that would go "you should instead give money to provide birth control".

Comment by AlexanderRM on A variant on the trolley problem and babies as unit of currency · 2015-09-02T22:43:05.899Z · LW · GW

Alternative rephrasing: $4000 is given to your choice of either one of the top-rated charities for saving lives, or one of the top-rated charities for distributing birth control (or something else that reduces population growth).

That means a pure reduction on both sides in the number of people on the planet, and- assuming there are currently too many people on the planet- a net reduction in suffering in the long run as there are fewer people to compete with each other, plus the good it does in the short run to women who don't have to go through unwanted pregnancies and raising the children, and all the benefits associated with that (like being able to devote more resources to their other children, or possibly pursuing careers further, or the like).

Comment by AlexanderRM on A variant on the trolley problem and babies as unit of currency · 2015-09-02T22:33:00.007Z · LW · GW

Note that the Reversal Test is written with the assumption of consequentialism, where there's an ideal value for some trait of the universe, whereas the whole point of the trolley problem is that the only problem is deontological, assuming the hypothetical pure example where there are no unintended consequences.

However, the Reversal Test of things like "prevent people from pulling the lever" is still useful if you want to make deontologists question the action/inaction distinction.

Comment by AlexanderRM on Really Extreme Altruism · 2015-09-02T20:00:02.008Z · LW · GW

I was about to give the exact same example of the soldier throwing himself on a grenade. I don't know where the idea of his actions being "shameful" even comes up.

The one thing I realize from your comment is that there's the dishonesty of his actions, and that if lots of people did this, insurance companies would start catching on and it would stop working, plus it would make life insurance that much harder to operate. But it didn't sound like the original post was talking about that with "shameful"; it sounds like they were suggesting (or assuming people would think) that there was something inherently wrong with the man's altruism. At least that's what's implied by the title, "really extreme altruism".

Edit: I didn't catch the "Two years after the policy is purchased, it will pay out in the event of suicide." bit until reading others' comments- so, indeed, he's not being dishonest; he made a bet with the insurance company (over whether he would still intend suicide two years later) and the insurance company lost. I don't know how many insurance companies have clauses like that, though.

Comment by AlexanderRM on False Laughter · 2015-08-07T00:43:40.072Z · LW · GW

I know I'm 8 years late on this (only started reading LessWrong a year ago)- does anyone have a good, snappy term for the quality of humor being funny regardless of the politics? There have been times when I was amused by a joke despite disagreeing with the political point, and wanted to make some comment along the lines of "I'm a [group attacked by the joke] and this passes the Yudkowsky Test of being funny regardless of the politics", but I think "Yudkowsky test" isn't a good term (for one thing, I have no idea if Yudkowsky actually came up with this originally).

(Actually, a more generalized term for the same principle applied to art in general, not just humor, would be useful. Although the only time I can think of when I might have wanted to apply that was when I first listened to the ISIL theme, and that was a somewhat different attitude from my reaction to people who don't murder their ideological opponents coming up with a funny joke.)

Comment by AlexanderRM on Human Evil and Muddled Thinking · 2015-08-04T04:31:54.128Z · LW · GW

The assumption is that people start doing things that match with their stated beliefs- so, for instance, people who claim to oppose genocide would actually oppose genocide in all cases, which is the whole point of thinking hypocrisy is bad. Causing people to no longer be hypocrites by making them instead give up their stated beliefs would just make for a world which was more honest but otherwise not dramatically improved.

Incidentally, on the joking side: If atheists did win the religious war, they could then use this statement in a completely serious and logical context: https://www.youtube.com/watch?v=FmmQxXPOMMY

Comment by AlexanderRM on Human Evil and Muddled Thinking · 2015-08-04T04:09:34.564Z · LW · GW

Worth elaborating: If all religious people were non-hypocritical and did exactly what the religion they claim to follow commands, there would probably be an enormous initial drop in violence, followed by any religions that follow commandments like "thou shalt not kill" without exception being wiped out, with religions advocating holy war and the persecution of heretics getting the eventual upper hand (although imperfectly adapted religions might potentially be able to hold off the better-adapted ones through strength of numbers- for instance, if a large area was controlled by a religion with the burning of heretics and defensive, cooperative religious wars, it could hold off smaller nations with religions advocating offensive wars).

One good thing about hypocrisy is that it makes a massive buffer against certain types of virulent memes. On the other hand, a world where everyone took a burn-the-heretics interpretation of Christianity or Islam 100% seriously would certainly have some advantages over ours, and especially over our Middle Ages- things like no unsanctioned killing and, most notably, no wars against others of the same religion. Probably lots of things that would be decent ideas if you could get everyone to follow them, at the cost of an occasional burnt heretic (and possibly constant holy wars, until one religion gains the upper hand and overwhelms the others).

Comment by AlexanderRM on Base your self-esteem on your rationality · 2015-08-04T03:53:16.521Z · LW · GW

I think that might help somewhat- thinking of rationality as something you do rather than something you are is definitely good regardless- but there's still the basic problem that your self-esteem is invested in rationality. Rationality requires you to continually be willing to doubt your core values and consider that they might be wrong, and if your core values are wrong, then you haven't gotten any use up to that point out of your rationality. I don't think it's just a matter of realizing you were wrong once and recovering self-esteem from the fact that you were rational enough to see that- ideally you ought to constantly consider the possibility that everything you believe might be wrong.

Now, if you can get up to the level of thinking I just described, that's probably still a lot better than basing your self-esteem on specific political views. It just doesn't totally solve the problem, and you need to be aware that it doesn't totally solve the problem.

Comment by AlexanderRM on Growing Up is Hard · 2015-08-04T00:38:16.570Z · LW · GW

I just want to mention that the thing about a human trying to self-modify their brain in the manner described, and with all the dangers listed, could make an interesting science fiction story. I couldn't possibly write it myself and am not even sure what the best method of telling it would be- probably it would at least partially include something like journal entries, or just narration from inside the protagonist's head, to illustrate what exactly was going on.

Especially if the human knew the dangers perfectly well, but had some reason they had to try anyway, and also a good reason to think it might work- presumably this would require it to be an attempt at some modification other than "runaway intelligence" (and also a context where a modified self would have very little chance of thereafter achieving runaway superintelligence); if things went wrong they might spend the rest of their life doing very weird things, or die for one reason or another, or at the very worst go on a killing spree and kill a couple dozen people before being caught, but wouldn't convert the entire world into smiley faces. That way they would be a sympathetic viewpoint character taking perfectly reasonable actions, and the reader/viewer watches as their sanity teeters on the edge and is genuinely left wondering whether they'll last long enough to accomplish their goal.

Comment by AlexanderRM on Top 9+2 myths about AI risk · 2015-07-10T05:41:49.114Z · LW · GW

Why would a command economy be necessary to avoid that? Welfare Capitalism- you run the economy laissez-faire except that you tax some of it and give it away to poor people, who can then spend it as they wish, as if they'd earned it in a laissez-faire economy- would work just fine. As mechanization increases, you gradually increase the welfare.

It won't be entirely easy to implement politically, mainly because of our ridiculous political dichotomy where you can either understand basic economics or care about poor people, but not both.

Since we're citing sources I'll admit Scott expressed this better than I can: http://slatestarcodex.com/2013/12/08/a-something-sort-of-like-left-libertarianism-ist-manifesto/#comment-23688

Comment by AlexanderRM on Top 9+2 myths about AI risk · 2015-07-10T05:13:42.673Z · LW · GW

An important distinction that jumps out to me: if we slowed down all technological progress equally, that wouldn't actually "buy time" for anything in particular. I can't think of anything we'd want to be doing with that time besides either 1. researching other technologies that might help with avoiding AI risk (I can't think of many offhand, though one that comes to mind is technologies that would allow downloading or simulating a human mind before we build AI from scratch, which sounds at least somewhat less dangerous from a human perspective than building AI from scratch), or 2. thinking about AI value systems.

Number 2 is presumably the reason why anyone would suggest slowing down AI research, but I think a notable obstacle to it at present is the large number of people who aren't concerned about AI risk because it's so far away. If we get to the point where people actually expect an AI very soon, then slowing down while we discuss it might make sense.

Comment by AlexanderRM on Unspeakable Morality · 2015-04-18T22:59:15.527Z · LW · GW

My impression of the thought experiment is that there's supposed to be no implication that their side winning the war would be any better than the other side winning. Their side winning is explicitly about maintaining social status and authority. "Keep harm at a low level" might mean "lower than a Hobbesian war of all against all", not necessarily low by our standards. It seems like maybe the thought experiment could be improved by explicitly rephrasing it to make their nation a pretty terrible place by our standards and winning the war bad overall. That would rather complicate things, though, when the point is Bob being tortured and killed. So maybe the country should be at peace and "The president feels much more relaxed and is able to work better at crafting his new anti-homosexuality legislation", or something like that?

However, I do on an unrelated note really like your comment about "Imagine a world in which this particular form of morality inexplicably produces positive results. Don't you feel silly trying to defend your morality now?". I've noticed (...although I have trouble thinking of actual examples, but I'm sure I've seen some) that in a fair amount of fiction there's a tendency to have Utilitarian villains with plans that will clearly bring about terrible results, as a result of them having made a very obvious error which the heroes are for some reason able to spot, which when used as an argument against Utilitarianism is pretty much literally "this particular form of morality inexplicably produces negative results". (obviously it's entirely possible for Utilitarians to make mistakes which have horrendous consequences. It's just that as a rule, on average, Utilitarianism will get you better consequences from a Utilitarian standpoint than non-consequential Hollywood Morality. Which is exactly why it's such an appealing argument to use in fiction, because it's a plausible scenario which leads to obviously incorrect conclusions if generalized.)

Comment by AlexanderRM on Consequentialism FAQ · 2015-04-18T22:37:08.412Z · LW · GW

Interesting observation: You talked about that in terms of the effects of banning sweatshops, rather than in terms of the effects of opening them. It's of course the exact same action and the same result in every way- deontological as well as consequentialist- but it changes from "causing people to work in horrible sweatshop conditions" to "leaving people to starve to death as urban homeless", so it switches around the "killing vs. allowing to die" burden. (I'm not complaining, FYI, I think it's actually an excellent technique. Although maybe it would be better if we came up with language to list two alternatives neutrally with no burden of action.)

Comment by AlexanderRM on Consequentialism FAQ · 2015-04-18T22:33:37.854Z · LW · GW

"consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven."

I fully agree with this (as someone who doesn't believe in heaven and hell, but is a consequentialist), and also would point out that it's not that different from the way many people who believe in heaven and hell already act (especially if you look at people who strongly believe in them; ignore anyone who doesn't take their own chances of heaven/hell into account in their own decisions).

In fact I suspect that even from an atheistic, humanist viewpoint, consequentialism on this one would have been better in many historical situations than the way people acted in real life; if a heathen will go to hell but can be saved by converting them to the True Faith, then killing heathens becomes an utterly horrific act. Of course, it's still justified if it allows you to conquer the heathens and forcibly convert them all, as is killing a few as examples if it gets the rest to convert, but that's still better than the way many European colonizers treated native peoples in many cases.

Comment by AlexanderRM on Consequentialism FAQ · 2015-04-18T22:23:52.858Z · LW · GW

"Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?"

I can't speak for Yvain but as someone who fully agreed with his use of that test, I would describe myself as both a Rawlsian (in the sense of liking the "veil of ignorance" concept) and a Utilitarian. I don't really see any conflict between the two. I think maybe the difference between my view and that of Rawls is that I apply something like the Hedonic Treadmill fully (despite being a Preference Utilitarian), which essentially leads to Yvain's responses.

...Actually, I suppose I practically define the amount of utility in a world by whether it would be better to live there, so maybe it would in fact be better to describe me as a Rawlsian. I still prefer to think of myself as a Utilitarian with a Rawlsian basis for my utility function, though (essentially I define the amount of utility in a world as "how desirable it would be to be born as a random person in that world"). I think it's that Utilitarianism sounds easier to use as a heuristic for decisions, whereas calling yourself a Rawlsian requires you to go one step further back every time you analyze a thought experiment.

Comment by AlexanderRM on Consequentialism FAQ · 2015-04-18T22:03:28.811Z · LW · GW

"The main point is that forcing people to become gladiators against their will requires a system that would almost certainly lower utility (really you'd have to have an institution of slavery or a caste system; any other way and people would revolt against the policy since they would expect a possibility of being to be gladiators themselves)."

It seems to me that, specifically, gladiatorial games that wouldn't lower utility would require that people not revolt against the system since they accept the risk of being forced into the games as the price they pay to watch the games. If gladiators are drawn exclusively from the slaves and lower castes, and the people with political power are exempted, then most likely the games are lowering utility.

@ Prostitution: Don't the same arguments apply to paid labor of any type?

Comment by AlexanderRM on Consequentialism FAQ · 2015-04-18T21:57:32.385Z · LW · GW

I would say yes, we should re-examine it.

The entertainment value of forced gladiatorial games on randomly-selected civilians... I personally would vote against them because I probably wouldn't watch them anyway, so it would be a clear loss for me. Still, for other people voting in favor of them... I'm having trouble coming up with a really full refutation of the idea in the Least Convenient Possible World hypothetical where there's no other way to provide gladiatorial games, but there are some obvious practical alternatives.

It seems to me that voluntary gladiatorial games where the participants understand the risks and whatnot would be just fine to a consequentialist. It's especially obvious if you consider the case of poor people going into the games for money. There are plenty of people currently who die because of factors relating to lack of money. If we allowed such people to voluntarily enter gladiatorial games for money, then the gladiators would be quite clearly better off. If we ever enter a post-scarcity society but still have demand for gladiatorial games, then we can obviously ask for volunteers and get people who want the glory/social status/whatnot of it.

If for some reason that source of volunteers dried up, yet we still have massive demand, then we can have everyone who wants to watch gladiatorial games sign up for a lottery in exchange for the right to watch them, thus allowing their Rawlsian rights to be maintained while keeping the rest of the population free from worry.

Comment by AlexanderRM on Ask and Guess · 2015-04-12T18:11:10.673Z · LW · GW

So you're suggesting one should always use ask culture in response to questions, but be careful about which culture you use when asking questions? That sounds like a decent strategy overall. However, from the descriptions people have been giving, it seems to me that you aren't supposed to refuse requests in guess culture (that's why it's offensive to make a request someone doesn't want to agree to).

Now, I'm probably both biased personally against guess culture and being influenced by other people who are more on the ask side describing it here, but it seems to me that guess culture is sustained solely by forcing all participants to participate in all phases. Rather like the hypothetical society where one rule is that everyone has to cooperate to kill anyone who breaks a rule, including this one. As far as I can tell the only way to contribute to breaking it would be to either think carefully about which one you're in at all times, or explain the concept to everyone who you interact with so you can ask them about it.

Comment by AlexanderRM on Thoughts on moral intuitions · 2015-04-07T21:02:09.507Z · LW · GW

I don't have much to contribute here personally; I just want to note that Yvain has an excellent diagram on the "inferential distances" thing: http://squid314.livejournal.com/337475.html

(Also, the place he linked it from: http://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/ is probably the more obviously relevant thing to moral debates in politics and the like.)

Comment by AlexanderRM on Thoughts on moral intuitions · 2015-04-07T20:49:35.297Z · LW · GW

I would say that for someone who accepts liberal ideas (counting most conservatives in Western countries), this seems like a very useful argument for convincing them of this: if we always used intuitional morality, we would currently have morality that disagrees with their intuitions (about slavery being wrong, democracy being good, those sorts of things).

Of course, as a rational argument it makes no sense. It just appeals to me because my intuitions are Consequentialist and I want to try to convince others to follow Consequentialism, because it will lead to better outcomes.

Comment by AlexanderRM on Thoughts on moral intuitions · 2015-04-07T20:45:35.772Z · LW · GW

It seems to me that Utilitarianism can be similar to the way you describe Kant's approach: Selecting a specific part of our intuitions- "Actions that have bad consequences are bad"- ignoring the rest, and then extrapolating from that. Well, that and coming up with a utility function. Still, it seems to me that you can essentially apply it logically to situations and come up with decisions based on actual reasoning: You'll still have biases, but at least (besides editing utility functions) you won't be editing your basic morality just to follow your intuitions.

Of course, as mwengler notes, we're just replacing our arbitrary set of moral intuitions with a cohesive, logical system based on... one of those arbitrary moral intuitions. I'm pretty sure there's no solution to that; the only justification for being moral at all is our moral intuitions. Still, if you are going to be moral, I find Utilitarianism preferable to intuitional morality... actually, I guess mainly because I'd already been a Utilitarian for a while before realizing morality was arbitrary, so my moral intuitions have changed to be consequentialist. Oh well. :/

Comment by AlexanderRM on Thoughts on moral intuitions · 2015-04-07T20:24:18.051Z · LW · GW

Isn't "the psychology of the discoursing species" another way of saying "moral intuitions"? Or at least, those are included in the umbrella of that term.

Comment by AlexanderRM on Thoughts on moral intuitions · 2015-04-07T20:20:15.285Z · LW · GW

As a side note, I'd like to say I'd imagine nearly all political beliefs throughout history have had people citing every imaginable form of ethics as justifications, and furthermore without even distinguishing between them. From what I understand, the vast majority of people don't even realize there's a distinction (I myself didn't know about non-consequentialist ideas until about 6 months ago, actually).

BTW, I would say that an argument about "the freedom to own slaves" is essentially an argument that slavery being allowed is a terminal value, although I'd doubt anyone would argue that owning of slaves is itself a terminal value.

Comment by AlexanderRM on Circular Altruism · 2015-03-27T22:31:12.666Z · LW · GW

"My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided [i]not[/i] to implement the measure."

Does anyone know of a citation for this? Because I'd really like to be able to share it. I found this really, really hilarious until I realized that, according to Eliezer, it actually happened and killed people. Although it's still hilarious, just simultaneously horrifying. It sounds like somebody misunderstood the point of their own moral grandstanding. (On the other hand, I suppose a Deontologist could in fact say "you can't put a dollar value on human life" and literally mean "comparing human lives to dollars is inherently immoral", not "human lives have a value of infinity dollars". To me as a consequentialist the former seems even stupider than the latter, but in deontology it's acceptable moral reasoning.)

Comment by AlexanderRM on Circular Altruism · 2015-03-27T22:23:13.639Z · LW · GW

"But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual."

A question for comparison: would you rather have a 1/googolplex chance of being tortured for 50 years, or lose 1 cent? (A better comparison in this case would be if you replaced "tortured for 50 years" with "death".)

Also: for the original metaphor, imagine that you aren't the only person being offered this choice, and that the people suffering the consequences are drawn from the same pool- which is how real life works, although in this world we have a population of 1 googolplex rather than 7 billion. If we replace "dust speck" with "horribly tortured for 1 second", and we give 1.5 billion people the same choice and presume they all make the same decision, then the choice is between 1.5 billion people being horribly tortured for 50 years, and 1 googolplex people each being horribly tortured for roughly 50 years (1.5 billion seconds is about 47 years).

Comment by AlexanderRM on Circular Altruism · 2015-03-27T22:12:39.723Z · LW · GW

As I understand it, the math is in the dust speck's favor because EY used an arbitrarily large number such that it couldn't possibly be otherwise.

I think a better comparison would be between 1 second of torture (which I'd estimate is worth multiple dust specks, assuming it's not hard to get them out of your eye) and 50 years of torture, in which case yes, it would flip at around 1.5 billion people. That is of course assuming that you don't have a term in your utility function where sharing of burdens is valuable- I assume EY would be fine with that, but would insist that you implement it in the intermediate calculations as well.
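For reference, the rough arithmetic behind that 1.5 billion figure (my own back-of-the-envelope numbers, assuming one second of torture counts as one unit of harm and that harms simply add across people):

50 years ≈ 50 × 365.25 × 86,400 seconds ≈ 1.58 × 10^9 seconds

So one person tortured for 50 years and roughly 1.6 billion people each tortured for one second come out about equal under straightforward summation, which is where the crossover sits.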

Comment by AlexanderRM on Circular Altruism · 2015-03-27T22:03:59.860Z · LW · GW

A better metaphor: What if we replaced "getting a dust speck in your eye" with "being horribly tortured for one second"? Ignore the practical problems of the latter, just say the person experiences the exact same (average) pain as being horribly tortured, but for one second.

That allows us to directly compare the two experiences much better, and it seems to me it eliminates the "you can't compare the two experiences" objection- except, of course, for the long-term effects of torture; I suppose to get a perfect comparison we'd need a torture machine that not only does no physical damage, but no psychological damage either.

On the other hand, it does leave in OnTheOtherHandle's argument about "fairness" (specifically in the "sharing of burdens" definition, since otherwise we could just say the person tortured is selected at random). Which to me as a utilitarian makes perfect sense; I'm not sure if I agree or disagree with him on that.

Comment by AlexanderRM on Circular Altruism · 2015-03-27T21:50:15.350Z · LW · GW

Note here that the difference is between the deaths of currently-living people and preventing the births of potential people. In hedonic utilitarian terms it's the same, but you can have other utilitarian schemes (e.g. choice utilitarianism, as I commented above) where death either has an inherent negative value, or violates the person's preferences against dying.

BTW, note that even if you draw no distinction, your thought experiment doesn't necessarily prove the Repugnant Conclusion. The third option is to say that because the Repugnant Conclusion is false, it must be that the automatic response to your thought experiment is incorrect, i.e. that it's OK to wipe out a googolplex of galaxies full of people with lives barely worth living to save 10,000 people. Although I feel like most people, if they rejected the killing/preventing-birth distinction, would go with the Repugnant Conclusion over that.

Comment by AlexanderRM on Circular Altruism · 2015-03-27T21:41:32.247Z · LW · GW

I think the dust motes vs. torture comparison makes sense if you imagine a person being bombarded with dust motes for 50 years. I could easily imagine a continuous stream of dust motes being as bad as torture (although possibly the lack of variation would make it far less effective than what a skilled torturer could do).

Based on that, Eliezer's belief is just that the same number of dust motes spread out among many people is just as bad as one person getting hit by all of them. Which I will admit is a bit harder to justify. One possible way to make the argument is to think in terms of rule utilitarianism, and imagine a world where a huge number of people got the choice, then compare one where they all choose the torture vs. one where they all choose the dust motes- the former outcome would clearly be better. I'm pretty sure there are cases where this could be important in government policy.