Posts

Meetup : Utrecht: Game theory 2014-10-25T09:09:58.889Z

Comments

Comment by Philip_W on Why is this utilitarian calculus wrong? Or is it? · 2019-02-02T10:42:27.029Z · LW · GW
By changing x and y, we represent your altruism to the other parties in the situation; if x is greater than 1, then you would rather give the commune money than have it yourself,

Small correction: you want to buy the widget as long as x > 7/8.

You should also almost never expect x>1, because that means you should immediately spend your money on that cause until x becomes 1 or you run out of credit. x=1 means that something is the best marginal way to allocate money that you know of right now.
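To illustrate the "spend until x falls to 1" dynamic, here is a toy sketch (the diminishing-returns curve and all the numbers are made up for illustration, not taken from the original post):

```python
# Illustrative only: a greedy allocation loop showing why x > 1 can't persist.
# You keep giving to the cause until its marginal value per dollar drops to
# that of keeping the money yourself (x = 1), or you run out of budget.

def marginal_value_to_commune(amount_already_given):
    # hypothetical diminishing returns: each extra dollar is worth a bit less
    return 2.0 / (1.0 + amount_already_given / 100.0)

budget, given = 200.0, 0.0
while budget > 0 and marginal_value_to_commune(given) > 1.0:  # x > 1: giving beats keeping
    given += 1.0
    budget -= 1.0

print(given, budget)  # giving stops once x has fallen to 1 (or the budget runs out)
```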

Comment by Philip_W on Is Science Slowing Down? · 2018-12-02T12:15:07.863Z · LW · GW

It's probably too small in scale to be statistically significant. The God acts on large sample sizes and problems with many different bottlenecks. I would guess that most of the cost was tied up in a single technique.

Comment by Philip_W on We can all be high status · 2018-11-27T09:03:04.550Z · LW · GW

Status works as the OP describes when going from "dregs" to "valued community member". Social safety is a very basic need, and EA membership undermines it for many people by getting them to compare themselves to famous EAs rather than to a more realistic peer group. This is especially true in regions with a lower density of EAs, or where all the 'real' EAs pack up and move to higher-density regions.

I think the OP meant "high" as a relative term, compared to many people who feel like dregs.

Comment by Philip_W on A Rationalist's Account of Objectification? · 2016-09-09T17:50:35.770Z · LW · GW

People don't have that amount of fine control over their own psychology. Depression isn't something people 'do to themselves' either, at least not with the common implications of that phrase.

Also, this was a minimal definition based on a quick search of relevant literature for demonstrated effects, as I intended to indicate with "at least". Effects of objectification in the perpetrator are harder to disentangle.

Comment by Philip_W on Rationality Quotes Thread October 2015 · 2015-12-06T18:05:35.870Z · LW · GW

Sociology and psychology. Find patterns in human desires and behaviour, and derive universal rules from them. Either that, or scale up your resources and get yourself an FAI.

Comment by Philip_W on Rationality Quotes Thread November 2015 · 2015-11-05T10:09:40.925Z · LW · GW

'Happiness' is a vague term which refers to various prominent sensations and to a more general state, as vague and abstract as CEV (e.g. "Life, Liberty, and the pursuit of Happiness"). 'Headache', on the other hand, primarily refers to the sensation.

If you take an aspirin for a headache, your head muscles don't stop clenching (or whatever else the cause is); it just feels like it for a while. A better pill would stop the clenching, and a better treatment still would make you aware of the physiological cause of the clenching and allow you to change it to your liking.

Comment by Philip_W on Rationality Quotes Thread October 2015 · 2015-11-04T18:47:04.520Z · LW · GW

Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value the fact that the people thus observed care about self-determination in the territory (so no deceiving them into thinking they're self-determining), and act accordingly.

If people declare that analysing people well enough to know their moral values is itself being a busybody, it becomes harder. First I would note that using the internet without unusual data protection already means a (possibly begrudging) acceptance of such busybodies, up to a point. But in a more inconvenient world, consent or prevention of acute danger are as far as I would be willing to go in just a comment.

Comment by Philip_W on Rationality Quotes Thread October 2015 · 2015-11-04T18:10:24.010Z · LW · GW

In the analogy, water represents the point of the quote (possibly as applied to CEV). You're saying there is no point. I don't understand what you're trying to say in a way that is meaningful, but I won't bother asking because 'you can't do my thinking for me'.

Edit: fiiiine, what do you mean?

Comment by Philip_W on Rationality Quotes Thread October 2015 · 2015-11-01T09:54:25.073Z · LW · GW

Be careful when defining the winner as someone other than the one currently sitting on a mound of utility.

Most LessWrong users at least profess to want to be above social status games, so calling people out on it increases expected comment quality and personal social status/karma, at least a little.

Comment by Philip_W on Rationality Quotes Thread October 2015 · 2015-10-31T11:41:05.882Z · LW · GW

You may not be able to make a horse drink, but you can still lead it to water rather than merely point out it's thirsty. Teaching is a thing that people do with demonstrated beneficial results across a wide range of topics. Why would this be an exception?

Comment by Philip_W on Rationality Quotes Thread October 2015 · 2015-10-27T07:42:15.342Z · LW · GW

I don't think that helps AndHisHorse figure out the point.

Comment by Philip_W on Polyhacking · 2015-09-21T20:31:31.934Z · LW · GW

Congratulations!

I might just have to go try it now.

Comment by Philip_W on Visualizing Eutopia · 2015-09-21T20:30:10.066Z · LW · GW

'he' in that sentence ('that isn't the procedure he chose') still referred to Joe. Zubon's description doesn't justify the claim; it's a description of the claim's consequences.

My original objection was that 'they' ("I think they would have given up on this branch already.") have a different procedure than Joe has ("all you have to do is do a brute force search of the space of all possible actions, and then pick the one with the consequences that you like the most."). Whoever 'they' refers to, you're expecting them to care about human suffering and be more careful than Joe is. Joe is a living counterexample to the notion that anyone with that kind of power would have given up on our branch already, since he explicitly throws caution to the wind and runs a brute-force search of all Joe::future universes using infinite processing power, which would produce an endless array of rejection-worthy universes run at arbitrary levels of detail.

Comment by Philip_W on Visualizing Eutopia · 2015-09-15T15:26:08.402Z · LW · GW

What do you mean by "never-entered" (or "entered") states? Ones Joe doesn't (does) declare real to live out? If so, the two probably correlate, but Joe may be mistaken. A full simulation of our universe running on sufficient hardware would contain qualia, so the infinitely powerful process which gives Joe the knowledge he uses to decide which universe is best may contain qualia as well, especially if the process is optimised for ability-to-make-Joe-certain-of-his-decision rather than for Joe's utility function.

Comment by Philip_W on Polyhacking · 2015-09-11T20:18:41.594Z · LW · GW

How about now?

Comment by Philip_W on Visualizing Eutopia · 2015-09-11T17:33:00.051Z · LW · GW

While Joe could follow each universe and cut it off when it starts showing disutility, that isn't the procedure he chose. He opted to create universes and then "undo" them.

I'm not sure whether "undoing" a universe would make the qualia in it not exist. Even if it is removed from time, it isn't removed from causal history, because the decision to "undo" it depends on the history of the universe.

Comment by Philip_W on Caelum est Conterrens: I frankly don't see how this is a horror story · 2015-08-18T19:51:05.312Z · LW · GW

Read it more carefully. One or several paragraphs before the designated-human aliens, it is mentioned that CelestAI found many sources of complex radio waves which weren't deemed "human".

Comment by Philip_W on Rationality Quotes Thread May 2015 · 2015-07-28T09:45:03.442Z · LW · GW

From your username it looks like you're Dutch (it is literally "the flying Dutchman" in Dutch), so I'm surprised you've never heard of the Dutch bible belt and their favourite political party, the SGP. They get about 1.5% of the vote in the national elections and seem pretty legit. And those are just the Christians fervent enough to oppose women's suffrage. The other two Christian parties have around 15% of the vote, and may contain proper believers as well.

Comment by Philip_W on The Truly Iterated Prisoner's Dilemma · 2015-06-25T09:32:16.161Z · LW · GW

I think he means "I cooperate with the Paperclipper IFF it would one-box on Newcomb's problem with myself (with my present knowledge) playing the role of Omega, where I get sent to rationality hell if I guess wrong". In other words: if Eliezer believes that Clippy would one-box were Eliezer playing Omega (preparing to one-box if he expected Clippy to one-box, and to two-box if he expected Clippy to two-box), then Eliezer will cooperate with Clippy. Or in other words still: if Eliezer believes Clippy to be ignorant and rational enough that it can't predict Eliezer's actions but uses game theory at the same level as him, then Eliezer will cooperate.

In the one-shot prisoner's dilemma, there is no evidence, so it comes down to priors. If all players are rational mutual one-boxers, and all players are blind except for knowing they're all mutual one-boxers, then they should expect everyone to make the same choice. If you just decide that you'll defect/two-box to outsmart others, you may expect everyone to do so, so you'll be worse off than if you had decided not to defect (and therefore nobody else would rationally do so either). Even if you decide to defect based on a true random number generator, then for the payoff matrix

(2,2) (0,3)

(3,0) (1,1)

the best option is still to cooperate 100% of the time.

If there are less-than-rational agents afoot, the game changes. The expected reward for cooperation becomes 2(xr+(1-d-r)) and the reward for defection becomes 3(xr+(1-d-r))+d+(1-x)r = 1+2(xr+(1-d-r)), where r is the fraction of agents who are rational, d is the fraction expected to always defect, x is the probability with which you (and by extension the other rational agents) cooperate, and (1-d-r) is the fraction of agents who will always cooperate. Your overall expected payoff is then x*2(xr+(1-d-r)) + (1-x)*(1+2(xr+(1-d-r))) = 1-x+2(xr+(1-d-r)) = x(2r-1)+3-2d-2r, which is linear in x with slope 2r-1; so you should cooperate 100% of the time if the fraction of rational agents r > 0.5, and defect 100% of the time if r < 0.5.
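As a sanity check on that algebra, here is a quick sketch (mine, not from the original thread; the payoffs are the matrix above, and the values of d and r in the example are arbitrary):

```python
# Expected payoff when you cooperate with probability x, against a population of
# rational agents (fraction r, who also cooperate with probability x),
# unconditional defectors (fraction d) and unconditional cooperators (1-d-r).
# Payoffs follow the matrix above: (2,2), (0,3), (3,0), (1,1).

def expected_payoff(x, r, d):
    p_opp_coop = x * r + (1 - d - r)                       # chance the opponent cooperates
    payoff_if_coop = 2 * p_opp_coop                        # 2 vs cooperator, 0 vs defector
    payoff_if_defect = 3 * p_opp_coop + (1 - p_opp_coop)   # 3 vs cooperator, 1 vs defector
    return x * payoff_if_coop + (1 - x) * payoff_if_defect

# The payoff is linear in x with slope 2r - 1, so the optimum sits at an extreme:
# always defect when r < 0.5, always cooperate when r > 0.5.
for r in (0.4, 0.6):
    best_x = max((0.0, 0.5, 1.0), key=lambda x: expected_payoff(x, r, d=0.2))
    print(r, best_x)  # 0.4 -> 0.0, 0.6 -> 1.0
```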

In the iterated prisoner's dilemma, this becomes more algebraically complicated, since cooperation is evidence for being cooperative. So, qualitatively: superintelligences which have managed to open bridges between universes are probably/hopefully (P>0.5) rational, so they should cooperate on the last round, and by extension on every round before that. If someone defects, that's strong evidence that they are not rational or have bad priors, and if the probability of them being rational drops below 0.5, you should switch to defecting. I'm not sure whether you should cooperate if your opponent cooperates after defecting on the first round. Common sense says to give them another chance, but that may be anthropomorphising the opponent.

If the prior probability that inter-universal traders like Clippy and thought experiment::Eliezer are rational is r > 0.5, and thought experiment::Eliezer has managed not to make his mental makeup knowable to Clippy and vice versa, then both Eliezer and Clippy ought to expect r > 0.5. Therefore they should both decide to cooperate. If Eliezer suspects that Clippy knows Eliezer well enough to predict his actions, then for Eliezer 'd' becomes large (Eliezer suspects Clippy will defect if Eliezer decides to cooperate). Eliezer unfortunately can't let himself be convinced that Clippy would cooperate at this point, because if Clippy knows Eliezer, then Clippy can fake that evidence. This means both players also have a strong motivation not to create suspicion in the other player: knowing the other player would still mean you lose, if the other player finds out you know. Still, if it saves a billion people, both players would want to investigate the other to take victory in the final iteration of the prisoner's dilemma (using methods which provide as little evidence of the investigation as possible; the appropriate response to catching spies of any sort is defection).

Comment by Philip_W on The True Prisoner's Dilemma · 2015-06-25T06:35:33.857Z · LW · GW

In a sense they did eat gold, like we eat stacks of printed paper, or perhaps nowadays little numbers on computer screens.

Comment by Philip_W on Simulate and Defer To More Rational Selves · 2015-06-16T05:51:07.129Z · LW · GW

That doesn't seem true. How can the victim know for sure that the blackmailer is simulating them accurately or being rational?

Suppose you get mugged in an alley by random thugs. Which of these outcomes seems most likely:

  1. You give them the money, they leave.

  2. You lecture them about counterfactual reasoning, they leave.

  3. You lecture them about counterfactual reasoning, they stab you.

Any agent capable of appearing irrational to a rational agent can blackmail that rational agent. This decreases the probability of agents which appear irrational being irrational, but not necessarily to the point that you can dismiss them.

Comment by Philip_W on You have a set amount of "weirdness points". Spend them wisely. · 2015-02-09T00:24:15.143Z · LW · GW

I think I might have been a datapoint in your assessment here, so I feel the need to share my thoughts on this. I would consider myself socially progressive and liberal, and I would hate not being included in your target audience, but for me your wearing cat ears to the CFAR workshop cost you weirdness points that you later earned back by appearing smart and sane in conversations, by acceptance by the peer group, acclimatisation, etc.

I responded positively because it fell within the 'quirky and interesting' range, but I don't think I would have taken you as seriously on subjectively weird political or social opinions. It is true that the cat ears are probably a lot less expensive for me than cultural/political out-group weirdness signals, like a military haircut. It might be a good way to buy other points, so positive overall, but that depends on the circumstances.

Comment by Philip_W on Rationality Quotes December 2014 · 2015-01-06T13:30:34.346Z · LW · GW

Ah, "actual" threw me off. So you mean something close to "The lifetime projected probability of being born(/dying) for people who came into existence during the last year".

Comment by Philip_W on Tell Culture · 2015-01-05T12:01:25.860Z · LW · GW

Thanks, edited.

Comment by Philip_W on Tell Culture · 2015-01-05T08:33:35.380Z · LW · GW

Karma sink.

Comment by Philip_W on Tell Culture · 2015-01-05T08:33:26.327Z · LW · GW

If you're on the autism spectrum and think Tell culture is a bad idea, upvote this comment.

Comment by Philip_W on Tell Culture · 2015-01-05T08:33:19.559Z · LW · GW

If you're on the autism spectrum and think Tell culture is a good idea, upvote this comment.

Comment by Philip_W on Tell Culture · 2015-01-05T08:32:45.108Z · LW · GW

I'm on the autism spectrum (PDD-NOS), and Tell culture sounds like a good idea to me.

[pollid:807]

Comment by Philip_W on Rationality Quotes December 2014 · 2015-01-05T08:07:24.457Z · LW · GW

birth rate

I wouldn't consider abortion a "birth", per se.

Comment by Philip_W on Rationality Quotes December 2014 · 2015-01-03T13:55:46.557Z · LW · GW

That's just not true. Death rate, as the name implies, is a rate - the population that died in this year divided by the average total population. If "death rate" is 100%, then "birth rate" is 100% by the same reasoning, because 100% of people were born.

Comment by Philip_W on On Caring · 2015-01-02T16:35:27.666Z · LW · GW

You seem to be talking about what I would call sympathy, rather than empathy. As I would use it, sympathy is caring about how others feel, and empathy is the ability to (emotionally) sense how others feel. The former is in fine enough state - I am an EA, after all - it's the latter that needs work. Your step (1) could be done via empathy or pattern recognition or plain listening and remembering as you say. So I'm sorry, but this doesn't really help.

Comment by Philip_W on On Caring · 2015-01-02T16:19:50.044Z · LW · GW

I'll admit I don't really have data for this. But my intuitive guess is that ...

Have you made efforts to research it? Either by trawling papers or by doing experiments yourself?

students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them.

Your objection had already been accounted for: $500 to SCI means around 150 extra people attend school for a year. I estimated the number of students who will have a relationship with their teacher as good as the average you provide at around 1 in 150.

But it seems like there's many things that matter in life that don't have a price tag.

That sounds deep, but it is obviously false: would you condemn yourself to a year of torture to get one unit of the thing that allegedly doesn't have a price tag (for example, a single minute of conversation with a student where you feel a real connection)? Would you risk a one-in-a-million chance of getting punched on the arm to get the same unit? If the answers to these questions are [no] and [yes] respectively, as I would expect them to be, those are the outer limits of the price range. Getting to the true value is just a matter of convergence.
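As a sketch of what that convergence could look like (the willing_to_trade oracle and the dollar scale here are hypothetical stand-ins for answers to questions like the two above):

```python
# Illustrative bisection between an upper and a lower bound on how much you
# would "pay" for one unit of the allegedly priceless thing. willing_to_trade
# is a hypothetical oracle: would you accept this cost for one unit?

def converge_on_price(willing_to_trade, low=0.0, high=1_000_000.0, steps=20):
    for _ in range(steps):
        mid = (low + high) / 2
        if willing_to_trade(mid):
            low = mid    # you'd pay this much, so the true value is at least mid
        else:
            high = mid   # you wouldn't, so the true value is below mid
    return low, high

# Example with a made-up true valuation of $350:
low, high = converge_on_price(lambda price: price <= 350)
print(low, high)  # brackets ~350 after 20 halvings
```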

Perhaps more to the point, though, those people you would help halfway across the world are just as real, and their lives just as filled with "things that don't have a price tag", as people in your environment. For $3000, one family is not torn apart by a death from malaria. For $3, one more child attends grade school regularly for a year because they are no longer ill from parasitic stomach infections. These are not price tags; these are trades you can actually make. Make the trades, and you set a lower limit. Refuse them, and the maximum price tag you put on a child's relationship with their teacher is set, period.

It does seem very much like you're guided by your warm fuzzies.

Comment by Philip_W on The Need for Human Friendliness · 2014-12-18T16:43:55.699Z · LW · GW

Did MIRI answer you? I would expect them to have answered by now, and I'm curious about the answer.

Comment by Philip_W on On Caring · 2014-12-09T15:39:32.173Z · LW · GW

you can do things to change yourself so that you do care.

Would you care to give examples or explain what to look for?

Comment by Philip_W on On Caring · 2014-12-09T13:51:30.824Z · LW · GW

(separated from the other comment, because they're basically independent threads).

I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

This sounds unlikely. You say you're improving the education and mental health of on the order of 100 students. Deworm the World and SCI improve school attendance by 25%, meaning you would have the same effect, as a first guess and to first order at least, by donating on the order of $500/yr. And attendance is just one of the side-effects of ~600 people not feeling ill all the time. So if you primarily care about helping people live better lives, $50/yr to SCI ought to equal your stated current efforts.

However, that doesn't count flow-through effects. EA is rare enough that you might actually get a large portion of the credit for convincing someone to donate to a more effective charity, or even become an effective altruist: expected marginal utility isn't conserved across multiple agents (if you have five agents who can press a button, and all have to press their buttons to save one person's life, each of them has the full choice of saving or failing to save someone, assuming they expect the others to press the button too, so each of them has the expected marginal utility of saving a life). Since it's probably more likely that you convince someone else to donate more effectively than that one of the dewormed people will be able to have a major impact because of their deworming, flow-through effects should be very strong for advocacy relative to direct donation.

To quantify: Americans give 1% of their incomes to poverty charities, so let's make that $0.5k/yr/student. Let's say that convincing one student to donate to SCI would get them to donate that much more effectively about 5 years sooner than otherwise (those willing would hopefully be roped in eventually regardless). Let's also say SCI is five times more effective than their current charities. That means you win $2k to SCI for every student you convince to alter their donation patterns.

You probably enjoy helping people directly (making you happy, which increases your productivity and credibility, and is also just nice), and helping them will earn you social credit which makes you more likely to convince them, so you could mostly keep doing what you're doing, just adding the advocacy bit in the best way you see fit. Suppose you manage to convince 2.5% of each class; that means you get around $5k/year to SCI, or about 100 times more impact than what you're doing now, just by doing the same AND advocating that people donate more effectively. That's an extra six thousand sick people, more than a third of them children and teens, that you would be curing every year.
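Spelling that arithmetic out (every input below is one of the assumptions above, nothing measured):

```python
# Rough re-computation of the advocacy estimate above; every input is an
# assumption from this comment, with value expressed in "dollars-to-SCI-equivalent" terms.

donation_per_student = 500   # $/yr a convinced student already gives (1% of income)
years_earlier = 5            # years sooner they switch to an effective charity
effectiveness_ratio = 5      # SCI vs. their current charities
students_per_year = 100      # students reached each year
conversion_rate = 0.025      # fraction of students convinced

# Redirecting $500/yr for 5 years at 5x effectiveness gains the equivalent of
# 500 * 5 * (1 - 1/5) = $2000 donated straight to SCI per convinced student.
gain_per_convert = donation_per_student * years_earlier * (1 - 1 / effectiveness_ratio)

yearly_gain = gain_per_convert * students_per_year * conversion_rate
print(gain_per_convert, yearly_gain)  # 2000.0, 5000.0 -> the ~$5k/yr figure,
                                      # ~100x the ~$50/yr direct-impact estimate
```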

Note: this is a rough first guess. Better numbers and the addition of ignored or forgotten factors may influence the results by more than one order of magnitude. If you decide to consider this advice, check the results thoroughly and look for things I missed. 80000hours has a few pages on advocacy, if you're interested.

Comment by Philip_W on On Caring · 2014-12-09T13:49:28.668Z · LW · GW

Empathy is a useful habit that can be trained, just as much as rationality can be.

Could you explain how? My empathy is pretty weak and could use some boosting.

Comment by Philip_W on On Caring · 2014-12-09T10:38:06.303Z · LW · GW

Assuming his case is similar to mine: the altruism-sense favours wireheading - it just wants to be satisfied - while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams "THIS IS FAKE, YOU GOTTA WAKE UP, NEO". And that part wouldn't shut up unless I actually believed I was out (or it's shut off, naturally).

When modeling myself as sub-agents, then in my case at least the anti-wireheading and pro-altruism parts appear to be independent agents by default: "I want to help people/be a good person" and "I want it to actually be real" are separate urges. What the OP seems to be appealing to is a system which says "I want to actually help people" in one go - sympathy, perhaps, as opposed to satisfying your altruism self-image.

Comment by Philip_W on Agency and Life Domains · 2014-11-19T17:09:39.031Z · LW · GW

Evidence please?

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-05T18:16:28.891Z · LW · GW

Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:

There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.

With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.

and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.

The Virtue of Narrowness may help you. I have different names for "DDR RAM" and "a replacement battery for my Sony Z2 Android" even though I can see how they both relate to computers.

For me at least, saying something "can't be changed" roughly means modelling something as P(change)=0. This may be fine as a local heuristic when there are significantly larger expected utilities on the line to work with, but without a subject of comparison it seems inappropriate, and I would blame it for certain error modes, like ignoring theories because they have been labeled impossible at some point.

To approach it another way, I would be fine with just adding adjectives to "extremely ridiculously [...] absurdly unfathomably unlikely" to satisfy the requirements of narrowness, rather than just saying something can't be done.

A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult to influence circumstance.

I would call this "level-headedness". By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help. My dataset luckily isn't large, but I have been able to get by on "numb" pretty well in the few relevant cases.

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-04T20:06:43.917Z · LW · GW

In ethics, the question would be answered by "yes, this ethical system is the only acceptable way to make decisions" by definition. In practice, this fact is not sufficient to make more than 0.01% of the world anywhere near heroically responsible (~= considering ethics the only emotionally/morally/role-followingly acceptable way of making decisions), so apparently the question is not decided by ethics.

Instead, roles and emotions play a large part in determining what is acceptable. In western society, the role of someone who is responsible for everything and not in the corresponding position of power is "the hero". Yudkowsky (and HJPEV) might have chosen to be heroically responsible because he knows it is the consistent/rational conclusion of human morality and he likes being consistent/rational very much, or because he likes being a hero, or more likely a combination of both. The decision is made due to the role he wants to play, not due to the ethics itself.

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-04T18:41:25.439Z · LW · GW

In that case, I'm confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruous with it, and why it doesn't just fall under "courage" and "wisdom" (as the emotional fortitude to withstand the inevitable imperfection/partial failure, and accurate beliefs, respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-03T20:09:02.414Z · LW · GW

No: the concept that our ethics is utilitarian is independent from the concept that it is the only acceptable way of making decisions (where "acceptable" is an emotional/moral term).

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-03T19:48:52.917Z · LW · GW

HJPEV isn't supposed to be a perfect executor of his own advice and statements. I would say that it's not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HJPEV would feel appropriately bad about his choices if he came to that realisation.

you'll drive yourself crazy if you blame yourself every time you "could have" prevented something that no-one should expect you to have.

Depending on what you mean by "blame", I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. By heroic responsibility, you don't have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.

It is impossible to fulfill the requirements of heroic responsibility.

Where do you get the idea of "requirements" from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? IMO no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the third to die when it's wounded so that 4-10 don't get eaten because they would have been traveling more slowly.

It is a basic fact of utilitarianism that you can't score a perfect win. Even discounting the universe which is legitimately out of your control, you will screw up sometimes as point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have. Heroic responsibility is the emotional equivalent of this fact.

What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.

That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? To keep it heroically themed, I think you're better off with courage, wisdom, and power.

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-02T17:54:40.495Z · LW · GW

As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs.

This would only be true if the hero had infinite resources and were actually able to redo everyone's work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn't insist on farming her own wheat for her bread (like she would if she didn't trust the supply chain), not because she doesn't have a (heroic) responsibility to make sure she stays alive to help patients, but because that very responsibility means she shouldn't waste her time and effort on unfounded paranoia to the detriment of everyone.

The main thing about heroic responsibility is that you don't say "you should have gotten it right". Instead you can only say "I was wrong to trust you this much": it's your failure, and whether it's a failure of the person you trusted really doesn't matter for the ethics of the thing.

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-02T17:20:36.982Z · LW · GW

No, it doesn't. If you're uncertain about your own reasoning, discount the weight of your own evidence proportionally, and use the new value. In heuristic terms: err on the side of caution, by a lot if the price of failure is high.

Comment by Philip_W on A discussion of heroic responsibility · 2014-11-02T12:53:06.265Z · LW · GW

You and Swimmer963 are making the mistake of applying heroic responsibility only to optimising some local properties. Of course that will mean damaging the greater environment: applying "heroic responsibility" basically means you do your best AGI impression, so if you only optimise for a certain subset of your morality your results aren't going to be pleasant.

Heroic responsibility only works if you take responsibility for everything. Not just the one patient you're officially being held accountable for, not just the most likely Everett branches, not just the events you see with your own eyes. If your calling a halt to the human machine you are a part of truly has an expected negative effect, then it is your heroic responsibility to shut up and watch others make horrible mistakes.

A culture of heroic responsibility demands appropriate humility; it demands making damn sure what you're doing is correct before defying your assigned duties. And if human psychology is such that punishing specific people for specific events works, then it is everyone's heroic responsibility to make sure that rule exists.

Applying this in practice would, for most people, boil down to effective altruism: acquiring and pooling resources to enable a smaller group to optimise the world directly (after acquiring enough evidence of the group's reliability that you know they'll do a better job at it than you), trying to influence policy through political activism, and/or assorted meta-goals, all the while searching for ways to improve the system and obeying the law. Insisting you help directly instead of funding others would be statistical murder in the framework of heroic responsibility.

Comment by Philip_W on Living Luminously · 2014-08-19T08:57:16.336Z · LW · GW

FWIW, this is more commonly known as "cognitive behavioural therapy", with a focus on "schema therapy".

Comment by Philip_W on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-05T10:08:22.431Z · LW · GW

I still don't see why repeat castings with hatred would require higher amounts of effort each time,

This is weird: in many cases hatred would peter out into indifference rather than into anything positive, which ought to make AK easier. In fact, the idea that killing gets easier with time because of growing indifference is a recognised trope. It's even weirder that the next few paragraphs are an author tract on how baseline humans let people die out of apathy all the time, so it's not like Yudkowsky is unfamiliar with the ease with which people kill.

Comment by Philip_W on Meetup : Utrecht: Behavioural economics, game theory... · 2014-04-07T15:35:39.548Z · LW · GW

Just FYI: We've got a meetup group and a Facebook group.

Comment by Philip_W on A critique of effective altruism · 2014-02-12T22:07:07.522Z · LW · GW

Concerning historical analogues: From what I understand about their behaviour, it seems like the Rotary Club pattern-matches some of the ideas of Effective Altruism, specifically the earning-to-give and community-building aspects. They have a million members who give on average over $100/yr to charities picked out by Rotary Club International or local groups. This means that in the past decade, their movement has collected one billion dollars towards the elimination of Polio. Some noticeable differences include:

  1. I can't find any mention of Rotary spending on charity effectiveness research.^1
  2. They have a relatively monolithic structure. The polio-elimination charity was founded by Rotarians, charitable goals are suggested to Rotary and picked by Rotarians, etc.
  3. Relatively low expectations for people within the group. Rotarians tend to be first-world upper or middle class, so $100 is likely to be closer to 0.1% of their income than the 10% commonly prescribed by EA.
  4. Relatively high barrier of entry. To become a Rotarian, you have to be asked by a Rotarian, and you have to be vetted. Any old fool can call themselves an Effective Altruist and nobody will challenge them on it.
  5. Allegedly, nepotism. Rotarians allegedly form a network and are willing to give other Rotarians preferential treatment and/or employment. I've heard some earning-to-give effective altruists speak of evolving to do the same thing, but we currently don't have the network.
  6. They started as businessmen, EA started as philosophers and students. That gives us a significant disadvantage when combined with (5), because we aren't capable of helping each other or funding significant endeavours, and we won't be for some time.

(1) Note that for each year by which Rotary advanced the annihilation of polio, they saved 1,000 lives and improved 200,000 more. Highballing at 100,000 life-equivalents saved, that would put them at $10,000 per life saved. That's a factor of 3-4 worse than GiveWell charities, though I'm not confident the current "skimming the margins" tactic would work when you've got a billion dollars to distribute.