r/HPMOR on heroic responsibility

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-21T11:08:25.284Z · LW · GW · Legacy · 51 comments

r/HPMOR readers on heroic responsibility - not the OP, the comments.  Holy snorkels this is good.

51 comments

Comments sorted by top scores.

comment by cousin_it · 2012-08-21T12:18:21.001Z · LW(p) · GW(p)

It seems to me that if many people adopt heroic responsibility to their own values, then a handful of people with destructive values might screw up everyone else, because destroying is easier than helping people.

Replies from: RolfAndreassen, shminux, V_V
comment by RolfAndreassen · 2012-08-21T19:03:22.822Z · LW(p) · GW(p)

Well yes, but what exactly are you going to substitute for individual judgement of what actions to take? If you decide that the best course of action is to follow code X, and to enforce that others do so as well, even when breaking the code this one time seems like it would give better results (perhaps for reasons of precommitment or average-case utility or what-have-you)... then oops, you seem to have made that decision in accordance with your own best judgement.

Replies from: cousin_it
comment by cousin_it · 2012-08-21T19:26:33.345Z · LW(p) · GW(p)

Maybe heroic responsibility is one of those policies you want to adopt, but don't want to advocate?

comment by Shmi (shminux) · 2012-08-21T16:13:27.013Z · LW(p) · GW(p)

There is an obvious parallel between HJPEV and AGI: he can do (and does) stuff no (other) human can even conceive of doing.

How do you know if your values and goals are constructive or destructive? It all comes down to the same hard question of FAI (well, FHHI, friendly human hero intelligence, in this case): are your values CEV-aligned? So, the first thing Harry should do is stop running around saving people and derive the CEV :)

One can, of course, argue that, as a human hero, HJPEV has the right CEV built in and should just run around implementing it. However, given that even his friends and allies disagree (Dumbledore is a deathist, McGonagall is a disciplinarian, Hermione thinks they are too young, the Weasley twins just want to have fun), and that he gets the most help from the anti-hero Quirrell, this point of view is hard to defend.

The situation is much worse outside of fiction, where buggy or limited wetware constantly leads would-be heroes (Lenin, Castro, Lincoln, or even Hitler) to inflict suffering or destruction on the very people they aspired to help.

So the first question a would-be hero should ask herself is whether she is prepared to live with the consequences of her actions if they backfire. (And if the answer is yes, she is clearly a villain.)

Replies from: Eliezer_Yudkowsky, kilobug, V_V
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-21T20:37:49.970Z · LW(p) · GW(p)

CEV is a construct for AI purposes that actual human beings can't eval - I don't think I've ever seen a human discussion that was helped by invoking it. It's not like Solomonoff Induction where sometimes you really can be helped by thinking formally about Occam's Razor. In practice, human beings arguing about ethics are either already approximating their part of the 'good' as best they can, or they're confused about something much simpler than CEV, like consequentialism. If you should never use the word 'truth' when you can just talk about the object level, and never say 'because it's not optimal!' when that just means 'I don't think you should do that', then there's basically never a good time to talk about CEV - it always deflates out of the sentence unless you're talking directly about FAI.

Replies from: shminux, V_V, Will_Newsome
comment by Shmi (shminux) · 2012-08-22T00:11:51.127Z · LW(p) · GW(p)

I suppose my point is that, if you adopt "heroic responsibility", you ought to put a correspondingly heroic amount of effort into figuring out what a hero ought to do. And given that your Harry plans to take over the world and then radically change it, he ought to do an awful lot of figuring out first. Probably of the same order of magnitude as an FAI would.

comment by V_V · 2012-08-21T22:15:41.326Z · LW(p) · GW(p)

Solomonoff Induction where sometimes you really can be helped by thinking formally about Occam's Razor.

That's curious, because Solomonoff Induction is something not even an enormously powerful (but computable) AI can evaluate.
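
For readers who haven't seen the definition, a rough sketch of why: relative to a universal prefix machine U, the Solomonoff prior of a finite string x is usually written as

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

i.e. the sum over all programs p whose output begins with x. Evaluating that sum exactly requires knowing which programs halt with output extending x, so M(x) can only be approximated from below; no computable agent, however powerful, can actually evaluate it.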

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-22T01:49:36.627Z · LW(p) · GW(p)

Yes, but my point is that thinking about SI or MML in the abstract helps because people sometimes gain insight from asking "How complex is that computer program?" I haven't seen appeal-to-CEV produce much insight in practice, and any insight it could produce can probably be better produced by appealing to the relevant component principle of CEV instead. (Nor yet is this a critique of CEV, because it's meant as an AI design, not as a moral intuition pump.)

Replies from: V_V
comment by V_V · 2012-08-22T10:04:26.554Z · LW(p) · GW(p)

Can you provide an example where Solomonoff Induction can be used to gain insight that Occam's razor doesn't help to gain?

Replies from: Manfred, Kawoomba
comment by Manfred · 2012-08-22T13:14:50.302Z · LW(p) · GW(p)

William of Ockham originally used his principle to argue for the existence of God (God is the only necessary entity, therefore the simplest explanation).

Replies from: V_V
comment by V_V · 2012-08-22T15:45:18.579Z · LW(p) · GW(p)

That's a truly epic fail, since Occam's razor is the strongest argument against the existence of God.

It's worth noting that the current formulation "entities must not be multiplied beyond necessity" is much more recent than Ockham's original formulation "For nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture."

I suppose that he included the reference to the Sacred Scripture specifically because he realized that without it, God would be the first thing to fly out of the window.

Replies from: CarlShulman
comment by CarlShulman · 2012-09-17T04:09:54.246Z · LW(p) · GW(p)

I sometimes wish I knew which philosophers of the time were sincere in their religious disclaimers.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-09-17T05:26:31.945Z · LW(p) · GW(p)

That should go in a quotes thread.

Consider it done.

Replies from: CarlShulman
comment by CarlShulman · 2012-09-17T08:01:33.300Z · LW(p) · GW(p)

My thought in leaving that comment rather than doing it myself was for V_V to get credit, but OK.

comment by Kawoomba · 2012-08-22T13:38:40.473Z · LW(p) · GW(p)

How else can you impartially wield Occam's Razor than with a formal model, and what convincing formalization is there other than Kolmogorov Complexity (and assorted variants), which SI in a way extends?

Replies from: V_V
comment by V_V · 2012-08-22T16:04:13.000Z · LW(p) · GW(p)

Setting aside the theoretical objections to Solomonoff induction (a priori assumption of computability of the hypotheses, disregard of logical depth, dependence on the details of the computational model, normalization issues), even if you accept it as a proper formalization of Occam's Razor, in order to apply it in a formal argument you would have to perform an uncomputable calculation.

Since you can't do that, what's left of it?

Replies from: Kawoomba
comment by Kawoomba · 2012-08-22T17:10:13.763Z · LW(p) · GW(p)

Besides noting that there are computable versions of Kolmogorov Complexity (such as MML), in your parent comment you contrasted the use of SI with using Occam's Razor itself.

That's what I was asking about, and it doesn't seem like you answered it:

How do you use Occam's Razor? What formalizations do you perceive as "proper"? Or, if you're just intuiting the heuristic and guesstimating the complexity, what is the formal principle that your intuition derives from / approximates, and how does it differ from e.g. Kolmogorov Complexity?

Replies from: V_V
comment by V_V · 2012-08-22T19:13:29.776Z · LW(p) · GW(p)

Besides noting that there are computable versions of Kolmogorov Complexity (such as MML)

If by MML you mean Minimum message length, then I don't think that's correct. This paper compares Minimum message length with Kolmogorov Complexity but it doesn't seem to make that claim.

How do you use Occam's Razor? What formalizations do you perceive as "proper"? Or, if you're just intuiting the heuristic and guesstimating the complexity, what is the formal principle that your intuition derives from / approximates, and how does it differ from e.g. Kolmogorov complexity?

My point is that Kolmogorov complexity, Solomonoff induction, etc., are mathematical constructions with a formal semantics. Talking about "informal" Kolmogorov complexity is pseudo-mathematics, which is usually an attempt to make your arguments sound more compelling than they are by dressing them in mathematical language.

If there is a disagreement about which hypothesis is simpler, trying to introduce concepts such as ill-defined program lengths that can't be computed, can only obscure the terms of the debate, rather than clarifying them.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-22T19:40:38.859Z · LW(p) · GW(p)

From the paper you cited:

"(...) MML usually (but not necessarily) restricts the reference machine to a non-universal form in the interest of computational feasibility. (...) As a result, MML can be, and has routinely been, applied with some confidence to many problems of machine learning (...)"

If there is a disagreement about which hypothesis is simpler, trying to introduce concepts such as ill-defined program lengths that can't be computed, can only obscure the terms of the debate, rather than clarifying them.

There will be such disagreement about many different hypotheses, and even when there's not, our common intuition will have roughly approximated the informational content density of the hypotheses, i.e. their complexity.

How do you suggest we resolve such disagreements, or reach common ground, without resorting to an intuition ultimately resting on complexity measures?

How do you use Occam's Razor without an appeal to a formal notion that grounds your intuition? What does your intuition rest on, if not information theory?

Replies from: V_V
comment by V_V · 2012-08-22T20:38:53.964Z · LW(p) · GW(p)

MML usually (but not necessarily) restricts the reference machine to a non-universal form in the interest of computational feasibility.

Sure, but IIUC (I've just skimmed the paper), in order to make the comparison to Kolmogorov complexity, they consider arbitrary Turing machines as their hypotheses, which makes the analysis uncomputable.

How do you use Occam's Razor without an appeal a formal notion that grounds your intuition? What does your intuition rest on, if not information theory?

I think that's still an open problem. Solomonoff induction is certainly an attempt towards its formalization, but it doesn't yield anything that can be used for reasoning in practice. Saying "my hypothesis has smaller Kolmogorov complexity than yours" is meaningless unless you can make the argument formal.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-22T21:19:43.336Z · LW(p) · GW(p)

MML and KC are conceptually and theoretically highly related; MML is another stab at formalizing Occam's Razor in a more feasible manner, using the same approach as KC. No, they are not in fact identical, if that's what you meant (hence the different names ...)
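
Roughly, the two-part-code idea the two share: pick the hypothesis H that minimizes the total length of a message that first encodes the hypothesis and then encodes the data with the hypothesis's help,

    H* = argmin_H [ L(H) + L(D | H) ]

where the code lengths are measured against an agreed reference machine (in MML's case a deliberately restricted, and therefore tractable, one); Kolmogorov complexity is the limiting case where the reference machine is universal and the code is the shortest program.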

Saying "my hypothesis has smaller Kolmogorov complexity than yours" is meaningless unless you can make the argument formal.

But saying "Based on Occam's Razor, my hypothesis is smaller than yours" isn't just as meaningless as long as your intuition stays sufficiently fuzzy and ungrounded? Is it an open problem as soon as anyone disagrees (or on what basis would you solve any dispute)? What use would the heuristic be, then?

I guess what I don't understand is how you can embrace Occam's Razor as an intuition, yet argue against the use of the branch of information theory that formalizes it, given that there are even computable variants. I agree that to categorically make statements about the KC of most hypotheses is misguided, and I also dislike the misuse of the terminology as mere buzzwords.

However, it is the formalism that our intuition is aspiring to emulate, and to improve our intuition would be to move it further towards the formalized basis it derives from, a move which you seem to reject.

Replies from: V_V
comment by V_V · 2012-08-22T21:44:06.536Z · LW(p) · GW(p)

But saying "Based on Occam's Razor, my hypothesis is smaller than yours" isn't just as meaningless as long as your intuition stays sufficiently fuzzy and ungrounded?

It's not just a fuzzy intuition: you can try to count the concepts, but ultimately the argument remains informal. But throwing in "informal" Kolmogorov complexity doesn't help, so what's the point of doing that?

However, it is the formalism that our intuition is aspiring to emulate, and to improve our intuition would be to move it further towards the formalized basis it derives from, a move which you seem to reject.

I'm not sure that is the proper formalism, but even if it is, unless it provides actual tools to use in arguments, I think it's not appropriate to use its terminology as buzzwords.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-22T22:10:37.651Z · LW(p) · GW(p)

It's not just a fuzzy intuition, you can try to count the concepts, but ultimately the argument remains informal.

Counting concepts is an error-prone, extremely rough approximation of complexity. A fuzzy, undependable version of it, if you will.

It falls prey to problems such as (H1: A, B, C) versus (H2: A, D), with D being potentially larger or smaller than (B, C).

Or would you recommend trying to chunk out concepts of similar size? This will invariably lead you to the smallest differing unit, the smallest lexeme of your language of choice...

...and in the end, your "concept" will translate to "bit", you'll choose the shortest equivalent restatement of the hypothesis with the fewest concepts (bits), and you'll compare those. Familiar?
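
A toy sketch of that collapse, using an off-the-shelf compressor as a crude, computable stand-in for description length (the hypothesis strings below are arbitrary examples, and a general-purpose compressor's reference machine is itself arbitrary, so treat this as an illustration rather than a real complexity measure):

    import zlib

    def description_length_bits(hypothesis: str) -> int:
        # Crude, computable proxy for description length: the size in bits of
        # the zlib-compressed statement. This upper-bounds Kolmogorov
        # complexity only up to a large, compressor-dependent constant.
        return 8 * len(zlib.compress(hypothesis.encode("utf-8")))

    # Two arbitrary example "hypotheses", placeholders for illustration only:
    h1 = "the coin is fair"
    h2 = "the coin is biased, and the bias drifts with the phase of the moon"

    for h in (h1, h2):
        print(description_length_bits(h), "bits --", h)

Under this rough proxy the shorter compressed statement counts as the simpler hypothesis; that settles nothing in a genuinely contested case, but it is what "counting concepts" bottoms out in once the concepts are made small enough.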

[T]hrowing in "informal" Kolmogorov complexity doesn't help, so what's the point of doing that?

Think of it more as moving the intuition in the right direction. Of course that implies more than just usage of the terminology and precludes definitive statements (it's still an intuition, not a formal calculation).

Such emphasis on the roots of our intuition can yield both positive and negative effects: positive if used as a qualifier and a note of caution against our easily misguided "A is clearly more complex" intuitions, negative if we just tack "according to Kolmogorov Complexity" onto our intuition to lend unwarranted credence to our usual fallible guesstimating.

Replies from: V_V
comment by V_V · 2012-08-22T22:25:55.434Z · LW(p) · GW(p)

I'm not sure what exactly the focal point of our disagreement is.

I'm not against making arguments more formal, I just don't see how Kolmogorov complexity, Solomonoff induction, etc. can be practically used to that purpose.

comment by Will_Newsome · 2012-08-22T02:33:14.912Z · LW(p) · GW(p)

It's not like Solomonoff Induction where sometimes you really can be helped by thinking formally about Occam's Razor. In practice, human beings arguing about ethics are either already approximating their part of the 'good' as best they can, or they're confused about something much simpler than CEV, like consequentialism.

It's exactly like Solomonoff Induction where most of the time you really can't be helped by thinking formally about Occam's Razor. In practice, human beings arguing about probabilities are either already approximating their part of the 'simple' as best they can, or they're confused about something much simpler (haha) than Solomonoff Induction, like Bayesianism.

comment by kilobug · 2012-08-21T19:05:00.067Z · LW(p) · GW(p)

Nitpicking note: I don't think the Weasley twins just want to have fun. They are in Gryffindor, in the Order of the Phoenix, they fight the Death Eaters to the end, ... they want to have fun, but they also want others to have fun.

Replies from: shminux
comment by Shmi (shminux) · 2012-08-21T19:11:45.982Z · LW(p) · GW(p)

In the canon, sure. Their HPMOR characters are not nearly as nuanced.

Replies from: Desrtopa
comment by Desrtopa · 2012-08-22T02:02:09.429Z · LW(p) · GW(p)

Well, it's not as if they've been given nearly as much opportunity to characterize themselves by anything else.

Replies from: shminux
comment by Shmi (shminux) · 2012-08-22T02:55:56.631Z · LW(p) · GW(p)

Oh, I did not mean this in a negative way.

comment by V_V · 2012-08-21T22:29:32.435Z · LW(p) · GW(p)

CEV is one of the many flavors of total act utilitarianism (or average? I can't really make sense of Yudkowsky's position given what he recently wrote in a comment on OB).

Anyway, act utilitarianism comes in many flavors that differ essentially in how they define the utility functions of the moral patients and how to compare them. 'Coherent Extrapolation', IIUC, means "The Almighty FAI knows what is best for you better than you do".

That doesn't look very pretty to me.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-22T05:02:58.454Z · LW(p) · GW(p)

CEV is one of the many flavors of total act utilitarianism (or average?

This is not just false, but something of a category error. Here is the CEV working paper if you want to read it. It's talking about a very loose class of procedures to select actions using the device of what an idealized version of some specified population would decide on under some idealized circumstances. The upshot of that for object-level normative questions depends on how the idealized process of moral deliberation would go. There's no necessary connection to any form of utilitarianism, or any other first-order normative account.

Replies from: V_V
comment by V_V · 2012-08-22T09:30:43.104Z · LW(p) · GW(p)

How does it aggregate preferences?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-08-26T11:14:42.600Z · LW(p) · GW(p)

That's an open question.

comment by V_V · 2012-08-21T22:04:55.975Z · LW(p) · GW(p)

That's the Übermensch morality of dictators and totalitarian regimes. The problem is that every dictator thinks of themselves as a benevolent dictator, but it turns out that they are often mistaken.

Even when there are multiple aspiring dictators, all with essentially benevolent values, conflict on who should rule can be very much destructive.

Of course this presupposes consequentialism. In deontology, morality is typically viewed as a social contract where each party has well-defined responsibilities.

I don't think it's an accident that violent totalitarian ideologies tend to be consequentialist: these innocent heretics/Jews/bourgeois stand between us and our utopia. Kill'em all!

Replies from: amcknight
comment by amcknight · 2012-08-21T22:41:20.576Z · LW(p) · GW(p)

citations please! I doubt that most dictators think they are benevolent and are consequentialists.

Replies from: Decius, wedrifid
comment by Decius · 2012-08-22T13:51:10.188Z · LW(p) · GW(p)

I think that most dictators who make it into history books think about benevolence differently than most people.

comment by wedrifid · 2012-09-17T06:03:12.695Z · LW(p) · GW(p)

citations please! I doubt that most dictators think they are benevolent and are consequentialists.

Thank you! I get tired of the whole "everybody thinks they are good" nonsense I hear all the time. I call it mind-projection. Some people just don't care.

comment by DanielLC · 2012-08-21T18:07:41.595Z · LW(p) · GW(p)

I don't buy Harry's argument. I call it ethical solipsism, thinking that you are the only one who has any ethical responsibility, and everyone else's actions are simply the consequences of your own.

Ironic. I consider anything else ethical solipsism. Why would it matter when it's your fault, but not when it's someone else's?

Also, I don't think Harry made an argument. He just defined heroic responsibility.

Replies from: PlacidPlatypus
comment by PlacidPlatypus · 2012-08-25T04:51:53.947Z · LW(p) · GW(p)

I would say that he was making at least the argument that "this level of responsibility is something you should adopt if you want to be a hero", and probably the more general argument that "people should adopt this attitude toward responsibility."

comment by Nectanebo · 2012-08-21T13:49:04.614Z · LW(p) · GW(p)

A lot of the comments take a very consequentialist point of view, and they explain themselves fairly well, which is good.

Perhaps it is because I've seen many really bad reddit comments before (even in subreddits relating to fields usually sympathetic to rationalist ideals) and what I'm seeing here is of a different standard, but I find myself hoping to some extent that some of the people commenting here were idiots before reading HPMOR, and that somehow they became more insightful and eloquent as a result of being exposed to the fic and related content.

I think I might recommend the fic to some more people...

Replies from: None, Manfred, PlacidPlatypus
comment by [deleted] · 2012-09-02T01:06:42.121Z · LW(p) · GW(p)

Thinking back, I'm almost certain that the way I was introduced to Less Wrong was through HPMOR. The long gaps between updates encouraged me to start reading the sequences, even.

It's probably the most effective recruiting tool for rationality in the world as of now.

comment by Manfred · 2012-08-21T17:22:38.863Z · LW(p) · GW(p)

I think many of them are secretly (not secretly) LW readers :P

comment by PlacidPlatypus · 2012-08-25T05:02:54.696Z · LW(p) · GW(p)

At least for myself, I first heard of Eliezer via the HPMOR TV Tropes page. There's a good chance I would have read the sequences sooner or later even if I hadn't (my brother found them independently and recommended them), but it definitely helped.

And I wouldn't say I was an idiot before, but twenty minutes of conversation with myself from a couple years ago might change my mind. And of course it's hard to tell how much of the difference is LW's influence and how much is just a matter of being older and wiser.

comment by buybuydandavis · 2012-08-21T19:53:42.961Z · LW(p) · GW(p)

But Harry just shook his head. "That's not the responsible thing to do, Hermione. It's what someone playing the role of a responsible girl would do."

I don't know that Harry or Eliezer make this distinction, but I'd say that it's important to recognize a difference between judgments about yourself, and judgments about others.

When Harry talks about heroic responsibility, he's speaking from the first person perspective (at least). He has his values, and he wants them achieved regardless of the good/bad behavior of others.

The usual responsibility we talk about is second/third person - your fault/his fault. It's about negotiation with and judgment of others. This is where someone judges someone as "fulfilling a role" - it's a thumbs down or a thumbs up on another person, according to the rules of interpersonal behavior you approve of.

We can treat ourselves as objects, and apply that thumbs up or thumbs down to ourselves as well. That is useful for understanding other people's reactions to you, for negotiating and otherwise interacting with others, and for asserting your values in those negotiations and interactions, but it's a category error to treat that interpersonal thumbs up/thumbs down applied to yourself as the proper way to choose your actions. I'm pretty far down the path of just considering it a neurosis.

Choose your actions to achieve your values, don't choose them to achieve good boy status.

However, it seems to me that what Harry has described as heroic responsibility is a conflation of the 1st and 3rd person perspectives. He still judges his own good boy status in the 3rd person, but the standard he applies is whether he behaved properly from a 1st person perspective. He is a good boy if he chooses the best actions to achieve his values, and a bad boy otherwise. Actually, he seems a little worse than this, in that any failure makes him a bad boy, which is an impossible standard.

(EDIT: jimrandomh makes a similar point)

Replies from: Decius, RobbBB
comment by Decius · 2012-08-22T14:02:08.291Z · LW(p) · GW(p)

He is maximizing his utility part-function: people should not be harmed. Suppose I know of a situation such that:

  1. Someone will be harmed if I do nothing;
  2. They will be harmed equally if I tell the authorities;
  3. If I intercede with my allies, they will be harmed less.

The first two actions are equal. The third one is better. The ability to determine the actual consequences of future actions is magical. Fortunately for Harry, he has access to magic. I expect Harry to develop a procedure where he makes several plans, checks for a note indicating which plan he should use and any changes he should make, executes the plan, and then time-turns a note back to himself indicating which plan worked and any retroactive changes to it, then assists himself if needed.

Replies from: PlacidPlatypus
comment by PlacidPlatypus · 2012-08-25T05:10:15.412Z · LW(p) · GW(p)

Something tells me that the note would be more likely to say something like "DO NOT MESS WITH TIME".

Replies from: Decius
comment by Decius · 2012-08-25T21:34:41.743Z · LW(p) · GW(p)

Really? Is that what happened just before he got the time-turner?

As I recall, he was trying to demonstrate a way to solve any problem in C time, where C is the time required to falsify a proposed solution. That's different than realizing that you have a higher chance of destroying Azkaban in an hour if you help yourself eight times.

comment by Rob Bensinger (RobbBB) · 2012-11-23T07:08:14.133Z · LW(p) · GW(p)

It sounds like you're assuming that all Morality-speak is a ploy to get others to follow our will, and a way of taking stock of others' comparable attempts to influence us. I'd like to suggest that there are other ways to construe Morality-speak like 'hero' and 'virtue':

  1. One could simply treat one's values and Morality-speak as interchangeable. Why use one language for the second- and third-person, and a totally different language for the first-person? It's confusing, makes generalizations of your desires difficult, and greatly weakens your rhetorical power (since you seem to be subscribing to a double standard, if only terminologically). If your values are such that you don't actually give a privileged status to your own welfare as opposed to others, then this linguistic shift conceals an important symmetry.

  2. Morality-speak might be an idealization of one's optimized values, one's values once they've been brought into optimal reflective equilibrium. What you currently care about might not be what you think you ought to care about, even if you ultimately define 'oughtness' in terms of the aforementioned preferences. Otherwise it would be incoherent to lament how much one presently likes something, or to wish for a reform to one's secondary concerns that would bring them into greater internal harmony.

Actually, he seems a little worse than this, in that any failure makes him a bad boy, which is an impossible standard.

Two issues: First, he's probably exaggerating at least a little. Second, it's clear that he's adopting the standard in question for utilitarian reasons; he happens to be especially motivated by falling short of some lofty ideal. For most people an at least somewhat less exacting standard would probably be desirable; but since standards are just heuristics for winning, the question of how demanding to be is one for empirical psychology.

comment by A1987dM (army1987) · 2012-08-22T10:05:04.085Z · LW(p) · GW(p)

To what extent does it contain spoilers? I haven't read HP:MOR yet.

Replies from: PlacidPlatypus, Decius
comment by PlacidPlatypus · 2012-08-25T05:11:40.184Z · LW(p) · GW(p)

Decius is right that there aren't really spoilers, but I would argue that your time would be better spent reading HP:MOR than the discussion.

comment by Decius · 2012-08-22T13:53:36.768Z · LW(p) · GW(p)

You won't gain knowledge about a twist in the plot by reading the article; the comments contain descriptions of events in the story, but I didn't see anything that I would call a spoiler.