A discussion of heroic responsibility
post by Swimmer963 (Miranda Dixon-Luinenburg) · 2014-10-29T04:22:04.426Z · LW · GW · Legacy · 216 comments
Contents:
Introduction
Something Impossible
The Well-Functioning Gear
Recursive Heroic Responsibility
Heroic responsibility for average humans under average conditions
[Originally posted to my personal blog, reposted here with edits.]
Introduction
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” –HPMOR, chapter 75.
Something Impossible
Bold attempts aren't enough, roads can't be paved with intentions...
You probably don’t even got what it takes,
But you better try anyway, for everyone's sake
And you won’t find the answer until you escape from the
Labyrinth of your conventions.
It’s time to just shut up, and do the impossible.
Can’t walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today...
The Well-Functioning Gear
I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I’d be surprised if any one part of it does.
Suppose I see an unusual result on my patient. I don’t know what it means, so I mention it to a specialist. The specialist, who doesn’t know anything about the patient beyond what I’ve told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it’s the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.
Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you’re not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander
Recursive Heroic Responsibility
Heroic responsibility for average humans under average conditions
I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher impact, for the usual reasons. Because she's smart. Because she's rational. Whatever.
Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.
But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them.
And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me.
216 comments
Comments sorted by top scores.
comment by [deleted] · 2014-10-29T16:42:31.762Z · LW(p) · GW(p)
I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.
There's a reason it's called heroic responsibility: it's for a fictional hero, who can do Fictional Hero Things like upset the world order on a regular basis and get away with it. He has Plot Armor, and an innately limited world. In fact, the story background even guarantees this: there are only a few tens of thousands or hundreds of thousands of wizards in Britain, and thus the Law of Large Numbers does not apply, and thus Harry is a one-of-a-kind individual rather than a one-among-several-hundred-thousand as he would be in real life. Further, he goes on adventures as an individual, and never has to engage in the kinds of large-scale real-life efforts that take the massive cooperation of large numbers of not-so-phoenix-quality individuals.
Which you very much do. You don't need heroic rationality, you need superrationality, which anyone here who's read up on decision-theory should recognize. The super-rational thing to do is systemic effectiveness, at the level of habits and teams, so that patients' health does not ever depend on one person choosing to be heroic. An optimal health system does not sound melodramatically heroic: it works quietly and can absolutely, always be relied upon.
Last bit of emphasis: you are both realer and better than Harry. He's a fictional hero, and has to fight a few battles as an individual. You are a real nurse, and have to do your part to save hundreds of lives for decades of time. The fucked-up thing about children's literature is that we never manage to get across just how small children's heroes are, how little they do, and just how large the real world inhabited by adults is, and just how very difficult it is to live here, and just how fucking heroic each and every person who does the slightest bit of good here actually is.
Replies from: wedrifid, Jiro, Viliam_Bur, SilentCal, morvkala, private_messaging
↑ comment by wedrifid · 2014-10-30T12:21:48.658Z · LW(p) · GW(p)
There's a reason it's called heroic responsibility: it's for a fictional hero, who can do Fictional Hero Things like upset the world order on a regular basis and get away with it.
NO! This is clearly not why it was called heroic responsibility and it is unlikely that the meaning has degraded so completely over time as to refer to the typical behaviour of fictional heroes. That isn't the message of either the book or the excerpt quoted in the post.
Which you very much do. You don't need heroic rationality, you need superrationality, which anyone here who's read up on decision-theory should recognize. The super-rational thing to do is systemic effectiveness, at the level of habits and teams, so that patients' health does not ever depend on one person choosing to be heroic.
Those who have read up on decision theory will be familiar with the term superrationality and notice that you are misusing it. Incidentally, those who are familiar with decision theory will also notice that 'heroic responsibility' is already assumed as part of the basic premise (i.e. agents actually taking actions that maximise the expectation of desired things occurring doesn't warrant any special labels like 'heroic' or 'responsible'). Harry is merely advocating using decision theory in a particular context where different reasoning processes are often substituted.
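To make that reading concrete: on this view, 'heroic responsibility' is just expected-value decision-making with no action ruled out by one's role. A minimal sketch in Python, where the candidate actions, probabilities, and utilities are all invented for illustration:

```python
# "Heroic responsibility" read as plain expected-utility maximization
# over every available action. All numbers below are made up.
actions = {
    "tell McGonagall only":     {"bullying stops": 0.3, "bullying continues": 0.7},
    "tell her, then follow up": {"bullying stops": 0.8, "bullying continues": 0.2},
    "vigilante shenanigans":    {"bullying stops": 0.5, "bullying continues": 0.5},
}
utility = {"bullying stops": 1.0, "bullying continues": 0.0}

def expected_utility(outcome_probs):
    return sum(p * utility[o] for o, p in outcome_probs.items())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "tell her, then follow up" wins on these numbers
```

Nothing in the calculation privileges dramatic personal action; on these (invented) numbers, telling the authority and then following up wins.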
An optimal health system does not sound melodramatically heroic: it works quietly and can absolutely, always be relied upon.
Harry knows this. If Harry happened to care about optimising the health system (more than he cared about other opportunities) then his 'heroic responsibility' would be to do whatever action moved the system in that direction most effectively. The same applies to any real humans who (actually) have that goal. Melodrama is not the point. (And the flaw in Harry that makes him melodramatic isn't his 'heroic responsibility', it's his ego. A little more heroic responsibility would likely reduce his melodrama.)
The fucked-up thing about children's literature
You seem to be confused either about which piece of literature is being discussed or about the target audience of said piece of literature.
Replies from: Jiro
↑ comment by Jiro · 2014-10-30T14:37:38.683Z · LW(p) · GW(p)
Those who have read up on decision theory will be familiar with the term superrationality and notice that you are misusing the term.
Superrationality involves assuming that other people using the same reasoning as yourself will produce the same result as yourself, and so you need to decide what is best to do assuming everyone like yourself does it too. That does indeed seem to be what eli is talking about: you support the existing system, knowing that if you think it's a good idea to support the system, so will other people who think like you, and the system will work.
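Hofstadter's version fits in a few lines. A sketch, with the usual illustrative payoffs, of how superrational reasoning differs from ordinary best-response reasoning in a symmetric prisoner's dilemma:

```python
# Superrationality in a symmetric prisoner's dilemma. Payoffs are the
# standard illustrative ones, not anything from this thread.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def superrational_choice():
    # Assume anyone reasoning like me reaches the same answer, so only
    # the symmetric outcomes (C, C) and (D, D) are live possibilities.
    return max("CD", key=lambda m: PAYOFF[(m, m)])

def best_response(their_move):
    # Ordinary causal reasoning: condition on the opponent's fixed move.
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

print(superrational_choice())                  # C
print(best_response("C"), best_response("D"))  # D D
```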
You seem to be confused either about which piece of literature is being discussed or about the target audience of said piece of literature.
I don't think he's confused. While Eliezer's fanfic isn't children's literature, the fact that Harry is a hero with plot armor is not something Eliezer invented; rather, it carries over from the source material, which is children's literature.
↑ comment by Jiro · 2014-10-29T19:40:37.963Z · LW(p) · GW(p)
There's a reason it's called heroic responsibility: it's for a fictional hero, who can do Fictional Hero Things like upset the world order on a regular basis and get away with it. He has Plot Armor, and an innately limited world.
But it's my understanding that HPMOR was meant to teach about real-world reasoning.
Is this really supposed to be one of the HPMOR passages which is solely about the fictional character and is not meant to have any application to the real world except as an example of something not to do? It certainly doesn't sound like that.
(Saying this with a straight face)
Replies from: TheOtherDave, V_V, None, None
↑ comment by TheOtherDave · 2014-10-29T20:54:09.585Z · LW(p) · GW(p)
No, it's pretty clear that the author intends this to be a real-world lesson. It's a recurring theme in the Sequences.
I think Eli was disagreeing with the naive application of that lesson to real-world situations, especially ones where established systems are functional.
That said, I don't want to put words in Eli's mouth, so I'll say instead that I was disagreeing in that way when I said something similar above.
↑ comment by V_V · 2014-10-31T15:30:54.869Z · LW(p) · GW(p)
Keep in mind that the author perceives himself pretty much like a stereotypical fictional hero: he is the One chosen to Save the World from the Robot Apocalypse, and maybe even Defeat Death and bring us Heaven. No wonder he thinks that advice to fictional heroes is applicable to him.
But when you actually try to apply that advice to people with a "real-life" job which involves coordinating with other people in a complex organization that has to ultimately produce measurable results, you run into problems.
A complex organization, for instance a hospital, needs clear rules detailing who is responsible for what. Sometimes this yields suboptimal outcomes: you notice that somebody is making a mistake and they won't listen to you, or you don't tell them because it would be socially unacceptable to do so. But the alternative, where any decision can be second-guessed and argued at length until a consensus is reached, would paralyse the organization and amplify the negative outcomes of the Dunning-Kruger effect.
Moreover, a culture of heroic responsibility would make accountability essentially impossible:
If everybody is responsible for everything, then nobody is responsible for anything. Yes, Alice made a mistake, but how can we blame her without also blaming Bob for not noticing it and stopping her? Or Carol, or Dan, or Erin, and so on.
↑ comment by Philip_W · 2014-11-02T12:53:06.265Z · LW(p) · GW(p)
You and Swimmer963 are making the mistake of applying heroic responsibility only to optimising some local properties. Of course that will mean damaging the greater environment: applying "heroic responsibility" basically means you do your best AGI impression, so if you only optimise for a certain subset of your morality your results aren't going to be pleasant.
Heroic responsibility only works if you take responsibility for everything. Not just the one patient you're officially being held accountable for, not just the most likely Everett branches, not just the events you see with your own eyes. If your calling a halt to the human machine you are a part of truly has an expected negative effect, then it is your heroic responsibility to shut up and watch others make horrible mistakes.
A culture of heroic responsibility demands appropriate humility; it demands making damn sure what you're doing is correct before defying your assigned duties. And if human psychology is such that punishing specific people for specific events works, then it is everyone's heroic responsibility to make sure that rule exists.
Applying this in practice would, for most people, boil down to effective altruism: acquiring and pooling resources to enable a smaller group to optimise the world directly (after acquiring enough evidence of the group's reliability that you know they'll do a better job at it than you), trying to influence policy through political activism, and/or assorted meta-goals, all the while searching for ways to improve the system and obeying the law. Insisting you help directly instead of funding others would be statistical murder in the framework of heroic responsibility.
Replies from: V_V
↑ comment by V_V · 2014-11-03T13:56:06.718Z · LW(p) · GW(p)
So "heroic responsibility" just means "total utilitarianism"?
Replies from: Philip_W, Kenny
↑ comment by Philip_W · 2014-11-03T20:09:02.414Z · LW(p) · GW(p)
No: the concept that our ethics is utilitarian is independent from the concept that it is the only acceptable way of making decisions (where "acceptable" is an emotional/moral term).
Replies from: V_V
↑ comment by V_V · 2014-11-03T20:43:22.818Z · LW(p) · GW(p)
What counts as an acceptable way of making decisions (where "acceptable" is an emotional/moral term) looks like an ethical question; how can it be independent from your ethics?
Replies from: Philip_W
↑ comment by Philip_W · 2014-11-04T20:06:43.917Z · LW(p) · GW(p)
In ethics, the question would be answered by "yes, this ethical system is the only acceptable way to make decisions" by definition. In practice, this fact is not sufficient to make more than 0.01% of the world anywhere near heroically responsible (~= considering ethics the only emotionally/morally/role-followingly acceptable way of making decisions), so apparently the question is not decided by ethics.
Instead, roles and emotions play a large part in determining what is acceptable. In western society, the role of someone who is responsible for everything and not in the corresponding position of power is "the hero". Yudkowsky (and HPJEV) might have chosen to be heroically responsible because he knows it is the consistent/rational conclusion of human morality and he likes being consistent/rational very much, or because he likes being a hero, or more likely a combination of both. The decision is made due to the role he wants to lead, not due to the ethics itself.
↑ comment by Kenny · 2014-11-09T04:12:32.823Z · LW(p) · GW(p)
It just means 'consequentialism'.
Replies from: V_V
↑ comment by V_V · 2014-11-09T10:45:05.055Z · LW(p) · GW(p)
There are various types of consequentialism. The lack of distinction between ethical necessity and supererogation, and the general focus on optimizing the world, are typical of utilitarianism, which is in fact often associated with effective altruism (although it is not strictly necessary for it).
Replies from: Kenny
↑ comment by [deleted] · 2014-10-30T08:13:12.058Z · LW(p) · GW(p)
Well, I can't answer for Eliezer's intentions, but I can repeat something he has often said about HPMoR: the only statements in HPMoR he is guaranteed to endorse with a straight face and high probability are those made about science/rationality, preferably in an expo-speak section, or those made by Godric Gryffindor, his author-avatar. Harry, Dumbledore, Hermione, and Quirrell are fictional characters: you are not necessarily meant to emulate them, though of course you can if you independently arrive at the conclusion that doing so is a Good Idea.
Is this really supposed to be one of the HPMOR passages which is solely about the fictional character and is not meant to have any application to the real world except as an example of something not to do?
I personally think it is one of the passages in which the unavoidable conceits of literature (ie: that the protagonist's actions actually matter on a local-world-historical scale) overcome the standard operation of real life. Eliezer might have a totally different view, but of course, he keeps info about HPMoR close to his chest for maximum Fun.
↑ comment by Viliam_Bur · 2014-10-30T14:36:47.114Z · LW(p) · GW(p)
the Law of Large Numbers does not apply, and thus Harry is a one-of-a-kind individual rather than a one-among-several-hundred-thousand as he would be in real life
I think we need a lot of local heroism. We have a few billion people on this planet, but we also have a few billion problems -- even if we perhaps have only a few thousand repeating patterns of problems.
Maybe it would be good to distinguish between "heroism within a generally functional pattern which happened to have an exception" and a "pattern-changing heroism". Sometimes we need a smart person to invent a solution to the problem. Sometimes we need thousands of people to implement that solution, and also to solve the unexpected problems with the solution, because in real life the solution is never perfect.
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-30T14:49:20.467Z · LW(p) · GW(p)
Maybe it would be good to distinguish between "heroism within a generally functional pattern which happened to have an exception" and a "pattern-changing heroism".
That's a good distinction and I would also throw in a third kind -- "heroism within a generally dysfunctional pattern which continues to exist because regular heroics keep it afloat". This is related to the well-known management concept of the "firefighting mode".
↑ comment by SilentCal · 2014-10-31T18:24:09.446Z · LW(p) · GW(p)
Superrationality isn't a substitute for heroic responsibility, it's a complement. Heroic responsibility is the ability to really ask the question, "Should I break the rules in a radical effort to change the world?" Superrationality is the tool that will allow you to usually get the correct, negative answer.
ETA: When Harry first articulates the concept of heroic responsibility, it's conspicuously missing superrationality. I think that's an instance of the character not being the author. But I think it's later suggested that McGonagall could also use some heroic responsibility, and this clearly does not mean that she should be trying to take over the world.
Replies from: None
↑ comment by morvkala · 2014-10-30T16:34:31.973Z · LW(p) · GW(p)
This seems to misunderstand the definition of heroic responsibility in the first place. It doesn't require that you're better, smarter, luckier, or anything else than the average person. All that matters is the probability that you can beat the status quo, whether through focused actions to help one person, or systematic changes. If Swimmer963 had strong enough priors that the doctor was neglecting their duty, she would be justified in doing the stereotypically heroic thing. She didn't, so she had to follow the doctor's lead.
If everyone else cares deeply about solving a problem and there are a lot of smarter minds than your own focusing on the issue, you're probably right to take the long approach and look for any systematic flaws instead of doing something that'll probably be stupid. However, there are lots of problems where the smartest, wealthiest people don't actually have the motivation to solve the problem, and the majority of people who care are entrenched in the status quo, so a mere prole lacking HJPEVesque abilities benefits strongly from heroic responsibility.
And sometimes you can't fix the system, but you can save one person and that is okay. It doesn't make the system any better, and you'll still need to fix it another day, but ignoring the cases you think you can solve because you lack the tools to tackle the root of the problem is EXACTLY the kind of behaviour heroic responsibility should be warning you about.
Replies from: Jiro
↑ comment by Jiro · 2014-10-30T18:30:47.229Z · LW(p) · GW(p)
All that matters is the probability that you can beat the status quo, whether through focused actions to help one person, or systematic changes.
This assumes that you're perfect at figuring out the probability that you can beat the status quo. Human beings are pretty bad at this.
Replies from: Philip_W
↑ comment by private_messaging · 2014-10-30T17:31:21.775Z · LW(p) · GW(p)
Well said. The way I put it, the hero jumps into the cockpit and lands the plane in a storm without once asking if there's a certified pilot on board. It is "Heroic Responsibility" because it isn't responsible without qualifiers. Nor is it heroic; it's just a glitch: the expected payoff of getting laid, times your primate brain not knowing about birth control, times the tiny probability of actually landing the plane, works out to >1 surviving copy of your genes. Or, more likely, a much cruder calculation, in which the impressiveness of success looms larger than the smallness of its chance, against a background of severe miscalibration from living in a well-tuned society.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-10-29T04:34:59.257Z · LW(p) · GW(p)
Brienne, my consort, is currently in Santiago, Chile because I didn't want to see her go through the wintertime of her Seasonal Affective Disorder. While she's doing that, I'm waiting for the load of 25 cheap 15-watt 4500K LED spotlight bulbs I ordered from China via DHgate, so I can wire them into my 25-string of light sockets, aim them at her ceiling, and try to make her an artificial sky. She's coming back the middle of February, one month before the equinox, so we can give that part a fair test.
I don't think I would have done either of these things if I didn't have that strange concept of responsibility. Empirically, despite there being huge numbers of people with SAD, I don't observe them flying to another continent for the winter, or trying to build their own high-powered lighting systems after they discover that the sad little 60-watt off-the-shelf light-boxes don't work sufficiently for them. I recently confirmed in conversation that a certain very wealthy person (who will not be on the list of the first 10 people you think I might be referring to) with SAD, someone who was creative enough to go to the Southern Hemisphere for a few weeks to try to interrupt the dark momentum, still had not built their own high-powered lighting system. Some part of their brain thought they'd done enough, I suppose, when they tried the existing 'lightboxes'.
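For the curious, a back-of-envelope sketch of what such a rig delivers. Every number below (bulb efficacy, room size, ceiling reflectance) is my own assumption, not a figure from the comment:

```python
# Rough illuminance estimate for 25 x 15 W LED bulbs aimed at a ceiling.
# Efficacy, room size, and reflectance are assumed, not measured.
bulbs, watts_per_bulb = 25, 15
lm_per_watt = 90        # assumed efficacy for cheap LED bulbs
ceiling_area_m2 = 12.0  # assumed room size
reflectance = 0.7       # assumed loss bouncing light off a white ceiling

total_lumens = bulbs * watts_per_bulb * lm_per_watt
lux = total_lumens * reflectance / ceiling_area_m2
print(f"{total_lumens} lm -> roughly {lux:.0f} lux")  # 33750 lm -> roughly 1969 lux
```

On these assumptions, the array gives on the order of 2,000 lux over the whole room all day, a different exposure profile from the standard 10,000-lux box used for half an hour at close range; whether that is enough is exactly what the before-equinox test should show.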
But no, you can't make a heroic effort to save everyone, as Dumbledore notes:
"There can only be one king upon a chessboard, Harry Potter, only one piece that you will sacrifice any other piece to save. And Hermione Granger is not that piece."
Replies from: EHeller, Kawoomba, James_Miller, Vaniver, buybuydandavis
↑ comment by Kawoomba · 2014-10-29T07:23:00.773Z · LW(p) · GW(p)
This is a tangent, but to light up the whole environment just to get a few more photons to the retina is a strange approach, even if it seems to be the go-to treatment (light boxes etc.). Why not just light up the retina with a portable device, say glasses with some LED lights tacked on? That way you can take your enlightenment with you! Could be polarised to reflect indirectly off of the glasses into your eye, with little stray radiation.
Not saying that you should MacGyver that yourself, but I was surprised that such a solution did not seem to exist.
But, it's hard to have a truly original thought, so when I googled it I found this. Seems like a good idea, no? Same principle as your artificial sky, if one would work, so should the other.
Also, as an aside to the tangent, tangent is a strange phrase, since it doesn't actually touch the main point. Should be polar line or somesuch.
Replies from: SilentCal, Rain, Eliezer_Yudkowsky, undermind
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-10-29T18:11:19.318Z · LW(p) · GW(p)
Considered the light glasses earlier, but Brienne did not expect to like them, we need morning light, and they also looked too weaksauce for serious SAD.
↑ comment by undermind · 2014-10-30T04:48:39.359Z · LW(p) · GW(p)
Also, as an aside to the tangent, tangent is a strange phrase, since it doesn't actually touch the main point. Should be polar line or somesuch.
"Tangent" is perfectly appropriate -- it touches a point somewhere on the curve of the main argument, and then diverges. There is something that made the association with the tangent.
And, to further overextend this metaphor, this implies that if someone's argument is rough enough (i.e. not differentiable), then it's not even possible to go off it on a tangent.
↑ comment by James_Miller · 2014-10-29T19:19:00.726Z · LW(p) · GW(p)
If this doesn't work, you should experiment with other frequencies of light. I have been using a heat lamp to play with near infrared light therapy, and use changing color light strips to expose myself to red light in the morning and night, and blue light in the early afternoon.
Replies from: CronoDAS
↑ comment by CronoDAS · 2014-10-29T22:55:28.648Z · LW(p) · GW(p)
Indeed - I don't know what kind of spectrum "white" LEDs give off, but I seem to have gotten the impression somewhere that most lightbulbs don't emit the same spectrum as the sun, which contributes to "sunlight deprivation" conditions such as SAD.
Replies from: Nornagest, buybuydandavis
↑ comment by Nornagest · 2014-10-29T23:30:31.254Z · LW(p) · GW(p)
Incandescent bulbs have a blackbody spectrum, usually somewhat redder than the sun's (which is also close to blackbody radiation, modulo a few absorption lines). White LEDs have a much spikier spectrum, usually with two to maybe a half-dozen peaks at different wavelengths, which come from the band gaps of their component diodes (a "single" white LED usually includes two to four) or from the fluorescent qualities of phosphor coatings on them. High-quality LED bulbs use a variety of methods to tune the locations of these peaks and their relative intensities such that they're visually close to sun or incandescent light; lower-quality ones tend to have them in weird places dictated by availability or ease of manufacture, which gives their light odd visual qualities and leads to poor color rendering. There are also tradeoffs involving the number of emitting diodes per unit. Information theory considerations mean that colors are never going to have quite the same fidelity under LED lights that they would under incandescent, but some can get damn close.
The same's true in varying degrees for most other non-incandescent lights. The most extreme example in common use is probably low-pressure sodium lamps (those intense yellow-orange streetlights), which emit almost exclusively at two very close wavelengths, 589.0 and 589.6 nm.
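If you want to see the "somewhat redder" claim quantitatively, Planck's law does it in a few lines. The temperatures below are standard textbook values for a filament (~2700 K) and the sun (~5800 K), not numbers from this thread:

```python
# Planck's law: compare a ~2700 K incandescent filament with the
# ~5800 K sun at a blue, a green, and a red wavelength.
from math import exp

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T)."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (exp(H * C / (wavelength_m * K * temp_k)) - 1)

for nm in (450, 550, 650):
    lam = nm * 1e-9
    ratio = planck(lam, 2700) / planck(lam, 5800)
    print(nm, "nm: filament/sun relative radiance", f"{ratio:.2e}")
# The ratio climbs roughly sevenfold from 450 nm to 650 nm: relative to
# sunlight, the filament is starved of blue, which is why it looks redder.
```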
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-30T00:26:30.548Z · LW(p) · GW(p)
The most extreme example in common use is probably low-pressure sodium lamps (those intense yellow-orange streetlights), which emit almost exclusively at two very close wavelengths, 589.0 and 589.6 nm.
Yep -- if you take photographs under these lights (e.g. night street scenes), you essentially get tinted monochrome photographs. Under an almost-single-wavelength source of light there are no colors, only illumination intensities.
↑ comment by buybuydandavis · 2014-10-29T23:44:29.910Z · LW(p) · GW(p)
And you don't get true full spectrum white out of LEDs either, as they're generally a combination of 3 different narrow band LEDs that look white to the eyes, but give a spiked spectrum instead of a full spectrum. There are phosphor coated LEDs that give broader spectrum, but still nothing like the sun's spectrum.
↑ comment by Vaniver · 2014-10-29T17:31:32.118Z · LW(p) · GW(p)
Empirically, despite there being huge numbers of people with SAD, I don't observe them flying to another continent for the winter
I learned from family who live in Alaska about "snowbirds," who live in the North during the summer and the South during the winter. I suspect this is primarily for weather reasons, but no doubt those with SAD are more likely to be snowbirds than those without.
Santiago does have 13 hours of sunlight to Austin or Berkeley's 11 or Juneau's 9 (now; the differences will increase as we approach the solstice), so the change is larger, but the other changes are larger as well- having to switch from speaking English outside the house to speaking Spanish outside the house every six months seems costly to me. (New Zealand solves that problem, but adds a time zone problem.)
trying to build their own high-powered lighting systems after they discover that the sad little 60-watt off-the-shelf light-boxes don't work sufficiently for them.
My off the shelf light lamp is 100W, and seems pretty dang bright to me- but I don't have SAD and used it as a soft alarm, so I can't speak to how effective or ineffective it is for SAD.
↑ comment by buybuydandavis · 2014-10-29T23:38:48.429Z · LW(p) · GW(p)
I recently confirmed in conversation that a certain very wealthy person
It really grates on me when people with more money than God don't put it to any particularly good use in their lives, especially when it's a health-related issue. Maybe this will encourage me to use the not-so-much I have to more effect.
Anyone try that Valkee for SAD? $300 for a couple of LEDs to stick in my ears grates as well. Supposedly having the training to wire up LEDs, but not the follow-through, doesn't help either.
And yes, fraud, scam, placebo controlled, blah blah blah. The proposed mechanism of photoreceptors distributed in the brain and elsewhere seemed interesting and worth checking out.
comment by RobinZ · 2014-10-30T18:50:24.603Z · LW(p) · GW(p)
True story: when I first heard the phrase 'heroic responsibility', it took me about five seconds and the question, "On TV Tropes, what definition fits this title?" to generate every detail of EY's definition save one. That detail was that this was supposed to be a good idea. As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs. And, as you point out, that's a recipe for everyone getting in everyone else's way and burning out within a year. And, as you point out, you don't actually know the doctor's job better than the doctors do.
In my opinion, what we should be advocating is the concept of 'subsidiarity' that Fred Clark blogs about on Slacktivist:
Responsibility — ethical obligation — is boundless and universal. All are responsible for all. No one is exempt.
Now, if that were all we had to say or all that we could know, we would likely be paralyzed, overwhelmed by an amorphous, undifferentiated ocean of need. We would be unable to respond effectively, specifically or appropriately to any particular dilemma. And we would come to feel powerless and incapable, thus becoming less likely to even try.
But that’s not all that we can know or all that we have to say.
We are all responsible, but we are not all responsible in the same way. We each and all have roles to play, but we do not all have the same role to play, and we do not each play the same role all the time.
Relationship, proximity, office, ability, means, calling and many other factors all shape our particular individual and differentiated responsibilities in any given case. In every given case. Circumstance and pure chance also play a role, sometimes a very large role, as when you alone are walking by the pond where the drowning stranger calls for help, or when you alone are walking on the road to Jericho when you encounter the stranger who has fallen among thieves.
Different circumstances and different relationships and different proximities entail different responsibilities, but no matter what those differences may be, all are always responsible. Sometimes we may be responsible to act or to give, to lift or to carry directly. Sometimes indirectly. Sometimes our responsibility may be extremely indirect — helping to create the context for the proper functioning of those institutions that, in turn, create the context that allows those most directly and immediately responsible to respond effectively. (Sometimes our indirect responsibility involves giving what we can to the Red Cross or other such organizations to help the victims of a disaster.)
The idea of heroic responsibility suggests that you should make an extraordinary effort to coerce the doctor into re-examining diagnoses whenever you think an error has been made. Bearing in mind that I have no relevant expertise, the idea of subsidiarity suggests to me that you, being in a better position to monitor a patient's symptoms than the doctor, should have the power to set wheels in motion when those symptoms do not fit the diagnosis ... which suggests a number of approaches to the situation, such as asking the doctor, "Can you give me more information on what I should expect to see or not see based on this diagnosis?"
(My first thought regarding your anecdote was that the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn't fit, but this article about the misdiagnosis of Ebola suggests revising the system to make it more likely for doctors to see the nurses' observations that would let them catch a misdiagnosis. You're in a better position to examine the policy question than I am.)
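A toy version of that "Bayesian probability data" idea, with made-up numbers, just to show the shape of the calculation such a system could surface:

```python
# Toy sketch: how strongly does an observed symptom argue against the
# charted diagnosis? All probabilities here are invented.
def posterior(prior, p_symptom_given_dx, p_symptom_given_other):
    """P(diagnosis | symptom) by Bayes' rule."""
    joint = prior * p_symptom_given_dx
    return joint / (joint + (1 - prior) * p_symptom_given_other)

# The chart says the diagnosis is 90% likely, but the nurse observes a
# symptom seen in 5% of such patients versus 40% of differently-ill ones:
print(f"{posterior(0.90, 0.05, 0.40):.2f}")  # 0.53 -- worth flagging the doctor
```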
I have to admit, I haven't been following the website for a long while - these days, I don't get a lot of value out of it - so what I'm saying that Fred Clark is saying might be what a lot of people already see as the meaning of the concept. But I think that it is valuable to emphasize that responsibility is shared, and sometimes the best thing you can do is help other people do the job. And that's not what Harry Potter-Evans-Verres does in the fanfic.
Replies from: Philip_W, Lumifer
↑ comment by Philip_W · 2014-11-02T17:54:40.495Z · LW(p) · GW(p)
As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs.
This would only be true if the hero had infinite resources and were actually able to redo everyone's work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn't insist on farming her own wheat for her bread (like she would if she didn't trust the supply chain), not because she doesn't have a (heroic) responsibility to make sure she stays alive to help patients, but because that very responsibility means she shouldn't waste her time and effort on unfounded paranoia to the detriment of everyone.
The main thing about heroic responsibility is that you don't say "you should have gotten it right". Instead you can only say "I was wrong to trust you this much": it's your failure, and whether it's a failure of the person you trusted really doesn't matter for the ethics of the thing.
Replies from: RobinZ
↑ comment by RobinZ · 2014-11-03T03:55:17.819Z · LW(p) · GW(p)
My referent for 'heroic responsibility' was HPMoR, in which Harry doesn't trust anyone to do a competent job - not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don't know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to be concerned about the continued good behavior of the bullies in question, and then tell her if those bullies attempted to evade her supervision. And, in the real world, that would be a perfect example of comparative advantage and opportunity cost in action: Harry is a lot better at high-stakes social and magical shenanigans relative to student discipline than McGonagall is, so for her to expend her resources on the latter while he expends his on the former would produce a better overall outcome by simple economics. (Not to mention that Harry should face far worse consequences if he screws up than McGonagall would - even if he has his status as Savior of the Wizarding World to protect him.) (Also, leaving aside whether his plans would actually work.)
I am advocating for people to take the initiative when they can do good without permission. Others in the thread have given good examples of this. But you can't solve all the problems you touch, and you'll drive yourself crazy if you blame yourself every time you "could have" prevented something that no-one should expect you to have. There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
Replies from: Kenny, Philip_W
↑ comment by Kenny · 2014-11-09T03:47:23.277Z · LW(p) · GW(p)
Did we read the same story? Harry has lots of evidence that McGonagall isn't in fact trustworthy and in large-part it's because she doesn't fully accept heroic responsibility and is too willing to uncritically delegate responsibility to others.
I also vaguely remember your point being addressed in HPMoR. I certainly wouldn't guess that Harry wouldn't understand that "there are no rational limits to heroic responsibility". It certainly matters for doing the most good as a creature that can't psychologically handle unlimited responsibility.
Replies from: RobinZ
↑ comment by RobinZ · 2014-11-13T20:00:14.059Z · LW(p) · GW(p)
Full disclosure: I stopped reading HPMoR in the middle of Chapter 53. When I was researching my comment, I looked at the immediate context of the initial definition of "heroic responsibility" and reviewed Harry's rationality test of McGonagall in Chapter 6.
I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved. Based on McGonagall's characterization in the part of the story I read, barring some drastic idiot-balling since I quit, she's willing to take Harry seriously enough to act based on the information he provides; unless the bullies are somehow so devious as to be capable of evading both Harry and McGonagall's surveillance - and note that, with McGonagall taking point, they wouldn't know that they need to hide from Harry - this plan would have a reasonable chance of working with much less effort from Harry (and much less probability of misfiring) than any finger-snapping shenanigans. Not to mention that, if Harry read the situation wrong, this would give him a chance to be set straight. Not to mention that, if McGonagall makes a serious effort to crack down on bullying, the effect is likely to persist for far longer than Harry's term.
On the subject of psychology: really, what made me so emphatic in my denouncing "heroic responsibility" was [edit: my awareness of] the large percentage of adults (~10-18%) subject to anxiety disorders of one kind or another - including me. One of the most difficult problems for such people is how to restrain their instinct to blame themselves - how to avoid blaming themselves for events out of their control. When Harry says, "whatever happens, no matter what, it’s always your fault" to such persons, he is saying, "blame yourself for everything" ... and that makes his suggestion completely useless to guide their behavior.
Replies from: wedrifid, Kenny
↑ comment by wedrifid · 2014-11-13T21:08:54.086Z · LW(p) · GW(p)
I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved.
Your three-step plan seems much more effective than Harry's shenanigans and also serves as an excellent example of heroic responsibility. Normal 'responsibility' in that situation is to do nothing, or at most take step one.
Heroic responsibility doesn't mean do it yourself through personal power and awesomeness. It means using whatever resources are available to cause the desired thing to occur (unless the cost of doing so is deemed too high relative to the benefit). Institutions, norms and powerful people are valuable resources.
Replies from: RobinZ
↑ comment by RobinZ · 2014-11-13T23:47:48.755Z · LW(p) · GW(p)
I'm realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective, but telling me that I am responsible for x doesn't tell me that I am allowed to delegate x to someone else, and - especially in contexts like Harry's decision (and Swimmer's decision in the OP) - doesn't tell me whether "those nominally responsible can't do x" or "those nominally responsible don't know that they should do x". Harry's idea of heroic responsibility led him to conflate these states of affairs re: McGonagall, and the point of advice is to make people do better, not to win philosophy arguments.
When I came up with the three-point plan I gave to you, I did not do so by asking, "what would be the best way to stop this bullying?" I did so by asking myself, "if McGonagall is the person best placed to stop bullying, but official school action might only drive bullying underground without stopping it, what should I do?" I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible - better placed, better trained, better equipped, etc. - than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
(Actually, thinking about localism suggested a modification to my Step 1: brief the prefects on the situation in addition to briefing McGonagall. That said, I don't know if that would be a good idea in this case - again, I stopped reading twenty chapters before.)
Replies from: dxu, wedrifid
↑ comment by dxu · 2014-11-14T05:32:34.882Z · LW(p) · GW(p)
I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible - better placed, better trained, better equipped, etc. - than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
I agree with all of this except the part where you say that heroic responsibility does not include this. As wedrifid noted in the grandparent of this comment, heroic responsibility means using the resources available in order to achieve the desired result. In the context of HPMoR, Harry is responding to this remark by Hermione:
"I would've done the responsible thing and told Professor McGonagall and let her take care of it," Hermione said promptly.
Again, as wedrifid noted above, this is step one and only step one. Taking that step alone, however, is not heroic responsibility. I agree that Harry's method of dealing with the situation was far from optimal; however, his general point I agree with completely. Here is his response:
"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what."
Notice that nowhere in this definition is the notion of running to an authority figure precluded! Harry himself didn't consider it because he's used to occupying the mindset that "adults are useless". But if we ignore what Harry actually did and just look at what he said, I'm not seeing anything here that disagrees with anything you said. Perhaps I'm missing something. If so, could you elaborate?
Replies from: RobinZ
↑ comment by RobinZ · 2014-11-14T08:25:11.810Z · LW(p) · GW(p)
Neither Hermione nor Harry dispute that they have a responsibility to protect the victims of bullying. There may be people who would have denied that, but none of them are involved in the conversation. What they are arguing over is what their responsibility requires of them, not the existence of a responsibility. In other words, they are arguing over what to do.
Human beings are not perfect Bayesian calculators. When you present a human being with criteria for success, they do not proceed to optimize perfectly over the universe of all possible strategies. The task "write a poem" is less constrained than the task "write an Elizabethan sonnet", and in all likelihood the best poem is not an Elizabethan sonnet, but that doesn't mean that you will get a better poem out of a sixth-grader by asking for any poem than by giving them something to work with. The passage from Zen and the Art of Motorcycle Maintenance Eliezer Yudkowsky quoted back during the Overcoming Bias days, "Original Seeing", gave an example of this: the student couldn't think of anything to say in a five-hundred word essay about the United States, Bozeman, or the main street of Bozeman, but produced a five-thousand word essay about the front facade of the Opera House. Therefore, when I evaluate "heroic responsibility", I do not evaluate it as a proposition which is either true or false, but as a meme which either produces superior or inferior results - I judge it by instrumental, not epistemic, standards.
Looking at the example in the fanfic and the example in the OP, as a means to inspire superior strategic behavior, it sucks. It tells people to work harder, not smarter. It tells people to fix things, but it doesn't tell them how to fix things - and if you tell a human being (as opposed to a perfect Bayesian calculator) to fix something, it sounds like you're telling them to fix it themselves because that is what it sounds like from a literary perspective. "You've got to get the job done no matter what" is not what the hero says when they want people to vote in the next school board election - it's what the hero says when they want people to run for the school board in the next election, or to protest for fifteen days straight outside the meeting place of the school board to pressure them into changing their behavior, or something else on that level of commitment. And if you want people to make optimal decisions, you need to give them better guidance than that to allocating their resources.
Replies from: dxu, Kenny
↑ comment by dxu · 2014-11-14T20:23:42.006Z · LW(p) · GW(p)
It tells people to work harder, not smarter.
That's the part I'm not getting. All Harry is saying is that you should consider yourself responsible for the actions you take, and that delegating that responsibility to someone else isn't a good idea. Delegating responsibility, however, is not the same as delegating tasks. Delegating a particular task to someone else might well be the correct action in some contexts, but you're not supposed to use that as an excuse to say, "Because I delegated the task of handling this situation to someone else, I am no longer responsible for the outcome of this situation." This advice doesn't tell people how to fix things, true, but that's not the point--it tells people how to get into the right mindset to fix things. In other words, it's not object-level advice; it's meta-level advice, and obviously if you treat it as the former instead of the latter you're going to come to the conclusion that it sucks.
Sometimes, to solve a problem, you have to work harder. Other times, you have to work smarter. Sometimes, you have to do both. "Heroic responsibility" isn't saying anything that contradicts that. In the context of the conversation in HPMoR, I do not agree with either Hermione or Harry; both of them are overlooking a lot of things. But those are object-level considerations. Once you look at the bigger picture--the level on which Harry's advice about heroic responsibility actually applies--I don't think you'll find him saying anything that runs counter to what you're saying. If anything, I'd say he's actually agreeing with you!
Humans are not perfectly rational agents--far from it. System 1 often takes precedence over System 2. Sometimes, to get people going, you need to re-frame the situation in a way that makes both systems "get it". The virtue of "heroic responsibility", i.e. "no matter what happens, you should consider yourself responsible", seems like a good way to get that across.
Replies from: RobinZ
↑ comment by RobinZ · 2014-11-14T21:28:13.022Z · LW(p) · GW(p)
s/work harder, not smarter/get more work done, not how to get more work done/
This advice doesn't tell people how to fix things, true, but that's not the point--it tells people how to get into the right mindset to fix things.
Why do you believe this to be true?
Replies from: dxu
↑ comment by dxu · 2014-11-15T03:34:56.395Z · LW(p) · GW(p)
Why do you believe this to be true?
That's an interesting question. I'll try to answer it here.
"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what."
This seems to imply that no matter what happens, you should hold yourself responsible in the end. If you take a randomly selected person, which of the following two cases do you think will be more likely to cause that person to think really hard about how to solve a problem?
- They are told to solve the problem.
- They are told that they must solve the problem, and if they fail for any reason, it's their fault.
Personally, I would find the second case far more pressing and far more likely to cause me to actually think, rather than just take the minimum number of steps required of me in order to fulfill the "role" of a problem-solver, and I suspect that this would be true of many other people here as well. Certainly I would imagine it's true of many effective altruists, for instance. It's possible I'm committing a typical mind fallacy here, but I don't think so.
On the other hand, you yourself have said that your attitude toward this whole thing is heavily driven by the fact that you have an anxiety disorder, and if that's the case, then I agree that blaming yourself is entirely the wrong way to go about doing things. That being said, the whole point of having something called "heroic responsibility" is to get people to actually put in some effort as opposed to just playing the role of someone who's perceived as putting in effort. If you are able to do that without resorting to holding yourself responsible for the outcomes of situations, then by all means continue to do so. However, I would be hesitant to label advice intended to motivate and galvanize as "useless", especially when using evidence taken from a subset of all people (those with anxiety disorders) to make a general claim (that the notion of "heroic responsibility" is useless).
Replies from: RobinZ
↑ comment by RobinZ · 2014-11-17T19:19:42.396Z · LW(p) · GW(p)
I think I see what you're getting at. If I understand you rightly, what "heroic responsibility" is intended to affect is the behavior of people such as [trigger warning: child abuse, rape] Mike McQueary during the Penn State child sex abuse scandal, who stumbled upon Sandusky in the act, reported it to his superiors (and, possibly, the police), and failed to take further action when nothing significant came of it. [/trigger warning] McQueary followed the 'proper' procedure, but he should not have relied upon it being sufficient to do the job. He had sufficient firsthand evidence to justify much more dramatic action than what he did.
Given that, I can see why you object to my "useless". But when I consider the case above, I think what McQueary was lacking was the same thing that Hermione was lacking in HPMoR: a sense of when the system might fail.
Most of the time, it's better to trust the system than it is to trust your ability to outthink the system. The system usually has access to much, much more information than you do; the system usually has people with much, much better training than you have; the system usually has resources that are much, much more abundant than you can draw on. In the vast majority of situations I would expect McQueary or Hermione to encounter - defective equipment, scheduling conflicts, truancy, etc. - I think they would do far worse by taking matters into their own hands than by calling upon the system to handle it. In all likelihood, prior to the events in question, their experiences all supported the idea that the system is sound. So what they needed to know was not that they were somehow more responsible to those in the line of fire than they previously realized, but that in these particular cases they should not trust the system. Both of them had access to enough data to draw that conclusion*, but they did not.
If they had, you would not need to tell them that they had a responsibility. Any decent human being would feel that immediately. What they needed was the sense that the circumstances were extraordinary and awareness of the extraordinary actions that they could take. And if you want to do better than chance at sensing extraordinary circumstances when they really are extraordinary and better than chance at planning extraordinary action that is effective, determination is nice, but preparation and education are a whole lot better.
* The reasons differ: McQueary shouldn't have trusted it because:
- One cannot rely on any organization to act against any of its members unless that member is either low-status or has acted against the preferences of its leadership.
- In some situations, one's perceptions - even speculative, gut-feeling, this-feels-not-right perceptions - produce sufficiently reliable Bayesian evidence to overwhelm the combined force of a strong negative prior on whether an event could happen and the absence of supporting evidence from others in the group that said event could happen.
...while Hermione shouldn't have trusted it because:
- Past students like James Potter got away with much because they were well-regarded.
- Present employees like Snape got away with much because they were an established part of the system.
↑ comment by Kenny · 2014-11-14T19:46:04.798Z · LW(p) · GW(p)
Again, you're right about the advice being poor – in the way you mention – but I also think it's great advice if you consider its target: the idea that the consequences are irrelevant if you've done the 'right' thing. If you've done the 'right' thing but the consequences are still bad, then you should probably reconsider what you're doing. When aiming at this target, 'heroic responsibility' is just the additional responsibility of considering whether the 'right' thing to do is really right (i.e. will really work).
...
And now that I'm thinking about this heroic responsibility idea again, I feel a little more strongly that it's a trap – it is. Nothing can save you from potential devastation at the loss of something or someone important to you. Simply shouldering responsibility for everything you care about won't actually help. It's definitely a practical necessity that groups of people carefully divide and delegate important responsibilities. But even that's not enough! Nothing's enough. So we can't and shouldn't be content with the responsibilities we're expected to meet.
I subscribe to the idea that virtue ethics is how humans should generally implement good (ha) consequentialist ethics. But we can't escape the fact that no amount of Virtue is a complete and perfect means of achieving all our desired ends! We're responsible for which virtues we hold as much as we are for learning and practicing them.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-14T21:35:10.029Z · LW(p) · GW(p)
You are analyzing "heroic responsibility" as a philosophical construct. I am analyzing it as [an ideological mantra]. Considering the story, there's no reason for Harry to have meant it as the former, given that it is entirely redundant with the pre-existing philosophical construct of consequentialism, and every reason for him to have meant it as the latter, given that it explains why he must act differently than Hermione proposes.
[Note: the phrase "an ideological mantra" appears here because I'm not sure what phrase should appear here. Let me know if what I mean requires elaboration.]
Replies from: Kenny↑ comment by Kenny · 2014-11-16T04:00:06.936Z · LW(p) · GW(p)
I think you might be over-analyzing the story – which is fine, actually, as I'm enjoying doing the same.
I have no evidence that Eliezer considered it so, but I just think Harry was explaining consequentialism to Hermione, without introducing it as a term.
I'm unsure if it's connected in any obvious way, but to me the quoted conversation between Harry and Hermione is reminiscent of other conversations between the two characters about heroism generally. In that context, it's obviously a poor 'ideological mantra', as it was targeted towards Hermione. And yet, given what I remember of the story, it worked pretty well for her.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-17T17:14:22.156Z · LW(p) · GW(p)
I confess, it would make sense to me if Harry was unfamiliar with metaethics and his speech about "heroic responsibility" was an example of him reinventing the idea. If that is the case, it would explain why his presentation is as sloppy as it is.
↑ comment by wedrifid · 2014-11-16T04:18:24.924Z · LW(p) · GW(p)
I'm realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective,
Surprisingly, so is mine, yet we've arrived at entirely different philosophical conclusions. Perfectionistic, intelligent idealists with visceral aversions to injustice walk a fine line when it comes to managing anxiety and the potential for either burnout or helpless existential despair. To remain sane and effectively harness my passion and energy I had to learn a few critical lessons:
- Over-responsibility is not 'responsible'. It is right there next to 'completely negligent' inside the class 'irresponsible'.
- Trusting that if you do what the proximate social institution suggests you 'should' do then it will take care of problems is absurd. Those cursed with weaker-than-normal hypocrisy skills, or otherwise lacking the privilege to maintain a sheltered existence, will quickly become distressed by constant disappointment.
- For all that the local social institutions fall drastically short of ideals - and even fall short of what we are supposed to pretend to believe of them - they are still what happens to be present in the universe that is and so are a relevant source of power. Finding ways to get what you want (for yourself or others) by using the system is a highly useful skill.
- You do not (necessarily) need to fix the system in order to fix a problem that is important to you. You also don't (necessarily) need to subvert it.
'Hermione' style 'responsibility' would be a recipe for insanity if I chose to keep it. I had to abandon it at about the same age she is in the story. It is based on premises that just don't hold in this universe.
but telling me that I am responsible for x doesn't tell me that I am allowed to delegate x to someone else
'Responsibility' of the kind you can tell others they have is almost always fundamentally different in kind from the word 'responsibility' as used in 'heroic responsibility'. It's a difference that results in frequent accidental equivocation and miscommunication across inferential distances. This is one rather large problem with 'heroic responsibility' as a jargon term. Those who have something to learn about the concept are unlikely to, because 'responsibility' comes riddled with normative social power connotations.
, and - especially in contexts like Harry's decision (and Swimmer's decision in the OP) - doesn't tell me whether "those nominally responsible can't do x" or "those nominally responsible don't know that they should do x".
That's technically true. Heroic responsibility is completely orthogonal to either of those concerns.
I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible - better placed, better trained, better equipped, etc. - than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
Expected value maximisation isn't for everyone. Without supplementing it with an awfully well-developed epistemology, people will sometimes be worse off than with just following whichever list of 'shoulds' they have been prescribed.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-17T19:31:33.611Z · LW(p) · GW(p)
I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."
Replies from: wedrifid↑ comment by wedrifid · 2014-11-18T03:48:43.651Z · LW(p) · GW(p)
I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."
Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?
Replies from: RobinZ↑ comment by RobinZ · 2014-11-18T05:48:17.998Z · LW(p) · GW(p)
Is the long form also unclear? If so, could you elaborate on why it doesn't make sense?
↑ comment by Kenny · 2014-11-14T19:26:19.252Z · LW(p) · GW(p)
Your mention of anxiety (disorders) reminds me of Yvain's general point that lots of advice is really terrible for at least some people.
As I read HPMoR (and I've read all of it), a lot of the reason why Harry specifically distrusts the relevant authority figures is that they are routinely surprised by the various horrible events that happen and seem unwilling to accept responsibility for anything they don't already expect. McGonagall definitely improves on this point in the story, though.
In the story, the advice Harry gives Hermione seems appropriate. Your example would be much better for anyone inclined to anxiety about satisfying arbitrary constraints (i.e. being responsible for arbitrary outcomes) – and probably for anyone, period, if for no other reason than it's easier to edit an existing idea than generate an entirely new one.
@wedrifid's correct that your plan is better than Harry's in the story, but I think Harry's point – and it's one I agree with – is that even having a plan, and following it, doesn't absolve one – in one's own eyes, if no one else's – of coming up with a better plan, or improvising, or delegating some or all of the plan, if that's what's needed to stop kids from being bullied or an evil villain from destroying the world (or whatever).
Another way to consider the conversation in the story is that Hermione initially represents virtue ethics:
"I would've done the responsible thing and told Professor McGonagall and let her take care of it," Hermione said promptly.
Harry counters with a rendition of consequentialist ethics.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-14T22:24:25.328Z · LW(p) · GW(p)
If I believed you to be a virtue ethicist, I might say that you must be mindful of your audience when dispensing advice. If I believed you to be a deontologist, I might say that you should tailor your advice to the needs of the listener. Believing you to be a consequentialist, I will say that advice is only good if it produces better outcomes than the alternatives.
Of course, you know this. So why do you argue that Harry's speech about heroic responsibility is good advice?
Replies from: Kenny↑ comment by Philip_W · 2014-11-03T19:48:52.917Z · LW(p) · GW(p)
HPJEV isn't supposed to be a perfect executor of his own advice and statements. I would say that it's not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails in his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HPJEV would feel appropriately bad about his choices if he came to that realisation.
you'll drive yourself crazy if you blame yourself every time you "could have" prevented something that no-one should expect you to have.
Depending on what you mean by "blame", I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. By heroic responsibility, you don't have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.
It is impossible to fulfill the requirements of heroic responsibility.
Where do you get the idea of "requirements" from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? IMO no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the wounded third sheep to die so that sheep four through ten don't get eaten while the flock travels more slowly.
It is a basic fact of utilitarianism that you can't score a perfect win. Even discounting the universe which is legitimately out of your control, you will screw up sometimes as point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have. Heroic responsibility is the emotional equivalent of this fact.
What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? To keep it heroically themed, I think you're better off with courage, wisdom, and power.
Replies from: wedrifid↑ comment by wedrifid · 2014-11-04T14:05:54.934Z · LW(p) · GW(p)
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part?
Yes, I do. Most other humans do, too, and it's a sufficiently difficult and easy-to-neglect skill that it is well worth preserving as 'wisdom'.
Non-human intelligences will not likely have 'serenity' or 'acceptance' but will need some similar form of the generalised trait of not wasting excessive amounts of computational resources exploring parts of solution space that have insufficient probability of significant improvement.
Replies from: Philip_W↑ comment by Philip_W · 2014-11-04T18:41:25.439Z · LW(p) · GW(p)
In that case, I'm confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruent with it, and why it doesn't just fall under "courage" and "wisdom" (as the emotional fortitude to withstand the inevitable imperfection/partial failure and accurate beliefs respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.
Replies from: wedrifid↑ comment by wedrifid · 2014-11-05T03:09:41.778Z · LW(p) · GW(p)
I'm confused about what serenity/acceptance entails
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult to influence circumstance.
why you seem to believe heroic responsibility to be incongruent with it
I don't. I suspect you are confusing me with someone else.
Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility,
Yes. Yet for some reason merely seeing an equation and believing it must be maximised is an insufficient guide to optimally managing the human machinery we inhabit. We have to learn other things - including things which can be derived from the equation - in detail and practice them repetitively.
and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.
The Virtue of Narrowness may help you. I have different names for "DDR RAM" and "a replacement battery for my Sony Z2 Android" even though I can see how they both relate to computers.
Replies from: Philip_W↑ comment by Philip_W · 2014-11-05T18:16:28.891Z · LW(p) · GW(p)
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.
The Virtue of Narrowness may help you. I have different names for "DDR RAM" and "a replacement battery for my Sony Z2 Android" even though I can see how they both relate to computers.
For me at least, saying something "can't be changed" roughly means modelling something as P(change)=0. This may be fine as a local heuristic when there are significantly larger expected utilities on the line to work with, but without a subject of comparison it seems inappropriate, and I would blame it for certain error modes, like ignoring theories because they have been labeled impossible at some point.
To approach it another way, I would be fine with just adding adjectives to "extremely ridiculously [...] absurdly unfathomably unlikely" to satisfy the requirements of narrowness, rather than just saying something can't be done.
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult to influence circumstance.
I would call this "level-headedness". By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help. My dataset luckily isn't large, but I have been able to get by on "numb" pretty well in the few relevant cases.
Replies from: wedrifid↑ comment by wedrifid · 2014-11-06T02:52:06.234Z · LW(p) · GW(p)
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
I agree. I downvoted RobinZ's comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread. In contrast I fundamentally agree with most of what you have said on this thread so the disagreement on one conclusion regarding a principle of rationality and psychology is more potentially interesting.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
I agree with your rejection of the whole paragraph. My objection seems to be directed at the confusion about heroic (and arguably mundane) responsibility rather than the serenity wisdom heuristic.
For me at least, saying something "can't be changed" roughly means modelling something as P(change)=0.
I can empathize with being uncomfortable with colloquial expressions which deviate from literal meaning. I can also see some value in making a stand against that kind of misuse due to the way such framing can influence our thinking. Overconfident or premature ruling out of possibilities is something humans tend to be biased towards.
I would call this "level-headedness".
Whatever you call it, it sounds like you have the necessary heuristics in place to avoid the failure modes the wisdom quote is used to prevent (avoiding over-responsibility and avoiding pointless worry loops).
By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help.
The phrasing "the X to" intuitively brings to my mind a relative state rather than an absolute one. That is, getting to some Zen endpoint state of inner peace or tranquillity is not needed, but there are often times when moving towards that state to a sufficient degree will allow much more effective action. I.e. it translates to "whatever minimum amount of acceptance of reality and calmness is needed to allow me to correctly account for opportunity costs and decide according to the bigger picture".
My dataset luckily isn't large, but I have been able to get by on "numb" pretty well in the few relevant cases.
That can work. If used too much it sometimes seems to correlate with developing pesky emotional associations (like 'Ugh fields') with related stimuli, but that obviously depends on which emotional cognitive processes result in the 'numbness' and so forth.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-14T00:11:29.283Z · LW(p) · GW(p)
I downvoted RobinZ's comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread.
I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the -1 had "I disagreed with Eliezer Yudkowsky and he has rabid fans" orders of magnitude more likely than "I made a category error reading the fanfic and now we're talking past each other", and a few words from you could have reversed that ratio.
Replies from: wedrifid↑ comment by wedrifid · 2014-11-16T03:21:02.415Z · LW(p) · GW(p)
I would rather you tell me that I am misunderstanding something than downvote silently.
Thank you for your feedback. I usually ration my explicit disagreement with people on the internet, but your replies prompt me to add "RobinZ" to the list of people worth actively engaging with.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-17T17:02:23.046Z · LW(p) · GW(p)
...huh. I'm glad to have been of service, but that's not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally - "You keep using that word. I do not think it means what you think it means" is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying:
This is a waste of time. You keep claiming that "heroic responsibility" says this or "heroic responsibility" demands that, but you're fundamentally mistaken about what heroic responsibility is and you can't seem to understand anything we say to correct you. I'm downvoting the rest of this conversation.
...would have been more like what I wanted to encourage.
Replies from: wedrifid↑ comment by wedrifid · 2014-11-18T03:34:11.264Z · LW(p) · GW(p)
I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally
I fundamentally disagree. It is better for misleading comments to have lower votes than insightful ones; this helps limit the epistemic damage caused to third parties. Replying to every incorrect claim with detailed arguments is not viable, and not my responsibility either - heroic or conventional - even though my comment history suggests that for a few years I made a valiant effort.
Silent downvoting is often the most time-efficient form of positive influence available, and I endorse it as appropriate, productive and typically wiser than trying to argue all the time.
Replies from: RobinZ↑ comment by RobinZ · 2014-11-18T05:46:35.722Z · LW(p) · GW(p)
I didn't propose that you should engage in detailed arguments with anyone - not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave.
Another example of a sufficiently-elaborate downvote explanation: "I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should." One sentence, long enough, no further argument required.
Replies from: wedrifid↑ comment by Lumifer · 2014-10-30T19:33:22.799Z · LW(p) · GW(p)
the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn't fit
Medical expert systems are getting pretty good; I don't see why you wouldn't just jump straight to an auto-updated list of most likely diagnoses (generated by a narrow AI) given the current list of symptoms and test results.
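A minimal sketch of the kind of ranker being described here – naive Bayes over findings – might look like this; every condition name, prior, and likelihood below is invented purely for illustration:

```python
# Minimal sketch of a symptom-based diagnosis ranker (naive Bayes).
# All conditions, priors, and likelihoods are invented for illustration only.

PRIORS = {"common cold": 0.30, "influenza": 0.10, "pneumonia": 0.02}

# P(symptom present | condition); symptoms assumed conditionally independent.
LIKELIHOODS = {
    "common cold": {"cough": 0.80, "fever": 0.20, "dyspnea": 0.05},
    "influenza":   {"cough": 0.70, "fever": 0.90, "dyspnea": 0.10},
    "pneumonia":   {"cough": 0.90, "fever": 0.80, "dyspnea": 0.70},
}

def rank_diagnoses(observed):
    """Rank conditions by posterior probability given observed findings.

    `observed` maps symptom -> True/False; an absent symptom is evidence too.
    """
    scores = {}
    for condition, prior in PRIORS.items():
        p = prior
        for symptom, present in observed.items():
            like = LIKELIHOODS[condition].get(symptom, 0.5)
            p *= like if present else (1.0 - like)
        scores[condition] = p
    total = sum(scores.values())  # normalise over the conditions we model
    return sorted(((c, s / total) for c, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

print(rank_diagnoses({"cough": True, "fever": True, "dyspnea": True}))
```

With these made-up numbers, the rare-but-severe condition tops the list once all three findings are present, which is exactly the "doesn't fit the common diagnosis" signal the quoted suggestion asks for.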
Replies from: hyporational, RobinZ↑ comment by hyporational · 2014-11-07T03:37:49.989Z · LW(p) · GW(p)
Most patient cases are so easy and common that filling forms for an AI would greatly slow the system down. AI could be useful when the diagnosis isn't clear, however. A sufficiently smart AI could pick up the relevant data from the notes, but usually the picture that the diagnostician has in their mind is much more complete than any notes they make.
Note that I'm looking at this from a perspective where implementing theoretically smart systems has usually done nothing but increased my workload.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-07T17:25:35.188Z · LW(p) · GW(p)
Most patient cases are so easy and common that filling forms for an AI would greatly slow the system down.
I am assuming you're not filling out any forms specially for the AI -- just that the record-keeping system is computerized and the AI has access to it. In trivial cases the AI won't have much data (e.g. no fever, normal blood pressure, complains of a runny nose and cough, that's it) and its diagnoses will be low-credence, but that's fine; you as a doctor won't need its assistance in those cases.
Replies from: hyporational↑ comment by hyporational · 2014-11-07T17:55:29.702Z · LW(p) · GW(p)
The AI would need to know natural language to be of any use or else it will miss most of the relevant data. I suppose Watson is pretty close to that, and I have read that it's being tested in some hospitals. I wonder how this is implemented. I suspect doctors carry a lot more data in their heads than is readily apparent and much of this data will never make it to their notes and thus to the computerized records.
Taking a history, evaluating its reliability, and using the senses to observe the patients are things machines won't be able to do for quite some time. On top of this I roughly know hundreds of patients now that I will see time and again and this helps immensely when judging their most acute presentations. By this I don't mean I know them as lists of symptoms; I know their personalities too, and how this affects how they tell their stories and how seriously they take their symptoms, from minor complaints to major problems. I could never take the approach of jumping from hospital to hospital now that I've experienced this first hand.
Replies from: Vaniver, Lumifer↑ comment by Vaniver · 2014-11-07T18:54:44.646Z · LW(p) · GW(p)
The AI would need to know natural language to be of any use or else it will miss most of the relevant data. I suppose Watson is pretty close to that, and I have read that it's being tested in some hospitals. I wonder how this is implemented. I suspect doctors carry a lot more data in their heads than is readily apparent and much of this data will never make it to their notes and thus to the computerized records.
This is the reason Watson is a game-changer, despite expert prediction systems (using linear regression!) performing at the level of expert humans for ~50 years. Doctors may carry a lot of information in their heads, but I've yet to meet a person that's able to mentally invert matrices of non-trivial size, which helps quite a bit with determining the underlying structure of the data and how best to use it.
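For a sense of how little machinery those classic expert prediction systems needed, here is a minimal ordinary-least-squares sketch – the matrix arithmetic alluded to above. The features, data, and outcomes are all invented for illustration:

```python
import numpy as np

# Toy "expert prediction system": linear regression over a handful of coded
# observations, in the spirit of the classic clinical-judgement studies.
X = np.array([      # columns: age (decades), fever, cough, dyspnea
    [5.0, 1, 1, 0],
    [7.0, 1, 1, 1],
    [3.0, 0, 1, 0],
    [8.0, 1, 0, 1],
    [4.0, 0, 0, 0],
    [6.0, 0, 1, 1],
])
y = np.array([0.2, 0.9, 0.1, 0.8, 0.05, 0.6])  # coded outcome severity

X1 = np.hstack([np.ones((len(X), 1)), X])  # prepend an intercept column
# Classically one solves the normal equations (X'X) beta = X'y -- the matrix
# inversion no human does in their head; lstsq is the numerically stable route.
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("fitted weights:", beta)
print("prediction for a new patient:", np.array([1, 6.0, 1, 1, 1]) @ beta)
```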
Taking a history, evaluating its reliability, and using the senses to observe the patients are things machines won't be able to do for quite some time.
I think machines have several comparative advantages here. An AI with basic conversational functions can take a history, and is better at evaluating some parts of the reliability and worse at others. It can compare with 'other physicians' more easily, or check public records, but probably can't determine whether or not it's a coherent narrative as easily ("What is Toronto?"). A webcam can measure pulse rate just by looking, and so I suspect it'll be about as good at detecting deflection and lying as the average doctor. (I don't remember seeing doctors as being particularly good at lie-detection, but it's been a while since I've read any of the lie-detection literature.)
I could never take the approach of jumping from hospital to hospital now that I've experienced this first hand.
Note that if the AI is sufficiently broadly used (here I'm imagining, say, the NHS in the UK using just one) then everyone will always have access to a doctor that's known them as long as they've been in the system.
Replies from: hyporational↑ comment by hyporational · 2014-11-07T19:21:36.895Z · LW(p) · GW(p)
despite expert prediction systems (using linear regression!) performing at the level of expert humans for ~50 years.
Is this because using them is incredibly slow or something else?
A webcam can measure pulse rate just by looking, and so I suspect it'll be about as good at detecting deflection and lying as the average doctor. (I don't remember seeing doctors as being particularly good at lie-detection, but it's been a while since I've read any of the lie-detection literature.)
Lies make no sense medically, or make too much sense. Once I've spotted a few lies, many of them fit a stereotypical pattern many patients use even if there aren't any other clues. I don't need to rely on body language much.
People also misremember things, or have a helpful relative misremember things for them, or home care providers feeding their clueless preliminary diagnoses for these people. People who don't remember fill in the gap with something they think is plausible. Some people are also psychotic or don't even remember what year it is or why they came in the first place. Some people treat every little ache like it's the end of the world and some don't seem to care if their leg's missing.
I think even an independent AI could make up for many of its faults simply by being more accurate at interpreting the records and current test results.
I hope that when an AI can do my job I don't need a job anymore :)
Replies from: Vaniver↑ comment by Vaniver · 2014-11-07T21:44:59.896Z · LW(p) · GW(p)
Is this because using them is incredibly slow or something else?
My understanding is that the ~4 measurements the system would use as inputs were typically measured by the doctor, and by the time the doctor had collected the data they had simultaneously come up with their own diagnosis. Typing the observations into the computer to get the same level of accuracy (or a few extra percentage points) rarely seemed worth it, and turning the doctor from a diagnostician into a tech was, to put it lightly, not popular with doctors. :P
There are other arguments which would take a long time to go into. One is "but what about X?", where the linear regression wouldn't take into account some other variable that the human could take into account, and so the human would want an override option. But, as one might expect, the only way for the regression to outperform the human is for the regression to be right more often than not when the two of them disagree, and humans are unfortunately not very good at determining whether or not the case in front of them is a special case where an override will increase accuracy or a normal case where an override will decrease accuracy. Here's probably the best place to start if interested in reading more.
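The arithmetic behind that claim can be made explicit with a toy agree/disagree decomposition (all numbers invented):

```python
# On cases where the model and the human agree, their accuracies are equal by
# construction, so the entire accuracy gap comes from the disagreements; for a
# binary call, exactly one of the two is right on each disagreement.

agree_rate = 0.80                # fraction of cases with the same answer
model_wins_disagreements = 0.60  # fraction of disagreements the model gets right

gap = (1 - agree_rate) * (model_wins_disagreements - (1 - model_wins_disagreements))
print(f"model beats human overall by {gap:+.1%}")  # +4.0% with these numbers
```

So an override policy only helps if the human is right on more than half of the disagreements, which is precisely what the literature suggests humans are bad at judging case by case.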
↑ comment by Lumifer · 2014-11-07T18:03:09.081Z · LW(p) · GW(p)
The AI would need to know natural language
A rather limited subset of natural language; I think it's a surmountable problem.
I suspect doctors carry a lot more data in their heads than is readily apparent ... I roughly know hundreds of patients now that I will see time and again and this helps immensely when judging their most acute presentations.
All true, which is why I think a well-designed diagnostic AI will work in partnership with a doctor instead of replacing him.
Replies from: hyporational↑ comment by hyporational · 2014-11-07T18:17:53.806Z · LW(p) · GW(p)
I agree with you, but I fear that makes for a boring conversation :)
The language is already relatively standardized, and I suppose you could standardize it more to make it easier for the AI. I suspect any attempt to mold the system for an AI would meet heavy resistance, however.
↑ comment by RobinZ · 2014-10-30T19:57:53.664Z · LW(p) · GW(p)
Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.
Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many 'pilots' don't know how to fly a plane. A system which automates almost all diagnoses would do that.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-30T20:05:34.703Z · LW(p) · GW(p)
I am not saying this narrow AI should be given direct control of IV drips :-/
I am saying that a doctor, when looking at a patient's chart, should be able to see what the expert system considers to be the most likely diagnoses and then the doctor can accept one, or ignore them all, or order more tests, or do whatever she wants.
A system which automates almost all diagnoses would do that.
No, I don't think so because even if you rely on an automated diagnosis you still have to treat the patient.
Replies from: RobinZ↑ comment by RobinZ · 2014-10-30T20:34:23.476Z · LW(p) · GW(p)
Even assuming that the machine would not be modified to give treatment recommendations, that wouldn't change the effect I'm concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they'll stop remembering how to diagnose disease and instead remember how to use the machine. It's called "transactive memory".
I'm not arguing against a machine with a button on it that says, "Search for conditions matching recorded symptoms". I'm not arguing against a machine that has automated alerts about certain low-probability risks - if there was a box that noted the conjunction of "from Liberia" and "temperature spiking to 103 Fahrenheit" in Thomas Eric Duncan during his first hospital visit, there'd probably only be one confirmed case of ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, "No diagnosis found".
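A sketch of that kind of conjunction alert, assuming a simple rule table over chart fields (the single rule, the field names, and the thresholds are all invented):

```python
# Minimal rule-based alert: fire when every condition in a rule holds.
# Field names and the one example rule are invented for illustration.

ALERT_RULES = [
    {
        "name": "possible viral hemorrhagic fever",
        "conditions": [
            lambda chart: chart.get("recent_travel") in {"Liberia", "Guinea", "Sierra Leone"},
            lambda chart: chart.get("temp_f", 0.0) >= 103.0,
        ],
        "action": "isolate patient and notify infection control",
    },
]

def check_alerts(chart):
    """Return (name, action) for every rule whose conditions all hold."""
    return [(rule["name"], rule["action"])
            for rule in ALERT_RULES
            if all(cond(chart) for cond in rule["conditions"])]

print(check_alerts({"recent_travel": "Liberia", "temp_f": 103.1}))
```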
Replies from: Lumifer↑ comment by Lumifer · 2014-10-30T21:13:19.465Z · LW(p) · GW(p)
But no automated system can be perfectly reliable
You are using the wrong yardstick. Ain't no thing is perfectly reliable. What matters is whether an automated system will be more reliable than the alternative -- human doctors.
Commercial aviation has a pretty good safety record while relying on autopilots. Are you quite sure that without the autopilot the safety record would be better?
whenever the system spits out, "No diagnosis found".
And why do you think a doctor will do better in this case?
Replies from: Swimmer963, RobinZ↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-10-30T21:34:46.830Z · LW(p) · GW(p)
I was going to say "doctors don't have the option of not picking the diagnosis", but that's actually not true; they just don't have the option of not picking a treatment. I've had plenty of patients who were "symptom X not yet diagnosed" and the treatment is basically supportive, "don't let them die and try to notice if they get worse, while we figure this out." I suspect that often it never gets figured out; the patient gets better and they go home. (Less so in the ICU, because it's higher stakes and there's more of an attitude of "do ALL the tests!")
Replies from: EGI↑ comment by EGI · 2014-10-30T23:43:23.015Z · LW(p) · GW(p)
they just don't have the option of not picking a treatment.
They do, they call the problem "psychosomatic" and send you to therapy or give you some echinacea "to support your immune system" or prescribe "something homeopathic" or whatever... And in very rare cases especially honest doctors may even admit that they do not have any idea what to do.
↑ comment by RobinZ · 2014-10-30T21:59:35.858Z · LW(p) · GW(p)
Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient's hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean ... a doctor won't be perfectly reliable either, but like a professional scout who can say, "His college batting average is .400 because there aren't many good curveball pitchers in the league this year", a doctor can detect low-prior confounding factors a lot faster than a computer can.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-31T00:41:56.364Z · LW(p) · GW(p)
Well, let's imagine a system which actually is -- and that might be a stretch -- intelligently designed.
This means it doesn't say "I diagnose this patient with X". It says "Here is a list of conditions along with their probabilities". It also doesn't say "No diagnosis found" -- it says "Here's a list of conditions along with their probabilities, it's just that the top 20 conditions all have probabilities between 2% and 6%".
It also says things like "The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C".
A doctor might ask it "What about disease Y?" and the expert system will answer "Its probability is such-and-such; it's not zero because of symptoms Q and P, but it's not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C."
And there probably would be a button which says "Explain", and pressing it will show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around it and say things like "What happens if we change these coughs to hiccups?"
An intelligently designed expert system often does not replace the specialist -- it supports her, allows her to interact with it, ask questions, refine queries, etc.
If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just tells her what to do.
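Building on the illustrative ranker sketched earlier, the "what if" interaction described here could be prototyped by recomputing the posterior with one finding toggled and reporting the shift:

```python
# What-if query on top of a diagnosis ranker: toggle one finding and diff the
# posteriors. Reuses the illustrative rank_diagnoses() sketched above; all
# findings are invented.

def what_if(observed, symptom, new_value):
    before = dict(rank_diagnoses(observed))
    after = dict(rank_diagnoses({**observed, symptom: new_value}))
    return {condition: after[condition] - before[condition] for condition in after}

findings = {"cough": True, "fever": True, "dyspnea": False}
print(what_if(findings, "cough", False))  # e.g. "change these coughs to hiccups"
```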
Replies from: RobinZ↑ comment by RobinZ · 2014-10-31T01:09:57.652Z · LW(p) · GW(p)
Well, let's imagine a system which actually is -- and that might be a stretch -- intelligently designed.
Us? I'm a mechanical engineer. I haven't even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease - and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane.
The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what's going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.
comment by Kaj_Sotala · 2014-10-31T13:22:21.709Z · LW(p) · GW(p)
Heroic responsibility always struck me as the kind of thing that a lot of people probably have too little of, but also as the kind of thing that will just make you a miserable wreck if you take it too seriously. After all, interpreted literally, it means that every person dying of a terrible disease, every war, every case of domestic violence, etc. etc. happening in the world, now or in the future, is because you didn't stop it.
The concept is useful to have as a way to remind ourselves that often supposed "impossibles" just mean we're unwilling to put a real effort into it, and that we shouldn't just content ourselves to doing the things that our socially-prescribed roles require from us. But at the same time, some things - like preventing every nasty thing that's happening on Earth right now - really are impossible, and that won't change just because someone tells themselves otherwise. And feeling guilty about all of it won't do anyone any good.
Basically, "heroic responsibility" means telling yourself that yes, it's possible for you to fix that problem, regardless of how difficult or challenging it feels. Which is obviously a falsehood, because some problems genuinely are unsolvable. But a small dose of that falsehood can be helpful in counteracting our biases towards self-serving behavior. Since we have those biases, introducing a small dose of an opposite falsehood into our reasoning can bring the system towards an overall more correct state. But if we introduce too much of it, we'll end up more distant from the truth again, and believing in heroic responsibility too much may be worse for our well-being than believing in it too little.
comment by private_messaging · 2014-10-29T07:01:25.050Z · LW(p) · GW(p)
There may be Dunning-Kruger effect though...
I don't know about the medical context, but in the software context, the "heroically responsible" developer is the new guy who is waxing poetic about switching to another programming language (for no reason, and entirely unaware of all the bindings that would need to be implemented), who wants others to write unit tests in situations where they're inapplicable, or to do some sort of agile development where a more formal process with tests is necessary, and who fails to recognize the unit testing already in place, etc.
He puts himself and his need to be the hero of the project's story ahead of the needs of the project, which is irresponsible; he doesn't actually take time to critically evaluate his own proposals before making them (not fun), which is again irresponsible. His need to heroically save the project is more important than the success of the team. People like him are the starters of the 90%+ of start-ups that fail, wasting other people's money and time.
But in his own mind he's the only responsible person on the whole team. The tech lead spends his near-deadline weekend going over thousands of lines of other people's code and fixing up other people's bugs? Doesn't register with that new guy; it's still just him.
Eventually most people grow out of that mindset. (I'd dare say most people exhibit some such behaviours for at least a short period of time.)
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-10-29T07:32:30.505Z · LW(p) · GW(p)
This may indeed be a failure mode that new people on teams are prone to, and maybe even something that new people on teams are especially prone to if they've read HPMOR, but I don't think it's the same as the thing I'm talking about–and in particular this doesn't sound like me, as a new nurse who's read HPMOR. I think the analog in nursing would be the new grad who's carrying journal articles around everywhere, overconfident in their fresh-out-of-school knowledge, citing the new Best Practice Guidelines and nagging all the experienced nurses about not following them. Whereas I'm pretty much always underconfident, trying to watch how the experienced nurses do things and learn from them, asking for help lots, and offering my help to everyone all the time. Which is probably annoying sometimes, but not in the same way.
I think that there is a spirit of heroic responsibility that makes people genuinely stronger, which Eliezer is doing his best to describe in HPMOR, and what you described is very much not in the spirit of heroic responsibility.
Replies from: private_messaging↑ comment by private_messaging · 2014-10-29T16:18:44.262Z · LW(p) · GW(p)
Whereas I'm pretty much always underconfident
That's a bit of a self-contradictory statement, isn't it? (People can be unassertive but internally very overconfident, by the way).
So you have that patient, and you have your idea of the procedures that should have been done, and there's the doctor's, and in retrospect you think you were under-confident that your treatment plan was superior? What if magically you were in the position where you'd actually have to take charge? Where ordering a wrong procedure hurts the patient? It's my understanding that there's a very strong initial bias to order unnecessary procedures, one that takes years of experience to overcome.
I suspect it's one of those things that look very different from the inside and from the outside... None of those arrogant newbies would have seen themselves in my description (up until they wise up). Also, your prototype here is the heroic responsibility for saving the human race, taken upon by someone who neither completed formal education in relevant subjects, nor (which would actually be better to see) produced actual working software products of relevance, nor did other things of such a nature, evaluated to be correct in a way that's somewhat immune to rationalization. And a straightforwardly responsible thing to do is to try to do more of the rationalization-immune things for practice, because the idea is that screwing up here has very bad consequences.
The other issue is that you are essentially thinking meat, and if the activation of the neurons used for responsibility is outside a specific range, things don't work right: performance is impaired, responsibility too is impaired, etc., whether the activation is too low or too high.
edit: to summarize with an analogy, say, driving a car without having passed a driving test is irresponsible, right? No matter how much you feel that you can drive the bus better than the person who's legally driving it, the responsible thing to do is to pass a driving test first. Now, the heroes, they don't need no stinking tests. They jump into the airplane cockpit and they land it just fine, without once asking if there's a certified pilot on board. In most fiction, heroes are incredibly irresponsible, and the way they take responsibility for things is very irresponsible, but it all works out fine because it's fiction.
Replies from: Swimmer963, wedrifid↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-10-29T17:41:41.953Z · LW(p) · GW(p)
So you have that patient, and you have your idea of the procedures that should have been done, and there's the doctor's, and in retrospect you think you were under-confident that your treatment plan was superior?
I'm not sure that the doctor and I disagreed on that much. So we had this patient, who weighed 600 pounds and had all the chronic diseases that come with it, and he was having more and more trouble breathing–he was in heart failure, with water backing up into his lungs, basically. Which we were treating with diuretics, but he was already slowly going into kidney failure, and giving someone big doses of diuretics can push them into complete kidney failure, and also can make you deaf–so the doses we were giving him weren't doing anything, and we couldn't give him more. Normally it would have been an easy decision to intubate him and put him on a ventilator around Day 3, but at 600 pounds, with all that medical history, if we did that he'd end up in the hospital for six months, with a tracheotomy, all that. So the doctor had a good reason for wanting to delay the inevitable as long as possible. We were also both expecting that he would need dialysis sooner or later...but we couldn't put him on dialysis to take water off his lungs and avoid having to intubate him, because he was completely confused and delirious and I had enough trouble getting him to keep his oxygen mask on. Dialysis really requires a patient who stays still. We couldn't give him too many medications to calm him down, because anything with a sedative effect would decrease his respiratory effort, and then he'd end up needing to be intubated.
Basically, it was a problem with so many constraints that there was no good solution. I think that my disagreement with the doctor was over values–specifically, the doctor thought of the scenario where we intubate him and put him on dialysis on Monday as basically equivalent to the scenario where we delay it as long as possible and then end up intubating him on Thursday. Whereas to me, the latter, where my patient got to spend four extra days writhing around, confused and in pain and struggling to breathe, was a lot worse. I think nurses are trained to have more empathy and care more about a patient being in pain, and also I was seeing him for twelve hours a day whereas the doctor was seeing him for five minutes. And I was really hoping that there was a course of action no one had thought of that was better...but there wasn't, at least not one I was able to think of. So the guy suffered for five days, ended up intubated, and is probably still in the hospital.
What if magically you were in the position where you'd actually have to take charge? Where ordering a wrong procedure hurts the patient?
I would be terrified all the time of doing the wrong thing. Maybe even more than I already am. I think as a nurse, I basically have causal power a lot of the time anyway–I point a problem out to the doctor, I suggest "do you want to do X", he says, "Yeah, X is a good idea." That's scary, despite the presence of a back-up filter that will let me know if X is a terrible idea. [And doctors also have a lot of back-up filters: the pharmacy will call them to clarify a medication order that they think is a bad idea, and nurses can and will speak their opinion, and have the right to refuse to administer treatment if they think that it's unsafe for the patient.]
Replies from: private_messaging↑ comment by private_messaging · 2014-10-30T12:41:38.041Z · LW(p) · GW(p)
Well, from your description it may be that the doctor has less hyperbolic discounting (due to having worked longer), being more able to weigh the chance of avoiding intrusive procedures and long-term hospitalization, which carry huge risks as well as a huge amount of total pain over time.
↑ comment by wedrifid · 2014-10-30T12:37:27.462Z · LW(p) · GW(p)
That's a bit of a self-contradictory statement, isn't it?
No, that is an entirely coherent claim for a person to make and not even a particularly implausible one.
Replies from: private_messaging↑ comment by private_messaging · 2014-10-30T13:22:17.525Z · LW(p) · GW(p)
To say that you're underconfident is to say that you believe you're correct more often than you believe yourself to be correct. The claim of underconfidence is not a claim underconfident people tend to make. Underconfident people usually don't muster enough confidence about their tendency to be right to conclude that they're underconfident.
Replies from: gjm↑ comment by gjm · 2014-10-31T02:39:41.734Z · LW(p) · GW(p)
It's self-contradictory only in the same way as "I believe a lot of false things" is. (Maybe a closer analogy: "I make a lot of mistakes.") In other words, it makes a general claim that conflicts with various (unspecified) particular beliefs one has from time to time.
I am generally underconfident. That is: if I look at how sure I am about things (measured by how I feel, what I say, and in some cases how willing I am to take risks based on those opinions), with hindsight it turns out that my confidence is generally too low. In some sense, recognizing this should automatically increase my confidence levels until they stop being too low -- but in practice my brain doesn't work that way. (I repeat: in some sense it should, and that's the only sense in which saying "I am generally underconfident" is self-contradictory.)
I make a lot of mistakes. That is: if I look at the various things I have from time to time believed to be true, with hindsight it turns out that quite often those beliefs are incorrect. It seems likely that I have a bunch of incorrect current beliefs, but of course I don't know which ones they are.
(Perhaps I've introduced a new inconsistency by saying both "I am generally underconfident" and "I make a lot of mistakes". As it happens, on the whole I think I haven't; in any case that's a red herring.)
Replies from: private_messaging↑ comment by private_messaging · 2014-10-31T08:50:11.575Z · LW(p) · GW(p)
Yes, that's why I said it was a bit self-contradictory. The point is, you've got to have two confidence levels involved that aren't consistent with each other, one being lower than the other.
comment by undermind · 2014-10-30T04:58:44.032Z · LW(p) · GW(p)
I probably am going to leave nursing.
This makes me sad to hear. It sounds like you've been really enjoying it. And I think that those of us here on LW have benefited from your perspective as a nurse in many ways -- you've demonstrated its worth as a career choice, and challenged people's unwarranted assumptions.
comment by mare-of-night · 2014-11-08T04:31:41.334Z · LW(p) · GW(p)
This was really, really good for me to hear. I think permission to not be a hero was something I needed. (The following is told vaguely and with HP:MOR metaphors to avoid getting too personal.)
I had a friend who I tried really hard to help, in different ways at different times, but most of it all relating to the same issue. I remember once spending several days thinking really hard about an imminently looming crisis, trying to find some creative way out, and eventually I did, but it was almost as bad an idea as using Hufflepuff bones to make weapons, so I didn't do it. It was probably also morally wrong, but even now I can't quite get that on an emotional level.
At one point, I thought I was in a position to start doing something about the core problem. I kept trying, but it wasn't working. And then I tried too hard and made everything worse, then temporarily cut ties to avoid doing more damage. Said goodbye a while later, and walked away.
We still talk, occasionally. They're still in hell. I left them there. I walked away without letting the prisoner out of their cell.
I have a lot of roadblocks in my mind, put there to avoid depression and such, which are stopping me from feeling terrible about it. I still wonder at the back of my mind if maybe I should feel terrible, for leaving a friend to their fate like that. I'm trying to think now whether there's anything I still could do, but my brain is putting up a big flashing warning sign not to do that. And when I try to think objectively about it without heading into risky mental territory, expected value of me trying to help again does not look good. I guess maybe this is where equal and opposite advice applies.
Anyway, thanks for this post. I think I did the right thing by leaving, but it doesn't feel that way.
comment by TheOtherDave · 2014-10-29T14:01:17.703Z · LW(p) · GW(p)
My $0.02: it matters whether I trust the system as a whole (for example, the hospital) to be doing good.
If I do, then if I'm going to be "heroically" responsible I'm obligated to take that into account and make sure my actions promote the functioning of the system as a whole, or at least don't impede it. Of course, that's a lot more difficult than just focusing on a particular bit of the environment that I can improve through my actions. But, well, the whole premise underlying "heroic" responsibility is that difficulty doesn't matter, we just do the "impossible" because hey, it needs doing.
If I don't, then I can basically ignore the system and go forth and "heroically" do good on my own.
So, yeah, maybe being a "heroically" responsible nurse (as opposed to a "heroically" responsible person, who as you suggest might find it necessary to stop being a nurse and instead take over the medical profession and run it properly) would involve coordinating with the other nurses on your unit, and not just going off on your own to do what you can do with your own two hands.
Which, I understand, is a very different model of "heroic" responsibility than what's presented in the Sequences (and HPMOR), which is much more about individual achievements against a backdrop of a system that's at best useless and more often harmful.
Another $0.02: this whole notion of "heroic" responsibility seems incompatible with counting the cost. If achieving whatever-it-is requires working 24-hour shifts, according to this model, then by gum you work 24-hour shifts!
So, yes, burnout is inevitable if whatever-it-is is the sort of thing, like sick patients, that is being presented in a steady stream.
There's a big difference between a goal like "invent a technology that optimizes the world for human value," which only needs to be done once, and a goal like "care for my patient" which has to be done over, and over, and over. I'm not sure it's possible for humans to be "heroic" about the latter without generalizing to the root causes and giving up being a nurse.
comment by Mass_Driver · 2014-10-29T07:26:34.772Z · LW(p) · GW(p)
You might be wrestling with a hard trade-off between wanting to do as much good as possible and wanting to fit in well with a respected peer group. Those are both good things to want, and it's not obvious to me that you can maximize both of them at the same time.
I have some thoughts on your concepts of "special snowflake" and "advice that doesn't generalize." I agree that you are not a special snowflake in the sense of being noticeably smarter, more virtuous, more disciplined, whatever than the other nurses on your shift. I'll concede that you and they have -basically- the same character traits, personalities, and so on. But my guess is that the cluster of memes hanging out in your prefrontal cortex is more attuned to strategy than their meme-clusters -- you have a noticeably different set of beliefs and analytical tools. Because strategic meme-clusters are very rare compared to how useful they are, having those meme-clusters makes you "special" in a meaningful way even if in all other respects you are almost identical to your peers. The 1% more-of-the-time that you spend strategizing about how best to accomplish goals can double or triple your effectiveness at many types of tasks, so your small difference in outlook leads to a large difference in what kinds of activities you want to devote your life to. That's OK.
Similarly, I agree with you that it would be bad if all the nurses in your ward quit to enter politics -- someone has to staff the bloody ward, or no amount of political re-jiggering will help. The algorithm that I try to follow when I'm frustrated that the advice I'm giving myself doesn't seem to generalize is to first check and see if -enough- people are doing Y, and then switch from X to Y if and only if fewer-than-enough people are doing Y. As a trivial example, if forty of my friends and I are playing soccer, we will probably all have more fun if one of us agrees to serve as a referee. I can't offer the generally applicable advice "You should stop kicking the ball around and start refereeing." That would be stupid advice; we'd have forty referees and no ball game. But I can say "Hm, what is the optimal number of referees? Probably 2 or 3 people out of the 40 of us. How many people are currently refereeing? Hm, zero. If I switch from playing to refereeing, we will all have more fun. Let me check and see if everyone is making the same leap at the same time and scrambling to put on a striped shirt. No? OK, cool, I'll referee for a while." That last long quote is fully generalizable advice -- I wish literally everyone would follow it, because then we'd wind up with close to an optimal number of referees.
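To make the referee rule concrete, here is a minimal sketch in Python (the function name and the check for others simultaneously switching are my own illustrative framing, not anything canonical from the comment):

```python
# Minimal sketch of the role-switching rule above: switch from X to Y
# if and only if fewer-than-enough people are doing Y, after checking
# whether others are already scrambling to switch at the same time.

def should_switch_to_y(doing_y: int, optimal_y: int, others_switching: int = 0) -> bool:
    """Return True iff Y is understaffed, even counting people mid-switch."""
    return doing_y + others_switching < optimal_y

# The soccer example: ~2 referees is optimal, zero people currently refereeing.
print(should_switch_to_y(doing_y=0, optimal_y=2))                      # True: put on the striped shirt
print(should_switch_to_y(doing_y=0, optimal_y=2, others_switching=3))  # False: keep playing
```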
comment by Shmi (shminux) · 2014-10-29T06:42:33.839Z · LW(p) · GW(p)
But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them.
I have known of this concept for a couple of years, and I admire people who are moved by it and really attempt to "make a difference" in a heroic way, but I am not one of them and have no inclination to be. I suspect that my mind is more typical than yours in this respect, and that you telling the other nurses about it will not cause a flinch. More like an "oh, that's nice" reaction before going back "to the things that they were doing anyway", without any additional impetus to argue with a doctor, or go an extra mile, or put more effort into fighting Moloch.
comment by Richard_Kennaway · 2014-10-31T14:42:15.369Z · LW(p) · GW(p)
Leaving aside the scale implied by the word "heroic", another word for "heroic responsibility" is "initiative". A frame of mind in which the thought, "I don't know how to solve this" is immediately followed not by "therefore I can do nothing" but by "therefore I will find a way."
comment by wedrifid · 2014-10-30T12:32:37.539Z · LW(p) · GW(p)
I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.
You are probably right. That would be a horrific lesson in the valley of bad rationality. I really do not want people to start actually acting on their beliefs and values. That makes things (literally) explode.
comment by Gunnar_Zarncke · 2014-10-29T21:42:27.891Z · LW(p) · GW(p)
Someone’s going to be the Minister of Health for Canada, and they’re likely to be in a position where taking heroic responsibility for the Canadian health care system makes things better.
Maybe they are in such a position - but they are gears too. More powerful gears, but they are part of a machine that selects these gears by certain properties and puts them in places where they bear more load. By this analogy I wouldn't want such a gear to spring out of place. It could disrupt the whole machine. At best you can hope that the gear spins a bit faster or slower than expected. But maybe the machine analogy is broken.
comment by Richard_Kennaway · 2014-11-02T18:49:35.522Z · LW(p) · GW(p)
All of the discussion here has been based on the assumption that heroic responsibility is advocated by HPMOR as a fundamental moral virtue. But it is advocated by Harry Potter. Eliezer wrote somewhere about what in HPMOR can and what cannot be taken as the author's own views. I forget the exact criterion, but I'm sure it did not include "everything said by HP".
Heroic responsibility is a moral tool. That not everyone is able to use the tool, that the tool should not always be employed, that the tool exacts its own costs: these are all true. The tool itself is still a thing of usefulness and value, to be taken out and used when appropriate, and kept sharp the rest of the time.
Scaled down from heroic levels, it is what on LW has been called agentiness, or being a PC. I called it initiative in another comment in this thread.
A footnote:
I just looked up "initiative" on Google. Does it no longer mean what it used to? The first page of hits gives good definitions and examples from dictionary sites ("the ability to assess and initiate things independently"), but the rest of the hits are to brand names and actions taken by organisations, not individuals. I went down to the 20th page of hits, and apart from a few media companies using the word as a brand name and one more dictionary entry, it was all activities by organisations. I didn't find a single example of the word used in the sense of agentiness.
What does "initiative" mean to people who learned it in the last 20 years?
Replies from: hargup, Lumifer, Emily↑ comment by hargup · 2014-12-15T21:15:50.479Z · LW(p) · GW(p)
Eliezer wrote somewhere about what in HPMOR can and what cannot be taken as the author's own views. I forget the exact criterion, but I'm sure it did not include "everything said by HP".
This is mentioned at the beginning of the book:
" please keep in mind that, beyond the realm of science, the views of the characters may not be those of the author. Not everything the protagonist does is a lesson in wisdom, and advice offered by darker characters may be untrustworthy or dangerously double-edged."
comment by undermind · 2014-10-30T05:19:27.407Z · LW(p) · GW(p)
I'm wary of advice that doesn't generalize.
I'm wary of advice that does claim to generalize. Giving good advice is a hard problem, partly because it's so context-specific. Yes, there are general principles, but there are tons of exceptions, and even quite similar situations can trigger these exceptions.
Kant got into this kind of problem with (the first formulation of) the categorical imperative. There are many things that are desirable if some people, but not everybody, do them -- say, learning any specific skill or filling a particular social function.
What's the difference between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she's needed as a gear?
There are several bad answers to this, and you're right to be suspicious of them. In particular, feeling like you're special is not sufficient reason to act like you're special.
But different people have different value systems and abilities. If people are given the opportunity to develop their skills (up to the limit of interest and/or natural ability), then they should differentiate their roles based on value systems.
In this case: some people want stability, family, friends etc., and some people want to change the world. (It gets difficult for those of us who want all of the above, unfortunately.) No, you don't get to dictate what other people can do with their lives. But I really think you're in no danger of doing so -- even if you do make a distinction between yourself and other nurses (which is really not arbitrary, as you seem to be afraid it is), you're just choosing your own path, not theirs.
comment by Cyan · 2014-10-30T01:57:25.245Z · LW(p) · GW(p)
FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.
Replies from: XFrequentist↑ comment by XFrequentist · 2014-10-30T04:15:54.983Z · LW(p) · GW(p)
Ooh ooh, do mine!
Replies from: Cyan↑ comment by Cyan · 2014-10-30T12:36:35.959Z · LW(p) · GW(p)
Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.
You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.
comment by Viliam_Bur · 2014-10-29T17:57:59.474Z · LW(p) · GW(p)
One possible thing you could do while being a nurse is starting a blog about the problems nurses face. A blog where other nurses could also post anonymously (but you would moderate it to remove the crazy stuff).
There is a chance that the new Minister of Health would read it. Technically, you could just send them a hyperlink once the articles are already there.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-10-31T13:46:14.936Z · LW(p) · GW(p)
And possibly ending up in Atul Gawande's position, which I hope is doing good in addition to what he could do as an individual doctor.
15 nursing blogs. I recommend skipping the introduction and going straight to the links. I don't know if any of them are from a rationalist angle.
comment by Neil (neil-warren) · 2023-06-22T14:11:41.555Z · LW(p) · GW(p)
There's an interesting concept Adam Grant introduced to me in Originals: the "risk portfolio". For him, people who are wildly creative and take risks in one domain compensate by being extra cautious in another domain ("drive carefully on your way to the casino"). The same might apply to heroic responsibility: continue working as a cog in the system on Mondays, and write well-crafted, thought-provoking posts on LessWrong (where the median person wants to take over the world) on Sundays.
comment by Kenny · 2014-11-09T05:06:35.943Z · LW(p) · GW(p)
I think you're wrong about how the other nurses on your unit, and other people generally, would react to the idea of 'heroic responsibility', depending on how you were to bring it up and present it.
The key part of the quote with which I would expect lots of people to agree is:
“You can’t think as if just following the rules means you’ve done your duty."
I'd expect everyone to have encountered an incompetent or ineffective authority figure. I'd also expect nurses to routinely help each other out, and help their patients, by taking actions that aren't formally or technically their responsibility. Ex. "Did Ms. Smith ever get that pillow she requested?"
But fully accepting heroic responsibility means you're also accepting responsibility for (a) doing what needs to be done despite feelings of guilt; (b) not burning yourself out (unless that's the best course of action); (c) not accepting heroic responsibility for one thing (unless that's all you truly care about).
Accepting heroic responsibility for all your patients very well might be best honored by you doing whatever you need to do "to care less – and thus be less frustrated and more emotionally available to comfort a guy who was having the worst week of his life".
comment by AshwinV · 2014-11-01T18:46:32.985Z · LW(p) · GW(p)
I kind of feel that heroic responsibility works better in situations where small individuals have the potential to make a large difference.
For example, in the world of HPMoR, it makes sense for one person to have a sort of heroic responsibility, because a sufficiently powerful wizard can actually make waves, can actually play a keystone role in the shaping of events.
On the other hand, take an imaginary planet where all the inhabitants are of equal size, shape and intelligence and there are well over a zillion inhabitants. On this planet, it is very hard to imagine a single inhabitant assuming responsibility for the actions of all the other zillion inhabitants on the planet.
Even in the examples discussed above, the minister, having a lot of power, is in a better position to take heroic responsibility for the functioning of the system than any of the individual nurses. I know it sounds like I'm saying heroic responsibility should be left to the heroes, but my point is more subtle than that.
The prime considerations as to whether you should take up heroic responsibility are the situation in front of you and the extent of your capabilities.
comment by Adam Zerner (adamzerner) · 2015-04-24T01:54:35.218Z · LW(p) · GW(p)
In short, I don't see any "philosophical" points to discuss here, just practical ones. I apologize if I'm being too literal and missing out on something. Please let me know if I am.
All I got from the idea of heroic responsibility is, "Delegating responsibility to authorities is a heuristic. Heuristics sacrifice accuracy for speed, and will thus sometimes be inaccurate. People tend to apply this heuristic way too much in real life without thinking about whether or not doing so makes sense."
Concrete questions:
- How should a nurse act if she was optimizing for society's aggregate happiness?
- How should a nurse act if she was optimizing for her own happiness?
- How should a nurse act if she was optimizing for a blend between the two?
- How should a nurse act if she was optimizing for "heroic responsibility"?
Take the Standard Definitional Dispute, for example, about the tree falling in a deserted forest. Is there any way-the-world-could-be—any state of affairs—that corresponds to the word "sound" really meaning only acoustic vibrations, or really meaning only auditory experiences?
Is there any way-the-world-could-be that corresponds to the phrase "heroic responsibility" really meaning something?
Why does the mind formulate questions of responsibility?
I don't know, but I have a hypothesis. My hypothesis is social pressure/conditioning. Personally, even though I think it's a Wrong Question, I nevertheless feel like it's a real thing (to some extent).
I feel some sort of drive toward "heroic responsibility" and away from "heroic irresponsibility". Even when I dissolve the question, there's still a force at play: my emotions telling me, "Bad Adam. I don't care if you dissolve the question. Heroic Responsibility is Good and Heroic Irresponsibility is Bad".
Being a well-functioning gear.
- There is a downside to "going against the grain". The machine seems to be somewhat fragile, and a deviant gear is likely to screw stuff up.
- There's also the obvious upside of possibly making the machine more efficient.
- Whether or not it's worth it to go against the grain depends on the cost-benefit.
What's different between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she's needed as a gear?
1) What they optimize for, and 2) EV of trying.
The machine does need fixing–but I would argue that from within the machine, as one of its parts, taking heroic responsibility for your own sphere of control isn’t the way to go about fixing the system.
My impression is that that's true for the majority of people in the majority of situations, because they're not smart enough for the EV of them going against the grain to be worth it.
As for whether or not it's worth it for you:
1) It depends on what your goals are. Largely the personal happiness vs. altruism question: what is the blend you're optimizing for?
2) My impression: situations like the one you describe in Something Impossible seem unlikely to be worth deviating in. However, my impression is that if you put your mind to it and took a more intermediate-long term approach to pursuing change, you could do incredible things.
↑ comment by joaolkf · 2014-11-26T20:29:05.966Z · LW(p) · GW(p)
It seems you have just closed the middle road.
Replies from: private_messaging↑ comment by private_messaging · 2014-11-28T04:33:39.377Z · LW(p) · GW(p)
I don't think it can be closed. I mean, when one derives that level of heroic smugness from something as small as a few lightbulbs... a lot of people add a lot of lights just because they like it brighter. Which is ultimately what it boils down to, if you go with a qualitative 'more light is better for mood'.
comment by DanielLC · 2014-11-03T06:12:50.869Z · LW(p) · GW(p)
I think the problem is mixing heroic responsibility with the idea that responsibility is something you can consistently fulfil. You can fulfil your responsibility as a nurse. Just do your job. Heroic responsibility isn't like that. You will let someone die about 1.8 times per second. Just save as many as you can. And to do that, start with the ones that are easiest to save. GiveWell has some tips for that.
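For a sense of where that figure comes from, here is a back-of-the-envelope check (the 56-million death count is my own rough global figure for around 2014, not from the comment):

```python
# Rough sanity check of the "~1.8 deaths per second" figure.
# Assumption: about 56 million deaths worldwide per year, circa 2014.
deaths_per_year = 56_000_000
seconds_per_year = 365.25 * 24 * 3600   # about 31.6 million seconds
print(deaths_per_year / seconds_per_year)  # ~1.77, i.e. "about 1.8"
```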
comment by Jonathan Paulson (jpaulson) · 2014-10-30T05:37:28.120Z · LW(p) · GW(p)
I think for most things, it's important to have a specific person in charge, and have that person be responsible for the success of the thing as a whole. Having someone in charge makes sure there's a coherent vision in one person, makes a specific person accountable, and helps make sure nothing falls through the cracks because it was "someone else's job". When you're in charge, everything is your job.
If no one else has taken charge, stepping up yourself can be a good idea. In my software job, I often feel this way when no one is really championing a particular feature or bug. If I want to get it done, I have to own it and push it through myself. This usually works well.
But I don't think taking heroic responsibility for something someone else already owns is a good idea. Let them own it. Even if they aren't winning all the time, or even if they sometimes do things you disagree with (obviously, consistent failure is a problem).
Nor do I think dropping everything to fix the system as a whole is necessarily a good idea (but it might be, if you have specific reforms in mind). Other people are already trying to fix the system; it's not clear that you'll do better than them. It might be better to keep nursing, and look for smaller ways to improve things locally that no one is working on yet.
comment by Jackercrack · 2014-10-29T23:37:10.117Z · LW(p) · GW(p)
I think heroic responsibility is essentially a response to being in a situation where not enough people are both competent at and willing to make changes to improve things. The authority figures are mad or untrustworthy, so a person has to figure out their own way to make the right things happen and put effective methods in place. It is particularly true of HPMOR, where Harry plays the role of Only Sane Man. So far as I can tell, we're in a similar situation in real life at the minute: we have too few highly sane people taking heroic responsibility. If we had enough sane people taking heroic responsibility, things would look rather different and likely a lot better run. It would be easy to be a happy gear, if you knew the machine was properly designed, the goal was a good one and the plan was likely to succeed.
There is clearly some ratio of heroically responsible:useful gears that works the best for each situation, some optimal equilibrium. Too many people trying to unilaterally change things in different directions and you get chaos and infighting. Too many useful gears and you have a wonderfully maintained, smooth-running machine working at 40% maximum efficiency towards a goal that doesn't make much sense. I propose heroic responsibility be fit into a larger framework, that of filling the role that is required. I can't think up a snappy name for it, but you essentially mould your actions into the shape that best maximises your group's outcome given your abilities. If there is already someone doing the job of heroic responsibility and doing it well, you aim for the next empty or poorly-done role down, where you do most good. If all the important positions are filled by competent people or by people more competent than you, then be a gear.
The problem lies in knowing whether you actually could do better in someone's situation with the resources available to them. The ethical injunctions seem to say that humans are rather bad at figuring out when they could do better than those in power. There is only an injunction against cheating to gain power though, not against legitimate means. Still, it's difficult to say whether one is actually more competent, and a large amount of research into the potential role should be done before action is taken. If you can see a decision for the role coming, make a strong prediction of what a good choice would look like, then see what decision the person actually made, predict its effects, and finally check whether you got it right, with a better batting average than the person in the role; if you can do that, then it's time to go for the position.
Disclaimer: There is a possibility that this theory is an extension of my belief that I should eventually be in charge.
Replies from: shminux, Lumifer↑ comment by Shmi (shminux) · 2014-10-30T02:08:47.862Z · LW(p) · GW(p)
The problem lies in knowing whether you actually could do better in someone's situation with the resources available to them. The ethical injunctions seem to say that humans are rather bad at figuring out when they could do better than those in power. There is only an injunction against cheating to gain power though, not against legitimate means.
To me Snowden is one of the best examples of taking heroic responsibility: all the way to potentially breaking the law and getting into harm's way to make the world a better place.
↑ comment by Lumifer · 2014-10-30T00:29:00.584Z · LW(p) · GW(p)
If we had enough sane people taking heroic responsibility, things would look rather different and likely a lot better run.
I don't know about that. Let me offer you an example of a not-mad person who took heroic responsibility: Lenin.
Generally speaking, it's all very tightly tied to values. If you share the values, the person "takes heroic responsibility"; if you don't share the values, the person is just a fanatic.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-10-30T08:34:18.925Z · LW(p) · GW(p)
You say he's not-mad, but isn't he the spitting image of the revolutionary that power corrupts? Wasn't Communism the archetype of the affective death spiral? It would appear he was likely suffering from syphilis, a disease that can cause confusion, dementia and memory problems. Anyway, isn't that an ad hominem argument?
Replies from: wedrifid, Lumifer↑ comment by wedrifid · 2014-10-30T11:51:12.426Z · LW(p) · GW(p)
Anyway, isn't that an ad hominem argument?
No. It is an argument which happens to use the perceived negative consequences of an individual's actions as a premise. Use of 'ad hominem!' to reject a claim only (legitimately) applies when there is a fallacy of relevance that happens to be a personal attack that doesn't support the conclusion. It does not apply whenever an argument happens to contain content that reflects badly on an individual.
↑ comment by Lumifer · 2014-10-30T14:43:29.755Z · LW(p) · GW(p)
isn't he the spitting image of the revolutionary that power corrupts?
Lenin in the 1920s is not relevant to this argument; I would say he "took heroic responsibility" around, say, 1915-1918, and it looks to me that it would be hard to argue that he was already corrupted by power at that point.
But if you don't like this example I'm sure I can find others. The underlying point is rather simple -- imagine "enough sane people taking heroic responsibility" with these people having a value system you find unacceptable...
Replies from: Jackercrack↑ comment by Jackercrack · 2014-10-30T23:49:18.748Z · LW(p) · GW(p)
I think we're using a different meaning of the word sane. See, I hold sanity to a rather high standard which excludes a huge breadth of people, probably myself as well until I've progressed somewhat.
I am imagining enough sane people taking heroic responsibility; the world looks rather different than this, and it seems to be better run. We already have people in charge with value systems unacceptable to me; making them at least competent and getting them to use evidence-based strategies seems like a step forwards. People will have a normal range of value systems; if a particularly aberrant person comes along with a particularly strange value system, then they'll still have to outsmart all the other people to actually get their unacceptable value system in place.
Honestly Lumifer, I'm beginning to think you never want to change anything about any power structure in case it goes horribly wrong. How are things to progress if no changes are allowed?
Replies from: Lumifer↑ comment by Lumifer · 2014-10-31T00:58:10.551Z · LW(p) · GW(p)
We already have people in charge with value systems unacceptable to me; making them at least competent and getting them to use evidence-based strategies seems like a step forwards.
Why is it a step forward? If these people have value systems unacceptable to you, presumably you want them stopped or at least slowed. You do NOT want them to become more efficient.
People will have a normal range of value systems
That, um, is entirely non-obvious to me. Not to mention that I have no idea what you mean by "normal".
I'm beginning to think you never want to change anything about any power structure in case it goes horribly wrong.
Oh, I do, I do. Usually, the first thing I want to do is reduce its power, though :-D
But here I'm basically pointing out that both rationality and willingness to do something at any cost (which is what heroic responsibility is) are orthogonal to values. There are two consequences.
First, heroic responsibility throws overboard the cost-benefit analysis. That's not really a good thing for people who run the world to do. "At any cost" is rarely justified.
Second, I very much do NOT want people with values incompatible with mine to become more efficient, more effective, and more active. Muslim suicide bombers, for example, take heroic responsibility and I don't want more of them. True-believer cultists often take heroic responsibility, and no, I don't think it's a good thing either. It really does depend on the values involved.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-10-31T01:18:40.060Z · LW(p) · GW(p)
See, you're ignoring the qualifier 'sane' again. I do not consider suicide bombers sane. Suicide bombers are extreme outliers, and they kill negligible numbers of people. Last time I checked, they kill fewer people per year on average than diseases I had never heard of. Quite frankly, they are a non-issue when you actually look at the numbers.
It is not obvious to me that heroic responsibility implies that a thing should be done without cost/benefit analysis or at any cost.
Of course it depends on the value systems involved; I just happen to be fine with most value systems. I'll rephrase "normal value systems" to be more clear: people will on average end up with an average range of value systems. The majority will probably be somewhat acceptable to me, so in aggregate I'm fine with it.
Is there a specific mechanism by which reducing government power would do good? What countries have been improved when that path has been taken? It seems like it would just shift power to even less accountable companies.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-31T01:29:02.462Z · LW(p) · GW(p)
See, you're ignoring the qualifier 'sane' again.
Well, would you like to define it, then? I am not sure I understand your use of this word. In particular, does it involve any specific set of values?
It is not obvious to me that heroic responsibility implies that a thing should be done without cost/benefit analysis or at any cost.
Things done on the basis of cost-benefit analysis are just rational things to do. The "heroic" part must stand for something, no?
I just happen to be fine with most value systems.
Ahem. Most out of which set? Are there temporal or geographical limits?
Is there a specific mechanism by which reducing government power would do good?
That's a complicated discussion that should start with what is meant by "good" (we're back to value systems again), maybe we should take it up another time...
Replies from: Jackercrack, gjm, Jackercrack↑ comment by Jackercrack · 2014-10-31T10:18:55.217Z · LW(p) · GW(p)
I'll put this in a separate post because it is not to do with heroic responsibility and it has been bugging me. What evidence do you have that your favoured idea of reducing political power does what you want it to do? Are there states which have switched to this method and benefited? Are there countries that have done this and what happened to them? Why do you believe what you believe?
Replies from: Lumifer↑ comment by Lumifer · 2014-10-31T15:41:49.033Z · LW(p) · GW(p)
Well, before we wade into mindkilling territory, let me set the stage and we'll see if you find the framework reasonable.
Government power is multidimensional. It's very common to wish for more government power in one area but less in another. Therefore government power in aggregate is a very crude metric. However, if you imagine government power as an n-dimensional body in a high-dimensional space, you can think of that body's volume as total government power, which gives you a handle on what aggregate power means.
Government power, generally speaking, has costs and benefits. Few people prefer either of the two endpoints -- complete totalitarianism or stateless anarchy. Most arguments are about which trade-offs are advantageous and about where the optimal point on the axis is located. To talk about optimality you need a yardstick. That yardstick is people's value system. Since people have different value systems, different people will prefer different optimal points. If you consider the whole population you can (theoretically) build a preference distribution and interpret one of its centrality measures (e.g. mean, median, or mode) as the "optimal" optimal point, but that needs additional assumptions and gets rather convoluted rather fast.
There are multiple complicating factors in play here. Let me briefly list two.
First, the population's preferences do not arise spontaneously in a pure and sincere manner. They are a function of local culture and the current memeplex, for example (see the Overton window), and are rather easily manipulated. Manipulating the political sentiments of the population is a time-honored and commonplace activity, you can assume by default that it is happening. There are multiple forces attempting the manipulation, of course, with different goals, so the balance is fluid and uncertain. Consider the ideas of "manufacturing consent" or the concept of "engines of consent" -- these ideas were put forward by such diverse people as, say, Chomsky and neoreactionaries.
Second, the government, as an organization, has its own incentives, desires, and goals. The primary one is to survive, then to grow, which generally means becoming more powerful. Governments rarely contract (willingly); most of the time they expand. This means that without a countervailing force governments will "naturally" grow too big and too powerful, past that optimal point mentioned above. Historically that has been dealt with by military conquests, revolutions, and internal coups, but the world has been quite stable lately...
I'll stop before this becomes a wall of text, but does all of the above look reasonable to you?
Replies from: Jackercrack↑ comment by Jackercrack · 2014-10-31T16:32:01.477Z · LW(p) · GW(p)
All of it looks reasonable to me apart from the last paragraph. I can see times when governments do willingly contract. There are often candidates who campaign on a platform of tax cuts, the UK had one in power from 1979-1990 and the US had one in power from 2001-2009.
Tax cuts necessarily require eventual reductions in government spending and thus the power of government, agreed?
Replies from: Nornagest, V_V, Lumifer↑ comment by Nornagest · 2014-10-31T16:42:15.153Z · LW(p) · GW(p)
Tax cuts necessarily require eventual reductions in government spending and thus the power of government, agreed?
If they're sustained long enough, yeah. But a state has more extensive borrowing powers than an individual does, and an administration so inclined can use those powers to spend beyond its means for rather a long time -- certainly longer than the term in office of a politician who came to power on a promise of tax cuts. The US federal budget has been growing for a long time, including over the 2001-2009 period, and the growth under low-tax regimes has been paid for by deficit spending.
(Though you'd really want to be looking at federal spending as a percentage of GDP. There seems to be some disagreement over the secular trend there, but the sources I've found agree that the trend 2001-2009 was positive.)
Replies from: Jackercrack, Lumifer↑ comment by Jackercrack · 2014-10-31T17:20:03.508Z · LW(p) · GW(p)
Yes, I was going to comment on how a clever politician could spend during their own term to intentionally screw over the next party to take power, but I wanted to avoid the possible political argument that could ensue.
↑ comment by V_V · 2014-10-31T17:31:04.538Z · LW(p) · GW(p)
Tax cuts necessarily require eventual reductions in government spending and thus the power of government, agreed?
Even if the tax cuts are funded by reductions in government spending, why would that imply a reduction of government power?
Replies from: Jackercrack↑ comment by Jackercrack · 2014-10-31T17:52:02.851Z · LW(p) · GW(p)
They don't necessarily have to, but generally do. For instance during austerity measures spending is generally reduced in most areas. Police forces have less funding and thus lose the ability to have as great an effect on an area; that is, they have less power. Unless you're talking about power as a state of laws instead of a state of what is physically done to people?
Replies from: Lumifer↑ comment by Lumifer · 2014-10-31T18:55:05.660Z · LW(p) · GW(p)
For instance during austerity measures spending is generally reduced in most areas.
Do you think UK had an austerity period recently?
Replies from: Jackercrack↑ comment by Jackercrack · 2014-10-31T21:10:17.990Z · LW(p) · GW(p)
Well, yes, it was all over the news. This feels like a trick question. Are you about to tell me that spending went up during the recession or something?
Replies from: Lumifer↑ comment by Lumifer · 2014-10-31T21:51:17.330Z · LW(p) · GW(p)
You have good instincts :-) Yes, this was a trap: behold.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-01T00:10:40.409Z · LW(p) · GW(p)
Then what was all that stuff on the news about cutting government jobs, trying desperately to ensure frontline services weren't affected, and so on about?
Edit: I knew it! No wonder I felt so confused. It would seem the reduction in spending just took a while to come into effect. Take a look at the years after 2011 that your chart is missing. Unfortunately it's not adjusted for inflation, but you still get the idea. If you change the category to 'protection' and the subcategory to 'police', 'prisons' or 'law courts', you can see the reduction in police funding over the course of the recession.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-01T19:35:55.353Z · LW(p) · GW(p)
Take a look at the years after 2011 that your chart is missing.
So, my trap backfired? Ouch. :-( I guess I should be more careful about where I dig them :-) But I shall persevere anyway! :-D
First, let me point out that UK public spending contracted for a single year (2013), and 2014 is already projected to top all previous years. That's not a meaningful contraction.
Second, we are talking about the power of the government. Did you feel this power lessened in some way around 2013? Sure, some programs were cut or didn't grow as fast as some people wanted, but is there any discernible way in which the government was weaker in 2013 than it was in 2012?
Replies from: EHeller↑ comment by EHeller · 2014-11-01T19:58:56.058Z · LW(p) · GW(p)
Second, we are talking about the power of the government. Did you feel this power lessened in some way around 2013?
Fewer police on the street, for one. I've seen declining numbers of officers in my visits to the UK since probably around late 2010.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-01T20:14:35.180Z · LW(p) · GW(p)
That's true, it seems in England and Wales the number of police officers dropped by about 10% since the peak of 2009 (source).
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-01T21:53:11.854Z · LW(p) · GW(p)
Right, it's time we got back on track. Now that we're using the same definition of power, we've come to the conclusion that a reduction in tax revenues can reduce the physical projection of power but is unlikely to remove the laws that determine the maximum level of power that is legally allowed to be projected.
I believe you were talking about optimal levels of power when compared to growth?
Replies from: Lumifer↑ comment by Lumifer · 2014-11-01T22:42:43.678Z · LW(p) · GW(p)
I believe you were talking about optimal levels of power when compared to growth?
Not at all. I was talking about optimal levels of power from the point of view of my system of values.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-01T22:54:05.948Z · LW(p) · GW(p)
Right, well, would you please continue? I believe the question that started all this off was how you know said theory corresponds to reality.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-02T00:25:49.336Z · LW(p) · GW(p)
Which particular theory? You asked why I want to reduce the power of the government and what that means. I tried to answer to the best of my ability, but there is no falsifiable theory about my values. They are what they are.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-02T09:27:30.182Z · LW(p) · GW(p)
A theory of government is not a terminal value, it is an instrumental one. You believe that that particular way of government will make people happy/autonomous/free/healthy/whatever your value system is. What is lacking is evidence that this particular government actually achieves those aims. It's a reasonable a priori argument, but so are dozens of other arguments for other governments. We need to distinguish which reality we are actually living in. By what metric can your goals be measured, and where would you expect them to be highest? Are there countries/states trying this, and what is the effect? Are there countries doing the exact opposite, and what would you expect to be the result of that? Your belief must be falsifiable or else it is permeable to flour and meaningless. Stage a full crisis of faith if you have to. No retreating into a separate magisterium: why do you believe what you believe?
Replies from: Lumifer↑ comment by Lumifer · 2014-11-02T19:59:52.999Z · LW(p) · GW(p)
What is lacking is evidence that this particular government actually achieves those aims.
Which "this particular government"? I don't think I'm advocating any specific government. May I point you here?
Your belief must be falsifiable
My preferences neither are nor need to be falsifiable.
why do you believe what you believe?
Why do I believe what?
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-03T00:41:37.717Z · LW(p) · GW(p)
That large government is worse than small government.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-03T00:53:09.246Z · LW(p) · GW(p)
Because a larger government takes more of my money, because it limits me in certain areas where I would prefer not to be limited, and because it has scarier and more probable failure modes.
Replies from: Jackercrack, TheAncientGeek↑ comment by Jackercrack · 2014-11-03T01:15:46.975Z · LW(p) · GW(p)
It finally makes sense: you're looking at it from a personal point of view. Consider it from the view of the average wellbeing of the entire populace. Zoom out to consider the entire country, the full system of which the government is just a small part. A larger government has more probable failure modes, but a small one simply outsources its failure modes to companies and extremely rich individuals. Power abhors a vacuum.
You and I are not large enough or typical enough for considerations about our optimality to enter into the running of a country. People are eternal and essentially unchanging, the average level of humanity rises but slowly. The only realistic way to improve their lot is to change the situation in which the decision is made. The structure of the system they flow through is too important to be left to market forces and random chance. I don't care much if it inconveniences me so long as on average the lot of humanity is improved.
Edit: I fully expect you to disagree with me, but at least that's one mystery solved.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-03T03:57:04.364Z · LW(p) · GW(p)
Consider it from the view of the average wellbeing of the entire populace.
Sure. A larger government takes more of their money, limits them in areas where they would prefer to be not limited, and has scarier and more probable failure modes.
a small one simply outsources its failure modes to companies and extremely rich individuals.
No, I don't think so, not the really scary failure modes. Things like Pol Pot's Kampuchea cannot be outsourced.
People are eternal and essentially unchanging, the average level of humanity rises but slowly.
The second half of that sentence contradicts the first half.
The structure of the system they flow through is too important to be left to market forces and random chance.
I don't know of anyone who proposes random chance as a guiding political principle. As to the market forces, well, they provide the best economy human societies have ever seen. A lot of people thought they could do better -- they all turned out to be wrong.
so long as on average the lot of humanity is improved.
You're still missing a minor part -- showing that a large government does indeed do that better compared to a smaller one. By the way, are you saying that the current government size and power (say, typical for EU countries) are optimal? too small?
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-05T11:59:39.285Z · LW(p) · GW(p)
You misunderstand me. I am not saying that a large government is definitely better; I'm simply playing devil's advocate. I find it worrying that you can't find any examples of good things in larger government though. Do socialised single payer healthcare, lower crime rates due to more police, better roads, better infrastructure, environmental protections and higher quality schools not count as benefits? These are all things that require taxes and can be improved with greater spending on them.
Edit: In retrospect maybe this is how a changed humanity looks already. That seems to fit the reality better.
Replies from: Lumifer, Azathoth123↑ comment by Lumifer · 2014-11-05T16:16:21.713Z · LW(p) · GW(p)
I find it worrying that you can't find any examples of good things in larger government
Of course I can. Recall me talking about the multidimensionality of government power and how most people (including me) would prefer more in one dimension but less in another. On the whole I would prefer a weaker government, but not necessarily in every single aspect.
However I would stress once again the cost-benefit balance. More is only better if you're below the optimal point; go above it and more will be worse.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-05T18:36:46.722Z · LW(p) · GW(p)
And neither of us have the evidence required to find this point (if indeed it is just one point instead of several optimal peaks). I'm tapping out. If you have any closing points I'll try to take them into account in my thinking. Regardless, it seems like we agree on more than we disagree on.
↑ comment by Azathoth123 · 2014-11-09T07:29:18.414Z · LW(p) · GW(p)
Do socialised single payer healthcare, lower crime rates due to more police, better roads, better infrastructure, environmental protections and higher quality schools not count as benefits?
Some of these things are, some aren't. Let's go through the list:
single payer healthcare,
In the countries I'm most familiar with the socialized health care system is something you want to avoid if you have an alternative.
lower crime rates due to more police, better roads, better infrastructure,
Ok, those are examples. Even if the crime rates that make more police necessary are due to other stupid government policies.
environmental protections
Well these days a lot of environmental protection laws are insane, as in we must divert water from the farms because if we don't the delta smelt population might be reduced (this is California's actual water policy). Other times they're just excuses for extreme NIMBYism.
higher quality schools
Well, in the US the rule of thumb is that the more control government exercises over schools the worse they are.
Replies from: TheAncientGeek, wedrifid↑ comment by TheAncientGeek · 2014-11-09T11:45:00.772Z · LW(p) · GW(p)
In the countries I'm most familiar with the socialized health care system is something you want to avoid if you have an alternative.
Kind of true-ish, but not in a way that supports your point. Public healthcare systems tend to be run on something of a shoe string, so an individual who can easily afford private treatment is often better off with that option. However, that does not translate to the total population or the average person. Analogously, the fact that travelling in a chauffeured limo is more pleasant than travelling on a train, for those who can afford it, is no justification for dismantling public transportation systems. And it's not either/or, anyway.
other stupid government policies.
Ok, stupid government is bad. But what's the relationship between large government and stupid government? Large government at least has the capacity to hire expert consultants and implement checks and balances. And there are plenty of examples of autocratic rulers who were batshit crazy.
Well these days a lot of environmental protection laws are insane,
In the US? Doesn't generalize.
Well, in the US the rule of thumb is that the more control government exercises over schools the worse they are.
Ditto.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-12T05:45:04.935Z · LW(p) · GW(p)
Public healthcare systems tend to be run on something of a shoe string
Um. Do you mean the money allocated in the budget for the healthcare system or the money that actually trickles down to the actual doctors? Because the former tends to be larger than the latter.
Replies from: TheAncientGeek, army1987↑ comment by TheAncientGeek · 2014-11-12T09:59:18.674Z · LW(p) · GW(p)
I believe that private healthcare deliverers have nonzero administrative costs as well.
http://epianalysis.wordpress.com/2012/07/18/usversuseurope/
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-15T05:36:54.628Z · LW(p) · GW(p)
Yes, but they actually have incentives to keep those costs down.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-11-15T10:40:08.531Z · LW(p) · GW(p)
Taxpayers don't like paying tax, which is the incentive to keep down costs in a public healthcare system, and it works because they are all cheaper than the US system.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-18T05:15:58.511Z · LW(p) · GW(p)
Taxpayers don't like paying tax, which is the incentive to keep down costs in a public healthcare system,
To the extent this incentive exists, it's fulfilled by degrading quality rather than improving efficiency.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-11-18T11:32:58.246Z · LW(p) · GW(p)
Taxpayers don't like poor quality healthcare either. And degraded from what? It's not like there was ever a golden age where the average person had top quality and affordable healthcare, and then someone came along and spoiled everything. Public healthcare is like public transport: it's not supposed to be the best in money-is-no-object terms, it is supposed to be better than nothing.
And let's remind ourselves that, factually, a number of public healthcare systems deliver equal or better results than the US system for less money.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-21T07:34:08.385Z · LW(p) · GW(p)
Taxpayers don't like poor quality healthcare either.
But they have to solve a rational ignorance and a collective action problem to do something about it.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-11-21T13:28:34.357Z · LW(p) · GW(p)
But they have to solve a rational ignorance and a collective action problem to do something about it.
And let's remind ourselves, again, that, factually, a number of public healthcare systems deliver equal or better results than the US system for less money. So it looks like they have.
↑ comment by A1987dM (army1987) · 2014-11-15T12:35:17.481Z · LW(p) · GW(p)
Even the former is much smaller than what you guys pay in the US.
↑ comment by wedrifid · 2014-11-13T07:57:00.797Z · LW(p) · GW(p)
In the countries I'm most familiar with the socialized health care system is something you want to avoid if you have an alternative.
Such things are referred to as 'safety nets' for a reason. Falling from the tightrope still isn't advised.
↑ comment by TheAncientGeek · 2014-11-09T11:31:09.881Z · LW(p) · GW(p)
Larger government gives more and invests more...governments don't just burn money.
Large government doesn't automatically mean less freedom...the average person in mediaeval Europe was not particularly free.
Large government can rescue large corporations when they fail....
Replies from: Lumifer↑ comment by Lumifer · 2014-11-10T03:05:42.605Z · LW(p) · GW(p)
Large government doesn't automatically mean less freedom...the average person in mediaeval Europe was not particularly free.
You seem to be well on the road towards the "if you want a small government why don't you GO AND LIVE IN SOMALIA" argument....
Large government can rescue large corporations when they fail
And why in the world would that be a good thing?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-11-10T14:08:24.603Z · LW(p) · GW(p)
SOMALIA
Why not answer the points I actually made?
Large government can rescue large corporations when they fail
And why in the world would that be a good thing?
Because ineffective corporations continuing to exist is less bad in terms of human suffering than major economic collapse.
Replies from: MarkusRamikin, Lumifer↑ comment by MarkusRamikin · 2014-11-10T15:30:13.425Z · LW(p) · GW(p)
Because ineffective corporations continuing to exist is less bad in terms of human suffering than major economic collapse.
Raising the spectre of "major economic collapse" at the notion that big corporations might have to operate under the same market conditions and risks as everyone else seems like an argument straight from a corporate lobbyist.
Don't government rescues reward poor management and incentivise excessive risk, thus leading to economic troubles which necessitate them in the first place? It is not at all clear to me that the hypothetical world in which bailouts don't happen and corporations know it and act accordingly contains more suffering.
Especially after you consider the costs imposed on the competent to rescue the failures, and the cost to the economy from uneven competition (between those who can afford to take bigger risks, or simply manage themselves sloppier, knowing that they are "too big to fail", and those who cannot).
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-11-11T14:24:40.764Z · LW(p) · GW(p)
Raising the spectre of "major economic collapse" at the notion that big corporations might have to operate under the same market conditions and risks as everyone else seems like an argument straight from a corporate lobbyist.
Calling it a spectre makes it sound mythical, but it has been known to happen. The fallacy lies in not having sufficient evidence it will happen in any particular case.
Don't government rescues reward poor management and incentivise excessive risk, thus leading to economic troubles which necessitate them in the first place?
You can reduce risky behaviour by regulation. Bailouts without regulation are the worst possible world.
Especially after you consider the costs imposed on the competent to rescue the failures, and the cost to the economy from uneven competition (between those who can afford to take bigger risks, or simply manage themselves sloppier, knowing that they are "too big to fail", and those who cannot).
Bailouts involve disutility. My argument is that by spreading the costs over more people and more time, they entail less suffering.
↑ comment by Lumifer · 2014-11-10T15:57:50.070Z · LW(p) · GW(p)
Why not answer the points I actually made?
Because I didn't see a point, just a bunch of straw.
Because ineffective corporations continuing to exist is less bad in terms of human suffering than major economic collapse.
First, I don't think that is true. Second, there was a bit of sleight of hand -- you replaced the failure of large corporations with "major economic collapse". That's, um, not exactly the same thing :-/
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-11-11T13:00:57.025Z · LW(p) · GW(p)
Why not answer the points I actually made?
Because I didn't see a point, just a bunch of straw.
Feel free to specify the non-straw versions.
Because ineffective corporations continuing to exist is less bad in terms of human suffering than major economic collapse.
First, I don't think that is true.
Feel free to support that claim with an argument. There are good reasons for thinking that the collapse of a large financial institution, in particular, can cause a domino effect. It's happened before. And it's hardly debatable that recessions cause suffering: the suicide rate goes up, for one thing.
Second, there was a bit of sleight of hand -- you replaced the failure of large corporations with "major economic collapse". That's, um, not exactly the same thing :-/
No, and it's not completely disjoint, either.
↑ comment by Lumifer · 2014-10-31T16:35:45.440Z · LW(p) · GW(p)
So, how much did the government actually contract under Maggie or under Ronnie? :-) Did that contraction stick?
Tax cuts necessarily require eventual reductions in government spending and thus the power of government, agreed?
Oh, not at all. You just borrow more.
Besides, spending is only part of the power of the government. Consider, e.g., extending the reach of the laws, which does not necessarily require any budgetary increases.
Replies from: Strange7, Jackercrack↑ comment by Strange7 · 2014-11-02T19:07:21.148Z · LW(p) · GW(p)
You just borrow more.
And/or authorize the police to steal.
http://en.wikipedia.org/wiki/Asset_forfeiture
Replies from: Lumifer↑ comment by Jackercrack · 2014-10-31T21:14:52.721Z · LW(p) · GW(p)
There does come a point when the bill must be paid though, even if it is over a long time. Even if it's over 40 years as you pay back the interest on the debt.
Before we go further, I think we need to be sure we're talking about the same thing when we say power. See, when you said a reduction in government power, what I heard was essentially less money, smaller government. I'm getting the feeling that that is not entirely what you meant; could you clarify?
Replies from: Lumifer↑ comment by Lumifer · 2014-11-01T19:31:04.979Z · LW(p) · GW(p)
when you said a reduction in government power, what I heard was essentially less money, smaller government.
That too, but not only that. There is nothing tricky here, I'm using the word "power" in its straightforward meaning. Power includes money, but it also includes things like the monopoly on (legal) violence, the ability to create and enforce laws and regulations, give or withhold permission to do something (e.g. occupational licensing), etc. etc.
↑ comment by gjm · 2014-10-31T02:19:11.367Z · LW(p) · GW(p)
[...] just rational things to do. The "heroic" part must stand for something, no?
I had always assumed it was intended to stand for doing things that are rational even if they're really hard or scary and unanticipated.
If you do a careful cost-benefit calculation and conclude (depending on your values and beliefs) that ...
- ... the biggest risk facing humanity in the nearish future is that of a runaway AI doing things we really don't want but are powerless to stop, and preventing this requires serious hard work in mathematics and philosophy and engineering that no one seems to be doing; or
- ... most of the world's population is going to spend eternity in unimaginable torment because they don't know how to please the gods; or
- ... there are billions of people much, much worse off than you, and giving away almost everything you have and almost everything you earn will make the world a substantially better place than keeping it in order to have a nicer house, better food, more confidence of not starving when you get old, etc.
and if you are a normal person then you shrug your shoulders, say "damn, that's too bad", and get on with your life; but if you are infused with a sense of heroic responsibility then you devote your life to researching AI safety (and propagandizing to get other people thinking about it too), or become a missionary, or live in poverty while doing lucrative but miserable work in order to save lives in Africa.
If it turns out that you picked as good a cause as you think you did, and if you do your heroic job well and get lucky, then you can end up transforming the world for the better. If you picked a bad cause (saving Germany from the Jewish menace, let's say) and do your job well and get lucky, you can (deservedly) go down in history as an evil genocidal tyrant and one of the worst people who ever lived. And if you turn out not to have the skill and luck you need, you can waste your life failing to solve the problem you took aim at, and end up neither accomplishing anything of importance nor having a comfortable life.
So there are reasons why most people don't embrace "heroic responsibility". But the premise for the whole thing -- without which there's nothing to be heroically responsible about -- is, it seems to me, that you really think that this thing needs doing and you need to do it and that's what's best for the world.
("Heroic responsibility" isn't only about tasks so big that they consume your entire life. You can take heroic responsibility for smaller-scale things too, if they present themselves and seem important enough. But, again, I think what makes them opportunities for heroic responsibility is that combination of importantly worth doing and really intimidating.)
Replies from: Jiro, Lumifer↑ comment by Jiro · 2014-10-31T09:10:22.973Z · LW(p) · GW(p)
and if you are a normal person then you shrug your shoulders, say "damn, that's too bad", and get on with your life; but if you are infused with a sense of heroic responsibility then you devote your life to...
If you're a normal person, the fact that you shrug your shoulders when faced with such things is beneficial: shrugging your shoulders instead of being heroic when faced with the destruction of civilization serves as immunity against crazy ideas, and because you're running on corrupted hardware, you probably aren't as good at figuring out how to avoid the destruction of civilization as you think.
Just saying "I'm not going to shrug my shoulders; I'm going to be heroic instead" removes checks and balances that are themselves irrational but protect you against other kinds of bad rationality, leaving you worse off overall.
Replies from: gjm↑ comment by Lumifer · 2014-10-31T03:27:48.685Z · LW(p) · GW(p)
I think what makes them opportunities for heroic responsibility is that combination of importantly worth doing and really intimidating.
Well, here is a counter-example. I can't imagine that was too intimidating :-/
↑ comment by Jackercrack · 2014-10-31T10:14:49.327Z · LW(p) · GW(p)
Okay, my definition of sane is essentially: rational enough to take actions that generally work towards your goals and to create goals that are effective ways to satisfy your terminal values. It's a rather high bar. Suicide bombers do not achieve their goals; cultists have had their cognitive machinery hijacked to serve someone else's goals instead of their own. The reason I think this would be okay in aggregate is the psychological unity of mankind: we're mostly pretty similar, and there are remarkably low numbers of evil mutants. Being pretty similar, most people's goals would be acceptable to me. I disagree with some things China does, for example, but I find their overwhelming competence makes up for it in the aggregate wellbeing of their populace.
gjm gives some good examples of heroic responsibility, but I understand the term slightly differently. Heroic responsibility is to have found a thing you have decided is important, generally by reasoned cost/benefit analysis, and then to take responsibility for getting it done regardless of what life throws your way. It may be an easy task or a hard task, but it must be an important task. The basic idea is that you don't stop when you feel like you tried: if your first attempt doesn't work, you do more research and come up with a new strategy. If your second plan doesn't work because of unfair forces, you take those unfair forces into account and come up with another plan. If that still doesn't work you try harder again, and you keep going until you either achieve the goal, it becomes clear that you cannot achieve the goal, or the amount of effort you would have to put into the problem becomes significantly greater than the size of the benefit you expect.
For example, the benefit of FAI is humanity's continued existence; there is essentially no amount of effort one person could put in that could be too much. To use the example of Eliezer in this thread, the benefit of a person being happier and more effective for months each year is also large, much larger than the time it takes to research SAD and come up with some creative solutions.
Replies from: Azathoth123, Lumifer↑ comment by Azathoth123 · 2014-11-04T05:25:15.462Z · LW(p) · GW(p)
Suicide bombers do not achieve their goals,
Really? Last time I checked, there is now a Caliphate in what is still nominally Iraq and Syria.
Replies from: Lumifer, Jackercrack↑ comment by Lumifer · 2014-11-04T05:33:37.601Z · LW(p) · GW(p)
there is now a Caliphate
Not quite. A collection of semi-local militias who managed to piss off just about everyone does not a caliphate make.
P.S. Though as a comment on the grandparent post, some suicide bombers certainly achieve their goals (and that's even ignoring the obvious goal to die a martyr for the cause).
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-12T05:38:41.195Z · LW(p) · GW(p)
A collection of semi-local militias who managed to piss off just about everyone
But not enough for "everyone" to mount an effective campaign to destroy them.
↑ comment by Jackercrack · 2014-11-04T22:58:17.270Z · LW(p) · GW(p)
Achieved almost entirely by fighting through normal means, guns and such, so I hardly see the relevance. Suicide bombing kills a vanishingly small number of people; IEDs are an actual threat.
Their original goal as rebels was to remove a central government, and now they're fighting a war of genocide against other rebel factions. I wonder how they would have responded if you'd told them at the start that a short while later they'd be slaughtering fellow Muslims in direct opposition to their holy book.
↑ comment by Lumifer · 2014-10-31T15:05:19.690Z · LW(p) · GW(p)
rational enough to take actions that generally work towards your goals and to create goals that are effective ways to satisfy your terminal values. It's a rather high bar.
The definition you give sounds like a pretty low bar to me. The fact that you're calling the bar high means that there are implied but unstated things around this definition -- can you be more explicit? "Generally work towards your goals" looks to me like what 90% of the population is doing...
but I understand the term slightly differently
Is it basically persistence/stubbornness/bloody-mindedness, then?
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-01T01:42:39.436Z · LW(p) · GW(p)
Persistence is a good word for it, plus a sense of making it work even if the world is unfair and the odds are stacked against you. No sense of having fought the good fight and lost: if you failed, and there were things you could have done beforehand, general strategies that would have been effective even if you did not know what was coming, then that is your own responsibility. It is not, I think, a particularly healthy way of looking at most things. It can only really be useful as a mindset for things that really matter.
can you be more explicit?
Ah, sorry, I insufficiently unpacked "effective ways to satisfy terminal values". The hidden complexity was in "effectively". By effectively I meant in an efficient and >75% optimal manner. Many people do not know their own terminal values. Most people also don't know what makes a human happy, which is often different from what a human wants. Of those that do know their values, few have effective plans to satisfy them. Looking back on it now, there is quite a large inferential distance behind the innocuous-looking word "sane". I shall try to improve on that in the future.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-01T19:37:51.858Z · LW(p) · GW(p)
Many people do not know their own terminal values.
Is there an implication that someone or something does know? That strikes me as awfully paternalistic.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-01T21:50:01.010Z · LW(p) · GW(p)
It's a statement of fact, not a political agenda. Neuroscientists know more about people's brains than normal people do, as a result of spending years and decades studying the subject.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-01T22:41:33.965Z · LW(p) · GW(p)
Huh? Neuroscientists know my terminal values better than I do because they studied brains?
Sorry, that's nonsense.
Replies from: Jackercrack↑ comment by Jackercrack · 2014-11-01T22:52:36.907Z · LW(p) · GW(p)
Not yours specifically, but the general average across humanity. lukeprog wrote up a good summary of the factors correlated with happiness, which you've probably read, as well as an attempt to discern the causes. Not that happiness is the be-all and end-all of terminal values, but it certainly shows how little the average person knows about what they would actually be happy with vs what they think they'd be happy with. I believe that small sub-sequence on the science of winning at life contains far more than the average person knows on the subject, or else people wouldn't give such terrible advice.
Replies from: Lumifer↑ comment by Lumifer · 2014-11-02T00:21:51.962Z · LW(p) · GW(p)
the general average across humanity.
Aren't you making the assumption that the average applies to everyone? It does not. There is a rather wide spread and pretending that a single average value represents it well enough is unwarranted.
There are certainly things biologically hardwired into human brains, but not all of them are terminal values, and for those that are (e.g. survival) you don't need a neurobiologist to point that out. Frankly, I am at a loss to see what neurobiologists can say about terminal values. It's like asking Intel chip engineers what a piece of software really does.
how little the average person knows about what they would actually be happy with
I don't know about that. Do you have evidence? If a person's ideas about her happiness diverge from the average ones, I would by default assume that she's different from the average, not that she is wrong.
comment by Jackercrack · 2014-10-29T23:34:17.215Z · LW(p) · GW(p)
I think heroic responsibility is essentially a response to being in a situation where not enough people are both competent at and willing to make changes to improve things. The authority figures are mad or untrustworthy, so a person has to figure out their own way to make the right things happen and put effective methods in place. It is particularly true of HPMOR where Harry plays the role of Only Sane Man. So far as I can tell, we're in a similar situation in real life at the minute: we have insufficient highly sane people taking heroic responsibility. If we had enough sane people taking heroic responsibility, things would look rather different and likely a lot better run. It would be easy to be a happy gear, if you knew the machine was properly designed, the goal was a good one and the plan was likely to succeed.
There is clearly some ratio of heroically responsible people to useful gears that works best for each situation, some optimal equilibrium. Too many people trying to unilaterally change things in different directions and you get chaos and infighting. Too many useful gears and you have a wonderfully maintained, smooth-running machine working at 40% of maximum efficiency towards a goal that doesn't make much sense. I propose heroic responsibility be fit into a larger framework: that of filling the role that is required. I can't think up a snappy name for it, but you essentially mould your actions into the shape that best maximises your group's outcome given your abilities. If there is already someone doing the job of heroic responsibility and doing it well, you aim for the next empty or poorly-done role down, where you do the most good. If all the important positions are filled by competent people or by people more competent than you, then be a gear.
The problem lies in knowing whether you actually could do better in someone's situation with the resources available to them. The ethical injunctions seem to say that humans are rather bad at figuring out when they could do better than those in power. There is only an injunction against cheating to gain power, though, not against legitimate means. Still, it's difficult to say whether one is actually more competent, and a large amount of research into the potential role should be done before action is taken. If you can see a decision for the role coming, make a strong prediction of what a good choice would look like, then see what decision the person actually made and predict its effects, and finally get it right with a better batting average than the person in the role, then it's time to go for the position.
Disclaimer: There is a possibility that this theory is an extension of my belief that I should eventually be in charge.
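[Editor's note: the batting-average test above can be made concrete. Below is a minimal sketch, assuming a simple hit-rate scoring rule and hypothetical decision labels; none of the names or numbers come from the thread. The point is only that the comparison requires scoring both your forecasts and the incumbent's actual decisions against the same later-known outcomes, over a sample large enough to rule out luck.]

```python
# Minimal sketch of the "batting average" comparison described above.
# All decision labels and the sample-size threshold are hypothetical.

def batting_average(calls, correct):
    """Fraction of calls that matched what later proved correct."""
    assert len(calls) == len(correct)
    hits = sum(c == k for c, k in zip(calls, correct))
    return hits / len(calls)

# Your advance predictions of the good choice, the incumbent's actual
# decisions, and what hindsight eventually showed to be right.
my_calls        = ["expand", "hold", "cut", "hold"]
incumbent_calls = ["hold",   "hold", "cut", "cut"]
correct_calls   = ["expand", "hold", "cut", "hold"]

mine      = batting_average(my_calls, correct_calls)
incumbent = batting_average(incumbent_calls, correct_calls)

# Only consider going for the position if you out-predict the incumbent
# over a meaningful sample, not a handful of lucky calls.
if mine > incumbent and len(correct_calls) >= 20:
    print("Evidence you might do better in the role.")
else:
    print("Stay a gear; keep gathering evidence.")
```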