Serious Stories
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-08T23:49:35.000Z
Every Utopia ever constructed—in philosophy, fiction, or religion—has been, to one degree or another, a place where you wouldn't actually want to live. I am not alone in this important observation: George Orwell said much the same thing in "Why Socialists Don't Believe In Fun", and I expect that many others said it earlier.
If you read books on How To Write—and there are a lot of books out there on How To Write, because amazingly a lot of book-writers think they know something about writing—these books will tell you that stories must contain "conflict".
That is, the more lukewarm sort of instructional book will tell you that stories contain "conflict". But some authors speak more plainly.
"Stories are about people's pain." Orson Scott Card.
"Every scene must end in disaster." Jack Bickham.
In the age of my youthful folly, I took for granted that authors were excused from the search for true Eutopia, because if you constructed a Utopia that wasn't flawed... what stories could you write, set there? "Once upon a time they lived happily ever after." What use would it be for a science-fiction author to try to depict a positive Singularity, when a positive Singularity would be...
...the end of all stories?
It seemed like a reasonable framework with which to examine the literary problem of Utopia, but something about that final conclusion produced a quiet, nagging doubt.
At that time I was thinking of an AI as being something like a safe wish-granting genie for the use of individuals. So the conclusion did make a kind of sense. If there was a problem, you would just wish it away, right? Ergo—no stories. So I ignored the quiet, nagging doubt.
Much later, after I concluded that even a safe genie wasn't such a good idea, it also seemed in retrospect that "no stories" could have been a productive indicator. On this particular occasion, "I can't think of a single story I'd want to read about this scenario", might indeed have pointed me toward the reason "I wouldn't want to actually live in this scenario".
So I swallowed my trained-in revulsion of Luddism and theodicy, and at least tried to contemplate the argument:
- A world in which nothing ever goes wrong, or no one ever experiences any pain or sorrow, is a world containing no stories worth reading about.
- A world that you wouldn't want to read about is a world where you wouldn't want to live.
- Into each eudaimonic life a little pain must fall. QED.
In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime. Is there a single story on the list that isn't tragic?
Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise—but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? It's a puzzle either way you look at it.
When I was a child I couldn't write fiction because I wrote things to go well for my characters—just like I wanted things to go well in real life. Which I was cured of by Orson Scott Card: Oh, I said to myself, that's what I've been doing wrong, my characters aren't hurting. Even then, I didn't realize that the microstructure of a plot works the same way—until Jack Bickham said that every scene must end in disaster. Here I'd been trying to set up problems and resolve them, instead of making them worse...
You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria.
In the real world, people can go on living for quite a while without any major disasters, and still seem to do pretty okay. When was the last time you were shot at by assassins? Quite a while, right? Does your life seem emptier for it?
But on the other hand...
For some odd reason, when authors get too old or too successful, they revert to my childhood. Their stories start going right. They stop doing horrible things to their characters, with the result that they start doing horrible things to their readers. It seems to be a regular part of Elder Author Syndrome. Mercedes Lackey, Laurell K. Hamilton, Robert Heinlein, even Orson Scott bloody Card—they all went that way. They forgot how to hurt their characters. I don't know why.
And when you read a story by an Elder Author or a pure novice—a story where things just relentlessly go right one after another—where the main character defeats the supervillain with a snap of the fingers, or even worse, before the final battle, the supervillain gives up and apologizes and then they're friends again—
It's like a fingernail scraping on a blackboard at the base of your spine. If you've never actually read a story like that (or worse, written one) then count yourself lucky.
That fingernail-scraping quality—would it transfer over from the story to real life, if you tried living real life without a single drop of rain?
One answer might be that what a story really needs is not "disaster", or "pain", or even "conflict", but simply striving. That the problem with Mary Sue stories is that there's not enough striving in them, but they wouldn't actually need pain. This might, perhaps, be tested.
An alternative answer might be that this is the transhumanist version of Fun Theory we're talking about. So we can reply, "Modify brains to eliminate that fingernail-scraping feeling", unless there's some justification for keeping it. If the fingernail-scraping feeling is a pointless random bug getting in the way of Utopia, delete it.
Maybe we should. Maybe all the Great Stories are tragedies because... well...
I once read that in the BDSM community, "intense sensation" is a euphemism for pain. Upon reading this, it occurred to me that, the way humans are constructed now, it is just easier to produce pain than pleasure. Though I speak here somewhat outside my experience, I expect that it takes a highly talented and experienced sexual artist working for hours to produce a good feeling as intense as the pain of one strong kick in the testicles—which is doable in seconds by a novice.
Investigating the life of the priest and proto-rationalist Friedrich Spee von Langenfeld, who heard the confessions of accused witches, I looked up some of the instruments that had been used to produce confessions. There is no ordinary way to make a human being feel as good as those instruments would make you hurt. I'm not sure even drugs would do it, though my experience of drugs is as nonexistent as my experience of torture.
There's something imbalanced about that.
Yes, human beings are too optimistic in their planning. If losses weren't more aversive than gains, we'd go broke, the way we're constructed now. The experimental rule is that losing a desideratum—$50, a coffee mug, whatever—hurts between 2 and 2.5 times as much as the equivalent gain.
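To make the scale of that rule concrete (with illustrative numbers, assuming the simplest linear version of the prospect-theory result rather than anything stated here): at a loss-aversion factor of 2.25, an even-odds bet paying +$100 or -$100 feels like

0.5 × 100 - 0.5 × (2.25 × 100) = -62.5,

a clearly bad deal, and the upside would have to exceed $225 before the bet felt worth taking.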
But this is a deeper imbalance than that. The effort-in/intensity-out difference between sex and torture is not a mere factor of 2.
If someone goes in search of sensation—in this world, the way human beings are constructed now—it's not surprising that they should arrive at pains to be mixed into their pleasures as a source of intensity in the combined experience.
If only people were constructed differently, so that you could produce pleasure as intense and in as many different flavors as pain! If only you could, with the same ingenuity and effort as a torturer of the Inquisition, make someone feel as good as the Inquisition's victims felt bad—
But then, what is the analogous pleasure that feels that good? A victim of skillful torture will do anything to stop the pain and anything to prevent it from being repeated. Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it? If people are stronger-willed to bear the pleasure, is it really the same pleasure?
There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically—especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page.
Maybe all the Great Stories are tragedies because happiness can't shout loud enough—to a human reader.
Maybe that's what needs fixing.
And if it were fixed... would there be any use left for pain or sorrow? For even the memory of sadness, if all things were already as good as they could be, and every remediable ill already remedied?
Can you just delete pain outright? Or does removing the old floor of the utility function just create a new floor? Will any pleasure less than 10,000,000 hedons be the new unbearable pain?
Humans, built the way we are now, do seem to have hedonic scaling tendencies. Someone who can remember starving will appreciate a loaf of bread more than someone who's never known anything but cake. This was George Orwell's hypothesis for why Utopia is impossible in literature and reality:
"It would seem that human beings are not able to describe, nor perhaps to imagine, happiness except in terms of contrast... The inability of mankind to imagine happiness except in the form of relief, either from effort or pain, presents Socialists with a serious problem. Dickens can describe a poverty-stricken family tucking into a roast goose, and can make them appear happy; on the other hand, the inhabitants of perfect universes seem to have no spontaneous gaiety and are usually somewhat repulsive into the bargain."
For an expected utility maximizer, rescaling the utility function to add a trillion to all outcomes is meaningless—it's literally the same utility function, as a mathematical object. A utility function describes the relative intervals between outcomes; that's what it is, mathematically speaking.
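To spell out the standard argument (in illustrative notation, not from the original post): a positive affine rescaling

U'(x) = a·U(x) + b, with a > 0,

changes no decision, because for any two gambles A and B,

E[U'(A)] - E[U'(B)] = a·(E[U(A)] - E[U(B)]),

which always has the same sign as the original difference. Adding a trillion is just the case a = 1, b = 10^12, so every comparison, and therefore every choice, comes out exactly the same.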
But the human brain has distinct neural circuits for positive feedback and negative feedback, and different varieties of positive and negative feedback. There are people today who "suffer" from congenital analgesia—a total absence of pain. I never heard that insufficient pleasure becomes intolerable to them.
People with congenital analgesia do have to inspect themselves carefully and frequently to see if they've cut themselves or burned a finger. Pain serves a purpose in the human mind design...
But that does not show there's no alternative which could serve the same purpose. Could you delete pain and replace it with an urge not to do certain things that lacked the intolerable subjective quality of pain? I do not know all the Law that governs here, but I'd have to guess that yes, you could; you could replace that side of yourself with something more akin to an expected utility maximizer.
Could you delete the human tendency to scale pleasures—delete the accommodation, so that each new roast goose is as delightful as the last? I would guess that you could. This verges perilously close to deleting Boredom, which is right up there with Sympathy as an absolute indispensable... but to say that an old solution remains as pleasurable, is not to say that you will lose the urge to seek new and better solutions.
Can you make every roast goose as pleasurable as it would be in contrast to starvation, without ever having starved?
Can you prevent the pain of a dust speck irritating your eye from being the new torture, if you've literally never experienced anything worse than a dust speck irritating your eye?
Such questions begin to exceed my grasp of the Law, but I would guess that the answer is: yes, it can be done. It is my experience in such matters that once you do learn the Law, you can usually see how to do weird-seeming things.
So far as I know or can guess, David Pearce (The Hedonistic Imperative) is very probably right about the feasibility part, when he says:
"Nanotechnology and genetic engineering will abolish suffering in all sentient life. The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and morally urgent. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture—a motivational system based on heritable gradients of bliss. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. It is predicted that the world's last unpleasant experience will be a precisely dateable event."
Is that... what we want?
To just wipe away the last tear, and be done?
Is there any good reason not to, except status quo bias and a handful of worn rationalizations?
What would be the alternative? Or alternatives?
To leave things as they are? Of course not. No God designed this world; we have no reason to think it exactly optimal on any dimension. If this world does not contain too much pain, then it must not contain enough, and the latter seems unlikely.
But perhaps...
You could cut out just the intolerable parts of pain?
Get rid of the Inquisition. Keep the sort of pain that tells you not to stick your finger in the fire, or the pain that tells you that you shouldn't have put your friend's finger in the fire, or even the pain of breaking up with a lover.
Try to get rid of the sort of pain that grinds down and destroys a mind. Or configure minds to be harder to damage.
You could have a world where there were broken legs, or even broken hearts, but no broken people. No child sexual abuse that turns out more abusers. No people ground down by weariness and drudging minor inconvenience to the point where they contemplate suicide. No random meaningless endless sorrows like starvation or AIDS.
And if even a broken leg still seems too scary—
Would we be less frightened of pain, if we were stronger, if our daily lives did not already exhaust so much of our reserves?
So that would be one alternative to Pearce's world—if there are yet other alternatives, I haven't thought them through in any detail.
The path of courage, you might call it—the idea being that if you eliminate the destroying kind of pain and strengthen the people, then what's left shouldn't be that scary.
A world where there is sorrow, but not massive systematic pointless sorrow, like we see on the evening news. A world where pain, if it is not eliminated, at least does not overbalance pleasure. You could write stories about that world, and they could read our stories.
I do tend to be rather conservative around the notion of deleting large parts of human nature. I'm not sure how many major chunks you can delete until that balanced, conflicting, dynamic structure collapses into something simpler, like an expected pleasure maximizer.
And so I do admit that it is the path of courage that appeals to me.
Then again, I haven't lived it both ways.
Maybe I'm just afraid of a world as different as Analgesia—wouldn't that be an ironic reason to walk "the path of courage"?
Maybe the path of courage just seems like the smaller change—maybe I just have trouble empathizing over a larger gap.
But "change" is a moving target.
If a human child grew up in a less painful world—if they had never lived in a world of AIDS or cancer or slavery, and so did not know these things as evils that had been triumphantly eliminated—and so did not feel that they were "already done" or that the world was "already changed enough"...
Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?
And then what? Is there a point where Romeo and Juliet just seems less and less relevant, more and more a relic of some distant forgotten world? Does there come some point in the transhuman journey where the whole business of the negative reinforcement circuitry can't possibly seem like anything except a pointless hangover to wake up from?
And if so, is there any point in delaying that last step? Or should we just throw away our fears and... throw away our fears?
I don't know.
105 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Divia2 · 2009-01-09T00:30:12.000Z
Have you read The Worthing Saga by Orson Scott Card? It's one of my favorite books of his and deals with a world in which a few humans with special powers act as gods and watch over everyone, not allowing any pain. One day these "gods" decide that such a world has no stories, and they stop acting as gods, allowing the people to experience pain. (The book contains many of Orson Scott Card's earliest stories, I believe, certainly from before he developed Elder Author Syndrome.)
↑ comment by NancyLebovitz · 2010-10-20T11:09:51.354Z
See also "Alpha Ralpha Boulevard" by Cordwainer Smith.
comment by bogdanb · 2009-01-09T01:21:45.000Z
Is there any value in heroism other than that it (attempts to) cease (some) pain (for some, at the expense of the actor)?
comment by JulianMorrison · 2009-01-09T01:28:42.000Z
"I sense damage. The data could be called pain." -- The Terminator, who is not wired like humans.
I'd say the primary bad thing about pain is not that it hurts, but that it's pushy and won't tune out. You could learn to sleep in a ship's engine room, but a mere stubbed toe grabs and holds your attention.
That, I think we could delete with impunity.
↑ comment by grendelkhan · 2012-06-05T20:09:09.790Z
At one point it was thought that it would be a good idea to shut off pain, replacing it, perhaps, with some sort of warning message. Then it was discovered that pain was the warning message, and to remove it carried the danger of apparent invulnerability. The best that could be done was to make the message less... distracting.
--Sam Hughes, Fine Structure
↑ comment by gwern · 2012-06-05T20:51:18.266Z
…One could imagine a conscious nervous system that operates as humans do but does not suffer any internal strife. In such a system, knowledge guiding skeletomotor action would be isomorphic to, and never at odds with, the nature of the phenomenal state — running across the hot desert sand in order to reach water would actually feel good, because performing the action is deemed adaptive. Why our nervous system does not operate with such harmony is perhaps a question that only evolutionary biology can answer. Certainly one can imagine such integration occurring without anything like phenomenal states, but from the present standpoint, this reflects more one’s powers of imagination than what has occurred in the course of evolutionary history.
From a nonfictional paper.
comment by infotropism · 2009-01-09T01:33:10.000Z
There's something I wanted to say about the dust speck in a Knuth-notation number of eyes versus torture for one person: something as light as a speck of dust wouldn't even register; it's noise-level, and in practice doesn't affect someone one way or the other. A bit in the same way that a signal needs to reach a certain strength to make a neuron fire. So to make it work, you'd need something that at least makes a difference, even the smallest of differences, in terms of pain.
Now with that being said, different people have different sensibilities. This may be even more the case in the future. But in the end, after writing this, can you still argue that it is preferable to torture someone for 50 years, rather than have an unimaginably high number of people bearing some minimal pain?
↑ comment by pnrjulius · 2012-06-06T23:58:00.730Z
If a dust speck isn't enough, pick your favorite: A staple into your cheek?
I do find the Rawlsian solution tempting, though: it is wrong to torture one person to save 3^^^3 people from staples in their cheeks, because the one tortured person is unfairly disadvantaged. Maximize the minimum and you'll find you really can't torture anybody at all.
On the other hand, 3^^^3 is a lot of people...
↑ comment by DanielLC · 2013-01-29T20:44:26.999Z
Being a noise level means you don't notice. It doesn't mean it doesn't register.
If it's small enough, it literally won't register, but there's more to that than just being small. If you get to the point where it registers, and move back epsilon, then all it takes is an epsilon difference. Unless you were very, very careful about how you chose your people, there's going to be an unimaginable number who are within epsilon. Even if you made sure all the people were exactly identical, there's still a chance that they're all within epsilon.
comment by Russell_Wallace · 2009-01-09T01:49:57.000Z
The way stories work is not as simple as Orson Scott Card's view. I can't do justice to it in a blog comment, but read 'The Seven Basic Plots' by Christopher Booker for the first accurate, comprehensive theory of the subject.
↑ comment by pnrjulius · 2012-06-07T00:00:38.555Z
Yes. Clearly no one would enjoy a story where someone is just continuously tortured without change.
↑ comment by ialdabaoth · 2013-09-20T00:13:41.417Z
You have clearly never heard of "ero-guro" manga - you are, in fact, describing a specific subgenre of it.
comment by TGGP4 · 2009-01-09T01:55:54.000Z
Is that... what we want?
To just wipe away the last tear, and be done?
For the last time, yes! Wake up from the Dragon-Tyrant's spell!
You could cut out just the intolerable parts of pain?
It is all tolerable. Or intolerable. You'd better define your terms.
Keep the sort of pain that tells you not to stick your finger in the fire
Just regenerate the finger.
grinds down and destroys a mind
Does pain actually do that? Have we done experiments showing that's the case?
Or configure minds to be harder to damage
One of Judith Harris' points is that minds are designed to be resilient, which is why child abuse doesn't have the effect many assume it does.
No child sexual abuse that turns out more abusers.
Are you sure you've got the causation right there? Couldn't it be that abusive people are likely to be related to other abusive people?
or AIDS
This is a less serious criticism of Eliezer than the others, but it's funny how often people go on about this rather easily preventable disease that kills a lot fewer people than diseases that get much less attention (various tropical diseases in Africa, a huge list of cancers in the U.S.). Other diseases need better marketing and market-segmentation research.
Is there a point where Romeo and Juliet just seems less and less relevant, more and more a relic of some distant forgotten world
Eliminating out-group hatred alone would do that.
comment by Kazuo_Thow · 2009-01-09T02:27:54.000Z
Could it be that pain-filled stories carry literary value exactly because (to a reader) they're filled with bearable pain? But I have little idea as to how we'd go about setting the threshold for "tolerable pain."
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-09T02:38:14.000Z
Without pain can there be heroism?
If you want to save anyone from really serious trouble, act now, your window of opportunity won't last forever.
Are you sure you've got the causation right there? Couldn't it be that abusive people are likely to be related to other abusive people?
Certainly not all abused children become abusers, and people are indeed more resilient than some myths would have it. But TGGP, if we're going to be all evo-psych anyway, then it's often stepchildren who get abused. If they tend not to continue the cycle of abuse, that would certainly be news to me.
This is a less serious criticism of Eliezer than the others, but it's funny how often people go on about this rather easily preventable disease
AIDS goes on grinding you down - separating you from other people socially, even. Most cancers kill you quicker.
↑ comment by [deleted] · 2012-02-04T20:24:10.097Z
AIDS goes on grinding you down - separating you from other people socially, even. Most cancers kill you quicker.
If you have the good fortune of being wealthy, AIDS is slowly becoming more and more like contagious diabetes rather than something that robs your (now shortened) years of most of their meaning.
comment by Psy-Kosh · 2009-01-09T02:38:30.000Z
Hrm... I'm pretty sure that, at least initially, losing the capacity for pain is a change I would not want. There're definite changes I would want in myself, but I don't think, at least initially, I would want that.
I'd want more to be, well, "stronger" than I am, better able to handle it, for lack of better terminology. Not so much less pain so much as so much more, well "me", that the pain can't fill it. (Yes, this is obviously imprecise. I'm simply trying to appeal to how I currently imagine the desired state "feeling from the inside", as best as I can.)
Further, in the long term, I'm thinking I may want to keep the capacity because I don't think I'd want to give up the ability to really properly "comprehend from the inside" my memories. So I think I'd want to retain some of that circuitry in some form, at least to decode ancient memories.
(But then, contrary to something you hinted at in a previous post, I think I would like, on some level, to still like cookies even by the time the last star would have burned out. I find the idea of carrying the ability to enjoy such a "simple childish" pleasure so far into deep time to itself be appealing. That and I hope to retain appreciation at least in some form for something analogous to corny puns. And again, for similar motivations.)
However, the similarity of removing pain to removing boredom or anything analogous, well... I'm not sure. I think one motivation I'd have would be more "increasing complexity/possibility of human experience", so it's partly I just don't want to give up a "trick" I already have.
But if pleasures came in complex forms like tastes and smells and... no. Cancel that. Let's go farther: Pleasure as complex as human vision (or, preferably, much farther. But you get the idea), then it might be different. I don't know if this is possible, it'd be a much longer term change, a more complex upgrade probably, but eventually setting myself up so that at no point do the pleasures translate to any simple one dimensional positive reinforcement. That "all the way down" the algorithm stack it's distinct and complex, and not just differently named tokens that do the same thing, then maybe we could more or less safely eliminate suffering without giving up anything important, without really approaching anything as, well, anything as downright depressing as a world of static blissed out wireheads.
I don't know if it's possible to really get something like pleasure that doesn't, somewhere in the stack of stuff that generates experience, translate on some level to simple 1D reinforcement. I admit this notion of being able to do this may be an incoherent confusion, I'm really unsure here. But I think if it was possible, getting rid of the whole simple 1D positive reinforcement thing might be nice, if on all levels of experience it was more complex than that.
↑ comment by [deleted] · 2012-02-04T20:29:04.917Z
Further, in the long term, I'm thinking I may want to keep the capacity because I don't think I'd want to give up the ability to really properly "comprehend from the inside" my memories. So I think I'd want to retain some of that circuitry in some form, at least to decode ancient memories.
Throwing away the circuitry behind tears means throwing away the circuitry that allows one to sympathize with the tears of others. Setting aside for a little while the objection of any virtue ethicist who might be reading this: obviously, in a world without tears such sympathy may not be needed. But as you point out, we still have our memories and great tales that we'd probably like to go on appreciating somewhat for a long time to come.
comment by Kevin7 · 2009-01-09T03:00:42.000Z
Cory Doctorow's Down and Out in the Magic Kingdom is a pretty good utopia. Also, I would happily live in the extreme post-singularity of complete AI control of all matter and energy from The Metamorphosis of Prime Intellect.
Doctorow's Utopia has few drawbacks that don't exist in modern society, and Metamorphosis explores the question of what friendly AI means. Eliezer, you'd probably like Metamorphosis if you haven't read it -- it's about an obscenely strong AI programmed to follow Asimov's three laws. It touches on a number of issues that you write about here, like orgasmium.
Both are available for free online, Doctorow's under a CC license and localroger's free as in beer.
http://www.kuro5hin.org/prime-intellect/
↑ comment by Hul-Gil · 2012-04-25T18:17:49.550Z
I enjoyed Down and Out in the Magic Kingdom quite a bit! I'm glad Kevin7 posted this link.
However, the insanity portrayed as being beneficial and desirable in The Metamorphosis is too egregious to ignore - even if the rest of the story had made good on its promise of providing an interesting look at a posthuman world. (It doesn't. We don't even get to see anything of it.) At first, I thought "oh, great; more cached-thought SF"... but it was worse than that. I forced myself to finish it just so I could be sure the following is accurate.
Worse than the already-irritating "death gives meaning to life!" reasoning replete in the work, we find either actual insanity or just a blithe disregard for self-contradiction:
- Technology is bad, because one day the universe will die. (What's the connection? No fucking clue.)
- We should live like cavemen, because technology (and knowledge itself - no reading!) will lead to murder (but certain arbitrary tools are okay); but death is fine when it's a bear or disease that kills you.
- Reality isn't "really real" if it's created or controlled by an AI, even if it's indistinguishable from... uh... other reality.
- And, of course, we save the most obvious conclusion for last (sorta-spoiler warning): despite item #2, it's okay to murder billions of happy immortals because you're unhappy that life is safe and everyone is free at last.
Merits as a story? Well, at first, it's even a little exciting, as we are treated to a glimpse of a post-Singularity world (the only glimpse we get, as it turns out), and then some backstory on how the AI was created. That's cool; but after that, it's not worth reading, in this reader's humble opinion. It's very formulaic, the characters (all ~three of them) have no personality (unless you count angst), and any technical or imaginative details that might be interesting are... well, either not there at all, or waved away with the magic Correlation Effect Plot Device. (It's first used to explain one thing, then turns out to do, quite literally, everything.)
I would like to contrast this to John Wright's The Golden Age trilogy. That work is replete with interesting ideas and details about how a Far Future society might look and work; no magic one-size-fits-all Plotonium (to coin a term; I'm sure TVTropes already has one, though) here. In Metamorphosis, we aren't really given any glimpse at society, but what we do see is essentially Now Except With Magic Powers. In The Golden Age, it is immediately obvious we aren't in Kansas any more. Metamorphosis explores one idea - AI - and that, poorly; The Golden Age includes nanotech, simulation, self-modification, the problem of willpower (see: Werewolf Contracts), posthumans, post-posthumans, post-/trans-human art, and more. Check it out if you have transhumanist leanings... or just enjoy science fiction, come to that.
↑ comment by grendelkhan · 2013-10-18T17:53:38.079Z
even if the rest of the story had made good on its promise of providing an interesting look at a posthuman world. (It doesn't. We don't even get to see anything of it.)
You may enjoy A Casino Odyssey in Cyberspace--it's based in part on the author's history of card-counting--but then, you might not, as the Casinos don't seem like a very Fun place to go.
comment by Court3 · 2009-01-09T03:24:08.000Z
I was just going to chime in with Down And Out in the Magic Kingdom. There's a Utopia where there's striving, and existential pain.
But I shouldn't comment too much on it, because I got too bored to finish it. On the first page it is revealed that characters will survive until the "heat death of the universe." Given that premise, I quickly surmised that any dilemmas would be sort of, well, boring without the threat of imminent death. Based on that one small example, I would say that the threat of death, and lesser forms of tragedy, is necessary to maintain the literary tension that keeps those pages turning.
Suggested reading:
Aristotle's Poetics for the ancient, and I think incorrect, theory of tragedy as catharsis.
Nietzsche's The Birth of Tragedy for the view that tragedy gives meaning to our ultimately meaningless striving. Based on the pre-Platonic view of life under the thumb of despotic Greek gods, i.e., fate. (Or so Nietzsche says.)
A little off topic, but Cormac McCarthy said that he "doesn't understand" fiction that doesn't have death in it. Why write it? he's saying. Or, from our perspective, why read it?
comment by Caroline · 2009-01-09T04:31:34.000Z
Eliezer, are you asking if we think universal boredom is a worse fate than world suffering? ;) How terribly emo of you.
Also, you seem to be describing pleasure and pain as a sliding scale: moving towards pleasure means moving away from pain. But there are already humans where that isn't the case, where pain bleeds into pleasure. People whose humiliation makes them proud, whose submission gives them control. Do they sound bored to you? (That brief foray into BDSM was incredibly simplistic. Naughty boy.)
comment by Tom_McCabe2 · 2009-01-09T04:32:15.000Z
"Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?"
We already have an (admittedly limited) counterexample to this, in that many Westerners choose to seek out and do somewhat painful things (eg., climbing Everest), even when they are perfectly capable of choosing to avoid them, and even at considerable monetary cost.
comment by Edward · 2009-01-09T06:55:19.000Z
A lot of this post hinges on storytelling, which as we all seem to agree is different from actually living life. Perhaps the reason people are interested in tragic stories and news is related to Prospect Theory: we are more interested in curing and preventing tragedy than in increasing from 10,000 to 20,000 hedons.
Of course, people to this day read all sorts of self-help books even if they don't have much tragedy in their lives. They just don't read them as you would a "masterpiece." I suppose people in the future may do the same, hoping to glean some information on how to increase their hedonic level, but it won't have the same urgent feel to it.
I'm sure the intense emotions and catharsis we feel from some of the great Shakespearean tragedies can be felt even more profoundly in the world Pearce describes simply by, say, listening to music.
We will find creative solutions to creating more majestic subjective experiences if our species can pull through long enough to do so. I'm more interested in mitigating risk and suffering in the here and now, mainly through the intelligent use of decentralized technology.
comment by Doug_S. · 2009-01-09T09:05:52.000Z
In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime. Is there a single story on the list that isn't tragic?
In many stories, things go horribly wrong and characters hurt, badly, but in the end, things end up much better than they started. As you say later, it's often more about the striving than the suffering. Currently, The Shawshank Redemption is sitting at the top of the IMDB Top 250 Movies list. Is that a tragic story? It does have a happy ending, after all.
Incidentally, my favorite movies to watch over and over tend to be comedies. Are Blazing Saddles, Airplane!, and Monty Python and the Holy Grail capital-G Great? How about the works of Gilbert and Sullivan? Mark Twain wrote comedies, and Don Quixote is a comedy, too!
I suspect that comedies tend to be more culture-specific than tragedies; things that were hilarious 300 years ago might just get yawns and blank stares today. On the other hand, some comedies do stand the test of time, they're just a bit less common. Lysistrata is over 2000 years old and it hasn't stopped being funny yet, and Don Quixote outlasted the entire genre of stories it was making fun of.
comment by Kaj_Sotala · 2009-01-09T09:19:10.000Z
TGGP: One of Judith Harris' points is that minds are designed to be resilient, which is why child abuse doesn't have the effect many assume it does.
Didn't Harris explicitly make the point that yes, obviously actual abuse is an exception to the "family doesn't have that big of an effect" rule? (At least she did in The Nurture Assumption.)
comment by Ben_Jones · 2009-01-09T11:07:11.000Z
[...]my experience of drugs is as nonexistent as my experience of torture.
There's something imbalanced about that.
Agreed. I'm sure both can be procured somewhere in the Bay Area though. Great material for blogging too!
Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it?
Yes. And that's as horrible an idea as eternal torture. I'm surprised you haven't cited any of the studies about the relative happiness of lottery winners (compared to their expectations), though I seem to remember references in some of the posts about a year back.
Being able to change the rules of the game is dangerous. Being able to change your brain so you perceive the game differently is dangerous. Achieving the capability to do both within a short time window is my favourite candidate for a Great Filter.
comment by steven · 2009-01-09T11:24:31.000Z
If I'm 50% sure that the asymmetry between suffering and happiness is just because it's very difficult to make humans happy (and so in general achieving great happiness is about as important as avoiding great suffering), and 50% sure that the asymmetry is because of something intrinsic to how these things work (and so avoiding great suffering is maybe a hundred times as important), should I act in the mean time as if avoiding great suffering is slightly over 50 times as important as achieving great happiness, slightly under 2 times as important as achieving great happiness, or something in between? This is where you need the sort of moral uncertainty theory that Nick Bostrom has been working on, I think.
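(To spell out the arithmetic behind those two candidates: taking the expectation of the importance ratio gives 0.5 × 1 + 0.5 × 100 = 50.5, slightly over 50; taking the expectation of its reciprocal gives 0.5 × 1 + 0.5 × 0.01 = 0.505, whose inverse is about 1.98, slightly under 2. The two aggregation rules disagree by a factor of about 25, which is the gap a theory of moral uncertainty would have to settle.)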
comment by Vizikahn2 · 2009-01-09T12:02:46.000Z
Beyonder here. The "unbearable pain of broken hearts" sounds like an interesting experience. I'll take one of those, and one "defeated in a fist fight". The "Romeo and Juliet just seems less and less relevant" sounds interesting too, but I'll try that later.
comment by Aaron5 · 2009-01-09T15:16:29.000Z
Towards the end of the essay, Orwell writes:
"The real objective of Socialism is human brotherhood. This is widely felt to be the case, though it is not usually said, or not said loudly enough. Men use up their lives in heart-breaking political struggles, or get themselves killed in civil wars, or tortured in the secret prisons of the Gestapo, not in order to establish some central-heated, air-conditioned, strip-lighted Paradise, but because they want a world in which human beings love one another instead of swindling and murdering one another. And they want that world as a first step. Where they go from there is not so certain, and the attempt to foresee it in detail merely confuses the issue."
Is there a similar transhumanist objective? Is trying to see everything in detail causing confusion?
comment by Another_Anonymous · 2009-01-09T17:28:07.000Z
@Eliezer, why not try certain psychoactive drugs?
comment by Michael_Bishop · 2009-01-09T18:04:50.000Z
It is unclear to what extent, or even whether, being a victim of sexual abuse causes people to perpetrate sexual abuse. That said, I would personally be surprised if there wasn't some effect.
comment by Caledonian2 · 2009-01-09T18:07:46.000Z
I would suggest that this book, and the two books immediately preceding it, are an examination of the difference between what people believe they want the world to be and what they actually want and need it to be. When people gain enough power to create their vision of the perfect world, they do - and then find they've constructed an elaborate prison at best and a slow and terrible death at worst.
An actual "perfect world" can't be safe, controlled, or certain -- and the inevitable consequence of that is pain. But so is delight.
comment by billswift · 2009-01-09T18:12:20.000Z
John Derbyshire has a review up of a book that addresses these types of problems, that is, evolutionary-psych arguments about the arts, including fiction. http://www.johnderbyshire.com/Reviews/HumanSciences/artinstinct.html
comment by Caledonian2 · 2009-01-09T18:16:42.000Z
I'd say the primary bad thing about pain is not that it hurts, but that it's pushy and won't tune out. You could learn to sleep in a ship's engine room, but a mere stubbed toe grabs and holds your attention. That, I think we could delete with impunity.
If we could learn to simply get along with any level of pain... how would it constitute an obstacle?
Real accomplishment requires real obstacles to avoid, remove, or transcend. Real obstacles require real consequences. And real consequences require pain.
↑ comment by pnrjulius · 2012-06-07T00:12:35.821Z
In that case, we'll never get anything done, because you only count something as "real accomplishment" if we have things that prevent us from doing it. Once we get rid of the obstacles and do it, now it's not a "real accomplishment" anymore.
comment by George_Weinberg2 · 2009-01-09T19:20:13.000Z
Best thought-out utopia ever:
In the Big Rock Candy Mountains, all the cops have wooden legs
And the bulldogs all have rubber teeth and the hens lay soft-boiled eggs
The farmer's trees are full of fruit and the barns are full of hay
Oh I'm bound to go where there ain't no snow
Where the rain don't fall, the wind don't blow
In the Big Rock Candy Mountains

In the Big Rock Candy Mountains, you never change your socks
And little streams of alcohol come a-trickling down the rocks
The brakemen have to tip their hats and the railroad bulls are blind
There's a lake of stew and of whiskey too
And you can paddle all around 'em in a big canoe
In the Big Rock Candy Mountains

In the Big Rock Candy Mountains the jails are made of tin,
And you can walk right out again as soon as you are in
There ain't no short-handled shovels, no axes, saws or picks,
I'm a-goin' to stay where you sleep all day
Where they hung the jerk that invented work
In the Big Rock Candy Mountains
comment by Peter_Eng · 2009-01-09T19:52:42.000Z
Offhand, I can't think of any fictional universe that I haven't classed as "a great place to visit, but I wouldn't want to live there."
But that's why I go visiting. I want to see people defeat real obstacles, face real consequences, and feel the pain.
I just don't want to live there. I like my small-to-nonexistent obstacles, with consequences and pain to match.
I don't want to be The Guy On The Airplane Who Stops The Terrorist. I don't want to be The Person Who Saves The World.
I want to be the guy sitting at his computer, doing accounting things. Not a life of quiet desperation, just a life of quiet. If somebody could make the real world into a utopia, I'd consider living there.
But if it requires a world where we still need people who can take on big obstacles, face big consequences, and suffer big pain to produce writers that can write compelling stories, I think I'd rather stay here.
↑ comment by Luke_A_Somers · 2011-09-24T04:01:16.045Z
I can think of lots of fictional universes I'd love to live in. Either civilization in Against the Fall of Night is pretty nifty, and I wouldn't mind living there, even during the events of the story. Both could use improvements, sure, but a lot less than our civilization! But that's not exactly a thrill-a-minute book. Less far-futuristically, The Door into Summer seems pretty cool too, and as long as you're not the main character you're just your own person.
More often, I'd restrict it to not during the story. Like, Foundation. Sounds pretty swell, if you live in the thousands of years before it starts, or on Gaia. The Hyperion-verse is just great before Hyperion and more so after Rise of Endymion. The worlds of Schild's Ladder and Incandescence are fine except when you have reasonable concerns that everything is about to end, and in the latter case very few people are put in that situation at any point. Similarly, Glasshouse's world is very nice whenever there isn't a massive galactic war going on and you haven't been kidnapped.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-09T20:02:36.000Z
Aaron, sad as it may seem to say, I think George Orwell's imagination simply failed him at the last. As Orwell also wrote:
In the last part, in contrast with disgusting Yahoos, we are shown the noble Houyhnhnms, intelligent horses who are free from human failings. Now these horses, for all their high character and unfailing common sense, are remarkably dreary creatures. Like the inhabitants of various other Utopias, they are chiefly concerned with avoiding fuss. They live uneventful, subdued, 'reasonable' lives, free not only from quarrels, disorder or insecurity of any kind, but also from 'passion', including physical love. They choose their mates on eugenic principles, avoid excesses of affection, and appear somewhat glad to die when their time comes. In the earlier parts of the book Swift has shown where man's folly and scoundrelism lead him: but take away the folly and scoundrelism, and all you are left with, apparently, is a tepid sort of existence, hardly worth leading.
If what Orwell wanted was a sense of "human brotherhood" in place of "swindling", he needed to say more clearly what distinguishes that from the Houyhnhnms or McCarthy's "ants marching in a circle". I am left with the impression that he ducked a very serious problem and concluded his essay with an applause light, when applause lights were the whole problem behind tepid, watery utopias in the first place.
It's generally easier to point out problems than to find solutions. A sense of human siblinghood, that sounds like a fine thing to me - such a sense as space enthusiasts or existential risk preventers have - but if that's instead of human conflicts, because everyone just wants to help each other out so much...
Did Orwell ever write a story about his world of "human brotherhood"? Was it any good? Why not, if a world like Orwell was envisioning would have more interesting inhabitants than the Houyhnhnms?
comment by Dagon · 2009-01-09T22:01:40.000Z
Why are you biased toward the status quo for this human desire for "meaning" or "intensity" (both of which boil down to "emotional motivation")? The vast majority of terminal goals that I can imagine can be better pursued if fewer people (in the wide sense; people = sentient actors) are struggling to have an effect on the universe because they're more afraid of meaninglessness than of doing harm.
comment by JulianMorrison · 2009-01-09T22:29:08.000Z
Caledonian: "If we could learn to simply get along with any level of pain... how would it constitute an obstacle?"
It would still hurt. You'd still not want it. It just wouldn't forcefully intrude itself on your every thought.
Compare a loud noise in your house - bubbles making the pipes howl, perhaps. You could put it off, even learn to sleep through it, but without an urgent reason you'd certainly prefer to call the plumber. If you did have an urgent reason, say you were scrimping money for something vastly more important, you would have the ability to put up with it. My suggestion would extend this sort of tuning-out to physical and emotional pain.
(Yes, I'm aware that a loud enough noise will force your attention and a trivial enough pain can be set aside, but the scales aren't the same.)
comment by Gwern_Branwen · 2009-01-09T23:18:13.000Z
TGGP: why are you opposed to the idea that we may want to retain parts of pain?
If we could get rid of the 'painfulness' of pain, and keep the informative part of pain, that'd be ideal. With no pain at all, we're in the situation of someone with nerve damage who might lose a limb to gangrene when she accidentally damages something but doesn't notice it. (Anyone for The Chronicles of Thomas Covenant, the Unbeliever?)
Painless pain isn't all that strange an idea:
'The second pain pathway is a much more recent scientific discovery. It runs parallel to the sensory pathway, but isn't necessarily rooted in signals from the body. The breakthrough came when neurologists discovered a group of people who, after a brain injury, were no longer bothered by pain. They still felt the pain, and could accurately describe its location and intensity, but didn't seem to mind it at all. The agony wasn't agonizing.
This strange condition - it's known as pain asymbolia - results from damage to a specific subset of brain areas, like the amygdala, insula and anterior cingulate cortex, that are involved in the processing of emotions. As a result, these people are missing the negative feelings that normally accompany our painful sensations. Their muted response to bodily injury demonstrates that it is our feelings about pain - and not the pain sensation itself - that make the experience of pain so awful. Take away the emotion and a stubbed toe isn't so bad.' http://scienceblogs.com/cortex/2009/01/back_pain.php
↑ comment by pnrjulius · 2012-06-07T00:17:44.811Z
On the other hand, there must be some downside to pain asymbolia, or we'd all have it. (Plainly the mutation exists; why isn't it selected for?)
↑ comment by TheOtherDave · 2012-06-07T01:21:39.072Z
Perhaps it is selected for, but selection hasn't had long enough to operate to disseminate it throughout the population.
↑ comment by brahmaneya · 2012-11-15T02:01:12.678Z
Probably because the negative feelings about the pain are what strongly motivate you to avoid it, and hence avoid physical damage.
↑ comment by gwern · 2012-11-15T02:12:18.052Z
There may be disadvantage, yes. But it could also be that pain asymbolia is fine in a creature as high-level as a human - but without any selective fitness advantage, what would drive it to fixation in a selective sweep? Given zero reproductive advantage and possible disadvantage, it's no surprise that it's rare.
↑ comment by Chrysophylax · 2014-01-14T21:45:17.865Z
Because a child who doesn't find pain unpleasant is really, really handicapped, even in the modern world. The people who founded A Gift of Pain had a daughter with pain asymbolia who is now mostly blind, amongst other disabilities, through self-inflicted damage. I'm not sure whether leprosy sufferers have the no-pain or no-suffering version of pain insensitivity (I think the former) but apparently it's the reason they suffer such damage.
This book seems to be a useful source for people considering the question of whether pain could be improved.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-10T00:14:28.000Z
Ancient conversation on #sl4 (ran across it when checking my quotesfile):
<yed> brings up the important fear, of not knowing what to do when everything is easy, nothing to resist and conquer.
<outlawpoet> if people aren't interested in intellectual pursuits, then they'll sit on earth eating 95 billion potato chips a second and playing quake 9
<Michael^2> if not having problems around to solve turns out to be a problem, then that problem too will quickly be solved
<Michael^2> existential dead ends only happen when someone's imagination runs out
<outlawpoet> yed, are you just trying find the downside to solving all major human problems, or what?
<outlawpoet> people don't really like pain.
<outlawpoet> removing it won't make them unhappy.
<yed> outlaw, did you read prime-intellect?
<outlawpoet> yeah. I read ender's game too, should i be afraid of insectoid aliens?
comment by PJ_Eby · 2009-01-10T00:35:25.000Z
It is important to note that while emotions are triggered by relative perceptions, they are not themselves relative -- and what they are triggered by can be changed.
Tony Robbins tells an interesting story of how a class he was teaching kept being interrupted by a train thundering past (this was before he made enough money to be booked in nicer venues). After he and the class had been annoyed by it for a while, he announced a new rule: when the train passes, it's time to celebrate!
They then proceeded to cheer and whoop and jump up and down like crazy people every time the train passed... and everybody smiled and laughed and had a good time.
Not all emotional engineering is this trivial, I'll admit -- but it hardly requires a transhuman Power to do.
Physical pain and torture, I'll agree with you on: let's get that stuff out of the genome right quick. But emotional pain is something we already have the tools to get rid of. Really, status-quo bias is the only thing keeping us from virtually wiping out emotional suffering within a single generation.
comment by Aurini · 2009-01-10T08:31:01.000Z
Yudkowsky, I'm going to have to disagree with you on the intensity of pleasure. Done properly, orgasms can sometimes be so intense that you lose track of your personal narrative for a moment - though I'm no expert on experiencing torture, I'd wager that our capacities are comparable on the pleasure/pain scales.
The big difference between the two is duration - our pleasure centres naturally revert to boredom, while pain is unceasing. Perhaps a modification that made pain as brief as an orgasm, before subsiding into throbbing? Then you'd know that you just smacked your toe while running up the concrete steps, but you wouldn't be stuck rolling around on the floor for ten minutes afterwards.
Also, I'd suggest that trying to prevent torture isn't the same as modifying pain. Even if the torturers can't somehow re-modify you to experience pain (like that robot Vader used on Leia), they could find some other way to hurt you, emotionally (torture a puppy) or existentially (burn the last copy of Shakespeare).
comment by luzr · 2009-01-10T09:23:23.000Z
Eliezer:
I am starting to be sort of frightened by your premises - especially considering that there is a non-zero probability of creating some nonsentient singleton that tries to realize your values.
Before going any further, I STRONGLY suggest that you think AGAIN what might be interesting in carving wooden legs.
Yes, I like to SEE MOVIES with strong main characters going through hell. But I would not want any of that.
It does not matter that an AI can do everything better than me. Right now, I am not the best at carving wood either. But working with wood is still fun. The same goes for swimming, skiing, playing chess (despite the fact that a computer can beat you every time), caring about animals, etc.
I do not need to do dangerous things to be happy. I am definitely sure about that.
comment by Tim_Tyler · 2009-01-10T11:06:19.000Z · LW(p) · GW(p)
David Pearce has written extensively on the topic of the elimination of suffering - e.g. see: THE ABOLITIONIST PROJECT and Paradise Engineering.
comment by Pablo_Stafforini · 2009-01-10T17:54:41.000Z · LW(p) · GW(p)
If a human child grew up in a less painful world - if they had never lived in a world of AIDS or cancer or slavery, and so did not know these things as evils that had been triumphantly eliminated - and so did not feel that they were "already done" or that the world was "already changed enough"... Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?
Here is a more instructive thought experiment. Suppose a human child grew up in a painless world and did not feel that pain was already there or that the world had already changed enough. Should she try to create, in that possible world, the kind of pain that Eliezer doesn't think we should destroy in the actual world?
Replies from: None
↑ comment by [deleted] · 2012-02-04T20:37:46.418Z · LW(p) · GW(p)
That's a good point. Except I'm not sure why I should care about hypothetical humans that develop from a different set of starting conditions. How is that different from ordinary speculation about what different kinds of minds might want?
(If it's not obvious: I think that biologically normal human brains might end up with very different values - different enough that several kinds of future universes preferred by any one such brain would be utterly worthless to many of the others.)
comment by steve-roberts · 2009-01-10T18:02:24.000Z · LW(p) · GW(p)
If pain were 'programmable', rather than deleting it altogether, how about a shorter half-life? Your finger still hurts when you stick it in the fire, but for hours or minutes afterwards rather than days?
comment by Utilitarian · 2009-01-10T19:19:17.000Z · LW(p) · GW(p)
I intuitively sympathize with the complaints of status-quo bias, though it's of course also true that more changes from evolution's current local optimum entail more risk.
Here is another interesting reference on one form of congenital lack of pain.
comment by Zubon · 2009-01-12T15:58:07.000Z · LW(p) · GW(p)
Is there a point where Romeo and Juliet just seems less and less relevant, more and more a relic of some distant forgotten world?
Aren't we already there for people who know the details rather than the outline? She was thirteen, they knew each other for less than a week, and he spent Act One mooning about a different girl. We no longer consider romantic difficulties between college boys and middle school girls to be the pinnacle of tragedy.
While we are arguing from fictional evidence, I will take the third season of Buffy as its pinnacle, and that ends in triumph rather than tragedy (although again triumph over adversity). It might say more about your preferences if you think all the great works are tragedies; I know someone who thinks the sixth season of Buffy was the best, which is definitely a statement on her sense of life.
comment by Caledonian2 · 2009-01-12T17:18:23.000Z · LW(p) · GW(p)
Gwern, why do you think we have those emotional responses to pain in the first place?
Yes, I'm aware of forms of brain damage that make people not care about negative stimuli. They're extraordinarily crippling.
Replies from: gwern
↑ comment by gwern · 2012-06-05T21:00:12.147Z · LW(p) · GW(p)
Nothing in the link indicated that pain asymbolia (as opposed to other conditions like aboulia or not feeling pain at all) is 'extraordinarily crippling'.
Why don't we come with asymbolia by default? I don't know. Possibly the same reason we feel pain even while dying, at which point there's no productive purpose to feeling pain - because evolution is dumb and doesn't care about us.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-06-05T23:20:41.350Z · LW(p) · GW(p)
Why don't we come with asymbolia by default?
What would be the point of signaling pain if the animal does not care about it?
Replies from: gwern
↑ comment by gwern · 2012-06-05T23:54:35.960Z · LW(p) · GW(p)
'Caring', whatever that is, doesn't have to involve 'pain'. I care about many things that do not trigger pain neurons.
Replies from: None, shminux, JGWeissman, army1987
↑ comment by Shmi (shminux) · 2012-06-06T17:00:54.958Z · LW(p) · GW(p)
Having reread your reply several times, I still have no idea how it is related to what I said.
↑ comment by JGWeissman · 2012-06-06T17:28:37.413Z · LW(p) · GW(p)
You are a much more powerful consequentialist than our evolutionary ancestors which first evolved pain. They probably couldn't care about avoiding events which had not previously caused them pain.
Replies from: gwern
↑ comment by gwern · 2012-06-06T17:39:54.007Z · LW(p) · GW(p)
I am also an animal.
Replies from: JGWeissman
↑ comment by JGWeissman · 2012-06-06T17:44:24.210Z · LW(p) · GW(p)
You are an animal of the first species to invent computers. You have superpowers not possessed by other animals.
↑ comment by A1987dM (army1987) · 2012-06-06T20:38:19.463Z · LW(p) · GW(p)
So pain asymbolia means something other than “being able to feel pain but not caring about it”?
Replies from: CuSithBell, shminux
↑ comment by CuSithBell · 2012-06-06T20:41:18.113Z · LW(p) · GW(p)
It means "being able to feel pain but not suffering from it."
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-06-06T21:40:34.635Z · LW(p) · GW(p)
Where “pain” and “suffering” are defined, respectively, as... what?
Replies from: CuSithBell
↑ comment by CuSithBell · 2012-06-06T23:42:09.889Z · LW(p) · GW(p)
Roughly, pain is a sensation typically associated with damage to the body, suffering is an experience of stimuli as intrinsically unpleasant.
I do not suffer if my room is painted a color I do not like, but I still may care about the color my room is painted.
Replies from: wedrifid
↑ comment by wedrifid · 2012-06-06T23:59:13.109Z · LW(p) · GW(p)
Roughly, pain is a sensation typically associated with damage to the body, suffering is an experience of stimuli as intrinsically unpleasant.
Agree, with the assumption that "stimuli" as relevant to suffering includes internal stimuli generated from one's own thoughts.
Replies from: CuSithBell
↑ comment by CuSithBell · 2012-06-07T03:17:28.559Z · LW(p) · GW(p)
Yeah, that's certainly a fair clarification. It'd probably take a lot more space to give a really robust definition of "suffering", but that's close enough for gummint work.
↑ comment by Shmi (shminux) · 2012-06-06T22:39:28.401Z · LW(p) · GW(p)
"When pain does not hurt"
comment by Libertarian_Girl · 2009-05-01T06:02:02.000Z · LW(p) · GW(p)
Pablo's analogy is very thought-provoking and the best argument I've heard for ending suffering.
comment by PeerInfinity · 2009-08-11T03:41:43.403Z · LW(p) · GW(p)
I'm surprised that none of the other commenters suggested "What if you just invert the human sense of pain, causing it to become a new form of pleasure, and then write stories about that?"
You might think that would turn a horribly tragic story into a deliciously pornographic story, but I've personally tried this trick, and it doesn't seem to work well. When the story keeps reminding you that the people are in fact suffering, and not at all enjoying the experiences, then it's hard to keep imagining the pain being inverted as pleasure.
But if you wrote a story in which it was made explicit that the people find this inverted pain to be intensely pleasurable... lots of twisted possibilities instantly spring to mind.
For example, involving fully immersive virtual realities in which there is no threat of actually dying, massively multiplayer games, a creative assortment of tools, respawning after death, the ability to record particularly "high-scoring" sessions to play back and experience later, "cheaters" who do silly things like just turn on invincibility and dive into a fire...
Should I feel guilty about all of these ideas coming to mind so easily? What if I imagined what this guilt would feel like if it was inverted?
On a related topic, there is a genre where you're allowed to write stories in which nothing bad happens to any of the characters. That genre is called "pornography".
I especially enjoy the beautiful kinky transhuman sex scenes that already exist in sci-fi literature. I <3 Greg Egan :D
I, for one, would like to see more erotic fiction set in a positive, transhuman future. A Utopia. Or better yet, a Weirdtopia! :D
I have made some attempts to write such fiction myself, but I kept trying to make it too realistic, and just ended up with something like "Peer's exoself finishes the latest set of modifications to Peer's mind, and turns the Orgasmotron back on, causing Peer to experience yet another orgasm, which is both far more intensely powerful, and far more subtly nuanced, than anything ever dreamed possible by an unaugmented human mind. And also unlike anything Peer had experienced before this latest set of modifications."
Oh, and if anyone shares these tastes and would like to meet me for some beautiful kinky transhuman sex in Second Life, you can find me by searching for "Peer Infinity". :)
(Note to self: read through my old Second Life chat logs, and see if any of my previous beautiful kinky transhuman lovemaking sessions would make good short erotic stories...)
Replies from: RomanDavis, grendelkhan
↑ comment by RomanDavis · 2010-06-01T21:58:32.175Z · LW(p) · GW(p)
You haven't dived deep enough into asstr yet. Some of them are truly transhuman; most of them are more like, "We terraformed a dozen planets or so, now we need to fill them up! And the best way is to start early, therefore pedo incest is the way of the future."
Replies from: PeerInfinity
↑ comment by PeerInfinity · 2010-06-02T03:41:09.147Z · LW(p) · GW(p)
that's awesome! thanks! do you have links to any of these stories? you can send them by private message if you prefer. :)
and yay for someone daring to write a story about transhuman pedo incest :)
but maybe I shouldn't say that until I actually read the story. pedo incest is really hard to write a story about without it feeling all evil and squicky.
The final chapter of Metamorphosis of Prime Intellect is an example of how to do it right. Though in that story it wasn't actually pedophilia, and there were practical reasons why the incest was necessary: they were trying to repopulate a planet starting with only 2 people, and they were trying to maximize genetic diversity. heh, and compared to the evil squickiness of the other chapters, that last chapter felt almost wholesome by comparison...
Also, I should make it clear that this is not a general endorsement of incest or pedophilia. Incest and pedophilia are extremely dangerous. Though I wouldn't be too quick to label them as so evil that incestuous or underage sex should never happen ever ever. That last chapter of MoPI shows an example of when underage incest is the lesser of two evils... and the way it's portrayed in the story, I'm not sure if I would even call it evil. To me, that scene felt mostly pure and innocent, though I happen to be kinda creepy that way. Anyway, I've already gotten myself into lots of trouble by writing this, I had better stop writing now...
Edit: yeah, I know that this is likely to get lots of downvotes, and will probably make most readers really uncomfortable, and give me a really bad reputation, but I'm still undecided about whether censoring this would be a bad idea...
↑ comment by grendelkhan · 2012-06-05T20:23:39.203Z · LW(p) · GW(p)
I'm surprised that none of the other commenters suggested "What if you just invert the human sense of pain, causing it to become a new form of pleasure, and then write stories about that?"
I think you'd get Crossed. It makes sense, at least through the first book (I haven't read the others), that the infection makes every experience pleasurable, and since painful or horrific experiences are more intense and memorable than good ones, it makes people into Reavers, pretty much.
comment by Masaki · 2009-10-22T15:44:10.108Z · LW(p) · GW(p)
Though I speak here somewhat outside my experience, I expect that it takes a highly talented and experienced sexual artist working for hours to produce a good feeling as intense as the pain of one strong kick in the testicles - which is doable in seconds by a novice.
True. I always believed that it takes much more skill to make people laugh than it takes to make them cry or suffer, which, I guess, would put comedians above most fiction authors.
comment by Document · 2010-10-20T09:46:34.334Z · LW(p) · GW(p)
But then, what is the analogous pleasure that feels that good? A victim of skillful torture will do anything to stop the pain and anything to prevent it from being repeated. Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it?
Another possible definition is that it makes you feel so good that you can endure the previously unendurable torture.
comment by NancyLebovitz · 2010-10-20T11:36:42.516Z · LW(p) · GW(p)
Discussion of stories without pain for the characters
I think there'd be no cost to eliminating PTSD. Not everyone gets it from a given trauma, and vulnerability to it goes up with repeated traumas, so not getting PTSD seems to be well within the human range, and getting it doesn't have any obvious advantages.
One of my friends says that one of the worst things about pain is that it's boring -- she has a bad hip. The pain from it is sometimes enough that she can't think about anything else.
Pain like that -- or worse -- all the time might well be mind-breaking.
World without pain? The good news is that if you allow individual autonomy, everyone isn't going to try giving up pain at the same time. There will be a chance to observe the effects on the early adopters.
comment by madair · 2010-11-11T23:19:17.788Z · LW(p) · GW(p)
All that and no reference to Nietzsche. Didn't he say everything above, just with different case studies? Why is it presented as original?
Nevermind... deleting this comment (after this edit, in case it's logged...). I think in general this community is irrationally rational when it comes to irrationality.
comment by lockeandkeynes · 2010-12-08T20:49:49.238Z · LW(p) · GW(p)
There we go. Gotta know whether you cut yourself, but you don't need to know MORE about how someone broke your finger. Knowing is knowing.
comment by Raw_Power · 2010-12-13T15:31:19.119Z · LW(p) · GW(p)
I was told Romeo and Juliet was conceived as a Black Comedy, and indeed the protagonists seem to firmly grasp the Idiot Ball throughout the story. It was revived in Elizabethan Britain as a tragedy because Romeo and Juliet seemed to embody the weird "morality" and sensitivity of that era.
comment by MarkusRamikin · 2011-09-24T08:14:06.560Z · LW(p) · GW(p)
Mm, Planescape: Torment. Hope you've played it with all the fixes and restorations.
If there's one fictional universe I'd like to live in it's Planescape. A reality shapeable by willpower and belief. Give me a karach blade and I can, in principle, do anything. But sadly, *knowing* this would not in any way help me design a better future for this reality...
comment by [deleted] · 2012-02-04T20:45:20.393Z · LW(p) · GW(p)
And if so, is there any point in delaying that last step? Or should we just throw away our fears and... throw away our fears?
Because as you would say, growing stronger is fun. Slowly, and with much effort, throwing away your fears one by one seems like a potentially rich mine of fun for a mind. At the very least it sounds like a good story.
comment by ilovemath224 · 2012-02-19T18:05:41.154Z · LW(p) · GW(p)
Ironically, this post about how hard it is to write good stories about things that have good outcomes has given me inspiration on how to write those stories.
The biggest revelation I had? Use the reader's uncertainty as a source of conflict.
If you can make the reader uncertain about what you're writing about, then there's a perceived conflict in the story by the reader, making him/her ask, "Is this really a good thing?" This perceived conflict between reality and what the reader wants is definitely enough to drive a story, as now the reader wants to know whether it is a good thing or not. Creating that conflict is the easiest way, or one of the easiest, to write a good story about any situation where there shouldn't be much pain and conflict, such as a positive Singularity. When you're struggling to come up with good conflicts for a difficult subject to think about, it helps a lot to have an easy way to create more conflict.
One of the easiest ways to create the above conflict is to have something positive but seemingly unreasonable happen within that story, as described in one of the previous posts here. (I forgot which one, but it's only a few topics back, at most.) By doing so, you create an internal conflict between what society thinks is good and what the reader thinks is good, which, for the average person, can be very powerful. Another way to create the above conflict is to show trust issues that would pop up.
For example, why would any politician want to use an FAI? They could adopt one, seeing the potential benefits, or they could fear that their inner, corrupt system would be unraveled and that they would no longer be in a position of power, causing a blatant yet stupid refusal to adopt one. That latter situation is realistic and definitely creates conflict, so long as the FAI has been newly made and people are still getting used to it.
Offering the reader a choice of opinion can also help create the uncertainty conflict. After all, if they are presented with both sides of an argument, they now have to try to figure out which one is better; even if they just go with the side presented first, the option they didn't take still creates uncertainty.
For example: Let's say that there are cranes that use antigravity technology in the future. An article is presented to you about the antigravity engine of one of these cranes failing, the crane crashing to earth and killing the operator inside, depicting it as an immense tragedy. But then you are shown another article saying that the cranes work correctly 99.9% of the time, with over ten thousand hover cranes operated worldwide without further incident, and noting how much building speeds have gone up thanks to the usefulness of a crane with unlimited vertical range. Now you have to make a choice: are hover cranes a good thing, or a bad thing?
Now, I have no doubt that writing a book like this can be seen as too extreme, and might push people a bit too far. However, looking back at the main piece of advice, "Use the reader's uncertainty as a source of conflict", one can notice how it lines up with the definition of humor, "the contradiction between Form and Content". As you're already contradicting the reader to make him/her uncertain, it shouldn't take too much effort to make the Form and Content contradict as well, writing something one way while the actual information the reader receives goes another way. This has the benefit of making readers less angry at what you're trying to say, as they'll let certain messages in a comedy slide that they wouldn't let slide in a more serious work.
Of course, some things are naturally funny to begin with -- for example, a human telling a humanoid FAI a good joke, the FAI actually laughing, the human being confused, and then the FAI entering a detailed academic discussion of humor, knowing full well that the human would find that funny. After all, nobody expects robots to have a sense of humor! These things can definitely help ease the reader into a story, even if it is an otherwise serious work, as they'll feel more comfortable with something that would otherwise disturb them greatly.
Of course, there are also situations where there might not be huge conflicts to deal with in a story. To make that story work, you can make striving for something a conflict in itself. Have the character strive for something and fail a few times before they get it -- especially if that something is fun. That's a continuing conflict that can be used throughout any story, and although it seems bad, it can actually be written well if done right -- and by right, I mean making it so that the reader is uncertain whether the character is actually going to reach that goal.
For example: Teague, a character I plan to use in my book, is living in a post-singularity world. He helped make the FAI to save the lives of many people in the world, and in fact, saving that many lives was a major motivating factor in doing so. When hoverpacks become commercially available, he goes out and buys one, thinking that flying one is cool and that he'd have some fun. He has some initial fun due to the new experience, but it quickly fades away. It takes him a while to realize he isn't having fun, and when he does, he starts to look for something new to do to help others. So he decides to do X to try to achieve that goal of helping as many people as possible. What X is may vary from story to story, but it could easily fail or succeed, and be nearly anything, so long as it isn't similar to what he first did. (If he barely got any enjoyment from flying a hoverpack, then why would he get any enjoyment from flying a fighter jet?)
Making it seem as if attempts to have fun can fail makes the reader ask, "Will I really have fun doing this?" That question creates uncertainty, and if you're writing a book with the reader's uncertainty as the main conflict, then you're going to need that uncertainty. Of course, you want there to seem to be both a chance of failure and a chance of success, or else the reader will stop reading.
Forgot an additional way to create the reader's-uncertainty conflict: uncertain characters. If you have characters who are uncertain of, or still discovering, themselves, then the reader is going to be uncertain of them. This is a double bonus: you get to do quite a bit of character development, and that character development contributes to the conflict, and thus the plot, of the story.
For example: Greg isn't the bravest guy. He isn't the most self-confident person in the world, definitely not the bravest, but he knows what he's good at. He just has significant motivation problems. So he spends quite a bit of time searching for motivators that he could use to help with his work, without much success. Eventually, he finds that X is exactly what he needs to get motivated, and X was in front of his face all along! Now Greg just has to remember to implement X.
By making a character who is uncertain of/discovering themselves, you make the reader uncertain about whether the character will improve, and also prompt the reader to ask, "Do I have anything I don't know about myself?" The only problem with this is that it needs to seem realistic, because (as far as I know) very few fictional books, or other fictional media, portray conflicts like discovering oneself, and therefore, with little consistency with other fictional works to be had, it has to have some consistency with real life.
But won't writing a book like this lead to a bad story? Well, if your main conflict is making the reader uncertain, then that pretty much rules out Mary Sues from the get-go, as they will certainly achieve what they want. A character who seems like a Mary Sue, or just an amazing character, in these kinds of stories should make the reader ask either "Will it work?" or "Will this be a good thing?" to continue the reader's conflict. Having the future be entirely good is also not a good thing, as that does not provoke the question "Will this be a good thing?"; having the future be mostly good can work, as long as it does provoke that question. Bottom line: using the conflict of making the reader uncertain is not only the easiest way to write a book about a good thing with a good ending, but also acts as a catalyst for writing an entertaining story.
If you can keep the "Use the reader's uncertainty as a source of conflict" piece of advice in the back of your head while you write a book about some really good future, post-singularity or not, then you could potentially write a great story about that future. If you don't, well, you're going to fall victim to the trap so many bad writers fall into, and if you're uncertain whether you're good enough to write a story like this, then I'd strongly suggest you don't.
Geez, this is a long comment.
comment by Ishaan · 2013-10-06T03:48:40.347Z · LW(p) · GW(p)
I suggest that a story needs suspense, dissonance, and strong emotional reactions. These need not necessarily be pain. Some stories are interesting because they provoke wonder - ideas clicking together, strange worlds being explored. A lot of science fiction and fantasy stories accomplish this.
I've been mulling this over for a while. I'm posting this now because I just thought of a concrete example of a story which is popular and contains no pain or conflict:
http://www.galactanet.com/oneoff/theegg_mod.html
What makes the story work is the "mind-blown" effect. It's fun to imagine the implications if the story were true. It invokes a sense of awe and wonder, and it implicitly suggests hundreds of additional possible storylines.
Granted, it's a very short story.
comment by Emanresu · 2014-04-23T06:08:19.897Z · LW(p) · GW(p)
I just thought of another, larger and more unsettling problem. It's kind of hard for me to explain, but I'll try.
If the following statements are true:
- The only reason we need pain is to notify us of damage to ourselves or to things that matter to us.
- The only reason we need fear is to motivate us to avoid things that could cause damage to ourselves or things that matter to us.
- The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
- The only reason we need beliefs is to predict reality.
Then I am extremely concerned about whether the answers to the following questions might doom the continued, dynamic existence of sentient life merely by its very nature:
What would life be like for sentient beings such as ourselves if we either eliminated damage to ourselves and the things that matter to us, or minimized that damage to the point where it was insignificant to our overall well-being, and therefore could be mostly ignored if we so chose, only dealt with in such a way as to prevent it from becoming significant? In other words, what if we eliminated the need for pain? This was the question discussed in the article above.
What would life be like for sentient beings such as ourselves if we neutralized all threats to our survival and health, as well as eliminating all of the reasons we would have to misjudge something as a threat to our survival and health? Or at least minimized these threats and misjudgements of threat so that they are insignificant to our overall well-being and can be mostly ignored if we chose, only dealt with in such a way as to prevent them from becoming significant? In other words, what if we eliminated the need for fear?
What would life be like for sentient beings such as ourselves if the health, the safety, and the sustainability of the health and safety of all individual members of sentient species such as ourselves were maximized, to the point that we never needed to seek out things that help us or help the things that matter to us, or at least that the need for such help was minimized to the point of insignificance to our overall well-being, and therefore could be mostly ignored if we so chose, only dealt with in such a way as to prevent it from becoming significant? In other words, what if we eliminated the need for happiness?
Note: I did notice that our very definition of "human health" and "overall well-being" includes happiness, or perhaps average happiness. If you can't feel happiness, then we say you're not mentally healthy. I think this neglects the problem that we need happiness for a reason; it exists in the context of an environment where we need to seek out stimuli that help us, or at least that would probably have helped us in the ancestral environment. If we improve the capabilities of our own brains and bodies enough, eventually we will no longer need to rely on each other or on tools outside our own bodies and brains to compensate for our individual weaknesses. Which brings me to the fourth question.
What if our mental models of reality became so accurate that they were identical, or nearly identical, to it -- to the point where the only difference between reality and our models of it was ever so slightly more than the time it took for us to receive sensory information? Could a human mind become a highly realistic simulation of the universe merely by learning how to increase its own mental capacity enough and systematically eliminating all false models of the universe? And in that case, how can we know that our own universe is not such a simulation? If it is, if our universe is a map of another universe, is it a perfect map? Or is there a small amount of error, even inconsistency, in our own universe which would not exist in the original?
I recently learned in a neuroscience class that thinking is by definition a problem-solving tool -- a means to identify a path of causality from a current, less desirable state to a more desirable goal state. At least that's what I think it said. If we reached all possible goals, and ran out of possible goals to strive for, what do we do then? Generate a new virtual reality in which there are more possible goals to reach? Or stop thinking altogether? Something about both of those options doesn't sound right for some reason.
I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn't make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it? Because the amount of effort you expend towards the unattainable goal of perfection allows you to reach better goal states than you otherwise would if you did not expend that much effort? But what if we found a way to make the amount of effort spent equal, or at least proportional or close to proportional, to the actual desirability of the goal state that effort allows you to reach?
These questions are really bothering me.
Replies from: Elia_G, Elia_G
↑ comment by Elia_G · 2015-09-07T22:52:49.167Z · LW(p) · GW(p)
The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
See: http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/
If we reached all possible goals, and ran out of possible goals to strive for, what do we do then?
↑ comment by Elia_G · 2015-09-08T17:33:04.838Z · LW(p) · GW(p)
The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
That may be the only reason we evolved happiness or pleasure, but we don't have to care about what evolution optimized for, when designing a utopia. We're allowed to value happiness for its own sake. See Adaptation-Executers, not Fitness-Maximizers.
If we reached all possible goals, and ran out of possible goals to strive for, what do we do then?
Worthwhile goals are finite, so it's true we might run out of goals someday, and from then on be bored. But it doesn't frighten me too much because:
- We're not going to run out of goals as soon as we create an AI that can achieve them for us; we can always tell it to let us solve some things on our own, if it's more fun that way.
- The space of worthwhile goals is still ridiculously big. To live a life where I accomplish literally everything I want to accomplish is good enough for me, even if that life can't be literally infinite.* Plus, I'm somewhat open to the idea of deleting memories/experience in order to experience the same thing again.
- There are other fun things to do that don't involve achieving goals, and that aren't used up when you do them.
*Actually, I am a little worried about a situation where the stronger and more competent I get, the quicker I run out of life to live... but I'm sure we'll work that out somehow.
I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn't make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it?
I guess technically the real goal is to be "close to perfection", as close as possible. We pretend that the goal is "perfection" for ease of communication, and because (as imperfect humans) we can sometimes trick ourselves into achieving more by setting our goals higher than what's really possible.
comment by Elia_G · 2015-09-13T23:57:52.075Z · LW(p) · GW(p)
I'm curious as to what, more specifically, The Path of Courage looks like.
If broken legs have not been eliminated... Would a person still learn, over time, how to completely avoid breaking a leg - and the difference lies in having to learn it, rather than starting out with unbreakable legs? Or do we remain forever in danger of breaking our legs (which is okay because we'll heal and because the rest of life is less brutal in general)?
If the latter... What happens to "optimizing literally everything"? Will we experience pain and then make a conscious decision not to prevent it for next time, knowing that our life is actually richer for it? Or will we have mental states such that we bemoan and complain that pain happened, and hope it doesn't again, but just-not-think-about actually trying to prevent it ourselves? Or do we, in fact, keep optimizing as hard as we can... and simply trust that we'll never actually succeed so greatly that we de-story our life and regret it?
comment by Computious (computious) · 2022-04-13T03:11:55.266Z · LW(p) · GW(p)
Rewrote the Elder Author theorem to the Long-term Success theorem & it's now mentally kosher. I've seen the same pattern reflected in music, movies, & most art that traipses near scientific emulation. The greatest artists can be so Machiavellian, with pain-exegesis as the impetus, that they become magicians, potion-sellers. Once they are paid enough they rejoin the rational & shut up. We have a deal for regulating emotions in slightly fringe & treatable class differences. The problem that arises is that typically the well-off & intelligent aren't all that interested in art magicians, since "we can do it too". Thus we don't foot the bill.
The more enjoyable art classes, I agree, involve striving stories: they ask you to reframe, but typically don't ask you to hold something nihilistic (at least for very long). I view good arts as meaning-bakeries (they made the dough, you bake the bread) that can help steer the muddied mind. Art reward still favors the intelligent, as usual, but it's a collective attempt to allow boat-missers a second chance.
comment by Motley Fool (Evren Izzet) · 2023-04-24T16:08:10.441Z · LW(p) · GW(p)
I am aware of the 14 year difference between the time of this essay's writing and that of my comment.
When one reads Siddhartha, they find that the commands to be naïve to the pain one experiences and to enjoy the pleasure bestowed upon them would be difficult to adhere to in the presence of extreme pain (anyone who learns about Buddhist tenets wonders what to do when they lose a loved one or touch a gympie-gympie). Some ideals of the eightfold path would be easier to adhere to where pain is present but not unbearable. Buddhist tenets also instruct one to discipline oneself against accepting pleasure, and the issue of the absence of pleasure creating a new threshold for what counts as 'suffering' would be solved if humanity had the self-discipline to maintain the eightfold path (or some other idea like it) instead of charging into a game of hedonistic Mario Kart.
However, I do not expect anyone to have the self-control not to accept more pleasure if given the choice, even understanding that this would raise their 'pain threshold' or make them require exponentially more resources per unit of time, as Yudkowsky has discussed. If managed by a superintelligence, it would be possible for said superintelligence to limit this so that one can be 'satisfied' without needing to slurp up whole galaxies for human pleasure. The Singularity, in this scenario, would bring a more conditioned, Buddha-like lifestyle of indifference and gratitude.
I disagree with Vedic theology as much as I do Abrahamic and mythical theology.
I beg to be corrected if I am wrong.