The Gift I Give Tomorrow
post by Raemon · 2012-01-11T04:02:58.006Z · LW · GW · Legacy · 33 comments
This is the final post in my Ritual Mini-Sequence. Previous posts include the Introduction, a discussion on the Value (and Danger) of Ritual, and How to Design Ritual Ceremonies that reflect your values.
I wrote this as a concluding essay in the Solstice ritual book. It was intended to be at least comprehensible to people who weren’t already familiar with our memes, and to communicate why I thought this was important. It builds upon themes from the ritual book, and in particular, the readings of Beyond the Reach of God and The Gift We Give to Tomorrow. Working on this essay was transformative for me - it allowed me to finally bypass my scope insensitivity and other biases, so that I could evaluate organizations like the Singularity Institute with fairness. I haven’t yet decided what to do with my charitable dollars - it’s a complex problem. But I’ve overcome my emotional resistance to the idea of fighting X-Risk.
I don’t know if that was due to the words themselves, or to the process I had to go through to write them, but I hope others may benefit from this.
I thought ‘The Gift We Give to Tomorrow’ was incredibly beautiful when I first read it. I actually cried. I wanted to share it with friends and family, except that the work ONLY has meaning in the context of the Sequences. Practically every line is a hyperlink to an important, earlier point, and without many hours of previous reading, it just won’t have the impact. But to me, it felt like the perfect endcap to everything the Sequences covered, taking all of the facts and ideas and weaving them into a coherent, poetic narrative that left me feeling satisfied with my place in the world.
Except that... I wasn’t sure that it actually said anything.
And when I showed it to a few other people, they reacted similarly: “This is pretty, but what does it mean?” I knew I wanted to include it in our Solstice celebration, if for no other reason than “it was pretty.” Particularly pretty in a particular way that seemed right for the occasion. But I’m wary of things that seem beautiful and moving without really understanding why, especially when those things become part of your worldview, perhaps subtly impacting your decisions.
In order to use The Gift as part of the Solstice, it needed to be pared down. It’s not designed to be read out loud. This meant I needed to study it in detail, figuring out what made it beautiful so I could be sure to capture that part while pruning away the words that were difficult to pronounce or read in dim candlelight.
Shortly afterwards, I began the same work with “Beyond the Reach of God.”
Unlike The Gift, Beyond the Reach of God is important and powerful for very obvious reasons. If you have something you value more than your own happiness, if you care about your children’s children, then you need to understand that there is no God. Or at the very least, for whatever reason, for whatever mysterious end that you don’t understand, God doesn’t intervene.
If your loved ones are being tortured, or are dying of illness, or getting run over by a car, God will not save them. The actions that matter are the ones that impact the physical world, the world of interlinked causes that we can perceive. The beliefs that ultimately matter, when you care about more than your own subjective happiness, are the beliefs that allow you to make accurate predictions about the future. These beliefs allow you to establish the right social policies to protect your children from harm. They allow you to find the right medicine and treatment to keep your aging parents alive and healthy, both mentally and physically, for as long as possible. To keep the people you love part of your life. And to keep yourself part of theirs.
Unlike some in this community, I don’t entirely dismiss unprovable, comforting beliefs, so long as you have the right compartmentalization to keep them separate from your other decision making processes. A vague, comforting belief in an afterlife, or in a ‘natural, cyclical order of things’... returning to the earth and pushing up daisies... it can be useful to help accept the things you cannot change.
We still live in a world where Death exists. There are things we can’t change. Yet.
And those things can be horrible, and I don’t begrudge anyone a tool to work through them.
But if someone’s vague, comforting beliefs lead them to let a person go, not because they’d done everything they could to save them, but because they had a notion that they’d be together somehow in a supernatural world... if a belief leads someone to believe that they couldn’t change something that they, in fact, could have...
No. I can’t condone that.
It can be disturbing, going down the rationality rabbit hole. I started by thinking “I want to be succeeding at life,” and learned about a few biases that were affecting me, and I made some better choices, and that was good. But it wasn’t fully satisfying. I needed to form some coherent long term goals. Someone in my position might then say “Alright, I want to be more successful at my career.”
But then maybe they realize that success at their career wasn’t actually what was most important to them. They didn’t need that money; what they wanted was the ability to purchase things that make them happy, and support their family, and have the security to periodically do fun projects. The career was just one way of doing that. And it may not have been the best way. And suddenly they’re open to the entirety of possibility-space, a million different paths they could take that might or might not leave them satisfied. And they don’t have any of the tools they need to figure out which ones to take. Some of those tools have already been invented, and they just need to find them. Others, they may need to invent for themselves.
The problem is that most people don’t have a good understanding of their values. “Be Happy” is vague, so is “Have a nice family,” so is “Make the world a better place.” Vaguest of all is “Some combination of the above.”
If you’re going down the rationality rabbit hole, you need to start figuring out your REAL values, instead of reciting cached thoughts that you’ve picked up from society. You might start exploring cognitive science to give you some insight into how your mind works. And then you’d start to learn that the mind is a machine, that follows physical rules. And that it’s an incoherent mess, shaped by a blind idiot god that wasn’t trying to make us happy or give us satisfying love lives or a promising future - it was just following a set of mathematical rules that caused the propagation of whatever traits increased reproductive fitness at the time.
And it’s not even clear that there’s a singular you in any of this. Your brain is full of separate entities working at cross purposes; your conscious mind isn’t necessarily responsible for your decisions; the “you” of today isn’t necessarily the same as the “you” of yesterday or tomorrow. And like it or not, this incoherent mess is what your hopes and dreams and morals are made of.
Maybe, for a moment, you come to believe that it all IS really meaningless. We’re not put here with a purpose. The universe doesn’t care about us. Love isn’t inherently any more important than paperclips. The very concept of a continuous self isn’t obviously true. When all is said and done, morality isn’t “real” in an objective sense. There’s just matter, and math. So why the hell worry about anything?
Or maybe instead you’d flinch away from these ideas. Avoid the discomfort. You can do that. But these aren’t just silly philosophical questions that can be ignored. Somebody has to think about them. Because as technology moves forward, we *will* be relying increasingly on automated processes. Not just to work for us, but to think for us. Computers are already better at solving certain types of problems than the average expert. Machine intelligence is almost definitely coming, and society will have to change rapidly around it, and it will become incredibly important for us to know what it is we actually care about. Partly so that we don’t accidentally change ourselves into something we regret. But also so that if and when an AI is created which has the ability to improve itself, and rapidly becomes smart enough to convince its human creators to give it additional resources for perfectly “good” reasons, until it suddenly is powerful enough to grow on its own with only our initial instructions to guide it... we better hope that those initial instructions contained detailed notes about everything we hold dear.
We better hope that the AI’s interior world of pure math includes some kind of ghost in the machine that looks over each step and thinks “yes, my decisions are still moving in a good direction.” That ghost-in-the-machine will only exist if we deliberately put it there. And the only way to do that is to understand ourselves well enough to bother explaining that no, you don’t use the atoms of people to create paperclips. You don’t just “save as many lives as possible” by hooking people up to feeding tubes. You don’t make everyone happy by pumping them full of heroin, you don’t go changing people’s bodies or minds without their consent.
None of these things are remotely obvious to a ghost of perfect emptiness that wasn’t shaped for millions of years by a blind idiot god. Many humans wouldn’t even consider them as options. But someday people may build a decision-making machine with the capacity to surpass us, and those people will need to understand the convoluted mess of values that makes up their mind, and mine, and yours. They’ll need to be able to reduce an understanding of love to pure math, that a computer can comprehend. Because the future is at stake.
It would be nice to just say “don’t build the superintelligence.” But in the Information Age, preventing technological development is just not a reliable safeguard.
This may all seem far-fetched, and if you weren’t already familiar with a lot of these ideas, I wouldn’t expect you to be convinced in these few pages (indeed, you should be demanding more than three paragraphs of assertions as evidence). But even without the risk of AI, the future is still at stake. Hell, the present is at stake. People are dying as we speak. And suffering. Losing their autonomy. Their equality. Losing the ability to control their bodies. Even for those who lived good lives in modern countries, age can creep over them and cripple their ability not just to move but to think and decide, destroying everything they thought made them human until all that’s left is a person, trapped in a crumbling body, who can’t control their own life but who desperately doesn’t want to die alone.
This is a monstrously harsh reality. A monstrously hard problem, not at all calibrated to our current skills. The problems extend beyond the biological processes that make death a reality, and into the world of resources and politics and limited space. It’s easy to decide that the problem is too hard, that we’ll never be able to solve it. And this is just the present. All of the suffering of the people currently alive pales in comparison to the potential suffering of future generations, or worse, to the lives that might go unlived if humanity makes too many mistakes in an unfair universe and erases itself.
What is it about the future that’s worth protecting? What makes it worth it to drag eight-thousand-pound stones across 150 miles of land, for the benefit of people who won’t be born for centuries, who you’ll never see? I can tell you my answer: a young mind, born millennia from now, whose values are still close enough to mine that I can at least recognize them. Who has the mental framework to ask of its parents, “Why does love exist?” and to care about the answer to the question.
The answer is as ludicrously simple as it is immensely complicated, and you may not have needed The Gift We Give to Tomorrow to explain it to you. Love exists; it was shaped by blind mathematical forces that don’t care about anything. But it exists and we care about it - we care so, so very deeply. And not just about love. Creativity. Curiosity. Excitement. Autonomy. Other people. Morality. Our children’s children. We don’t need a reason to care about these things. We may not fully understand them. But they exist. For us, they are real.
The Gift We Give to Tomorrow walked me through all this understanding. Deep, down into the heart of the abyss where nothing actually matters. Pretending to no comforting lies. Cutting away the last illusions. And still, it somehow left me with a vision of humanity, of the universe, of the future, that is beautiful and satisfying.
It doesn’t matter that it didn’t really say anything new that I hadn’t already worked out.
It was just beautiful. Just because.
That beauty, that vision of the future, that is what is worth protecting. That’s why I’m sacrificing comfort and peace of mind. That’s why I’m thinking hard, rebelling against my initial instinct to make fun video games. My second instinct, to give to the first charity that shows me a picture of an adorable orphan, or that I’m already familiar with in some way. My third instinct, to settle for saving maybe a few dozen lives.
My instincts were shaped by blind mathematical forces in an ancestral environment where one orphan was the most I could be expected to worry about. And it is my prerogative, as one small conscious fragment of an incoherent sentient mind, to look at the part of my brain that thinks “that’s all that matters”, and rebel. Take the cold, calculating long view. It’s not enough to think in the moral terms that my mind is naturally good at.
A million people feel like a statistic. They feel even more like a statistic when they live in a distant country. They feel even more like a statistic when they live in a distant future and their values have drifted somewhat from the things we care about today.
But those people are not a statistic. A million deaths is a million tragedies. A billion deaths is a billion tragedies. The possible extinction of the human race is something fundamentally worse than a tragedy, something I still don’t have a word for.
I don’t know what exactly I’m capable of doing, to bring about the most good I can. It might be working hard at a high-paying programming job and donating to effective charities. It might be directly working on problems that save lives, or which educate future generations to be able to save even more. It might be investing in companies that are producing important services but in a for-profit context. It might be working on scientific research in any one of a hundred important fields. It might be creating art that moves people in important ways.
It might be contributing to AI research. It might not. I don’t know. This isn’t about abandoning one familiar cause for another. When the future is at stake, I don’t have the luxury of not thinking hard about the right choice or passing the buck to a modern Pascal’s wager. Current organizations working on AI research might be effective enough at their jobs to be worth supporting. They might not. They might be worth supporting later, but not yet. Or vice versa. So many factors to consider.
I have to decide what’s good, and I have to decide alone, and I can’t take forever to think about it.
I can’t expect everyone, or even me, to devote their lives to this question. I don’t know what kind of person I am yet. Right now I’m in a fever pitch of inspiration and I feel ready to take on the world, but when all is said and done, I do mostly care about my own happiness. And I think that’s okay - I think most people should spend most of their time seeing to their own needs, building their own community. But my hope is that I can find a way to be happy and contribute to a greater good at the same time.
In the meantime, I wrote this book, and planned an evening that bordered on religious service. I did this for a lot of reasons, but the biggest one was to battle parts of my mind that I am not satisfied with. The parts of me that think a rare, spectacular disease is more important than a common, easily fixed problem. The parts of me that think a smiling, hungry orphan is more important than a billion lives. The parts of me that I have to keep fighting in order to make good decisions.
Because I am fucking sick of having to feel like a cold hearted bastard, when I try to make the choice that is good.
I’m willing to feel that way, if I have to. It’s worth it. But I shouldn’t have to, and neither should you.
To fix this, I use art. And good art sometimes has to blur the line between fact and fiction, using certain kinds of lies so that my lizard brain can fully comprehend certain other kinds of truths. To understand why 6 billion people are more important than a single hungry orphan, it can help to tell a story.
Not about six billion people, but about one child.
Across space and time, ages from now, ever so far away: In a universe of pure math, where there is no loving god to shelter us nor punish the Genghis Khans of the world.... there exists the possibility of a child whose values I can understand, asking their parents “Why does love exist?”
That child’s existence is not inevitable. It will be born, or not, depending on actions that humans take today. It will suffer, or not, depending on the direction that humanity steers itself. It will die in a hundred, a thousand, or a million years, depending on how far we progress in solving the problem of death. And I don’t know for sure whether any of this will specifically require your actions, or mine.
That child is beautiful. The very possibility of that child is beautiful. That beauty is worth protecting. I don’t speak for the entire Less Wrong community, but I write this to honor the birth of that child, and everything that child represents: Peace on earth, and perhaps across the galaxy. Good will, among all sentient minds. Scientific and ethical progress. All the hard work and sacrifice that these things entail.
33 comments
Comments sorted by top scores.
comment by cousin_it · 2012-01-11T13:02:19.205Z · LW(p) · GW(p)
And not just about love. Creativity. Curiosity. Excitement. Autonomy. Other people. Morality. Our children’s children.
That's a nice list, but also disturbing in a way. I hope that FAI's understanding of "extrapolated human volition" doesn't reduce to "pick the values that humans profess in public".
↑ comment by Will_Newsome · 2012-01-11T13:52:40.860Z · LW(p) · GW(p)
Yay for scapegoating, rent seeking, humor at the expense of low status social groups, unreflective support of information cascades, asserting social dominance, solitaire, and adultery!
↑ comment by TheOtherDave · 2012-01-11T14:29:18.977Z · LW(p) · GW(p)
...and jaywalking. Don't forget jaywalking.
One of the implications of this is that if a superintelligence ever does work out humanity's coherent extrapolated volition and develop the means to implement it, and for some inexplicable reason asked humanity to endorse the resulting plan before implementing it, humanity would presumably reject it... perhaps not outright, but it would contain too many things that we profess to abhor for most of us to endorse it out loud.
You'd get a million versions of "Hey, that's a great start, but this bit here with the little kid in the basement in the middle of the city, could you maybe get rid of that part, and oh by the way paint the shed blue?" and the whole thing would die in committee.
↑ comment by kilobug · 2012-01-11T15:11:18.977Z · LW(p) · GW(p)
The FAI would more likely implement the CEV progressively than in one go. Any change too drastic at once will be rejected. But if you go by steps, it's much easier to accept.
Also, don't underestimate the persuasion power a super-intelligence would have. For the same reason an AI box would not work, a powerful enough AI (friendly or not) will find a way to persuade most of humanity to accept its plans, because it'll understand where our rejections come from, find ways to counter and circumvent them, and use enough superstimulus or offers of future superstimulus.
↑ comment by TheOtherDave · 2012-01-11T16:49:08.651Z · LW(p) · GW(p)
I completely agree about the ability to circumvent humanity's objections, either by propaganda as you describe, or just by ignoring those objections altogether and doing what it thinks best. Of course, if for whatever reason the system were designed to require uncoerced consent before implementing its plans, it might not use that ability. (Designing it to require consent but to also be free to coerce that consent via superstimuli seems simply silly: neither safe nor efficient.)
↑ comment by kilobug · 2012-01-11T17:07:55.356Z · LW(p) · GW(p)
Coercion is not binary. I was not thinking of the AI threatening to blow up Earth if we refuse the plan, or exposing us to a quick burst of superstimulus so high we would do anything to get it again, or lying about its plans, nor any of those forms of "cheating".
But even an AI which is forbidden to use those techniques, and requires "uncoerced" consent - no lying, no threats, no creating addiction, ... - would be able to present the plan (without lying, even by omission, about its content) in such a way that we'll accept it relatively easily. Superstimulus, for example, doesn't need to be used to create addiction or to blackmail, but can be just a natural, genuine consequence of accepting the plan. Things we might find horrifying because they are too alien would be presented with a clear analogy, or as the conclusion of a slow introductory path, in which no step is too much of an inferential leap, ...
↑ comment by TheOtherDave · 2012-01-11T17:36:08.749Z · LW(p) · GW(p)
I agree with you that, if a sufficiently powerful superintelligence is constrained to avoid any activities that a human would honestly classify as "coercion," "threat," "blackmail," "addiction," or "lie by omission" and is constrained to only induce changes in belief via means that a human would honestly classify as "natural" and "genuine," it can nevertheless induce humans to accept its plan while satisfying those constraints.
I don't think that prevents such a superintelligence from inducing humans to accept its plan through the use of means that would horrify us had we ever thought to consider them.
It's also not at all clear to me that the fact that X would horrify me if I'd thought to consider it is sufficient grounds to reject using X.
↑ comment by kilobug · 2012-01-11T15:15:12.897Z · LW(p) · GW(p)
Most of those seem to me things humans would not do much "if we knew more, thought faster, were more the people we wished we were, had grown up farther together", to take Eliezer's words as definition of CEV. Those are things humans do now because they don't know enough (about game theory, fun theory, ...), they don't think fast enough of the consequences, they suffer from different kind of akrasia and are not "the people they wished they were", and they didn't grow up far enough together.
That's one of the things I really like about CEV: it acknowledges that what most humans spontaneously do now is not what our CEV is.
↑ comment by Will_Newsome · 2012-01-11T16:01:30.070Z · LW(p) · GW(p)
You can't have your dynamic inconsistency and eat it too.
↑ comment by wedrifid · 2012-01-11T15:21:06.505Z · LW(p) · GW(p)
Yay for scapegoating, rent seeking, humor at the expense of low status social groups, unreflective support of information cascades, asserting social dominance, solitaire, and adultery!
I needed to find the context, but that found, I have to say this is the best comment I've seen all month! Deceptively insightful.
↑ comment by Raemon · 2012-01-11T17:27:44.016Z · LW(p) · GW(p)
I'm not sure what you mean by context. Is there a specific reference at work here?
↑ comment by wedrifid · 2012-01-11T18:40:17.046Z · LW(p) · GW(p)
The immediate parent of the comment in question? You can imagine what it looked like seeing Will's comment as it appears in the recent comments thread. There are at least three wildly different messages it could be conveying based on what it is a response to. This was the best case.
↑ comment by pjeby · 2012-01-13T04:40:37.551Z · LW(p) · GW(p)
Yay for scapegoating, rent seeking, humor at the expense of low status social groups, unreflective support of information cascades, asserting social dominance, solitaire, and adultery!
Don't forget satire and sarcasm! ;-)
↑ comment by wedrifid · 2012-01-13T04:44:56.660Z · LW(p) · GW(p)
And, as we see in the case of what Will seems to be doing here - irony and accepting-optimistic-cynicism (we need a word for that.)
↑ comment by Will_Newsome · 2012-01-17T17:43:35.960Z · LW(p) · GW(p)
How many upvoters do you reckon interpreted my comment the way you did versus the way Eby apparently did?
↑ comment by TheOtherDave · 2012-01-17T18:00:09.656Z · LW(p) · GW(p)
...Vs. interpreting it as non-ironic endorsement of the items you list.
↑ comment by Will_Newsome · 2012-01-18T19:02:38.086Z · LW(p) · GW(p)
Ironic versus non-ironic endorsement is somewhat blurred in this case I think.
↑ comment by wedrifid · 2012-01-11T13:06:12.230Z · LW(p) · GW(p)
That's a nice list, but also disturbing in a way. I hope that FAI's understanding of "extrapolated human volition" doesn't reduce to "pick the values that humans profess in public".
I know! I hope it is also able to pick up things like , and .
comment by rwallace · 2012-01-11T14:10:49.264Z · LW(p) · GW(p)
Most of this post, along with the previous posts in the series, is both beautiful and true - the best combination. It's a pity it had to be mixed in with the meme about computers magically waking up with superpowers. I don't think that meme is necessary here, any more than it's necessary to believe the world was created in 4004 BC to appreciate Christmas. Taking it out - discussing it in separate posts if you wish to discuss it - is the major improvement I would suggest.
↑ comment by Raemon · 2012-01-11T15:44:24.069Z · LW(p) · GW(p)
A few people commented that that section was jarring, and I kept editing it to be less jarring, but if folks on Less Wrong are actively bothered by it then it may simply need to get cut.
The ritual book as a whole is meant to reflect my beliefs (and the beliefs of a particular community) at the time that I wrote it. Partly so I can hand it to friends and family who are interested and say "this basically sums up my worldview right now" (possibly modifying them towards my worldview would be a nice plus, but not the main goal). But also so that in 10-20 years I can look back and see a snapshot of what I cared about in 2011. Grappling with the implications of the Singularity was one of the defining things of this process. If it was just a matter of "I care a lot about the world", this whole project wouldn't have had the urgency it did. It would have just been a matter of persuading others to care, or sharing a meme, rather than forcing myself to rebel against a powerful aspect of my psyche. It's important that I took it seriously, and it's important that I ended on a note of "I still do not know the answer to this question, but I think I'd be better able to deal with it if it turned out to be true, and I commit to studying it further."
So I'm leaving the Solstice pdf as is. But this post, as a standalone Less Wrong article, may be instrumentally useful as a way to help people take the future seriously in a general sense. I'm going to leave it for now, to get a few more data points on people's reaction, but probably edit it some more in a day or so.
There will be a new Solstice book next year, and there's at least a 50% chance that I will dramatically tone down the transhumanist elements to create a more.... secular? (ha) version of it that I can promote to a slightly larger humanist population.
One data point btw: My cousin (25 year old male, educated and nerdy but not particularly affiliated with our meme cluster) said that he found the AI section of this essay jarring, but in the end understood why I took it seriously and updated towards it. I don't know if that would have happened if we hadn't already been personally close.
↑ comment by hamnox · 2012-01-12T22:31:06.896Z · LW(p) · GW(p)
I like it as is, but I think that's partly because I'm trying to do the same thing you are at the moment--update emotionally on existential risks like uFAI. It's a problem that needs to be taken seriously, and its placement here gives a concrete villain to what might otherwise turn into a feel-good applause lights speech.
↑ comment by Raemon · 2012-01-13T01:15:19.648Z · LW(p) · GW(p)
I think that's a good point, and I'll be leaving it as is for now.
I will eventually want to rewrite it (or something similar) with more traditional humanist elements. This brings up another question: if you remove the uFAI antagonist, is it actually bad that it's a feel-good applause lights speech? It's intended to be a call to action of sorts, without railroading the reader down any single action, other than to figure out their own values and work towards them. I don't know if it really succeeded at that, with or without the uFAI references.
Edit: wow, totally forgot to add a word that altered the meaning of a sentence dramatically. Fixed.
↑ comment by nshepperd · 2012-01-11T14:46:44.756Z · LW(p) · GW(p)
If the singularity was magical I'd be a lot more hopeful about the future of humankind (even humans aren't clever enough to implement magic).
I agree with you a bit though.
ETA: Wait, that's actually technically inaccurate. If I believed the singularity was magical I'd be a lot more hopeful about the future of humankind. But I do hope to believe the truth, whatever is really the case.
comment by Dmytry · 2012-01-14T10:56:20.686Z · LW(p) · GW(p)
Well, tbh, the only way I see the ritual as useful is for manipulation - the manipulation of kids, the conformance to a society which is tolerant of weird repetitive rituals but not of equally weird spontaneous actions (the same kind of action, if repeatedly done by multiple people, is protected by the constitution and lets you avoid taxation, and if done by one person spontaneously gets that person sectioned into a mental hospital).
With regards to the utility, the rituals really annoy people who are not into ritualized actions. When your relative is into some ritual, and you have to conform to it, move your schedule around, miss things, and so on - you feel you are being manipulated, pushed around, and screwed over. You are being manipulated by the manipulator who's adjusting their own happiness function to force you to do nonsensical stuff to avoid making them unhappy. Arbitrarily adjusting your own happiness function, when there are other people around who care about your happiness, is in some important way deeply dishonest and abuses their care.
↑ comment by Rain · 2012-01-15T01:26:09.140Z · LW(p) · GW(p)
Read this chapter of Secular Wholeness for an engineer's take on the purpose and usefulness of rituals in non-religious life.
Summary:
- They give time-structure to our lives on the daily, weekly, and annual levels.
- They assist and encourage the formation of trust and community between people.
- They give shape to public expressions of powerful emotions: expressions of grief, as at funerals; and of joy, as at weddings, graduations, birthdays and anniversaries.
- They help to reorient and stabilize our own feelings when we need to comprehend and cope with crucial life passages.
comment by graviton · 2012-01-13T12:33:14.761Z · LW(p) · GW(p)
I suspect that making it about rationality might be kind of a mixing-utilons-with-warm-fuzzies situation, where you end up doing both poorly. However, the person(s) leading the thing damn well better be rationalists. Probably everyone else involved as well.
I exist within a subculture where rituals are kind of normal, and other things I would expect to make this audience cringe. I violently rejected it all while reading the sequences because the value I had perceived in it was insane. Around the time I finished them I began to understand the actual value of it, and I really think the sequences provide more than enough to safely engage in this sort of thing.
My first few attempts at commenting on this turned into giant walls of text and I think I might have some things to contribute to the discussions in that mailing list.
↑ comment by Raemon · 2012-01-13T15:13:54.824Z · LW(p) · GW(p)
I suspect that making it about rationality might be kind of a mixing-utilons-with-warm-fuzzies situation, where you end up doing both poorly.
I understand the point, but I'm not sure what you're saying it should be about?
Mailing list will probably start early next week.
↑ comment by graviton · 2012-10-16T21:40:51.269Z · LW(p) · GW(p)
Sorry about the very lengthy delay in response.
In my experiences there has always been (at minimum) a surface layer of magical nonsense, but it has always seemed that the real point was just bonding with other individuals; sharing the aggrandized experience with them for the sake of feeling like you're part of the same thing.
That sort of thing was (I imagine) a relatively ubiquitous feature of ancestral tribes, and I suspect that that led to our neural pathways evolving in such a way that sharing in ritualized experiences is a vital part of how we come to feel like we are truly a part of the group/tribe/etc.
And magical nonsense can be made in such a way that it parallels the situation of anything from one person to the whole group in order to... gently trick a person into thinking about something you think they really ought to think about without overtly putting them on the spot. Also, if you're the one making up the magical nonsense, and you're completely misguided about what another person's situation is, more abstract ways of communicating essentially leave infinite degrees of freedom in terms of reasonable-seeming-interpretations. This way, you could think you're giving one specific message to the whole group, when really everyone walks away with a completely different significant-feeling message in their head, and yours was actually far less relevant than you thought it was.
And then everyone feels refreshed and closer to the others involved after sharing in the experience.
Of course, people really, really, REALLY, should be intelligent rationalists on their own if they're going to get into that sort of behavior, since it is arguably a recipe for a cult.
comment by MatthewBaker · 2012-01-11T05:55:03.962Z · LW(p) · GW(p)
I believe that as thoughtful citizens of a future trans-humanist republic The Gift We Give To Tomorrow is our pursuit of a world that can never have an event fundamentally worse than a tragedy because of the way it has been structured. Then again... structuring that universe correctly is almost as scary as having the world undergo a fundamental change.
That being said, I love this post series and wish I lived in New York for the event itself ^^