What happens when your beliefs fully propagate
post by Alexei · 2012-02-14T07:53:25.005Z · LW · GW · Legacy · 79 comments
This is a very personal account of thoughts and events that have led me to a very interesting point in my life. Please read it as such. I present a lot of points, arguments, and conclusions, but that's not what this is about.
I started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). On the night of February 10, 2012, all the rationality learning and practice finally caught up with me. Like water that had been building up behind a dam, it finally broke through and flooded my poor brain.
"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those that are smart enough to see it. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) it too ridiculous to want to repeat, but it all ended up with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without as much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.
Through the turmoil of jumbled and confused thoughts came the shock of my most valuable belief propagating through my mind, breaking down the final barriers, reaching its logical conclusion. FAI is the most important thing we should be doing right now! I already knew that. In fact, I had known it for a long time, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagate to its natural conclusion: I should be doing something to help this cause.
I can't say: "It's the most important thing, but..." Yet, I've said it so many times inside my head. It's like hearing other people say: "Yes, X is the rational thing to do, but..." What follows is a defense that allows them to keep the path to their goal that they are comfortable with, that they are already invested in.
Interestingly enough, I've already thought about this. Right after the rationality minicamp, I asked myself the question: Should I switch to working on FAI, or should I continue to make games? I thought about it heavily for some time, but I felt like I lacked the necessary math skills to be of much use on the FAI front. Making games was the convenient answer. It's something I've been doing for a long time, and it's something I am good at. I decided to make games that explain various ideas that LW presents in text. This way I could help raise the sanity waterline. It seemed like a very nice, neat solution that allowed me to do what I wanted and feel a bit helpful to the FAI cause.
Looking back, I was dishonest with myself. In my mind, I had already written the answer I wanted. I convinced myself that I hadn't, but part of me certainly sabotaged the whole process. But that's okay, because I was still somewhat helpful, even if maybe not in the most optimal way. Right? Right?? The correct answer is "no". So, now I have to ask myself again: What is the best path for me? And to answer that, I have to understand what my goal is.
Rationality doesn't just help you get what you want better/faster. Increased rationality starts to change what you want. Maybe you wanted the air to be clean, so you bought a hybrid. Sweet. But then you realized that what you actually want is for people to be healthy. So you became a nurse. That's nice. Then you realized that if you did research, you could be making an order of magnitude more people healthy. So you went into research. Cool. Then you realized that you could pay for multiple researchers if you had enough money. So you went out, became a billionaire, and created your own research institute. Great. There was always you, and there was your goal, but everything in between was (and should be) up for grabs.
And if you follow that kind of chain long enough, at some point you realize that FAI is actually the thing right before your goal. Why wouldn't it be? It solves everything in the best possible way!
People joke that LW is a cult. Everyone kind of laughs it off. It's funny because cultists are weird and crazy, but they are so sure they are right. LWers are kind of like that. Unlike other cults, though, we are really, truly right. Right? But, honestly, I like the term, and I think it has a ring of truth to it. Cultists have a goal that's beyond them. We do too. My life isn't about my preferences (I can change those), it's about my goals. I can change those too, of course, but if I'm rational (and nice) about it, I feel that it's hard not to end up wanting to help other people.
Okay, so I need a goal. Let's start from the beginning:
What is truth?
Reality is truth. It's what happens. It's the rules that dictate what happens. It's the invisible territory. It's the thing that makes you feel surprised.
(Okay, great, I won't have to go back to reading Greek philosophy.)
How do we discover truth?
So far, the best method has been the scientific principle. It has also proved itself over and over again by providing actual tangible results.
(Fantastic, I won't have to reinvent the thousands of years of progress.)
Soon enough humans will commit a fatal mistake.
This isn't a question; it's an observation. Technology is advancing on all fronts to the point where it can be used on a planetary (and wider) scale. Humans make mistakes. Making a mistake with something that affects the whole world could result in an injury or death... for the planet (and potentially beyond).
That's bad.
To be honest, I don't have a strong visceral negative feeling associated with all humans becoming extinct. It doesn't feel that bad, but then again I know better than to trust my feelings on such a scale. However, if I had to simply push a button to make one person's life significantly better, I would do it. And I would keep pushing that button for each new person. For something like 222 years, by my rough calculations. Okay, then. Humanity injuring or killing itself would be bad, and I can probably spend a century or so trying to prevent that, while also doing something that's a lot more fun than mashing a button.
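A minimal sketch of where that rough figure comes from, assuming one button press per second and roughly seven billion people (both inputs are my assumptions, not stated above):

```python
# Rough check of the "222 years" figure: one press per person,
# one press per second, for a world population of ~7 billion (2012 estimate).
people = 7_000_000_000                      # assumed population
seconds_per_year = 60 * 60 * 24 * 365.25    # seconds in a year

years = people / seconds_per_year
print(f"~{years:.0f} years of nonstop button-pushing")   # ~222 years
```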
We need a smart safety net.
Not only smart enough to know that triggering an atomic bomb inside a city is bad, or that you get the grandma out of a burning building by teleporting her in one piece to a safe spot, but also smart enough to know that if I keep snoozing every day for an hour or two, I'd rather someone stepped in and stopped me, no matter how much I want to sleep JUST FIVE MORE MINUTES. It's something I might actively fight, but it's something that I'll be grateful for later.
FAI
There it is: the ultimate safety net. Let's get to it?
Having FAI will be very, very good; that's clear enough. Getting FAI wrong will be very, very bad. But there are different levels of bad, and, frankly, a universe tiled with paper clips is actually not that high on the list. Having an AI that treats humans as special objects is very dangerous. An AI that doesn't care about humans will not do anything to humans specifically. It might borrow a molecule, or an arm or two from our bodies, but that's okay. An AI that treats humans as special, yet is not Friendly, could be very bad. Imagine 3^^^3 different people being created and forced to live really horrible lives. It's hell on a whole other level. So, if FAI goes wrong, pure destruction of all humans is a pretty good scenario.
Should we even be working on FAI? What are the chances we'll get it right? (I remember Anna Salamon's comparison: "getting FAI right" is to us what "trying to make the first atomic bomb explode in the shape of an elephant" would have been a century ago.) What are the chances we'll get it horribly wrong and end up in hell? By working on FAI, how are we changing the probability distribution for various outcomes? Perhaps a better alternative is to seek a decisive advantage like brain uploading, where a few key people can take a century or so to think the problem through?
I keep thinking about FAI going horribly wrong, and I want to scream at the people who are involved with it: "Do you even know what you are doing?!" Everything is at stake! And suddenly I care. Really care. There is curiosity, yes, but it's so much more than that. At LW minicamp we compared curiosity to a cat chasing a mouse. It's a kind of fun, playful feeling. I think we got it wrong. The real curiosity feels like hunger. The cat isn't chasing the mouse to play with it; it's chasing it to eat it because it needs to survive. Me? I need to know the right answer.
I finally understand why SIAI isn't focusing very hard on the actual AI part right now, but is instead pouring most of their efforts into recruiting talent. The next 50-100 years is going to be a marathon for our lives. Many participants might not make it to the finish line. It's important that we establish a community that can continue to carry the research forward until we succeed.
I finally understand why, when I was talking with Carl Shulman about making games that help people be more rational, his value metric was to see how many academics they could impact/recruit. That didn't make sense to me. I just wanted to raise the sanity waterline for people in general. I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane: hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.
I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.
I've realized a lot of things lately. A lot of things have been shaken up. It has been a very stressful couple of days. I'll have to re-answer the question I asked myself not too long ago: What should I be doing? And this time, instead of hoping for an answer, I'm afraid of the answer. I'm truly and honestly afraid. Thankfully, I can fight pushing a lot better than pulling: fear is easier to fight than passion. I can plunge into the unknown, but it breaks my heart to put aside a very interesting and dear life path.
I've never felt more afraid, more ready to fall into a deep depression, more ready to scream and run away, retreat, abandon logic, go back to the safe comfortable beliefs and goals. I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it. Armed with my rationality toolkit, I could probably do wonders in that field.
Yet, I've also never felt more ready to make a step of this magnitude. Maximizing utility, all the fallacies, biases, defense mechanisms, etc, etc, etc. One by one they come to mind and help me move forward. Patterns of thoughts and reasoning that I can't even remember the name of. All these tools and skills are right here with me, and using them I feel like I can do anything. I feel that I can dodge bullets. But I also know full well that I am at the starting line of a long and difficult marathon. A marathon that has no path and no guides, but that has to be run nonetheless.
May the human race win.
79 comments
Comments sorted by top scores.
comment by cousin_it · 2012-02-14T18:01:54.585Z · LW(p) · GW(p)
Does everyone else here think that putting aside your little quirky interests to do big important things is a good idea? It seems to me that people who choose that way typically don't end up doing much, even when they're strongly motivated, while people who follow their interests tend to become more awesome over time. Though I know Anna is going to frown on me for advocating this path...
Replies from: AnnaSalamon, Yvain, jimmy, Alexei, fiddlemath, scientism, Vladimir_Golovin, None, David_Gerard
↑ comment by AnnaSalamon · 2012-02-15T18:13:45.972Z · LW(p) · GW(p)
Though I know Anna is going to frown on me for advocating this path...
Argh, no I'm not going to advocate ignoring one's quirky interests to follow one's alleged duty. My impression is more like fiddlemath's, below. You don't want to follow shiny interests at random (though even that path is much better than drifting randomly or choosing a career to appease one's parents, and cousin_it is right that even this tends to make people more awesome over time). Instead, ideally, you want to figure out what it would be useful to be interested in, cultivate real, immediate, curiosity and urges to be interested in those things, work to update your anticipations and urges so that they know more of what your abstract/verbal reasoning knows, and can see why certain subjects are pivotal…
Not "far-mode reasoning over actual felt interests" but "far-mode reasoning in dialog with actual felt interests, and both goals and urges relating strongly to what you end up actually trying to do, and so that you develop new quirky interests in the questions you need to answer, the way one develops quirky interests in almost any question if one is willing to dwell on it patiently for a long time, with staring with intrinsic interest while the details of the question come out to inhabit your mind...
Replies from: multifoliaterose
↑ comment by multifoliaterose · 2012-02-19T01:34:07.700Z · LW(p) · GW(p)
I find this comment vague and abstract; do you have examples in mind?
↑ comment by Scott Alexander (Yvain) · 2012-02-15T01:50:11.629Z · LW(p) · GW(p)
I think the flowchart for thinking about this question should look something like:
1. In the least convenient possible world, where following your interests did not maximize utility, are you pretty sure you really would forego your personal interests to maximize utility? If no, go to 2; if yes, go to 3.
2. Why are you even thinking about this question? Are you just trying to come up with a clever argument for something you're going to do anyway?
3. Okay, now you can think about this question.
I can't answer your question because I've never gotten past 2.
Replies from: AnnaSalamon, None, Nectanebo, cousin_it, Viliam_Bur
↑ comment by AnnaSalamon · 2012-02-15T18:31:00.728Z · LW(p) · GW(p)
I mostly-agree, except that question 1 shouldn't say:
"In a least convenient world, would you utterly forgo all interest in return for making some small difference to global utility".
It should say: "… is there any extent to which impact on strangers' well-being would influence your choices? For example, if you were faced with a choice between reading a chapter of a kind-of-interesting book with no external impact, or doing chores for an hour and thereby saving a child's life, would you sometimes choose the latter?"
If the answer to that latter question is yes -- if expected impact on others' well-being can potentially sway your actions at some margin -- then it is worth looking into the empirical details, and seeing what bundles of global well-being and personal well-being can actually be bought, and how attractive those bundles are.
Replies from: Alex_Altair
↑ comment by Alex_Altair · 2012-02-15T19:04:38.568Z · LW(p) · GW(p)
impact on strangers' well-being
I object to this being framed as primarily about others versus self. I pursue FAI for the perfectly selfish reason that it maximizes my expected life span and quality. I think the conflict being discussed is about near interest conflicting with far interest, and how near interest creates more motivation.
↑ comment by [deleted] · 2012-02-15T17:17:51.847Z · LW(p) · GW(p)
Why are you even thinking about this question?
Because even if we don't have the strength or desire to willingly renounce all selfishness, we recognize that better versions of ourselves would do so, and that perhaps there's a good way to make some lifestyle changes that look like personal sacrifices but are actually net positive (and even more so when we nurture our sense of altruism)?
↑ comment by Nectanebo · 2012-02-15T09:08:31.233Z · LW(p) · GW(p)
Isn't this statement also a clever argument for why you're not going to do it anyway, at least to an extent?
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2012-02-17T13:38:44.656Z · LW(p) · GW(p)
Not a clever argument, more of an admission of current weakness. Admitting current weakness has the advantage of having the obvious next step of "consider becoming stronger".
But saying "Pursuing my interests would increase utility anyway" has the disadvantage of requiring no further actions. Which is fine if it's true, but if you evaluate the truth of the statement while you still have this potential source of bias lurking in the background, it might not be.
↑ comment by Viliam_Bur · 2012-02-15T08:35:54.846Z · LW(p) · GW(p)
Maybe the personal interests are the real utility, but we don't want to admit it -- because for our survival as members of a social species it is better to pretend that our utility is aligned with the utility of others, although it is probably just weakly positively correlated. In a more complex society the correlation is probably even weaker, because the choice space is larger.
Or maybe the utility-choosing mechanism is just broken in this society, because it evolved in an ancient environment. It probably uses some rule like: "if you are rewarded for doing something, or if you see that someone is rewarded for something, then develop a strong desire... and keep the desire even in times when you are not rewarded (because it may require long-term effort)". In the ancient environment you would see rewards mostly for useful things. Today there are too many exciting things -- I don't say they are mostly bad, just that there are too many of them, so people's utility functions are spread too thin, and sometimes there are not enough people with the desire to do some critical tasks.
↑ comment by jimmy · 2012-02-15T03:34:39.867Z · LW(p) · GW(p)
That's why it's a very important skill to become interested in what you should be interested in. I made a conscious decision to become interested in what I'm working on now because it seemed like an area full of big low-hanging fruit, and now it genuinely fascinates me.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2012-02-15T08:39:26.991Z · LW(p) · GW(p)
How does one become really interested in something?
I would suggest spending time with people interested in X, because this would give one's brain the signal "X is socially rewarded", which would motivate them to do X. Any other good ideas?
Replies from: jimmy, Karmakaiser
↑ comment by jimmy · 2012-02-15T18:12:34.125Z · LW(p) · GW(p)
What worked for me was to spend time thinking about the types of things I could do if it worked right, and feeling those emotions while trying to figure out rough paths to get there.
I also chose to strengthen the degree to which I identify as someone who can do this kind of thing, so it felt natural.
↑ comment by Karmakaiser · 2012-02-15T14:33:37.275Z · LW(p) · GW(p)
I'm spitballing different ideas I've used:
Like you said, talk to people who know the topic and find it interesting.
Read non-technical introductory books on the topic. I found the algorithms part of CS interesting, but the EE dimensions of computing were utterly boring until I read Code by Charles Petzold.
Research the history of a topic in order to see the lives of the humans who worked on it. Humans, being social creatures, may find a topic more interesting after they have learned of some of the more interesting people who have worked in that field.
↑ comment by Alexei · 2012-02-14T21:19:36.342Z · LW(p) · GW(p)
First, making games isn't a little quirky interest. Second, I don't necessarily have to put it aside. My goal is to contribute to FAI. I will have to figure out the best way to do that. If I notice that whatever I try, I fail at it because I can't summon enough motivation, then maybe making games is the best option I've got. But the point is that I have to maximize my contribution.
Replies from: Risto_Saarelma
↑ comment by Risto_Saarelma · 2012-02-15T10:05:31.200Z · LW(p) · GW(p)
Why do you say it's not a little quirky interest? I'm asking this as I've been fixated on various game-making stuff for close to 20 years now, but I now feel like I'm mostly going on because it's something I was really interested in at 14 and subsequently tinkered with enough that it now seems like the thing I can do the most interesting things with. I suspect it's not what I'd choose to have built a compulsive interest in if I could make the choice today.
Nowadays I am getting alienated from the overall gaming culture, which is still mostly optimized to appeal primarily to teenagers, and I often have trouble coming up with a justification for why most games should be considered anything other than shiny escapist distractions, or how the enterprise of game development aspires to anything other than being a pageant for coming up with the shiniest distraction. So I would go for both: quirky, since gaming has a bunch of weird insider-culture things going for it, and little, since most gaming and gamedev has little effect in the big picture of fixing things that make life bad for people (though distractions can be valuable too), and might have a negative effect if clever people who could make a contribution elsewhere get fixated on gamedev instead.
It does translate to a constantly growing programming skill for me, so at least there's that good reason to keep up at it. But that's more a side effect than a primary value of the interest.
Replies from: Alexei
↑ comment by Alexei · 2012-02-15T16:49:47.411Z · LW(p) · GW(p)
You've committed the mind projection fallacy. :) For me, games started out as a hobby and grew into a full-blown passion. It's something I live and breathe about 10 hours a day (full-time job and then making a game on the side).
I'll agree that current games suck, but their focus has extended way past teenagers. And just because they are bad doesn't mean the medium is bad. It's possible to make good games, for almost any definition of good.
Replies from: Risto_Saarelma
↑ comment by Risto_Saarelma · 2012-02-15T17:37:20.900Z · LW(p) · GW(p)
I was describing my own mind, didn't get around to projecting it yet.
Let me put the question this way: You can probably make a case for why people should want to be interested in, say, mathematics, physics or effective reasoning, even if they are not already interested in them. Is there any compelling similar reason why someone not already interested in game development should want to be interested in game development?
Replies from: Alexei
↑ comment by Alexei · 2012-02-16T02:42:39.484Z · LW(p) · GW(p)
Sure, but it would be on a case-by-case basis. I think game development is too narrow (especially when compared to things like math and physics), but if you consider game design in general, that's a useful field to know any time you are trying to design an activity so that it's engaging and understandable.
↑ comment by fiddlemath · 2012-02-14T20:52:04.673Z · LW(p) · GW(p)
I expect that completely ignoring your quirky interests leads to completely destroying your motivation for doing useful work. On the other hand, I find myself demotivated, even from my quirky interests, when I haven't done "useful" things recently. I constantly question "why am I doing what I'm doing?" and feel pretty awful, and completely destroy my motivation for doing anything at all.
But! Picking from "fiddle with shiny things" and "increase global utility" is not a binary decision. The trick is to find a workable compromise between the ethics you endorse and the interests you're drawn to, so that you don't exhaust yourself on either, and gain "energy" from both. Without some sort of deep personal modification, very few people can usefully work 12 hours a day, 7 days a week, at any one task. You can, though, spend about four hours a day on each of three different projects, so long as they're sufficiently varied, and they provide an appropriate mixture of short-term and long-term goals, near and far goals, personal time and social time, and seriousness and fun.
↑ comment by scientism · 2012-02-15T20:21:38.469Z · LW(p) · GW(p)
Doing what's right is hard and takes time. For a long time I've been of the opinion that I should do what's most important and let my little quirky interests wither on the vine, and that's what I've done. But it took many years to get it right, not because of issues of intrinsic motivation, but because I'm tackling hard problems and it was difficult to even know what I was supposed to be doing. But once I figured out what I'm doing, I was really glad I'd taken the risk, because I can't imagine ever returning to my little quirky interests.
I think it involves a genuine leap into the unknown. For example, even if you decide that you should dedicate your life to FAI, there's still the problem of figuring out what you should be doing. It might take years to find the right path and you'll probably have doubts about whether you made the right decision until you've found it. It's a vocation fraught with uncertainty and you might have several false starts, you might even discover that FAI is not the most important thing after all. Then you've got to start over.
Should everyone be doing it? Probably not. Is there a good way to decide whether you should be doing it or not? I doubt it. I think what really happens is you start going down that road and there's a point where you can't turn back.
Replies from: cousin_it, Alexei
↑ comment by cousin_it · 2012-02-15T20:45:16.354Z · LW(p) · GW(p)
Just being curious, what are you doing now, and what were your interests before?
Replies from: scientism
↑ comment by Vladimir_Golovin · 2012-02-14T19:40:45.683Z · LW(p) · GW(p)
A couple of years ago, I'd side with Anna. Today, I'm more inclined to agree with you. As I learned the hard way, intrinsic motivation is extremely important for me.
(Long story short: I have a more than decent disposable income, which I earned by following my "little quirky interests". I could use this income for direct regular donations, but instead I decided to invest it, along with my time, in a potentially money-making project I had little intrinsic motivation for. I'm still evaluating the results, but so far it's likely that I'll make intrinsic motivation mandatory for all my future endeavors.)
↑ comment by [deleted] · 2012-02-14T21:36:59.414Z · LW(p) · GW(p)
My trouble is that my "little quirky interests" are all I really want to do. It can be a bit hard to focus on all the things that must get done when I'd much rather be working on something totally unrelated to my "work".
I'm not sure how to solve that.
↑ comment by David_Gerard · 2012-02-14T19:19:53.179Z · LW(p) · GW(p)
Indeed. Humans in general aren't good at trying to be utility-function-satisfying machines.
The question you're asking is "is there life before death?"
comment by Shmi (shminux) · 2012-02-14T16:05:57.311Z · LW(p) · GW(p)
Replace FAI with Rapture and LW with Born Again, and you can publish this "very personal account" full of non-sequiturs on a more mainstream site.
Replies from: ZankerH, Alexei
↑ comment by Alexei · 2012-02-14T17:37:14.147Z · LW(p) · GW(p)
Thanks, I'm actually glad to see your kind of comment here. The point you make is something I am very wary of, since I've had dramatic swings like that in the past. From Christianity to Buddhism, back to Christianity, then to Agnosticism. Each one felt final; each one felt like the most right and definite step. I've learned not to trust that feeling, and to be a bit more skeptical and cautious.
You are correct that my post was full of non-sequiturs. That's because I wrote it in a stream-of-thought kind of way. (I've also omitted a lot of thoughts.) It wasn't meant to be an argument for anything other than "think really hard about your goals, and then do the absolute best to fulfill them."
Replies from: fiddlemath, David_Gerard
↑ comment by fiddlemath · 2012-02-14T19:41:06.102Z · LW(p) · GW(p)
tl;dr: If you can spot non-sequiturs in your writing, and you put a lot of weight on the conclusion it's pointing at, it's a really good idea to take the time to fill in all the sequiturs.
Writing an argument in detail is a good way to improve the likelihood that your argument isn't somewhere flawed. Consider:
- Writing allows reduction. By pinning the argument to paper, you can separate each logical step, and make sure that each step makes sense in isolation.
- Writing gives the argument stability. For example, the argument won't secretly change when you think about it while you're in a different mood. This can help to prevent you from implicitly proving different points of your argument from contradictory claims.
- Writing makes your argument vastly easier to share. Like in open source software, enough eyeballs makes all bugs trivial.
Further, notice that we probably underestimate the value of improving our arguments, and are overconfident in apparently-solid logical arguments. If an argument contains 20 inferences in sequence, and you're wrong about each such inference 5% of the time without noticing the misstep, then you have about a 64% chance of being wrong somewhere in the argument. If you can reduce your chance of a misstep in logic to 1% per inference, then you only have an 18% chance of being wrong somewhere. Improving the reliability of the steps in your arguments, then, has a high value of information -- even when 1% and 5% both feel like similar amounts of uncertainty (conjunction fallacy). It is probable, then, that we underestimate the value of information attained by subjecting ourselves to processes that improve our arguments.
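A minimal sketch of those numbers, assuming each of the 20 inference steps is an independent chance to go wrong:

```python
# Probability that at least one of 20 independent inference steps is wrong,
# for a 5% vs. 1% per-step error rate.
steps = 20
for per_step_error in (0.05, 0.01):
    p_wrong_somewhere = 1 - (1 - per_step_error) ** steps
    print(f"{per_step_error:.0%} per step -> {p_wrong_somewhere:.0%} chance of an error somewhere")

# 5% per step -> 64% chance of an error somewhere
# 1% per step -> 18% chance of an error somewhere
```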
If being wrong about an argument is highly costly -- if you would stand to lose much by believing incorrectly -- then it is well worth writing these sorts of arguments formally, and ensuring that you're getting them right.
All that said... I suspect I know exactly what you're talking about. I haven't performed a similar, convulsive update myself, but I can practically feel the pressure for it in my own head, growing. I fight that update longer than parts of me think I should, because I'm afraid of strong mental attractors. If you can write the sound, solid argument publicly, I will be happy to double-check your steps.
↑ comment by David_Gerard · 2012-02-14T19:23:53.653Z · LW(p) · GW(p)
Yes. Even if this one is right, you're still running on corrupt hardware and need to know when to consciously lower your enthusiasm.
comment by Eugine_Nier · 2012-02-14T18:00:49.629Z · LW(p) · GW(p)
The problem with this argument is that you've spent so much emotional effort arguing why the world is screwed without FAI, that you've neglected to hold the claim "The FAI effort currently being conducted by SIAI is likely to succeed in saving the world" to the standards of evidence you would otherwise demand.
Consider the following exercise in leaving a line of retreat: suppose Omega told you that SIAI's FAI project was going to fail, what would you do?
Replies from: Alexei
↑ comment by Alexei · 2012-02-14T21:23:35.979Z · LW(p) · GW(p)
I wasn't making any argument to the effect that SIAI is likely to succeed in saving the world, or even that they are the best option for FAI. (In fact, I have a lot of doubts about that.) That's a really complicated argument, and I really don't have enough information to make a statement like that. As I've said, my goal is to make FAI happen. If SIAI isn't the best option, I'll find another best option. If it turns out that FAI is not really what we need, then I'll work on whatever it is we do need.
comment by [deleted] · 2012-02-14T08:50:42.437Z · LW(p) · GW(p)
There's a lot to process here, but: I hear you. As you investigate your path, just remember that a) paths that involve doing what you love should be favored as you decide what to do with yourself, because depression and boredom do not a productive person make, and b) if you can make a powerful impact in gaming, you can still translate that impact into an impact on FAI by converting your success and labor into dollars. I expect these are clear to you, but bear mentioning explicitly.
These decisions are hard but important. Those who take their goals seriously must choose their paths carefully. Remember that the community is here for you, so you aren't alone.
Replies from: Alexei
↑ comment by Alexei · 2012-02-14T17:14:54.419Z · LW(p) · GW(p)
Thanks! :) I'm fully aware of both points, but I definitely appreciate you bringing them up. You're right, depression and boredom are not good. I sincerely doubt boredom will be a problem, and as for depression, it's something I'll have to be careful about. Thankfully, there are things in life that I like doing aside from making games.
Yes, I could convert that success into dollars, but as I've mentioned in my article, that's probably not the optimal way of making money. (It still might be, I'd have to really think about it, but I'd definitely have to change my approach if that's what I decided to do.)
comment by Viliam_Bur · 2012-02-14T09:05:28.680Z · LW(p) · GW(p)
I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it.
Good enough to make billions and/or impact/recruit many (future) academics? Then do it! Use your superpowers to do it better than before.
And if you are not good enough, then what else will you do? Will you be good enough at that other thing? You should not replace one thing with another just for the sake of replacing, but because it increases your utility. You should be able to do more in the new area, or the new area should be so significant that even if you do less, the overall result is better.
I have an idea, though I am not sure if it is good or if you will like it. From the reviews, it seems to me that you are a great storyteller (except for writing dialogs), but your weak point is game mechanics. And since you have made a game, you are obviously good at programming. So I would suggest focusing on the mechanical part and, for a moment, forgetting about stories. People at SIAI are preparing a rationality curriculum; they are trying to make exercises that will help people improve some of their skills. I don't know how far along they are, but if they already have something... is there a chance you could make a program (not a game, yet) that students could use? Focus on the function, not the form; make useful software, not a game. So, step 1, you do something immediately useful for raising the sanity waterline. Step 2, after the rationality curriculum is ready and you have made a dozen programs, then think about the game where you could reuse these exercises as parts of the game mechanics. And invite other people to help you (you already know you need someone to help with dialogs, and you probably also need game testers). My point is: first do something useful for teaching rationality, and only then make it a game. Thus the game will not only be about rationality, but will actually teach some parts of rationality -- and that will also be a great recruiting tool, because if someone liked doing it in the game, then LW will be like an improved version of the game, except that it is also real.
comment by nick012000 · 2012-02-16T07:12:17.388Z · LW(p) · GW(p)
Oh, wow. I was reading your description of your experiences in this, and I was like, "Oh, wow, this is like a step-by-step example of brainwashing. Yup, there's the defreezing, the change while unfrozen, and the resolidification."
Replies from: Alexei↑ comment by Alexei · 2012-02-16T22:07:39.809Z · LW(p) · GW(p)
It's certainly what it feels like from the inside as well. I'm familiar with that feeling, having gone through several indoctrinations in my life. This time I am very wary of rushing into anything, or of claiming that this belief is absolutely right, or anything like that. I have plenty of skepticism; however, not acting on what I believe to be correct would be very silly.
comment by CronoDAS · 2012-02-15T01:15:10.455Z · LW(p) · GW(p)
I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of may be tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.
Eliezer has said that he doesn't know how to usefully spend more than 10 million dollars...
Replies from: Alexei
↑ comment by Alexei · 2012-02-15T16:53:47.417Z · LW(p) · GW(p)
Thankfully there are more people at SIAI than just him. :) As a very simple start, they could make sure that all SIAI researchers can focus on their work, by taking care of their other chores for them.
Replies from: shminux, hairyfigment
↑ comment by Shmi (shminux) · 2012-02-17T20:33:00.654Z · LW(p) · GW(p)
I am guessing that a part of what EY had in mind was that large organizations tend to lose their purpose and work towards self-preservation as much as towards the original objective. $10M/year translates into less than 100 full-time jobs, which is probably a good rule of thumb for an organization becoming too big to keep its collective eyes on the ball.
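A rough sketch of that arithmetic; the fully loaded cost per employee is my assumption, not a figure shminux gives:

```python
# How many full-time positions a $10M/year budget supports,
# under assumed fully loaded annual costs per employee.
budget = 10_000_000  # dollars per year
for cost_per_employee in (100_000, 150_000):  # assumed costs
    positions = budget // cost_per_employee
    print(f"${cost_per_employee:,}/year per person -> about {positions} positions")

# $100,000/year per person -> about 100 positions
# $150,000/year per person -> about 66 positions
```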
↑ comment by hairyfigment · 2012-02-17T19:54:47.562Z · LW(p) · GW(p)
What chores did you have in mind? Seems like you wouldn't even need 100 million dollars to hire the world's best mathematicians (if you have any hope of doing so) and give all involved a comfortable lifestyle. Do you mean, billions total by the time we get FAI? Because you started out speaking of donating such-and-such "a year".
Replies from: Alexei
↑ comment by Alexei · 2012-02-18T02:58:29.594Z · LW(p) · GW(p)
My point was that if your goal is to give money to an organization (and it seemed that that's what a lot of people at the rationality minicamp were planning to do for SIAI), thinking in thousands, while a nice gesture, is not very helpful. We should be thinking in billions (not necessarily per year; I'm just talking order of magnitude here).
As for chores, they can start as basic as living accommodations and private jets, and get as complex as security guards and private staff. I won't argue for any specifics here, though. I'm just arguing that having lots of money is nice and helpful.
Replies from: hairyfigment
↑ comment by hairyfigment · 2012-02-19T00:15:24.886Z · LW(p) · GW(p)
See, I'm not sure more is better after a certain point - shminux touched on this.
I also think that if we hired the world's best mathematicians for a year - assuming FAI would take more than this - some of them would either get interested and work for less money, or find some proof that SI's current approach won't work.
Replies from: Alexei
comment by [deleted] · 2012-02-14T09:51:24.877Z · LW(p) · GW(p)
I wish you well, but be wary. I would guess that many of us on this site had dreams of saving the world when younger, and there is no doubt that FAI appeals to that emotion. If the claims of SI are true, then donating to them will mean you contributed to saving the world. Be wary of the emotions associated with that impulse. It's very easy for the brain to pick out one train of thought and ignore all others; those doubts you admit to may not be entirely unreasonable. Before making drastic changes to your lifestyle, give it a while. Listen to skeptical voices. Read the best arguments as to why donating to SI may not be a good idea (there are some on this very site).
If you are convinced after some time to think that helping SI is all you want to do with life, then, as Viliam suggests, do something you love to promote it. Donate what you can spare to SI, and keep on doing what makes you happy, because I doubt you will be more productive doing something that makes you miserable. So make those rationality games, but make some populist ones too, because while the former may convert, the latter might generate more income to allow you to pay someone else to convert people.
Replies from: Alexei, John_Maxwell_IV↑ comment by Alexei · 2012-02-14T17:19:35.753Z · LW(p) · GW(p)
Yes, I probably need a healthy dose of counter-arguments. Can you link any? (I'll do my own search too.)
Replies from: None
↑ comment by [deleted] · 2012-02-15T17:13:17.855Z · LW(p) · GW(p)
I have to admit that no particular examples come to mind, but they usually appear in the comment threads on topics such as optimal giving, and in occasional posts arguing against the probability of the singularity. I certainly have seen some, but can't remember where exactly, so any search you do will probably be as effective as my own. To present you with a few possible arguments (which I believe to varying degrees of certainty):
- A lot of the arguments for becoming committed to donating to FAI are based on "even if there's a low probability of it happening, the expected gains are incredibly huge". I'm wary of this argument because I think it can be applied anywhere. For instance, even now, and certainly 40 years ago, one could make a credible argument that there's a not insignificant chance of a nuclear war eradicating human life from the planet. So we should contribute all our money to organisations devoted to stopping nuclear war.
- This leads directly to another argument: how effective do we expect SI to be? Is friendly AI possible? Are SI going to be the ones to find it? If SI create friendliness, will it be implemented? If I had devoted all my money to the CND, I would not have had a significant impact on the proliferation of nuclear weaponry.
- A lot of the claims based on a singularity assume that intelligence can solve all problems. But there may be hard limits to the universe. If the speed of light is the limit, then we are trapped with finite resources, and maybe there is no way for us to use them much more efficiently than we can now. Maybe cold fusion isn't possible; maybe nanotechnology can't get much more sophisticated.
- Futurism is often inaccurate. The jokes about "where's my hover car" are relevant: the progress over the last 200 years has rocketed in some spheres but slowed in others. For instance, medical advances have been slowing recently. They might jump forwards again, but maybe not. Predicting which bits of science will advance on a certain time scale is unlikely to be accurate.
- Intelligence might have a hard limit, or exponentially diminishing returns. It could be argued that we might be able to wire up millions of humanlike intelligences in a computer array, but that might hit physical limits.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-02-16T02:41:21.920Z · LW(p) · GW(p)
Read the best arguments as to why donating to SI may not be a good idea (there are some on this very site).
This doesn't sound easy to do a keyword search for; did you have anything in mind you could link us to?
Edit: Sorry, I see this has already been asked.
comment by NancyLebovitz · 2012-02-14T19:14:10.114Z · LW(p) · GW(p)
In re FAI vs. snoozing: What I'd hope from an FAI is that it would know how much rest I needed. Assuming that you don't need that snoozing time at all strikes me as a cultural assumption that theories (in this case, possibly about willpower, productivity, and virtue) should always trump instincts.
A little about hunter-gatherer sleep. What I've read elsewhere is that, with an average of 12 hours of darkness and an average need for 8 hours of sleep, hunter-gatherers would not only have different circadian rhythms (teenagers tend to run late, old people tend to run early), but a common pattern was to spend some hours in the middle of the night on talk, sex, and/or contemplation. To put it mildly, this pattern is not available to the vast majority of modern people, and we don't know what, if anything, this is costing.
I think of FAI as being like gorillas trying to invent a human-- a human which will be safe for gorillas, but I may be unduly pessimistic.
I'm inclined to think that raising the sanity waterline is more valuable than you think for such a long-range project -- FAI is so dependent on a small number of people, and I think it will continue to be so. Improved general conditions improve the odds that someone who would be really valuable doesn't have their life screwed up early.
On the other hand, this is a "by feel" argument, and I'm not sure what I might be missing.
Replies from: David_Gerard, NancyLebovitz
↑ comment by David_Gerard · 2012-02-14T19:18:41.170Z · LW(p) · GW(p)
I think of FAI as being like gorillas trying to invent a human-- a human which will be safe for gorillas, but I may be unduly pessimistic.
Leave out "artificial" - what would constitute a "human-friendly intelligence"? Humans don't. Even at our present intelligence we're a danger to ourselves.
I'm not sure "human-friendly intelligence" is a coherent concept, in terms of being sufficiently well-defined (as yet) to say things about. The same way "God" isn't really a coherent concept.
↑ comment by NancyLebovitz · 2012-02-22T21:45:14.334Z · LW(p) · GW(p)
comment by scientism · 2012-02-14T16:36:28.796Z · LW(p) · GW(p)
Here's what I was thinking as I read this: Maybe you need to reassess cost/benefits. Apply the Dark Arts to games and out-Zynga Zynga. Highly addictive games with in-game purchases designed using everything we know about the psychology of addiction, reward, etc. Create negative utility for a small group of people, yes, but syphon off their money to fund FAI.
I think if I really, truly believed FAI was the only and right option I'd probably do a lot of bad stuff.
Replies from: Alex_Altair, JenniferRM
↑ comment by Alex_Altair · 2012-02-14T17:19:17.402Z · LW(p) · GW(p)
Let's start a Singularity Casino and Lottery.
↑ comment by JenniferRM · 2012-02-17T01:54:31.804Z · LW(p) · GW(p)
I think if I really, truly believed FAI was the only and right option I'd probably do a lot of bad stuff.
You might want to read through some decision theory stuff and ponder it for a while. Also, even before that, please consider the possibility that your political instincts are optimized to get a group of primates to change a group policy in a way you prefer while surviving the likely factional fight. If you really want to be effective here or in any other context requiring significant coordination with numerous people, it seems likely to me that you'll need to adjust your goal directed tactics so that you don't leap into counter-productive actions the moment you decide they are actually worth doing.
Baby steps. Caution. Lines of retreat. Compare and contrast your prediction with: the valley of bad rationality.
I have approached numerous intelligent and moral people who are perfectly capable of understanding the basic pitch for singularity activism but who will not touch it with a ten-foot pole because they are afraid to be associated with anything that has so much potential to appeal to the worst sorts of human craziness. Please do something other than confirm these bleak but plausible predictions.
comment by fubarobfusco · 2012-02-15T05:51:32.298Z · LW(p) · GW(p)
I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to FAI cause. Another is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to do some meaningful progress on it.
There's another possible scenario: The AI Singularity isn't far, but it is not very near, either. AGI is a generation or more beyond our current understanding of minds, and FAI is a generation or more beyond our current understanding of values. We're making progress; and current efforts are on the critical path to success — but that success may not come during our lifetimes.
Since this is a possible scenario, it's worth having insurance against it. And that means making sure that the next generation are competent to carry on the effort, and themselves survive to make it.
Cultivating a culture of rationality, awareness of existential risks, etc. is surely valuable for that purpose, too.
Replies from: Alexei
comment by PECOS-9 · 2012-02-14T09:13:06.861Z · LW(p) · GW(p)
I recently had a very similar realization and accompanying shift of efforts. It's good to know others like you have as well.
A couple of principles I'm making sure to follow (which may be obvious, but I think are worth pointing out):
1. Happier people are more productive, so it is important to apply a rational effort toward being happy (e.g. by reading and applying the principles in "The How of Happiness"). This is entirely aside from valuing happiness in itself. The point is that I am more likely to make FAI happen if I make myself happier, as a matter of human psychology. If the reverse were true, and happiness made me less effective, I would apply a rational effort to make myself less happy instead.
2. In line with #1, be aware of the risks of stress, boredom, and burnout. If I hate a certain task, then, even though in most cases it may be the best choice for working toward FAI, it may not be in my case. At the same time, be aware of when this is just an excuse, and when it's possible to change so as to actually enjoy things I otherwise wouldn't have.
↑ comment by Alexei · 2012-02-14T17:22:57.994Z · LW(p) · GW(p)
Interesting! I'm also happy to hear I'm not the only one. :) Where did you shift your efforts to?
Yes and yes to both points. Seems like everyone is giving the same advice. Must be important.
Replies from: PECOS-9, Giles, kmacneill
↑ comment by PECOS-9 · 2012-02-14T18:02:44.742Z · LW(p) · GW(p)
Like you seemed to be looking toward in your post, I'm focusing on making as much money as possible right now. For me, the best path in that direction is probably in the field of tech startups.
This has the added benefit of probably being the right choice even if I decide for some reason that SIAI isn't the most important cause. Pretty much any set of preferences seems to me like it will be best served by making a lot of money (even something like "live a simple and humble life and focus only on your own happiness" is probably easier if you can just retire early and then have the freedom to do whatever you want).
Edited to add: It's worth noting that making as much money as possible via a tech startup is also good for me personally because I actually do have a fair amount of motivation and interest in that area (if I never heard of FAI, tech entrepreneurship would still be appealing to me, although I probably wouldn't dive into it as much as I did once I decided I wanted to work toward FAI). So I don't necessarily think the best thing for everyone who wants to contribute to FAI is to try to make as much money as possible (and certainly not necessarily via a tech startup), but for me personally it seems like the best path.
↑ comment by Giles · 2012-02-16T23:29:45.507Z · LW(p) · GW(p)
I've had pretty much the exact same experience. Some random thoughts (despite the surface similarities, I'm describing myself here, not you, so the usual caveats apply).
For me the trigger was deciding that I wanted to be a utilitarian. At that point the Intelligence Explosion hypothesis (and also GiveWell's message that some charities are orders of magnitude more effective than others) went from being niggling concerns to urgently important concerns. Unlike you, I do have a visceral negative feeling associated with the end of the world. Or rather I did - now it just feels like the new normal (but I'd still put a lot of effort into reducing its probability very slightly).
It's entirely non-obvious to me that FAI is the marginally most important thing for humanity to be working on right now. That depends on factors such as whether the approach can be made safe and viable, how different x-risks compare with each other, and some icky stuff to do with social dynamics and information hazards. In other words, it's a question that I have no hope of correctly answering as an individual. But this doesn't mean that inaction is the best strategy; rather, I should be supporting smarter people who are trying to answer that kind of question.
The most important thing for humanity to be doing right now is probably very different from the most important thing for me to be doing. It depends on where my comparative advantage is. Current top priority for me is actually just sorting my own life out.
Other commenters here have made the analogy with religious conversion and/or brainwashing. I think that the analogy is valid in as far as something similar may have been happening in my brain; on the other hand, I don't really know what to do with this information. It doesn't mean I'm wrong. I'm still on the lookout for any really good skeptical arguments that address the main issues.
I've really been suffering due to a lack of a community of like-minded people. (At least face-to-face - there are people on the Internet obviously, but I have trouble getting much from that either because they're too busy or because I'm just not good at staying in touch that way). Obviously if I surround myself only with people with the exact same worldview it increases the risk of epistemic closure; on the other hand, having no-one around who shares my goals is very frustrating. Incidentally I've generally got on well with people from Giving What We Can when I have the opportunity to meet them.
I've been suffering depression-like symptoms - they are very likely connected to this whole thing somehow, but I don't know the exact reason. Hypothesis 1 is the lack-of-community thing. Hypothesis 2 is that like you I felt I could do anything, and then discovered that actually no I couldn't.
The "conversion" experience felt very much like "beliefs fully propagating" but I'm not sure that's entirely accurate - I think it was just a few (important) beliefs that propagated. Like you, I felt like my belief map was wiped clean, but actually most of my old beliefs still seem to be valid and useful. It seems to be mainly just information relevant to the safety of far people that I'd kept compartmentalized.
Replies from: Alexei
↑ comment by Alexei · 2012-02-17T03:26:17.741Z · LW(p) · GW(p)
You know, it should be more non-obvious to me that FAI is the right thing to do, too. It's just that the idea clicks with me very well, but I would still prefer to collect more evidence for it.
I agree with you on doing what you can do best (and for many of us that will probably be some kind of support role). I agree with the "yeah, it's kind of like brainwashing, but I'm open to counter-arguments."
Not having like-minded in-person friends is really bad. Wherever you are, I would urge you to relocate. I only recently moved to the Bay Area, and it's just so much nicer here (I've lived in small towns in the Midwest before). This goes double if your career is in technology.
I also suffer depression-like symptoms. My head feels like it's about to explode half the time, and every so often I just want to scream. I also often want to just lie down and not do anything. I take all these feelings, and I say: "Yeah, it's normal to feel this. I'm going through a lot right now. It's going to be okay."
comment by Alexei · 2012-02-14T07:56:40.034Z · LW(p) · GW(p)
A question that I'm really curious about: Has anyone (SIAI?) created a roadmap to FAI? Luke talks about granularizing all the time. Has it been done for FAI? Something like: build a self-sustaining community of intelligent rational people, have them work on problems X, Y, Z. Put those solutions together with black magic. FAI.
Replies from: PECOS-9, John_Maxwell_IV
↑ comment by PECOS-9 · 2012-02-14T08:37:13.049Z · LW(p) · GW(p)
Lukeprog's So You Want to Save the World is sort of like a roadmap, although it's more of a list of important problems with a "strategies" section at the end, including things like raising the sanity waterline.
Replies from: Alexei
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-02-16T03:12:46.856Z · LW(p) · GW(p)
comment by ryjm · 2012-02-15T20:09:25.240Z · LW(p) · GW(p)
As a relatively new member of this site, I'm having trouble grasping this particular reasoning and motivation for participating in FAI. I've browsed Eliezer's various writings on the subject of FAI itself, so I have a vague understanding of why FAI is important, and such a vague understanding is enough for me to conclude that FAI is one of the most important topics that currently need to be discussed, if not the most important. This belief may not be entirely my own and is perhaps largely influenced by the amount of comments and posts in support of FAI, in conjunction with my lack of knowledge in the area.
With this lack of understanding, I think it is clear /why/ I haven't given up my life to support FAI. But it seems to me that many others on this site know much, much more about the subject, and they still have not given up their lives for FAI.
So my brain has made an equivalence between supporting FAI and other acts of extreme charity. I think highly of those who work for years in impoverished countries battling local calamities, but I don't find myself very motivated to participate. From my observations, I think this is because I have never heard of anyone with the goal of saving the world actually making significant progress in that direction. However, I have heard of many people who have made the world a better place while never exhibiting such lofty motivations.
I guess this is similar to cousin_it's response in that it seems strange to me to pursue something because it is a "big important problem". But I am also worried about the following line of reasoning:
Motivation to participate in FAI => motivation to do charitable work => I should be motivated to do all sorts of charitable work.
This seems like it would become reality only if my interests were aligned with the charitable work. In the OP's reasoning, is the motivation to save the world enough to align interest with work? To me, it seems analogous to the effect of a sugar high on your energy level.
If I did find myself working with FAI, it would probably be because I found that these were interesting problems to solve, and not because I wanted to save the world.
Replies from: Alexei
↑ comment by Alexei · 2012-02-16T02:53:54.706Z · LW(p) · GW(p)
Even when you understand that FAI is the most important thing to be doing, there are many ways in which you can fail to translate that into action.
It seems most people are making the assumption that I'll suddenly start doing really boring work that I hate. That's not the case. I have to maximize my benefit, which means considering all the factors. I can't be productive at something that I'm just bad at, or something that I really hate, so I won't do that. But there are plenty of things that I'm somewhat interested in and somewhat familiar with that would probably do a lot more to help with FAI than making games. But, again, it's something that has to be carefully determined. That's all I was trying to say in this post. I have an important goal -> I need to really consider what the best way to achieve that goal is.
Replies from: ryjm
↑ comment by ryjm · 2012-02-16T03:52:44.112Z · LW(p) · GW(p)
I see. I wasn't asserting that you are going to do work you hate, however. I was mainly looking at the value of having a seemingly unachievable and incredibly broad goal as one's primary motivation.
I'm sure you have a much more nuanced view of how and why you are undertaking this life change, and I don't want to discourage you. Seeing as how the general consensus is that FAI is the most important thing to be doing, I think it would take a lot of effort to discourage you. I just can't help but think that there should be a primary technical interest in the problems presented by FAI motivating these kinds of decisions. If it was me, I would be confused as to what exactly I would be working on, which would be very discouraging.
Replies from: Alexei
comment by nshepperd · 2012-02-14T16:54:45.816Z · LW(p) · GW(p)
Sometimes, it feels like part of me would take over the world just to get people to pay attention to the danger of UFAI and the importance of Friendliness. Figuratively speaking. And part of me wants to just give up and get the most fun out of my life until death, accepting our inevitable destruction because I can't do anything about it.
So far, seems like the latter part is winning.
Replies from: Alexei, John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-02-16T03:32:22.765Z · LW(p) · GW(p)
What's worked for me is acting on some average of the different parts of my personality. I realize that I do care about myself more than random other people, and that the future is uncertain, and I let that factor into my decisions.
comment by [deleted] · 2012-02-17T18:19:14.569Z · LW(p) · GW(p)
Interesting. It's good to see that you are at least aware of why you're choosing this path now just as you've chosen other paths (like Buddhism) before.
However, faith without action is worthless so I am curious, as others below are, what's your next goal exactly? For all this reasoning what effect do you hope to accomplish in the real world? I don't mean the pat "raise sanity, etc." answer. I mean what tangible thing do you hope to accomplish next, under these beliefs and this line of reasoning?