...And Say No More Of It
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-09T00:15:35.000Z · LW · GW · Legacy · 25 comments
Followup to: The Thing That I Protect
Anything done with an ulterior motive has to be done with a pure heart. You cannot serve your ulterior motive without faithfully prosecuting your overt purpose as a thing in its own right, one that has its own integrity. If, for example, you're writing about rationality with the intention of recruiting people to your utilitarian Cause, then you cannot talk too much about your Cause, or you will fail to successfully write about rationality.
This doesn't mean that you never say anything about your Cause, but there's a balance to be struck. "A fanatic is someone who can't change his mind and won't change the subject."
In previous months, I've pushed this balance too far toward talking about Singularity-related things. And this was for (first-order) selfish reasons on my part; I was finally GETTING STUFF SAID that had been building up painfully in my brain for FRICKIN' YEARS. And so I just kept writing, because it was finally coming out. For those of you who have not the slightest interest, I'm sorry to have polluted your blog with that.
When Less Wrong starts up, it will, by my own request, impose a two-month moratorium on discussion of "Friendly AI" and other Singularity/intelligence explosion-related topics.
There are a number of reasons for this. One of them is simply to restore the balance. Another is to make sure that a forum intended for a more general audience doesn't narrow itself down and disappear.
But more importantly - there are certain subjects which tend to drive people crazy, even if there's truth behind them. Quantum mechanics would be the paradigmatic example; you don't have to go funny in the head, but a lot of people do. Likewise Gödel's Theorem, consciousness, Artificial Intelligence -
The concept of "Friendly AI" can be poisonous in certain ways. True or false, it carries risks to mental health. And not just the obvious liabilities of praising a Happy Thing. Something stranger and subtler that drains enthusiasm.
If there were no such problem as Friendly AI, I would probably be devoting more or less my entire life to cultivating human rationality; I would have already been doing it for years.
And though I could be mistaken - I'm guessing that I would have been much further along by now.
Partially, of course, because it's easier to tell people things that they're already prepared to hear. "Rationality" doesn't command universal respect, but it commands wide respect and recognition. There is already the New Atheist movement, and the Bayesian revolution; there are already currents flowing in that direction.
One has to be wary, in life, of substituting easy problems for hard problems. This is a form of running away. "Life is what happens to you while you are making other plans", and it takes a very strong and non-distractable focus to avoid that...
But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction. At some point you also have to say, "This isn't working the way I'm doing it," and try something different.
There are many ulterior motives behind my participation in Overcoming Bias / Less Wrong. One of the simpler ones is the idea of "First, produce rationalists - people who can shut up and multiply - and then, try to recruit some of them." Not all. You do have to care about the rationalist community for its own sake. You have to be willing not to recruit all the rationalists you create. The first rule of acting with ulterior motives is that it must be done with a pure heart, faithfully serving the overt purpose.
But more importantly - the whole thing only works if the strange intractability of the direct approach - the mysterious slowness of trying to build an organization directly around the Singularity - does not contaminate the new rationalist movement.
There's an old saw about the lawyer who works in a soup kitchen for an hour in order to purchase moral satisfaction, rather than work the same hour at the law firm and donate the money to hire 5 people to work at the soup kitchen. Personal involvement isn't just pleasurable, it keeps people involved; the lawyer is more likely to donate real money to the soup kitchen later. Research problems, including FAI research, don't offer a lot of opportunity for outsiders to get personally involved. (This is why scientific research isn't usually supported by individuals, I suspect; instead scientists fight over the division of money that has been block-allocated by governments and foundations. I should write about this later.)
If it were the Cause of human rationality - if that had always been the purpose I'd been pursuing - then there would have been all sorts of things people could have done to personally help out, to keep their spirits high and encourage them to stay involved. Writing letters to the editor, trying to get heuristics and biases taught in organizations and in classrooms; holding events, handing out flyers; starting a magazine, increasing the number of subscribers; students handing out copies of the "Twelve Virtues of Rationality" at campus events...
It might not be too late to start going down that road - but only if the "Friendly AI" meme doesn't take over and suck out the life and motivation.
In a purely utilitarian sense - the sort of thinking that would lead a lawyer to actually work that extra hour at the law firm and donate the money - someone who thinks that handing out flyers is important to the Cause of human rationality should be strictly less enthusiastic than someone who thinks that handing out flyers for human rationality has direct rationality-related benefits and might also help a Friendly AI project. It's a strictly added benefit; it should result in strictly more enthusiasm...
But in practice - it's as though the idea of "Friendly AI" exerts an attraction that sucks the emotional energy out of its own subgoals.
You only press the "Run" button after you finish coding and teaching a Friendly AI; which happens after the theory has been worked out; which happens after theorists have been recruited; which happens after (a) mathematically smart people have comprehended cognitive naturalism on a deep gut level and (b) a regular flow of funding exists to support these professional specialists; which first requires that the whole project get sufficient traction; for which handing out flyers may be involved...
But something about the fascination of finally building the AI seems to make all the mere preliminaries pale in emotional appeal. Or maybe it's that the actual research takes on an aura of the sacred magisterium, and then it's impossible to scrape up enthusiasm for any work outside the sacred magisterium.
If you're handing out flyers for the Cause of human rationality... it's not about a faraway final goal that makes the mere work seem all too mundane by comparison, and there isn't a sacred magisterium that you're not part of.
And this is only a brief gloss on the mental health risks of "Friendly AI"; there are others I haven't even touched on, though the others are relatively more obvious. Import morality.crazy, import metaethics.crazy, import AI.crazy, import Noble Cause.crazy, import Happy Agent.crazy, import Futurism.crazy, etcetera.
But it boils down to this: From my perspective, my participation in Overcoming Bias / Less Wrong has many different ulterior motives, and many different helpful potentials, many potentially useful paths leading out of it. But the meme of Friendly AI potentially poisons many of those paths, if it interacts in the wrong way; and so the ability to shut up about the Cause is more than usually important, here. Not shut up entirely - but the rationality part of it needs to have its own integrity. Part of protecting that integrity is to not inject comments about "Friendly AI" into any post that isn't directly about "Friendly AI".
I would like to see "Friendly AI" be a rationalist Cause sometimes discussed on Less Wrong, alongside other rationalist Causes whose members likewise hang out there for companionship and skill acquisition. This is as much as is necessary to recruit a fraction of the rationalists created. Anything more would poison the community, I think. Trying to find hooks to steer every arguably-related conversation toward your own Cause is not virtuous, it is dangerously and destructively greedy. All Causes represented on LW will have to bear this in mind, on pain of their clever conversational hooks being downvoted to oblivion.
And when Less Wrong starts up, its integrity will be protected in a simpler way: shut up about the Singularity entirely for two months.
...and that's it.
Back to rationality.
WHEW.
(This would be a great time to announce that Less Wrong is ready to go, but they're still working on it. Possibly later this week, possibly not.)
25 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Russell_Wallace · 2009-02-09T01:37:18.000Z · LW(p) · GW(p)
Suffice it to say that I think the above is a positive move ^.^
comment by nazgulnarsil3 · 2009-02-09T01:58:13.000Z · LW(p) · GW(p)
scientists fight over the division of money that has been block-allocated by governments and foundations. I should write about this later.
yes you should. this is a very serious issue. in art the artist caters to his patron. the more I see of the world of research in the U.S. the more I am disturbed by the common source of the vast majority of funding. science is being tailored and politicized.
comment by Nominull3 · 2009-02-09T04:17:08.000Z · LW(p) · GW(p)
By my math it should be impossible to faithfully serve your overt purpose while making any moves to further your ulterior goal. It has been said that you can only maximize one variable; if you consider factor A when making your choices, you will not fully optimize for factor B.
comment by New_Reader · 2009-02-09T05:12:11.000Z · LW(p) · GW(p)
What! NOOOOO. I've only been around two months, and I came for the singularity/AI stuff. Bring it back. Please!
comment by Another_New_Reader · 2009-02-09T06:56:23.000Z · LW(p) · GW(p)
I believe the relevant phrase is 'shut up and multiply', New Reader. :)
comment by Richard_Kennaway · 2009-02-09T07:31:09.000Z · LW(p) · GW(p)
You only press the "Run" button after you finish coding and teaching a Friendly AI; which happens after the theory has been worked out; which happens after...
This sounds like the Waterfall model of software development, which is not well thought of these days.
If I had concrete ideas about how to make a strong AI, I'd start coding them at once. I'd only worry about Friendliness if and when what I'd actually built worked well enough to make this a serious question. Irresponsible? Maybe. But thinking out the entire theory before attempting implementation has no chance of producing any sort of AI. Look at the rivers of ink that have already been expended (Ben Goertzel's books, for example).
Working out the theory first is substituting an easy problem for a hard problem, and the rivers of ink are just another way of going crazy.
comment by Richard_Hollerith2 · 2009-02-09T08:37:52.000Z · LW(p) · GW(p)
But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction. At some point you also have to say, "This isn't working the way I'm doing it," and try something different.
Eliezer, do you still think the Singularity movement is not getting any traction?
(My personal opinion is it has too much traction.)
comment by Kellopyy · 2009-02-09T08:37:57.000Z · LW(p) · GW(p)
So long as the heart doth pulsate and beat, So long as the sun bestows light and heat, So long as the blood thro' our veins doth flow, So long as the mind in knowledge doth grow, So long as the tongue retains power of speech, So long as wise men true wisdom do teach. (from depths of internet and attributed to Prof. Haroun Mustafa Leon)
I will study what you write in addition to my normal readings in any case. The problem with programming, science, and math is that one doesn't know, in general, how long finding an answer will take.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-09T10:21:05.000Z · LW(p) · GW(p)
Nominull, now imagine that your agents aren't perfect Bayesians and ask under what circumstances maximizing to first order fails to maximize to second order.
New Reader, there is a lot of stuff in the archives, and Less Wrong is going to try to make the archives substantially more accessible. Meanwhile, see here for links to a couple of indexes.
Kennaway, what works for launching a Web 2.0 startup doesn't necessarily work for building a self-modifying AI that starts out dumber than you and then becomes smarter than you, but on this I have already spoken. Besides, I don't think there's time to do things the ordinary stupid way, and plenty of AI researchers have already found out that 'I'll just write it and see if it works' tends not to generate human-level intelligence - though it sure generates labor.
Hollerith, if by that you're referring to the mutant alternate versions of the "Singularity" that have taken over public mindshare, then we can be glad that despite the millions of dollars being poured into them by certain parties, the public has been reluctant to take them up. Still, the Singularity Institute may have to change its name at some point - we just haven't come up with a really good alternative.
comment by Richard_Kennaway · 2009-02-09T11:55:03.000Z · LW(p) · GW(p)
plenty of AI researchers have already found out that 'I'll just write it and see if it works' tends not to generate human-level intelligence
I've noticed. But the failure of hacking does not imply that its opposite must succeed, and it's been enough years since the "AGI" phrase was invented to start passing judgement on the new wave's achievements. Most writings on mental architecture look like word salad to me. The mathematical stuff like AIXI is all very well as mathematics, but I don't see any design coming out of it.
comment by a_soulless_automaton · 2009-02-09T12:24:00.000Z · LW(p) · GW(p)
There's nothing wrong with "empirical" research in computer programs, especially with complex systems. If you can get something that is closer to what you want, you can study its behavior and analyze the results, looking for patterns or failures in order to design a better version.
I know Eliezer hates the word "emergent", but the emergent properties of complex systems are very difficult to theorize about without observation or simulation, and with computer programs there's precious little difference between those and just running the damn program. Could you design a glider gun after reading the rules of Conway's game of life, without ever having run it?
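For concreteness, here is a minimal Python sketch of the point (an illustrative toy; the step function and glider coordinates are just one way to set it up): it applies the Life rules to a set of live cells and checks that a glider reappears one cell down and to the right after four generations - behavior that is much easier to observe by running the rules than to deduce by reading them.

from collections import Counter

def step(live):
    # Advance one generation of Conway's Game of Life (B3/S23).
    # `live` is a set of (x, y) coordinates of live cells.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbors,
    # or if it has 2 live neighbors and is already live.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard five-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the glider has translated diagonally by (1, 1).
assert cells == {(x + 1, y + 1) for (x, y) in glider}
print("glider reappears shifted by one cell after 4 generations")

Five cells and two rules are enough to produce behavior - travelling patterns, glider guns, ultimately universal computation - that is very hard to anticipate from the rules alone.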
It's no way to write a safely self-modifying AI, to be sure, but it might be a valid research tool with which to gain insight on the overall problem of AI.
comment by Nick_Tarleton · 2009-02-09T14:30:05.000Z · LW(p) · GW(p)
When a mistake might kill you, the rules are different.
Eliezer: brilliant post.
comment by Peter_de_Blanc · 2009-02-09T14:38:50.000Z · LW(p) · GW(p)
Kennaway: Working out the theory first is substituting an easy problem for a hard problem
Really? Would you also say that about physics? If not, can you give some other historical examples?
comment by Richard_Hollerith2 · 2009-02-09T15:46:15.000Z · LW(p) · GW(p)
I think the idea of self-improving AI is advertised too much. I would prefer that a person have to work harder or have to have more well-informed friends to learn about it.
comment by Richard_Kennaway · 2009-02-09T17:15:56.000Z · LW(p) · GW(p)
Peter de Blanc: Would you also say that about physics?
Probably not. I was talking specifically about AGI. Just compare the mountain of theorising (for example, here) with the paucity of results.
Actual, solid mathematical theories, that are hard to arrive at and predict things you can test, such as you get in physics, hardly exist in AI.
comment by JamesAndrix · 2009-02-09T18:24:12.000Z · LW(p) · GW(p)
I think my greatest likelihood of strong involvement promoting rationality would be as the person working in the soup kitchen on behalf of the lawyer.
I'm not sure what the soup kitchen work would entail, or where the lawyers are.
comment by TGGP4 · 2009-02-10T03:33:04.000Z · LW(p) · GW(p)
I should write about this later.
I highly encourage you to. I find it an interesting topic that doesn't get enough attention (at least not in the form of broad, economics-style analysis, as opposed to direct participation, which isn't part of public knowledge).
comment by Recruit · 2009-02-11T21:11:01.000Z · LW(p) · GW(p)
It seems you only need to recruit one billionaire (or someone who succeeds in becoming one). You've already done your part to raise the probability of achieving that to pretty close to 1. (I wonder how many readers of yours would OVERfund you if they became billionaires? Here's one.) I don't think you need to sell FAI or the Singularity any more. You can move on to implementation. The universe needs your brain interfacing with the problems, not the public.
comment by advancdaltruist · 2009-02-23T23:07:19.000Z · LW(p) · GW(p)
Friendly AI is too dangerous for pretty much anyone to think about. It will create and reveal insanity in any person who tries, and carries a real risk of destroying the world if a mistake is made. The scary part is that this is very rapidly becoming an issue that demands a timely solution.
comment by David_Gerard · 2011-01-05T23:26:49.011Z · LW(p) · GW(p)
This post is why LessWrong needs to be separated from SIAI. I don't actually object to the posts on SIAI-related matters at all - I file them mentally with the banners on Wikipedia - but it's a step that has to happen at some stage, even if not yet.
(I said this to Anna Salamon at the London Meet on Sunday. I conceded at the time that I might just have been pattern matching, but I increasingly think it's the case. "A community blog devoted to refining the art of human rationality" is an ABSOLUTELY KILLER mission statement, and it should be the watchword every contributor breathes around here.)
comment by Wei Dai (Wei_Dai) · 2011-03-11T23:22:44.839Z · LW(p) · GW(p)
This post is why LessWrong needs to be separated from SIAI.
As an alternative, perhaps we should get other causes to participate/recruit on LW?
Actually, why aren't they here already? We have a bunch of individuals with non-SIAI interests, but no causes other than SIAI, despite Eliezer repeatedly saying that they would be welcome.
comment by David_Gerard · 2011-03-12T07:16:19.149Z · LW(p) · GW(p)
Can I plug Wikimedia at this point? As I've noted, that's more analogous to a software project than an ordinary charity - your money is useful and most welcome, but the real contribution is your knowledge.
(Then we need to fix the things wrong with the editor experience on Wikipedia ... though the Wikimedia Foundation is paying serious attention to that as well of late.)
Or, more generally: create educational material under a free content licence. If it's CC-by-sa, CC-by or PD, it can interbreed.
I suggest you make your comment a discussion post.
comment by Wei Dai (Wei_Dai) · 2011-03-12T07:31:25.606Z · LW(p) · GW(p)
the real contribution is your knowledge
I wonder if there is even more leverage that LWers can apply. Maybe we could try to come up with possible solutions to some of the institutional problems of Wikipedia, such as dark side editing? (Notice that article was written by our own gwern.)
I suggest you make your comment a discussion post.
Or you can just write a plug for Wikipedia for the front page, and lead by example. :)
comment by David_Gerard · 2011-03-12T10:13:35.917Z · LW(p) · GW(p)
Not front page, but posted!