Rationality, Singularity, Method, and the Mainstream
post by Mitchell_Porter · 2011-03-22T12:06:16.404Z · LW · GW
Upon reading this, my immediate response was:
What does this have to do with the Singularity Institute's purpose? You're the Singularity Institute, not the Rationality Institute.
I can see that, if you have a team of problem solvers, having a workshop or a retreat designed to enhance their problem-solving skills makes sense. But as described, there's no indication that graduates of the Boot Camp will then go on to tackle conceptual problems of AI design or tactics for the Singularity.
What seems to be happening is that, instead of making connections to people who know about cognitive neuroscience, decision theory, and the theory of algorithms, there is a drive to increase the number of people who share a particular subjective philosophy and subjective practice of rationality - perhaps out of a belief that the discoveries needed to produce Friendly AI won't be made by people who haven't adopted this philosophy and this practice.
I find this a little ominous for several reasons:
It could be a symptom of mission creep. The mission, as I recall, was to design and code a Friendly artificial intelligence. But "produc[ing] formidable rationalists" sounds like it's meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera. Maybe someone should be doing this, but it's potentially a huge distraction from the more important task.
Also, I'm far more impressed by the specific ideas Eliezer has come up with over the years - the concept of seed AI; the concept of Friendly AI; CEV; TDT - than by his ruminations about rationality in the Sequences. They're interesting, yes. It's also interesting to hear Feynman talk about how to do science, or to read Einstein's reflections on life. But the discoveries in physics which complemented those of Einstein and Feynman weren't achieved by people who studied their intellectual biographies and sought to reproduce their subjective method; they were achieved by other people of high intelligence who also studied the physical world.
It may seem at times that the supposed professionals in the FAI-relevant fields I listed above are terminally obtuse, for having failed to grasp their own relevance to the FAI problem, or the schema of the solution as proposed by SIAI. That, and the way that people working in AI are just sleepwalking towards the creation of superhuman intelligence without grasping that the world won't get a second chance if they get machine intelligence very right but machine values very wrong - all of that could reinforce the attitude that to have any chance of succeeding, SIAI needs a group of people who share a subjective methodology, and not just domain expertise.
However, I think we are rapidly approaching a point where a significant number of people will understand that the outcome of the "intelligence explosion" will above all be determined by the utility function of the AI that dominates that event. There have been discussions about how a proto-Friendly AI might try to infer the human utility-function schema, and how to do so without creating large numbers of simulated persons who might be subjected to cognitive vivisection, and so forth. But I suspect that will never happen, at least not in this brute-force fashion, in which whole adult brains might be scanned, simulated, and modified for the purpose of reverse-engineering the human decision architecture.
My expectation is that the presently small fields of machine ethics and the neuroscience of morality will grow rapidly and come into contact, and that there will be a distributed research subculture consciously focused on determining the optimal AI value system in the light of biological human nature. In other words, there will be human minds trying to answer this question long before anyone has the capacity to direct an AI to solve it. We should expect that before we reach the point of a Singularity, there will be a body of educated public opinion about what the ultimate utility function or decision method for a transhuman AI should be, deriving from work in those fields which ought to be FAI-relevant but which have yet to engage with the problem. That is, they will be collectively engaging with the problem before anyone gets to outsource the necessary research to AIs.
The conclusion I draw for the present is that there needs to be more preparation for this future circumstance, and less effort spent spreading a set of methods intended merely to facilitate generalized rationality. People who want to see Friendly AI created need to be ready to talk with researchers in those other fields, who never attended "Rationality Boot Camp" but who will nonetheless be independently arriving at the threshold of the FAI problem (perhaps under a different name) and developing solutions to it. When the time comes, there will be a phase transition in academia and R&D, from ignoring the problem to wanting to work on it.

The creation of ethical artificial minds is not going to be the work of one startup or one secret military project, working in isolation from mainstream intellectual culture; nor is it a mirage that will hang on the horizon of the future forever. It will happen because of that phase transition, and tens of thousands of people will be working on it, in one way or another. That doesn't mean they will all be relevant or right, but there will be a pre-Singularity ferment that develops very quickly, and in it certain specific understandings held by the people who labored in isolation on this problem for many years will be superseded. People will have ingrained assumptions about the answers to subproblems X and Y - assumptions to which one grows accustomed during years of isolation spent trying to solve all subproblems at once - and one must be ready for these answer-schemas to be junked when the true experts in an area finally deign to turn their attention to the subproblem in question.
One other observation about "lessons in rationality". Luke recently posted about LW's philosophy as being just a form of "naturalism" (i.e. materialism), a view that has already been well-developed by mainstream philosophy, but it was countered that these philosophers have few results to show for their efforts, even if they get the basics right. I think the crucial question, regarding both LW's originality and its efficacy, concerns method. It has been demonstrated that there is this other intellectual culture, the naturalistic sector of analytic philosophy, which shares a lot of the basic LW worldview. But are there people "producing results" (or perhaps just arriving at opinions) in a way comparable to the way that opinions are being produced here? For example, Will Sawin suggested that LW's epistemic method consists of first imagining how a perfectly rational being would think about a problem. As a method of rationality, this is still very "subjective" and "intuitive" - it's not as if you're plugging numbers into a Bayesian formula and computing the answer, which remains the idealized standard of rationality here.
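To be concrete about the idealized standard being alluded to here: the "Bayesian formula" is presumably just Bayes' theorem, used as an update rule for a hypothesis H in the light of evidence E,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

In practice almost nobody has the numbers to plug into it, which is exactly the point: the everyday LW method is an informal, subjective approximation to this ideal rather than an execution of it.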
So, if someone wants to do some comparative scholarship regarding methods of rationality that already exist out there, an important thing to recognize is that LW's method or practice, whatever it is, is a subjective method. I don't call it subjective in order to be derogatory, but just to point out that it is a method intended to be used by conscious beings, whose practice has to involve conscious awareness, whether through real-time reflection or after-the-fact analysis of behavior and results. The LW method is not an algorithm or a computation in the normal sense, though these non-subjective epistemological ideas obviously play a normative and inspirational role for LW humans trying to "refine their rationality". So if there is "prior art", if LW's methods have been anticipated or even surpassed somewhere, it's going to be in some tradition, discipline, or activity where the analysis of subjectivity is fairly advanced, and not just one where some calculus of objectivities, like probability theory or computer science, has been raised to a high art.
For that matter, the art of getting the best performance out of the human brain won't just involve analysis; not even analysis of subjectivity is the whole story. The brain spontaneously synthesizes and creates, and one also needs to identify the conditions under which it does so most fluently and effectively.
35 comments
Comments sorted by top scores.
comment by Jasen · 2011-03-23T01:55:02.939Z · LW(p) · GW(p)
But "produc[ing] formidable rationalists" sounds like it's meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera.
Precisely. The Singularity Institute was founded due to Eliezer's belief that trying to build FAI was the best strategy for making the world a better place. That is the goal. FAI is just a sub-goal. There is still consensus that FAI is the most promising route, but it does not seem wise to put all of our eggs in one basket. We can't do all of the work that needs to be done within one organization and we don't plan to try.
Through programs like Rationality Boot Camp, we expect to identify people who really care about improving the world and radically increase their chances of coming to correct conclusions about what needs to be done and then actually doing it. Not only will more highly motivated, rational people improve the world at a much faster rate, but they will also serve as checks on our sanity. I don't expect that we are sufficiently sane at the moment to reliably solve the world's problems, and we're really going to need to step up our game if we hope to solve FAI. This program is just the beginning. The initial investment is relatively small and, if we can actually do what we think we can, the program should pay for itself in the future. We'd have to be crazy not to try this. It may well be too confusing from a PR perspective to run future versions of the program within SingInst, but if so we can just turn it into its own organization.
If you have concrete proposals for valuable projects that you think we're neglecting and would like to help out with I would be happy to have a Skype chat and then put you in contact with Michael Vassar.
Replies from: JoshuaFox
↑ comment by JoshuaFox · 2011-03-24T08:36:42.792Z · LW(p) · GW(p)
Yet as frequently discussed, the instrumental rationality techniques advocated here have not yet proven that they can generate significantly more successful people, in research or other areas.
I am all in favor of attempting the impossible, but do you want to attempt one impossible task (generating significantly more rational/successful people in a way never before done) as a prerequisite to another impossible task (FAI)?
comment by steven0461 · 2011-03-22T20:15:29.787Z · LW(p) · GW(p)
On a note related to "mission creep", I'd like to mention that the SL4 list has been lying dead for a long time, and that it would be nice to have an official successor. LW could be seen as that successor, in that it seems to have displaced the singularitarian community around SL4, but the latter had some virtues relative to the former, and I'm not sure this is on the whole a good situation.
Replies from: Kevin, None
↑ comment by [deleted] · 2011-03-22T20:17:08.098Z · LW(p) · GW(p)
but the latter had some virtues relative to the former
Such as? (I am unfamiliar with SL4.)
Replies from: steven0461
↑ comment by steven0461 · 2011-03-22T20:45:36.656Z · LW(p) · GW(p)
Most of them flow from SL4's specific focus on singularity issues. LW's "common interest of many causes" approach makes it so that:
- some people who used to read SL4 don't read LW, because they're not interested in most of what is discussed here
- singularity posts on LW need to spend more effort appealing to a wider audience
- singularity posts on LW need to spend more effort bridging inferential gaps
There are also subtler cultural differences that it's harder for me to put my finger on. I suspect the strangest-sounding true thing that could comfortably be said on SL4 was stranger-sounding than the strangest-sounding true thing that can comfortably be said on LW.
Of course, I don't mean to deny the huge advantages LW brings; the karma system works well, and a large audience is useful in many ways. But maybe there's a way to keep the best of both worlds.
Replies from: None
↑ comment by [deleted] · 2011-03-22T20:59:30.372Z · LW(p) · GW(p)
David Gerard recommends that LW and SIAI be kept separate - I had thought something similar was already being done by keeping singularity posts in the discussion section.
I agree that a more singularity-focused forum would be nice. Perhaps LW could link to the successor of SL4 in the sidebar.
Replies from: PhilGoetz
comment by Louie · 2011-03-23T07:54:06.707Z · LW(p) · GW(p)
What underlies your intuition that machine ethics will come to the rescue? Have you read the literature? That intuition seems optimistic given what has been published to date. Most people in the field are so fundamentally wrong that their presence is net-negative. The handful of good researchers have to work extra hard to pull against the dead weight of incompetence already in the field. If there were a "rapid growth" of the field, I would expect the few reasonable voices to be drowned out even further by the incoming rush of less-thoughtful colleagues.
Also, your analogy between x-rationality and Einstein's reflections on life is eloquent but misleading. It actually is the case that scientists approaching FAI-level problems while working outside the framework of x-rationality are net-detrimental to progress. A 40-year career's worth of bad intuitions driven by predictable biases doesn't help anyone find a solution.
Replies from: torekp
↑ comment by torekp · 2011-03-29T02:05:15.256Z · LW(p) · GW(p)
Thank you for taking on an otherwise sorely neglected issue from the post, i.e. how we can hope to tackle the utility function of a general AI. For those of us on LW not familiar with the machine ethics literature - most of us, I suspect - can you link to an article explaining what's wrong with it? Or to a particularly bad and influential example?
Replies from: Louie
↑ comment by Louie · 2011-03-29T12:54:14.603Z · LW(p) · GW(p)
Luke's site has a good roundup of some influential articles.
Wallach & Allen, Moral Machines: Teaching Robots Right from Wrong (2009) is great if it's the only book in the field you're gonna read.
I'm currently publishing a paper on machine ethics, so it would be bad form to point out which papers are bad, in case their authors read this and I meet them at conferences.
Hmm... actually, having just written that, I realize how entirely awful it is that experts in academic fields are expected to be polite to each other, no matter how terrible their ideas, because we might meet them at conferences. That said, I might still see these guys at conferences...
Replies from: PhilGoetz
comment by lukeprog · 2011-03-22T15:15:47.714Z · LW(p) · GW(p)
At this time, my opinion stands with Jasen and SIAI and not with Mitchell. (But I haven't downvoted this post; it's worth discussing and is well-written.)
I think JoshuaFox hit several of the right points. Here are my main reasons for thinking the rationality boot camp is a good idea:
1. This is training to win at life.
Most people who go through the summer program will not end up working on AI directly. But this kind of boot camp is training for winning at life, like a boot camp on public speaking or social networking or dating skills. The boot camp Jasen is putting together will likely be more useful to more people than anything SIAI has done in the past, and more useful than a summer program all about AI.
2. This boot camp recruits the right people.
If you can't handle this kind of rigorous and specific rationality training, it's less likely you will be able to make useful, long-term contributions to the project of Friendly AI. If you're only up to the level of Traditional Rationality, you are not cut out to work on the single most important and difficult problem humanity has to face.
3. FAI doesn't allow experimentation. You have to be optimally rational.
SIAI must take steps to ensure that its people are about as rational as humans are capable of being. One little bias could fuck the whole planet. There is no do-over after your experiment fails.
Replies from: steven0461
↑ comment by steven0461 · 2011-03-22T18:38:15.426Z · LW(p) · GW(p)
All of these seem to me to ignore Mitchell's claim that:
as described, there's no indication that graduates of the Boot Camp will then go on to tackle conceptual problems of AI design or tactics for the Singularity
comment by cousin_it · 2011-03-22T14:12:42.229Z · LW(p) · GW(p)
Maybe it's just me, but I think your writing has been getting much better in the last few weeks, especially your Vienna Circle comment and this post. Both have managed to change my opinions on several issues. Keep up the good work :-)
comment by rwallace · 2011-03-23T02:31:17.293Z · LW(p) · GW(p)
For what it's worth, my view is precisely the reverse: the Singularity is wishful thinking and trying to figure out Friendly AI today is like Roger Bacon trying to design a spam filter, but rationality is a worthwhile pursuit, the Sequences are some of the best writing on philosophy in the English language - I'd put them fully on a par with Hofstadter - and the rationality boot camp sounds like an interesting and potentially productive experiment.
Replies from: None, Gray
↑ comment by [deleted] · 2011-03-23T14:38:47.907Z · LW(p) · GW(p)
I tend to agree with much of this. But SIAI exists to develop a Friendly AI and has solicited donations on that basis. The people running it believe - or at least claim to believe - that this is the single most important task in the history of the universe. As such, it's reasonable to examine whether their actions match up to that.
↑ comment by Gray · 2011-03-24T14:57:04.602Z · LW(p) · GW(p)
...the Sequences are some of the best writing on philosophy in the English language...
Huh? I'd love to know what works of philosophy you are comparing a series of self-referencing blog posts to. The Sequences aren't actually philosophy, in my opinion, but a series of positions on philosophical issues. You guys are doctrinaires; you don't take other positions seriously enough to argue against them. That's why none of this constitutes "philosophy" in my opinion.
comment by Bongo · 2011-05-15T03:31:03.403Z · LW(p) · GW(p)
Also, I'm far more impressed by the specific ideas Eliezer has come up with over the years - the concept of seed AI; the concept of Friendly AI; CEV; TDT - than by his ruminations about rationality in the Sequences.
The opposite for me.
Replies from: PhilGoetz, PhilGoetz
comment by JoshuaFox · 2011-03-22T13:48:21.172Z · LW(p) · GW(p)
I couldn't have put it better myself.
I do understand the SIAI's explanations for their rationality work.
- It recruits good people.
- FAI is a field that does not allow experimentation, and the stakes are high, and so top rationality skills are needed. And maybe:
- Understanding correct rationality as well as human biases may cast light on the best architecture for an AGI.
But in the end, I quite agree with Mitchell's points.
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-03-22T17:12:23.645Z · LW(p) · GW(p)
I do understand the SIAI's explanations for their rationality work.
1. It recruits good people.
I can't find the cite, but I vaguely recall someone from SIAI saying that LessWrong and the rationality stuff was by far the most effective recruitment method they've ever used.
Replies from: benelliott
↑ comment by benelliott · 2011-03-22T18:40:20.554Z · LW(p) · GW(p)
What other methods have they used?
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-03-22T18:41:54.215Z · LW(p) · GW(p)
Presumably more conventional methods of recruitment. I did say "vaguely recall" :-) Anyone from SIAI here?
comment by Perplexed · 2011-03-29T20:17:29.799Z · LW(p) · GW(p)
Why did Eliezer take a detour from his AI work to write the sequences and create Less Wrong? Here is what he says in this interview with John Baez:
There are also an absolutely huge number of pitfalls that people stumble into when they try to think about, as I would put it, Friendly AI. Consider how many pitfalls people run into when they try to think about Artificial Intelligence. Next consider how many pitfalls people run into when they try to think about morality. Next consider how many pitfalls philosophers run into when they try to think about the nature of morality. Next consider how many pitfalls people run into when they try to think about hypothetical extremely powerful agents, especially extremely powerful agents that are supposed to be extremely good. Next consider how many pitfalls people run into when they try to imagine optimal worlds to live in or optimal rules to follow or optimal governments and so on.
Now imagine a subject matter which offers discussants a lovely opportunity to run into all of those pitfalls at the same time.
That’s what happens when you try to talk about Friendly Artificial Intelligence. And it only takes one error for a chain of reasoning to end up in Outer Mongolia. So one of the great motivating factors behind all the writing I did on rationality and all the sequences I wrote on Less Wrong was to actually make it possible, via two years worth of writing and probably something like a month’s worth of reading at least, to immunize people against all the usual mistakes.
So, in effect, the Sequences constitute a training manual for new SIAI hires. And presumably Boot Camp is a trial run for an SIAI training program which involves some practical work.
comment by hairyfigment · 2011-03-22T19:26:04.587Z · LW(p) · GW(p)
I'll have to think about the subjective method part. In the meantime:
The mission, as I recall, was to design and code a Friendly artificial intelligence.
That'd be nice, sure. It technically seems less likely than the SIAI producing a detailed theory of Friendliness. So someone at some point may need to convince a lot of people to think about the issue in a certain (rational) way.
comment by KrisC · 2011-04-10T19:13:28.021Z · LW(p) · GW(p)
Here is a segment of the line of reasoning that I have been considering lately.
The details of the creation of fAI are unknown; they must be discovered if we wish to avert the forecast catastrophe and reap the full benefits of AI.
Further insights into decision-making are necessary for the creation of fAI.
Teaching can be used as an efficient means of learning.
Teaching rationality is expected to produce insights into decision-making.
Insights into decision-making are what is missing from an understanding of fAI, AFAIK.
Furthermore, there will be a significant amount of disutility before fAI is implemented. As utility is that thing which we wish to maximize, we ought to want to want to decrease disutility. The primary means we have at our disposal is rationality.
The caveat is that we must make sure that the benefits of teaching rationality in the present exceed the cost from delaying fAI.
Replies from: PhilGoetz
comment by atucker · 2011-03-22T22:44:55.773Z · LW(p) · GW(p)
My expectation is that the presently small fields of machine ethics and neuroscience of morality will grow rapidly and will come into contact, and there will be a distributed research subculture which is consciously focused on determining the optimal AI value system in the light of biological human nature.
Is SIAI working on trying to cause that?
It seems like it would do more good than harm, since it does a lot of work for FAI, and almost none for AI in general.
Replies from: LukeStebbing
↑ comment by Luke Stebbing (LukeStebbing) · 2011-03-27T23:40:49.280Z · LW(p) · GW(p)
Without speaking toward its plausibility, I'm pretty happy with a scenario where we err on the side of figuring out FAI before we figure out seed AIs.
comment by PhilGoetz · 2011-05-15T03:54:07.255Z · LW(p) · GW(p)
Funny, I just posted a shorter and less specific discussion post for largely the same reason - my perception that LW is abandoning these problems in favor of rationality techniques.
My impression is that that was Eliezer's intention from the start. And, I love the idea of rationality bootcamp.
Michael Vassar has put a lot more thought into what SIAI should do now than I have, and I'm not going to disagree with him. Addressing this requires a financial analysis, not just picking what you think is the biggest problem and going straight at it. My impression is that SIAI operates like a college, which pays for operating expenses (neglecting grants and donations) by taking in a large number of undergraduates, in order to fund the work of grad students and professors.
I think LW is not going to be the place where we tackle these problems. That place does not yet exist. And I think LW should not be the place where we tackle these problems. It has already converged on a strong local attractor of opinion which it cannot escape from. It is too strongly-affiliated with a small number of people with largely similar viewpoints. We need some random restarts.
(I think the last 3 paragraphs of OP should be a separate post; their position at the conclusion of the post distracts greatly from the focus of the first part.)
comment by XiXiDu · 2011-03-22T12:30:45.222Z · LW(p) · GW(p)
...specific ideas Eliezer has come up with over the years - the concept of seed AI...
I don't want to nitpick about this again, but when exactly did he come up with that concept? Because it is mentioned in Karl Schroeder's Ventus (Tor Books, 2000, ISBN 978-0312871970).
Here are a few quotes from Ventus:
Look at it this way. Once long ago two kinds of work converged. We'd figured out how to make machines that could make more machines. And we'd figured out how to get machines to... not exactly think, but do something very much like it. So one day some people built a machine which knew how to build a machine smarter than itself. That built another, and that another, and soon they were building stuff the men who made the first machine didn't even recognize.
[...]
And, some of the mechal things kept developing, with tremendous speed, and became more subtle than life. Smarter than humans. Conscious of more. And, sometimes, more ambitious. We had little choice but to label them gods after we saw what they could do--namely, anything.
[...]
They did not command the wealth of nations, these researchers. Although their grants amounted to millions of Euros, they could never have funded a deep-space mission on their own, nor could they have built the giant machineries they conceived of. In order to achieve their dream, they built their prototypes only in computer simulation, and paid to have a commercial power satellite boost the Wind seeds to a fraction of light speed. [...] no one expected the Winds to bloom and grow the way they ultimately did.
It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained that humans did not create their AI gods directly; rather, the gods evolved themselves from seeds designed by humans.
Replies from: steven0461, lukeprog
↑ comment by steven0461 · 2011-03-22T18:44:03.870Z · LW(p) · GW(p)
This at least is older than that.