Introducing Leverage Research
post by lukeprog · 2012-01-09T19:47:00.843Z · LW · GW · Legacy · 34 comments
Geoff Anders asked me to post this introduction to Leverage Research. Several friends of the Singularity Institute are now with Leverage Research, and we have overlapping goals.
Hello Less Wrong! I'm Geoff Anders, founder of Leverage Research. Many Less Wrong readers are already familiar with Leverage. But many are not, and because of our ties to the Less Wrong community and our deep interest in rationality, I thought it would be good to formally introduce ourselves.
I founded Leverage at the beginning of 2011. At that time we had six members. Now we have a team of more than twenty. Over half of our people come from the Less Wrong / Singularity Institute community. One of our members is Jasen Murray, the leader of the Singularity Institute's recent Rationality Boot Camp. Another is Justin Shovelain, a two-year Visiting Fellow at SIAI and the former leader of their intelligence amplification research. A third is Adam Widmer, a former co-organizer of the New York Less Wrong group.
Our goal at Leverage is to make the world a much better place, using the most effective means we can. So far, our conclusion has been that the most effective way to change the world is by means of high-value projects, projects that will have extremely positive effects if they succeed and that have at least a fair probability of success.
One of our projects is existential risk reduction. We have conducted a study of the efficacy of methods for persuading people to take the risks of artificial general intelligence (AGI) seriously. We have begun a detailed analysis of AGI catastrophe scenarios. We are working with risk analysts inside and outside of academia. Ultimately, we intend to achieve a comprehensive understanding of AGI and other global risks, develop response plans, and then enact those plans.
A second project is intelligence amplification. We have reviewed the existing research and analyzed current approaches. We then created an initial list of research priorities, ranking techniques by likelihood of success, likely size of effect, safety, cost and so on. We plan to start testing novel techniques soon.
These are just two of our projects. We have several others, including the development of a rationality training program, the construction and testing of theories of the human mind, and an investigation of the laws of idea propagation.
Changing the world is a complex task. Thus we have a plan that guides our efforts. We know that to succeed, we need to become better than we are. So we take training and self-improvement very seriously. Finally, we know that to succeed, we need more talented people. If you want to significantly improve the world, are serious about self-improvement and believe that changing the world means we need to work together, contact us. We're looking for people who are interested in our current projects or who have ideas of their own.
We've been around for just over a year. In that time we've gotten many of our projects underway. We doubled once in our first six months and again in our second six months. And we have just set up our first physical location, in New York City.
If you want to learn more, visit our website. If you want to get involved, want to send a word of encouragement, or if you have suggestions for how we can improve, write to us.
With hope for the future,
Geoff Anders, on behalf of the Leverage Team
34 comments
comment by lukeprog · 2012-01-10T00:47:45.345Z · LW(p) · GW(p)
Geoff,
Of course you and I are pursuing many of the same goals and we have come to many shared conclusions, though our methodologies seem quite different to me, and our models of the human mind are quite different. I take myself to be an epistemic Bayesian and (last I heard) you take yourself to be an epistemic Cartesian. You say things like "Philosophically, there is no known connection between simplicity... and truth," while I take Occam's razor (aka Solomonoff's lightsaber) very seriously. My model of the human mind ignores philosophy almost completely and is instead grounded in the hundreds of messy details from current neuroscience and psychology, while your work on Connection Theory cites almost no cognitive science and instead appears to be motivated by folk psychology, philosophical considerations, and personal anecdote. I place a pretty high probability on physicalism being true (taking "physicalism" to include radical platonism), whereas you say here that "it follows [from physicalism] that Connection Theory, as stated, is false," though you allow that some variations of CT may still be correct.
Why bring this up? I suspect many LWers are excited (like me) to see another organization working on (among other things) x-risk reduction and rationality training, especially one packed with LW members. But I also suspect many LWers (like me) have many concerns about your research methodology and about Connection Theory. I think this would be a good place for you not just to introduce yourself (and Leverage Research) but also to address some likely concerns your potential supporters may have (as I did for SI here and here).
For example:
- Is my first paragraph above accurate? Which corrections, qualifications, and additions would you like to make?
- How important is Connection Theory to what Leverage does?
- How similar are your own research assumptions and methodology to those of other Leverage researchers?
I suspect it will be more beneficial to your organization to address such concerns directly and not let them lurk unanswered for long periods of time. That is one lesson I take from my recent experiences with the Singularity Institute.
BTW, I appreciate how many public-facing documents Leverage produces to explain its ideas to others. Please keep that up.
Replies from: Geoff_Anders, Incorrect
↑ comment by Geoff_Anders · 2012-01-10T04:29:38.497Z · LW(p) · GW(p)
Hi Luke,
I'm happy to talk about these things.
First, in answer to your third question, Leverage is methodologically pluralistic. Different members of Leverage have different views on scientific methodology and philosophical methodology. We have ongoing discussions about these things. My guess is that probably two or three of our more than twenty members share my views on scientific and philosophical methodology.
If there’s anything methodological we tend to agree on, it’s a process. Writing drafts, getting feedback, paying close attention to detail, being systematic, putting in many, many hours of effort. When you imagine Leverage, don’t imagine a bunch of people thinking with a single mind. Imagine a large number of interacting parallel processes, aimed at a single goal.
Now, I’m happy to discuss my personal views on method. In a nutshell: my philosophical method is essentially Cartesian; in science, I judge theories on the basis of elegance and fit with the evidence. (“Elegance”, in my lingo, is like Occam’s razor, so in practice you and I actually both take Occam’s razor seriously.) My views aren’t the views of Leverage, though, so I’m not sure I should try to give an extended defense here. I’m going to write up some philosophical material for a blog soon, though, so people who are interested in my personal views should check that out.
As for Connection Theory, I could say a bit about where it came from. But the important thing here is why I use it. The primary reason I use CT is that I've used it to predict a number of antecedently unlikely phenomena, and the predictions appear to have come true at a very high rate. Of course, I recognize that I might have made some errors somewhere in collecting or assessing the evidence. This is one reason I'm continuing to test CT.
Just as with methodology, people in Leverage have different views on CT. Some people believe it is true. (Not me, actually. I believe it is false; my concern is with how useful it is.) Others believe it is useful in particular contexts. Some think it’s worth investigating, others think it’s unlikely to be useful and not worth examining. A person who thought CT was not useful and who wanted to change the world by figuring out how the mind really works would be welcome at Leverage.
So, in sum, there are many views at Leverage on methodology and CT. We discuss these topics, but no one insists on any particular view and we’re all happy to work together.
I'm glad you like that we're producing public-facing documents. Actually, we're going to be posting a lot more stuff in the relatively near future.
Replies from: CronoDAS, Geoff_Anders, lukeprog, thomblake
↑ comment by CronoDAS · 2012-01-10T10:45:37.591Z · LW(p) · GW(p)
::follows various links::
Is CT falsifiable? There's no obvious way to determine a person's intrinsic goods except by observing their behavior, but a person's behavior is what CT is supposed to predict in the first place. If a person appears to be acting in a way that contradicts the Action Rule, then "CT is wrong" and "CT is fine; the person had different intrinsic goods than I thought they did" are both consistent with the evidence.
Replies from: Geoff_Anders
↑ comment by Geoff_Anders · 2012-01-10T15:45:21.672Z · LW(p) · GW(p)
Short answer: Yes, CT is falsifiable. Here's how to see this. Take a look at the example CT chart. By following the procedures stated in the Theory and Practice document, you can produce and check a CT chart like the example chart. Once you've checked the chart, you can make predictions using CT and the CT chart. From the example chart, for instance, we can see that the person sometimes plays video games and tries to improve and sometimes plays video games while not trying to improve. From the chart and CT, we can predict: "If the person comes to believe that he stably has the ability to be cool, as he conceives of coolness, then he will stop playing video games while not trying to improve." We would measure belief here primarily by the person's belief reports. So we have a concrete procedure that yields specific predictions. In this case, if the person followed various recommendations designed to increase his ability to be cool, ended up reporting that he stably had the ability to be cool, but still reported playing video games while not trying to improve, CT would be falsified.
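To make the logical form of this test concrete, here is a minimal sketch in Python. The variable and function names are hypothetical illustrations of the video-game example above, not notation from the Theory and Practice document:

```python
# Hypothetical sketch of the falsification test described above.
# CT's prediction: once the person reports stably believing he has
# the ability to be cool, he stops playing video games while not
# trying to improve.

def ct_prediction_holds(reports_stable_coolness_belief: bool,
                        plays_games_not_trying_to_improve: bool) -> bool:
    """Return True if an observation is consistent with the CT prediction.

    The prediction only constrains behavior once the belief is in
    place, so observations made before the belief change cannot
    falsify it.
    """
    if not reports_stable_coolness_belief:
        return True  # prediction not yet triggered
    return not plays_games_not_trying_to_improve

# A single inconsistent observation falsifies the prediction:
observations = [
    (False, True),  # before the belief change: consistent either way
    (True, False),  # belief in place, behavior changed: consistent
    (True, True),   # belief in place, behavior unchanged: falsifies CT
]
falsified = any(not ct_prediction_holds(belief, behavior)
                for belief, behavior in observations)
print("Prediction falsified by these observations:", falsified)  # True
```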
Longer answer: In practice, almost any specific theory can be rendered consistent with the data by adding epicycles, positing hidden entities, and so forth. Instead of falsifying most theories, then, what happens is this: You encounter some recalcitrant data. You add some epicycles to your theory. You encounter more recalcitrant data. You posit some hidden entities. Eventually, though, the global theory that includes your theory becomes less elegant than the global theory that rejects your theory. So, you switch to the global theory that rejects your theory and you discard your specific theory. In practice with CT, so far we haven't had to add many epicycles or posit many hidden entities. In particular, we haven't had the experience of having to frequently change what we think a person's intrinsic goods are. If we found that we kept having to revise our views about a person's intrinsic goods (especially if the old posited intrinsic goods were not instrumentally useful for achieving the new posited intrinsic goods), this would be a serious warning sign.
Speaking more generally, we're following particular procedures, as described in the CT Theory and Practice document. We expect to achieve particular results. If in a relatively short time frame we find that we can't, that will provide evidence against the claim "CT is useful for achieving result X". For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.
Thanks for raising the issue of falsifiability. I'm going to add it to our CT FAQ.
Replies from: Emile, John_Maxwell_IV, Craig_Heldreth, None, Incorrect
↑ comment by Emile · 2012-01-10T21:55:22.751Z · LW(p) · GW(p)
For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.
It's not uncommon for someone to come up with a self-help technique that works for himself but then doesn't work nearly as well for others - and if he's, say, selling a book, he may still be able to find 5 people out of 500 for whom it works and put their testimonials on the back cover!
So far, I see no reason to think that CT would be any better (either for prediction or self-improvement) than say Neuro-Linguistic Programming, which is also an alternative theory that claims impressive results and has a pretty big following.
I think it is possible some alternative psychology model will help people improve themselves and make humanity better etc. - but there are many candidates (including things like Scientology), and much potential for self-delusion, misunderstanding, or death spirals...
Replies from: CronoDAS
↑ comment by CronoDAS · 2012-01-12T00:55:37.190Z · LW(p) · GW(p)
What I've heard is that, when they try to do studies, all types of therapy seem to be about as good as every other type; one study found that talking to a teenage girl with no particular training was about as effective as talking with a professional therapist. (On the other hand, they also all tend to be better than nothing.)
(Note that this is all vague "what I remember hearing" stuff, so there's probably something more definitive to be found if you Google it.)
Replies from: CarlShulman
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-08-02T07:20:10.990Z · LW(p) · GW(p)
For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.
Any updates?
↑ comment by Craig_Heldreth · 2012-01-10T20:10:20.966Z · LW(p) · GW(p)
Do you have HTML versions of those documents? PDF is OK for me, but my guess is that HTML is more openly accessible.
Replies from: Emile
↑ comment by Emile · 2012-01-10T21:42:30.469Z · LW(p) · GW(p)
Seconded. I find PDFs annoying, especially on my home computer, where they don't open in a browser tab but in a separate application. I don't see any benefit at all to PDF, except for stuff that needs to be printed out so you can write on it or something.
↑ comment by Incorrect · 2012-01-11T05:37:23.165Z · LW(p) · GW(p)
For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.
But what quality of work? Organizing my closet is very different from reading a dense academic paper with full concentration.
Replies from: Geoff_Anders
↑ comment by Geoff_Anders · 2012-01-11T15:25:40.327Z · LW(p) · GW(p)
I can usually do any type of work. Sometimes it becomes harder for me to write detailed documents in the last couple of hours of my day.
↑ comment by Geoff_Anders · 2012-01-10T05:33:10.670Z · LW(p) · GW(p)
Oops, I forgot to answer your question about how central Connection Theory is to what we're doing.
The answer is that CT is one part of what some of us believe is our best current answer to the question of how the human mind works. I say "one part" because CT does not cover emotions. In all contexts pertaining to emotions, everyone uses something other than CT. I say "some of us" because not everyone in Leverage uses CT. And I say "best current answer" because all of us are happy to throw CT away if we come up with something better.
In terms of our projects, some people use CT and others don't. Some parts of some training programs are designed with CT in mind; other parts aren't. In some contexts, it is very hard to do anything at all without relying on some background psychological framework. In those contexts, some people rely on CT and others don't.
In terms of our overall plan, CT is potentially extremely useful. That said, CT itself is inessential. If it ends up breaking, we can find new psychological tools. And we actually have a backup plan in case we ultimately can't figure out much at all about how the mind works.
↑ comment by lukeprog · 2012-01-10T04:58:16.377Z · LW(p) · GW(p)
Geoff,
Thanks for your clarifications! Especially: "I believe (Connection Theory) is false; my concern is with how useful it is." That sentence sounds very different from the opening paragraph of your Connection Theory page; you may want to tweak the wording on that page.
↑ comment by thomblake · 2012-01-10T15:35:24.321Z · LW(p) · GW(p)
I believe it is false; my concern is with how useful it is.
I do believe Peirce is either rolling over in his grave, or doing whatever the opposite of that is.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-01-10T15:36:52.589Z · LW(p) · GW(p)
Rolling over in his grave in the other direction?
comment by timtyler · 2012-01-10T02:17:23.178Z · LW(p) · GW(p)
See also: Trimtab.
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-01-10T02:51:12.701Z · LW(p) · GW(p)
I had never heard of that before, but it is interesting on a bunch of levels (mechanical, sociological, memetic, etc.). My presumption is that you're interested primarily in the idea captured by this quote from Buckminster Fuller at the bottom of the Wikipedia page:
Something hit me very hard once, thinking about what one little man could do. Think of the Queen Mary—the whole ship goes by and then comes the rudder. And there's a tiny thing at the edge of the rudder called a trim tab. It's a miniature rudder. Just moving the little trim tab builds a low pressure that pulls the rudder around. Takes almost no effort at all. So I said that the little individual can be a trim tab. Society thinks it's going right by you, that it's left you altogether. But if you're doing dynamic things mentally, the fact is that you can just put your foot out like that and the whole big ship of state is going to go. So I said, call me Trim Tab.
I'm curious how deep the analogy you're suggesting is. Can you extend it into something more explicit? My naive thought would be that Eliezer_2007-ish was the trim tab and all of what's happening from ~2009 to ~2013 (Leverage Research included) is more like "the rudder" starting to move, and the ship won't even have visibly changed course until 2020-ish, and then only slightly. The place the analogy really seems to fail, to me, is that it presumes there is this single tiny thing that matters (which is quite complimentary and thus a nice PR angle), when really there are probably thousands of things that will retrospectively be seen to have mattered, and the English-speaking singularitarian political movement is just one of them.
Replies from: timtyler, JenniferRM
↑ comment by timtyler · 2012-01-10T11:18:14.357Z · LW(p) · GW(p)
The place the analogy really seems to fail, to me, is that it presumes there is this single tiny thing that matters (which is quite complimentary and thus a nice PR angle), when really there are probably thousands of things that will retrospectively be seen to have mattered, and the English-speaking singularitarian political movement is just one of them.
Well, there may be some cases where a little effort can make a big difference - and it may pay for individuals to seek them out. However, there's obviously a big influence from technological determinism, which would tend to damp out small fluctuations due to the efforts of individuals.
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-01-10T17:45:06.654Z · LW(p) · GW(p)
Do you know of any solid methodologies for predicting outcomes from technology? To cash out political determinism I'd go with something like Bruce Bueno de Mesquita's work, but I don't know any methods to analyze technological determination of history other than case-by-case reasoning, and nearly all of the "cases" I've seen are post hoc.
Replies from: timtyler
↑ comment by timtyler · 2012-01-10T19:35:13.980Z · LW(p) · GW(p)
Nobody has a practical methodology for predicting the future in very much detail.
Technological determinism still seems like a big and important idea to me, though.
↑ comment by JenniferRM · 2012-01-10T16:40:57.933Z · LW(p) · GW(p)
EDA: I don't understand LW's voting here. Tim was the one with the idea, I just spelled out the implications for the sake of discussion, and his comment's at 1 and mine is 9 now?!? He's the one with the awesome signal/noise ratio and relevant links, not me, but I can't vote myself down to rectify this.
comment by Larks · 2012-01-09T21:11:09.089Z · LW(p) · GW(p)
"Unfortunately, the exposition in the document is incomplete; we've been adding so many consequences to CT it's hard to keep the documents current. The sections on evidence are also somewhat dated; since this version was written, we’ve amassed significantly more evidence in favor of CT. These things said, what we have here explains the core theory thoroughly and accurately and also explains how to create a comprehensive CT chart."
Reminds me of a footnote in Kripke,
"(1) This outline was prepared hastily -- at the editor's insistence -- from a taped manuscript of a lecture. Since I was not even given the opportunity to revise the first draft before publication, I cannot be held responsible for any lacunae in the (published version of the) argument, or for any fallacious or garbled inferences resulting from faulty preparation of the typescript. Also, the argument now seems to me to have problems which I did not know when I wrote it, but which I can't discuss here, and which are completely unrelated to any criticisms that have appeared in the literature (or that I have seen in manuscript); all such criticisms misconstrue my argument. It will be noted that the present version of the argument seems to presuppose the (intuitionistically unacceptable) law of double negation. But the argument can easily be reformulated in a way that avoids employing such an inference rule. I hope to expand on these matters further in a separate monograph. "
Replies from: vallinder
↑ comment by vallinder · 2012-01-09T21:35:14.044Z · LW(p) · GW(p)
Unfortunately, the Kripke footnote appears to be a joke only.
Replies from: Larks
↑ comment by Larks · 2012-01-09T21:46:58.347Z · LW(p) · GW(p)
Nope, it's near the beginning of Naming and Necessity. I got the copy-paste from the internet, but first came across it while writing an essay on definite descriptions.
Replies from: vallinder
↑ comment by vallinder · 2012-01-10T08:55:01.914Z · LW(p) · GW(p)
There are a couple of similar-sounding footnotes in the preface and the first chapter, but I'm unable to find this particular one.
Replies from: Larks
↑ comment by Larks · 2012-01-10T19:44:42.681Z · LW(p) · GW(p)
Ahhh, I may have misremembered. I'm away from the faculty library at the moment, so I can't easily check.
Replies from: gjm
↑ comment by gjm · 2012-01-13T23:05:26.739Z · LW(p) · GW(p)
I've got a copy right here and (1) I can't find that footnote, or anything close enough that it might be a benignly garbled copy, in it (either in the main text or in footnotes) but (2) there's plenty very near the start that's like it in tone and that the footnote might well be a parody of. For instance, here's some material from near the start of the preface, some phrases of which you will recognize:
[...] as far as revision is concerned, there is something to be said for preserving a work in its original form, warts and all. I have thus followed a very conservative policy of correction for the present printing. [...] A good indication of my conservative policy is in footnote 56. In that footnote the letter-nomenclature for the various objects involved, inexplicably garbled in the original printing, has been corrected; but I make no mention of the fact that the argument of the footnote now seems to me to have problems which I did not know when I wrote it and which at least require further discussion.
To which he adds a footnote:
Although I have not had time for careful study of Nathan Salmon's criticism [...] of this footnote, it seems likely that his criticism of the argument, though related to mine, is not the same and reconstructs it in a way that does not correspond to my exact intent and makes the argument unnecessarily weak. [...]
comment by Viliam_Bur · 2012-01-16T09:17:39.461Z · LW(p) · GW(p)
This part of Connection Theory seemed very interesting to me; I will state it in my own words, because I've forgotten the original wording:
Each person has a few "life goals", the goals behind the smaller goals. People update on evidence... unless such an update would mean (according to their current beliefs) that one of their "life goals" becomes unreachable.
Seems to me that it relates to the idea of leaving a line of retreat. The way to cross a possible valley of bad rationality, in yourself or in another person, is to build new knowledge in a sequence that does not endanger your values in the process. Sometimes the web of irrational beliefs may be very difficult to disentangle.
comment by [deleted] · 2012-04-27T10:41:29.369Z · LW(p) · GW(p)
Checked the plan...
Lots of burdensome details...