Call for new SIAI Visiting Fellows, on a rolling basis
post by AnnaSalamon · 2009-12-01T01:42:45.088Z · LW · GW · Legacy · 272 comments
Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.
Now, the new and better version has arrived. We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths. Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours.
A representative sample of current projects:
- Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
- The Peter Platzer Popular Book Planning Project;
- Editing and publicizing theuncertainfuture.com;
- Improving the LW wiki, and/or writing good LW posts;
- Getting good popular writing and videos on the web, of the sort that improves understanding of AI risks among key groups;
- Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions).
Interested, but not sure whether to apply?
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”. That kind of timidity destroys the world, by failing to save it. So if that’s your situation, send us an email. Let us be the one to say “no”. Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.
And if you’re seriously interested in risk reduction but at a later time, or in another capacity -- send us an email anyway. Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.
What we’re looking for
At bottom, we’re looking for anyone who:
- Is capable (strong ability to get things done);
- Seriously aspires to rationality; and
- Is passionate about reducing existential risk.
Bonus points for any (you don’t need them all) of the following traits:
- Experience with management, for example in a position of responsibility in a large organization;
- Good interpersonal and social skills;
- Extraversion, or interest in other people, and in forming strong communities;
- Dazzling brilliance at math or philosophy;
- A history of successful academic paper-writing; strategic understanding of journal submission processes, grant application processes, etc.
- Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
- Great writing skills and/or marketing skills;
- Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
- Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
- Web programming skill, or website design skill;
- Legal background;
- A history of successfully pulling off large projects or events;
- Unusual competence of some other sort, in some domain we need, but haven’t realized we need.
- Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.
If you think this might be you, send a quick email to jasen@intelligence.org. Include:
- Why you’re interested;
- What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
- Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
Our application process is fairly informal, so send us a quick email as initial inquiry and we can decide whether or not to follow up with more application components.
As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.
Looking forward to hearing from you,
Anna
ETA (as of 3/25/10): We are still accepting applications, for summer and in general. Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.
272 comments
comment by Henrik_Jonsson · 2009-12-01T02:12:21.068Z · LW(p) · GW(p)
I took part in the 2009 summer program during a vacation from my day job as a software developer in Sweden. This entailed spending five weeks with the smartest and most dedicated people I have ever met, working on a wide array of projects both short- and long-term, some of which were finished by the time I left and some of which are still ongoing.
My biggest worry beforehand was that I would not be anywhere near talented enough to participate and contribute in the company of SIAI employees and supporters. That worry turned out to be unfounded, though I don't claim to have anywhere near the talent of most others involved. Some of the things I was involved with during the summer were work on the Singularity Summit website, as well as continuing the Uncertain Future project for assigning probability distributions to events and having the conclusions calculated for you. I also worked on papers with Carl Shulman and Nick Tarleton, read a massive number of papers and books, took trips to San Francisco and elsewhere, played games, discussed weird forms of decision theories and counterfactual everything, etc., etc.
My own comparative advantages seem to be having the focus to keep hacking away at projects, as well as the specialized skills that came from having a CS background and some experience (less than a year though) of working in the software industry. I'm currently writing this from the SIAI house, to which I returned about three weeks ago. This time I mainly focused on getting a job as a software developer in the Bay area (I seem to have succeeded), for the aims of earning money (some of which will go to donations) and also making it easier for me to participate in SIAI projects.
I'd say that the most important factor for people considering applying should be whether they have strong motivations and a high level of interest in the issues that SIAI involves itself with. Agreeing with specific perceived beliefs of the SIAI or people involved with it is not necessary, and the disagreements will be brought out and discussed as thoroughly as you could ever wish for. As long as the interest and motivation are there, the specific projects you want to work on should work themselves out nicely. My own biggest regret is that I kept lurking for so long before getting in touch with the people here.
comment by Yorick_Newsome · 2009-12-01T06:39:15.910Z · LW(p) · GW(p)
I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.
I'm an 18-year-old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into the position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields: but I rarely understand the math, which is currently very limiting.)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-01T09:10:49.946Z · LW(p) · GW(p)
Yorick, and anyone else who is serious about reducing existential risk and is not in our contact network: please email me. anna at singinst dot org. The reason you should email is that empirically, people seem to make much better decisions about what paths will reduce existential risks when in dialog with others. Improved information here can go a long way.
I'll answer anyway, for the benefit of lurkers (but Yorick, don't believe my overall advice. Email me instead, about your specific strengths and situation):
- Work on rationality. To help reduce existential risk at all, you need: (a) unusual ability to weigh evidence fairly, in confusing instances and despite the presence of strong emotions; (b) the ability to take far-mode evidence seriously on an emotional and action-based level. (But (b) is only an asset after you have formed careful, robust, evidence-based conclusions. If you're as bad a thinker as 95% of the population, acting on far-mode conclusions can be dangerous, and can make your actions worse.)
- Learn one of: math, physics, programming, or possibly analytic philosophy, because they teach useful habits of thought. Programming is perhaps the most useful of these because it can additionally be used to make money.
- Learn people skills. Tutoring skills; sales skills; the ability to start and maintain positive conversations with strangers; management skills and experience; social status non-verbals (which one can learn in the pickup community, among other places); observational skills and the ability to understand and make accurate predictions about the people around you; skill at making friends; skill at building effective teams...
- Learn to track details, to direct your efforts well within complex projects, and to reliably get things done. Exercise regularly, too.
↑ comment by Kaj_Sotala · 2009-12-01T16:02:28.673Z · LW(p) · GW(p)
Note that it's also good to have some preliminary discussion here, moving on to e-mail mainly if personal details come up that one feels unwilling to share in public. If a lot of people publicly post their interest in participating, then that will encourage others to apply as well. Plus it gives people a picture of what sort of other folks they might end up working with. Also, discussing the details of the issue in public will help those who might initially be too shy to send a private e-mail, as they can just read what's been discussed before. Even if you weren't shy as such, others might raise questions you didn't happen to think of. For instance, I think Anna's four points above are good advice for a lot of people, and I'm happy that Yorick posted the comment that prompted this response and didn't just e-mail Anna directly.
(EDIT: Removed a few paragraphs as I realized I'd have to rethink their content.)
Replies from: Morendil↑ comment by Morendil · 2009-12-02T19:20:45.285Z · LW(p) · GW(p)
I've sent an email your way. Given that email has become a slightly unreliable medium, thanks to the arms race between spam and Bayesian (and other) countermeasures, I'd appreciate an acknowledgement (even if just to say "got it"), here or via email.
Replies from: AnnaSalamon, None↑ comment by AnnaSalamon · 2009-12-02T21:35:32.159Z · LW(p) · GW(p)
Thanks for the heads up. Oddly enough, it was sitting in the spam filter on my SIAI account (without making it through forwarding to my gmail account, where I was checking the spam filter). Yours was the only message caught in the SIAI spam filter, out of 19 who emailed so far in response to this post.
Did you have special reason to expect to be caught in a spam filter?
Replies from: Morendil↑ comment by Morendil · 2009-12-02T22:08:26.182Z · LW(p) · GW(p)
It happens every so often to email people send me, so I periodically check the spam folder on Gmail; by symmetry I assume it happens to email I send. It's more likely to occur on a first contact, too. And last, I spent a fair bit of time composing that email, getting over the diffidence you're accurately assuming.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-12-03T01:29:27.556Z · LW(p) · GW(p)
your handle sounds like a brand name drug ;) e.g. paxil
↑ comment by [deleted] · 2009-12-05T07:54:28.707Z · LW(p) · GW(p)
I wonder how long I can expect to wait before receiving a response. I sent my email on Wednesday, by the way.
Replies from: SilasBarta, AnnaSalamon, Morendil↑ comment by SilasBarta · 2009-12-07T16:02:46.513Z · LW(p) · GW(p)
So you want to know f(x) := P(will receive a response|have not received a response in x days) for values of x from 0 to say, 7?
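For concreteness, here is a minimal sketch of what such an f(x) could look like under toy assumptions (all numbers are invented for illustration: reply delays drawn from an exponential distribution with a 3-day mean, and a 0.9 prior probability that a reply ever comes):

```python
import math

def f(x_days, p_respond=0.9, mean_days=3.0):
    """P(will eventually receive a response | no response in the first x_days),
    assuming (hypothetically) that replies, when they come, arrive after an
    Exponential(mean_days) delay, and that a reply ever comes with prior
    probability p_respond. Both parameters are made up for illustration."""
    survival = math.exp(-x_days / mean_days)          # P(delay > x | will respond)
    no_reply_yet = p_respond * survival + (1 - p_respond)
    return p_respond * survival / no_reply_yet

for x in range(8):                                    # x = 0 .. 7 days of silence
    print(x, round(f(x), 3))
```

Under these made-up parameters, f(0) ≈ 0.90 and f(7) ≈ 0.47 -- silence is evidence, but only weakly so at first.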
↑ comment by AnnaSalamon · 2009-12-07T06:16:08.532Z · LW(p) · GW(p)
I'm sorry; I still haven't responded to many of them. Somewhere in the 1-3 days range for an initial response, probably.
↑ comment by arbimote · 2010-01-15T10:24:03.188Z · LW(p) · GW(p)
I sent an email on January the 10th, and haven't yet got a reply. Has my email made it to you? Granted, it is over a month since this article was posted, so I understand if you are working on things other than applications at this point...
comment by komponisto · 2009-12-01T21:27:02.033Z · LW(p) · GW(p)
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.
Well, who can blame them?
Seriously, FYI (where perhaps the Y stands for "Yudkowsky's"): that document (or a similar one) really rubbed me the wrong way the first time I read it. It just smacked of "only the cool kids can play with us". I realize that's probably because I don't run into very many people who think they can easily solve FAI, whereas Eliezer runs into them constantly; but still.
Replies from: DanArmak, Eliezer_Yudkowsky, anonym↑ comment by DanArmak · 2009-12-01T21:55:11.106Z · LW(p) · GW(p)
It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java".
...I don't know if that's a bad joke or a hint that the writer isn't being serious. Well, if it's a joke, it's bad and not funny. Now I'll have nightmares of the best programmers Planet Earth could field failing to write a FAI because they used Java of all things.
Replies from: Liron, komponisto, Jordan↑ comment by Liron · 2009-12-01T22:42:14.132Z · LW(p) · GW(p)
This was written circa 2002 when Java was at least worthy of consideration compared to the other options out there.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-01T22:50:23.096Z · LW(p) · GW(p)
Yup. The logic at the time went something like, "I want something that will be reasonably fast and scale to lots of multiple processors and runs in a tight sandbox and has been thoroughly debugged with enterprise-scale muscle behind it, and which above all is not C++, and in a few years (note: HAH!) when we start coding, Java will probably be it." There were lots of better-designed languages out there but they didn't have the promise of enterprise-scale muscle behind their implementation of things like parallelism.
Also at that time, I was thinking in terms of a much larger eventual codebase, and was much more desperate to use something that wasn't C++. Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.
Mostly in that era there weren't any good choices, so far as I knew then. Ben Goertzel, who was trying to scale a large AI codebase, was working in a mix of C/C++ and a custom language running on top of C/C++ (I forget which), which I think he had transitioned either out of Java or something else, because nothing else was fast enough or handled parallelism correctly. Lisp, he said at that time, would have been way too slow.
Replies from: kpreid, komponisto, DanArmak↑ comment by kpreid · 2009-12-01T23:15:35.569Z · LW(p) · GW(p)
Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.
I'd rather the AI have a very low probability of overwriting its supergoal by way of a buffer overflow.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-12-02T04:14:26.177Z · LW(p) · GW(p)
Proving no buffer overflows would be nothing next to the other formal verification you'd be doing (I hope).
↑ comment by komponisto · 2009-12-03T20:54:34.628Z · LW(p) · GW(p)
Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.
Exactly -- which is why the sentence sounded so odd.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-03T21:18:32.087Z · LW(p) · GW(p)
Well, yes, Yudkowsky-2002 is supposed to sound odd to a modern LW reader.
↑ comment by DanArmak · 2009-12-02T02:12:57.335Z · LW(p) · GW(p)
I fully agree that C++ is much, much worse than Java. The wonder is that people still use it for major new projects today. At least there are better options than Java available now (I don't know that well what the state of the art was in 2002).
If you got together an "above-genius-level" programming team, they could design and implement their own language while they were waiting for your FAI theory. Probably they would do it anyway on their own initiative. Programmers build languages all the time - a majority of today's popular languages started as a master programmer's free time hobby. (Tellingly, Java is among the few that didn't.)
A custom language built and maintained by a star team would be at least as good as any existing general-purpose one, because you would borrow design you liked and because programming language design is a relatively well explored area (incl. such things as compiler design). And you could fit the design to the FAI project's requirements: choosing a pre-existing language means finding one that happens to match your requirements.
Incidentally, all the good things about Java - including the parallelism support - are actually properties of the JVM, not of Java the language; they're best used from other languages that compile to the JVM. If you said "we'll probably run on the JVM", that would have sounded much better than "we'll probably write in Java". Then you'd only have to contend with the CLR and LLVM fans :-)
Replies from: Eliezer_Yudkowsky, Eliezer_Yudkowsky, anonym↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-02T04:27:23.369Z · LW(p) · GW(p)
I don't think it will mostly be a coding problem. I think there'll be some algorithms, potentially quite complicated ones, that one will wish to implement at high speed, preferably with reproducible results (even in the face of multithreading and locks and such). And there will be a problem of reflecting on that code, and having the AI prove things about that code. But mostly, I suspect that most of the human-shaped content of the AI will not be low-level code.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-02T09:17:08.194Z · LW(p) · GW(p)
How's the JVM on concurrency these days? My loose impression was that it wasn't actually all that hot.
Replies from: mattnewport, Henrik_Jonsson↑ comment by mattnewport · 2009-12-02T09:51:12.103Z · LW(p) · GW(p)
I think it's pretty fair to say that no language or runtime is that great on concurrency today. Coming up with a better way to program for many-core machines is probably the major area of research in language design today and there doesn't appear to be a consensus on the best approach yet.
I think a case could be made that the best problem a genius-level programmer could devote themselves to right now is how to effectively program for many-core architectures.
↑ comment by Henrik_Jonsson · 2009-12-02T19:50:39.679Z · LW(p) · GW(p)
My impression is that JVM is worse at concurrency than every other approach that's been tried so far.
Haskell and other functional programming languages have many promising ideas but aren't widely used in industry, AFAIK.
This presentation gives a good short overview of the current state of concurrency approaches.
↑ comment by anonym · 2009-12-02T08:29:48.385Z · LW(p) · GW(p)
Speaking of things that aren't Java but run on the JVM, Scala is one such (really nice) language. It's designed and implemented by one of the people behind the javac compiler, Martin Odersky. The combination of excellent support for concurrency and functional programming would make it my language of choice for anything that I would have used Java for previously, and it seems like it would be worth considering for AI programming as well.
↑ comment by komponisto · 2009-12-01T22:14:49.970Z · LW(p) · GW(p)
It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java"
I had the same thought -- how incongruous! (Not that I'm necessarily particularly qualified to critique the choice, but it just sounded...inappropriate. Like describing a project to build a time machine and then solemnly announcing that the supplies would be purchased at Target.)
I assume, needless to say, that (at least) that part is no longer representative of Eliezer's current thinking.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-01T22:25:32.187Z · LW(p) · GW(p)
I can't understand how it could ever have been part of his thinking. (Java was even worse years ago!)
Replies from: timtyler↑ comment by timtyler · 2009-12-09T15:06:03.848Z · LW(p) · GW(p)
Not relative to its competitors, surely. Many of them didn't exist back then.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-09T18:32:45.483Z · LW(p) · GW(p)
That's true. But insofar as the requirements of the FAI project are objective, independent of PL development in the industry, they should be the main point of reference. Developing your own language is a viable alternative and was even more attractive years ago - that's what I meant to imply.
Replies from: timtyler↑ comment by timtyler · 2009-12-09T19:20:18.536Z · LW(p) · GW(p)
It depends on whether you want to take advantage of resources like editors, IDEs, refactoring tools, lint tools - and a pool of developers.
Unless you have a very good reason to do so, inventing your own language is a large quantity of work - and one of its main effects is to cut you off from the pool of other developers - making it harder to find other people to work on your project and restricting your choice of programming tools to ones you can roll for yourself.
Replies from: DanArmak↑ comment by DanArmak · 2009-12-09T19:53:28.561Z · LW(p) · GW(p)
Anecdotally, half the benefit of inventing your own language is cutting yourself off from the pool of other, inferior developers :-)
Remember that Eliezer's assumption is that he'd be starting with a team of super-genius developers. They wouldn't have a problem with rolling their own tools.
Replies from: timtyler↑ comment by timtyler · 2009-12-09T20:04:00.534Z · LW(p) · GW(p)
Well, it's not that it's impossible, it's more that it drains off energy from your project into building tools. If your project is enormous, that kind of expense might be justified. Or if you think you can make a language for your application domain which works much better than the best of the world's professional language designers.
However, in most cases, these kinds of proposals are a recipe for disaster. You spend a lot of your project resources pointlessly reinventing the wheel in terms of lint, refactoring, editing and code-generation technology - and you make it difficult for other developers to help you out. I think this sort of thing is only rather rarely a smart move.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-01T22:54:43.534Z · LW(p) · GW(p)
That's if you want to be an FAI developer and on the Final Programming Team of the End of the World, not if you want to work for SIAI in any capacity whatsoever. If you're writing to myself, rather than Anna, then yes, mentioning e.g. the International Math Olympiad will help to get my attention. (Though I'm certain the document does need updating - I haven't looked at it myself in a long while.)
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-01T23:32:30.134Z · LW(p) · GW(p)
It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though. It mentions that if you want to help but aren't a genius, sure, you can be a donor, or you can see if you get into a limited number of slots for non-genius programmers, but that's it.
I'm also one of the people who've been discouraged from the thought of being useful to SIAI by that document, though. (Fortunately people have since given me the impression that I might be of some use after all. Submitted an application today.)
Replies from: Eliezer_Yudkowsky, Nick_Tarleton, Nick_Tarleton↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-02T04:25:41.771Z · LW(p) · GW(p)
Anna, and in general the Vassarian lineage, are more effective cooperators than I am. The people who I have the ability to cooperate with, form a much more restricted set than those who they can cooperate with.
↑ comment by Nick_Tarleton · 2009-12-02T04:12:36.843Z · LW(p) · GW(p)
It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though.
I once had that impression too, almost certainly in part from SYWTBASAIP.
↑ comment by Nick_Tarleton · 2009-12-02T04:11:17.242Z · LW(p) · GW(p)
It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though.
FWIW, I and almost everyone outside SIAI who I've discussed it with have had this misconception; in my case, SYWTBASAIP did support it.
↑ comment by anonym · 2009-12-02T06:51:09.148Z · LW(p) · GW(p)
SYWTBASAIP always makes me think of Reid Barton -- which I imagine is probably quite a bit higher than EY meant to convey as a lower bound -- so I know what you mean.
comment by steven0461 · 2009-12-02T03:13:10.957Z · LW(p) · GW(p)
After some doubts as to my ability to contribute and the like, I went to be an intern in this year's summer program. It was fun and I'm really glad I went. At the moment, I'm back there as a volunteer, mostly doing various writing tasks, like academic papers.
Getting to talk a lot to people immersed in these ideas has been both educational and motivating, much more so than following things through the internet. So I'd definitely recommend applying.
Also, the house has an awesome library that for some reason isn't being mentioned. :-)
Replies from: Morendil, Yorick_Newsome↑ comment by Morendil · 2009-12-02T08:40:50.810Z · LW(p) · GW(p)
Is that library's catalog available on a site like LibraryThing?
If it isn't, please get one of those visiting fellows to spend as long as it takes entering ISBNs so that others can virtually browse your bookshelves.
Replies from: MBlume, None↑ comment by MBlume · 2009-12-05T08:00:44.312Z · LW(p) · GW(p)
House Librarian, at your service ^_^
http://spreadsheets.google.com/pub?key=tizaeqM8qBve3B_8fZR2GmQ&output=html
Replies from: Morendil, Morendil, Kevin, Morendil, anonym↑ comment by Morendil · 2009-12-11T09:32:22.545Z · LW(p) · GW(p)
I've set up an SIAI account on LibraryThing, for a bunch of reasons, even though I've not heard back from MBlume.
http://www.librarything.com/catalog/siai
The heuristic "it's easier to seek forgiveness than permission" seemed to apply, the upvotes on the comments below indicate interest, I wanted to separate my stuff from SIAI's but still have a Web 2.0-ish way to handle it, and information wants to be free.
If this was a mistake on my part, it's easily corrected.
↑ comment by Morendil · 2009-12-05T09:12:20.239Z · LW(p) · GW(p)
Thanks !
Replies from: Matt_Duing↑ comment by Matt_Duing · 2010-01-01T06:49:21.159Z · LW(p) · GW(p)
I second Morendil's thanks. This list provides a view of what material is being thought about and discussed by the SIAI volunteers, and I hope that it alleviates some of the concerns of potential applicants who are hesitating.
↑ comment by anonym · 2009-12-05T08:56:38.131Z · LW(p) · GW(p)
If it's an option, please make the spreadsheet sortable. It would be much easier to browse if it were sorted by (location, creator), so all math books would be together, and books by the same author on the same topic would be together.
Thanks for making this available though. I enjoyed browsing and already bought one. You might consider putting Amazon links in there with an affiliate tag for SIAI.
Replies from: Morendil↑ comment by Morendil · 2009-12-05T09:55:58.369Z · LW(p) · GW(p)
Try http://www.librarything.com/catalog/Morendil/siai
Replies from: anonym↑ comment by anonym · 2009-12-05T20:14:43.203Z · LW(p) · GW(p)
Thanks, that's helpful, but the original spreadsheet being sortable would still be very useful, because the LibraryThing catalog doesn't have "shelf", so you can't sort and view all math books together, for example.
Replies from: wkvong↑ comment by wkvong · 2009-12-07T16:27:17.145Z · LW(p) · GW(p)
I've sorted MBlume's original list so that it displays all the books of the same location together...however some of the places (living room floor/shelf etc.) are a collection of books on different topics. I may sort them out another time.
Here it is: http://spreadsheets.google.com/pub?key=t5Fz_UEo8JLZyEFfUvJVvPA&output=html
Replies from: anonym↑ comment by [deleted] · 2009-12-05T07:30:35.105Z · LW(p) · GW(p)
And make sure they use a barcode scanner. Given that books tend to have ISBN barcodes, it would be... irrational not to.
(If it seems to you like a matter of knowledge, not rationality, then take a little while to ponder how you could be wrong.)
Replies from: MBlume↑ comment by Yorick_Newsome · 2009-12-02T06:40:50.228Z · LW(p) · GW(p)
I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.
Replies from: MBlume
comment by Wei Dai (Wei_Dai) · 2009-12-02T08:23:55.180Z · LW(p) · GW(p)
This is a bit off topic, but I find it strange that for years I was unable to find many people interested in decision theory and anthropic reasoning (especially a decision theoretic approach to anthropic reasoning) to talk with, and now they're hot topics (relatively speaking) because they're considered matters of existential risk. Why aren't more people working on these questions just because they can't stand not knowing the answers?
Replies from: Eliezer_Yudkowsky, Wei_Dai, Yorick_Newsome, whpearson↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-03T00:04:42.543Z · LW(p) · GW(p)
You might as well ask why all the people wondering if all mathematical objects exist, haven't noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes.
If something isn't a cached reply to the question "What should I think about?" then it's just not surprising if no one is thinking about it. People are crazy, the world is mad.
Replies from: Vladimir_Nesov, Wei_Dai, Tyrrell_McAllister↑ comment by Vladimir_Nesov · 2009-12-03T09:11:41.313Z · LW(p) · GW(p)
haven't noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes.
Amplify?
Replies from: DanArmak↑ comment by Wei Dai (Wei_Dai) · 2009-12-04T01:19:07.408Z · LW(p) · GW(p)
People are crazy, the world is mad.
Eliezer, it makes me nervous when my behavior or reasoning differs from the vast majority of human beings. Surely that's a reasonable concern? Knowing that people are crazy and the world is mad helps a bit, but not too much because people who are even crazier than average probably explain their disagreements with the world in exactly this way.
So, I'm inclined to try to find more detailed explanations of the differences. Is there any reason you can think of why that might be unproductive, or otherwise a bad idea?
Replies from: Eliezer_Yudkowsky, aausch, byrnema↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T07:07:36.314Z · LW(p) · GW(p)
Eliezer, it makes me nervous when my behavior or reasoning differs from the vast majority of human beings. Surely that's a reasonable concern?
On this planet? No. On this planet, I think you're better off just worrying about the object-level state of the evidence. Your visceral nervousness has nothing to do with Aumann. It is conformity.
Knowing that people are crazy and the world is mad helps a bit, but not too much because people who are even crazier than average probably explain their disagreements with the world in exactly this way.
What do you care what people who are crazier than average do? You already have enough information to know you're not one of them. You care what these people do, not because you really truly seriously think you might be one of them, but because of the gut-level, bone-deep fear of losing status by seeming to affiliate with a low-prestige group by saying something that sounds similar to what they say. You may be reluctant to admit that you know perfectly well you're not in this group, because that also sounds like something this low-prestige group would say; but in real life, you have enough info, you know you have enough info, and the thought has not seriously crossed your mind in a good long while, whatever your dutiful doubts of your foregone conclusion.
Seriously, just make the break, clean snap, over and done.
So, I'm inclined to try to find more detailed explanations of the differences. Is there any reason you can think of why that might be unproductive, or otherwise a bad idea?
Occam's Imaginary Razor. Spending lots of time on the meta-level explaining away what other people think is bad for your mental health.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-04T11:47:30.365Z · LW(p) · GW(p)
You're wrong, Eliezer. I am sure that I'm not crazier than average, and I'm not reluctant to admit that. But in order to disagree with most of the world, I have to have good reason to think that I'm more rational than everyone I disagree with, or have some other explanation that lets me ignore Aumann. The only reason I referred to people who are crazier than average is to explain why "people are crazy, the world is mad" is not one of those explanations.
Spending lots of time on the meta-level explaining away what other people think is bad for your mental health.
That's only true if I'm looking for rationalizations, instead of real explanations, right? If so, noted, and I'll try to be careful.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T13:00:38.998Z · LW(p) · GW(p)
But in order to disagree with most of the world, I have to have good reason to think that I'm more rational than everyone I disagree with
You're more rational than the vast majority of people you disagree with. There, I told you up front. Is that reason enough? I can understand why you'd doubt yourself, but why should you doubt me?
That's only true if I'm looking for rationalizations, instead of real explanations, right? If so, noted, and I'll try to be careful.
I'm not saying that you should deliberately stay ignorant or avoid thinking about it, but I suspect that some of the mental health effects of spending lots of time analyzing away other people's disagreements would happen to you even if you miraculously zeroed in on the true answer every time. Which you won't. So it may not be wise to deliberately invest extra thought-time here.
Or maybe divide healthy and risky as follows: Healthy is what you do when you have a serious doubt and are moving to resolve it, for example by reading more of the literature, not to fulfill a duty or prove something to yourself, but because you seriously think there may be stuff out there you haven't read. Risky is anything you do because you want to have investigated in order to prove your own rationality to yourself, or because it would feel too immodest to just think outright that you had the right answer.
The only reason I referred to people who are crazier than average is to explain why "people are crazy, the world is mad" is not one of those explanations.
It is if you stick to the object level. Does it help if I rephrase it as "People are crazy, the world is mad, therefore everyone has to show their work"? You just shouldn't have to spend all that much effort to suppose that a large number of people have been incompetent. It happens so frequently that if there were a Shannon code for describing Earth, "they're nuts" would have a single-symbol code in the language. Now, if you seriously don't know whether someone else knows something you don't, then figure out where to look and look there. But the answer may just be "4", which stands for Standard Explanation #4 in the Earth Description Language: "People are crazy, the world is mad". And in that case, spending lots of effort in order to develop an elaborate dismissal of their reasons is probably not good for your mental health and will just slow you down later if it turns out they did know something else. If by a flash of insight you realize there's a compact description of a mistake that a lot of other people are making, then this is a valuable thing to know so you can avoid it yourself; but I really think it's important to learn how to just say "4" and move on.
Replies from: RobinHanson↑ comment by RobinHanson · 2009-12-04T14:25:21.722Z · LW(p) · GW(p)
It will come as a surprise to few people that I disagree strongly with Eliezer here; Wei should not take his word for the claim that Wei is so much more rational than all the folks he might disagree with that he can ignore their differing opinions. Where is this robust rationality test used to compare Wei to the rest of the intellectual world? Where is the evidence for this supposed mental health risk of considering the important evidence of the opinions of others? If the world is crazy, then very likely so are you. Yes it is a good sign if you can show some of your work, but you can almost never show all of your relevant work. So we must make inferences about the thought we have not seen.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T15:01:18.917Z · LW(p) · GW(p)
Well, I think we both agree on the dangers of a wide variety of cheap talk - or to put it more humbly, you taught me on the subject. Though even before then, I had developed the unfortunate personal habit of calling people's bluffs.
So while we can certainly interpret talk about modesty and immodesty in terms of rhetoric, isn't the main testable prediction at stake, the degree to which Wei Dai should often find, on further investigation, that people who disagree with him turn out to have surprisingly good reasons to do so?
Do you think - to jump all the way back to the original question - that if Dai went around asking people "Why aren't you working on decision theory and anthropics because you can't stand not knowing the answers?" that they would have some brilliantly decisive comeback that Dai never thought of which makes Dai realize that he shouldn't be spending time on the topic either? What odds would you bet at?
Replies from: RobinHanson, CronoDAS↑ comment by RobinHanson · 2009-12-05T04:32:49.117Z · LW(p) · GW(p)
Brilliant decisive reasons are rare for most topics, and most people can't articulate very many of their reasons for most of their choices. Their most common reason would probably be that they found other topics more interesting, and to evaluate that reason Wei would have to understand the reasons for thinking all those other topics interesting. Saying "if you can't prove to me why I'm wrong in ten minutes I must be right" is not a very reliable path to truth.
↑ comment by aausch · 2009-12-04T05:03:07.756Z · LW(p) · GW(p)
I typically class these types of questions with other similar ones:
What are the odds that a strategy of approximately continuous insanity, interrupted by clear thinking, is a better evolutionary adaptation than continuous sanity, interrupted by short bursts of madness? That the first, in practical, real-world terms, causes me to lead a more moral or satisfying life? Or even, that the computational resources that my brain provides to me as black boxes, can only be accessed at anywhere near peak capacity when I am functioning in a state of madness?
Is it easier to be sane, emulating insanity when required to, or to be insane, emulating sanity when required to?
↑ comment by byrnema · 2009-12-04T04:45:36.032Z · LW(p) · GW(p)
Given that we're sentient products of evolution, shouldn't we expect a lot of variation in our thinking?
Finding solutions to real-world problems often involves searching through a space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components. I hypothesize that we individually think in non-rational ways so that as a population we search through state space for solutions in a more random way.
Observing the world for 32-odd years, it appears to me that each human being is randomly imprinted with a way of thinking and a set of ideas to obsess about. (Einstein had a cluster of ideas that were extremely useful for 20th century physics, most people's obsessions aren't historically significant.)
Replies from: Liron, GuySrinivasan↑ comment by Liron · 2009-12-04T08:19:32.078Z · LW(p) · GW(p)
I hypothesize that we individually think in non-rational ways so that as a population we search through state space for solutions in a more random way.
That's a group selection argument.
GAME OVER
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-04T09:21:59.597Z · LW(p) · GW(p)
Is it necessarily? Consider a population dominated by individuals with an allele for thinking in a uniform fashion. Then insert individuals who will come up with original ideas. A lot of the original ideas are going to be false, but some of them might hit the right spot and confer an advantage. It's a risky, high variance strategy - the bearers of the originality alleles might not end up as the majority, but might not be selected out of the population either.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T09:26:26.964Z · LW(p) · GW(p)
Sure, you can resurrect it as a high-variance high-expected-value individual strategy with polymorphism maintained by frequency-dependent selection... but then there's still no reason to expect original thinking to be less rational thinking. And the original hypothesis was indeed group selection, so byrnema loses the right to talk about evolutionary psychology for one month or something. http://wiki.lesswrong.com/wiki/Group_selection
Replies from: byrnema, Alicorn↑ comment by byrnema · 2009-12-04T13:10:41.732Z · LW(p) · GW(p)
It seems to be extremely popular among a certain sort of amateur evolutionary theorist, though - there's a certain sort of person who, if they don't know about the incredible mathematical difficulty, will find it very satisfying to speculate about adaptations for the good of the group.
That's me. I don't know anything about evolutionary biology -- I'm not even an amateur. Group selection sounded quite reasonable to me, and now I know that it isn't borne out by observation or the math. I can't jump into evolutionary arguments; moratorium accepted.
Replies from: timtyler↑ comment by Alicorn · 2009-12-04T13:52:20.062Z · LW(p) · GW(p)
I'm no evo-bio expert, but it seems like you could make it work as something of a kin selection strategy too. If you don't think exactly like your family, then when your family does something collaborative, the odds that one of you has the right idea is higher. Families do often work together on tasks; the more the family that thinks differently succeeds, the better they and their think-about-random-nonconforming-things genes do. Or does assuming that families will often collaborate and postulating mechanisms to make that go well count as a group selection hypothesis?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T14:57:02.734Z · LW(p) · GW(p)
Anecdotally, it seems to me that across tribes and families, people are less likely to try to occupy a niche that already looks filled. (Which of course would be a matter of individual advantage, not tribal advantage!) Some of the people around me may have failed to enter their area of greatest comparative advantage, because even though they were smarter than average, I looked smarter.
Example anecdote: A close childhood friend who wanted to be a lawyer was told by his parents that he might not be smart enough because "he's not Eliezer Yudkowsky". I heard this, hooted, and told my friend to tell his parents that I said he was plenty smart enough. He became a lawyer.
Replies from: andrewbreese↑ comment by andrewbreese · 2011-01-31T05:04:01.910Z · LW(p) · GW(p)
THAT had a tragic ending!
He became a lawyer.
↑ comment by SarahSrinivasan (GuySrinivasan) · 2009-12-04T05:08:14.334Z · LW(p) · GW(p)
Why would evolution's search results tend to search in the same way evolution searches?
Replies from: byrnema↑ comment by byrnema · 2009-12-04T06:03:34.013Z · LW(p) · GW(p)
They search in the same way because random sampling via variability is an effective way to search. However, humans could perform effective searches by variation at the individual or population level (for example, a sentient creature could model all different kinds of thought to think of different solutions) but I was arguing for the variation at the population level.
Variability at the population level is explained by the fact that we are products of evolution.
Of course, human searches are effective as a result of both kinds of variation.
Not that any of this was thought out before your question... This is the usual networked-thought-reasoning versus linear-written-argument mapping problem.
Replies from: Eliezer_Yudkowsky, GuySrinivasan↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T07:22:22.012Z · LW(p) · GW(p)
random sampling via variability is an effective way to search
No it's not. It is one of the few search methods that are simple enough to understand without reading an AI textbook, so a lot of nontechnical people know about it and praise it and assign too much credit to it. And there are even a few problem classes where it works well, though what makes a problem this "easy" is hard to understand without reading an AI textbook. But no, it's not a very impressive kind of search.
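To make the comparison concrete, here is a minimal sketch under toy assumptions (a trivially smooth objective and invented evaluation budgets; nothing here is specific to evolution or to any AI system): even the simplest local search pulls away from blind random sampling as the dimension of the search space grows.

```python
import random

def sphere(x):                        # toy objective: minimize the sum of squares
    return sum(xi * xi for xi in x)

def random_search(dim, evals):
    """Blind random sampling: keep the best of `evals` independent draws."""
    best = float("inf")
    for _ in range(evals):
        best = min(best, sphere([random.uniform(-1, 1) for _ in range(dim)]))
    return best

def hill_climb(dim, evals, step=0.1):
    """Simplest local search: accept a random perturbation only if it improves."""
    x = [random.uniform(-1, 1) for _ in range(dim)]
    best = sphere(x)
    for _ in range(evals - 1):
        cand = [xi + random.gauss(0, step) for xi in x]
        if sphere(cand) < best:
            x, best = cand, sphere(cand)
    return best

random.seed(0)
for dim in (2, 10, 30):               # same evaluation budget for both methods
    print(dim, round(random_search(dim, 2000), 4), round(hill_climb(dim, 2000), 4))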
↑ comment by SarahSrinivasan (GuySrinivasan) · 2009-12-04T06:14:13.437Z · LW(p) · GW(p)
Heh, I came to a similar thought walking home after asking the question... that it seems at least plausible the only kinda powerful optimization processes that are simple enough to pop up randomlyish are the ones that do random sampling via variability.
I'm not sure it makes sense that variability at the population level is much explained by coming from evolution, though. Seems to me, as a bound, we just don't have enough points in the search space to be worth it even with 6b minds, and especially not down at the population levels during most of evolution. Then there's the whole difficulty with group selection, of course. My intuition says no... yours says yes though?
↑ comment by Tyrrell_McAllister · 2009-12-03T00:50:14.719Z · LW(p) · GW(p)
You might as well ask why all the people wondering if all mathematical objects exist, haven't noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes.
If you'd read much work by modern mathematical platonists, you'd know that many of them obsess over such differences, at least in the analytical school. (Not that it's worth your time to read such work. You don't need to do that to infer that they are likely wrong in their conclusions. But not reading it means that you aren't in a position to declare confidently how "all" of them think.)
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-03T05:35:32.971Z · LW(p) · GW(p)
Interesting. I wonder if you've misinterpreted me or if there's actually someone competent out there? Quick example if possible?
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-03T23:55:22.124Z · LW(p) · GW(p)
Interesting. I wonder if you've misinterpreted me or if there's actually someone competent out there? Quick example if possible?
Heh, false dilemma, I'm afraid :). My only point was that modern platonists aren't making the mistake that you described. They still make plenty of other mistakes.
Mathematical platonists are "incompetent" in the sense that they draw incorrect conclusions (e.g., mathematical platonism). In fact, all philosophers of mathematics whom I've read, even the non-platonists, make the mistake of thinking that physical facts are contingent in some objective sense in which mathematical facts are not. Not that this is believed unanimously. For example, I gather that John Stuart Mill held that mathematical facts are no more necessary than physical ones, but I haven't read him, so I don't know the details of his view.
But all mathematical philosophers whom I know recognize that logical relations are different from causal relations. They realize that Euclid's axioms "make" the angles in ideal triangles sum to 180 degrees in a manner very different from how the laws of physics make a window break when a brick hits it. For example, mathematical platonists might say (mistakenly) that every mathematically possible object exists, but not every physically possible object exists.
Another key difference for the platonist is that causal relations don't hold among mathematical objects, or between mathematical objects and physical objects. They recognize that they have a special burden to explain how we can know about mathematical objects if we can't have any causal interaction with them.
http://plato.stanford.edu/entries/platonism-mathematics/#WhaMatPla http://plato.stanford.edu/entries/abstract-objects/#5
Replies from: Vladimir_Nesov, Eliezer_Yudkowsky↑ comment by Vladimir_Nesov · 2009-12-04T00:34:33.940Z · LW(p) · GW(p)
I'd appreciate it if you write down your positions explicitly, even if in one-sentence form, rather than implying that so-and-so position is wrong because [exercise to the reader]. These are difficult questions, so even communicating what you mean is non-trivial, not even talking about convincing arguments and rigorous formulations.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-04T01:22:01.106Z · LW(p) · GW(p)
That's fair. I wrote something about my own position here.
Here is what I called mistaken, and why:
Mathematical platonism: I believe that we can't know about something unless we can interact with it causally.
The belief that physical facts are contingent: I believe that this is just an example of the mind projection fallacy. A fact is contingent only with respect to a theory. In particular, the fact is contingent if the theory neither predicts that it must be the case nor that it must not be the case. Things are not contingent in themselves, independently of our theorizing. They just are. To say that something is contingent, like saying that it is surprising, is to say something about our state of knowledge. Hence, to attribute contingency to things in themselves is to commit the mind projection fallacy.
↑ comment by byrnema · 2009-12-04T02:04:58.683Z · LW(p) · GW(p)
My interest is piqued as well. You appear to be articulating a position that I've encountered on Less Wrong before, and that I would like to understand better.
So physical facts are not contingent. All of them just happen to be independently false or true? What then is the status of a theory?
I'm speculating... perhaps you consider that there is a huge space of possible logical and consistent theories, one for every (independent) fact being true or false. (For example, if there are N statements about the physical universe, 2^N theories.) Of course, relative to one another they are all completely arbitrary. As we learn about the universe, we pick among theories that happen to explain all the facts that we know of (and we have preferences for theories that do so in ever simpler ways.) Then, any new fact may require updating to a new theory, or may be consistent with the current one. So theories are arbitrary but useful. Is this consistent with what you are saying?
Thank you. I apologize if I've misinterpreted -- I suspect the inferential distance between our views is quite great.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-06T04:36:13.895Z · LW(p) · GW(p)
Let me start with my slogan-version of my brand of realism: "Things are a certain way. They are not some other way."
I'll admit up front the limits of this slogan. It fails to address at least the following: (1) What are these "things" that are a certain way? (2) What is a "way", of which "things are" one? In particular (3) what is the ontological status of the other ways aside from the "certain way" that "things are"? I don't have fully satisfactory answers to these questions. But the following might make my meaning somewhat more clear.
To your questions:
So physical facts are not contingent. All of them just happen to be independently false or true?
First, let me clear up a possible confusion. I'm using "contingent" in the sense of "not necessarily true or necessarily false". I'm not using it in the sense of "dependent on something else". That said, I take independence, like contingency, to be a theory-relative term. Things just are as they are. In and of themselves, there are no relations of dependence or independence among them.
What then is the status of a theory?
Theories are mechanisms for generating assertions about how things are or would be under various conditions. A theory can be more or less wrong depending on the accuracy of the assertions that it generates.
Theories are not mere lists of assertions (or "facts"). All theories that I know of induce a structure of dependency among their assertions. That structure is a product of the theory, though. (And this relation between the structure and the theory is itself a product of my theory of theories, and so on.)
I should try to clarify what I mean by a "dependency". I mean something like logical dependency. I mean the relation that holds between two statements, P and Q, when we say "The reason that P is true is because Q is true".
Not all notions of "dependency" are theory-dependent in this sense. I believe that "the way things are" can be analyzed into pieces, and these pieces objectively stand in certain relations with one another. To give a prosaic example: the cup in front of me is really there, the table in front of me is really there, and the cup really sits in the relation of "being on" the table. If a cat knocks the cup off the table, an objective relation of causation will exist between the cat's pushing the cup and the cup's falling off the table. All this would be the case without my theorizing. These are facts about the way things are. We need a theory to know them, but they aren't mere features of our theory.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T07:28:27.757Z · LW(p) · GW(p)
Checking these references doesn't show the distinction I was thinking of between the mathematical form of first-order or higher-order logic and model theory, versus causality a la Pearl.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-04T14:57:10.935Z · LW(p) · GW(p)
So, is your complaint just that they use the same formalism to talk about logical relations and causal relations? Or even just that they don't use the same two specific formalisms that you use?
That seems to me like a red herring. Pearl's causal networks can be encoded in ZFC. Conversely, ZFC can be talked about using various kinds of decorated networks --- that's what category theory is. Using the same formalism for the two different kinds of relations should only be a problem if it leads one to ignore the differences between them. As I tried to show above, philosophers of mathematics aren't making this mistake in general. They are keenly aware of differences between logical relations and causal relations. In fact, many would point to differences that don't, in my view, actually exist.
And besides, I don't get the impression that philosophers these days consider nth-order logic to be the formalism for physical explanations. As mentioned on the Wikipedia page for the deductive-nomological model, it doesn't hold the dominant position that it once had.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T19:05:57.987Z · LW(p) · GW(p)
Pearl's causal networks can be encoded in ZFC
That's what I would expect most mathematical-existence types to think. It's true, but it's also the wrong thought.
Wei, do you see it now that I've pointed it out? Or does anyone else see it? As problems in philosophy go, it seems like a reasonable practice exercise to see it once I've pointed to it but before I've explained it.
Replies from: Liron, Wei_Dai, Vladimir_Nesov, Tyrrell_McAllister, Nick_Tarleton, Tyrrell_McAllister↑ comment by Liron · 2009-12-04T21:36:03.061Z · LW(p) · GW(p)
Is this it:
In logic, any time you have a set of axioms from which it is impossible to derive a contradiction, a model exists about which all the axioms are true. Here, "X exists" means that you can prove, by construction, that an existentially quantified proposition about some model X is true in models of set theory. So all consistent models are defined into "existence".
A causal process is an unfolded computation. Parts of its structure have relationships that are logically constrained, if not fully determined, by other parts. But like any computation, you can put an infinite variety of inputs on the Causality Turing machine's tape, and you'll get a different causal process. Here, "X exists" means that X is a part of the same causal process that you are a part of. So you have to entangle with your surroundings in order to judge what "exists".
↑ comment by Wei Dai (Wei_Dai) · 2009-12-05T19:13:40.471Z · LW(p) · GW(p)
Eliezer, I still don't understand Pearl well enough to answer your question. Did anyone else get it?
Right now I'm working on the following related question, and would appreciate any ideas. Some very smart people have worked hard on causality for years, but UDT1 seemingly does fine without an explicit notion of causality. Why is that, or is there a flaw in it that I'm not seeing? Eliezer suggested earlier that causality is a way of cashing out the "mathematical intuition module" in UDT1. I'm still trying to see if that really makes sense. It would be surprising if mathematical intuition is so closely related to causality, which seems to be very different at first glance.
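As a concrete (and very rough) illustration of that contrast, here is a toy Haskell sketch of the shape UDT1 appears to have from Wei Dai's description: the agent asks a black-box "mathematical intuition" function how much weight each world gets under the logical supposition that this very program returns a given output, and picks the output with the best weighted utility. No causal graph appears anywhere. The module name, the two toy worlds, and the particular numbers are placeholders of mine, not anything from UDT1 itself.

-- Toy gloss of a UDT1-style choice: maximize weighted utility over worlds,
-- where the weights come from a "mathematical intuition" black box evaluated
-- under the supposition "this program, on input i, returns output o".
module UDTSketch where

type Input  = Int
type Output = Int
type World  = String

-- Placeholder ensemble of possible worlds.
worlds :: [World]
worlds = ["world-A", "world-B"]

-- Stand-in for the mathematical intuition module: the weight it assigns to a
-- world under the supposition "S(i) = o".  Purely made up for illustration.
intuition :: Input -> Output -> World -> Double
intuition _ o w = if (o == 1) == (w == "world-A") then 0.9 else 0.1

utility :: World -> Double
utility "world-A" = 10
utility _         = 1

-- The decision step: no causal surgery, only logical suppositions about the
-- agent's own output.
decide :: Input -> Output
decide i = snd (maximum [(score o, o) | o <- [0, 1]])
  where score o = sum [intuition i o w * utility w | w <- worlds]

main :: IO ()
main = print (decide 0)   -- prints 1; output 1 makes the high-utility world likely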
↑ comment by Vladimir_Nesov · 2009-12-04T20:11:06.547Z · LW(p) · GW(p)
It's still unclear what you mean. One simple idea is that many formalisms can express one another, but some give more natural ways of representing a given problem than others. In some contexts, a given way of stating things may be clearly superior. If you see math as something happening in the heads of mathematicians, for example, or see the implication of classical logic as a certain idealization of material implication where nothing changes, one may argue that a given way is more fundamental, closer to what actually happens.
When you ask questions like "do you see it now?", I doubt there is even a good way of interpreting them as having definite answers without already knowing what you expect to hear, which requires a lot more context about what kinds of things you are thinking about than is generally available.
↑ comment by Tyrrell_McAllister · 2009-12-06T03:33:13.772Z · LW(p) · GW(p)
That's what I would expect most mathematical-existence types to think. It's true, but it's also the wrong thought.
Wei, do you see it now that I've pointed it out? Or does anyone else see it?
I take this to be your point:
Suppose that you want to understand causation better. Your first problem is that your concept of causation is still vague, so you try to develop a formalism to talk about causation more precisely. However, despite the vagueness, your topic is sufficiently well-specified that it's possible to say false things about it.
In this case, choosing the wrong language (e.g., ZFC) in which to express your formalism can be fatal. This is because a language such as ZFC makes it easy to construct some formalisms but difficult to construct others. It happens to be the case that ZFC makes it much easier to construct wrong formalisms for causation than does, say, the language of networks.
Making matters worse, humans have a tendency to be attracted to impressive-looking formalisms that easily generate unambiguous answers. ZFC-based formalisms can look impressive and generate unambiguous answers. But the answers are likely to be wrong because the formalisms that are natural to construct in ZFC don't capture the way that causation actually works.
Since you started out with a vague understanding of causation, you'll be unable to recognize that your formalism has led you astray. And so you wind up worse than you started, convinced of false beliefs rather than merely ignorant. Since understanding causation is so important, this can be a fatal mistake.
So, that's all well and good, but it isn't relevant to this discussion. Philosophers of mathematics might make a lot of mistakes. And maybe some have made the mistake of trying to use ZFC to talk about physical causation. But few, if any, haven't "noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes." That just isn't among the vast catalogue of their errors.
↑ comment by Nick_Tarleton · 2009-12-04T21:53:46.590Z · LW(p) · GW(p)
Is it simply that: causal graphs (can) have locality, and you can perform counterfactual surgery on intermediate nodes and get meaningful results, while logic has no locality and (without the hoped-for theory of impossible possible worlds) you can't contradict one theorem without the system exploding?
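To make the locality point concrete, here is a minimal Haskell sketch (my own toy example, not anything from Pearl or from this thread) of counterfactual surgery on a three-node chain Rain -> Sprinkler -> WetGrass: intervening on the intermediate node replaces its mechanism with a constant while leaving the downstream mechanism untouched, which is exactly the kind of local edit that has no obvious analogue for a single theorem inside a logical system.

-- Structural equations for a toy causal chain, plus a do()-style intervention.
module Surgery where

data Val = T | F deriving (Show, Eq)

-- Each node is computed from its parents.
sprinkler :: Val -> Val            -- depends on Rain
sprinkler rain = if rain == T then F else T

wetGrass :: Val -> Val -> Val      -- depends on Rain and Sprinkler
wetGrass rain spr = if rain == T || spr == T then T else F

-- Ordinary evaluation: just follow the mechanisms.
observe :: Val -> (Val, Val)
observe rain = let s = sprinkler rain in (s, wetGrass rain s)

-- Counterfactual surgery: overwrite the Sprinkler mechanism with a constant,
-- keep everything else as it was.  Locality is what makes this meaningful.
doSprinkler :: Val -> Val -> (Val, Val)
doSprinkler forced rain = (forced, wetGrass rain forced)

main :: IO ()
main = do
  print (observe T)         -- (F,T): rain shuts the sprinkler off, grass is wet
  print (doSprinkler T T)   -- (T,T): the forced sprinkler ignores the rain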
↑ comment by Tyrrell_McAllister · 2009-12-04T21:10:14.845Z · LW(p) · GW(p)
That's what I would expect most mathematical-existence types to think. It's true, but it's also the wrong thought.
Perhaps, but irrelevant, because I'm not what you would call a mathematical-existence type.
ETA: The point is that you can't be confident about what thought stands behind the sentence "Pearl's causal networks can be encoded in ZFC" until you have some familiarity with how the speaker thinks. On what basis do you claim that familiarity?
↑ comment by Wei Dai (Wei_Dai) · 2009-12-02T23:55:48.751Z · LW(p) · GW(p)
Ok, one possible answer to my own question: people who are interested just to satisfy their curiosity, tend to find an answer they like, and stop inquiring further, whereas people who have something to protect have a greater incentive to make sure the answer is actually correct.
For some reason, I can't stop trying to find flaws in every idea I come across, including my own, which causes me to fall out of this pattern.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-03T00:05:35.650Z · LW(p) · GW(p)
More like: if a question activates Philosophy mode, then people just make stuff up at random like the Greek philosophers did, unless they are modern philosophers, in which case they invent a modal logic.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-03T00:45:03.709Z · LW(p) · GW(p)
Ancient philosophy would look very different if the Greek philosophers had been making stuff up at random. Plato and Aristotle followed cognitive strategies, strategies that they (1) could communicate and (2) felt constrained to follow. For these reasons, I don't think that those philosophers could be characterized in general as "making stuff up".
Of course, they followed different strategies respectively, and they often couldn't communicate their feelings of constraint to one another. And of course their strategies often just didn't work.
↑ comment by Yorick_Newsome · 2009-12-02T11:25:56.555Z · LW(p) · GW(p)
Maybe I'm wrong, but it seems most people here follow the decision theory discussions just for fun. Until introduced, we just didn't know it was so interesting! That's my take anyways.
↑ comment by whpearson · 2009-12-03T09:57:38.774Z · LW(p) · GW(p)
I'm curious why you find it interesting. To me, pure decision theory is an artifact of language. We have language constructs for describing situations and their outcomes in order to communicate with other humans, and because of this we try to make formalisms that take the model and utility as inputs.
In a real intelligence I expect decisions to be made on an ad hoc local basis for efficiency reasons. In an evolved creature the expected energy gain from theoretically sound decisions could easily be less than the energetic cost of the extra computation.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-12-04T02:52:41.238Z · LW(p) · GW(p)
I'm probably not the best person to explain why decision theory is interesting from an FAI perspective. For that you'd want to ask Eliezer or other SIAI folks. But I think the short answer there is that without a well-defined decision theory for an AI, we can't hope to prove that it has any Friendliness properties.
My own interest in decision theory is mainly philosophical. Originally, I wanted to understand how probabilities should work when there are multiple copies of oneself, either due to mind copying technology, or because all possible universes exist. That led me to ask, "what are probabilities, anyway?" The philosophy of probability is its own subfield in philosophy, but I came to the conclusion that probabilities only have meaning within a decision theory, so the real question I should be asking is what kind of decision theory one should use when there are multiple copies of oneself.
Replies from: Eliezer_Yudkowsky, whpearson↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-04T15:02:45.853Z · LW(p) · GW(p)
Your own answer is also pretty relevant to FAI. Because anything that confuses you can turn out to contain the black box surprise from hell.
Until you know, you don't know if you need to know, you don't know how much you need to know, and you don't know the penalty for not knowing.
↑ comment by whpearson · 2009-12-07T14:31:13.599Z · LW(p) · GW(p)
Thanks.
I'll try to explain a bit more why I am not very interested in probabilities and DTs. I am interested in how decisions are made, but I am far more interested in how an agent gets to have a certain model in the first place (before it is converted into an action). With a finite agent there are questions such as why it has model X rather than Y, which I think impinges on the question of what topics we should discuss. I'd view most people not as having a low probability that DTs are important, but as simply not storing a probability for that proposition at all. They have never explored it, so they have no evidence either way.
The model of the world you have can dominate the DT, in determining the action taken. And in the end that is what we care about, the action taken in response to the input and history.
I also think that DT, with its fixed model, ignores the possibility of communication between the part that runs through the model and picks an action and the part that creates the model. For example if I see a very good contest/offer I might think it too good to be true, and look for more information to alter my model and find the catch before taking the offer up.
Replies from: Sebastian_Hagen, wedrifid↑ comment by Sebastian_Hagen · 2009-12-07T15:36:17.428Z · LW(p) · GW(p)
For example if I see a very good contest/offer I might think it too good to be true, and look for more information to alter my model and find the catch before taking the offer up.
How is this case different from any other decision? You compute the current probabilities that this is a fraud and that this is an unusually good deal. You compute the cost of collecting more data in a specific fashion, and the probability distribution over possible futures containing a future version of you with better knowledge about this problem. You do the same for various alternative actions you could take instead of collecting more data right now, calculate expected long-run utility for each of the considered possible futures, and choose an action based on that information: either prodding the universe to give you more data about this, or doing something else.
I am glossing over all the interesting hard parts, of course. But still, is there anything fundamentally different about manipulating the expected state of knowledge of your future-self from manipulating any other part of reality?
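A toy numerical version of the comparison described above (the numbers, names, and the simplifying assumption that investigation reveals the truth are all mine, purely for illustration):

-- Expected utility of accepting a suspicious offer now versus paying a small
-- cost to investigate it first.
module OfferVOI where

pFraud :: Double
pFraud = 0.75                       -- current credence that the offer is a fraud

uFraud, uGoodDeal, investigationCost :: Double
uFraud            = -100            -- utility of accepting a fraudulent offer
uGoodDeal         = 40              -- utility of accepting a genuine offer
investigationCost = -2              -- cost of gathering more data

-- Accept immediately.
euAcceptNow :: Double
euAcceptNow = pFraud * uFraud + (1 - pFraud) * uGoodDeal

-- Investigate first; assume (simplistically) the investigation reveals the
-- truth, after which we accept only genuine deals.
euInvestigate :: Double
euInvestigate = investigationCost + (1 - pFraud) * uGoodDeal

main :: IO ()
main = do
  putStrLn ("Accept now:        " ++ show euAcceptNow)     -- -65.0
  putStrLn ("Investigate first: " ++ show euInvestigate)   -- 8.0

With these made-up numbers, prodding the universe for more data wins, which is the "too good to be true" intuition falling out of the same expected-utility machinery as any other decision.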
Replies from: whpearson↑ comment by whpearson · 2009-12-07T23:14:13.200Z · LW(p) · GW(p)
Interesting question. Not quite what I was getting at. I hope you don't mind if I use a situation where extra processing can get you more information.
A normal decision theory can be represented as a simple function from model to action. It should halt:
decisiontheory :: Model -> Action
Let's say you have a model whose consequences you can keep on expanding to get a more accurate picture of what is going to happen, like playing chess with a variable amount of look-ahead. What the system is looking for is a program that will recursively self-improve and be Friendly (where taking an action is considered to be producing an AI).
It has a function that can either carry on expanding the model or return an action.
modelOrAct :: Model -> Either Action Model
You can implement decisiontheory with this code
decisiontheory :: Model -> Action
decisiontheory m = either id decisiontheory (modelOrAct m)  -- return the action if one is chosen, otherwise keep refining the model
However, this has the potential to loop forever due to its recursive definition. This would happen if the expected utility of increasing the accuracy of the model is always greater than that of performing an action and there is no program it can prove safe. You would want some way of interrupting it to update the model with information from the real world as well as from the extrapolation.
So I suppose the difference in this case is that, because you are making choices about which mental actions to perform, you can get stuck and never get information from the world about real-world actions.
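One standard way around that regress, sketched below in the same Haskell style (the types and the trivial modelOrAct are placeholders of mine, not a proposal for how a real system should budget its thinking): give the expand-or-act loop explicit fuel, and force a fallback action, such as going back to the world for more data, when the fuel runs out. That guarantees halting and builds in exactly the kind of interruption point described above.

-- Same recursion as decisiontheory above, but with a budget, so it must halt.
module BoundedDT where

data Model  = Model deriving Show
data Action = Act | AskWorldForData deriving Show

-- Placeholder for the modelOrAct of the parent comment; here it always asks
-- to keep thinking, which is the worst case for the unbounded version.
modelOrAct :: Model -> Either Action Model
modelOrAct m = Right m

decisionTheoryBounded :: Int -> Model -> Action
decisionTheoryBounded 0    _ = AskWorldForData   -- out of fuel: go get real-world data
decisionTheoryBounded fuel m =
  either id (decisionTheoryBounded (fuel - 1)) (modelOrAct m)

main :: IO ()
main = print (decisionTheoryBounded 1000 Model)  -- AskWorldForData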
↑ comment by wedrifid · 2009-12-07T15:28:12.140Z · LW(p) · GW(p)
The model of the world you have can dominate the DT, in determining the action taken. And in the end that is what we care about, the action taken in response to the input and history.
No, the model of the world you have cannot dominate the DT or, for that matter, do anything at all. There must be a decision theory, either explicit or implicit, in some action-generating algorithm that you are running. Then it is just a matter of how much effort you wish to spend developing each.
I also think that DT, with its fixed model, ignores the possibility of communication between the part that runs through the model and picks an action and the part that creates the model. For example if I see a very good contest/offer I might think it too good to be true, and look for more information to alter my model and find the catch before taking the offer up.
A Decision Theory doesn't make you naive or impractical. Deciding to look for more information is just a good decision.
Replies from: whpearson↑ comment by whpearson · 2009-12-08T08:44:37.229Z · LW(p) · GW(p)
No, the model of the world you have cannot dominate the DT or, for that matter, do anything at all. There must be a decision theory, either explicit or implicit, in some action-generating algorithm that you are running. Then it is just a matter of how much effort you wish to spend developing each.
I spoke imprecisely. I meant that the part of the program that generates the model of the world dominates the DT in terms of what action is taken. That is, with a fixed DT you can make it perform any action depending on what model you give it. The converse is not true, as the model constrains the possible actions.
A Decision Theory doesn't make you naive or impractical. Deciding to look for more information is just a good decision.
I think in terms of code and types. Most discussions of DTs don't discuss feeding the utilities back to the model-making section, so I'm assuming a simple type. It might be wrong, but at least I can be precise about what I am talking about. See my reply to Sebastian.
comment by mormon2 · 2009-12-01T18:06:38.962Z · LW(p) · GW(p)
Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of the time is spent on the risks and benefits of an FAI that doesn't exist. For a task that is estimated to be so dangerous and so world changing would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.
Also if FAI is the primary goal here then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA... Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say that but there it is)?
Replies from: Tyrrell_McAllister, Roko, Eliezer_Yudkowsky↑ comment by Tyrrell_McAllister · 2009-12-01T18:24:06.102Z · LW(p) · GW(p)
I think that you answered your own question. One way to develop FAI is to attract talented people such as those at Google, etc. One way to draw such people is to convince them that FAI is worth their time. One way to convince them that FAI is worth their time is to lay out strong arguments for the risks and benefits of FAI.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-02T04:34:54.067Z · LW(p) · GW(p)
For a task that is estimated to be so dangerous and so world changing would it not behoove SIAI to be the first to make FAI?
That's my end of the problem.
Also if FAI is the primary goal here then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA
Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.
Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets.
Replies from: alexflint, mormon2, Vladimir_Nesov, Roko↑ comment by Alex Flint (alexflint) · 2009-12-02T12:33:07.025Z · LW(p) · GW(p)
Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.
I'm not sure the olympiads are such a uniquely optimal selector. For sure there were lots of superstars at the IOI, but now that I'm doing a PhD I realise that many of those small-scale problem-solving skills don't necessarily transfer to broader-scale AI research (putting together a body of work, seeing analogies between different theories, predicting which research direction will be most fruitful). Equally, I met a ton of superstars working at Google, and I mean deeply brilliant superstars, not just well-trained professional coders. Google is trying to attract much the same crowd as SIAI, but they have a ton more resources, so insofar as it's possible, it makes sense to try to recruit people from Google.
Replies from: AnnaSalamon, Jack↑ comment by AnnaSalamon · 2009-12-02T19:27:05.934Z · LW(p) · GW(p)
It would be nice if we could get both groups (international olympiads and Google) reading relevant articles, and thinking about rationality and existential risk. Any thoughts here, alexflint or others?
Replies from: alexflint↑ comment by Alex Flint (alexflint) · 2009-12-02T21:37:24.745Z · LW(p) · GW(p)
Well, for the olympiads, each country runs a training camp leading up to the actual olympiad, and they'd probably be more than happy to have someone from SIAI give a guest lecture. These kids would easily pick up the whole problem from a half-hour talk.
Google also has guest speakers, and someone from SIAI could certainly go along and give a talk. It's a much more difficult nut to crack, as Google has a somewhat insular culture and they're constantly dealing with overblown hype, so many may tune out as soon as something that sounds too "futuristic" comes up.
What do you think?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-02T21:44:09.308Z · LW(p) · GW(p)
Yes, those seem worth doing.
Re: the national olympiad training camps, my guess is that it is easier to talk if an alumnus of the program recommends us. We know alumni of the US math olympiad camp, and the US computing olympiad camp, but to my knowledge we don't know alumni from any of the other countries or from other subjects. Do you have connections there, Alex? Anyone else?
Replies from: Kevin, alexflint↑ comment by Kevin · 2010-03-07T09:08:09.308Z · LW(p) · GW(p)
What about reaching out to people who scored very highly when taking the SATs as 7th graders? Duke sells the names and info of the test-takers to those that can provide "a unique educational opportunity."
http://www.tip.duke.edu/talent_searches/faqs/grade_7.html#release
↑ comment by Alex Flint (alexflint) · 2009-12-03T08:51:02.577Z · LW(p) · GW(p)
Sure, but only in Australia I'm afraid :). If there's anyone from SIAI in that part of the world then I'm happy to put them in contact.
↑ comment by Jack · 2009-12-02T13:08:06.337Z · LW(p) · GW(p)
Thinking about this point is leading me to conclude that Google is substantially more likely than SIAI to develop a General AI before anyone else. Gintelligence anyone?
Replies from: alexflint↑ comment by Alex Flint (alexflint) · 2009-12-02T17:10:43.275Z · LW(p) · GW(p)
Well, I don't think Google is working on GAI explicitly (though I wouldn't know), and I think they're not working on it for much the same reason that most research labs aren't working on it: it's difficult, risky research, outside the mainstream dogma, and most people don't put very much thought into the implications.
Replies from: Jack↑ comment by Jack · 2009-12-02T19:04:49.690Z · LW(p) · GW(p)
I think the conjunction of the probability that (1) Google decides to start working on it AND the probability that Google can (2) put together a team that could develop an AGI AND the probability that (3) that team succeeds might be higher than the probability of (2) and (3) for SIAI/Eliezer.
(1) Is pretty high because Google gets its pick of the most talented young programmers and gives them a remarkable amount of freedom to pursue their own interests. Especially if interest in AI increases it wouldn't be surprising if a lot of people with an interest in AGI ended up working there. I bet a fair number already do.
(2) and (3) are high because of Google's resources, their brand/reputation, and the fact that they've shown they are capable of completing and deploying innovative code and business ideas.
All of the above is said with very low confidence.
Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we'd all be screwed.
Edit: I knew this would get downvoted :-)... or not.
Replies from: wedrifid, alexflint↑ comment by wedrifid · 2009-12-03T03:05:51.371Z · LW(p) · GW(p)
Edit: I knew this would get downvoted :-)
I voted up. I think you may be mistaken but you are looking at relevant calculations.
Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we'd all be screwed.
Nice.
↑ comment by Alex Flint (alexflint) · 2009-12-02T21:00:02.353Z · LW(p) · GW(p)
Fair point. I actually rate (1) quite low just because there are so few people who think of AGI as an immediate problem to be solved. Tenured professors, for example, have a very high degree of freedom, yet very few of them choose to pursue AGI in comparison to the manpower dedicated to other AI fields. Amongst Googlers there is presumably also a very small fraction of folks potentially willing to tackle AGI head-on.
↑ comment by mormon2 · 2009-12-02T07:07:52.816Z · LW(p) · GW(p)
"That's my end of the problem."
Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?
"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."
So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?
"Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets."
If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design and some portions implemented, while you do not have any portions implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, since I would imagine they hire the people who work on those projects?
Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overheads of running the code. You are much better off with C++ or Ct or some other language like that without all the overheads, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.
Replies from: Eliezer_Yudkowsky, None, Vladimir_Nesov, DanArmak, wedrifid↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-02T07:42:30.983Z · LW(p) · GW(p)
Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.
I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.
The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.
Replies from: None, mormon2↑ comment by [deleted] · 2009-12-05T07:46:41.232Z · LW(p) · GW(p)
Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.
No? I've been thinking of both problems as essentially problems of rationality. Once you have a sufficiently rational system, you have a Friendliness-capable, proto-intelligent system.
And it happens that I have a copy of "Do the Right Thing: Studies in Limited Rationality", but I'm not reading it, even though I feel like it will solve my entire problem perfectly. I wonder why this is.
↑ comment by mormon2 · 2009-12-03T02:25:14.383Z · LW(p) · GW(p)
Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first and main question, the question I am actually most interested in the answer to, which is: where is the technical work? I was looking for some detail as to what part of step one you are working on. So if TDT is important to your FAI, then how is the math coming? Are you updating LOGI, or are you discarding it and doing it all over?
"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive."
Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?
It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers, etc.? Without your formal work being out for public review, is it really fair to state that all the current AGI projects are essentially wrong-headed?
"So tell me have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever work at a research organization with millions or billions of dollars to throw at R&D? If not how can you be so sure?"
So I take it from the fact that you didn't answer the question that you have in fact not worked for Intel or DARPA etc. That being said, I think a measure of humility is in order before you categorically dismiss them as minor players in FAI. Sorry if that sounds harsh but there it is (I prefer to be blunt because it leaves no room for interpretation).
Replies from: wedrifid, Vladimir_Nesov, Nick_Tarleton, wedrifid↑ comment by wedrifid · 2009-12-03T02:47:10.122Z · LW(p) · GW(p)
Sorry if that sounds harsh but there it is (I prefer to be blunt because it leaves no room for interpretation).
Really, we get it. We don't have automated signatures on this system but we can all pretend that this is included in yours. All this serves is to create a jarring discord between the quality of your claims and your presumption of status.
↑ comment by Vladimir_Nesov · 2009-12-03T09:26:13.202Z · LW(p) · GW(p)
It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work.
The hypothesis is that yes, they won't work as steps towards FAI. Worse, they might actually backfire. And FAI progress is not as "impressive". What do you expect should be done, given this conclusion? Continue running toward the abyss, just for the sake of preserving the appearance of productivity?
↑ comment by Nick_Tarleton · 2009-12-03T03:13:17.968Z · LW(p) · GW(p)
Without your formal work being out for public review, is it really fair to state that all the current AGI projects are essentially wrong-headed?
Truth-seeking is not about fairness.
↑ comment by wedrifid · 2009-12-03T02:48:04.069Z · LW(p) · GW(p)
Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?
For this analogy to hold there would need to be an existing complete theory of AGI.
(There would also need to be something in the theory or proposed application analogous to "hey! We should make a black hole just outside our solar system because black holes are like way cool and powerful and stuff!")
Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first and main question, the question I am actually most interested in the answer to, which is: where is the technical work? I was looking for some detail as to what part of step one you are working on. So if TDT is important to your FAI, then how is the math coming? Are you updating LOGI, or are you discarding it and doing it all over?
These are good questions. Particularly the TDT one. Even if the answer happened to be "not that important".
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-03T05:34:22.155Z · LW(p) · GW(p)
I was working on something related to TDT this summer, can't be more specific than that. If I get any of the remaining problems in TDT nailed down beyond what was already presented, and it's not classified, I'll let y'all know. Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.
LOGI's out the window, of course, as anyone who's read the arc of LW could very easily guess.
Replies from: anonym, wedrifid, mormon2↑ comment by anonym · 2009-12-03T16:20:40.033Z · LW(p) · GW(p)
Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.
I'm curious to know your reasoning behind this, if you can share it.
It seems to me that the publication of some high-quality technical papers would increase the chances of attracting and keeping the attention of one-in-a-million people like this much more than a rationality book would.
↑ comment by mormon2 · 2009-12-03T16:45:04.972Z · LW(p) · GW(p)
Thank you, that's all I wanted to know. You don't have any math for TDT. TDT is just an idea and that's it, just like the rest of your AI work. It's nothing more than nambi-pambi philosophical mumbo-jumbo... Well, I will spend my time reading people who have a chance of creating AGI or FAI, and it's not you...
To sum up, you have nothing but some ideas for FAI, no theory, no math, and the best defense you have is that you don't care about the academic community. The other key one is that you are the only person smart enough to make and understand FAI. This delusion is fueled by your LW followers.
The latest in lame excuses is this "classified" statement, which is total (being honest here) BS. Maybe if you had it protected under an NDA, or a patent pending, but neither is the case. Therefore, since most LW people understanding the math is unlikely, the most probable conclusion is that you're making excuses for your lack of due diligence in study and for not actually producing a single iota of a real theory.
Happy pretense of solving FAI... (hey we should have a holiday)
Further comments refer to the complaint department at 1-800-i dont care....
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-03T18:13:30.869Z · LW(p) · GW(p)
The problem is that even if nothing "impressive" is available at SIAI, there is no other source where something is. Nada. The only way to improve this situation is to work on the problem. Criticism would be constructive if you suggested a method of improving this situation, e.g. organizing a new team that is more likely than SIAI to get to FAI. Merely arguing about status won't help to solve the problem.
You keep ignoring the distinction between AGI and FAI, which doesn't add sanity to this conversation. You may disagree that there is a difference, but that's distinct from implying that people who believe there is a difference should also act as if there is none. To address the latter, you must directly engage this disagreement.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-03T18:20:46.725Z · LW(p) · GW(p)
Feed ye not the trolls. No point in putting further comments underneath anything that's been voted down under -2.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-03T18:24:49.511Z · LW(p) · GW(p)
That comment was intended as the last if m. doesn't suddenly turn reasonable, and more for the benefit of lurkers (since this topic isn't frequently discussed).
Replies from: wedrifid↑ comment by wedrifid · 2009-12-03T22:37:08.174Z · LW(p) · GW(p)
As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence.) I expect this to be particularly the case when the troll is unable to invoke a similarly childish response.
Replies from: mormon2↑ comment by mormon2 · 2009-12-05T05:22:30.714Z · LW(p) · GW(p)
"As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence.) I expect this to be particularly the case when the troll is unable to invoke a similarly childish response."
Wow I say one negative thing and all of a sudden I am a troll.
Let's consider the argument behind my comment:
Premises: Has EY ever constructed an AI of any form (FAI, AGI, or narrow AI)? Does EY have any degrees in any relevant fields regarding FAI? Is EY backed by a large well funded research organization? Could EY get a technical job at such an organization? Does EY have a team of respected experts helping him make FAI? Does EY have a long list of technical math and algorithm rich publications on any area regarding FAI? Has EY ever published a single math paper in for example a real math journal like AMS? Has he published findings on FAI in something like IEEE?
The answer to each of these questions is no.
The final question to consider is: If EY's primary goal is to create FAI first then why is he spending most of his time blogging and working on a book on rationality (which would never be taken seriously outside of LW)?
Answer: this is counter to his stated goal.
So, with all the answers being in the negative, what hope should anyone here hold for EY making FAI? Answer: zero, zilch, none, zip...
If you have evidence to the contrary, for example proof that not all the answers to the above questions are no, then please... otherwise I rest my case. If you come back with this lame troll response, I will consider my case proven, closed, and done. Oh, and to be clear, I have no doubt that I will fail to sway anyone from the LW/EY worship cult, but the exercise is useful for other reasons.
Replies from: wedrifid, wedrifid↑ comment by wedrifid · 2009-12-05T06:00:58.014Z · LW(p) · GW(p)
Has EY ever constructed an AI of any form (FAI, AGI, or narrow AI)?
Nobody has done the first two (fortunately). I am not sure if he has created a narrow AI. I have; it took me a few years to realise that the whole subfield I was working in was utter bullshit. I don't disrespect anyone else for reaching the same conclusion.
Does EY have any degrees in any relevant fields regarding FAI?
He can borrow mine. I don't need to make any paper planes any time soon and I have found ways to earn cash without earning the approval of any HR guys.
Is EY backed by a large well funded research organization?
No.
Could EY get a technical job at such an organization?
He probably lacks the humility. Apart from that, probably yes if you gave him a year.
Does EY have a team of respected experts helping him make FAI?
There are experts in FAI?
Does EY have a long list of technical math and algorithm rich publications on any area regarding FAI?
I would like to see some of those. Not the algorithm rich ones (that'd be a bad sign indeed) but the math ones certainly. I'm not sure I would be comfortable with your definition of 'rich' either.
Has EY ever published a single math paper in for example a real math journal like AMS? Has he published findings on FAI in something like IEEE?
No. No. Both relevant.
Replies from: CronoDAS↑ comment by CronoDAS · 2009-12-05T07:16:46.498Z · LW(p) · GW(p)
Has EY ever published a single math paper in for example a real math journal like AMS? Has he published findings on FAI in something like IEEE?
No. No. Both relevant.
Indeed, they are very relevant. As far as I can tell, Eliezer's job description is "blogger". He is, indeed, brilliant at it, but I haven't seen evidence that he's done anything else of value. As for TDT, everyone here ought to remember the rule for academic research: if it's not published, it doesn't count.
Which is why I don't fault anyone for accusing Eliezer of not having done anything - because they're right.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-05T08:04:10.504Z · LW(p) · GW(p)
As for TDT, everyone here ought to remember the rule for academic research: if it's not published, it doesn't count.
It takes some hard work against my natural inclinations not to walk out on academia entirely so long as they have a rule like that. By nature and personality I have an extremely low tolerance for tribal inbreeding rules and dimwit status games. If it were anything other than Academia - a corporation with the same rule about its own internal journals, say - I would just shrug and write them off as idiots whose elaborate little games I have no interest in playing, and go off to find someone else who can scrape up some interest in reality rather than appearances.
As it stands I'm happy to see the non-direct-FAI-research end of SIAI trying to publish papers, but it seems to me that the direct-FAI-research end should have a pretty hard-and-fast rule of not wasting time, and of dealing with reality rather than appearances. That sort of thing isn't a one-off decision rule, it's a lifestyle and a habit of thinking. For myself I'm not sure that I lose much from dealing only with academics who are willing to lower themselves to discuss a blog post. Sure, I must be losing something that I would gain if I magically, by surgical intervention, gained a PhD. Sure, there are people who are in fact smart who won't in fact deal with me. But there are other smart people who are more relaxed and more grounded than that, so why not just deal with them instead?
Replies from: Vladimir_Nesov, wedrifid, CronoDAS↑ comment by Vladimir_Nesov · 2009-12-05T12:57:43.998Z · LW(p) · GW(p)
There is no such clear-cut general rule wherein you are required to publish your results in a special ritual form to make an impact (in most cases, it's merely a bureaucratic formality in the funding/hiring process; history abounds with informally-distributed works that were built upon, it's just that in most cases good works are also published, 'cause "why not?"). There is a simple need for a clear self-contained explanation that it's possible to understand for other people, without devoting a special research project to figuring out what you meant and hunting down notes on the margins. The same reason you are writing a book. Once a good explanation is prepared, there are usually ways to also "publish" it.
Replies from: Wei_Dai, Eliezer_Yudkowsky↑ comment by Wei Dai (Wei_Dai) · 2009-12-05T19:56:34.533Z · LW(p) · GW(p)
I agree with Nesov and can offer a personal example here. I have a crypto design that was only "published" to a mailing list and on my homepage, and it still got eighty-some citations according to Google Scholar.
Also, just because you (Eliezer) don't like playing status games, doesn't mean it's not rational to play them. I hate status games too, but I can get away with ignoring them since I can work on things that interest me without needing external funding. Your plans, on the other hand, depend on donors, and most potential donors aren't AI or decision theory experts. What do they have to go on except status? What Nesov calls "a bureaucratic formality in the funding/hiring process" is actually a human approximation to group rationality, I think.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-05T20:22:48.555Z · LW(p) · GW(p)
That's fair enough. In which case I can only answer that some things are higher-priority than others, which is why I'm writing that book and not a TDT paper.
↑ comment by wedrifid · 2009-12-05T11:14:18.913Z · LW(p) · GW(p)
I empathize rather strongly with the position you are taking here. Even so, I am very much looking forward to seeing (for example) a published TDT. I don't particularly care whether it is published in blog format or in a journal. There are reasons for doing so that are not status related.
↑ comment by CronoDAS · 2009-12-05T09:19:18.611Z · LW(p) · GW(p)
What's the difference between a real artist and a poseur?
Artists ship.
And you, Eliezer Yudkowsky, haven't shipped.
Until you publish something, you haven't really done anything. Your "timeless decision theory", for example, isn't even published on a web page. It's vaporware. Until you actually write down your ideas, you really can't call yourself a scientist, any more than someone who hasn't published a story can claim the title of author. If you get hit by a bus tomorrow, what great work will you have left behind? Is there something in the SIAI vault that I don't know about, or is it all locked up in that head of yours where nobody can get to it? I don't expect you to magically produce a FAI out of your hat, but any advance that isn't written down might as well not exist, for all the good it will do.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-05T13:31:57.257Z · LW(p) · GW(p)
Eh? TDT was explained in enough detail for Dai and some others to get it. It might not make sense to a lay audience but any philosophically competent fellow who's read the referenced books could reconstruct TDT out of Ingredients of Timeless Decision Theory.
I don't understand your concept of "shipping". There are many things I want to understand, some I understand already, a few of those that I've gone so far as to explain for the sake of people who are actually interested in them, and anything beyond that falls under the heading of PR and publicity.
Not to put too fine a point on it, but I find that no matter how much I do, the people who previously told me that I hadn't yet achieved it, find something else that I haven't yet achieved to focus on. First it's "show you can invent something new", and then when you invent it, "show you can get it published in a journal", and if my priority schedule ever gets to the point I can do that, I have no doubt that the same sort of people will turn around and say "Anyone can publish a paper, where are the prominent scholars who support you?" and after that they will say "Does the whole field agree with you?" I have no personal taste for any part of this endless sequence except the part where I actually figure something out. TDT is rare in that I can talk about it openly and it looks like other people are actually making progress on it.
Replies from: Wei_Dai, CronoDAS, mormon2↑ comment by Wei Dai (Wei_Dai) · 2009-12-06T20:21:32.878Z · LW(p) · GW(p)
TDT was explained in enough detail for Dai and some others to get it.
It's explained in enough detail for me to get an intuitive understanding of it, and to obtain some inspirations and research ideas to follow up. But it's not enough for me to try to find flaws in it. I think that should be the standard of detail in scientific publication: the description must be detailed enough that if the described idea or research were to have a flaw, then a reader would be able to find it from the description.
It might not make sense to a lay audience but any philosophically competent fellow who's read the referenced books could reconstruct TDT out of Ingredients of Timeless Decision Theory.
Ok, but what if TDT is flawed? In that case, whoever is trying to reconstruct TDT would just get stuck somewhere before they got to a coherent theory, unless they recreated the same flaw by coincidence. If they do get stuck, how can they know or convince you that it's your fault, and not theirs? Unless they have super high motivation and trust in you, they'll just give up and do something else, or never attempt the reconstruction in the first place.
Replies from: Eliezer_Yudkowsky, wedrifid↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-06T23:57:00.838Z · LW(p) · GW(p)
I already know it's got a couple of flaws (the "Problems I Can't Solve" post, you solved one of them). The "Ingredients" page should let someone get as far as I got, no further, if they had all the standard published background knowledge that I had.
The theory has two main formal parts that I know how to formalize. One is the "decision diagonal", and I wrote that out as an equation. It contains a black box, but I haven't finished formalizing that black box either! The other main part that needs formalizing is the causal network. Judea Pearl wrote all this up in great detail; why should I write it again? There's an amendment of the causal network to include logical uncertainty. I can describe this in the same intuitive way that CDT theorists took for granted when they were having their counterfactual distributions fall out of the sky as manna from heaven, but I don't know how to give it a Pearl-grade formalization.
Hear me well! If I wanted to look impressive, I could certainly attach Greek symbols to key concepts - just like the classical causal decision theory theorists did in order to make CDT look much more formalized than it actually was. This is status-seeking and self-deception and it got in the way of their noticing what work they had left to do. It was a mistake for them to pretend to formality that way. It is part of the explanation for how they bogged down. I don't intend to make the same mistake.
↑ comment by wedrifid · 2009-12-07T01:19:49.421Z · LW(p) · GW(p)
It's explained in enough detail for me to get an intuitive understanding of it, and to obtain some inspirations and research ideas to follow up. But it's not enough for me to try to find flaws in it. I think that should be the standard of detail in scientific publication: the description must be detailed enough that if the described idea or research were to have a flaw, then a reader would be able to find it from the description.
This is where I get stuck. I can get an intuitive understanding of it easily enough. In fact, I got a reasonable intuitive understanding of it just from observing application to problem cases. But I know I don't have enough to go on to find flaws. I would have to do quite a lot of further background research to construct the difficult parts of the theory and I know that even then I would not be able to fully trust my own reasoning without dedicating several years to related fields.
Basically, it would be easier for me to verify a completed theory if I just created it myself from the premise "a decision theory shouldn't be bloody stupid". That way I wouldn't have to second guess someone else's reasoning.
Since I know I do not have the alliances necessary to get a commensurate status pay-off for any work I put into such research that probably isn't the best way to satisfy my curiosity. Ricardo would suggest that the most practical approach would be for me to spend my time leveraging my existing position to earn cash and making a donation earmarked for 'getting someone to finish the TDT theory'.
↑ comment by CronoDAS · 2009-12-05T19:33:12.401Z · LW(p) · GW(p)
Eh? TDT was explained in enough detail for Dai and some others to get it. It might not make sense to a lay audience but any philosophically competent fellow who's read the referenced books could reconstruct TDT out of Ingredients of Timeless Decision Theory.
All right, then.
↑ comment by mormon2 · 2009-12-05T21:11:27.642Z · LW(p) · GW(p)
"Not to put too fine a point on it, but I find that no matter how much I do, the people who previously told me that I hadn't yet achieved it, find something else that I haven't yet achieved to focus on."
Such is the price of being an innovator or claiming innovation...
"First it's "show you can invent something new", and then when you invent it, "show you can get it published in a journal", and if my priority schedule ever gets to the point I can do that, I have no doubt that the same sort of people will turn around and say "Anyone can publish a paper, where are the prominent scholars who support you?""
Sure, but you have not invented a decision theory, taking TDT as the example, until you have math to back it up. Decision theory is a mathematical theory, not just some philosophical ideas. What is more, thanks to programs like Mathematica, there are easy ways to post equations online. For example, put "\[Nu] Derivative[2][w][\[Nu]] + 2 Derivative[1][w][\[Nu]] + ArcCos[z]^2 \[Nu] w[\[Nu]] == 0 /; w[\[Nu]] == Subscript[c, 1] GegenbauerC[\[Nu], z] + Subscript[c, 2] (1/\[Nu]) ChebyshevU[\[Nu], z]" into Mathematica and presto. Further, publication of the theory is a necessary part of getting the theory accepted, be that good or bad. Not only that, but it helps in formalizing one's ideas, which is positive, especially when working with other people and trying to explain what you are doing.
"and after that they will say "Does the whole field agree with you?" I have no personal taste for any part of this endless sequence except the part where I actually figure something out. TDT is rare in that I can talk about it openly and it looks like other people are actually making progress on it."
There are huge areas of non-FAI-specific work, and people whose help would be of value: for example, knowledge representation, embodiment (virtual or real), and sensory stimulus recognition... Each of these will need work to make FAI practical, and there are people who can help you and who probably know more about those specific areas than you.
↑ comment by wedrifid · 2009-12-05T05:45:24.400Z · LW(p) · GW(p)
If EY's primary goal is to create FAI first then why is he spending most of his time blogging and working on a book on rationality?
Because LaTeX has already been done.
So, with all the answers being in the negative, what hope should anyone here hold for EY making FAI? Answer: zero, zilch, none, zip...
Zero, zilch, none and zip are not probabilities but the one I would assign is rather low. (Here is where 'shut up and do the impossible' fits in.)
PS: Is it acceptable to respond to trolls when the post is voted up to (2 - my vote)?
Replies from: mormon2↑ comment by mormon2 · 2009-12-05T20:57:14.057Z · LW(p) · GW(p)
How am I a troll? Did I not make a valid point? Have I not made other valid points? You may disagree with how I say something but that in no way labels me a troll.
The intention of my comment was to find out what the hope for EY's FAI goals is based on here. I was trying to make the point with the zero, zilch idea... that the faith in EY making FAI is essentially blind faith.
Replies from: wedrifid, Zack_M_Davis↑ comment by wedrifid · 2009-12-06T00:17:00.919Z · LW(p) · GW(p)
The intention of my comment was to find out what the hope for EY's FAI goals is based on here. I was trying to make the point with the zero, zilch idea... that the faith in EY making FAI is essentially blind faith.
I am not sure who here has faith in EY making FAI. In fact, I don't even recall EY claiming a high probability of such a success.
Replies from: Technologos, CarlShulman↑ comment by Technologos · 2009-12-17T06:34:01.966Z · LW(p) · GW(p)
Agreed. As I recall, EY posted at one point that prior to thinking about existential risks and FAI, his conception of an adequate life goal was moving the Singularity up an hour. Sure doesn't sound like he anticipates single-handedly making an FAI.
At best, he will make major progress toward a framework for friendliness. And in that aspect he is rather a specialist.
↑ comment by CarlShulman · 2009-12-17T07:28:31.801Z · LW(p) · GW(p)
Agreed. I don't know anyone at SIAI or FHI so absurdly overconfident as to expect to avert existential risk that would otherwise be fatal. The relevant question is whether their efforts, or supporting efforts, do more to reduce risk than alternative uses of their time or that of supporters.
↑ comment by Zack_M_Davis · 2009-12-05T22:52:06.851Z · LW(p) · GW(p)
You may disagree with how I say something, but that in no way labels me a troll.
I'm not so sure. You don't seem to be being downvoted for criticizing Eliezer's strategy or sparse publication record: you got upvoted earlier, as did CronoDAS for making similar points. But the hostile and belligerent tone of many of your comments does come off as kind of, well, trollish.
Incidentally, I can't help but notice that the subject and style of your writing are remarkably similar to those of DS3618. Is that just a coincidence?
Replies from: Tiredoftrolls↑ comment by Tiredoftrolls · 2009-12-06T00:45:44.658Z · LW(p) · GW(p)
Not to mention mormon1 and psycho.
The same complaints and vitriol about Eliezer and LW, unsupported claims of technical experience convenient to conversational gambits (CMU graduate degree with no undergrad degree, AI and DARPA experience), and support for Intelligent Design creationism.
Plus sadly false claims of being done with Less Wrong because of his contempt for its participants.
Replies from: mormon2↑ comment by mormon2 · 2009-12-08T02:23:05.824Z · LW(p) · GW(p)
Responding to both Zack and Tiredoftrolls:
The similarity between DS3618's posts and mine is coincidental. As for mormon1 or psycho, also coincidental. The fact that I have done work with DARPA in no way connects me, unless you suppose only one person has ever worked with DARPA; nor does AI connect me.
For Tiredoftrolls specifically: the fact that you are blithely unaware of the possibility, and the reality, of being smart enough to do a PhD without undergrad work is not my concern. Railing against EY and his lack of math is something more people here should do. I do not agree now, nor have I ever agreed, with ID or creationism or whatever you want to call that tripe.
To head off the obvious question of why mormon2: because mormon and mormon1 were not available or didn't work. I thought about mormonpreacher but decided against it.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-12-13T23:02:43.300Z · LW(p) · GW(p)
As for mormon1... also coincidental.
Bullshit. Note, if the names aren't evidence enough, the same misspelling of "namby-pamby" here and here.
I propose banning.
↑ comment by [deleted] · 2009-12-03T03:20:15.624Z · LW(p) · GW(p)
. . . Java is slower than C++ because of all the overheads of running the code. . . .
A fast programming language is the last thing we need. Literally--when you're trying to create a Friendly AI, compiling it and optimizing it and stuff is probably the very last step.
(Yes, I did try to phrase the latter half of that in such a way as to make the former half seem true, for the sake of rhetoric.)
↑ comment by Vladimir_Nesov · 2009-12-02T11:32:17.237Z · LW(p) · GW(p)
If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design and some portions implemented, while you do not have any portions implemented?
He is solving a wrong problem (i.e. he is working towards destroying the world), but that's completely tangential.
Replies from: timtyler↑ comment by wedrifid · 2009-12-03T02:57:07.328Z · LW(p) · GW(p)
Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overheads of running the code.
A world in which a segfault in an FAI could end it.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-12-03T03:05:19.721Z · LW(p) · GW(p)
I hope any FAI would be formally verified to a far greater extent than any existing JVM.
Replies from: timtyler↑ comment by timtyler · 2009-12-09T15:04:06.229Z · LW(p) · GW(p)
Formal verification is typically only used in highly safety-critical systems - and often isn't used there either. If you look at the main applications for intelligent systems, not terribly many are safety-critical - and the chances of being able to do much in the way of formal verification at a high level seem pretty minimal anyway.
↑ comment by Vladimir_Nesov · 2009-12-02T11:27:06.187Z · LW(p) · GW(p)
Only if you can expect to manage to get a supply of these folks. On the absolute scale, assuming that level of ability X is absolutely necessary to make meaningful progress (where X is relative to the current human population) seems as arbitrary as assuming that human intelligence is exactly the greatest level of intelligence theoretically possible. FAI still has a lot of low-hanging fruit, simply because the problem was never seriously considered in this framing.
↑ comment by Roko · 2009-12-02T08:07:47.813Z · LW(p) · GW(p)
gathering people from places like Google, Intel, IBM, and DARPA
Though people from these places would undoubtedly have skills in management, PR and marketing, teamwork, narrow AI, etc. that would be extremely useful as supporting infrastructure. Supporting infrastructure totally counts.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-02T09:13:23.217Z · LW(p) · GW(p)
Oh, hell yeah. Anna's side can recruit them, no problem. And I'm certainly not saying that no one who works at these organizations could make the cut for the Final Programmers. Just that you can't hire Final Programmers at random from anywhere, not even Google.
comment by MichaelAnissimov · 2009-12-02T02:42:18.528Z · LW(p) · GW(p)
I participated in the 2008 summer intern program and visited the 2009 program several times and thought it was a lot of fun and very educational. The ideas that I bounced off of people at these programs still inform my writing and thinking now.
comment by alyssavance · 2009-12-01T01:53:57.483Z · LW(p) · GW(p)
"Getting good popular writing and videos on the web, of sorts that improve AI risks understanding for key groups;"
Though good popular writing is, of course, very important, I think we sometimes overestimate the value of producing summaries/rehashings of earlier writing by Vinge, Kurzweil, Eliezer, Michael Vassar and Anissimov, etc.
Replies from: outlawpoet↑ comment by outlawpoet · 2009-12-01T01:58:03.477Z · LW(p) · GW(p)
I must agree with this, although video and most writing OTHER than short essays and polemics would be mostly novel, and interesting.
comment by nhamann · 2009-12-03T06:18:26.876Z · LW(p) · GW(p)
I have a (probably stupid) question. I have been following Less Wrong for a little over a month, and I've learned a great deal about rationality in the meantime. My main interest, however, is not rationality; it is creating FAI. I see that the SIAI has an outline of a research program, described here: http://www.singinst.org/research/researchareas.
Is there an online community that is dedicated solely to discussing friendly AI research topics? If not, is the creation of one being planned? If not, why not? I realize that the purpose of these SIAI fellowships is to foster such research, but I'd imagine that a discussion community focused on relevant topics in evolutionary psych, cogsci, math, CS, etc. would provide a great deal more stimulation for FAI research than would the likely limited number of fellowships available.
A second benefit would be that it would provide a support group to people (like me) who want to do FAI research but who do not know enough about cogsci, math, CS, etc. to be of much use to SIAI at the moment. I have started combing through SIAI's reading list, which has been invaluable in narrowing down what I need to be reading, but at the end of the day, it's only a reading list. What would be ideal is an active community full of bright and similarly-motivated people who could help to clarify misconceptions and point out novel connections in the material.
I apologize if this comment is off-topic.
Replies from: Kaj_Sotala, Vladimir_Nesov↑ comment by Kaj_Sotala · 2009-12-03T08:57:05.230Z · LW(p) · GW(p)
Is there an online community that is dedicated solely to discussing friendly AI research topics?
None that would be active and of high quality. SL4 is probably the closest, but these days it's kinda quiet and the discussions aren't very good. Part of the problem seems to be that a community dedicated purely to FAI draws too many cranks. Now, even if you had active moderation, it's often pretty hard for people in general to come up with good, original questions and topics on FAI. SL4 is dead partly because the people there are tired of basically rehashing the same topics over and over. Having FAI discussion on the side of an established rationalist community seems like a good idea, both to drive out the cranks and to have other kinds of discussion going on that, while not directly relevant to FAI, might still contribute to an understanding of the topic indirectly.
↑ comment by Vladimir_Nesov · 2009-12-03T10:04:59.613Z · LW(p) · GW(p)
This forum is as close as there is to an FAI discussion group. SL4 is very much (brain-)dead at the moment. There aren't even a lot of people who are known to be specifically attacking the FAI problem -- one can name Yudkowsky, Herreshoff, maybe Rayhawk; others keep quiet. Drop me a mail; I may have some suggestions on what to study.
Replies from: righteousreason, righteousreason↑ comment by righteousreason · 2009-12-07T01:40:28.852Z · LW(p) · GW(p)
Also Peter de Blanc
↑ comment by righteousreason · 2009-12-04T14:01:19.132Z · LW(p) · GW(p)
Whatever happened to Nick Hay? Wasn't he doing some kind of FAI-related research?
Replies from: CarlShulmancomment by whpearson · 2009-12-01T23:01:00.495Z · LW(p) · GW(p)
I really like what SIAI is trying to do, the spirit that it embodies.
However, I am getting more skeptical of any projections or projects based on anything other than good old-fashioned scientific knowledge (my own included).
You can progress scientifically to make AI if you copy human architecture somewhat, by making predictions about how the brain works and organises itself. However, I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path? For example, what evidence from the real world would convince the SIAI to abandon the search for a fixed decision theory as a module of the AI? And why isn't SIAI looking for that evidence, to make sure that you aren't wasting your time?
For every Einstein who makes the "right" cognitive leap, there are probably orders of magnitude more Kelvins who do things like predict that meteors provide fuel for the sun.
How are you going to winnow out the wrong ideas if they are consistent with everything we know, especially if they are pure mathematical constructs?
Replies from: AngryParsley, None↑ comment by AngryParsley · 2009-12-04T09:20:06.882Z · LW(p) · GW(p)
You can progress scientifically to make AI if you copy human architecture somewhat.
I think you're making the mistake of relying too heavily on our one sample of a general intelligence: the human brain. How do we know which parts to copy and which parts to discard? To draw an analogy to flight, how can we tell which parts of the brain are equivalent to a bird's beak and which parts are equivalent to wings? We need to understand intelligence before we can successfully implement it. Research on the human brain is expensive, requires going through a lot of red tape, and is already being done by other groups. More importantly, planes do not fly because they are similar to birds. Planes fly because we figured out a theory of aerodynamics. Planes would fly just as well if no birds ever existed, and explaining aerodynamics doesn't require any talk of birds.
I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path?
I don't see how we can hope to make significant progress on non-bird flight. How will we test whether our theories are correct or on the right path?
Just because you can't think of a way to solve a problem doesn't mean the problem is intractable. We don't yet have the equivalent of a theory of aerodynamics for intelligence, but we do know that it is a computational process. Any algorithm, including whatever makes up intelligence, can be expressed mathematically.
As to the rest of your comment, I can't really respond to the questions about SIAI's behavior, since I don't know much about what they're up to.
Replies from: Jordan, whpearson↑ comment by Jordan · 2009-12-04T10:10:34.187Z · LW(p) · GW(p)
The bird analogy rubs me the wrong way more and more. I really don't think it's a fair comparison. Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI. Certainly intelligence might have some nice underlying theory, so we should pursue that angle as well, but I don't see how we can be certain either way.
Replies from: AngryParsley, timtyler↑ comment by AngryParsley · 2009-12-04T18:55:08.949Z · LW(p) · GW(p)
Flight is based on some pretty simple principles, intelligence not necessarily so.
I think the analogy still maps even if this is true. We can't build useful AIs until we really understand intelligence. This holds no matter how complicated intelligence ends up being.
If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI.
First, nothing is "fundamentally complex." (See the reductionism sequence.) Second, brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.
Replies from: Jordan↑ comment by Jordan · 2009-12-05T02:19:44.856Z · LW(p) · GW(p)
We can't build useful AIs until we really understand intelligence.
You're overreaching. Uploads could clearly be useful, whether we understand how they work or not.
brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.
Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.
Replies from: Vladimir_Nesov, wedrifid↑ comment by Vladimir_Nesov · 2009-12-05T02:23:06.615Z · LW(p) · GW(p)
Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.
But you still can't get to FAI unless you (or the uploads) understand intelligence.
Replies from: Jordan↑ comment by Jordan · 2009-12-05T10:01:52.433Z · LW(p) · GW(p)
Right, the two things you must weigh and 'choose' between (in the sense of research, advocacy, etc.):
1) Go for FAI, with the chance that AGI comes first
2) Go for uploads, with the chance they go crazy when self modifying
You don't get provable friendliness with uploads without understanding intelligence, but you do get a potential upgrade path to superintelligence that doesn't result in the total destruction of humanity. The safety of that path may be small, but the probability of developing FAI before AGI is likewise small, so it's not clear in my mind which option is better.
Replies from: CarlShulman, Vladimir_Nesov↑ comment by CarlShulman · 2009-12-05T10:26:04.683Z · LW(p) · GW(p)
At the workshop after the Singularity Summit, almost everyone (including Eliezer, Robin, and myself), including all the SIAI people, said they hoped that uploads would be developed before AGI. The only folk who took the other position were those actively working on AGI (but not FAI) themselves.
Also, people at SIAI and FHI are working on papers on strategies for safer upload deployment.
Replies from: Jordan, timtyler, wedrifid↑ comment by Jordan · 2009-12-06T06:54:45.989Z · LW(p) · GW(p)
Interesting, thanks for sharing that. I take it then that it was generally agreed that the time frame for FAI was probably substantially shorter than for uploads?
Replies from: CarlShulman↑ comment by CarlShulman · 2009-12-06T10:43:08.894Z · LW(p) · GW(p)
Separate (as well as overlapping) inputs go into de novo AI and brain emulation, giving two distinct probability distributions. AI development seems more uncertain, so that we should assign substantial probability to it coming before or after brain emulation. If AI comes first/turns out to be easier, then FAI-type safety measures will be extremely important, with less time to prepare, giving research into AI risks very high value.
If brain emulations come first, then shaping the upload transition to improve the odds of solving collective action problems like regulating risky AI development looks relatively promising. Incidentally, however, a lot of useful and as yet unpublished analysis (e.g. implications of digital intelligences that can be copied and run at high speed) is applicable to thinking about both emulation and de novo AI.
Replies from: Mitchell_Porter, Jordan↑ comment by Mitchell_Porter · 2009-12-06T11:26:42.935Z · LW(p) · GW(p)
I think AGI before human uploads is far more likely. If you have hardware capable of running an upload, the trial-and-error approach to AGI will be a lot easier (in the form of computationally expensive experiments). Also, it is going to be hard to emulate a human brain without knowing how it works (neurons are very complex structures and it is not obvious which component processes need to appear in the emulation), and as you approach that level of knowledge, trial-and-error again becomes easier, in the form of de novo AI inspired by knowledge of how the human brain works.
Maybe you could do a coarse-grained emulation of a living brain by high-resolution fMRI-style sampling, followed by emulation of the individual voxels on the basis of those measurements. You'd be trying to bypass the molecular and cellular complexities, by focusing on the computational behavior of brain microregions. There would still be potential for leakage of discoveries made in this way into the AGI R&D world before a complete human upload was carried out, but maybe this method closes the gap a little.
I can imagine upload of simple nonhuman nervous systems playing a role in the path to AGI, though I don't think it's at all necessary - again, if you have hardware capable of running a human upload, you can carry out computational experiments in de novo AI which are currently expensive or impossible. I can also see IA (intelligence augmentation) of human beings through neurohacks, computer-brain interfaces, and sophisticated versions of ordinary (noninvasive) interfaces. I'd rate a Singularity initiated by that sort of IA as considerably more likely than one arising from uploads, unless they're nondestructive low-resolution MRI-produced uploads. Emulating a whole adult human brain is not just an advanced technological action, it's a rather specialized one, and I expect the capacity to do so to coincide with the capacity to do IA and AI in a variety of other forms, and for superhuman intelligence to arise first on that front.
To sum up, I think the contenders in the race to produce superintelligence are trial-and-error AGI, theory-driven AGI, and cognitive neuroscience. IA becomes a contender only when cognitive neuroscience advances enough that you know what you're doing with these neurohacks and would-be enhancements. And uploads are a bit of a parlor trick that's just not in the running, unless it's accomplished via modeling the brain as a network of finite-state-machine microregions to be inferred from high-resolution fMRI. :-)
Replies from: Jordan↑ comment by Jordan · 2009-12-07T01:48:50.128Z · LW(p) · GW(p)
The following is a particular take on the future, hopefully demonstrating a realistic path for uploads occurring before AGI.
Imagine a high fidelity emulation of a small mammal brain (on the order of 1 g) is demonstrated, running at about 1/1000th real time. The computational demand for such a code is roughly a million times less than for emulating a human brain in real time.
Such a demonstration would give immense credibility to whole brain emulations, even of humans. It's not unlikely that the military would be willing to suddenly throw billions into WBE research. That is, the military isn't without imagination, and once the potential for human brain emulation has been shown, it's easy to see the incredible ramifications they would bring.
The big unknown would be how much optimization could be made to the small brain uploads. If we can't optimize the emulations' code, then the only path to human uploads would be through Moore's law, which would take two decades: ample time for the neuroscience breakthroughs to impact AGI. If, on the other hand, the codes prove to allow large optimizations, then intense funding from the military could get us to human uploads in a matter of years, leaving very little time for AGI theory to catch up.
My own intuition is that the first whole brain emulations will allow for substantial room for optimization.
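To make the scaling claim above concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes (my assumptions, not the commenter's) that compute scales roughly linearly with emulated brain mass and with emulation speed, and that a human brain is on the order of 1.4 kg; the 1 g and 1/1000th-real-time figures come from the comment itself.

```python
# Back-of-the-envelope check of the "roughly a million times less" claim,
# assuming compute scales linearly with emulated mass and emulation speed.

small_brain_mass_g = 1.0      # demo emulation: ~1 g mammal brain (from the comment)
human_brain_mass_g = 1400.0   # assumption: human brain is roughly 1.4 kg
demo_speed_factor = 1 / 1000  # demo runs at ~1/1000th real time (from the comment)
target_speed_factor = 1.0     # goal: human brain emulated in real time

mass_ratio = human_brain_mass_g / small_brain_mass_g     # ~1,400x more tissue
speed_ratio = target_speed_factor / demo_speed_factor    # ~1,000x faster
total_ratio = mass_ratio * speed_ratio                   # ~1.4 million x

print(f"Human real-time emulation needs ~{total_ratio:,.0f}x the demo's compute")
```

Under those assumptions the gap works out to roughly 1.4 million times, consistent with the "roughly a million times" figure in the comment.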
↑ comment by Jordan · 2009-12-07T01:50:37.606Z · LW(p) · GW(p)
How valuable is trying to shape the two probability distributions themselves? Should we be devoting resources to encouraging people to do research in computational neuroscience instead of AGI?
Replies from: CarlShulman↑ comment by CarlShulman · 2009-12-07T01:56:07.477Z · LW(p) · GW(p)
It's hard to change the rate of development of fields. It's easier to do and publish core analyses of the issues with both approaches, so as to 1) Know better where to focus efforts 2) Make a more convincing case for any reallocation of resources.
↑ comment by timtyler · 2009-12-09T23:46:27.742Z · LW(p) · GW(p)
re: "almost everyone [...] said they hoped that uploads would be developed before AGI"
IMO, that explains much of the interest in uploads: wishful thinking.
Replies from: gwern↑ comment by gwern · 2009-12-10T00:20:53.976Z · LW(p) · GW(p)
Reminds me of Kevin Kelly's The Maes-Garreau Point:
"Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die. Isn’t that a coincidence?"
Possibly the single most disturbing bias-related essay I've read, because I realized as I was reading it that my own uploading prediction was very close to my expected lifespan (based on my family history) - only 10 or 20 years past my death. It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.
Replies from: CarlShulman, Vladimir_Nesov, mattnewport↑ comment by CarlShulman · 2009-12-10T15:04:41.512Z · LW(p) · GW(p)
It's an interesting methodology, but the Maes-Garreau data is just terrible quality. For every person I know on that list, the attached point estimate is misleading to grossly misleading. For instance, it gives Nick Bostrom as predicting a Singularity in 2004, when Bostrom actually gives a broad probability distribution over the 21st century, with much probability mass beyond it as well. 2004 is in no way a good representative statistic of that distribution, and someone who had read his papers on the subject or emailed him could easily find that out. The Yudkowsky number was the low end of a range (if I say that between 100 and 500 people were at an event, that's not the same thing as an estimate of 100 people!), and subsequently disavowed in favor of a broader probability distribution regardless. Marvin Minsky is listed as predicting 2070, when he has also given an estimate of most likely "5 to 500" years, and this treatment is inconsistent with the treatment of the previous two estimates. Robin Hanson's name is spelled incorrectly, and the figure beside his name is grossly unrepresentative of his writing on the subject (available for free on his website for the 'researcher' to look at). The listing for Kurzweil gives 2045, which is when Kurzweil expects a Singularity, as he defines it (meaning just an arbitrary benchmark for total computing power), but in his books he suggests that human brain emulation and life extension technology will be available in the previous decade, which would be the "living long enough to live a lot longer" break-even point if he were right about that.
I'm not sure about the others on that list, but given the quality of the observed date, I don't place much faith in the dataset as a whole. It also seems strangely sparse: where is Turing, or I.J. Good? Dan Dennett, Stephen Hawking, Richard Dawkins, Doug Hofstadter, Martin Rees, and many other luminaries are on record in predicting the eventual creation of superintelligent AI with long time-scales well after their actuarially predicted deaths. I think this search failed to pick up anyone using equivalent language in place of the term 'Singularity,' and was skewed as a result. Also, people who think that a technological singularity or the like will probably not occur for over 100 years are less likely to think it an important issue to talk about right now, and so are less likely to appear in a group selected by looking for attention-grabbing pronouncements.
A serious attempt at this analysis would aim at the following:
1) Not using point estimates, which can't do justice to a probability distribution. Give a survey that lets people assign their probability mass to different periods, or at least specifically ask for an interval, e.g. 80% confidence that an intelligence explosion will have begun/been completed after X but before Y.
2) Emailing the survey to living people to get their actual estimates.
3) Surveying a group identified via some other criterion (like knowledge of AI, note that participants at the AI@50 conference were electronically surveyed on timelines to human-level AI) to reduce selection effects.
Replies from: gwern↑ comment by gwern · 2009-12-10T21:52:44.829Z · LW(p) · GW(p)
It's an interesting methodology, but the Maes-Garreau data is just terrible quality.
See, this is the sort of response I would expect: a possible bias is identified, some basic data is collected which suggests that it's plausible, and then we begin a more thorough inspection. Complete silence, though, was not what I expected.
where is Turing
Turing would be hard to do. He predicts in 1950 that a machine could pass his test 70% of the time within another 50 years (2000; Turing was born in 1912, so he would've been 88), and that this would be as good as a real mind. But is this a date for the Singularity or for genuine consciousness?
Replies from: CarlShulman↑ comment by CarlShulman · 2009-12-11T01:18:54.258Z · LW(p) · GW(p)
Yes, I considered that ambiguity, and certainly you couldn't send him a survey. But it gives a lower bound, and Turing does talk about machines equaling or exceeding human capacities across the board.
Replies from: gwern↑ comment by Vladimir_Nesov · 2009-12-10T12:10:18.707Z · LW(p) · GW(p)
It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.
It would be very surprising if you are right. I expect most of the people who have thought about the question of how such estimates could be biased would think of this idea within the first several minutes (even if without experimental data).
Replies from: gwern↑ comment by gwern · 2009-12-10T13:40:18.217Z · LW(p) · GW(p)
It may be an obvious point on which to be biased, but how many such people then go on to work out birthdates and prediction dates, or to look for someone else's work along those lines, like Maes-Garreau's?
Replies from: CarlShulman↑ comment by CarlShulman · 2009-12-10T15:26:45.176Z · LW(p) · GW(p)
A lot of folk at SIAI have looked at and for age correlations.
Replies from: gwern↑ comment by gwern · 2009-12-10T21:36:22.329Z · LW(p) · GW(p)
And found?
Replies from: CarlShulman↑ comment by CarlShulman · 2009-12-11T02:00:18.146Z · LW(p) · GW(p)
1) Among those sampled, the young do not seem to systematically predict a later Singularity.
2) People do update their estimates based on incremental data (as they should), so we distinguish between estimated dates, and estimated time-from-present.
2a) A lot of people burned by the 1980s AI bubble shifted both of those into the future.
3) A lot of AI folk with experience from that bubble have a strong taboo against making predictions for fear of harming the field by raising expectations. This skews the log of public predictions.
4) Younger people working on AGI (like Shane Legg, Google's Moshe Looks) are a self-selected group and tend to think that it is relatively close (within decades, and within their careers).
5) Random smart folk, not working on AI (physicists, philosophers, economists), of varied ages, tend to put broad distributions on AGI development with central tendencies in the mid-21st century.
Replies from: gwern, gwern↑ comment by gwern · 2012-08-14T00:53:23.075Z · LW(p) · GW(p)
Is there any chance of the actual data or writeups being released? It's been almost 3 years now.
Replies from: CarlShulman↑ comment by CarlShulman · 2012-08-14T01:12:54.133Z · LW(p) · GW(p)
Lukeprog has a big spreadsheet. I don't know his plans for it.
Replies from: gwern↑ comment by gwern · 2012-08-14T01:21:37.887Z · LW(p) · GW(p)
Hm... I wonder if that's the big spreadsheet ksotala has been working on for a while?
Replies from: lukeprog↑ comment by lukeprog · 2012-08-18T05:29:43.362Z · LW(p) · GW(p)
Yes. An improved version of the spreadsheet, which serves as the data set for Stuart's recent writeup, will probably be released when the Stuart+Kaj paper is published, or perhaps earlier.
↑ comment by gwern · 2009-12-11T17:24:30.061Z · LW(p) · GW(p)
- evidence for, apparently
- Yes, but shouldn't we use the earliest predictions by a person? Even a heavily biased person may produce reasonable estimates given enough data. The first few estimates are likely to be based mostly on intuition - or bias, in other words.
- But which way? There may be a publication bias toward 'true believers', but then there may also be a bias towards unobjectionably far-away estimates like Minsky's 5 to 500 years. (One wonders what odds Minsky genuinely assigns to the first AI being created in 2500 AD.)
- Reasonable. Optimism is an incentive to work, and self-deception is probably relevant.
- Evidence for, isn't it? Especially if they assign even weak belief in significant life-extension breakthroughs, ~2050 is within their conceivable lifespan (since they know humans currently don't live past ~120, they'd have to be >~80 to be sure of not reaching 2050).
↑ comment by mattnewport · 2009-12-10T00:28:03.001Z · LW(p) · GW(p)
Kelly doesn't give references for the dates he cites as predictions for the singularity. Did Eliezer really predict at some point that the singularity would occur in 2005? That sounds unlikely to me.
Replies from: mattnewport, timtyler↑ comment by mattnewport · 2009-12-10T00:31:15.104Z · LW(p) · GW(p)
Hmm, I found this quote on Google:
A few years back I would have said 2005 to 2020. I got this estimate by taking my real guess at the Singularity, which was around 2008 to 2015, and moving the dates outward until it didn't seem very likely that the Singularity would occur before then or after then.
Seems to me that Kelly didn't really interpret the prediction entirely reasonably (picking the earlier date), but the later date would not disconfirm his theory either.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2009-12-10T00:54:18.411Z · LW(p) · GW(p)
Did Eliezer really predict at some point that the singularity would occur in 2005? That sounds unlikely to me.
Eliezer has disavowed many of his old writings:
I’ve been online since a rather young age. You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.
But re the 2005 listing, cf. the now-obsolete "Staring Into the Singularity" (2001):
I do not "project" when the Singularity will occur. I have a "target date". I would like the Singularity to occur in 2005, which I think I would have a reasonable chance of doing via AI if someone handed me a hundred million dollars a year. The Singularity Institute would like to finish up in 2008 or so.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-10T01:26:57.207Z · LW(p) · GW(p)
Doesn't sound much like the Eliezer you know, does it...
↑ comment by timtyler · 2009-12-10T18:38:34.803Z · LW(p) · GW(p)
Re: "Kelly doesn't give references for the dates he cites as predictions for the singularity."
That sucks. Also, "the singularity" is said to occur when minds get uploaded?!?
And "all agreed that once someone designed the first super-human artificial intelligence, this AI could be convinced to develop the technology to download a human mind immediately"?!?
I have a rather different take on things on my "On uploads" video:
↑ comment by wedrifid · 2009-12-06T08:08:18.193Z · LW(p) · GW(p)
almost everyone (including Eliezer, Robin, and myself), including all the SIAI people, said they hoped that uploads would be developed before AGI.
If so, I rather hope they keep the original me around too. I think I would prefer the higher-res, post-superintelligence version to the first versions that work well enough to get a functioning human-like being out the other end.
↑ comment by Vladimir_Nesov · 2009-12-05T13:16:30.321Z · LW(p) · GW(p)
I tentatively agree; there may well be a way to FAI that doesn't involve normal humans understanding intelligence, but rather improved humans understanding intelligence, for example carefully modified uploads or genetically engineered/selected smarter humans.
↑ comment by wedrifid · 2009-12-05T03:06:35.796Z · LW(p) · GW(p)
Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.
I rather suspect uploads would arrive at AGI before their more limited human counterparts. Although I suppose uploading only the right people could theoretically increase the chances of FAI coming first.
↑ comment by timtyler · 2009-12-09T23:52:27.607Z · LW(p) · GW(p)
Re: "Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI."
Hmm. Are there many more genes expressed in brains than in wings? IIRC, it's about equal.
↑ comment by whpearson · 2009-12-04T11:34:38.381Z · LW(p) · GW(p)
Okay, let us say you want to make a test for intelligence, just as there was a test for the lift generated by a fixed wing.
As you are testing a computational system, there are two things you can look at: the input-output relation, and the dynamics of the internal system.
Looking purely at the IO relation is not informative: it can be fooled by GLUTs or compressed versions of the same. This is why the Loebner Prize has not led to real AI in general. And making a system that can solve a single problem that we consider to require intelligence (such as chess) just gets you a system that can solve chess and does not generalize.
Contrast this with the wind tunnels that the Wright brothers had: they could test for lift, which they knew would keep them up.
If you want to get into the dynamics of the internals of the system, those dynamics are divorced from our folk idea of intelligence, which is problem solving (unlike the folk theory of flight, which connects nicely with lift from a wing). So what sort of dynamics should we look for?
If the theory of intelligence is correct, the dynamics will have to be found in the human brain. Despite the slowness and difficulty of analysing it, we are generating more data, which we should be able to use to narrow down the dynamics.
How would you go about creating a testable theory of intelligence? Preferably without having to build a many person-year project each time you want to test your theory.
Replies from: timtylercomment by LauraABJ · 2009-12-01T17:07:47.906Z · LW(p) · GW(p)
When you say 'rotating,' what time frame do you have in mind? A month? A year? Are there set sessions, like the summer program, or are they basically whenever someone wants to show up?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-01T20:26:06.383Z · LW(p) · GW(p)
Initial stints can be anywhere from three weeks to three months, depending on the individual's availability, on the projects planned, and on space and other constraints on this end. There are no set sessions through most of the year, but we may try to have a coordinated start time for a number of individuals this summer.
Still, non-summer is better for any applicants whose schedules are flexible; we'll have fewer new folks here, and so individual visiting fellows will get more attention from experienced folks.
comment by alyssavance · 2009-12-01T01:48:26.700Z · LW(p) · GW(p)
"Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours."
I was there during the summer of 2008 and 2009, and I wholeheartedly agree with this.
comment by zero_call · 2009-12-03T23:42:02.727Z · LW(p) · GW(p)
Does SIAI have subscription access to scientific journals?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-03T23:53:28.514Z · LW(p) · GW(p)
Yes
Replies from: zero_call↑ comment by zero_call · 2009-12-04T00:03:16.193Z · LW(p) · GW(p)
Request for elaboration.... Is this at the scale of a university library or is there only access for a few select journals, etc? This stuff is expensive... I would be somewhat impressed if SIAI had full access, comparable to a research university. Also, I would be curious as to what part of your budget must be dedicated just to this information access? (Although I guess I could understand if this information is private.)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-04T03:48:03.363Z · LW(p) · GW(p)
In practice, enough of us retain online library access through our former universities that we can reach the articles we need reasonably easily. Almost everything is online.
If this ceases to be the case, we'll probably buy library privileges through Stanford, San Jose State, or another nearby university.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-12-04T04:02:06.711Z · LW(p) · GW(p)
Do you mean that only those individuals who have UC logins have access to the online journals (JSTOR, etc.)? That would mean that you retain those privileges for only as long as the UC maintains your account. In my experience, that isn't forever.
ETA: I have to correct myself, here. They terminated my e-mail account, but I just discovered that I can still log into some UC servers and access journals through them.
Replies from: anonymcomment by Morendil · 2009-12-01T11:50:21.344Z · LW(p) · GW(p)
Who's Peter Platzer?
Replies from: Roko↑ comment by Roko · 2009-12-01T20:40:19.628Z · LW(p) · GW(p)
He is a donor who is kindly sponsoring a project to create a book on rational thinking about the singularity with 2.5K of his own money.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-01T20:56:18.691Z · LW(p) · GW(p)
As well as a trial conference grants project to make it easier for folks to present AI risks work at conferences (the grants cover conference fees).
He's also offering useful job coaching services for folks interested in reducing existential risk by earning and donating. We increasingly have a useful, though informal, career network for folks interested in earning and donating; anyone who is interested in this should let me know. I believe Frank Adamek will be formalizing it shortly.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2009-12-02T01:28:19.407Z · LW(p) · GW(p)
"We increasingly have a useful, though informal, career network for folks interested in earning and donating"
I wonder if people will soon start applying to this SIAI program just as a means to get into the Silicon Valley job market?
Not that it would be a negative thing (great, instead!), if they are useful while in the program anyway, and perhaps even value the network they receive enough to donate to SIAI simply to keep it going...
Replies from: Jordan, Jordan↑ comment by Jordan · 2009-12-02T04:37:51.089Z · LW(p) · GW(p)
I wonder if people will soon start applying to this SIAI program just as a means to get into the Silicon Valley job market?
This brings up an interesting point in my mind. There are so many smart people surrounding the discussion of existential risk that there must be a better way for them to cohesively raise money than just asking for donations. Starting an 'inner circle' to help people land high-paying jobs is a start, but maybe it could be taken to the next level. What if we actively funded startups, a la Y Combinator, but in a more selective fashion, really picking out the brightest stars?
Replies from: CarlShulman, CarlShulman↑ comment by CarlShulman · 2009-12-02T07:32:13.571Z · LW(p) · GW(p)
You have to work out internal rates of return for both sorts of project, taking into account available data, overconfidence and other biases, etc. If you spend $50,000 on VC investments, what annual return do you expect? 30% return on investment, up there with the greatest VCs around? Then consider research or other projects (like the Singularity Summit) that could mobilize additional brainpower and financial resources to work on the problem. How plausible is it that you can get a return of more than 50% there?
There is a reasonably efficient capital market, but there isn't an efficient charitable market. However, on the entrepreneurship front, check out Rolf Nelson.
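To make the rate-of-return comparison in the comment above concrete, here is a minimal sketch. The 30% and 50% figures and the $50,000 outlay are taken from the comment purely as illustrative annual internal rates of return; the 5- and 10-year horizons are my own arbitrary choices, and real estimates would need the corrections for selection effects and overconfidence mentioned above.

```python
# Illustrative only: compare the compounded value of $50,000 at two
# hypothetical internal rates of return (30% "great VC" vs. the 50%
# direct-project figure from the comment above).

def compound(principal, annual_rate, years):
    """Value of `principal` after `years` of compounding at `annual_rate`."""
    return principal * (1 + annual_rate) ** years

principal = 50_000
for years in (5, 10):
    vc = compound(principal, 0.30, years)
    direct = compound(principal, 0.50, years)
    print(f"{years:2d} years: VC ~${vc:,.0f} vs. direct projects ~${direct:,.0f}")
```

Even a 20-percentage-point difference in assumed rate of return compounds to a large gap over a decade, which is why the comparison of internal rates of return, however uncertain, does the real work in this argument.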
↑ comment by CarlShulman · 2009-12-02T07:17:27.469Z · LW(p) · GW(p)
You have to compare the internal rate of return on VC investment (after selection effects, overconfidence, etc.) versus the internal rate of return on projects launchable now (including projects that are likely to mobilize additional brainpower and resources). With regard to startups, some are following that route.
↑ comment by Jordan · 2009-12-02T01:36:50.216Z · LW(p) · GW(p)
... making the Singularity seem even more like a cult. We'll help you get a good job! Just make sure to tithe 10% of your income.
I'm totally OK with this though.
Replies from: AnnaSalamon, alyssavance, Aleksei_Riikonen↑ comment by AnnaSalamon · 2009-12-02T02:45:00.141Z · LW(p) · GW(p)
Does it help that the "tithe 10% of your income" is to an effect in the world (existential risk reduction) rather than to a specific organization (SIAI)? FHI contributions, or the effective building of new projects, are totally allied.
Replies from: Jordan↑ comment by Jordan · 2009-12-02T04:33:23.178Z · LW(p) · GW(p)
I'm OK with donating to SIAI in particular, even if the single existential risk my funding went towards is preventing runaway AIs. What makes the biggest difference for me is having met some of the people, having read some of their writing, and in general believing that they are substantially more dedicated to solving a problem than to just preserving an organization set up to solve that problem.
↑ comment by alyssavance · 2009-12-02T05:25:10.573Z · LW(p) · GW(p)
The Catholic Church asks that you tithe 10% of your income, and it's not even a quid pro quo.
↑ comment by Aleksei_Riikonen · 2009-12-02T01:57:32.066Z · LW(p) · GW(p)
Yes, there's that.
In these comparisons it's good to remember, though, that all the most respected universities also value their alumni donating to the university.
comment by alyssavance · 2009-12-01T01:59:01.108Z · LW(p) · GW(p)
"Improving the LW wiki, and/or writing good LW posts;"
Does anyone have data on how many people actually use the LW wiki? If few people use it, then we should find out why and solve it; if it cannot be solved, we should avoid wasting further time on it. If many people use it, of course, we should ask for their comments on what could be improved.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-01T18:21:37.158Z · LW(p) · GW(p)
For usage statistics, see http://wiki.lesswrong.com/wiki/Special:PopularPages
comment by FeministX · 2009-12-02T04:49:32.553Z · LW(p) · GW(p)
Hmm. Maybe I should apply...
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-02T19:37:27.678Z · LW(p) · GW(p)
You should apply. I liked my 90 second skim of your blog just now, and also, everyone who thinks they should maybe apply, should apply.
comment by Daniel_Burfoot · 2009-12-02T15:41:27.963Z · LW(p) · GW(p)
What kind of leeway are the fellows given in pursuing their own projects? I have an AI project I am planning to work on after I finish my PhD; it would be fun to do it at SIAI, as opposed to my father's basement.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-02T19:41:11.363Z · LW(p) · GW(p)
Less leeway than that. We want fellows, and others, to do whatever most effectively reduces existential risk... and this is unlikely to be a pre-existing AI project that someone is attached to.
Although we do try to work with individual talents and enthusiasms, and we use group brainstorming processes to create many of the projects we work on.
comment by Paul Crowley (ciphergoth) · 2009-12-02T09:02:11.839Z · LW(p) · GW(p)
So just to make sure I understand correctly: successful applicants will spend a month with the SIAI in the Bay Area. Board and airfare are paid but no salary can be offered.
I may not be the sort of person you're looking for, but taking a month off work with no salary would be difficult for me to manage. No criticism of the SIAI intended, who are trying to achieve the best outcomes with limited funds.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-02T09:09:54.238Z · LW(p) · GW(p)
That's right. Successful applicants will spend three weeks to three months working with us here, with living and transit expenses paid but with no salary. Some who prove useful will be invited to stay long-term, at which point stipends can be managed; visiting fellow stints are for exploring possibilities, building relationships, and getting some risk reducing projects done.
If it makes you feel any better, think of it as getting the most existential risk reduction we can get for humanity, and as much bang as possible per donor buck.
Apart from questions of salary, are you the sort we're looking for, ciphergoth?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-12-03T09:09:28.065Z · LW(p) · GW(p)
Possibly - I've published papers, organised events, my grasp of mathematics and philosophy is pretty good for an amateur, and I work as a programmer. But unfortunately I have a mortgage to pay :-( Again, no criticism of SIAI, who as you say must get the most bang per buck.
comment by Matt_Simpson · 2009-12-02T06:00:01.016Z · LW(p) · GW(p)
How long will this opportunity be available? I'm very interested, but I probably won't have a large enough block of free time for a year and a half.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-02T07:33:11.759Z · LW(p) · GW(p)
We will probably be doing this at that time as well.
Still, we're getting high rates of return lately on money and on time, which suggests that if existential risk reduction is your aim, sooner is better than later. What are your aims and your current plans?
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2009-12-02T16:27:53.719Z · LW(p) · GW(p)
At the moment I'm beginning my Ph.D. in statistics at Iowa State, so the school year is pretty much filled with classes - at least until I reach the dissertation stage. That leaves summers. I'm not completely sure what I'll be doing this summer, but I'm about 90% sure I'll be taking a summer class to brush up on some math so I'm ready for measure theory in the fall. If the timing of that class is bad, I may not have more than a contiguous week free over the summer. Next summer I expect to have more disposable time.
comment by Johnicholas · 2009-12-01T20:31:18.713Z · LW(p) · GW(p)
Logistics question: Is the cost to SIAI approximately 1k per month? (aside from the limited number of slots, which is harder to quantify)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-01T20:56:40.948Z · LW(p) · GW(p)
Yep. Living expenses are moderately lower than $1k per visiting fellow per month, but given international airfare and such, $1k per person per month is a good estimate.
Why do you ask?
Replies from: SilasBarta, Johnicholas↑ comment by SilasBarta · 2009-12-01T23:35:01.203Z · LW(p) · GW(p)
Yeah, similar question: if someone is an "edge case", would it ever make a difference that they could cover part of the costs (normally covered by SIAI) with their own funds?
Oh, also, how many people would avoid going just because I would be there at the time?
(Will send you a prospective email to see if I might be what you're looking for.)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-12-02T01:27:12.559Z · LW(p) · GW(p)
If someone is an "edge case", would it ever make a difference that they could cover part of the costs (normally covered by SIAI) with their own funds?
Yes, it might, although covering one's own costs is no guarantee (since space and our attention are limited), and unusually large expenses (e.g., airfare from Australia, or being really attached to having one's partner with one) are not a reason not to apply.
If someone's serious about reducing risks and wishes to cover their own expenses, though, coming for a short visit (1 to 7 days, say) is almost certainly an option. We have a guest room and can help people hash out conceptual confusions (you don't need to agree with us; just to care about evidence and accuracy) and think through strategy for careers and for reducing existential risks.
↑ comment by Johnicholas · 2009-12-01T21:35:39.379Z · LW(p) · GW(p)
Because I could possibly be self-funding.
comment by alyssavance · 2009-12-01T01:51:33.446Z · LW(p) · GW(p)
Minor editing thing: theuncertainfuture.com links to http://lesswrong.com/theuncertainfuture.com, not http://theuncertainfuture.com/.