Comments

Comment by ghf on A Scholarly AI Risk Wiki · 2012-05-27T23:36:47.247Z · LW · GW

I definitely agree.

For (3), now is the time to get this moving. Right now, machine ethics (especially regarding military robotics) and medical ethics (especially in terms of bio-engineering) are hot topics. Connecting AI Risk to either of these trends would allow you to extend and, hopefully, bud it off as a separate focus.

Unfortunately, academics are pack animals, so if you want to communicate with them, you can't just stake out your own territory and expect them to do the work of coming to you. You have to pick some existing field as a starting point. Then, knowing the assumptions of that field, you point out the differences in what you're proposing and slowly push out and extend towards what you want to talk about (the pseudopod approach). This fits well with (1) since choosing what journals you're aiming at will determine the field of researchers you'll be able to recruit from.

One note: if you hold a separate conference, you are dependent on whatever academic credibility SIAI brings to the table (none, at present; besides, you already have the Singularity Summit to work with). But if you are able to get a track started at an existing conference, suddenly you can define this as the spot where the cool researchers are hanging out. Convince DARPA to put a little money towards this and suddenly you have yourselves a research area. The DOD already directs funds toward things like risk analyses of climate change and other 30-100 year forward threats, so it's not even a stretch.

Comment by ghf on My Algorithm for Beating Procrastination · 2012-05-27T17:43:38.607Z · LW · GW

I try to avoid pure cheerleading comments, but this post was extremely helpful. Thank you!

Comment by ghf on Funding Good Research · 2012-05-27T17:36:53.191Z · LW · GW

I'm curious as to why you chose to target this paper at academic philosophers. Decision theory isn't my focus, but it seems that while the other groups of researchers in this area (mathematicians, computer scientists, economists, etc.) talk to one another (at least a little), the philosophers are mostly isolated. The generation of philosophers trained while philosophy was still the center of research in logic and the foundations of mathematics is rapidly dying off and, with it, the remaining credibility of such work in philosophy.

Of course, philosophers are the only group that pays any attention to things like Newcomb's problem, so if you were writing for another group, you'd probably have to devote one paper to justifying the importance of the problem. Also, given some of the discussions on here, perhaps the goal is precisely to write this in an area isolated from actual implementation to avoid the risk of misuse (I can't find the link, but I recall seeing several comment threads discussing the risks of publishing this at all).

Comment by ghf on What useful skills can be learned in three months? · 2012-05-26T07:15:43.118Z · LW · GW

If your goal is to found an IT startup, I'd recommend learning basic web development. I formerly used Rails and, at the time I picked it up, the learning curve was about a month (just pick a highly rated book and work through it). If not web, consider app development. If you know a bit of Java, Android would probably be the way to go. With either of these, you'll have a skill that allows you to single-handedly create a product.
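As a rough illustration of how little it takes to get from idea to working product, here is a minimal sketch of a single-file web app (in Python rather than Rails, purely for brevity; the handler class, port, and idea list are all made up for the example). It uses only the standard library, so it runs as-is.

```python
# Minimal sketch: a single-file web app that serves a list of startup ideas.
# Everything here (class name, port, idea list) is illustrative, not from the
# original comment; it only shows how small a first "product" can be.
from http.server import BaseHTTPRequestHandler, HTTPServer

class IdeaListHandler(BaseHTTPRequestHandler):
    IDEAS = ["habit tracker", "course-notes search", "campus ride board"]

    def do_GET(self):
        # Respond to every GET with the idea list as plain text.
        body = "\n".join(self.IDEAS).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Visit http://localhost:8000/ once the server is running.
    HTTPServer(("localhost", 8000), IdeaListHandler).serve_forever()
```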

At the same time, start keeping a list of ideas you have for startups. Some will be big, others small. But start looking for opportunities. Particularly focus on those that fit with the skills you're learning (web or app).

Potentially, that leaves you two months to start your first startup. Doesn't have to be great. Doesn't even have to be good. But knowing that you can take something from idea to product is extremely powerful. Because now, as you're learning, when you see an opportunity, you'll know how to take it.

More than that, it will allow you to fit your studies into your ideas. In your algorithms class, you'll see techniques and realize how they could solve problems you've had with your existing ideas, or spark entirely new ones. And if you don't walk out of your first AI class with a long list of new possibilities, something went seriously wrong :). But everything you're learning will have a context, which will be extremely powerful.

All this time, keep creating. Any good entrepreneur goes through a training process of learning how to see opportunities and take them. You have four years of access to excellent technical resources, free labor (your peers), and no cost to failure (and learning how to handle failures will be another step in your growth). If you go in with an ability to create (even a very basic one), you will not only be able to make use of those opportunities, you'll get far more out of the process than you otherwise would.

[also: I'd like to second the recommendations to establish an exercise habit]

Comment by ghf on What useful skills can be learned in three months? · 2012-05-25T19:07:52.084Z · LW · GW

A little more information (if you have it) would help with some of this. Computer Science is a huge field, so getting a sense of what you're interested in, why you're doing it, and what background you already have would probably help with recommendations.

Comment by ghf on Group rationality diary, 5/21/12 · 2012-05-25T07:45:15.252Z · LW · GW

Rather than thinking of it as spending 30 minutes a day on rationality when you should be doing other things, it might be more accurate to think of it as 30 minutes a day spent optimizing the other 23.5 hours. At least in my experience, taking that time yields far greater total productivity than when I claim to be too busy.

Comment by ghf on How do you find good scholarly criticism of a book? · 2012-05-25T00:06:43.229Z · LW · GW

And, if the research is fundamentally new, you may have to wait another few years (at least) before the good scholarly criticism comes out.

Comment by ghf on Open Thread, May 16-31, 2012 · 2012-05-21T19:19:38.524Z · LW · GW

It works for me, but only after changing my preferences to view articles with lower scores (my cutoff had been set at -2).

Comment by ghf on A thought about Internet procrastination · 2012-05-17T09:34:18.479Z · LW · GW

Some are specifically designed to prompt addiction. Zynga, for example, has put a lot of work into optimizing the rate of rewards for just this effect.

Comment by ghf on How can we ensure that a Friendly AI team will be sane enough? · 2012-05-17T09:21:03.941Z · LW · GW

Strongly seconded. While getting good people is essential (the original point about rationality standards), checks and balances are a critical element of a project like this.

The level of checks needed probably depends on the scope of the project. For the feasibility analysis, perhaps you don't need anything more than splitting your research group into two teams, one assigned to prove, the other to disprove, the feasibility of a given design (possibly switching roles at some point in the process).

Comment by ghf on How can we ensure that a Friendly AI team will be sane enough? · 2012-05-17T09:03:58.166Z · LW · GW

Good point. And, depending on your assessment of the risks involved, especially for AGI research, the level of the lapses might be more important than the peak or even the average. A researcher who is perfectly rational (hand-waving for the moment about how we measure that) 99% of the time but has, say, fits of rage every so often might be even more dangerous than a colleague who is slightly less rational on average but nonetheless stable.

Comment by ghf on Thoughts on the Singularity Institute (SI) · 2012-05-15T01:36:05.410Z · LW · GW

I think some of it comes down to the range of arguments offered. For example, had it been posted alone, I would not have found Objection 2 particularly compelling, but I was impressed by many other points, and in particular by the discussion of organizational capacity. I'm sure there are others for whom those evaluations were completely reversed. Nonetheless, we all voted it up. Many of us who did so likely agree with one another less than we do with SIAI, but that has only shown up here and there on this thread.

Critically, it was all presented not in the context of an inside argument, but in the context of "is SI an effective organization in terms of its stated goals?" The question posed to each of us was: do you believe in SI's mission and, if so, do you think that donating to SI is an effective way to achieve that goal? It is a wonderful instantiation of the standard test of belief, "how much are you willing to bet on it?"

Comment by ghf on Thoughts on the Singularity Institute (SI) · 2012-05-13T23:33:25.412Z · LW · GW

The different emphasis comes down to your comment that:

> ...they support SI despite not agreeing with SI's specific arguments. Perhaps you should, too...

In my opinion, I can more effectively support those activities that I think are effective by not supporting SI. Waiting until the Center for Applied Rationality gets its tax-exempt status in place allows me to both target my donations and directly signal where I think SI has been most effective up to this point.

If they end up having short-term cashflow issues prior to that split, my first response would be to register for the next Singularity Summit a bit early since that's another piece that I wish to directly support.

Comment by ghf on Finding Research Papers · 2012-05-13T21:12:59.814Z · LW · GW

In addition to going directly to articles, consider dropping an email or two to researchers working on those topics (perhaps once you've found an interesting article of theirs). Many are very willing to provide their overview of the area and point you to interesting resources. While there are times when you won't get a response (for example, before conference season or at the end of the semester), most are genuinely pleased to be contacted by people interested in the topics they care about.

Comment by ghf on Thoughts on the Singularity Institute (SI) · 2012-05-13T20:12:00.048Z · LW · GW

> The primary reason I think SI should be supported is that I like what the organization actually does, and wish it to continue. The Less Wrong Sequences, Singularity Summit, rationality training camps, and even HPMoR and Less Wrong itself are all worth paying some amount of money for.

I think that my own approach is similar, but with a different emphasis. I like some of what they've done, so my question is how to encourage those pieces. This article was very helpful in prompting some thought about how to handle that. I generally break down their work into three categories:

  1. Rationality (minicamps, training, LW, HPMoR): Here I think they've done some very good work. Luckily, the new spinoff will allow me to support these pieces directly.

  2. Existential risk awareness (Singularity Summit, risk analysis articles): Here their record has been mixed. I think the Singularity Summit has been successful, other efforts less so but seemingly improving. I can support the Singularity Summit by continuing to attend and potentially donating directly if necessary (since it's been running positive in recent years, for the moment this does not seem necessary).

  3. Original research (FAI, timeless decision theory): This is the area where I do not find them to be at all effective. From what I've read, there seems to be a large disconnect between ambitions and capabilities. Given that I can now support the other pieces separately, this is why I would not donate generally to SIAI.

My overall view would be that, at present, there is no real organization to support. Rather, there is a collection of talented people, and what I'm supporting is their freedom to work on interesting things. Given that, I want to support those people where I think they are effective.

I find Eliezer in particular to be one of the best pop-science writers around (and I most assuredly do not mean that term as an insult). Things like the Sequences or HPMoR are thought-provoking and worth supporting. I find the general work on rationality to be critically important and timely.

So, while I agree that much of the work being done is valuable, my conclusion has been to consider how to support that directly rather than SI in general.

Comment by ghf on Thoughts on the Singularity Institute (SI) · 2012-05-11T23:15:03.978Z · LW · GW

First, let me say that, after re-reading, I think that my previous post came off as condescending/confrontational which was not my intent. I apologize.

Second, after thinking about this for a few minutes, I realized that some of the reason your papers seem so fluffy to me is that they argue what I consider to be obvious points. In my mind, of course we are likely "to develop human-level AI before 2100." Because of that, I may have tended to classify your work as outreach more than research.

But outreach is valuable. And, setting aside the question of the independent contribution of your research, having people associated with SIAI who have the publications and credibility to be treated as experts has gigantic benefits in terms of media multipliers (being the people who get called on for interviews, panels, etc.). So, given that, I can see a strong argument for publication support being valuable to the overall organizational goals, regardless of any assessment of the value of the research.

Note that this isn't uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.

My only point was that, in those situations, researchers are usually brought in with prior recognized achievements (or, unfortunately all too often, simply paper credentials). SIAI is bringing in people who are intelligent but unproven and giving them the resources reserved for top talent in academia or industry. As you've pointed out, one of the differences with SIAI is the lack of hoops to jump through.

Edit: I see you commented below that you view your own work as summarization of existing research and we agree on the value of that. Sorry that my slow typing speed left me behind the flow of the thread.

Comment by ghf on Thoughts on the Singularity Institute (SI) · 2012-05-11T22:38:10.640Z · LW · GW

My hope is that the upcoming deluge of publications will answer this objection, but for the moment, I am unclear as to the justification for the level of resources being given to SIAI researchers.

> Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.

This level of freedom is the dream of every researcher on the planet. Yet it's unclear why these resources should be devoted to your projects. While I strongly believe that the current academic system is broken, you are asking for a level of support granted to top researchers before having made any original breakthroughs yourself.

If you can convince people to give you that money, wonderful. But until you have made at least some serious advancement to demonstrate your case, donating seems like an act of faith.

It's impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system and I will be delighted to see that development bear fruit over the coming years. But, at present, I don't see evidence that the work being done justifies or requires that support.

Comment by ghf on Thoughts on the Singularity Institute (SI) · 2012-05-11T22:06:54.752Z · LW · GW

> And note that these improvements would not and could not have happened without more funding than the level of previous years

Given the several-year lag between funding increases and the listed improvements, it appears that this was less the result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).

Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?

Of course, the related question is: what is SIAI's mission? Someone donating primarily for AGI research might not count recent efforts (LW, rationality camps, etc.) as improvements.

What should a potential donor expect from money invested into this organization going forward? Internally, what are your metrics for evaluation?

Edited to add: I think that the spin-off of the rationality efforts is a good step towards answering these questions.