New Q&A by Nick Bostrom
post by Stuart_Armstrong · 2011-11-15T11:32:40.630Z · LW · GW · Legacy · 22 comments
Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 10 October 2011).
http://www.youtube.com/watch?v=KQeijCRJSog
22 comments
Comments sorted by top scores.
comment by daenerys · 2011-11-15T20:55:14.468Z · LW(p) · GW(p)
Transcribing.
Replies from: lukeprog, Grognor, Vladimir_Nesov, None
↑ comment by lukeprog · 2011-11-15T21:17:02.543Z · LW(p) · GW(p)
Will more people please vote this up? I think we should be strongly reinforcing this kind of behavior.
Replies from: daenerys
↑ comment by daenerys · 2011-11-17T02:26:27.030Z · LW(p) · GW(p)
Thanks for the karma, everyone. :)
I'm just about done, and I expect it to be up tonight or tomorrow (depending on if I'm going to work on it more tonight, or watch some Terra Nova and Glee instead, lol).
It's pretty big, so I'll be posting it as a new discussion post, rather than as a comment under this one.
Replies from: lukeprog, None
↑ comment by lukeprog · 2011-11-17T03:46:36.157Z · LW(p) · GW(p)
Oh good. Another chance to give you more karma for this.
Replies from: None
↑ comment by [deleted] · 2011-11-17T07:03:15.454Z · LW(p) · GW(p)
I'm struck by the irony of a human manually transcribing a talk that focuses extensively on the dangers of AI.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2011-11-17T10:17:31.773Z · LW(p) · GW(p)
Not ironic. More... appropriate.
↑ comment by Vladimir_Nesov · 2011-11-18T21:18:54.727Z · LW(p) · GW(p)
(For reference: the transcription of the video is posted here.)
comment by lukeprog · 2011-11-15T13:34:04.092Z · LW(p) · GW(p)
Pleased to see that when asked about the relationship of FHI and SIAI, Nick gives the same answer I did.
Replies from: Gedusa, CarlShulman
↑ comment by Gedusa · 2011-11-16T11:09:11.884Z · LW(p) · GW(p)
I was the one who asked that question!
I was slightly disappointed by his answer - surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.
I guess what I'm really thinking is that it's pretty unlikely that the two charities are equally optimal.
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-11-16T12:10:10.301Z · LW(p) · GW(p)
I was slightly disappointed by his answer - surely there can only be one optimal charity to give to?
It seems that argument applies primarily to well-defined goals. Do you necessarily have to view the SI and FHI as two charities? The SI is currently pursuing a wide range of sub-goals, e.g. rationality camps, while I perceive the FHI to be mainly about researching existential risks in general. Clearly you should do your own research, decide which x-risk is the most urgent, and then support its mitigation. Yet you should also reassess your decision from time to time. And here I think it might be justified to contribute part of your money to the FHI: by doing so you can externalize the review of existential risks. You concentrate most of your effort on the risk that the FHI deems most urgent, until it revises its opinion.
In other words, view the SI and FHI as one charity with different departments and your ability to contribute separately as a way to weight different sub-goals aimed at the same overall big problem, saving humanity.
Replies from: None, wedrifid
↑ comment by wedrifid · 2011-11-16T15:34:23.918Z · LW(p) · GW(p)
In other words, view the SI and FHI as one charity with different departments and your ability to contribute separately as a way to weight different sub-goals aimed at the same overall big problem, saving humanity.
Four!
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-11-16T17:17:03.422Z · LW(p) · GW(p)
"I'd rather live with a good question than a bad answer." -- Aryeh Frimer
I am not sure how to interpret your comment:
- I gave a bad answer to a good question.
- You'd rather support the FHI exclusively as they are asking the right questions, whereas the SI might give a bad answer.
I'll comment on the first interpretation, which I deem most likely.
To fix a complex problem you have to solve many other problems at the same time, some directly part of the bigger problem and some necessitated by other needs along the way.
That the Singularity Institute might be best equipped to solve the friendly AI problem does not mean that they are the best choice to research general questions about existential risks. That risks from AI are the most urgent existential risk does not mean that it would be wise to abandon existential risk research until friendly AI is solved.
By contributing to the Singularity Institute you are supporting various activities that you might not equally value. If you thought that they knew better than you how to distribute your money among those activities, you wouldn't mind. But that they are good at doing one thing does not mean that they are good at doing another.
Now you might argue that even less of your money would be spent on the activity you value most if you distributed it among different charities. But that's not relevant here. Existential risk research is something you have to do anyway, something you have to invest a certain amount of resources into while pursuing your main objective, just like eating and drinking. If the Singularity Institute isn't doing that for you, then you have to do it yourself, or, in the case of existential risk research, pay others who are better at it to do it for you.
Replies from: endoself, wedrifid
↑ comment by endoself · 2011-11-19T01:46:36.856Z · LW(p) · GW(p)
The second quote mentions the number four; wedrifid was referring to that, not to quote number four.
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-11-19T10:23:49.389Z · LW(p) · GW(p)
The second quote mentions the number four; wedrifid was referring to that, not to quote number four.
Aha! I didn't even read the other quotes and just went straight to quote number four.
I don't think that suggesting new definitions for words is problematic if it helps. In the case of calling a tail a leg it would deprive the word leg of most of its meaning. But the case of calling two charities departments of a single charity highlights a problem with Steven Landsburg's advice for charitable giving:
So why is charity different? Here's the reason: An investment in Microsoft can make a serious dent in the problem of adding some high-tech stocks to your portfolio; now it's time to move on to other investment goals. Two hours on the golf course makes a serious dent in the problem of getting some exercise; maybe it's time to see what else in life is worthy of attention. But no matter how much you give to CARE, you will never make a serious dent in the problem of starving children. The problem is just too big; behind every starving child is another equally deserving child.
This disregards the fact that problems like cancer, heart disease or hunger consist of a huge amount of sub-problems, many of which need to be tackled at the same time to make the main objective technically feasible.
What if you were able to assign weight to the various problems that need to be solved in order to reach the charity's overall goal? You would do so if you didn't believe that the charity itself was efficiently distributing its money among its various sub-goals.
Take, for example, the case of the Singularity Institute. If people could weight the SI's various projects by specifying how their money should be used, some of them wouldn't support the idea of rationality camps.
And here it is useful to view the SI and FHI as two departments of the same charity. They both pursue goals that either support each other or that need to be solved at the same time.
If you were to follow Landsburg's argument, then, being interested in defeating hunger, you might just contribute to a project that researches certain genetic modifications of useful plants. Or why not contribute to a company that tries to engineer better DNA sequencers?
My point is that the concept of a charity is an artificially created black box with the label "No User Serviceable Parts Inside", and Landsburg's argument makes it sound like we should draw a line at that boundary and not try to give even more efficiently. I don't see why; I am saying that in certain cases you can just as well view one charity as many, and two charities as one.
↑ comment by CarlShulman · 2011-11-15T19:59:53.196Z · LW(p) · GW(p)
What's the time for that in the video?
Replies from: lukeprog
↑ comment by lukeprog · 2011-11-15T20:47:34.938Z · LW(p) · GW(p)
2m38s. My rapidly typed transcript:
QUESTIONER:
There are two organizations, FHI and SIAI, working on this. Let's say I thought this was the most important problem in the world, and I should be donating money to this...
NICK:
It's good. We've come to the chase!
I think there is a sense that both organizations are synergistic. If one were about to go under... that would probably be the one [to donate to]. If both were doing well, it's... different people will have different opinions. We work quite closely with the folks from SIAI...
There is an advantage to having one academic platform and one outside academia. There are different things these types of organizations give us. If you wanna get academics to pay more attention to this, to get postdocs to work on this, that's much easier to do within academia; also to get the ear of policy-makers and media...
On the other hand, for SIAI there might be things that are easier for them to do. More flexibility, they're not embedded in a big bureaucracy. So they can more easily hire people with non-standard backgrounds... and also more grass-roots stuff like Less Wrong...
So yeah. I'll give the non-answer answer to that question.