[LINK] John Baez Interview with astrophysicist Gregory Benford

post by multifoliaterose · 2011-03-02T09:53:26.298Z · LW · GW · Legacy · 18 comments


The content of John Baez's This Week's Finds: Week 310 (an interview with astrophysicist Gregory Benford) includes, among other things, a reference to a paper by David Wolpert and Gregory Benford on Newcomb's paradox.

Note: The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.

 

18 comments


comment by XiXiDu · 2011-03-02T10:20:14.008Z · LW(p) · GW(p)

A reference to a paper by David Wolpert and Gregory Benford on Newcomb's paradox

Isn't the whole issue with Newcomb's paradox the fact that if you take two boxes Omega will have predicted it, and if you take one box Omega will have predicted that too? It doesn't matter if both boxes are transparent: you'll only take one box if you did indeed precommit to taking only one (or if you're the kind of person who one-boxes 'naturally'). Ever since I first read about it I've been puzzled by why people think there is a paradox or that the problem is difficult. Maybe I just don't get it.

Replies from: John_Baez, Kevin
comment by John_Baez · 2011-03-03T01:18:42.641Z · LW(p) · GW(p)

In my interview of Gregory Benford I wrote:

If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!

If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!

It sounds like you find the second argument so unconvincing that you don't see why people consider it a paradox.

For what it's worth, I'd take only one box.

Replies from: XiXiDu
comment by XiXiDu · 2011-03-03T09:56:32.511Z · LW(p) · GW(p)

It sounds like you find the second argument so unconvincing that you don't see why people consider it a paradox.

It doesn't make sense given the rules. The rules say that there will only be a million in box B iff you take only box B. I'm not the kind of person who calls the police when faced with the trolley problem thought experiment. Besides that, the laws of physics obviously do not permit you to deliberately take both boxes if a nearly perfect predictor knows that you'll take only box B. Therefore considering that counterfactual makes no sense (even less so with a perfect rather than a nearly perfect predictor).

comment by Kevin · 2011-03-02T14:29:08.993Z · LW(p) · GW(p)

It mostly seems to be confusion about the impossibility of a perfect predictor. On LW we accept the concept of a philosophical Superintelligence, but many mainstream philosophers disavow the notion of a perfect predictor, even when that is specified very clearly.

Steve+Anna at SIAI did a pretty thorough dissolution of Newcomb's problem with variable accuracy for Omega as part of the problem definition.
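For readers who want to see how the variable-accuracy framing plays out numerically, here is a minimal sketch of the standard expected-value comparison (this is not the Steve+Anna write-up itself; it just assumes Omega predicts your choice correctly with probability p, with the usual $1,000 in box A and $1,000,000 placed in box B iff Omega predicts one-boxing):

```python
# Minimal sketch: expected value of one-boxing vs. two-boxing when Omega
# predicts your choice correctly with probability p. Assumed payoffs:
# $1,000 always in box A, $1,000,000 in box B iff Omega predicted that
# you would take only box B.

def expected_values(p):
    ev_one_box = p * 1_000_000                               # box B is full iff Omega guessed right
    ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)   # box B is full only if Omega guessed wrong
    return ev_one_box, ev_two_box

for p in (0.5, 0.6, 0.9, 0.99, 1.0):
    one, two = expected_values(p)
    better = "one-box" if one > two else "two-box"
    print(f"p = {p:4.2f}   one-box: ${one:>12,.2f}   two-box: ${two:>12,.2f}   -> {better}")

# Setting the two expressions equal gives the crossover: one-boxing has the
# higher expected value whenever p > 1001/2000 = 0.5005, i.e. for any
# predictor even slightly better than chance.
```

The two arguments quoted in the interview correspond roughly to the two extremes: a (nearly) perfect predictor, and a prediction that is independent of your actual choice.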

comment by XiXiDu · 2011-03-02T10:08:40.582Z · LW(p) · GW(p)

The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.

I've been waiting for this for so long. I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than with risks from AI. So far there is literally no substantive third-party critique.

Replies from: John_Baez, timtyler
comment by John_Baez · 2011-03-03T00:30:34.328Z · LW(p) · GW(p)

XiXiDu wrote:

I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than risks from AI.

Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.

(Week 311 is just the first part of a multi-part interview.)

For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.

Tim Tyler wrote:

It looks like a conventional "confused environmentalist" prioritisation to me.

I'm probably confused (who isn't?), but I doubt I'm conventional. If I were, I probably wouldn't be so eager to solicit the views of Benford, Yudkowsky and Drexler on my blog. A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues. I'd like to make a dent in that problem.

The list you cite is not the explanation that XiXiDu seeks.

Replies from: XiXiDu
comment by XiXiDu · 2011-03-03T09:39:04.961Z · LW(p) · GW(p)

Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his.

Would you be willing to write a blog post reviewing his arguments and explaining why you either reject them, don't understand them, or accept them and are starting to work on mitigating risks from AI? It would be valuable to have someone like you, who is not deeply involved with the SIAI (Singularity Institute) or LessWrong.com, write a critique of their arguments and objectives. I myself don't have the education (yet) to do so and would welcome any reassurance that would help me take action.

If you don't have the time to write a blog post, maybe you can answer just the following question: if someone were going to donate $100k and you could pick the charity, would you choose the SIAI? A yes/no answer if you're too busy, a short explanation if you have the time. Thank you!

For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.

You mean, "before we take on the galaxy, let’s do a smaller problem"? So you don't think that we'll have to face risks from AI before climate change takes a larger toll? You don't think that working on AGI means working on the best possible solution to the problem of climate change? And even if we had to start taking active measures against climate change in the 2020s, you don't think we should rather spend that time on AI because we can survive a warmer world but no runaway AI? Gregory Benford writes that "we still barely glimpse the horrors we could be visiting on our children and their grandchildren’s grandchildren". That sounds to me like he assumes that there will be grandchildren, which might not be the case if some kind of AGI doesn't take care of a lot of other problems we'll have to face soon.

A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues.

If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

Replies from: John_Baez
comment by John_Baez · 2011-03-04T04:53:25.707Z · LW(p) · GW(p)

Since XiXiDu also asked this question on my blog, I answered over there.

If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2011-03-04T09:51:29.957Z · LW(p) · GW(p)

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980).

That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me that means I can only hope to be much smarter than you (so I can understand the evidence myself) or conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?

Replies from: endoself
comment by endoself · 2011-03-10T01:27:38.424Z · LW(p) · GW(p)

Understanding of the singularity is not a monotonically increasing function of intelligence.

comment by XiXiDu · 2011-03-04T11:01:31.187Z · LW(p) · GW(p)

I should also state how I would answer my own question. My answer would be no. The SIAI deserves funding, but since it currently receives about $500,000 per year I would not recommend that someone donate another $100,000 right now. I think there are valid arguments that justify the existence of such an organisation, but there are no reasons to expect that it currently needs more money. The SIAI publishes no progress report and does not disclose how it uses the money it gets. There are various other issues that lead me to conclude that the SIAI does not currently deserve more donations. That is not to say that the problem of risks from AI doesn't deserve more funding, only that it should be funded differently. My current uncertainty about how urgent and substantive the risks are also contributes to my judgement that the SIAI is well-funded at this time.

I'm asking people like you to assess how likely it is that I am wrong in my judgement, and whether I should make it a priority to seek more information right now or concentrate on other projects.

Replies from: gwern, timtyler, timtyler
comment by gwern · 2013-09-03T21:31:55.626Z · LW(p) · GW(p)

does not disclose how it uses the money it gets.

Just a minor correction: this cannot be a true statement to make about an American 501(c)3 charity, because it would be illegal for them not to disclose what they're spending money on in their Form 990. Hence it's easy to examine SIAI/MIRI, Girl Scouts, Edge Foundation, Lifeboat Foundation, JSTOR, ALCOR... Really, all the information is there for anyone who wants to know it; you can download it for free. One just has to not be lazy and not assume that it doesn't exist.

comment by timtyler · 2011-03-05T13:14:01.200Z · LW(p) · GW(p)

The SIAI publishes no progress report

They do do various things like that from time to time - e.g. http://singinst.org/achievements

and does not disclose how it uses the money it gets.

Up to 2008, almost half of it went into paying their own salaries, IIRC.

The SIAI accounts are on Guidestar. You have to register, though.

In 2009, they received $432,139 in "gifts", made $194,686 by putting on a conference - and paid Yudkowsky $95,550 and Vassar $52,083. Yudkowsky probably also got a fair bit of the $83,934 spent on project 4c. Another $400,000 was spent on the things described at the end of the document. All figures are in USD.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-03T06:16:21.130Z · LW(p) · GW(p)

"Probably?" According to what priors? Do not make stuff up. As of 2013, MIRI has never paid anyone more than $99K in one year, and IIRC the $95K shown there was due to an error by the payroll service we were using which accidentally shifted one month of my salary backward by one year (paid on Dec 31 instead of Jan 1).

Replies from: lukeprog
comment by lukeprog · 2013-09-03T20:59:48.727Z · LW(p) · GW(p)

I confirm the payroll error part; I remember speaking to Amy about it a couple of times, though it happened shortly before my time. I also suspect MIRI has never paid anyone more than $99k in one year, but I haven't looked it up.

comment by timtyler · 2011-03-05T13:14:49.758Z · LW(p) · GW(p)

The SIAI publishes no progress report

They do do various things like that from time to time - e.g. http://singinst.org/achievements

comment by timtyler · 2011-03-02T14:05:22.927Z · LW(p) · GW(p)

The list is here:

  • Global warming - human caused climate change.

  • Extinction - mass die-offs caused by global warming and habitat changes.

  • Deforestation - loss of primary and secondary forests.

  • Ocean acidification - rise in ocean acidity due to rising CO2.

  • Dead zones - large areas of the ocean that can’t support life.

  • Water crisis - drawdowns in aquifers and freshwater supplies.

  • Peak oil - the decline in the availability of oil as an energy source.

It looks like a conventional "confused environmentalist" prioritisation to me.

I would counsel caution before attempting to drag conventional environmentalists into the fray.

They usually just want to slam on the brakes - and the historical effects of that look mostly negative to me; take the case of GM crops, for example.

Yudkowsky even advocated a "stealth" approach once:

if we can make it all the way to Singularity without it ever becoming a "public policy" issue, I think maybe we should.

Replies from: John_Baez
comment by John_Baez · 2011-03-02T16:09:32.462Z · LW(p) · GW(p)

I like GM crops.