AI Impacts project

post by KatjaGrace · 2015-02-02T19:40:14.612Z · 17 comments


I've been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project.

The basic idea is to gather the evidence and arguments that are kicking around in the world's various disconnected discussions and apply them to the big questions about a future with AI. For instance:

  • What should we believe about timelines for AI development?
  • How rapid is AI development likely to be near human-level?
  • How much advance notice should we expect to have of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?
For example, we have recently investigated technology's general proclivity for abrupt progress, surveyed existing AI surveys, and examined the evidence from chess and other applications regarding how much smarter Einstein is than an intellectually disabled person, among other things.

Some more on our motives and strategy, from our about page:

The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant.

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI.

The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

In the medium run we'd like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of the questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog.

If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.

Crossposted from my blog.

17 comments


comment by is4junk · 2015-02-02T21:34:46.107Z

The first link, AI Impacts, is broken.

Replies from: KatjaGrace
comment by KatjaGrace · 2015-02-03T01:09:38.368Z

Thanks, fixed.

comment by owencb · 2015-02-04T13:22:20.113Z

Thanks, I think there is a lot of instrumental value in collecting high-quality answers to these questions. I look forward to reading more, and to pointing people to this site.

comment by is4junk · 2015-02-03T01:41:55.828Z

Looking at the very bottom of the AI Impacts home page, the disclaimer looks rather unfriendly.

I'd suggest petitioning to change it to the LessWrong variety.

Here is the text: "To the extent possible under law, the person who associated CC0 with AI Impacts has waived all copyright and related or neighboring rights to AI Impacts. This work is published from: United States."

Replies from: KatjaGrace, None
comment by KatjaGrace · 2015-02-03T17:47:41.817Z

What do you mean by 'unfriendly'?

Replies from: is4junk
comment by is4junk · 2015-02-03T18:41:20.547Z

"If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form."

My comment is intended as helpful feedback. If it is not helpful I'd be happy to delete it.

Replies from: RyanCarey
comment by RyanCarey · 2015-02-03T23:38:45.433Z

Your original feedback seems helpful, but your follow-up doesn't. You could have said "I don't know" or "I have nothing further to add on that point".

Replies from: is4junk
comment by is4junk · 2015-02-04T00:15:15.151Z

I mean 'unfriendly' in the ordinary sense of the word. Maybe 'uninviting' would be as good.

Perhaps a careful reading of that disclaimer would be friendly or neutral; I don't know. My quick reading of it was: by interacting with AI Impacts you could be waiving some sort of right. To be honest, I don't know what CC0 is.

I have nothing further to add to this.

Replies from: KatjaGrace
comment by KatjaGrace · 2015-02-09T17:53:00.850Z

Ah, I see. Thanks. We just meant that Paul and I are waiving our own rights to the content; it's like Wikipedia in the sense that other people are welcome to use it. We should perhaps make that clearer.

comment by [deleted] · 2015-02-03T02:53:29.927Z

"waived all"? you mean "assigned all" right?

Replies from: KatjaGrace
comment by KatjaGrace · 2015-02-03T17:48:43.961Z

I don't think so; the copyright to AI Impacts is waived, in the sense that we don't have it.

Replies from: None
comment by [deleted] · 2015-02-03T23:47:02.821Z

The text I was questioning (see above) would have the contributor waive copyright without assigning it, which ends up placing the contributed work in the public domain. If that is the intention, I find it a little surprising.

Replies from: KatjaGrace
comment by KatjaGrace · 2015-02-09T17:57:27.745Z

Yes, it's in the public domain.

Replies from: None
comment by [deleted] · 2015-02-09T20:26:23.956Z

Cool, so I can take all the content of the site and repurpose it as I see fit, including changing attributions or using it in derivative works without attribution. That's what you had in mind, right?

Replies from: KatjaGrace
comment by KatjaGrace · 2015-02-10T16:11:59.598Z

Yes. What is the problematic case?

comment by Dr_Manhattan · 2015-02-03T22:08:31.258Z

These are very important questions, and they are coming up as points of disagreement between FLI and some AI researchers (triggered by recent announcements). I'm interested in knowing whether you are collaborating with FLI in some form.

Replies from: KatjaGrace
comment by KatjaGrace · 2015-02-09T17:55:20.724Z

We haven't been so far.