A Scholarly AI Risk Wiki

post by lukeprog · 2012-05-25T20:53:27.955Z · LW · GW · Legacy · 57 comments

Contents

  The Idea
  Benefits
  Costs

Series: How to Purchase AI Risk Reduction

One large project proposal currently undergoing cost-benefit analysis at the Singularity Institute is a scholarly AI risk wiki. Below I will summarize the project proposal, because:

 

 

The Idea

Think Scholarpedia:

But the scholarly AI risk wiki would differ from Scholarpedia in these respects:

Example articles: Eliezer Yudkowsky, Nick Bostrom, Ben Goertzel, Carl Shulman, Artificial General Intelligence, Decision Theory, Bayesian Decision Theory, Evidential Decision Theory, Causal Decision Theory, Timeless Decision Theory, Counterfactual Mugging, Existential Risk, Expected Utility, Expected Value, Utility, Friendly AI, Intelligence Explosion, AGI Sputnik Moment, Optimization Process, Optimization Power, Metaethics, Tool AI, Oracle AI, Unfriendly AI, Complexity of Value, Fragility of Value, Church-Turing Thesis, Nanny AI, Whole Brain Emulation, AIXI, Orthogonality Thesis, Instrumental Convergence Thesis, Biological Cognitive Enhancement, Nanotechnology, Recursive Self-Improvement, Intelligence, AI Takeoff, AI Boxing, Coherent Extrapolated Volition, Coherent Aggregated Volition, Reflective Decision Theory, Value Learning, Logical Uncertainty, Technological Development, Technological Forecasting, Emulation Argument for Human-Level AI, Evolutionary Argument for Human-Level AI, Extensibility Argument for Greater-Than-Human Intelligence, Anvil Problem, Optimality Notions, Universal Intelligence, Differential Intellectual Progress, Brain-Computer Interfaces, Malthusian Scenarios, Seed AI, Singleton, Superintelligence, Pascal's Mugging, Moore's Law, Superorganism, Infinities in Ethics, Economic Consequences of AI and Whole Brain Emulation, Creating Friendly AI, Cognitive Bias, Great Filter, Observation Selection Effects, Astronomical Waste, AI Arms Races, Normative and Moral Uncertainty, The Simulation Hypothesis, The Simulation Argument, Information Hazards, Optimal Philanthropy, Neuromorphic AI, Hazards from Large-Scale Computation, AGI Skepticism, Machine Ethics, Event Horizon Thesis, Acceleration Thesis, Singularitarianism, Subgoal Stomp, Wireheading, Ontological Crisis, Moral Divergence, Utility Indifference, Personhood Predicates, Consequentialism, Technological Revolutions, Prediction Markets, Global Catastrophic Risks, Paperclip Maximizer, Coherent Blended Volition, Fun Theory, Game Theory, The Singularity, History of AI Risk Thought, Utility Extraction, Reinforcement Learning, Machine Learning, Probability Theory, Prior Probability, Preferences, Regulation and AI Risk, Godel Machine, Lifespan Dilemma, AI Advantages, Algorithmic Complexity, Human-AGI Integration and Trade, AGI Chaining, Value Extrapolation, 5 and 10 Problem.

Most of these articles would contain previously unpublished research (not published even in blog posts or comments), because most of the AI risk research that has been done has never been written up in any form but sits in the brains and Google docs of people like Yudkowsky, Bostrom, Shulman, and Armstrong.

 

Benefits

More than a year ago, I argued that SI would benefit from publishing short, clear, scholarly articles on AI risk. More recently, Nick Beckstead expressed the point this way:

Most extant presentations of SIAI's views leave much to be desired in terms of clarity, completeness, concision, accessibility, and credibility signals.

Chris Hallquist added:

I've been trying to write something about Eliezer's debate with Robin Hanson, but the problem I keep running up against is that Eliezer's points are not clearly articulated at all. Even making my best educated guesses about what's supposed to go in the gaps in his arguments, I still ended up with very little.

Of course, SI has long known it could benefit from clearer presentations of its views, but the cost of producing them has been too high: scholarly authors of Nick Bostrom's skill and productivity are extremely rare, and almost none of them care about AI risk. But now, let's be clear about what a scholarly AI risk wiki could accomplish:

There are some benefits to the wiki structure in particular:

 

Costs

This would be a large project with significant costs. I'm still estimating them, but here are some ballpark numbers for a scholarly AI risk wiki containing all the example articles above:

  • 1,920 hours of SI staff time (80 hrs/week for 24 months). This comes out to about $48,000, depending on who is putting in these hours.
  • $384,000 paid to remote researchers and writers ($16,000/mo for 24 months; our remote researchers generally work part-time, and are relatively inexpensive).
  • $30,000 for wiki design, development, hosting costs

57 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2012-05-27T07:30:59.608Z · LW(p) · GW(p)

I vote for spending the resources in one or more of the following ways instead:

  1. Write down any previously unpublished ideas in SIAI people's heads, as concisely and completely as possible, as blog posts or papers.
  2. Incrementally improve the LW Wiki. Add entries for any of the topics on your list that are missing, and link to existing blog posts and papers.
  3. Make a push for "AI Risk" (still don't like the phrase, but that's a different issue) to become a proper academic discipline (i.e., one that's studied by many academics outside FHI and SIAI). I'm not sure how this is usually done, but I think hosting an academic conference and calling for papers would help accomplish this.

(I've noticed a tendency in Luke's LW writings (specifically the Metaethics and AI Risk Strategy sequences) to want to engage in scholarship and systematically write down the basics in preparation for "getting to the good stuff" and then petering out before actually getting to the good stuff, and don't want to see this repeated by SIAI as a whole, on a larger scale.)

Replies from: lukeprog, ghf, wedrifid, None
comment by lukeprog · 2012-05-30T23:01:07.667Z · LW(p) · GW(p)

Thanks for your suggestions! All these are, in fact, in the works.

Write down any previously unpublished ideas in SIAI people's heads, as concisely and completely as possible, as blog posts or papers.

Carl is writing up some of these on his blog, Reflective Disequilibrium.

Incrementally improve the LW Wiki. Add entries for any of the topics on your list that are missing, and link to existing blog posts and papers.

This is in my queue of things for remote researchers to do.

Make a push for "AI Risk" to become a proper academic discipline

I'm working with Sotala and Yampolskiy on a paper that summarizes the problem of AI risk and the societal & technical responses to it that have been suggested so far, to give some "form" to the field. I already published my AI Risk Bibliography, which I'll update each year in January. More importantly, AGI-12 is being paired with a new conference called AGI-Impacts. Also, SIAI is developing a "best paper" prize specific to AGI impacts or AI risk or something (we're still talking it through).

comment by ghf · 2012-05-27T23:36:47.247Z · LW(p) · GW(p)

I definitely agree.

For (3), now is the time to get this moving. Right now, machine ethics (especially regarding military robotics) and medical ethics (especially in terms of bio-engineering) are hot topics. Connecting AI Risk to either of these trends would allow you to extend it and, hopefully, bud it off as a separate focus.

Unfortunately, academics are pack animals, so if you want to communicate with them, you can't just stake out your own territory and expect them to do the work of coming to you. You have to pick some existing field as a starting point. Then, knowing the assumptions of that field, you point out the differences in what you're proposing and slowly push out and extend towards what you want to talk about (the pseudopod approach). This fits well with (1) since choosing what journals you're aiming at will determine the field of researchers you'll be able to recruit from.

One note, if you hold a separate conference, you are dependent on whatever academic credibility SIAI brings to the table (none, at present (besides, you already have the Singularity Summit to work with)). But, if you are able to get a track started at an existing conference, suddenly you can define this as the spot where the cool researchers are hanging out. Convince DARPA to put a little money towards this and suddenly you have yourselves a research area. The DOD already pushes funds for things like risk analyses of climate change and other 30-100 year forward threats so it's not even a stretch.

comment by wedrifid · 2012-05-27T08:40:22.877Z · LW(p) · GW(p)

This exactly.

Wikis just aren't a practical place to put original, ongoing research.

comment by [deleted] · 2012-05-27T10:52:20.311Z · LW(p) · GW(p)

+1

comment by [deleted] · 2012-05-25T21:54:30.585Z · LW(p) · GW(p)

Consider the risk that, after the initial burst of effort, the wiki ends up sitting empty; the internet is littered with abandoned wikis, with actively maintained ones being rare exceptions.

After the wiki is initially published, what would motivate one to spend an hour curating and revising wiki content instead of doing anything else?

Replies from: Emile
comment by Emile · 2012-05-25T22:01:11.091Z · LW(p) · GW(p)

The plan seems to involve paying people to edit the Wiki, which is a solid way of preventing decay.

Replies from: lukeprog
comment by lukeprog · 2012-05-25T22:21:33.814Z · LW(p) · GW(p)

Yes. I have no hope of doing this entirely with volunteers.

Replies from: Tuxedage
comment by Tuxedage · 2012-05-26T14:38:28.146Z · LW(p) · GW(p)

I would like to interject for a moment and say that I would be very willing to volunteer a substantial portion of my time, provided that I'm taught specifically what I should do and given sufficient time to learn.

Although this is a purely anecdotal thought, I believe that there is a significant number of people, including myself, who would like to see the Singularity happen, and would be willing to volunteer vast amounts of time in order to increase the probability of a friendly intelligence explosion happening. You might be underestimating the number of people willing to volunteer for free.

Replies from: lukeprog
comment by lukeprog · 2012-05-26T15:47:21.685Z · LW(p) · GW(p)

This article rings very true to me:

In our experience, valuable volunteers are rare. The people who email us about volunteer opportunities generally seem enthusiastic about GiveWell’s mission, and motivated by a shared belief in our goals to give up their free time to help us. Yet, the majority of these people never complete useful work for us.

...almost 80% of people who take the initiative to seek us out and ask for unpaid work fail to complete a single assignment. But maybe this shouldn’t be surprising. Writing an email is quick and exciting; spending a few hours fixing punctuation is not.

Now, maybe you are one of the volunteers who will turn out to be productive. I already have 6-8 volunteers who are pretty productive. But given my past experience, an excited email volunteering to help provides me almost no information on whether that person will actually help.

Replies from: Tuxedage, evand, hirvinen
comment by Tuxedage · 2012-05-26T18:26:23.784Z · LW(p) · GW(p)

Then I suppose we should test that theory out. I've already repeatedly sent the SIAI messages asking if I may volunteer, unfortunately with no reply.

Give me a task and I'll see if I'm really committed enough to spend hours fixing punctuation.

Replies from: lukeprog
comment by lukeprog · 2012-05-26T19:55:27.560Z · LW(p) · GW(p)

Where did you send the message? Actually, if your past messages didn't go through, probably best to just email me at luke [at] singularity.org.

comment by evand · 2012-05-27T02:02:05.912Z · LW(p) · GW(p)

If 20% of people who seek you out actually volunteer, what's the fraction like for those that don't seek you out? 1e-8? So that email is worth over 20 bits of information? Maybe we have very different definitions of "almost no information", but it seems to me that email is quite valuable even if only 20% do a single assignment, and still quite valuable information even if only 20% of those do anything beyond that one assignment.
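For concreteness, here is a minimal sketch of the likelihood-ratio arithmetic behind those figures; the 20% and 1e-8 rates are the hypothetical numbers from the comment above, not measured values.

```python
import math

# Hypothetical figures from the comment above (not measured data):
p_email = 0.20   # fraction of self-selected emailers who complete an assignment
p_base = 1e-8    # assumed fraction among people who never make contact

# Information carried by the email, as a log-likelihood ratio in bits.
bits = math.log2(p_email / p_base)
print(f"{bits:.1f} bits")  # ~24.3 bits, i.e. "over 20 bits"
```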

Replies from: lukeprog
comment by lukeprog · 2012-05-27T03:35:13.807Z · LW(p) · GW(p)

Right, it's much less than 20% who actually end up being useful.

comment by hirvinen · 2012-10-23T05:41:08.163Z · LW(p) · GW(p)

With good collaboration tools, for many kinds of tasks, testing the commitment of volunteers by putting them to work should be rather cheap, especially if they can be given less time-critical tasks, or tasks where they help speed up someone else's work.

Serious thought should go into looking for ways unpaid volunteers could help, since there are loads of bright people with more time and enthusiasm than money, for whom it is much easier to put in a few hours a week than to donate the equivalent money towards paid contributors' work.

comment by Kaj_Sotala · 2012-05-26T06:42:23.787Z · LW(p) · GW(p)

This idea sounds promising, but I find it hard to say anything about "should this be funded" without knowing what the alternative uses for the money are. Almost any use of money can be made to sound attractive with some effort, but the crucial question in budgeting is not "would this be useful" but "would this be the most useful thing".

I recognize that it might take an excessively long time to write up all the ideas that are being thrown around, but even brief descriptions of the top three competing alternatives would help.

Replies from: lukeprog
comment by lukeprog · 2012-05-26T15:52:55.261Z · LW(p) · GW(p)

the crucial question in budgeting is not "would this be useful" but "would this be the most useful thing".

Yes, that is the question we're always asking ourselves. :)

I do plan to explain some alternate uses for money over the next couple weeks.

comment by Paul Crowley (ciphergoth) · 2012-05-26T14:06:49.353Z · LW(p) · GW(p)

Actually, it occurs to me that my previous comment makes the case that SI should have a single logical document, kept up to date, maintaining its current case. It doesn't argue that it should be a wiki. One alternative would be to keep a book in revision control - there are doubtless others, but let me discuss this one.

Pros:

  • A book has a prescribed reading order; it may be easier to take in the content if you can begin at the beginning and work forwards. This is a huge advantage - I'd upload it to the Kindles of all my friends who have given me permission.
  • The book would be written in LaTeX, so it would be easier to convert parts of it to academic papers. MediaWiki format is the most awful unparseable dog's breakfast; it seems a shame to use it to create content of lasting value.
  • Real revision control is utterly wonderful (eg hg, git) - what MediaWiki provides absolutely pales in comparison.
  • Real revision control makes it easier for outsiders to contribute without special permissions - they just send you patches, or invite you to pull from them.

Cons:

  • Mediawiki is easier to use
  • People are used to wikis
  • Wikis more naturally invite contribution
  • Wikis don't need you to install or compile anything
  • Much of the content is more wiki-like than book-like - it's not a core part of what SI are discussing but an aside about the work of others, and in a book it would probably go in an appendix.
Replies from: Emile, hirvinen
comment by Emile · 2012-05-26T19:32:45.027Z · LW(p) · GW(p)

Other cons (advantages of Wiki):

  • It's easier to have a bunch of hyperlinks in a wiki (be it to internal or external material)
  • A wiki's comment page is a natural place to have a discussion; collaborative work on a book would also require a separate medium (Mailing list, LessWrong) for discussion
  • A wiki is indexed by search engines
  • You can link to a specific page of a wiki
comment by hirvinen · 2012-10-23T05:49:30.866Z · LW(p) · GW(p)

There are several relatively mature wiki engines besides MediaWiki, with different markup languages etc. The low barrier to entry for wikis, even with less familiar markup languages, is a very important consideration.

comment by Paul Crowley (ciphergoth) · 2012-05-26T13:44:17.147Z · LW(p) · GW(p)

I like this proposal a lot. What are the alternatives?

SI could greatly reduce public engagement, directing funds toward research

As I see it, SI has two main ways to spend money: on research, and on public engagement. Obviously it has to spend money on running itself, but it's best to see that as money indirectly spent on its activities. It could direct nearly all its funding to research.

Pros: SI's research can directly bring about its central goal. We don't have all the time in the world.

Cons: Public engagement, in various ways, is what makes the research possible: it brings in the funding and it makes it easier to recruit people. In a self-sustaining non-profit, money spent on public engagement now should mean money to spend on research later. Also, public engagement directly serves the aims of SI by making more people aware of the risk.

SI could stick to presenting its existing case, leaving the gaps unfilled

Pros: That would be cheaper, and allow more to be spent on research.

Cons: given that SI's case is not intuitively appealing, making it strong seems the best way to win the right people over; as Holden Karnofsky's commentary demonstrates, leaving the holes unfilled is harming credibility and making public engagement less effective. Further, the earlier problems in this case are discovered, the more effectively future work can be directed.

SI could stick to the academic paper format, or another un-wiki "write, finish, move on" format

Pros: This presents another big cost saving: you only have to write what's new. Much of the proposed wiki content would come from work SI have already written up; there would be significant costs in adapting that work for the new format, which could be avoided if SI stick to writing new work in new papers. Furthermore, SI pretty much have to write the academic papers anyway; the work involved in writing for one format, then converting to another, can be avoided.

Cons: What you have to read to understand SI's case grows linearly. An argument made sloppily in one paper is strengthened in a later one; but you have to read both papers, notice the sloppy argument, and then reach the later paper to fix it. Or try to read the later paper, and fail to understand why this point matters, until you read the earlier one and see the context. A wiki-like "here is our whole case" format allows the case to be presented as a coherent whole, with problems with previous revisions largely elided, or relegated to specific wiki pages that need only be read by the curious.

Further, in practice the academic paper format does not free you from the need to cover old ground; in my experience, finding new ways to say the same old things in the "Introduction" section of such papers, introducing the problem you intend to discuss, is dull and tiresome work.

I think there's lots of discussion to be had about how to get the most out of the wiki and how to minimize the costs, but as you can see, on the "is it a good idea at all" I'm pretty sold.

comment by John_Maxwell (John_Maxwell_IV) · 2012-05-26T04:44:43.006Z · LW(p) · GW(p)

This sounds like a great idea to me!

A sister project that might be worth considering is to create a Stack Overflow type question-and-answer website as a companion for the wiki. (1, 2, open source clones.)

Potential benefits:

  • Iron out confusing, unclear, or poorly supported points in the wiki as they are brought up as questions.
  • This sort of question-and-answer platform could be a good way to hear from a wider variety of perspectives. A natural reaction of a visitor to the wiki might be "where are those who disagree"? If successful, a question-and-answer site could attract dissenters.
  • By soliciting input from random Internet users, you might see them correctly find flaws in your arguments or contribute useful insights.
  • Save even more time explaining things with your searchable question-and-answer database. (It's likely that the software would encourage users to search for their question before asking it.)
  • I suspect this question-and-answer model works so well for a reason. Don't be surprised if wiki editors find answering questions enjoyable or addictive.

Potential harms:

  • Could hurt scholarly atmosphere.
  • Could incentivize users to respond quickly when they should be contemplating carefully instead.

Some other thoughts: wiki talk pages could potentially be eliminated or replaced by questions like "how can we improve the page on so-and-so"? Discussion on AI risk topics could potentially be moved off Less Wrong and onto this site.

comment by JGWeissman · 2012-05-25T21:24:44.840Z · LW(p) · GW(p)

Do you have plans to invite any particular people outside of SIAI to contribute?

What is your expectation that the academic community will seriously engage with a scholarly wiki? What is the relative value of a wiki article vs a journal article? How does this compare to relative cost?

Could this help SIAI recruit FAI researchers?

Replies from: lukeprog
comment by lukeprog · 2012-05-25T22:18:51.943Z · LW(p) · GW(p)

Do you have plans to invite any particular people outside of SIAI to contribute?

Certainly!

What is your expectation that the academic community will seriously engage with a scholarly wiki? What is the relative value of a wiki article vs a journal article? How does this compare to relative cost?

The academic community generally will not usefully engage with AI risk issues unless they (1) hear the arguments and already accept (or are open to) the major premises of the central arguments, or unless they (2) come around to caring by way of personal conversation and personal relationships. Individual scholarly articles, whether in journals or in a wiki, don't generally persuade people to care. Everyone has their own list of objections to the basic arguments, and you can't answer all of them in a single article. (But again, a wiki format is better for this.)

The main value of journal papers or wiki articles on AI risk is not for people who have strong counter-intuitions (e.g. "more intelligence implies more benevolence," "machines can't be smarter than humans"). Instead, they are mostly of value to people who already accept the premises of the arguments but hadn't previously noticed their implications, or who are open enough to the ideas that with enough clear explanation they can grok it.

As long as you're not picky about which journal you get into, the cost of a journal article isn't much more than that of a good scholarly wiki article. Yes, you have to do more revisions, but in most cases you can ignore the revision suggestions you don't want to make, and just make the revisions you do want to make. (Whaddyaknow? Peer review comments are often helpful.) A journal article has some special credibility value in having gone through peer review, while a wiki article has some special usefulness value in virtue of being linked directly to articles that explain other parts of the landscape.

A journal article won't necessarily get read more than a wiki article, though. More people read Bostrom's preprints on his website than the same journal articles in the actual journals. One exception to this is that journal articles sometimes get picked up by the popular media, whereas they won't write a story about a wiki article. But as I said in the OP, it won't be that expensive to convert material from good scholarly wiki articles to journal articles and vice versa, so we can have both without much extra expense.

I'm not sure I answered your question, though: feel free to ask follow-up questions.

Could this help SIAI recruit FAI researchers?

Heck yes. As near as I can tell, what happens today is this:

  • An ubermath gets interested enough to devote a dozen or more hours reading the relevant papers and blog posts, gets pretty interested.
  • In most cases, the ubermath does nothing and contacts nobody, except maybe asking some friends what they think and mostly keeping these crazy-sounding ideas at arm's length. Or they ask an AI expert what they think of this stuff and the AI expert sends them back a critique of Kurzweil. (Yes, this has happened!)
  • In some cases, the ubermath hangs out on LW and occasionally comments but doesn't make direct contact or show us that they are an ubermath.
  • In other cases, the ubermath makes contact (or, we discover their ubermathness by accident while talking about other subjects), and this leads to personal conversations with us in which the ubermath explains which parts of the picture they got from The Sequences don't make sense, and we say "Yes, you're right, and very perceptive. The picture you have doesn't make sense, because it's missing these 4 pieces that are among the 55 pieces that have never been written up. Sorry about that. But here, let's talk it through." And then the ubermath is largely persuaded and starts looking into decision theory, or thinking about strategy, or starts donating, or collaborates with us and occasionally thinks about whether they'd like to do FAI research for SI some day, when SI can afford the ubermath.

A scholarly AI risk wiki can help ubermaths (and non-ubermaths like myself) to (1) understand our picture of AI risk better, more quickly, more cheaply, and in a way that requires less personal investment from SI, (2) see that there is enough serious thought going into these issues that maybe they should take it seriously and contact us, (3) see where the bleeding edges of research are, so that they might contribute to them, and more.

BTW, an easy way to score a conversation with SI staff is to write one of us an email that simply says "Hi my name is , I got a medal in the IMO or scored well on the Putnam, and I'm starting to think seriously about AI risk."

We currently spend a lot of time in conversation with promising people, in part because one really can't get a very good idea of our current situation via the articles and blog posts that currently exist.

(These opinions are my own and may or may not represent those of other SI staffers, for example people who may or may not be named Eliezer Yudkowsky.)

Replies from: JGWeissman, JGWeissman
comment by JGWeissman · 2012-05-25T23:06:36.740Z · LW(p) · GW(p)

BTW, an easy way to score a conversation with SI staff is to write one of us an email that simply says "Hi my name is , I got a medal in the IMO or scored well on the Putnam, and I'm starting to think seriously about AI risk."

Would it be useful for SIAI to run a math competition to identify ubermaths, or to try contacting people who have done well in existing competitions?

Replies from: lukeprog
comment by lukeprog · 2012-05-25T23:26:04.510Z · LW(p) · GW(p)

Yes. It's on our to-do list to reach out to such people, and also to look into sponsoring these competitions, but we haven't had time to do those things yet.

comment by JGWeissman · 2012-05-25T22:42:22.188Z · LW(p) · GW(p)

Do you have plans to invite any particular people outside of SIAI to contribute?

Certainly!

Who? People at FHI? Other AGI researchers?

(And thanks for good answers to the other questions.)

Replies from: lukeprog
comment by lukeprog · 2012-05-25T23:26:31.687Z · LW(p) · GW(p)

FHI researchers, AGI researchers, other domain experts, etc.

comment by Stuart_Armstrong · 2012-06-01T14:07:27.126Z · LW(p) · GW(p)

The FHI already has a private wiki, run by me (with some access for outside-FHIers). It hasn't been a great success. If we do a public wiki, it's absolutely essential that we get people involved who know how to run a wiki, keep people involved, and keep it up to date (or else it's just embarrassing). After the first flush of interest, I'm not confident we have enough people with free time to sustain it.

Would a subsection of an existing major wiki be a better way to go?

Replies from: lukeprog, John_Maxwell_IV
comment by lukeprog · 2012-06-01T18:28:54.447Z · LW(p) · GW(p)

You showed me that wiki once. The problem of course is that there isn't enough budget invested in it to make it grow and keep it active. We would only create this scholarly AI risk wiki if we had the funds required to make it worthwhile.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-06-02T00:01:10.716Z · LW(p) · GW(p)

Are you confident you can translate budget into sustained wiki activity?

Replies from: lukeprog
comment by lukeprog · 2012-06-02T00:42:57.208Z · LW(p) · GW(p)

Not 100%, obviously. But most of the work developing the wiki would be paid work, if that's what you mean by "activity."

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-06-04T19:01:20.432Z · LW(p) · GW(p)

Well, as long as it's well curated and maintained, I suppose it could work... But why not work on making the less wrong wiki better? That comes attached to the website already.

Anyway, I'm not sure a new wiki has much of an advantage over "list of recent AI risk papers + links to youtube videos + less wrong wiki updated a bit more" for researchers - at least, not enough advantage to justify the costs. A few well-maintained pages ("AI risks", "friendly AI", "CEV", "counterarguments", "various models"), no more than a dozen at most, that summarise the core arguments with links to the more advanced stuff, should be enough for what we'd want, I feel.

Replies from: lukeprog
comment by lukeprog · 2012-06-04T22:23:09.183Z · LW(p) · GW(p)

You might be right. I do have 3 people right now improving the LW wiki and adding all the pages listed in the OP that aren't already in the LW wiki.

comment by John_Maxwell (John_Maxwell_IV) · 2012-06-22T21:20:48.336Z · LW(p) · GW(p)

I would guess that the primary factors related to whether wikis succeed or fail don't have much to do with whether they are about AI risk or some other topic. So, perhaps beware of extrapolating too much from the FHI wiki data point.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-06-25T08:39:18.030Z · LW(p) · GW(p)

As I said: it's absolutely essential that we get people involved who know how to run a wiki, keep people involved, and keep it up to date.

I'm unfortunately not one of these types of people.

comment by Vaniver · 2012-05-26T07:17:04.961Z · LW(p) · GW(p)

If a wiki format is what SIAI researchers want to write in - and I suspect it is - then go with that. Like you say, it's fairly easy to switch content between forms and the bottleneck appears to be getting that content out of minds.

The total cost doesn't seem all that relevant - the relative cost between writing wiki articles and writing journal articles seems to me like the driving factor.

comment by Paul Crowley (ciphergoth) · 2012-05-28T07:29:07.484Z · LW(p) · GW(p)

Inspired by Wei_Dai's comment, and assuming that SI should have a single logical document, kept up to date, maintaining its current case:

SI could use the existing LW wiki for this purpose, rather than creating a new one

Pros:

  • The LW wiki exists now; there will be no further design, development, or hosting costs.
  • The LW wiki already has relevant useful content that could be linked to.
  • There is at least some chance of at least some useful volunteer engagement

Cons:

  • SI want the final say in what their evolving document says. It's not clear that we on LW want SI to treat "our" wiki that way.
  • SI want their evolving document to contain only content that they explicitly approve. The LW wiki doesn't work that way.
  • SI want their evolving document to have SI branding, not LW branding.

My conclusion: All of the "pros" are about the immediate advantages of using the LW wiki, while all the "cons" are about the longer term goals for the evolving document. That suggests we should start using the LW wiki now, and create an SI wiki later. Taken together with my last comment this suggests the following course of action: first, start putting all that unpublished work straight into the LW wiki. This is a change of the rules that require things to be elsewhere first, but I'd favour it. SI can fork the content into its own, separately branded wiki when it's sufficiently complete to do the job for which it was created.

The biggest thing I worry about here is, e.g., an edit war where someone wants to include something in the "Eliezer Yudkowsky" page that SI think is silly and irrelevant criticism. If SI say "we run this wiki, we're not having that in there, go argue with Wikipedia to put it in there", then people may take that as meaning more than it does.

Replies from: wedrifid
comment by wedrifid · 2012-05-28T15:36:57.568Z · LW(p) · GW(p)

Adding to the Cons, it isn't desirable for SingInst to make itself more affiliated with Lesswrong. Lesswrong users say all sorts of things that SingInst people cannot say and do not agree with. They can (and should) also be far more free to make candid assessments of the merits of pieces of academic work.

For example lesswrong folks can cavalierly dismiss Wang's thoughts on AGI and discount him as an expert while SingInst folks must show respect according to his status and power, only disagreeing indirectly via academic publications. So I consider Luke's decision to publicly post the Wang conversation (and similar conversations) to be a mistake and this kind of thing would be exacerbated by stronger Lesswrong/SIAI association.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-28T19:36:21.449Z · LW(p) · GW(p)

Agreed with your broader point; disagree that the decision to post the conversation here was a mistake.

comment by Paul Crowley (ciphergoth) · 2012-05-28T07:11:58.713Z · LW(p) · GW(p)

Inspired by Wei_Dai's comment, and assuming that SI should have a single logical document, kept up to date, maintaining its current case:

SI could try to get their "previously unpublished research (not published even in blog posts or comments)" out there in the form of roughly written and loosely argued blog posts, before worrying about creating this document.

Pros:

  • As Wei_Dai observes, it's easy to spend lots of time laying the groundwork and never getting to the actual building stage.
  • At least some people not in SI are at least roughly familiar with the groundwork already; those people would be better informed for these blog posts and might be able to usefully contribute to the discussion.
  • The posts would provide material later to be made part of the single evolving case document.

Cons:

  • People will try to use them to evaluate SI's case anyway, despite the disclaimer at the top.
  • If I'm more convinced of SI's case from reading it, great! If I'm less convinced, hold off, we haven't finished writing it up yet. Are we asking people to suspend Conservation of Expected Evidence? (I don't think this is a problem, but not confident enough to leave it out)
  • Now we're converting material twice: once from blog post to evolving document, and once from document to papers.

My conclusion: I like this idea a lot less having written it up. However, I still like the idea of getting it out there first and polishing it later. I'd rather say, start work on the evolving case document immediately, but write the new stuff first, and worry about the stuff that's already out there in other forms later. Or pages on old stuff could start life simply linking to existing sources, with very little new text. No matter what disclaimers you put on a blog post, people will treat it as a finished thing to some extent, and major edits during discussion are problematic. Linking to a revision-controlled, evolving document like a wiki doesn't have the same feel at all.

comment by Bruno_Coelho · 2012-05-26T16:12:30.491Z · LW(p) · GW(p)

Is it necessary to understand "AI Risk" as a field? StackOverflow and MathOverflow have the critical mass to ask and answer questions in programming/computer science and math respectively, which are established fields of study. I suppose constant feedback would maintain the wiki, if there are enough scholars out there who intend to donate their time to it. But are there?

comment by hirvinen · 2012-10-23T05:19:42.629Z · LW(p) · GW(p)

That 1,920 h should be 24 months of 80 h/month, not 80 h/week: 80 h/month × 24 months = 1,920 h, whereas 80 h/week over 24 months would come to roughly 8,300 h.

comment by John_Maxwell (John_Maxwell_IV) · 2012-05-26T05:00:23.171Z · LW(p) · GW(p)

One risk of making arguments public is that those who originated the arguments stand to lose more face if they are shown to be incorrect publicly than privately. (I'm sure all four of the individuals you refer to care more about having accurate beliefs than preserving face, but I thought it was worth pointing out.) Note that this problem is not exclusive to the wiki proposal; it also applies to just writing more papers.

I think it makes sense in general to present one's conclusions as tentative even if one holds them with high confidence, just to guard against this sort of thing. Redundancy is good. Why not have two mechanisms to guard against any face-saving instinct?

comment by hirvinen · 2012-10-23T14:32:40.518Z · LW(p) · GW(p)

The price tag of the wiki itself sounds too high: if 1,920 hours of SI staff time cost USD 48,000, that's USD 25/h. If hosting and maintenance are USD 500/month (it should be much less), then over 24 months that would leave USD 18k for design and development, which at SI staff rates would be 720 hours of work; that sounds waaay too much for setting up a relatively simple(?) wiki site.

Replies from: DaFranker, gwern
comment by DaFranker · 2012-10-25T21:32:34.326Z · LW(p) · GW(p)

You seem to be vastly underestimating the time-cost of running a successful, pertinent, engaging, active and informative online community that isn't held together by a starting group of close friends or partners who have fun doing a particular activity together.

For a bit of practical grounding, consider that simple "clan" (or "guild" or whatever other term they pick) websites for online gaming communities that somehow manage to go above 100 members, a paltry number compared to what I believe is the target size of userbase of this wiki, often require at least three or four active administrators who each put in at least 5 hours of activity per week in order to keep things running smoothly, prevent mass exodus, prevent drama and discord, etc.

The goal isn't just to make a wiki website and then leave it there, hoping that people will come visit and contribute. The goal is to go from a bunch of low-profile scholarly stuff to the scientific AI risk version of TV Tropes, with a proportional user base to the corresponding target populations.

comment by gwern · 2012-10-25T22:44:03.778Z · LW(p) · GW(p)

FWIW, I estimate that I spend 5-15 minutes every day dealing with spam on the existing LessWrong wiki and dealing with collateral damage from autoblocks, which would be ~3 hours a month; I don't even try to review edits by regular users. That doesn't seem to be included in your estimate of maintenance cost.

Replies from: hirvinen
comment by hirvinen · 2012-11-14T09:56:47.464Z · LW(p) · GW(p)

  • 1,920 hours of SI staff time (80 hrs/week for 24 months). This comes out to about $48,000, depending on who is putting in these hours.
  • $384,000 paid to remote researchers and writers ($16,000/mo for 24 months; our remote researchers generally work part-time, and are relatively inexpensive).
  • $30,000 for wiki design, development, hosting costs

  • Dealing with spam shouldn't be counted under "design, development and hosting".
  • The first item establishes SIAI staff time cost at $25/h. If the (virtual) server itself, bandwidth and technical expert maintenance is $500/month, that still leaves 720 hours of SIAI staff-priced work in the "design, development and hosting" budget.
  • If we roughly quadruple your time estimate to 3 hours per week to combat spam, then that still leaves 720 hours - (2 years × 52 weeks × 3 hours/week) = 408 hours, which still seems excessive for "design, development and hosting" considering that we have a lot of nice, relatively easily customisable wiki software available for free.
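As a quick check, here is a minimal sketch reproducing that budget arithmetic; all figures are the ones quoted in the comment above (the $500/month hosting cost and the 3 h/week spam estimate are the commenter's assumptions, not SI's).

```python
# Re-deriving hirvinen's budget arithmetic from the figures quoted above.
staff_hours = 1920
staff_cost = 48_000
rate = staff_cost / staff_hours        # implied SI staff rate: $25/h

wiki_budget = 30_000                   # "design, development, hosting costs"
hosting = 500 * 24                     # assumed $500/month over 24 months = $12,000
dev_budget = wiki_budget - hosting     # $18,000 left for design and development
dev_hours = dev_budget / rate          # 720 hours at the implied staff rate

spam_hours = 3 * 52 * 2                # assumed 3 h/week over 2 years = 312 hours
print(rate, dev_hours, dev_hours - spam_hours)  # 25.0 720.0 408.0
```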
Replies from: gwern
comment by gwern · 2012-11-14T16:15:47.956Z · LW(p) · GW(p)

Dealing with spam needs to be counted somehow for an open wiki, and if you go to a closed wiki, then that needs to be taken into account by reducing the expected benefits from it...

comment by BaconServ · 2012-05-26T14:42:45.435Z · LW(p) · GW(p)

I do not see the point in an exhaustive list of failure scenarios before the existence of any AI is established.

Yeah, I'm not going to care about reading it, and I really don't think it's possible for anyone to get close to AI without it dawning on them what the thing might be capable of. I mean, why don't we get at least /one/ made before we invest our time and effort into something that, in my belief, won't have been relevant, and in all likelihood, won't get to the people who it needs to get to if they even cared about it.

Replies from: ciphergoth, TheOtherDave
comment by Paul Crowley (ciphergoth) · 2012-05-26T15:39:34.713Z · LW(p) · GW(p)

We have to sometimes be allowed to have the second discussion. There sometimes has to be a discussion among those who agree that X is an issue, about what to do about it. We can't always return to the discussion of whether X is an issue at all, because there's always someone who dissents. Save it for the threads which are about your dissent.

Replies from: BaconServ
comment by BaconServ · 2012-05-27T02:42:19.953Z · LW(p) · GW(p)

I'm voicing my dissent because the amount of confidence that it takes to justify the proposal is not rational as I see it.

I am in support of collecting a list of failure scenarios. I am not in support of making an independent wiki on the subject. I'd need to see a considerable argument of confidence be made before I'd understand why to put all this effort into it instead of, say, simply making a list in LessWrong's existing wiki.

comment by TheOtherDave · 2012-05-26T15:23:39.668Z · LW(p) · GW(p)

I mean, why don't we get at least /one/ made before we invest our time and effort into something that, in my belief, won't have been relevant

I endorse you preferentially allocating your time and effort to those things that you expect to be relevant.
But I also endorse others doing the same.

Also, if you don't see the point in planning for failure scenarios before completing the project, I dearly hope you aren't responsible for planning for projects that can fail catastrophically.

Replies from: BaconServ
comment by BaconServ · 2012-05-27T02:59:26.486Z · LW(p) · GW(p)

I like being convinced that my preferential allocation of time is non-optimal. That way I can allocate my time to something more constructive. I vastly prefer more rational courses of action to less rational courses of action.

I of course advocate understanding failure scenarios, but the Bronze Age wasn't really the time to be contemplating grey goo countermeasures. Even if they'd wanted to at that time, they would have had nowhere near the competence to be doing anything other than writing science fiction. Which is what I see such a wiki at this point in time as being.

As an aspiring AI coder, suppose I were to ask, for any given article on the wiki, for any given failure scenario, to see some example code that would produce such a failure, so that, while coding my own AI, I am able to more coherently avoid that particular failure. As it is my understanding that nothing of the sort is even close to being able to be produced (to not even touch upon the security concerns), I do not see how such a wiki would be useful at this point in (lack of?) development.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-05-27T07:02:26.971Z · LW(p) · GW(p)

What does your tag mean?

Replies from: BaconServ
comment by BaconServ · 2012-05-27T10:45:42.191Z · LW(p) · GW(p)

That is the notation my author has chosen to indicate that te is the one communicating. Te uses it primarily in posts on LessWrong from the time before I was written in computer language.