Comments

Comment by Jeff_Alexander on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-24T03:04:23.299Z · LW · GW

For me, it "works" similarly to the original, but emphasizes (1) the underspecification of "far surpass", and (2) that the creation of a greater intelligence may require resources (intellectual or otherwise) beyond those of the proposed ultraintelligent person, the way an ultraintelligent wasp may qualify as far superior in all intellectual endeavors to a typical wasp yet still remain unable to invent and build a simple computing machine, nevermind constructing a greater intelligence.

Comment by Jeff_Alexander on Fixing Moral Hazards In Business Science · 2014-10-23T21:20:29.323Z · LW · GW

I'm a little worried a solution here will call for whoever controls the webapp to also be an expert at creating placebos for every product type.

How about this: the company with product type X suggests placebo Y. The webapp/process owner confirms the suitability of placebo Y with an unaffiliated/blinded subject-matter expert in the field of product X. If it is confirmed as suitable, the placebo is produced by an unaffiliated external company (which doesn't know what the placebo is intended for, only the formulation of the requested items).
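To make the information flow concrete, here is a minimal sketch of who would see what under this scheme. All names, types, and the review logic are hypothetical placeholders, not part of any existing webapp; the point is only that the expert never learns the requesting company and the manufacturer never learns the purpose.

    # Hypothetical sketch of the blinding scheme described above; all names and
    # logic are illustrative assumptions, not an existing system.
    from dataclasses import dataclass

    @dataclass
    class PlaceboProposal:
        product_type: str        # e.g. "sleep supplement"
        formulation: str         # what the placebo physically is
        requesting_company: str  # known only to the webapp/process owner

    def expert_review(product_type: str, formulation: str) -> bool:
        """Blinded subject-matter expert sees the product type and the proposed
        placebo formulation, but not which company made the request."""
        # Placeholder decision; a real review is a human judgment call.
        return formulation != ""

    def send_to_manufacturer(formulation: str) -> None:
        """The external manufacturer receives only the formulation -- not the
        product type or the requesting company -- so it cannot tailor the placebo."""
        print(f"Manufacturing order: {formulation}")

    def process(proposal: PlaceboProposal) -> None:
        # The webapp/process owner knows the requesting company, so it neither
        # judges suitability nor produces the placebo itself.
        if expert_review(proposal.product_type, proposal.formulation):
            send_to_manufacturer(proposal.formulation)
        else:
            print("Placebo rejected; company must propose another.")

    process(PlaceboProposal("sleep supplement", "inert sugar pill, matched size/color", "Acme Labs"))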

Alternatively, the webapp/process owner could produce the confirmed placebo, but I'm not sure this makes sense cost-wise, and it may also open the webapp/process owner up to accusations of corruption, since it is not blinded to who the recipient company is and could therefore collude.

Comment by Jeff_Alexander on Superintelligence Reading Group 3: AI and Uploads · 2014-10-04T06:38:19.432Z · LW · GW

the followup research questions could be better suited for an afternoon rather than a PhD

Could they? Very well! I hereby request at least one such research question in a future week, marked as such, for comparison to the grander-scale research questions.

An online meetup might be nice, but I'm not confident in my ability to consistently attend at a particular time, as evinced by my not generally participating live on Monday evenings.

Interviewing a relevant expert is useful and related, but somewhat beyond the scope of a reading group. I vote for this only if it suits your non-reading-group goals.

Multiple choice questions are a good idea, but mainly because taking tests is a useful way to study. Doing it to gather data on how much participants remember seems less useful, unless you randomly assign participants to differently arranged reading groups and then want to assess effectiveness of different approaches. (I'm not suggesting this latter bit be done.)

Thank you for the examples.

Comment by Jeff_Alexander on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T08:20:36.366Z · LW · GW

What are some ways it might be modified? The summaries are clear, and the links to additional material quite apt and helpful for those who wish to pursue the ideas in greater depth. So the ways in which one might modify the reading group in future weeks are not apparent to me.

Comment by Jeff_Alexander on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T02:48:51.645Z · LW · GW

One way changing architecture could be particularly important is improvement in the space- or time-complexity of its algorithms. A seed AI with a particular set of computational resources that improves its architecture to make decisions in (for example) logarithmic time instead of linear could markedly advance along the "speed superintelligence" spectrum through such an architectural self-modification.
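As a rough illustration (a toy calculation, not anything from the text): the gap between linear- and logarithmic-time decision procedures grows enormously with problem size, which is why such an algorithmic change alone could move an AI a long way along the speed axis. The task and numbers below are illustrative assumptions.

    # Toy comparison of linear vs. logarithmic step counts; the task and the
    # numbers are illustrative, not a model of any actual AI.
    import math

    for n in (10**3, 10**6, 10**9, 10**12):
        linear_steps = n                        # O(n): examine every candidate
        log_steps = math.ceil(math.log2(n))     # O(log n): e.g. binary search
        print(f"n = {n:>16,}  linear ~ {linear_steps:>16,} steps   "
              f"logarithmic ~ {log_steps:>2} steps   "
              f"(~{linear_steps // log_steps:,}x fewer)")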

Comment by Jeff_Alexander on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T02:20:49.248Z · LW · GW

If the idea is obvious enough to AI researchers (evolutionary approaches are not uncommon -- there are entire conferences dedicated to the sub-field), then avoiding discussion by Bostrom et al. doesn't reduce the information hazard; it just silences the voices of the x-risk savvy while evolutionary AI researchers march on, probably less aware of the risks of what they are doing than if the x-risk savvy kept discussing it.

So, to the extent this idea is obvious / independently discoverable by AI researchers, this approach should not be taken in this case.

Comment by Jeff_Alexander on Superintelligence Reading Group 3: AI and Uploads · 2014-10-01T19:58:50.109Z · LW · GW

I agree with this. Brute force searching AI did not seem to be a relevant possibility to me prior to reading this chapter and this comment, and now it does.

One more thought/concern regarding the evolutionary approach: Humans perform poorly when estimating the cost and duration of software projects, particularly as the size and complexity of the project grows. Recapitulating evolution is a large project, and so it wouldn't be at all surprising if it ended up requiring more compute time and person-hours than expected, pushing out the timeline for success via this approach.

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T07:36:42.612Z · LW · GW

According to this week's Muehlhauser, as summarized by you:

The estimates of informed people can vary between a small number of decades and a thousand years.

What about the thousand-year estimates? Are they rare outliers?

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T07:28:13.281Z · LW · GW

how the development of AI compares to more pressing concerns

Which concerns are more pressing? How was this assessed? I don't object to other things being more important, but I find the suggestion that there are more pressing concerns if AI is a bit further out to be one of the least persuasive aspects of the readings, given the lack of comparison and calculation.

2.

I agree with all of this, more or less. Perhaps I didn't state my caveats strongly enough. I just want an explicit comparison attempted and presented before accepting that AI is only worth thinking about if it's near -- e.g., given a 10% chance of AI in 20 years, 50% within 50 years, 70% within 100 years, etc., compare the expected value of working on AI now vs. synthetic biology risk reduction, healthy human life extension, making the species multi-planetary, raising the rationality waterline, etc.
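As a sketch of the kind of comparison I mean: every figure below is an arbitrary placeholder (the timeline probabilities are just the illustrative ones above, and the value numbers are made up), so the output means nothing by itself; it only shows the shape of the calculation.

    # Minimal expected-value sketch; all figures are arbitrary placeholders,
    # not actual estimates of AI timelines or intervention values.

    # Cumulative P(AI by horizon), using the illustrative numbers above.
    cumulative = {20: 0.10, 50: 0.50, 100: 0.70}

    # Convert to the marginal probability of arrival within each interval.
    horizons = sorted(cumulative)
    marginal, prev = {}, 0.0
    for h in horizons:
        marginal[h] = cumulative[h] - prev
        prev = cumulative[h]

    # Placeholder: how much safety work started today helps, given arrival by
    # each horizon (relative units; presumably lower for more distant arrivals).
    value_if_arrival = {20: 1.0, 50: 0.6, 100: 0.3}

    ev_ai_safety = sum(marginal[h] * value_if_arrival[h] for h in horizons)

    # Placeholder expected values for competing efforts, in the same units.
    alternatives = {
        "synthetic biology risk reduction": 0.25,
        "healthy life extension": 0.20,
        "multi-planetary species": 0.15,
    }

    print(f"AI safety now: ~{ev_ai_safety:.2f}")
    for name, ev in alternatives.items():
        print(f"{name}: ~{ev:.2f}")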

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T06:57:10.043Z · LW · GW

Though if human-level AI is very far away, I think there might be better things to do now than work on very direct safety measures.

Agreed. That is the meaning I intended by

estimates comparing this against the value of other existential risk reduction efforts would be needed to determine this [i.e. whether effort might be better used elsewhere]

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T02:28:07.102Z · LW · GW

This feels like a trap -- if the experts are so unreliable, and we are going out of our way to be clear about how unclear this forecasting business is (currently, anyway), settling on a number seems premature. If we want to disagree with experts, we should first be able to indicate where they went wrong, and how, and why our method and data will let us do better.

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T02:19:44.831Z · LW · GW
  1. Why do you think the scale of the bias is unlikely to be more than a few decades?

  2. Many expert physicists declared flight by humans impossible (e.g. Kelvin). Historical examples of a key insight taking a discovery from "impossible" or distant to very near term seem to exist, so might AI be similar? (In such a case, the likelihood of AI by year X may be higher than experts say.)

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T01:58:45.518Z · LW · GW

The lack of expected utility estimates understates the case for working on FAI. Even if AGI is 100 years away or more, the safety issues might still be a top or very high priority (though estimates comparing this against the value of other existential risk reduction efforts would be needed to determine this). Surely once we realize the potential impact of AGI, we shouldn't put off working on safety concerns until it is dangerously near. Some mathematical problems and engineering issues have taken humans hundreds of years to resolve (and some, of course, are still open/unsolved), so we should start immediately regardless of how far out the estimate is (if there is no other imminent existential risk that takes precedence).

Edited to add: That said, I can see how introducing far future Fermi estimates at this stage could be problematic from an expository standpoint, given the intended audience.

Comment by Jeff_Alexander on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T01:47:23.040Z · LW · GW

Could you give three examples of "very specific questions about specific technologies", and perhaps one example of a dependency between two technologies and how it aids prediction?