Superintelligence reading group

post by KatjaGrace · 2014-08-31T14:59:31.480Z

In just over two weeks I will be running an online reading group on Nick Bostrom's Superintelligence, on behalf of MIRI. It will be here on LessWrong. This is an advance warning, so you can get a copy and get ready for some stimulating discussion. MIRI's post, appended below, gives the details.

Added: At the bottom of this post is a list of the discussion posts so far.


Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you'd like to coordinate for quick-fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.


Posts in this sequence

Week 1: Past developments and present capabilities

Week 2: Forecasting AI

Week 3: AI and uploads

Week 4: Biological cognition, BCIs, organizations

Week 5: Forms of superintelligence

Week 6: Intelligence explosion kinetics

Week 7: Decisive strategic advantage

Week 8: Cognitive superpowers

Week 9: The orthogonality of intelligence and goals

Week 10: Instrumentally convergent goals

Week 11: The treacherous turn

Week 12: Malignant failure modes

Week 13: Capability control methods

Week 14: Motivation selection methods

Week 15: Oracles, genies and sovereigns

Week 16: Tool AIs

Week 17: Multipolar scenarios

Week 18: Life in an algorithmic economy

Week 19: Post-transition formation of a singleton

Week 20: The value-loading problem

Week 21: Value learning

Week 22: Emulation modulation and institution design

Week 23: Coherent extrapolated volition

Week 24: Morality models and "do what I mean"

Week 25: Components list for acquiring values

Week 26: Science and technology strategy

Week 27: Pathways and enablers

Week 28: Collaboration

Week 29: Crunch time

1 comment

comment by almostvoid · 2014-09-30T10:39:03.805Z

WBEs are a worry. They can be used to carry dangerous information which a normal [suppressed laughter] may recoil from. But worse, if this is carried off it may also attract sentient consciousness-awareness just like us. Frankenstein 2.0. Anyway, we've got 7 billion [6 too many] humans. Why would we want to do this? Space exploration by remote control, to get the human feel of alien environments. Again, my only worry is that this process-construct may become alive, and have its own ideas which are not in keeping with the very reason it was crafted. Or it may outsmart its creators. And if controlled by whatever means - insertion of compliant resonant mind-states - it could rebel and become a terrorist. We are mad enough as it is. Personally, as stated initially, this is not the best solution to AI.
