Oren's Field Guide of Bad AGI Outcomes

post by Oren Montano (oren-montano) · 2022-09-26T04:06:46.057Z · LW · GW · 0 comments

This field guide is my attempt to collect my observations and conceptual ideas regarding artificial general intelligence (AGI): its place in our current moment of AI history, and designs for AGI that might just get us out of our current exposure to existential risk from technology.

I am writing it as a series of topical papers, which I will link to from here as they are completed.

Let's begin with the most basic rules regarding general intelligence and safety:
Chapter 1: Here Be Dragons [LW · GW]
Chapter 2: On Generality [LW · GW]

More to come...
