Southern California FAI Workshop

post by Scott Garrabrant · 2014-04-20T08:55:19.467Z

This Saturday, April 26th, we will be holding a one-day FAI workshop in Southern California, modeled after MIRI's FAI workshops. We are a group of individuals who, aside from attending some past MIRI workshops, are in no way affiliated with MIRI. More specifically, we are a subset of the existing Los Angeles Less Wrong meetup group that has decided to start working on FAI research together.

The event will start at 10:00 AM, and the location will be:

USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536.

This first workshop will be open to anyone who would like to join us. If you are interested, please let us know in the comments or by private message. We plan to have more of these in the future, so if you are interested but unable to make this event, please let us know as well. You are welcome to decide to join at the last minute; if you do, please still comment here so we can give you the necessary phone numbers.

Our hope is to produce results that will be helpful for MIRI, and so we are starting off by going through the MIRI workshop publications. If you will be joining us, it would be nice if you read the papers linked to here, here, here, here, and here before Saturday. Reading all of these papers is not necessary, but please take a look at one or two of them to get an idea of what we will be doing.

Experience in artificial intelligence is not at all necessary, but experience in mathematics probably is. If you can follow the MIRI publications, you should be fine. Even if you are under-qualified, there is very little risk of holding anyone back or otherwise having a negative impact on the workshop. If you think you would enjoy the experience, go ahead and join us.

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflexively consistent goals, performing naturalistic induction, et cetera.

Food and refreshments will be provided for this event, courtesy of MIRI.

9 comments

comment by AlexMennen · 2014-04-20T16:48:26.125Z

It's great that you're doing this, but I have a nitpick: since the event isn't affiliated with MIRI, it might be better to title it a "Mini-FAI Workshop" or something like that, instead of "Mini-MIRI Workshop".

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-04-20T18:50:08.376Z

Changed. Thanks!

comment by Squark · 2014-04-20T17:01:40.377Z

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic.

Are these guidelines written somewhere? In particular, is there a policy for what sort of ideas are safe for publication? Assuming there is, is there a procedure for handling ideas which fall outside that category (e.g. submitting them to some sort of trusted mailing list)?

Replies from: abramdemski
comment by abramdemski · 2014-04-21T04:04:16.407Z

I don't believe so. The policy we will be following will attempt to line up with how it was explained at the workshop I attended. The official policy is to think hard about whether publication increases or decreases risk. The first approximation to this policy is to not publish information which is useful for AI generally, but to publish things which have specific application to FAI and don't help other approaches.

Replies from: Squark
comment by Squark · 2014-04-21T18:46:08.004Z

Hi Abram, thx for commenting!

I'm not sure this rule is sufficient. When you have a cool idea about AGI, there is strong emotional motivation to rationalize the notion that your idea decreases risk. "Think hard" might be a procedure that is not sufficiently trustworthy, since you're running on corrupted hardware.

comment by Schlega · 2014-04-23T21:09:00.370Z

I'm in the under-qualified but interested camp. I'll plan on coming.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-04-25T06:33:28.442Z

Great! See you there.

comment by Alexei · 2014-04-22T22:39:06.495Z

This is awesome! Please make a post about how it went, lessons learned, things discovered, etc...

Replies from: abramdemski
comment by abramdemski · 2014-04-26T02:28:58.764Z

Do you think it's plausible for you to try a similar thing in your location? :)