Talk: "AI Would Be A Lot Less Alarming If We Understood Agents"

post by johnswentworth · 2023-12-17T23:46:32.814Z · LW · GW · 3 comments

This is a link post for https://www.youtube.com/watch?v=ERJ8QEnGlF0


This is a linkpost for a talk I gave this past summer for the ALIFE conference. If you haven't heard of it before, ALIFE (short for "artificial life") is a subfield of biology which... well, here are some of the session titles from day 1 of the conference to give the gist:

... so you can see how this sort of crowd might be interested in AI alignment.

Rory Greig [LW · GW] and Simon McGregor definitely saw how such a crowd might be interested in AI alignment, so they organized an alignment workshop at the conference.

I gave this talk as part of that workshop. The stated goal of the talk was to "nerd-snipe ALIFE researchers into working on alignment-relevant questions of agency". It's pretty short (~20 minutes), and aims for a general energy of "hey here's some cool research hooks".

If you want to nerd-snipe technical researchers into thinking about alignment-relevant questions of agency, this talk is a short and relatively fun one to share.

Thank you to Rory and Simon for organizing, and thank you to Rory for getting the video posted publicly.

3 comments


comment by rorygreig (rorygreig100) · 2023-12-18T11:21:16.460Z · LW(p) · GW(p)

Thanks again John for giving this talk! I really enjoyed the talk at the time and was pleasantly surprised by the positive engagement from the audience. I'm also pleased that this turned into a resource that can be re-shared.

Replies from: Algon
comment by Algon · 2023-12-18T12:24:41.911Z · LW(p) · GW(p)

You wouldn't happen to have any recordings of a Q&A session after John's talk? I'd be interested to hear what interested the audience, what confused them, etc.

comment by S Benfield (steven-benfield) · 2023-12-18T21:34:02.465Z · LW(p) · GW(p)

I was CTO of a company called Agentis in 2006-2008. Agentis was founded by Michael Georgeff, who published the seminal paper on BDI agents. Agents backed by LLMs will be amazing. Unfortunately, AI will never quite reach our level of creativity and ability to form new insights. But in terms of compute and basic-level emotional states, it will be fairly remarkable. HMU if you want to discuss. steve@beselfevident.com