Vignettes Workshop (AI Impacts)

post by Daniel Kokotajlo (daniel-kokotajlo) · 2021-06-15T12:05:38.516Z · 3 comments

Contents

  Plan
  Date, Time, Location
    Facebook event
  FAQ
3 comments

AI Impacts is organizing an online gathering to write down how AI will go down! For more details, see this announcement, or read on.

Plan

1. Try to write plausible future histories of the world, focusing on AI-relevant features. (“Vignettes.”)
2. Read each other's vignettes and critique the implausible bits: "Wouldn't the US government do something at that point?" "You say twenty nations promise each other not to build agent AI; could you say more about why and how?"
3. Amend and repeat.
4. Better understand your own views about how the development of advanced AI may go down.
(5. Maybe add your vignette to our collection.)

This event will happen over two days, so you can come on Friday if this counts as work for you, on Saturday if it counts as play, and on both if you are keen. RSVPing for particular days is somewhat helpful; let us know in the comments.

Date, Time, Location

The event will happen on Friday the 25th of June and Saturday the 26th.

It’ll go from 10am (California time) until probably around 4pm both days.

It will take place online, in the LessWrong Walled Garden. Here are the links to attend:
Friday
Saturday

Facebook event

FAQ

> Do I need literary merit or creativity?
No.

> Do I need to have realistic views about the future?
No, the idea is to get down what you have and improve it.

> Do I need to write stories?
Nah, you can just critique them if you want.

> What will this actually look like?
We’ll meet up online, discuss the project and answer questions, and then spend chunks of time (online or offline) writing and/or critiquing vignettes, interspersed with chatting together.

> Have you done this before? Can I see examples?
Yes, on a small scale. See here for some resulting vignettes. We thought it was fun and interesting.

> Any advice on how to get started?
We have lots! We can give it in the comments here, or on the day itself; just ask. You may be interested in this random future generator.


This event is co-organized by Katja Grace and Daniel Kokotajlo. Thanks to everyone who participated in the trial Vignettes Day months ago. Thanks to John Salvatier for giving us the idea. This work is supported by the Center on Long-Term Risk, my employer.

3 comments


comment by adamShimi · 2021-06-15T13:51:44.859Z

Already told you yesterday, but great idea! I'll definitely be a part of it, and will try to bring some people with me.

comment by Donald Hobson (donald-hobson) · 2021-06-25T14:02:58.680Z

Vignette.

The next task to fall to narrow AI is adversarial attacks against humans. Virulent memes and convincing ideologies become easy to generate on demand. A small number of people might see what is happening and try to shield themselves from dangerous ideas. They might even develop tools that auto-filter web content. Most of society becomes increasingly ideologized, with more decisions being made on political rather than practical grounds. Educational and research institutions fill with ideologues crowding out real research. There are some wars. The lines of division are between people and their neighbours, so the wars are small-scale civil wars.

Researchers have been replaced with people parroting the party line. Society is struggling to produce chips of the same quality as before. Depending on how far along renewables are, there may be an energy crisis. Ideologies targeted at baseline humans are no longer as appealing. The people who first developed the ideology-generating AI didn't share it widely, so the tech to AI-generate new ideologies is lost.

The clear scientific thinking needed for major breakthroughs has been lost, but people can still follow recipes, and make rare minor technical improvements to some things. Gradually, ideological immunity develops. The beliefs are still crazy by a truth-tracking standard, but they are crazy beliefs that imply relatively non-detrimental actions. Many years of high but stagnant tech pass, until the culture is ready to re-embrace scientific thought.

comment by Aprillion (Peter Hozák) (Aprillion) · 2021-06-27T12:02:03.459Z

Oh, I didn't realize this event was yesterday; I wrote an AI-safety-inspired short story independently 😅 If anyone wishes to comment, feel free to leave me a GitHub issue:

https://peter.hozak.info/fiction/heat_death/prologue