Japan AI Alignment Conference Postmortem

post by Chris Scammell (chris-scammell), Katrina Joslin · 2023-04-20T10:58:34.065Z · LW · GW · 8 comments

Contents

  The goal
  What went well?
  What could have been better?
  Operations notes
  Summary and next steps

The goal

Conjecture collaborated with Araya to host a two-day AI Safety conference in Japan, the first Japan AI Alignment Conference [LW · GW] (“JAC2023”). Our aim was to put together a small 30-40 person event to generate excitement around alignment for researchers in Japan and fuel new ideas for research topics. Wired Japan covered the event and interviewed Ryota Kanai (CEO of ARAYA), who co-organized it with us, here (original in JP). 

The conference agenda was broken into four sections designed to progress deeper into alignment as the weekend went on (full agenda available here).

While AI Safety is a discussion subject in Japan, AI alignment ideas have received very little attention. We organized the conference because we were optimistic about the reception to alignment ideas in Japan, having found on previous trips to Japan that researchers there were receptive and interested in learning more. In the best case, we hoped the conference could plant seeds for an organic AI alignment conversation to start in Japan. In the median case, we hoped to meet 2-3 sharp researchers who were eager to work directly on the alignment problem and contribute new ideas to the field. 

Now that the conference is over, we reflect below on how successful we were in raising awareness of alignment issues in Japan and fostering new research directions. 

What went well?

By the aims above, the event was a success. 

We had a total of 65 participants, including 21 from the West, 27 from Japan, and 17 online attendees. We were pleasantly surprised by the amount of interest generated by the event, and had to turn down several participants as we reached capacity. We are grateful to LTFF for having supported the event via a grant, which allowed us to cover event costs and reimburse travel and accommodation for some participants who would not otherwise have come. 

While it is too early to know whether or not the conference had a lasting impact, there seems to be some traction. CEA organizers Anneke Pogarell and Moon Nagai and other conference participants created the AI Alignment Japan Slack channel, which has nearly 150 members. Some participants have begun working on translating alignment-related texts into Japanese. Others have begun to share more alignment-related content on social media, or indicated that they are discussing the subject with their organizations. Some participants are planning to apply for grant funding to continue independent research. Conjecture is in talks with two researchers interested in pursuing research projects we think are helpful, and ARAYA has hired at least one researcher to continue working on alignment full-time.

As for the event itself, we conducted a survey after the event and found that 91% of respondents would recommend the conference to a friend, and that overall participant satisfaction was high. The "networking" aspect of the conference was rated as the most valuable component, but all other sections received a majority score of 4 out of 5, indicating that the content was received positively. Nearly all respondents from Japan indicated their knowledge of alignment had improved from the event. When asked how the conference had impacted their thoughts on the subject, the majority expressed a sense of urgency and concern about the concept of AI alignment, and were motivated to direct their research towards solving this problem. Western participants tended to rate the conference as less helpful, with one noting that it was not helpful at all. Anecdotally, some of the Western participants appreciated having a longer opportunity to speak with each other.

In terms of operations, we are happy with how the event went. The agenda progressed as we had hoped. While in retrospect we'd try to fine-tune each of the sections to better meet our aims, the four-part structure seemed good enough to build off for future events. The venue was received well, and participants appreciated the social events around the conference. This was a good update for Conjecture on our ability to put together larger and more complex events than we've hosted in the past.

What could have been better?

We made a few mistakes with the event which we note below so that we and others can learn from them. We also note some complexities that we encountered that aren't mistakes, but challenges that others may encounter in hosting similar events. In particular, multiple cross-cultural differences made it difficult to communicate AI alignment ideas to a Japanese audience, and we'd expect that anyone doing field building or policy work outside Western contexts may encounter similarly-shaped difficulties. 

Operations notes

For those who are curious about how we organized the event, we would like to share some additional operational notes. 

It took us about three months to put this event together, with about 50% of one operations employee’s bandwidth and 10% of another’s, plus some researcher time to gather feedback. The event also cost roughly two days of attendee time per participant, including some high-context alignment researchers whose opportunity cost is particularly high (with extra cost from international travel and any associated time off). 

We collaborated closely with Araya for those three months, meeting once a week for an hour. Our first step was to define the conference goal. Once that was established, we applied for funding from the Long Term Future Fund, which required us to estimate the costs and develop a budget from the outset (happy to share more detailed notes with anyone curious). 

We also had to carefully choose a date and time that would be convenient for as many people as possible. As noted above, we could have done a bit better here. 

With the date and venue secured, we built a website (which allowed us to signpost the event and made it easier to coordinate around) and drafted a guest list. We then sent out our initial invitations, hoping that we’d hear back from LTFF with enough time to let participants know whether we could reimburse their travel expenses.

We finalized the operational details last: equipment, refreshments, and accommodations. Since the event was international, some of this required more complicated logistics than expected. 

To keep us on track, we developed a timeline with deadlines for each planning component. We stayed mostly on schedule, and it helped to have set expectations in advance so we could push ourselves to work faster when things fell behind. 

Summary and next steps

The event itself was a success, with high attendance, positive feedback, and increased awareness of AI alignment in Japanese organizations, leading to a few new research collaborations. While the majority of attendees found the content accessible and interesting, some had difficulty understanding alignment ideas due to differences in language and ontology. To address this, our future events will use less technical language and we will rework the history-to-alignment slides. Networking was the most valuable component, though it offered little benefit to those already familiar with the field. We plan to record future events for public sharing and would like to develop more concrete research proposals by giving direction to smaller breakout groups. We are delighted that ARAYA has made one full-time hire dedicated to alignment research, and several independent researchers have expressed interest in applying for funding. 

At the moment, we think the evidence presented above is consistent with two worlds:

World A: the conference fails to kickstart an ongoing alignment conversation in Japan, and its impact fades.
World B: the conference seeds a lasting alignment conversation and new research in Japan.

The cost of the event is now roughly fixed (ongoing costs are de minimis), but the benefit has not yet played out enough to tell whether we are in World A or World B.

If World A, then it is probably not useful for Conjecture to host more events like this, though not conclusively so: the event may have failed to kickstart an ongoing conversation for reasons that others could avoid (by solving the issues highlighted in the previous section, hosting elsewhere, etc.). If World B, then it seems useful to hold more events like this in the future, though it is not clear whether Conjecture is the best-positioned org to do so.
 

8 comments

Comments sorted by top scores.

comment by Bill Benzon (bill-benzon) · 2023-04-20T21:33:04.020Z · LW(p) · GW(p)

A surface-level explanation is that Japan is quite techno-optimistic compared to the west, and has strong intuitions that AI will operate harmoniously with humans. A more nuanced explanation is that Buddhist- and Shinto-inspired axioms in Japanese thinking lead to the conclusion that superintelligence will be conscious and aligned by default.

YES. 

I've got some knowledge of Japanese popular culture. Robots, particularly anthropomorphic robots, have a strong presence in Japanese popular culture, one that is quite different from Western culture. You should get a book by Frederik Schodt, Inside the Robot Kingdom: Japan, Mechatronics and the Coming Robotopia. It's a bit old (1988), but it is excellent and has recently been reissued in a Kindle edition. Schodt knows Japanese popular culture quite well, as he has translated many manga, including Astro Boy and Ghost in the Shell. He talks about the Shinto influence and tells a story from the early days of industrial robotics: when a new robot was to be brought online, they'd perform a Shinto ceremony to welcome it to the team.

I've written a blog post about the Astro Boy stories, The Robot as Subaltern: Tezuka's Mighty Atom, where I point out that many of the stories are about civil rights for robots. Fear of rogue robots and AIs plays little role in those stories. I've also got a post, Who’s losing sleep at the prospect of AIs going rogue? As far as I can tell, not the Japanese, where I quote from an article by Joi Ito (former director of MIT's Media Lab) on why the Japanese do not fear robots.

As an exercise, you might want to compare the anime Ghost in the Shell with The Matrix, which derives style and motifs from the anime. The philosophical concerns of the two are very different. The central characters in Ghost are almost all cyborg to some extent. At the very least they've got sockets through which they can plug into the net, but some have a mostly artificial body. Humans are not dominated by AIs in the way they are in The Matrix.

I've written two essays about two manga by Osamu Tezuka, who has had enormous influence on Japanese popular culture. They are about two of the three manga in his early so-called Science Fiction sequence (from about 1950). Each, in a way, is about alignment. Dr. Tezuka’s Ontology Laboratory and the Discovery of Japan runs through an extensive ontology from insects to space aliens while Tezuka’s Metropolis: A Modern Japanese Fable about Art and the Cosmos turns on the difference between electro-mechanical robots and artificial beings.

Replies from: chris-scammell
comment by Chris Scammell (chris-scammell) · 2023-04-21T09:24:51.731Z · LW(p) · GW(p)

Thanks for the helpful context! We had intuitions in this direction, but it's nice to substantiate them with these examples. Do you speak any Japanese, and have you considered joining the Japan AI Alignment Slack channel? You may have a useful perspective to deconfuse conversations there if/when ontology gaps arise.

Replies from: bill-benzon
comment by Bill Benzon (bill-benzon) · 2023-04-21T17:22:50.214Z · LW(p) · GW(p)

Thanks. I don't speak Japanese. I'll take a look at the slack channel.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-26T17:23:11.987Z · LW(p) · GW(p)

Speaking of connecting with potential allies in other countries, I feel like I haven't heard much talk of AI Alignment education/recruitment focused on India. Is that on someone's roadmap?

comment by M. Y. Zuo · 2023-04-20T16:07:43.556Z · LW(p) · GW(p)

Thanks for the summary. It sounds like it would have been interesting for more folks to participate. One thing that I don't quite understand is:

We also learned too late into planning that another AI-related conference had been organized for the same weekend, which reduced Japanese participation.

Is the AI community in Japan really so large that this would have been missed by the ARAYA organizers in their initial inquiries?

Or was it more of a communications problem, where they knew but it wasn't shared in time?

Replies from: chris-scammell
comment by Chris Scammell (chris-scammell) · 2023-04-21T08:21:53.560Z · LW(p) · GW(p)

Thanks for the question. "Conference" might be the wrong word. 

RIKEN, one of the top research institutions in Japan, held an (AFAIK) whole-company event that same weekend. My understanding is that ARAYA did a public sweep of events but didn't learn about this conflict until a few weeks after the date had been set and we had sent out invitations. While some RIKEN members attended our conference, others who expressed interest declined ("I would have attended but..."). Asking key participants about the date ahead of the conference probably could have avoided this.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-21T12:29:47.125Z · LW(p) · GW(p)

Ah, that is a tough problem to resolve. Unless they have contacts within RIKEN willing to share future scheduling, that is probably not possible to fix.

By the way, where was this event hosted? Because your point about the quality of audio-visual facilities, or lack thereof, suggests it wasn't in a typical conference venue.

Replies from: chris-scammell
comment by Chris Scammell (chris-scammell) · 2023-04-21T13:38:29.936Z · LW(p) · GW(p)

It was held at a standard conference centre in Kudanshita. This one is completely on me/Katrina as the ops team! We bought our own equipment at the last minute and should instead have used the venue's, or bought better equipment.