How can we promote AI alignment in Japan?

post by Shoka Kadoi (shoka-kadoi) · 2023-03-11T18:52:42.939Z · LW · GW · 2 comments

This is a question post.


Why don't we proceed with the discussion in Japanese and English, to begin with?
どのようにすれば日本でAIアライメントを普及させることができるだろう? (How can we spread AI alignment in Japan?)

Answers

answer by Harold · 2023-03-12T14:06:19.426Z · LW(p) · GW(p)

I have a developing hunch that the abstract framing of arguments for AI safety is unlikely to ever gain a foothold in Japan. The way forward here is a contextual framing of the same arguments. (Whether in English or Japanese is less and less relevant with machine translation.)

I’ve been a resident of Tokyo for twelve years, half of that as a NY lawyer in a Japanese international law firm. I’m also a founding member working with AI Safety 東京 and the Chair of the Tokyo rationality community [? · GW]. Shoka Kadoi, please express interest in our 勉強会 (study group).

As a lawyer engaged with AI safety, I often have conversations with the more abstract-minded members of our groups that reveal an intellectual acceptance but strong aesthetic distaste for the contextual nature of legal systems. (The primitives of legal systems are abstraction-resistant ideas like ‘reasonableness’.)

Aesthetic distaste for contextual primitives leads to abstract framing of problems. Abstract framing of AI safety issues tends to lead from standard AI premises to narrow conclusions that are often hard for contextual-minded people to follow. Conclusions like: we’ve found a very low (X percent) chance of some very specific bad outcome, and so we logically need to take urgent preventative action.

To generalize, Japan as a whole (and perhaps most of the world) does not approach problems abstractly. Contextual framing of AI safety issues tends to lead from standard AI premises to broad and easily accepted conclusions. Conclusions like: we’ve found a very high (Y percent) chance of social disruption, and we are urgently compelled to take information-gathering actions.

There’s obviously much more support needed for these framing claims. But you can see those essential differences in outcomes in the AI regulatory approaches of the EU and Japan, respectively. (The EU is targeting abstract AI issues like bias, adversarial attacks, and biometrics with specific legislation. Japan is instead attempting to develop an ‘agile governance’ approach to AI in order to keep up with “the speed and complexity of AI innovation”. In this case, Japan's approach seems wiser, to me.)

If the conclusions leading to existential risk are sound, both these framings should converge on similar actions and outcomes. Japan is a tough nut to crack. But having both framings active around the world would mobilize a significantly larger number of brains on the problem. Mobilizing all those brains in Japan is the course to chart now. 

comment by ColinRowat · 2023-03-12T22:14:04.269Z · LW(p) · GW(p)

I have less experience in Japan than Harold does, but would generally advocate a grounded approach to issues of AI safety and alignment, rather than an abstract one.  

I was perhaps most struck over the weekend that I did not speak to anyone who had actually been involved in developing or running safety-critical systems (aviation, nuclear energy, CBW...), on which lives depended.  This gave a lot of the conversations the flavour of a 'glass bead game'.

As Japan is famously risk-averse, it would seem to me - perhaps naively - that grounded arguments should land well here.

Replies from: harold-1
comment by Harold (harold-1) · 2023-03-13T00:38:54.053Z · LW(p) · GW(p)

I wholeheartedly agree, Colin. (I think we're saying the same thing--let me know where we may disagree.)

It's a daily challenge in my work to 'translate' what can sometimes seem like abstract nonsense into scenarios grounded in real context, and the reverse.

I want to add that a grounded, high-context decision process is slower (still wearing masks?) but significantly wiser (see the urbanism of Tokyo compared to any given US city).

comment by Max TK (the-sauce) · 2023-03-19T11:41:16.501Z · LW(p) · GW(p)

I am under the impression that the public attitude towards AI safety / alignment is about to change significantly.
Strategies aimed at informing parts of the public, which may have been pointless in the past (abstract risks etc.), may now become more successful: mainstream newspapers are beginning to write about AI risks, and people are beginning to be concerned. The abstract risks are becoming more concrete.

answer by Hiroshi Yamakawa · 2024-09-12T02:27:31.136Z · LW(p) · GW(p)

Taking the above discussion into account, I have considered the reasons why Japan tends to be passive in AI X-Risk discussions.

Cultural Factors

  • Culture aiming for coexistence and co-prosperity with AI: Influenced by polytheistic worldviews and AI-friendly anime, there's an optimistic tendency to view AI as a cooperative entity rather than an adversary, leading to underestimation of risks.
  • Suppression of risk identification due to "sontaku" (anticipatory obedience) culture: The tendency to refrain from dissent by anticipating superiors' or organizations' intentions hinders X-Risk discussions.
  • Preference for contextual approaches over abstract discussions: Favoring discussions grounded in specific situations (e.g., setting specific regulations) makes it difficult to engage in abstract X-Risk discussions at the strategic level.
  • Agile governance: Emphasizing flexibility in responses often leads to delayed measures against long-term X-Risks.

Cognitive and Psychological Factors

  • Lack of awareness regarding AGI feasibility: Insufficient understanding of AI technology's progress speed and potential impact.
  • Psychological barrier to excessively large risks: The enormous scale of X-Risks makes it challenging to perceive them as realistic problems.

International Factors

  • Language barrier: Access to AI X-Risk discussions is limited as they are primarily conducted in English.
  • Low expectations: Insufficient presence both technologically and in risk strategy leads to low expectations from the international community.

answer by M. Y. Zuo · 2023-03-11T21:55:43.911Z · LW(p) · GW(p)

One approach that may help would be to assess who are likely to be the major supporters and opponents of such efforts in the Japanese economic/political/academic/cultural/social spheres.

And who are likely to remain neutral with regard to promoting alignment.

A lot of the prior discussions on LW presume certain norms, standards, expectations, etc., that might not fully hold in the Japanese context.

I don't know enough about the similarities and differences to estimate, but there are probably such individuals who would be willing to contribute.

answer by trevor · 2023-03-11T20:07:05.297Z · LW(p) · GW(p)

Hello and welcome, Mr. Kadoi,

(I'm sorry that I don't speak Japanese. I asked ChatGPT to translate this into Japanese)

People in the USA and the United Kingdom are still debating this. Some people think that it is best to promote AI alignment and tell as many people as possible. Other people think that it will cause problems if everyone knows about AI alignment: there is a risk that more people will try to be first, and then more people will build AI quickly instead of safely.

Right now, everyone agrees that we should tell AI scientists and AI workers in Japan about AI alignment. I don't know what the best strategy is, but I think one good strategy is this: we should have Japanese AI scientists and AI workers in Japan go out and introduce AI alignment to other Japanese AI scientists and AI workers.

These are two really good posts about ways to introduce people to AI alignment (unfortunately they are in English). Here in the US and the UK, we wish we had had these things 10 years ago, when we started talking about AI alignment. The first is this post, which is the best thing to show to people to introduce them to AI alignment for the first time (it needs to be translated into Japanese): https://www.lesswrong.com/posts/hXHRNhFgCEFZhbejp/the-best-way-so-far-to-explain-ai-risk-the-precipice-p-137 [LW · GW]

The second is this post, which collects the lessons one man learned after talking to more than 100 academics and scientists and introducing them to AI safety for the first time. This post is meant to help people who will go out and talk about AI safety: https://forum.effectivealtruism.org/posts/kFufCHAmu7cwigH4B/lessons-learned-from-talking-to-greater-than-100-academics [EA · GW]

I think it's a good idea for more people to talk about AI alignment in Japanese, so that more conversations can be in Japanese instead of English.

日本語が話せなくて申し訳ありません。ChatGPTにこれを日本語に翻訳してもらいました。

アメリカとイギリスの人々は今でもこのことについて議論しています。ある人々は、AIアライメントを最大限に普及させ、できるだけ多くの人に伝えることが最善だと考えています。一方、他の人々は、AIアライメントについて誰もが知ることで問題が発生すると考えており、より多くの人々が最初になろうとしてリスクを抱え、そしてより多くの人々が安全ではなく、急いでAIを構築する可能性があると主張しています。

現時点では、AI科学者やAIワーカーが日本でAIアライメントについて知ることは、誰もが合意しています。最善の戦略はわかりませんが、良い戦略の1つは、日本のAI科学者やAIワーカーが他の日本のAI科学者やAIワーカーにAIアライメントを紹介することです。

これらは、人々にAIアライメントを紹介する方法についての非常に良い投稿です(残念ながら英語です)。私たちは、アメリカとイギリスで10年前にAIアライメントについて話し始めたときに、これらのものが欲しかったと思っています。最初のものは、初めてAIアライメントについて紹介する人々に最適な投稿であり、日本語に翻訳する必要があります。https://www.lesswrong.com/posts/hXHRNhFgCEFZhbejp/the-best-way-so-far-to-explain-ai-risk-the-precipice-p-137 [LW · GW]

2つ目は、ある男性が100人の学者や科学者と話して、彼らに初めてAI安全性を紹介した後に得た教訓についての投稿です。この投稿は、AI安全性について話す人々向けに作られています。https://forum.effectivealtruism.org/posts/kFufCHAmu7cwigH4B/lessons-learned-from-talking-to-greater-than-100-academics [EA · GW]

私は日本語でのAIアライメントについての話し合いがもっと広がることが良いアイデアだと思います。そうすれば、英語ではなく日本語での会話が増えることになります。

answer by Shoka Kadoi · 2024-09-10T13:42:26.426Z · LW(p) · GW(p)

Thank you all for your answers and comments! (I am sorry for the delay in answering, as I do not speak fluent English and asked DeepL to translate the following text for me).

Nearly a year and a half after this question was posted, organizations related to alignment, AI safety, and AI governance have been established in Japan, such as the AI Governance Association, AISI Japan (Japan AI Safety Institute), and ALIGN. These organizations respectively represent industry, government, and academia in Japan today.

At a symposium held yesterday to celebrate the establishment of ALIGN, representatives from each of these organizations expressed their desire to collaborate with each other. This is a welcome development for Japan, the world, and humanity's survival.

In particular, ALIGN, the organization representing Japanese academia, has brought together researchers from various fields, who discuss alignment from multiple angles, including X-risk, on Slack, mainly in Japanese. ALIGN also organizes webinars in both Japanese and English.

As for discussions in Japanese, there seems to be an expectation, as represented by some sci-fi animation works, that we can build an affinitive relationship with AI and robots. I am one of those people myself: coming from Japan's animistic and polytheistic cultural background, I have been asking the question, ‘How can AGI and humanity coexist and co-prosper?’, and I feel that many others are asking and thinking about it too.

In fact, within ALIGN, a research field called Post-Singularity Symbiosis (PSS) has been proposed by ALIGN board members Hiroshi Yamakawa, Yusuke Hayashi, and Yoshinori Okamoto. It is defined (in Japanese) as:

持続する超知能が支配的な影響力をもつシンギュラリティ後の世界において、人類が適応・存続を図りつつ、可能な限り現在の価値を維持しながら発展させる方策を多角的に探求する学際的かつ予防的な学問分野。

(An interdisciplinary and precautionary academic field that explores, from multiple angles, how humanity can adapt and survive in a post-Singularity world in which a persistent superintelligence holds dominant influence, while preserving and developing current values as far as possible.)

If anyone is interested in this field, we would be glad if you would translate the Japanese into your own language, as we usually do, read the article, and comment on it. Let's work together with AI to overcome the challenges of diversity and language barriers to dialogue.

皆さん、回答とコメントありがとうございました。(英語が流暢に話せず、回答が遅くなってしまい申し訳ありませんでした。Claudeに翻訳してもらい以下の文章を翻訳してもらいました。)

この質問から1年半近くの月日が経ち、日本でもALIGN, AISI Japan (Japan AI Safety Institute), AI Governance AssociationのようなアラインメントやAI Safety、AI ガバナンスに関連する組織が設立されました。これらは現在の日本において、産官学をそれぞれ代表するような関連組織といえます。

昨日、ALIGNの設立記念シンポジウムが行われ、それぞれの組織の代表から相互に連携していきたいという意向が示されました。これは日本にとっても世界にとっても、そして人類の存続にとっても喜ばしいことだと思います。

特に日本のアカデミアを代表する関連組織ALIGNでは、さまざまな分野の研究者が集まりX-riskを含む様々な角度からアラインメントに関する議論がSlack上で、しかも主に日本語で交わされるようになっています。また、ALIGNが開催するウェビナーでは、日本語と英語で展開されています。

日本語での議論としては、一部のSFアニメーション作品に代表されるように、AIやロボットと親和的な関係性を構築できるのではないかという期待があるように思います。私もまさにそのうちの1人ですが、日本のアニミズムや多神教的な文化的な背景から、「どのようにすればAGIと人類が共生共栄していけるだろうか?」と問いを立て、考えている人が多いように感じます。

実際にALIGNでは、

持続する超知能が支配的な影響力をもつシンギュラリティ後の世界において、人類が適応・存続を図りつつ、可能な限り現在の価値を維持しながら発展させる方策を多角的に探求する学際的かつ予防的な学問分野。

として、ポストシンギュラリティ共生学、Post-Singularity Symbiosis(PSS)という研究分野がALIGNの理事でもある山川宏と林 祐輔、そして岡本義則も加わった3名によって提案されています。もしこの分野に興味を持つ方がいらっしゃったら、ぜひ我々が普段やっているように日本語を自身の言語に翻訳して、記事を読んで、コメントをいただけると嬉しいです。多様性の課題と言語の障壁をAIと共に乗り越えて対話をしていきましょう。

2 comments


comment by Gordon Seidoh Worley (gworley) · 2023-03-14T04:14:24.311Z · LW(p) · GW(p)

How important is it to promote AI alignment in Japan? I ask this not to troll, but seriously. I've not heard of a lot of rapid progress towards transformative AI coming from Japan. Current progress seems to be coming out of the US. Are there a lot of folks in Japan working on things that could become AGI and don't engage with the existing AI alignment content enough to warrant a specific Japanese focus?

I've wondered the same about how important it is to spread certain ideas to other cultures/languages, not because I think it's not a nice thing to do, but because, given limited resources, it's unclear to me how much it will matter to the project of mitigating AI x-risks. Since it takes a lot of effort to bridge each culture gap, it seems worth having a sense of how likely we think it is to matter for Japanese, Russian, Chinese, etc., so we can choose how to deploy people to such projects.

Replies from: Chris_Leong
comment by Chris_Leong · 2023-03-19T05:24:08.424Z · LW(p) · GW(p)

I think it could be valuable if academics in Japan were less allergic to alignment than those in the West. Then, perhaps, we could reimport alignment ideas back into the US, since people are generally more open to listening to strange ideas from another culture. In any case, it sounds like the OP is in Japan, so they have more opportunity to promote alignment there than elsewhere.