Posts

Sustainability of Digital Life Form Societies 2024-07-19T13:59:13.973Z
Proposing the Post-Singularity Symbiotic Researches 2024-06-20T04:05:38.644Z
Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios 2024-01-03T23:46:46.924Z

Comments

Comment by Hiroshi Yamakawa (hiroshi-yamakawa) on How can we promote AI alignment in Japan? · 2024-09-12T02:27:31.136Z · LW · GW

Building on the discussion above, I have considered why Japan tends to be passive in AI X-Risk discussions.

Cultural Factors

  • Culture aiming for coexistence and co-prosperity with AI: Influenced by polytheistic worldviews and AI-friendly anime, there is an optimistic tendency to view AI as a cooperative entity rather than an adversary, leading to an underestimation of risks.
  • Suppression of risk identification due to "sontaku" (anticipatory obedience) culture: The tendency to refrain from dissent by anticipating superiors' or organizations' intentions hinders X-Risk discussions.
  • Preference for contextual approaches over abstract discussions: A preference for discussing concrete situations (e.g., specific regulations) makes it difficult to engage with abstract, strategic-level X-Risk discussions.
  • Agile governance: An emphasis on flexible, iterative responses often delays measures against long-term X-Risks.

Cognitive and Psychological Factors

  • Lack of awareness regarding AGI feasibility: Insufficient understanding of the pace of AI progress and its potential impact.
  • Psychological barrier posed by excessively large risks: The sheer scale of X-Risks makes it difficult to perceive them as realistic, actionable problems.

International Factors

  • Language barrier: Access to AI X-Risk discussions is limited because they are conducted primarily in English.
  • Low expectations: Japan's limited presence in both AI technology and risk strategy leads the international community to expect little contribution from it.