aisafety.info, the Table of Contents

post by Charbel-Raphaël (charbel-raphael-segerie) · 2023-12-31T13:57:15.916Z · LW · GW · 1 comment

Here is a list of Q&As from https://aisafety.info/. When I discovered the site, I was impressed by the volume of material produced. However, the interface is optimized for beginners. The following table of contents is for readers who wish to navigate the various sections more freely; it was constructed by clustering the Q&As into subtopics. I'm not involved with aisafety.info; I just want to increase the visibility of the content they have produced by presenting it in a different way. They are also working on a new interface. This table can also be found at https://aisafety.info/toc/.

🆕 New to AI safety? Start here.

📘 Introduction to AI Safety

🧠 Introduction to ML

🤖 Types of AI

🚀 Takeoff & Intelligence explosion

📅 Timelines

❗ Types of Risks

🔍 What would an AGI be able to do?

🌋 Technical sources of misalignment

🎉 Current prosaic solutions

🗺️ Strategy

💭 Consciousness

❓ Not convinced? Explore the arguments.

🤨 Superintelligence is unlikely?

😌 Superintelligence won’t be a big change?

⚠️ Superintelligence won’t be risky?

🤔 Why not just…?

🧐 Isn't the real concern…

📜 I have certain philosophical beliefs, so this is not an issue

🔍 Want to understand the research? Dive deeper.

💻 Prosaic alignment

📝 Agent foundations

🏛️ Governance

🔬 Research Organisations

🤝 Want to help with AI safety? Get involved!

📌 General

📢 Outreach

🧪 Research

🏛️ Governance

🛠️ Ops & Meta

💵 Help financially

📚 Other resources

1 comment

comment by Martin Vlach (martin-vlach) · 2024-02-26T14:26:53.396Z · LW(p) · GW(p)

After a quick test, I find their chat interface prototype quite satisfying.