LessWrong 2.0 Reader

Turing-Test-Passing AI implies Aligned AI
Roko · 2024-12-31T19:59:27.917Z · comments (28)
Making LLMs safer is more intuitive than you think: How Common Sense and Diversity Improve AI Alignment
Jeba Sania (jeba-sania) · 2024-12-29T19:27:35.685Z · comments (0)
Teaching Claude to Meditate
Gordon Seidoh Worley (gworley) · 2024-12-29T22:27:44.657Z · comments (4)
Emotional Superrationality
nullproxy · 2025-01-02T22:54:53.303Z · comments (4)
The AI Agent Revolution: Beyond the Hype of 2025
DimaG (di-wally-ga) · 2025-01-02T18:55:22.824Z · comments (1)
Morality Is Still Demanding
utilistrutil · 2024-12-29T00:33:40.471Z · comments (2)
[link] Merry Sciencemas: A Rat Solstice Retrospective
leebriskCyrano · 2025-01-01T01:08:36.433Z · comments (0)
The Engineering Argument Fallacy: Why Technological Success Doesn't Validate Physics
Wenitte Apiou (wenitte-apiou) · 2024-12-28T00:49:53.300Z · comments (5)
AI Alignment, and where we stand.
afeller08 · 2024-12-29T14:08:47.276Z · comments (0)
Rejecting Anthropomorphic Bias: Addressing Fears of AGI and Transformation
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:48:47.583Z · comments (1)
So you want to be a witch
lucid_levi_ackerman · 2024-12-31T04:31:52.196Z · comments (3)
The Misconception of AGI as an Existential Threat: A Reassessment
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:39:57.780Z · comments (0)