LessWrong 2.0 Reader


next page (older posts) →

Making a conservative case for alignment
Cameron Berg (cameron-berg) · 2024-11-15T18:55:40.864Z · comments (7)
The Third Fundamental Question
Screwtape · 2024-11-15T04:01:33.770Z · comments (3)
Ayn Rand’s model of “living money”; and an upside of burnout
AnnaSalamon · 2024-11-16T02:59:07.368Z · comments (1)
Win/continue/lose scenarios and execute/replace/audit protocols
Buck · 2024-11-15T15:47:24.868Z · comments (1)
[link] College technical AI safety hackathon retrospective - Georgia Tech
yix (Yixiong Hao) · 2024-11-15T00:22:53.159Z · comments (2)
OpenAI Email Archives (from Musk v. Altman)
habryka (habryka4) · 2024-11-16T06:38:03.937Z · comments (0)
[link] A Theory of Equilibrium in the Offense-Defense Balance
Maxwell Tabarrok (maxwell-tabarrok) · 2024-11-15T13:51:33.376Z · comments (2)
Boston Secular Solstice 2024: Call for Singers and Musicians
jefftk (jkaufman) · 2024-11-15T13:50:07.827Z · comments (0)
Dance Differentiation
jefftk (jkaufman) · 2024-11-15T02:30:07.694Z · comments (0)
Using Dangerous AI, But Safely?
habryka (habryka4) · 2024-11-16T04:29:20.914Z · comments (2)
Fundamental Uncertainty: Epilogue
Gordon Seidoh Worley (gworley) · 2024-11-16T00:57:48.823Z · comments (0)
[link] An Uncanny Moat
Adam Newgas (BorisTheBrave) · 2024-11-15T11:39:15.165Z · comments (0)
[question] What are some positive developments in AI safety in 2024?
Satron · 2024-11-15T10:32:39.541Z · answers+comments (0)
If I care about measure, choices have additional burden (+AI generated LW-comments)
avturchin · 2024-11-15T10:27:15.212Z · comments (9)
[link] Proposing the Conditional AI Safety Treaty (linkpost TIME)
otto.barten (otto-barten) · 2024-11-15T13:59:01.050Z · comments (4)
What are Emotions?
Myles H (zarsou9) · 2024-11-15T04:20:27.388Z · comments (5)
Private Capabilities, Public Alignment: De-escalating Without Disadvantage
wassname · 2024-11-16T07:26:58.052Z · comments (0)
Breaking beliefs about saving the world
Oxidize · 2024-11-15T00:46:03.693Z · comments (0)
Antonym Heads Predict Semantic Opposites in Language Models
Jake Ward (jake-ward) · 2024-11-15T15:32:14.102Z · comments (0)
The Case For Giving To The Shrimp Welfare Project
omnizoid · 2024-11-15T16:03:57.712Z · comments (1)