LessWrong 2.0 Reader

[question] Have LLMs Generated Novel Insights?
abramdemski · 2025-02-23T18:22:12.763Z · answers+comments (15)
[link] New Report: Multi-Agent Risks from Advanced AI
Lewis Hammond (lewis-hammond-1) · 2025-02-23T00:32:29.534Z · comments (0)
Reflections on the state of the race to superintelligence, February 2025
Mitchell_Porter · 2025-02-23T13:58:07.663Z · comments (7)
Test of the Bene Gesserit
lsusr · 2025-02-23T11:51:10.279Z · comments (1)
Judgements: Merging Prediction & Evidence
abramdemski · 2025-02-23T19:35:51.488Z · comments (0)
List of most interesting ideas I encountered in my life, ranked
Lucien (lucien) · 2025-02-23T12:36:48.158Z · comments (3)
Deep sparse autoencoders yield interpretable features too
Armaan A. Abraham (armaanabraham) · 2025-02-23T05:46:59.189Z · comments (0)
The case for corporal punishment
Yair Halberstadt (yair-halberstadt) · 2025-02-23T15:05:28.149Z · comments (1)
[link] The Geometry of Linear Regression versus PCA
criticalpoints · 2025-02-23T21:01:33.415Z · comments (2)
Intelligence as Privilege Escalation
Cole Wyeth (Amyr) · 2025-02-23T19:31:27.604Z · comments (0)
The GDM AGI Safety+Alignment Team is Hiring for Applied Interpretability Research
Arthur Conmy (arthur-conmy) · 2025-02-24T02:17:12.991Z · comments (0)
Moral gauge theory: A speculative suggestion for AI alignment
James Diacoumis (james-diacoumis) · 2025-02-23T11:42:31.083Z · comments (2)
[question] Does human (mis)alignment pose a significant and imminent existential threat?
jr · 2025-02-23T10:03:40.269Z · answers+comments (2)
Poll on AI opinions.
Niclas Kupper (niclas-kupper) · 2025-02-23T22:39:09.027Z · comments (0)