LessWrong 2.0 Reader

Effectively Communicating with DC Policymakers
PolicyTakes · 2025-03-17T22:11:56.197Z · comments (0)
[link] Sentinel's Global Risks Weekly Roundup #12/2025: Famine in Gaza, H7N9 outbreak, US geopolitical leadership weakening.
NunoSempere (Radamantis) · 2025-03-24T16:46:51.490Z · comments (0)
Counting Objections to Housing
jefftk (jkaufman) · 2025-03-16T18:20:06.898Z · comments (7)
Open Challenges in Representation Engineering
Jan Wehner · 2025-04-03T19:21:45.971Z · comments (0)
Understanding Trust - Overview Presentations
abramdemski · 2025-04-16T18:05:39.792Z · comments (0)
British and American Connotations
jefftk (jkaufman) · 2025-04-18T13:00:09.440Z · comments (4)
[link] Paper: Field-building and the epistemic culture of AI safety
peterslattery · 2025-03-15T12:30:14.088Z · comments (3)
[link] Taxonomy of possibility
dkl9 · 2025-04-09T04:24:09.439Z · comments (1)
LLMs may enable direct democracy at scale
Davey Morse (davey-morse) · 2025-03-14T22:51:13.384Z · comments (16)
Meridian Cambridge Visiting Researcher Programme: Turn AI safety ideas into funded projects in one week!
Meridian Cambridge · 2025-03-11T17:46:29.656Z · comments (0)
[question] Comprehensive up-to-date resources on the Chinese Communist Party's AI strategy, etc?
Mateusz Bagiński (mateusz-baginski) · 2025-04-18T04:58:32.037Z · answers+comments (4)
[Linkpost] The value of initiating a pursuit in temporal decision-making
Gunnar_Zarncke · 2025-03-27T21:47:05.123Z · comments (0)
Anti-memes: x-risk edition
WillPetillo · 2025-04-10T23:35:30.756Z · comments (0)
A case for peer-reviewed conspiracy theories
Sam G (sam-g) · 2025-03-08T20:41:56.161Z · comments (2)
Why I think AI will go poorly for humanity
Alek Westover (alek-westover) · 2025-03-19T15:52:18.373Z · comments (0)
X as phenomenon vs as policy, Goodhart, and the AB problem
Sniffnoy · 2025-03-27T04:32:18.848Z · comments (0)
The Jackpot Jinx (or why “Superintelligence Strategy” is wrong)
E.G. Blee-Goldman · 2025-03-10T19:18:56.380Z · comments (0)
[link] AI companies’ unmonitored internal AI use poses serious risks
sjadler · 2025-04-04T18:17:46.924Z · comments (2)
[link] METR’s preliminary evaluation of o3 and o4-mini
Christopher King (christopher-king) · 2025-04-16T20:23:00.285Z · comments (2)
Quarter Inch Cables are Devious
jefftk (jkaufman) · 2025-04-05T02:40:05.054Z · comments (4)
Convergence 2024 Impact Review
David_Kristoffersson · 2025-03-24T20:28:58.422Z · comments (0)
The Best Lecture Series on Every Subject
Rauno Arike (rauno-arike) · 2025-03-24T20:03:14.772Z · comments (1)
Probability Theory Fundamentals 102: Source of the Sample Space
Ape in the coat · 2025-03-23T17:23:57.790Z · comments (17)
Kamelo: A Rule-Based Constructed Language for Universal, Logical Communication
Saif Khan (saif-khan) · 2025-04-16T18:44:00.139Z · comments (7)
Efficiency as a 2-place word
Adam Zerner (adamzerner) · 2025-03-31T01:17:52.944Z · comments (2)
[link] Telescoping
za3k (lispalien) · 2025-04-16T17:05:52.392Z · comments (1)
Can I learn language faster? Or, perhaps, can I memorize the foreign words and recall them faster?
jmh · 2025-04-11T00:01:25.530Z · comments (6)
Attending LessOnline
jefftk (jkaufman) · 2025-04-02T11:10:06.301Z · comments (0)
Janet must die
Shmi (shminux) · 2025-03-19T20:35:09.768Z · comments (3)
[link] Scaling the NAO's Stealth Pathogen Early-Warning System
jefftk (jkaufman) · 2025-04-02T13:28:51.034Z · comments (0)
Calling Bullshit - the Cheatsheet
Niklas Lehmann · 2025-04-12T11:43:23.822Z · comments (3)
Quick thoughts on the difficulty of widely conveying a non-stereotyped position
Sniffnoy · 2025-03-27T07:30:41.353Z · comments (0)
Workflow vs interface vs implementation
Sniffnoy · 2025-03-27T07:38:49.109Z · comments (0)
[link] Announcing Progress Conference 2025
jasoncrawford · 2025-04-17T17:12:44.191Z · comments (0)
Crash scenario 1: Rapidly mobilise for a 2025 AI crash
Remmelt (remmelt-ellen) · 2025-04-11T06:54:47.974Z · comments (4)
What alignment-relevant abilities might Terence Tao lack?
Towards_Keeperhood (Simon Skade) · 2025-04-07T19:44:18.620Z · comments (2)
How Do We Fix the Education Crisis?
James Camacho (james-camacho) · 2025-03-08T02:59:32.664Z · comments (4)
ISBN Visualization
eyesack · 2025-04-01T05:55:56.526Z · comments (0)
FW25 Color Stats
sarahconstantin · 2025-03-25T18:40:03.145Z · comments (1)
Learning about AI regulation should be easier
mfg (Magnus Gjerde) · 2025-03-24T19:22:33.824Z · comments (0)
Moonlight Reflected
Jacob Falkovich (Jacobian) · 2025-04-07T15:35:11.708Z · comments (0)
[link] Delicious Boy Slop - Boring Diet, Effortless Weightloss
sapphire (deluks917) · 2025-03-24T15:01:58.355Z · comments (8)
[link] A response to OpenAI’s “How we think about safety and alignment”
Harlan · 2025-03-31T20:58:31.901Z · comments (0)
What does Yann LeCun think about AGI? A summary of his talk, "Mathematical Obstacles on the Way to Human-Level AI"
Adam Jones (domdomegg) · 2025-04-05T12:21:25.024Z · comments (0)
The world according to ChatGPT
Richard_Kennaway · 2025-04-07T13:44:43.781Z · comments (0)
[link] Human-level is not the limit
Vishakha (vishakha-agrawal) · 2025-04-16T08:33:15.498Z · comments (2)
[link] Can LLM-based models do model-based planning?
jylin04 · 2025-04-16T12:38:00.793Z · comments (1)
Theories of Impact for Causality in AI Safety
alexisbellot (alexis-1) · 2025-04-11T20:16:37.571Z · comments (1)
[link] Seeking feedback on "MAD Chairs: A new tool to evaluate AI"
Chris Santos-Lang (chris-santos-lang) · 2025-04-02T03:04:43.182Z · comments (0)