LessWrong 2.0 Reader

Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI
Kaj_Sotala · 2025-04-15T15:56:19.466Z · comments (43)
AI-enabled coups: a small group could use AI to seize power
Tom Davidson (tom-davidson-1) · 2025-04-16T16:51:29.561Z · comments (16)
Ctrl-Z: Controlling AI Agents via Resampling
Aryan Bhatt (abhatt349) · 2025-04-16T16:21:23.781Z · comments (0)
Three Months In, Evaluating Three Rationalist Cases for Trump
Arjun Panickssery (arjun-panickssery) · 2025-04-18T08:27:27.257Z · comments (10)
[link] ASI existential risk: Reconsidering Alignment as a Goal
habryka (habryka4) · 2025-04-15T19:57:42.547Z · comments (14)
Training AGI in Secret would be Unsafe and Unethical
Daniel Kokotajlo (daniel-kokotajlo) · 2025-04-18T12:27:35.795Z · comments (2)
To be legible, evidence of misalignment probably has to be behavioral
ryan_greenblatt · 2025-04-15T18:14:53.022Z · comments (11)
Map of AI Safety v2
Bryce Robertson (bryceerobertson) · 2025-04-15T13:04:40.993Z · comments (4)
OpenAI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing
Zvi · 2025-04-15T15:30:02.518Z · comments (3)
A Dissent on Honesty
eva_ · 2025-04-15T02:43:44.163Z · comments (29)
ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs
denkenberger · 2025-04-16T21:47:40.687Z · comments (8)
D&D.Sci Tax Day: Adventurers and Assessments
aphyer · 2025-04-15T23:43:14.733Z · comments (8)
Handling schemers if shutdown is not an option
Buck · 2025-04-18T14:39:18.609Z · comments (0)
What Makes an AI Startup "Net Positive" for Safety?
jacquesthibs (jacques-thibodeau) · 2025-04-18T20:33:22.682Z · comments (7)
Can SAE steering reveal sandbagging?
jordine · 2025-04-15T12:33:41.264Z · comments (3)
[link] The Russell Conjugation Illuminator
TimmyM (timmym) · 2025-04-17T19:33:06.924Z · comments (13)
OpenAI rewrote its Preparedness Framework
Zach Stein-Perlman · 2025-04-15T20:00:50.614Z · comments (1)
AI #112: Release the Everything
Zvi · 2025-04-17T15:10:02.029Z · comments (6)
[link] Understanding and overcoming AGI apathy
Dhruv Sumathi (dhruv-sumathi) · 2025-04-17T01:04:53.853Z · comments (1)
Scaffolding Skills
Screwtape · 2025-04-18T17:39:25.634Z · comments (0)
o3 Will Use Its Tools For You
Zvi · 2025-04-18T21:20:02.566Z · comments (1)
GPT-4.1 Is a Mini Upgrade
Zvi · 2025-04-16T19:00:03.181Z · comments (6)
[link] Nucleic Acid Observatory Updates, April 2025
jefftk (jkaufman) · 2025-04-15T18:58:29.839Z · comments (0)
Prodromes and Biomarkers in Chronic Disease
sarahconstantin · 2025-04-16T21:30:02.978Z · comments (2)
Understanding Trust: Overview Presentations
abramdemski · 2025-04-16T18:08:31.064Z · comments (0)
[link] Inside OpenAI's Controversial Plan to Abandon its Nonprofit Roots
garrison · 2025-04-18T18:46:57.310Z · comments (0)
GPT-4.5 is Cognitive Empathy, Sonnet 3.5 is Affective Empathy
Jack (jack-3) · 2025-04-16T19:12:38.789Z · comments (2)
[link] Top OpenAI Catastrophic Risk Official Steps Down Abruptly
garrison · 2025-04-16T16:04:28.115Z · comments (0)
[link] The real reason AI benchmarks haven’t reflected economic impacts
Noosphere89 (sharmake-farah) · 2025-04-15T13:44:06.225Z · comments (0)
[link] METR’s preliminary evaluation of o3 and o4-mini
Christopher King (christopher-king) · 2025-04-16T20:23:00.285Z · comments (2)
Understanding Trust - Overview Presentations
abramdemski · 2025-04-16T18:05:39.792Z · comments (0)
[question] Comprehensive up-to-date resources on the Chinese Communist Party's AI strategy, etc?
Mateusz Bagiński (mateusz-baginski) · 2025-04-18T04:58:32.037Z · answers+comments (2)
Kamelo: A Rule-Based Constructed Language for Universal, Logical Communication
Saif Khan (saif-khan) · 2025-04-16T18:44:00.139Z · comments (7)
[link] Telescoping
za3k (lispalien) · 2025-04-16T17:05:52.392Z · comments (1)
[link] Announcing Progress Conference 2025
jasoncrawford · 2025-04-17T17:12:44.191Z · comments (0)
[link] Should AIs be Encouraged to Cooperate?
PeterMcCluskey · 2025-04-15T21:57:06.096Z · comments (2)
A Talmudic Rationalist Cautionary Tale
Noah Birnbaum (daniel-birnbaum) · 2025-04-15T04:11:16.972Z · comments (1)
British and American Connotations
jefftk (jkaufman) · 2025-04-18T13:00:09.440Z · comments (2)
[link] Can LLM-based models do model-based planning?
jylin04 · 2025-04-16T12:38:00.793Z · comments (1)
Host Keys and SSHing to EC2
jefftk (jkaufman) · 2025-04-17T15:10:29.139Z · comments (6)
The Mirror Problem in AI: Why Language Models Say Whatever You Want
RobT · 2025-04-15T18:40:02.793Z · comments (2)
Risers for Foot Percussion
jefftk (jkaufman) · 2025-04-15T11:10:08.577Z · comments (1)
What empirical research directions has Eliezer commented positively on?
Chris_Leong · 2025-04-15T08:53:41.677Z · comments (1)
[Rockville] Rationalist Shabbat
maia · 2025-04-18T15:38:30.650Z · comments (0)
[link] Conditional Forecasting as Model Parameterization
Molly (hickman-santini) · 2025-04-18T02:35:42.110Z · comments (0)
[link] Human-level is not the limit
Vishakha (vishakha-agrawal) · 2025-04-16T08:33:15.498Z · comments (2)
Mass Exposure Paradox
max-sixty · 2025-04-16T20:18:00.492Z · comments (0)
0 Motivation Mapping through Information Theory
P. João (gabriel-brito) · 2025-04-18T00:53:34.360Z · comments (0)
[link] Nihilism Is Not Enough By Peter Thiel
shawkisukkar · 2025-04-15T00:13:01.375Z · comments (4)
$500 bounty for best short-form fiction about our near future world; $100 for recommending winning piece: new “Art of Near Future World” quarterly art project
Ramon Gonzalez (ramon-gonzalez) · 2025-04-15T00:46:10.637Z · comments (0)