LessWrong 2.0 Reader

Arbital has been imported to LessWrong
RobertM (T3t) · 2025-02-20T00:47:33.983Z · comments (22)
Eliezer's Lost Alignment Articles / The Arbital Sequence
Ruby · 2025-02-20T00:48:10.338Z · comments (6)
Timaeus in 2024
Jesse Hoogland (jhoogland) · 2025-02-20T23:54:56.939Z · comments (1)
The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better
Thane Ruthenis · 2025-02-21T20:15:11.545Z · comments (5)
[link] The first RCT for GLP-1 drugs and alcoholism isn't what we hoped
dynomight · 2025-02-20T22:30:07.536Z · comments (3)
Alignment can be the ‘clean energy’ of AI
Cameron Berg (cameron-berg) · 2025-02-22T00:08:30.391Z · comments (0)
On OpenAI’s Model Spec 2.0
Zvi · 2025-02-21T14:10:06.827Z · comments (2)
AI #104: American State Capacity on the Brink
Zvi · 2025-02-20T14:50:06.375Z · comments (9)
ParaScope: Do Language Models Plan the Upcoming Paragraph?
NickyP (Nicky) · 2025-02-21T16:50:20.745Z · comments (0)
[link] The Takeoff Speeds Model Predicts We May Be Entering Crunch Time
johncrox · 2025-02-21T02:26:31.768Z · comments (0)
HPMOR Anniversary Guide
Screwtape · 2025-02-22T16:17:25.093Z · comments (0)
Longtermist Implications of Alien Space-Faring Civilizations - Introduction
Maxime Riché (maxime-riche) · 2025-02-21T12:08:42.403Z · comments (0)
Proselytizing
lsusr · 2025-02-22T11:54:12.740Z · comments (0)
The case for the death penalty
Yair Halberstadt (yair-halberstadt) · 2025-02-21T08:30:41.182Z · comments (45)
MAISU - Minimal AI Safety Unconference
Linda Linsefors · 2025-02-21T11:36:25.202Z · comments (0)
Seeing Through the Eyes of the Algorithm
silentbob · 2025-02-22T11:54:35.782Z · comments (1)
[link] US AI Safety Institute will be 'gutted,' Axios reports
Matrice Jacobine · 2025-02-20T14:40:13.049Z · comments (0)
Human-AI Relationality is Already Here
bridgebot (puppy) · 2025-02-20T07:08:22.420Z · comments (0)
[link] Published report: Pathways to short TAI timelines
Zershaaneh Qureshi (zershaaneh-qureshi) · 2025-02-20T22:10:12.276Z · comments (0)
What makes a theory of intelligence useful?
Cole Wyeth (Amyr) · 2025-02-20T19:22:29.725Z · comments (0)
Make Superintelligence Loving
Davey Morse (davey-morse) · 2025-02-21T06:07:17.235Z · comments (0)
Inefficiencies in Pharmaceutical Research Practices
ErioirE (erioire) · 2025-02-22T04:43:09.147Z · comments (2)
[link] Linguistic Imperialism in AI: Enforcing Human-Readable Chain-of-Thought
Lukas Petersson (lukas-petersson-1) · 2025-02-21T15:45:00.146Z · comments (0)
[link] Gradual Disempowerment: Simplified
Annapurna (jorge-velez) · 2025-02-22T16:59:39.072Z · comments (0)
[link] Neural Scaling Laws Rooted in the Data Distribution
aribrill (Particleman) · 2025-02-20T21:22:10.306Z · comments (0)
Information throughput of biological humans and frontier LLMs
benwr · 2025-02-22T07:15:45.457Z · comments (0)
Safe Distillation With a Powerful Untrusted AI
Alek Westover (alek-westover) · 2025-02-20T03:14:04.893Z · comments (1)
Build a Metaculus Forecasting Bot in 30 Minutes: A Practical Guide
ChristianWilliams · 2025-02-22T03:52:14.753Z · comments (0)
[link] Demonstrating specification gaming in reasoning models
Matrice Jacobine · 2025-02-20T19:26:20.563Z · comments (0)
Fun, endless art debates v. morally charged art debates that are intrinsically endless
danielechlin · 2025-02-21T04:44:22.712Z · comments (2)
AI Apocalypse and the Buddha
pchvykov · 2025-02-22T16:33:01.349Z · comments (0)
[link] Modularity and assembly: AI safety via thinking smaller
D Wong (d-nell) · 2025-02-20T00:58:39.714Z · comments (0)
[link] Forecasting Uncontrolled Spread of AI
Alvin Ånestrand (alvin-anestrand) · 2025-02-22T13:05:57.171Z · comments (0)
Workshop: Interpretability in LLMs Using Geometric and Statistical Methods
Karthik Viswanathan (vkarthik095) · 2025-02-22T09:39:26.446Z · comments (0)
On Static Space-Like Nature of Intelligence & Superintelligence
ank · 2025-02-22T00:12:36.263Z · comments (0)
Biological humans collectively exert at most 400 gigabits/s of control over the world.
benwr · 2025-02-20T23:44:06.509Z · comments (1)
[link] Humans are Just Self Aware Intelligent Biological Machines
asksathvik · 2025-02-21T01:03:59.950Z · comments (3)
AGI Safety & Brief History of Inequality
ank · 2025-02-22T16:26:16.546Z · comments (2)