LessWrong 2.0 Reader

An Unexpected Victory: Container Stacking at the Port of Long Beach
Zvi · 2021-10-28T14:40:00.497Z · comments (41)
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Zack_M_Davis · 2019-09-24T04:12:07.560Z · comments (40)
Book Review: How Minds Change
bc4026bd4aaa5b7fe (bc4026bd4aaa5b7fe0bdcd47da7a22b453953f990d35286b9d315a619b23667a) · 2023-05-25T17:55:32.218Z · comments (51)
[link] Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky
jacquesthibs (jacques-thibodeau) · 2023-03-29T23:16:19.431Z · comments (296)
Leaky Delegation: You are not a Commodity
Darmani · 2021-01-25T02:04:55.942Z · comments (34)
LW Team is adjusting moderation policy
Raemon · 2023-04-04T20:41:07.603Z · comments (182)
[link] Industrial literacy
jasoncrawford · 2020-09-30T16:39:06.520Z · comments (128)
LessWrong is providing feedback and proofreading on drafts as a service
Ruby · 2021-09-07T01:33:10.666Z · comments (53)
Why Agent Foundations? An Overly Abstract Explanation
johnswentworth · 2022-03-25T23:17:10.324Z · comments (56)
[link] When do "brains beat brawn" in Chess? An experiment
titotal (lombertini) · 2023-06-28T13:33:23.854Z · comments (79)
EfficientZero: How It Works
1a3orn · 2021-11-26T15:17:08.321Z · comments (50)
[link] Is Success the Enemy of Freedom? (Full)
alkjash · 2020-10-26T20:25:50.503Z · comments (68)
[link] Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
evhub · 2024-01-12T19:51:01.021Z · comments (94)
Speaking to Congressional staffers about AI risk
Akash (akash-wasil) · 2023-12-04T23:08:52.055Z · comments (23)
The Parable of the King and the Random Process
moridinamael · 2023-03-01T22:18:59.734Z · comments (22)
Lies, Damn Lies, and Fabricated Options
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-10-17T02:47:24.909Z · comments (132)
Two-year update on my personal AI timelines
Ajeya Cotra (ajeya-cotra) · 2022-08-02T23:07:48.698Z · comments (60)
Predictable updating about AI risk
Joe Carlsmith (joekc) · 2023-05-08T21:53:34.730Z · comments (23)
[link] Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
Zac Hatfield-Dodds (zac-hatfield-dodds) · 2023-10-05T21:01:39.767Z · comments (21)
Science in a High-Dimensional World
johnswentworth · 2021-01-08T17:52:02.261Z · comments (53)
Politics is way too meta
Rob Bensinger (RobbBB) · 2021-03-17T07:04:42.187Z · comments (46)
Social Dark Matter
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-11-16T20:00:00.000Z · comments (112)
[link] Thoughts on seed oil
dynomight · 2024-04-20T12:29:14.212Z · comments (102)
Mysteries of mode collapse
janus · 2022-11-08T10:37:57.760Z · comments (56)
Hooray for stepping out of the limelight
So8res · 2023-04-01T02:45:31.397Z · comments (24)
Study Guide
johnswentworth · 2021-11-06T01:23:09.552Z · comments (48)
[link] Intentionally Making Close Friends
Neel Nanda (neel-nanda-1) · 2021-06-27T23:06:49.269Z · comments (35)
A central AI alignment problem: capabilities generalization, and the sharp left turn
So8res · 2022-06-15T13:10:18.658Z · comments (53)
OpenAI: The Battle of the Board
Zvi · 2023-11-22T17:30:04.574Z · comments (82)
Is AI Progress Impossible To Predict?
alyssavance · 2022-05-15T18:30:12.103Z · comments (39)
We Choose To Align AI
johnswentworth · 2022-01-01T20:06:23.307Z · comments (16)
What Are You Tracking In Your Head?
johnswentworth · 2022-06-28T19:30:06.164Z · comments (81)
Covid 12/24: We’re F***ed, It’s Over
Zvi · 2020-12-24T15:10:02.975Z · comments (135)
Sazen
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2022-12-21T07:54:51.415Z · comments (83)
My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI
Andrew_Critch · 2023-05-24T00:02:08.836Z · comments (39)
Guide to rationalist interior decorating
mingyuan · 2023-06-19T06:47:13.704Z · comments (45)
A tale from Communist China
Wei Dai (Wei_Dai) · 2020-10-18T17:37:42.228Z · comments (65)
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
Andrew_Critch · 2021-03-31T23:50:31.620Z · comments (64)
Don't die with dignity; instead play to your outs
Jeffrey Ladish (jeff-ladish) · 2022-04-06T07:53:05.172Z · comments (59)
Notes on Teaching in Prison
jsd · 2023-04-19T01:53:00.427Z · comments (12)
Toni Kurz and the Insanity of Climbing Mountains
GeneSmith · 2022-07-03T20:51:58.429Z · comments (67)
Gentleness and the artificial Other
Joe Carlsmith (joekc) · 2024-01-02T18:21:34.746Z · comments (33)
The Base Rate Times, news through prediction markets
vandemonian · 2023-06-06T17:42:56.718Z · comments (39)
Humans are very reliable agents
alyssavance · 2022-06-16T22:02:10.892Z · comments (35)
Seven Years of Spaced Repetition Software in the Classroom
tanagrabeast · 2021-03-04T02:42:01.475Z · comments (38)
We don’t trade with ants
KatjaGrace · 2023-01-10T23:50:11.476Z · comments (108)
Alignment Research Field Guide
abramdemski · 2019-03-08T19:57:05.658Z · comments (9)
OpenAI: Facts from a Weekend
Zvi · 2023-11-20T15:30:06.732Z · comments (158)
Accidentally Load Bearing
jefftk (jkaufman) · 2023-07-13T16:10:00.806Z · comments (14)
Building up to an Internal Family Systems model
Kaj_Sotala · 2019-01-26T12:25:11.162Z · comments (86)