LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

Deep Honesty
Aletheophile (aletheo) · 2024-05-07T20:31:48.734Z · comments (25)
Why all the fuss about recursive self-improvement?
So8res · 2022-06-12T20:53:42.392Z · comments (62)
[Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now?
Steven Byrnes (steve2152) · 2022-01-26T15:23:22.429Z · comments (19)
Godzilla Strategies
johnswentworth · 2022-06-11T15:44:16.385Z · comments (71)
Biology-Inspired AGI Timelines: The Trick That Never Works
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-01T22:35:28.379Z · comments (142)
Change my mind: Veganism entails trade-offs, and health is one of the axes
Elizabeth (pktechgirl) · 2023-06-01T17:10:02.075Z · comments (85)
[link] grey goo is unlikely
bhauth · 2023-04-17T01:59:57.054Z · comments (123)
What’s up with LLMs representing XORs of arbitrary features?
Sam Marks (samuel-marks) · 2024-01-03T19:44:33.162Z · comments (61)
Ironing Out the Squiggles
Zack_M_Davis · 2024-04-29T16:13:00.371Z · comments (36)
Inside the mind of a superhuman Go model: How does Leela Zero read ladders?
Haoxing Du (haoxing-du) · 2023-03-01T01:47:20.660Z · comments (8)
Covid 1/7: The Fire of a Thousand Suns
Zvi · 2021-01-07T17:00:01.843Z · comments (56)
My thoughts on the social response to AI risk
Matthew Barnett (matthew-barnett) · 2023-11-01T21:17:08.184Z · comments (37)
My computational framework for the brain
Steven Byrnes (steve2152) · 2020-09-14T14:19:21.974Z · comments (26)
EIS XIII: Reflections on Anthropic’s SAE Research Circa May 2024
scasper · 2024-05-21T20:15:36.502Z · comments (16)
Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc
johnswentworth · 2022-06-04T05:41:56.713Z · comments (55)
AI doom from an LLM-plateau-ist perspective
Steven Byrnes (steve2152) · 2023-04-27T13:58:10.973Z · comments (24)
Big Mac Subsidy?
jefftk (jkaufman) · 2023-02-23T04:00:03.996Z · comments (25)
Catastrophe through Chaos
Marius Hobbhahn (marius-hobbhahn) · 2025-01-31T14:19:08.399Z · comments (16)
parenting rules
Dave Orr (dave-orr) · 2020-12-21T19:48:42.365Z · comments (9)
Tips for Empirical Alignment Research
Ethan Perez (ethan-perez) · 2024-02-29T06:04:54.481Z · comments (4)
Nonprofit Boards are Weird
HoldenKarnofsky · 2022-06-23T14:40:11.593Z · comments (26)
Your posts should be on arXiv
JanB (JanBrauner) · 2022-08-25T10:35:12.087Z · comments (44)
Dear Self; We Need To Talk About Social Media
Elizabeth (pktechgirl) · 2021-12-07T00:40:01.949Z · comments (19)
Password-locked models: a stress case for capabilities evaluation
Fabien Roger (Fabien) · 2023-08-03T14:53:12.459Z · comments (14)
What Is The Alignment Problem?
johnswentworth · 2025-01-16T01:20:16.826Z · comments (37)
[question] things that confuse me about the current AI market.
DMMF · 2024-08-28T13:46:56.908Z · answers+comments (27)
Negative Feedback and Simulacra
Elizabeth (pktechgirl) · 2020-04-29T02:00:01.734Z · comments (24)
Formal verification, heuristic explanations and surprise accounting
Jacob_Hilton · 2024-06-25T15:40:03.535Z · comments (11)
[question] What are some beautiful, rationalist artworks?
Bird Concept (jacobjacob) · 2020-10-17T06:32:43.142Z · answers+comments (140)
Announcing Dialogues
Ben Pace (Benito) · 2023-10-07T02:57:39.005Z · comments (58)
Introduction to Cartesian Frames
Scott Garrabrant · 2020-10-22T13:00:00.000Z · comments (32)
Limits to Legibility
Jan_Kulveit · 2022-06-29T17:42:19.338Z · comments (11)
[link] Conjecture internal survey: AGI timelines and probability of human extinction from advanced AI
Maris Sala (maris-sala) · 2023-05-22T14:31:59.139Z · comments (5)
An Orthodox Case Against Utility Functions
abramdemski · 2020-04-07T19:18:12.043Z · comments (65)
A freshman year during the AI midgame: my approach to the next year
Buck · 2023-04-14T00:38:49.807Z · comments (15)
[link] The Death of Behavioral Economics
habryka (habryka4) · 2021-08-22T22:39:12.697Z · comments (24)
Omicron Post #7
Zvi · 2021-12-16T17:30:01.676Z · comments (41)
Request: stop advancing AI capabilities
So8res · 2023-05-26T17:42:07.182Z · comments (24)
Dyslucksia
Shoshannah Tekofsky (DarkSym) · 2024-05-09T19:21:33.874Z · comments (45)
Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points
TurnTrout · 2022-04-10T18:45:08.027Z · comments (7)
Meta Questions about Metaphilosophy
Wei Dai (Wei_Dai) · 2023-09-01T01:17:57.578Z · comments (80)
LessWrong Has Agree/Disagree Voting On All New Comment Threads
Ben Pace (Benito) · 2022-06-24T00:43:17.136Z · comments (217)
[link] Will no one rid me of this turbulent pest?
Metacelsus · 2023-10-14T15:27:21.497Z · comments (23)
Curated conversations with brilliant rationalists
spencerg · 2021-05-28T14:23:30.631Z · comments (18)
[link] "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation
titotal (lombertini) · 2023-09-29T14:01:15.453Z · comments (79)
The Incredible Fentanyl-Detecting Machine
sarahconstantin · 2024-06-28T22:10:01.223Z · comments (26)
Sapir-Whorf for Rationalists
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-01-25T07:58:46.794Z · comments (49)
o3
Zach Stein-Perlman · 2024-12-20T18:30:29.448Z · comments (159)
Potential Bottlenecks to Taking Over The World
johnswentworth · 2021-07-06T19:34:53.016Z · comments (22)
The Fusion Power Generator Scenario
johnswentworth · 2020-08-08T18:31:38.757Z · comments (30)