LessWrong 2.0 Reader

next page (older posts) →

A bicycle for your memory
sortega · 2022-03-30T21:41:07.046Z · comments (8)
ELK Computational Complexity: Three Levels of Difficulty
abramdemski · 2022-03-30T20:56:37.239Z · comments (9)
Abandoned prototype video game teaching elementary circuit theory
Steven Byrnes (steve2152) · 2022-03-30T20:37:19.653Z · comments (3)
Meta wants to use AI to write Wikipedia articles; I am Nervous™
Yitz (yitz) · 2022-03-30T19:05:44.735Z · comments (12)
No, EDT Did Not Get It Right All Along: Why the Coin Flip Creation Problem Is Irrelevant
Heighn · 2022-03-30T18:41:20.099Z · comments (6)
How to Make Your Article Change People's Minds or Actions? (Spoiler: Do User Testing Like a Startup Would)
Yonatan Cale (yonatan-cale-1) · 2022-03-30T17:39:34.373Z · comments (8)
Procedurally evaluating factual accuracy: a request for research
Jacob_Hilton · 2022-03-30T16:37:37.675Z · comments (2)
[ASoT] Some thoughts about LM monologue limitations and ELK
leogao · 2022-03-30T14:26:15.381Z · comments (0)
[Intro to brain-like-AGI safety] 10. The alignment problem
Steven Byrnes (steve2152) · 2022-03-30T13:24:33.181Z · comments (7)
Progress Report 2
Nathan Helm-Burger (nathan-helm-burger) · 2022-03-30T02:29:32.670Z · comments (1)
[question] What would make you confident that AGI has been achieved?
Yitz (yitz) · 2022-03-29T23:02:58.250Z · answers+comments (6)
Beyond Blame Minimization: Thoughts from the comments
physicaleconomics · 2022-03-29T22:28:38.493Z · comments (8)
Gears-Level Mental Models of Transformer Interpretability
KevinRoWang · 2022-03-29T20:09:53.073Z · comments (4)
Retreat for rationality meetup organizers (apply to attend!)
mingyuan · 2022-03-29T18:42:58.292Z · comments (0)
Debating myself on whether “extra lives lived” are as good as “deaths prevented”
HoldenKarnofsky · 2022-03-29T18:30:23.792Z · comments (9)
[link] Hinges and crises
Jan_Kulveit · 2022-03-29T11:11:03.605Z · comments (7)
Strategies for differential divulgation of key ideas in AI capability
azsantosk · 2022-03-29T03:22:35.825Z · comments (0)
Towards a better circuit prior: Improving on ELK state-of-the-art
evhub · 2022-03-29T01:56:39.684Z · comments (0)
[link] Summary: "How to read a book" by Mortimer Adler
Pablo Repetto (pablo-repetto-1) · 2022-03-29T00:34:33.502Z · comments (0)
Can we simulate human evolution to create a somewhat aligned AGI?
Thomas Kwa (thomas-kwa) · 2022-03-28T22:55:20.628Z · comments (6)
Mental nonsense: my anti-insomnia trick
DirectedEvolution (AllAmericanBreakfast) · 2022-03-28T22:47:03.434Z · comments (19)
Vaniver's ELK Submission
Vaniver · 2022-03-28T21:14:37.019Z · comments (0)
[ASoT] Some thoughts about deceptive mesaoptimization
leogao · 2022-03-28T21:14:27.217Z · comments (5)
Ukraine Post #7: Prediction Market Update
Zvi · 2022-03-28T16:10:01.427Z · comments (3)
Humans pretending to be robots pretending to be human
Richard_Kennaway · 2022-03-28T15:13:15.144Z · comments (14)
[question] Consequentialist veganism
tslarm · 2022-03-28T01:48:21.786Z · answers+comments (9)
Toward a general structure of economic explanation
physicaleconomics · 2022-03-27T23:33:57.994Z · comments (1)
[question] How many generals does Russia have left?
Yitz (yitz) · 2022-03-27T23:11:03.857Z · answers+comments (16)
Do a cost-benefit analysis of your technology usage
TurnTrout · 2022-03-27T23:09:26.753Z · comments (53)
[question] Your specific attitudes towards AI safety
Esben Kran (esben-kran) · 2022-03-27T22:33:45.102Z · answers+comments (21)
[link] Sums and products
Ege Erdil (ege-erdil) · 2022-03-27T21:57:38.410Z · comments (11)
[ASoT] Searching for consequentialist structure
leogao · 2022-03-27T19:09:13.370Z · comments (2)
The Opposite Of Autism
Capybasilisk · 2022-03-27T15:30:24.921Z · comments (16)
Practical everyday human strategizing
[deleted] · 2022-03-27T14:20:19.389Z · comments (0)
Manhattan project for aligned AI
Chris van Merwijk (chrisvm) · 2022-03-27T11:41:00.818Z · comments (8)
[ASoT] Some ways ELK could still be solvable in practice
leogao · 2022-03-27T01:15:16.607Z · comments (1)
[question] What are the top 1-10 posts / sequences / articles / etc. that you've found most useful for yourself for becoming "less wrong"?
Aryeh Englander (alenglander) · 2022-03-27T00:37:45.180Z · answers+comments (11)
Beyond Blame Minimization
physicaleconomics · 2022-03-27T00:03:31.650Z · comments (47)
[question] When people ask for your P(doom), do you give them your inside view or your betting odds?
Vivek Hebbar (Vivek) · 2022-03-26T23:08:17.712Z · answers+comments (11)
A Primer on God, Liberalism and the End of History
Mahdi Complex (mahdi-complex) · 2022-03-26T20:07:58.074Z · comments (2)
Agency and Coherence
David Udell · 2022-03-26T19:25:21.649Z · comments (2)
Magic, tricks, and high-dimensional configuration spaces
pchvykov · 2022-03-26T19:14:00.176Z · comments (2)
[link] Compute Governance: The Role of Commodity Hardware
Jan (jan-2) · 2022-03-26T10:08:07.518Z · comments (7)
Game that might improve research productivity
Jack R (Jack Ryan) · 2022-03-26T07:00:11.599Z · comments (3)
You must on average be improving to not guarantee failure.
eeegnu · 2022-03-26T05:28:12.690Z · comments (1)
Your risk of developing long COVID is probably high
Siebe · 2022-03-26T04:10:38.159Z · comments (16)
Primitive Perspectives and Sleeping Beauty
dadadarren · 2022-03-26T01:55:39.460Z · comments (14)
[ASoT] Observations about ELK
leogao · 2022-03-26T00:42:20.540Z · comments (0)
Lesson Plan: Biases in Quantity Estimation
abstractapplic · 2022-03-26T00:23:17.822Z · comments (2)
Why Agent Foundations? An Overly Abstract Explanation
johnswentworth · 2022-03-25T23:17:10.324Z · comments (56)