LessWrong 2.0 Reader

Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI
Andrew_Critch · 2021-06-01T18:45:39.820Z · comments (26)
Announcing Timaeus
Jesse Hoogland (jhoogland) · 2023-10-22T11:59:03.938Z · comments (15)
Cryonics and Regret
MvB (martin-von-berg) · 2023-07-24T09:16:01.456Z · comments (35)
On Not Pulling The Ladder Up Behind You
Screwtape · 2024-04-26T21:58:29.455Z · comments (21)
Interfaces as a Scarce Resource
johnswentworth · 2020-03-05T18:20:26.733Z · comments (15)
The other side of the tidal wave
KatjaGrace · 2023-11-03T05:40:05.363Z · comments (86)
A basic systems architecture for AI agents that do autonomous research
Buck · 2024-09-23T13:58:27.185Z · comments (15)
Covid-19: My Current Model
Zvi · 2020-05-31T17:40:00.945Z · comments (73)
Effective Aspersions: How the Nonlinear Investigation Went Wrong
TracingWoodgrains (tracingwoodgrains) · 2023-12-19T12:00:23.529Z · comments (171)
A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX
jacobjacob · 2023-09-01T04:03:41.067Z · comments (26)
[link] Why I’m not a Bayesian
Richard_Ngo (ricraz) · 2024-10-06T15:22:45.644Z · comments (92)
The LessWrong 2018 Book is Available for Pre-order
Ben Pace (Benito) · 2020-12-01T08:00:00.000Z · comments (110)
[link] The King and the Golem
Richard_Ngo (ricraz) · 2023-09-25T19:51:22.980Z · comments (18)
What a compute-centric framework says about AI takeoff speeds
Tom Davidson (tom-davidson-1) · 2023-01-23T04:02:07.672Z · comments (30)
How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions
JanB (JanBrauner) · 2023-09-28T18:53:58.896Z · comments (39)
The basic reasons I expect AGI ruin
Rob Bensinger (RobbBB) · 2023-04-18T03:37:01.496Z · comments (73)
Moloch Hasn’t Won
Zvi · 2019-12-28T16:30:00.947Z · comments (40)
[link] Contra Ngo et al. “Every ‘Every Bay Area House Party’ Bay Area House Party”
Ricki Heicklen (bayesshammai) · 2024-02-22T23:56:02.318Z · comments (5)
The Rationalists of the 1950s (and before) also called themselves “Rationalists”
Owain_Evans · 2021-11-28T20:17:22.259Z · comments (32)
[link] Social behavior curves, equilibria, and radicalism
Eric Neyman (UnexpectedValues) · 2021-06-05T01:39:22.063Z · comments (18)
Thinking By The Clock
Screwtape · 2023-11-08T07:40:59.936Z · comments (29)
The case for aligning narrowly superhuman models
Ajeya Cotra (ajeya-cotra) · 2021-03-05T22:29:41.577Z · comments (75)
Generalized Heat Engine
johnswentworth · 2020-11-05T19:01:32.699Z · comments (11)
EigenKarma: trust at scale
Henrik Karlsson (henrik-karlsson) · 2023-02-08T18:52:24.490Z · comments (52)
What will GPT-2030 look like?
jsteinhardt · 2023-06-07T23:40:02.925Z · comments (43)
Have You Tried Hiring People?
rank-biserial · 2022-03-02T02:06:39.656Z · comments (117)
[link] Daniel Kahneman has died
DanielFilan · 2024-03-27T15:59:14.517Z · comments (11)
AI as a science, and three obstacles to alignment strategies
So8res · 2023-10-25T21:00:16.003Z · comments (80)
[link] How factories were made safe
jasoncrawford · 2021-09-12T19:58:12.597Z · comments (46)
Information vs Assurance
johnswentworth · 2024-10-20T23:16:25.762Z · comments (17)
Growing Independence
jefftk (jkaufman) · 2020-06-07T20:20:02.805Z · comments (27)
Evolution of Modularity
johnswentworth · 2019-11-14T06:49:04.112Z · comments (12)
Humming is not a free $100 bill
Elizabeth (pktechgirl) · 2024-06-06T20:10:02.457Z · comments (6)
Skills from a year of Purposeful Rationality Practice
Raemon · 2024-09-18T02:05:58.726Z · comments (18)
Notes on "The Anthropology of Childhood"
juliawise · 2020-08-27T17:11:19.205Z · comments (10)
Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer
johnswentworth · 2024-04-18T00:27:43.451Z · comments (21)
Split and Commit
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-21T06:27:07.557Z · comments (34)
[link] Large Language Models will be Great for Censorship
Ethan Edwards · 2023-08-21T19:03:55.323Z · comments (14)
Simulacrum 3 As Stag-Hunt Strategy
johnswentworth · 2021-01-26T19:40:42.727Z · comments (37)
Transportation as a Constraint
johnswentworth · 2020-04-06T04:58:28.862Z · comments (34)
[link] OpenAI API base models are not sycophantic, at any size
nostalgebraist · 2023-08-29T00:58:29.007Z · comments (20)
dalle2 comments
nostalgebraist · 2022-04-26T05:30:07.748Z · comments (14)
Introducing Alignment Stress-Testing at Anthropic
evhub · 2024-01-12T23:51:25.875Z · comments (23)
Language models seem to be much better than humans at next-token prediction
Buck · 2022-08-11T17:45:41.294Z · comments (60)
Is being sexy for your homies?
Valentine · 2023-12-13T20:37:02.043Z · comments (99)
Contra papers claiming superhuman AI forecasting
nikos (followtheargument) · 2024-09-12T18:10:50.582Z · comments (16)
Radical Probabilism
abramdemski · 2020-08-18T21:14:19.946Z · comments (49)
human psycholinguists: a critical appraisal
nostalgebraist · 2019-12-31T00:20:01.330Z · comments (59)
There should be more AI safety orgs
Marius Hobbhahn (marius-hobbhahn) · 2023-09-21T14:53:52.779Z · comments (25)
Inner Alignment: Explain like I'm 12 Edition
Rafael Harth (sil-ver) · 2020-08-01T15:24:33.799Z · comments (47)