LessWrong 2.0 Reader



Thinking By The Clock
Screwtape · 2023-11-08T07:40:59.936Z · comments (29)
Discontinuous progress in history: an update
KatjaGrace · 2020-04-14T00:00:02.478Z · comments (25)
The other side of the tidal wave
KatjaGrace · 2023-11-03T05:40:05.363Z · comments (86)
Evaluating the historical value misspecification argument
Matthew Barnett (matthew-barnett) · 2023-10-05T18:34:15.695Z · comments (161)
OMMC Announces RIP
Adam Scholl (adam_scholl) · 2024-04-01T23:20:00.433Z · comments (5)
A basic systems architecture for AI agents that do autonomous research
Buck · 2024-09-23T13:58:27.185Z · comments (15)
Towards Developmental Interpretability
Jesse Hoogland (jhoogland) · 2023-07-12T19:33:44.788Z · comments (10)
My Interview With Cade Metz on His Reporting About Slate Star Codex
Zack_M_Davis · 2024-03-26T17:18:05.114Z · comments (187)
Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI
Andrew_Critch · 2021-06-01T18:45:39.820Z · comments (26)
"Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity
Thane Ruthenis · 2023-12-16T20:08:39.375Z · comments (34)
Cryonics and Regret
MvB (martin-von-berg) · 2023-07-24T09:16:01.456Z · comments (35)
Effective Aspersions: How the Nonlinear Investigation Went Wrong
TracingWoodgrains (tracingwoodgrains) · 2023-12-19T12:00:23.529Z · comments (171)
Announcing Timaeus
Jesse Hoogland (jhoogland) · 2023-10-22T11:59:03.938Z · comments (15)
Interfaces as a Scarce Resource
johnswentworth · 2020-03-05T18:20:26.733Z · comments (15)
Covid-19: My Current Model
Zvi · 2020-05-31T17:40:00.945Z · comments (73)
A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX
Bird Concept (jacobjacob) · 2023-09-01T04:03:41.067Z · comments (26)
On Not Pulling The Ladder Up Behind You
Screwtape · 2024-04-26T21:58:29.455Z · comments (21)
[link] The King and the Golem
Richard_Ngo (ricraz) · 2023-09-25T19:51:22.980Z · comments (19)
How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions
JanB (JanBrauner) · 2023-09-28T18:53:58.896Z · comments (39)
What a compute-centric framework says about AI takeoff speeds
Tom Davidson (tom-davidson-1) · 2023-01-23T04:02:07.672Z · comments (30)
The LessWrong 2018 Book is Available for Pre-order
Ben Pace (Benito) · 2020-12-01T08:00:00.000Z · comments (110)
The basic reasons I expect AGI ruin
Rob Bensinger (RobbBB) · 2023-04-18T03:37:01.496Z · comments (73)
[link] Social behavior curves, equilibria, and radicalism
Eric Neyman (UnexpectedValues) · 2021-06-05T01:39:22.063Z · comments (18)
The Rationalists of the 1950s (and before) also called themselves “Rationalists”
Owain_Evans · 2021-11-28T20:17:22.259Z · comments (32)
[link] Contra Ngo et al. “Every ‘Every Bay Area House Party’ Bay Area House Party”
Ricki Heicklen (bayesshammai) · 2024-02-22T23:56:02.318Z · comments (5)
Generalized Heat Engine
johnswentworth · 2020-11-05T19:01:32.699Z · comments (11)
Skills from a year of Purposeful Rationality Practice
Raemon · 2024-09-18T02:05:58.726Z · comments (18)
EigenKarma: trust at scale
Henrik Karlsson (henrik-karlsson) · 2023-02-08T18:52:24.490Z · comments (52)
Moloch Hasn’t Won
Zvi · 2019-12-28T16:30:00.947Z · comments (40)
The case for aligning narrowly superhuman models
Ajeya Cotra (ajeya-cotra) · 2021-03-05T22:29:41.577Z · comments (75)
[link] Daniel Kahneman has died
DanielFilan · 2024-03-27T15:59:14.517Z · comments (11)
Is being sexy for your homies?
Valentine · 2023-12-13T20:37:02.043Z · comments (99)
Have You Tried Hiring People?
rank-biserial · 2022-03-02T02:06:39.656Z · comments (117)
Humming is not a free $100 bill
Elizabeth (pktechgirl) · 2024-06-06T20:10:02.457Z · comments (6)
What will GPT-2030 look like?
jsteinhardt · 2023-06-07T23:40:02.925Z · comments (43)
Growing Independence
jefftk (jkaufman) · 2020-06-07T20:20:02.805Z · comments (27)
AI as a science, and three obstacles to alignment strategies
So8res · 2023-10-25T21:00:16.003Z · comments (80)
[link] How factories were made safe
jasoncrawford · 2021-09-12T19:58:12.597Z · comments (46)
Evolution of Modularity
johnswentworth · 2019-11-14T06:49:04.112Z · comments (12)
Information vs Assurance
johnswentworth · 2024-10-20T23:16:25.762Z · comments (17)
Notes on "The Anthropology of Childhood"
juliawise · 2020-08-27T17:11:19.205Z · comments (10)
Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer
johnswentworth · 2024-04-18T00:27:43.451Z · comments (21)
Split and Commit
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-21T06:27:07.557Z · comments (34)
Simulacrum 3 As Stag-Hunt Strategy
johnswentworth · 2021-01-26T19:40:42.727Z · comments (37)
Transportation as a Constraint
johnswentworth · 2020-04-06T04:58:28.862Z · comments (34)
[link] Large Language Models will be Great for Censorship
Ethan Edwards · 2023-08-21T19:03:55.323Z · comments (14)
dalle2 comments
nostalgebraist · 2022-04-26T05:30:07.748Z · comments (14)
[link] OpenAI API base models are not sycophantic, at any size
nostalgebraist · 2023-08-29T00:58:29.007Z · comments (20)
Language models seem to be much better than humans at next-token prediction
Buck · 2022-08-11T17:45:41.294Z · comments (60)
Introducing Alignment Stress-Testing at Anthropic
evhub · 2024-01-12T23:51:25.875Z · comments (23)