LessWrong 2.0 Reader

Is Clickbait Destroying Our General Intelligence?
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-11-16T23:06:29.506Z · comments (65)
Act of Charity
jessicata (jessica.liu.taylor) · 2018-11-17T05:19:20.786Z · comments (49)
Embedded Agency (full-text version)
Scott Garrabrant · 2018-11-15T19:49:29.455Z · comments (17)
Incorrect hypotheses point to correct observations
Kaj_Sotala · 2018-11-20T21:10:02.867Z · comments (37)
Is Science Slowing Down?
Scott Alexander (Yvain) · 2018-11-27T03:30:01.516Z · comments (77)
“She Wanted It”
sarahconstantin · 2018-11-11T22:00:01.645Z · comments (19)
Robust Delegation
abramdemski · 2018-11-04T16:38:38.750Z · comments (10)
Subsystem Alignment
abramdemski · 2018-11-06T16:16:45.656Z · comments (12)
Embedded World-Models
abramdemski · 2018-11-02T16:07:20.946Z · comments (16)
Update the best textbooks on every subject list
ryan_b · 2018-11-08T20:54:35.300Z · comments (14)
Embedded Curiosities
Scott Garrabrant · 2018-11-08T14:19:32.546Z · comments (1)
Sam Harris and the Is–Ought Gap
Tyrrell_McAllister · 2018-11-16T01:04:08.185Z · comments (46)
Double-Dipping in Dunning-Kruger
isovector · 2018-11-28T03:40:58.206Z · comments (31)
Counterintuitive Comparative Advantage
Wei Dai (Wei_Dai) · 2018-11-28T20:33:30.023Z · comments (8)
Mandatory Obsessions
Jacob Falkovich (Jacobian) · 2018-11-14T18:19:53.007Z · comments (14)
Topological Fixed Point Exercises
Scott Garrabrant · 2018-11-17T01:40:06.342Z · comments (51)
Current AI Safety Roles for Software Engineers
ozziegooen · 2018-11-09T20:57:16.159Z · comments (9)
When does rationality-as-search have nontrivial implications?
nostalgebraist · 2018-11-04T22:42:01.452Z · comments (11)
Clarifying "AI Alignment"
paulfchristiano · 2018-11-15T14:41:57.599Z · comments (83)
History of LessWrong: Some Data Graphics
Said Achmiz (SaidAchmiz) · 2018-11-16T07:07:15.501Z · comments (18)
Preschool: Much Less Than You Wanted To Know
Zvi · 2018-11-20T19:30:01.155Z · comments (15)
MIRI’s 2018 Fundraiser
Malo (malo) · 2018-11-27T05:30:11.931Z · comments (1)
Fixed Point Exercises
Scott Garrabrant · 2018-11-17T01:39:50.233Z · comments (8)
The easy goal inference problem is still hard
paulfchristiano · 2018-11-03T14:41:55.464Z · comments (20)
Four factors that moderate the intensity of emotions
Ruby · 2018-11-24T20:40:12.139Z · comments (11)
Corrigibility
paulfchristiano · 2018-11-27T21:50:10.517Z · comments (8)
What is ambitious value learning?
Rohin Shah (rohinmshah) · 2018-11-01T16:20:27.865Z · comments (28)
Humans can be assigned any values whatsoever…
Stuart_Armstrong · 2018-11-05T14:26:41.337Z · comments (27)
[link] On MIRI's new research directions
Rob Bensinger (RobbBB) · 2018-11-22T23:42:06.521Z · comments (12)
Competitive Markets as Distributed Backprop
johnswentworth · 2018-11-10T16:47:37.622Z · comments (10)
[link] The Vulnerable World Hypothesis (by Bostrom)
Ben Pace (Benito) · 2018-11-06T20:05:27.496Z · comments (17)
Winter Solstice 2018 Roundup
Raemon · 2018-11-28T03:09:44.938Z · comments (8)
[link] Burnout: What it is and how to Treat it.
Elizabeth (pktechgirl) · 2018-11-07T22:02:23.649Z · comments (0)
Future directions for ambitious value learning
Rohin Shah (rohinmshah) · 2018-11-11T15:53:52.888Z · comments (9)
If You Want to Win, Stop Conceding
Davis_Kingsley · 2018-11-22T18:10:31.439Z · comments (15)
Prosaic AI alignment
paulfchristiano · 2018-11-20T13:56:39.773Z · comments (10)
Prediction-Augmented Evaluation Systems
ozziegooen · 2018-11-09T10:55:36.181Z · comments (12)
Preface to the sequence on iterated amplification
paulfchristiano · 2018-11-10T13:24:13.200Z · comments (8)
The Steering Problem
paulfchristiano · 2018-11-13T17:14:56.557Z · comments (12)
Fixed Point Discussion
Scott Garrabrant · 2018-11-24T20:53:39.545Z · comments (2)
Speculative Evopsych, Ep. 1
Optimization Process · 2018-11-22T19:00:04.676Z · comments (9)
[meta] Upcoming: Open Questions
Raemon · 2018-11-24T01:39:33.385Z · comments (7)
Stoicism: Cautionary Advice
VivaLaPanda · 2018-11-14T23:18:00.988Z · comments (17)
Diagonalization Fixed Point Exercises
Scott Garrabrant · 2018-11-18T00:31:19.683Z · comments (24)
Bodega Bay: workshop
KatjaGrace · 2018-11-27T03:20:01.290Z · comments (1)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)
habryka (habryka4) · 2018-11-21T03:39:15.247Z · comments (0)
[meta] New GreaterWrong feature: image zoom + image slideshows
Said Achmiz (SaidAchmiz) · 2018-11-04T07:34:44.907Z · comments (3)
Explore/Exploit for Conversations
Hazard · 2018-11-15T04:11:30.372Z · comments (2)
[link] Rationality Is Not Systematized Winning
namespace (ingres) · 2018-11-11T22:05:19.153Z · comments (20)
Formal Open Problem in Decision Theory
Scott Garrabrant · 2018-11-29T03:25:46.134Z · comments (28)