LessWrong 2.0 Reader

EA Bristol Strategy Meeting
thegreatnick · 2018-11-21T10:57:14.199Z · comments (0)
Rationality Café No. 6 - The Sequences, Part 1; Section B Repeat
thegreatnick · 2018-11-21T10:54:49.292Z · comments (2)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)
habryka (habryka4) · 2018-11-21T03:39:15.247Z · comments (0)
Incorrect hypotheses point to correct observations
Kaj_Sotala · 2018-11-20T21:10:02.867Z · comments (37)
Preschool: Much Less Than You Wanted To Know
Zvi · 2018-11-20T19:30:01.155Z · comments (15)
[link] New safety research agenda: scalable agent alignment via reward modeling
Vika · 2018-11-20T17:29:22.751Z · comments (12)
Prosaic AI alignment
paulfchristiano · 2018-11-20T13:56:39.773Z · comments (10)
Moscow LW meetup in "Nauchka" library
Alexander230 · 2018-11-20T12:19:52.485Z · comments (0)
[Insert clever intro here]
Bae's Theorem (Senarin) · 2018-11-20T03:26:32.420Z · comments (13)
Alignment Newsletter #33
Rohin Shah (rohinmshah) · 2018-11-19T17:20:03.463Z · comments (0)
Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery
Alexander230 · 2018-11-19T14:23:15.618Z · comments (0)
Letting Others Be Vulnerable
[deleted] · 2018-11-19T02:59:21.423Z · comments (6)
Clickbait might not be destroying our general Intelligence
Donald Hobson (donald-hobson) · 2018-11-19T00:13:12.674Z · comments (13)
South Bay Meetup 12/8
DavidFriedman · 2018-11-19T00:04:40.859Z · comments (0)
[Link] "They go together: Freedom, Prosperity, and Big Government"
CronoDAS · 2018-11-18T16:51:00.579Z · comments (3)
Collaboration-by-Design versus Emergent Collaboration
Davidmanheim · 2018-11-18T07:22:16.340Z · comments (2)
Diagonalization Fixed Point Exercises
Scott Garrabrant · 2018-11-18T00:31:19.683Z · comments (24)
Ia! Ia! Extradimensional Cephalopod Nafl'fhtagn!
ExCeph · 2018-11-17T23:00:05.983Z · comments (5)
[link] Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang)
Paperclip Minimizer · 2018-11-17T19:21:19.247Z · comments (0)
An unaligned benchmark
paulfchristiano · 2018-11-17T15:51:03.448Z · comments (0)
On Rigorous Error Handling
Martin Sustrik (sustrik) · 2018-11-17T09:20:00.753Z · comments (4)
Act of Charity
jessicata (jessica.liu.taylor) · 2018-11-17T05:19:20.786Z · comments (49)
Topological Fixed Point Exercises
Scott Garrabrant · 2018-11-17T01:40:06.342Z · comments (51)
Fixed Point Exercises
Scott Garrabrant · 2018-11-17T01:39:50.233Z · comments (8)
Is Clickbait Destroying Our General Intelligence?
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-11-16T23:06:29.506Z · comments (61)
Dimensional regret without resets
Vanessa Kosoy (vanessa-kosoy) · 2018-11-16T19:22:32.551Z · comments (0)
History of LessWrong: Some Data Graphics
Said Achmiz (SaidAchmiz) · 2018-11-16T07:07:15.501Z · comments (18)
Sam Harris and the Is–Ought Gap
Tyrrell_McAllister · 2018-11-16T01:04:08.185Z · comments (46)
Embedded Agency (full-text version)
Scott Garrabrant · 2018-11-15T19:49:29.455Z · comments (17)
[meta] Switching hosting providers today, there probably will be some hiccups
habryka (habryka4) · 2018-11-15T19:45:59.181Z · comments (0)
Clarifying "AI Alignment"
paulfchristiano · 2018-11-15T14:41:57.599Z · comments (83)
[link] The Inspection Paradox is Everywhere
Chris_Leong · 2018-11-15T10:55:43.654Z · comments (3)
SSC Atlanta Meetup Number 3
Steve French (steve-french) · 2018-11-15T05:19:16.370Z · comments (0)
Explore/Exploit for Conversations
Hazard · 2018-11-15T04:11:30.372Z · comments (2)
Washington, DC LW Meetup
rusalkii · 2018-11-15T02:13:54.789Z · comments (0)
Stoicism: Cautionary Advice
VivaLaPanda · 2018-11-14T23:18:00.988Z · comments (17)
Mandatory Obsessions
Jacob Falkovich (Jacobian) · 2018-11-14T18:19:53.007Z · comments (14)
Pub meetup: Developmental Milestones
Giles · 2018-11-14T04:18:03.625Z · comments (0)
Deck Guide: Burning Drakes
Zvi · 2018-11-13T19:40:00.409Z · comments (0)
Acknowledging Human Preference Types to Support Value Learning
Nandi · 2018-11-13T18:57:52.965Z · comments (4)
The Steering Problem
paulfchristiano · 2018-11-13T17:14:56.557Z · comments (12)
Post-Rationality and Rationality, A Dialogue
agilecaveman · 2018-11-13T05:55:06.551Z · comments (2)
Laughing Away the Little Miseries
Rossin · 2018-11-13T03:31:56.666Z · comments (7)
Kelly bettors
DanielFilan · 2018-11-13T00:40:01.074Z · comments (3)
[link] Wireheading as a Possible Contributor to Civilizational Decline
avturchin · 2018-11-12T20:33:39.947Z · comments (6)
Alignment Newsletter #32
Rohin Shah (rohinmshah) · 2018-11-12T17:20:03.572Z · comments (0)
AI development incentive gradients are not uniformly terrible
rk · 2018-11-12T16:27:31.886Z · comments (12)
What is being?
Andrew Bindon (andrew-bindon) · 2018-11-12T15:33:30.467Z · comments (20)
Aligned AI, The Scientist
shminux · 2018-11-12T06:36:30.972Z · comments (2)
Combat vs Nurture: Cultural Genesis
Ruby · 2018-11-12T02:11:42.921Z · comments (12)