LessWrong 2.0 Reader


A non-magical explanation of Jeffrey Epstein
lc · 2021-12-28T21:15:41.953Z · comments (59)
The Plan
johnswentworth · 2021-12-10T23:41:39.417Z · comments (78)
Omicron: My Current Model
Zvi · 2021-12-28T17:10:00.629Z · comments (72)
[link] ARC's first technical report: Eliciting Latent Knowledge
paulfchristiano · 2021-12-14T20:09:50.209Z · comments (90)
Morality is Scary
Wei Dai (Wei_Dai) · 2021-12-02T06:35:06.736Z · comments (116)
Zvi’s Thoughts on the Survival and Flourishing Fund (SFF)
Zvi · 2021-12-14T14:30:01.096Z · comments (65)
Book Launch: The Engines of Cognition
Ben Pace (Benito) · 2021-12-21T07:24:45.170Z · comments (55)
Biology-Inspired AGI Timelines: The Trick That Never Works
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-01T22:35:28.379Z · comments (142)
2021 AI Alignment Literature Review and Charity Comparison
Larks · 2021-12-23T14:06:50.721Z · comments (28)
Worst-case thinking in AI alignment
Buck · 2021-12-23T01:29:47.954Z · comments (18)
Dear Self; We Need To Talk About Social Media
Elizabeth (pktechgirl) · 2021-12-07T00:40:01.949Z · comments (19)
Omicron Post #7
Zvi · 2021-12-16T17:30:01.676Z · comments (41)
Omicron Post #4
Zvi · 2021-12-06T17:00:01.470Z · comments (66)
Reply to Eliezer on Biological Anchors
HoldenKarnofsky · 2021-12-23T16:15:43.508Z · comments (46)
[link] Transformer Circuits
evhub · 2021-12-22T21:09:22.676Z · comments (4)
COVID Skepticism Isn't About Science
jaspax · 2021-12-29T17:53:43.354Z · comments (76)
My Overview of the AI Alignment Landscape: A Bird's Eye View
Neel Nanda (neel-nanda-1) · 2021-12-15T23:44:31.873Z · comments (9)
Moore's Law, AI, and the pace of progress
Veedrac · 2021-12-11T03:02:24.558Z · comments (38)
Law of No Evidence
Zvi · 2021-12-20T13:50:01.189Z · comments (19)
Perpetual Dickensian Poverty?
jefftk (jkaufman) · 2021-12-21T13:30:03.543Z · comments (18)
Experiences raising children in shared housing
juliawise · 2021-12-21T17:09:05.008Z · comments (4)
[link] In Defense of Attempting Hard Things, and my story of the Leverage ecosystem
Cathleen · 2021-12-17T23:08:20.150Z · comments (43)
The 2020 Review
Raemon · 2021-12-02T00:39:19.191Z · comments (39)
Internet Literacy Atrophy
Elizabeth (pktechgirl) · 2021-12-26T12:30:01.540Z · comments (49)
Conversation on technology forecasting and gradualism
Richard_Ngo (ricraz) · 2021-12-09T21:23:21.187Z · comments (30)
Merry Christmas
lsusr · 2021-12-26T07:03:30.261Z · comments (16)
Omicron Post #5
Zvi · 2021-12-09T21:10:00.469Z · comments (18)
Perishable Knowledge
lsusr · 2021-12-18T05:53:03.343Z · comments (6)
Two (very different) kinds of donors
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-12-22T01:43:52.498Z · comments (19)
Interpreting Yudkowsky on Deep vs Shallow Knowledge
adamShimi · 2021-12-05T17:32:26.532Z · comments (32)
Omicron Post #8
Zvi · 2021-12-20T23:10:01.630Z · comments (33)
More Christiano, Cotra, and Yudkowsky on AI progress
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-06T20:33:12.164Z · comments (28)
Shulman and Yudkowsky on AI progress
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-03T20:05:22.552Z · comments (16)
There is essentially one best-validated theory of cognition.
abramdemski · 2021-12-10T15:51:06.423Z · comments (33)
Omicron Post #6
Zvi · 2021-12-13T18:00:01.098Z · comments (30)
Omicron Post #9
Zvi · 2021-12-23T21:50:10.466Z · comments (11)
[link] Deepmind's Gopher--more powerful than GPT-3
hath · 2021-12-08T17:06:32.650Z · comments (26)
Ten Minutes with Sam Altman
lsusr · 2021-12-28T07:32:59.140Z · comments (10)
ML Alignment Theory Program under Evan Hubinger
ozhang (oliver-zhang) · 2021-12-06T00:03:15.443Z · comments (3)
A Summary Of Anthropic's First Paper
Sam Ringer · 2021-12-30T00:48:15.290Z · comments (1)
Reviews of “Is power-seeking AI an existential risk?”
Joe Carlsmith (joekc) · 2021-12-16T20:48:26.808Z · comments (20)
Privacy and Manipulation
Raemon · 2021-12-05T00:39:18.484Z · comments (41)
LessWrong discussed in New Ideas in Psychology article
rogersbacon · 2021-12-09T21:01:17.920Z · comments (11)
Behavior Cloning is Miscalibrated
leogao · 2021-12-05T01:36:01.802Z · comments (3)
Teaser: Hard-coding Transformer Models
MadHatter · 2021-12-12T22:04:53.092Z · comments (19)
Language Model Alignment Research Internships
Ethan Perez (ethan-perez) · 2021-12-13T19:53:32.156Z · comments (1)
COVID and the holidays
Connor_Flexman · 2021-12-08T23:13:56.097Z · comments (31)
Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment
Rob Bensinger (RobbBB) · 2021-12-12T02:08:08.798Z · comments (35)
Risks from AI persuasion
Beth Barnes (beth-barnes) · 2021-12-24T01:48:17.231Z · comments (15)
Considerations on interaction between AI and expected value of the future
Beth Barnes (beth-barnes) · 2021-12-07T02:46:19.215Z · comments (28)