LessWrong 2.0 Reader

How To Get Into Independent Research On Alignment/Agency
johnswentworth · 2021-11-19T00:00:21.600Z · comments (38)
Discussion with Eliezer Yudkowsky on AGI interventions
Rob Bensinger (RobbBB) · 2021-11-11T03:01:11.208Z · comments (251)
Feature Selection
Zack_M_Davis · 2021-11-01T00:22:29.993Z · comments (24)
Frame Control
Aella · 2021-11-27T22:59:29.436Z · comments (282)
EfficientZero: How It Works
1a3orn · 2021-11-26T15:17:08.321Z · comments (50)
Study Guide
johnswentworth · 2021-11-06T01:23:09.552Z · comments (48)
A Brief Introduction to Container Logistics
Vitor · 2021-11-11T15:58:11.510Z · comments (22)
Omicron Variant Post #1: We’re F***ed, It’s Never Over
Zvi · 2021-11-26T19:00:00.988Z · comments (95)
larger language models may disappoint you [or, an eternally unfinished draft]
nostalgebraist · 2021-11-26T23:08:56.221Z · comments (31)
Ngo and Yudkowsky on alignment difficulty
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-15T20:31:34.135Z · comments (148)
Concentration of Force
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T08:20:18.991Z · comments (23)
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Zvi · 2021-11-15T03:50:01.141Z · comments (49)
Almost everyone should be less afraid of lawsuits
alyssavance · 2021-11-27T02:06:52.176Z · comments (18)
Yudkowsky and Christiano discuss "Takeoff Speeds"
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-22T19:35:27.657Z · comments (176)
Speaking of Stag Hunts
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T08:20:34.967Z · comments (373)
The Rationalists of the 1950s (and before) also called themselves “Rationalists”
Owain_Evans · 2021-11-28T20:17:22.259Z · comments (30)
Split and Commit
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-21T06:27:07.557Z · comments (33)
Preprint is out! 100,000 lumens to treat seasonal affective disorder
Fabienne · 2021-11-12T17:59:20.077Z · comments (10)
[link] The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century
jasoncrawford · 2021-11-28T17:22:23.136Z · comments (36)
You are probably underestimating how good self-love can be
Charlie Rogers-Smith (charlie.rs) · 2021-11-14T00:41:35.011Z · comments (19)
Comments on Carlsmith's “Is power-seeking AI an existential risk?”
So8res · 2021-11-13T04:29:30.673Z · comments (14)
App and book recommendations for people who want to be happier and more productive
KatWoods (ea247) · 2021-11-06T17:40:40.592Z · comments (43)
[link] EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised
gwern · 2021-11-02T02:32:41.856Z · comments (52)
Sci-Hub sued in India
Connor_Flexman · 2021-11-13T23:12:40.559Z · comments (19)
Ngo and Yudkowsky on AI capability gains
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-18T22:19:05.913Z · comments (61)
Transcript: "You Should Read HPMOR"
TurnTrout · 2021-11-02T18:20:53.161Z · comments (12)
How do we become confident in the safety of a machine learning system?
evhub · 2021-11-08T22:49:41.080Z · comments (5)
Soares, Tallinn, and Yudkowsky discuss AGI cognition
So8res · 2021-11-29T19:26:33.232Z · comments (39)
Omicron Variant Post #2
Zvi · 2021-11-29T16:30:01.368Z · comments (34)
Christiano, Cotra, and Yudkowsky on AI progress
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-25T16:45:32.482Z · comments (95)
Why I'm excited about Redwood Research's current project
paulfchristiano · 2021-11-12T19:26:26.159Z · comments (6)
[link] Where did the 5 micron number come from? Nowhere good. [Wired.com]
Elizabeth (pktechgirl) · 2021-11-09T07:14:25.167Z · comments (8)
Rapid Increase of Highly Mutated B.1.1.529 Strain in South Africa
dawangy · 2021-11-26T01:05:49.516Z · comments (15)
The Maker of MIND
Tomás B. (Bjartur Tómas) · 2021-11-20T16:28:56.327Z · comments (19)
Money Stuff
Jacob Falkovich (Jacobian) · 2021-11-01T16:08:02.700Z · comments (18)
Improving on the Karma System
Raelifin · 2021-11-14T18:01:30.049Z · comments (35)
Effective Evil
lsusr · 2021-11-02T00:26:29.910Z · comments (7)
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]
habryka (habryka4) · 2021-11-03T18:22:58.879Z · comments (4)
[Book Review] "The Bell Curve" by Charles Murray
lsusr · 2021-11-02T05:49:22.198Z · comments (134)
Comments on OpenPhil's Interpretability RFP
paulfchristiano · 2021-11-05T22:36:04.733Z · comments (5)
[Book Review] "Sorceror's Apprentice" by Tahir Shah
lsusr · 2021-11-20T11:29:21.361Z · comments (11)
AI Safety Needs Great Engineers
Andy Jones (andyljones) · 2021-11-23T15:40:18.358Z · comments (43)
A Bayesian Aggregation Paradox
Jsevillamol · 2021-11-22T10:39:59.935Z · comments (23)
Coordination Skills I Wish I Had For the Pandemic
Raemon · 2021-11-13T23:32:11.510Z · comments (9)
[link] Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability
TurnTrout · 2021-11-18T01:54:33.589Z · comments (8)
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation
Rob Bensinger (RobbBB) · 2021-11-08T02:19:04.189Z · comments (97)
A positive case for how we might succeed at prosaic AI alignment
evhub · 2021-11-16T01:49:48.464Z · comments (46)
What would we do if alignment were futile?
Grant Demaree (grant-demaree) · 2021-11-14T08:09:29.318Z · comments (39)
[question] Worst Commonsense Concepts?
abramdemski · 2021-11-15T18:22:31.465Z · answers+comments (34)
Covid 11/25: Another Thanksgiving
Zvi · 2021-11-25T13:40:00.787Z · comments (9)