LessWrong 2.0 Reader


Power Buys You Distance From The Crime
Elizabeth (pktechgirl) · 2019-08-02T20:50:01.106Z · comments (75)
Why Subagents?
johnswentworth · 2019-08-01T22:17:26.415Z · comments (33)
Soft takeoff can still lead to decisive strategic advantage
Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T16:39:31.317Z · comments (46)
The "Commitment Races" problem
Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T01:58:19.669Z · comments (34)
Trauma, Meditation, and a Cool Scar
elriggs · 2019-08-06T16:17:39.912Z · comments (17)
Partial summary of debate with Benquo and Jessicata [pt 1]
Raemon · 2019-08-14T20:02:04.314Z · comments (66)
[question] Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening?
MakoYass · 2019-08-05T00:12:14.630Z · answers+comments (55)
Subagents, trauma and rationality
Kaj_Sotala · 2019-08-14T13:14:46.838Z · comments (4)
Clarifying some key hypotheses in AI alignment
Ben Cottier (ben-cottier) · 2019-08-15T21:29:06.564Z · comments (11)
Troll Bridge
abramdemski · 2019-08-23T18:36:39.584Z · comments (51)
Problems in AI Alignment that philosophers could potentially contribute to
Wei_Dai · 2019-08-17T17:38:31.757Z · comments (14)
2-D Robustness
vlad_m · 2019-08-30T20:27:34.432Z · comments (1)
Subagents, neural Turing machines, thought selection, and blindspots
Kaj_Sotala · 2019-08-06T21:15:24.400Z · comments (3)
Classifying specification problems as variants of Goodhart's Law
Vika · 2019-08-19T20:40:29.499Z · comments (5)
Six AI Risk/Strategy Ideas
Wei_Dai · 2019-08-27T00:40:38.672Z · comments (18)
Markets are Universal for Logical Induction
johnswentworth · 2019-08-22T06:44:56.532Z · comments (0)
Book Review: Secular Cycles
Scott Alexander (Yvain) · 2019-08-13T04:10:01.201Z · comments (11)
Response to Glen Weyl on Technocracy and the Rationalist Community
John_Maxwell (John_Maxwell_IV) · 2019-08-22T23:14:58.690Z · comments (9)
Dual Wielding
Zvi · 2019-08-27T14:10:00.715Z · comments (23)
How to Make Billions of Dollars Reducing Loneliness
John_Maxwell (John_Maxwell_IV) · 2019-08-30T17:30:50.006Z · comments (32)
Intentional Bucket Errors
Scott Garrabrant · 2019-08-22T20:02:11.357Z · comments (6)
Permissions in Governance
sarahconstantin · 2019-08-02T19:50:00.592Z · comments (12)
Tabooing 'Agent' for Prosaic Alignment
Hjalmar_Wijk · 2019-08-23T02:55:57.591Z · comments (9)
A Personal Rationality Wishlist
DanielFilan · 2019-08-27T03:40:00.669Z · comments (54)
[question] Why so much variance in human intelligence?
Ben Pace (Benito) · 2019-08-22T22:36:55.499Z · answers+comments (28)
Actually updating
SaraHax · 2019-08-23T17:46:01.924Z · comments (10)
Schelling Categories, and Simple Membership Tests
Zack_M_Davis · 2019-08-26T02:43:53.347Z · comments (10)
AI Forecasting Dictionary (Forecasting infrastructure, part 1)
jacobjacob · 2019-08-08T16:10:51.516Z · comments (None)
[link] Status 451 on Diagnosis: Russell Aphasia
Zack_M_Davis · 2019-08-06T04:43:30.359Z · comments (1)
Vaniver's View on Factored Cognition
Vaniver · 2019-08-23T02:54:00.915Z · comments (4)
September Bragging Thread
Raemon · 2019-08-30T21:58:45.918Z · comments (12)
[question] How Can People Evaluate Complex Questions Consistently?
Elizabeth (pktechgirl) · 2019-08-26T20:33:14.840Z · answers+comments (12)
[Link] Book Review: Reframing Superintelligence (SSC)
ioannes (ioannes_shade) · 2019-08-28T22:57:09.455Z · comments (9)
Towards a mechanistic understanding of corrigibility
evhub · 2019-08-22T23:20:57.134Z · comments (26)
The Missing Math of Map-Making
johnswentworth · 2019-08-28T21:18:25.298Z · comments (8)
My recommendations for gratitude exercises
MaxCarpendale · 2019-08-05T19:04:41.973Z · comments (3)
Call for contributors to the Alignment Newsletter
rohinmshah · 2019-08-21T18:21:31.113Z · comments (None)
Computational Model: Causal Diagrams with Symmetry
johnswentworth · 2019-08-22T17:54:11.274Z · comments (27)
Epistemic Spot Check: The Fate of Rome (Kyle Harper)
Elizabeth (pktechgirl) · 2019-08-24T21:40:01.164Z · comments (3)
LW Team Updates - September 2019
Ruby · 2019-08-29T22:12:55.747Z · comments (16)
Unstriving
Jacob Falkovich (Jacobian) · 2019-08-19T14:31:56.786Z · comments (7)
[question] Does Agent-like Behavior Imply Agent-like Architecture?
Scott Garrabrant · 2019-08-23T02:01:09.651Z · answers+comments (7)
Optimization Provenance
Adele Lopez (adele-lopez-1) · 2019-08-23T20:08:13.013Z · comments (5)
Diana Fleischman and Geoffrey Miller - Audience Q&A
Jacob Falkovich (Jacobian) · 2019-08-10T22:37:53.090Z · comments (14)
AI Alignment Open Thread August 2019
habryka (habryka4) · 2019-08-04T22:09:38.431Z · comments (96)
New paper: Corrigibility with Utility Preservation
Koen.Holtman · 2019-08-06T19:04:26.386Z · comments (11)
AI Forecasting Resolution Council (Forecasting infrastructure, part 2)
jacobjacob · 2019-08-29T17:35:26.962Z · comments (2)
Cephaloponderings
Jacob Falkovich (Jacobian) · 2019-08-04T16:45:57.065Z · comments (4)
Verification and Transparency
DanielFilan · 2019-08-08T01:50:00.935Z · comments (6)
[question] What authors consistently give accurate pictures of complex topics they discuss?
seez · 2019-08-21T00:09:52.057Z · answers+comments (3)