LessWrong 2.0 Reader

Humans Consulting HCH
paulfchristiano · 2018-11-25T23:18:55.247Z · comments (9)
No Really, Why Aren't Rationalists Winning?
Sailor Vulcan · 2018-11-04T18:11:15.586Z · comments (90)
Combat vs Nurture: Cultural Genesis
Ruby · 2018-11-12T02:11:42.921Z · comments (12)
[link] New safety research agenda: scalable agent alignment via reward modeling
Vika · 2018-11-20T17:29:22.751Z · comments (12)
Acknowledging Human Preference Types to Support Value Learning
Nandi · 2018-11-13T18:57:52.965Z · comments (4)
Values Weren't Complex, Once.
Davidmanheim · 2018-11-25T09:17:02.207Z · comments (13)
Model Mis-specification and Inverse Reinforcement Learning
Owain_Evans · 2018-11-09T15:33:02.630Z · comments (3)
Iteration Fixed Point Exercises
Scott Garrabrant · 2018-11-22T00:35:09.885Z · comments (12)
Letting Others Be Vulnerable
[deleted] · 2018-11-19T02:59:21.423Z · comments (6)
Approval-directed agents
paulfchristiano · 2018-11-22T21:15:28.956Z · comments (10)
[link] How rapidly are GPUs improving in price performance?
gallabytes · 2018-11-25T19:54:10.830Z · comments (9)
Reflective oracles as a solution to the converse Lawvere problem
SamEisenstat · 2018-11-29T03:23:51.180Z · comments (0)
An unaligned benchmark
paulfchristiano · 2018-11-17T15:51:03.448Z · comments (0)
Stabilize-Reflect-Execute
ozziegooen · 2018-11-28T17:26:39.741Z · comments (1)
Hyperreal Brouwer
Scott Garrabrant · 2018-11-29T03:15:23.650Z · comments (2)
Summary: Surreal Decisions
Chris_Leong · 2018-11-27T14:15:07.342Z · comments (19)
Genetically Modified Humans Born (Allegedly)
ryan_b · 2018-11-28T16:14:05.477Z · comments (12)
[link] October gwern.net links
gwern · 2018-11-01T01:11:28.763Z · comments (8)
Discussion on the machine learning approach to AI safety
Vika · 2018-11-01T20:54:39.195Z · comments (3)
[meta] The new Effective Altruism forum just launched
habryka (habryka4) · 2018-11-08T01:59:01.502Z · comments (6)
Status model
Bucky · 2018-11-26T15:05:12.105Z · comments (7)
Bounded Oracle Induction
Diffractor · 2018-11-28T08:11:28.183Z · comments (0)
Beliefs at different timescales
Nisan · 2018-11-04T20:10:59.223Z · comments (12)
Clickbait might not be destroying our general Intelligence
Donald Hobson (donald-hobson) · 2018-11-19T00:13:12.674Z · comments (13)
[link] Specification gaming examples in AI
Samuel Rødal (samuel-rodal) · 2018-11-10T12:00:29.369Z · comments (6)
Kelly bettors
DanielFilan · 2018-11-13T00:40:01.074Z · comments (3)
deluks917 on Online Weirdos
Jacob Falkovich (Jacobian) · 2018-11-24T17:03:20.474Z · comments (3)
[link] The Inspection Paradox is Everywhere
Chris_Leong · 2018-11-15T10:55:43.654Z · comments (3)
Latent Variables and Model Mis-Specification
jsteinhardt · 2018-11-07T14:48:40.434Z · comments (8)
Alignment Newsletter #34
Rohin Shah (rohinmshah) · 2018-11-26T23:10:03.388Z · comments (0)
Rationality of demonstrating & voting
bfinn · 2018-11-07T00:09:44.239Z · comments (21)
Alignment Newsletter #33
Rohin Shah (rohinmshah) · 2018-11-19T17:20:03.463Z · comments (0)
[link] On first looking into Russell's History
Richard_Ngo (ricraz) · 2018-11-08T11:20:00.935Z · comments (6)
[meta] Suggestion: New material shouldn't be released too fast
Chris_Leong · 2018-11-21T16:39:19.495Z · comments (7)
Approval-directed bootstrapping
paulfchristiano · 2018-11-25T23:18:47.542Z · comments (0)
[link] Speculations on improving debating
Richard_Ngo (ricraz) · 2018-11-05T16:10:02.799Z · comments (4)
Bayes Questions
Bucky · 2018-11-07T16:54:38.800Z · comments (13)
Review: Artifact
Zvi · 2018-11-22T15:00:01.335Z · comments (3)
The Ubiquitous Converse Lawvere Problem
Scott Garrabrant · 2018-11-29T03:16:16.453Z · comments (0)
AI development incentive gradients are not uniformly terrible
rk · 2018-11-12T16:27:31.886Z · comments (12)
Meta-execution
paulfchristiano · 2018-11-01T22:18:10.656Z · comments (1)
[link] Implementations of immortality
Richard_Ngo (ricraz) · 2018-11-01T14:20:01.494Z · comments (11)
[link] The Semantic Man
namespace (ingres) · 2018-11-22T08:38:21.220Z · comments (4)
Alignment Newsletter #32
Rohin Shah (rohinmshah) · 2018-11-12T17:20:03.572Z · comments (0)
[Insert clever intro here]
Bae's Theorem (Senarin) · 2018-11-20T03:26:32.420Z · comments (13)
Real-time hiring with prediction markets
ryan_b · 2018-11-09T22:10:18.576Z · comments (9)
Alignment Newsletter #31
Rohin Shah (rohinmshah) · 2018-11-05T23:50:02.432Z · comments (0)
What if people simply forecasted your future choices?
ozziegooen · 2018-11-23T10:52:25.471Z · comments (6)
Madison Solstice Gathering
mingyuan · 2018-11-28T21:36:24.846Z · comments (0)
Goodhart's Law and Genies
thomascolthurst · 2018-11-01T01:38:41.933Z · comments (5)