LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

Human-AI Interaction
Rohin Shah (rohinmshah) · 2019-01-15T01:57:15.558Z · comments (10)
[link] Critical Thinking in Medicine
Yoav Ravid · 2019-01-15T15:09:42.716Z · comments (0)
[question] What are the components of intellectual honesty?
abramdemski · 2019-01-15T20:00:09.144Z · answers+comments (4)
[question] Is there a.. more exact.. way of scoring a predictor's calibration?
mako yass (MakoYass) · 2019-01-16T08:19:15.744Z · answers+comments (6)
What AI Safety Researchers Have Written About the Nature of Human Values
avturchin · 2019-01-16T13:59:31.522Z · comments (3)
Book Summary: Consciousness and the Brain
Kaj_Sotala · 2019-01-16T14:43:59.202Z · comments (20)
A Framework for Internal Debugging
Matt Goldenberg (mr-hire) · 2019-01-16T16:04:16.478Z · comments (3)
Buy shares in a megaproject
ryan_b · 2019-01-16T16:18:50.177Z · comments (10)
The reward engineering problem
paulfchristiano · 2019-01-16T18:47:24.075Z · comments (3)
And My Axiom! Insights from 'Computability and Logic'
TurnTrout · 2019-01-16T19:48:47.388Z · comments (17)
Some Thoughts on My Psychiatry Practice
Laura B (Lara_Foster) · 2019-01-16T23:16:54.264Z · comments (43)
Clothing For Men
Solnassant · 2019-01-17T00:23:50.567Z · comments (41)
Alignment Newsletter #41
Rohin Shah (rohinmshah) · 2019-01-17T08:10:01.958Z · comments (6)
Moscow LW meetup in "Nauchka" library
Alexander230 · 2019-01-17T10:37:34.892Z · comments (0)
In SIA, reference classes (almost) don't matter
Stuart_Armstrong · 2019-01-17T11:29:26.131Z · comments (15)
The questions and classes of SSA
Stuart_Armstrong · 2019-01-17T11:50:50.828Z · comments (0)
Solving the Doomsday argument
Stuart_Armstrong · 2019-01-17T12:32:23.104Z · comments (16)
Anthropics is pretty normal
Stuart_Armstrong · 2019-01-17T13:26:22.929Z · comments (9)
Debate AI and the Decision to Release an AI
Chris_Leong · 2019-01-17T14:36:53.512Z · comments (18)
Finance Followups
Jacob Falkovich (Jacobian) · 2019-01-17T17:54:44.256Z · comments (8)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th
habryka (habryka4) · 2019-01-17T20:27:17.619Z · comments (2)
The Very Repugnant Conclusion
Stuart_Armstrong · 2019-01-18T14:26:08.083Z · comments (19)
Synthesising divergent preferences: an example in population ethics
Stuart_Armstrong · 2019-01-18T14:29:18.805Z · comments (8)
How could shares in a megaproject return value to shareholders?
ryan_b · 2019-01-18T18:36:34.916Z · comments (12)
[link] Curing the World of Men
Jacob Falkovich (Jacobian) · 2019-01-18T20:23:18.006Z · comments (4)
[link] Bundle your Experiments
PeterMcCluskey · 2019-01-18T23:22:08.660Z · comments (0)
[link] Doing Despite Disliking: Self‐regulatory Strategies in Everyday Aversive Activities
[deleted] · 2019-01-19T00:27:05.605Z · comments (1)
Reward uncertainty
Rohin Shah (rohinmshah) · 2019-01-19T02:16:05.194Z · comments (3)
[question] What math do i need for data analysis?
Yoav Ravid · 2019-01-19T10:08:28.398Z · answers+comments (13)
[question] Why not tool AI?
smithee · 2019-01-19T22:18:11.248Z · answers+comments (10)
Less Competition, More Meritocracy?
Zvi · 2019-01-20T02:00:00.974Z · comments (19)
Capability amplification
paulfchristiano · 2019-01-20T07:03:27.879Z · comments (8)
Announcement: AI alignment prize round 4 winners
cousin_it · 2019-01-20T14:46:47.912Z · comments (41)
Following human norms
Rohin Shah (rohinmshah) · 2019-01-20T23:59:16.742Z · comments (10)
Do the best ideas float to the top?
Quinn (quinn-dougherty) · 2019-01-21T05:22:51.182Z · comments (11)
Disentangling arguments for the importance of AI safety
Richard_Ngo (ricraz) · 2019-01-21T12:41:43.615Z · comments (23)
Game Analysis Index
Zvi · 2019-01-21T15:30:00.371Z · comments (0)
January 2019 Nashville SSC Meetup
Dude McDude (dude-mcdude) · 2019-01-21T21:51:45.118Z · comments (3)
Pittsburgh SSC Meetup
fr8train · 2019-01-22T00:04:03.782Z · comments (0)
Alignment Newsletter #42
Rohin Shah (rohinmshah) · 2019-01-22T02:00:02.082Z · comments (1)
[meta] [question] Should questions be called "questions" or "confusions" (or "other")?
Raemon · 2019-01-22T02:45:01.211Z · answers+comments (4)
[link] Why don't people use formal methods?
John_Maxwell (John_Maxwell_IV) · 2019-01-22T09:39:46.721Z · comments (5)
Stale air / high CO2 may decrease your cognitive function
Yoav Ravid · 2019-01-22T12:52:06.070Z · comments (6)
[meta] Vote counting bug?
Richard_Ngo (ricraz) · 2019-01-22T15:44:48.154Z · comments (8)
Too Smart for My Own Good
isovector · 2019-01-22T17:51:10.960Z · comments (4)
The Relationship Between Hierarchy and Wealth
sarahconstantin · 2019-01-23T02:00:00.467Z · comments (8)
Learning with catastrophes
paulfchristiano · 2019-01-23T03:01:26.397Z · comments (9)
Link: That Time a Guy Tried to Build a Utopia for Mice and it all Went to Hell
Alexei · 2019-01-23T06:27:05.219Z · comments (6)
Could declining interest to the Doomsday Argument explain the Doomsday Argument?
avturchin · 2019-01-23T11:51:57.012Z · comments (3)
Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery
Alexander230 · 2019-01-23T11:57:49.020Z · comments (0)