LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

[meta] [question] Should questions be called "questions" or "confusions" (or "other")?
Raemon · 2019-01-22T02:45:01.211Z · answers+comments (4)
Alignment Newsletter #42
Rohin Shah (rohinmshah) · 2019-01-22T02:00:02.082Z · comments (1)
Pittsburgh SSC Meetup
fr8train · 2019-01-22T00:04:03.782Z · comments (0)
January 2019 Nashville SSC Meetup
Dude McDude (dude-mcdude) · 2019-01-21T21:51:45.118Z · comments (3)
Game Analysis Index
Zvi · 2019-01-21T15:30:00.371Z · comments (0)
Disentangling arguments for the importance of AI safety
Richard_Ngo (ricraz) · 2019-01-21T12:41:43.615Z · comments (23)
Do the best ideas float to the top?
Quinn (quinn-dougherty) · 2019-01-21T05:22:51.182Z · comments (11)
Following human norms
Rohin Shah (rohinmshah) · 2019-01-20T23:59:16.742Z · comments (10)
Announcement: AI alignment prize round 4 winners
cousin_it · 2019-01-20T14:46:47.912Z · comments (41)
Capability amplification
paulfchristiano · 2019-01-20T07:03:27.879Z · comments (8)
Less Competition, More Meritocracy?
Zvi · 2019-01-20T02:00:00.974Z · comments (19)
[question] Why not tool AI?
smithee · 2019-01-19T22:18:11.248Z · answers+comments (10)
[question] What math do I need for data analysis?
Yoav Ravid · 2019-01-19T10:08:28.398Z · answers+comments (13)
Reward uncertainty
Rohin Shah (rohinmshah) · 2019-01-19T02:16:05.194Z · comments (3)
[link] Doing Despite Disliking: Self‐regulatory Strategies in Everyday Aversive Activities
[deleted] · 2019-01-19T00:27:05.605Z · comments (1)
[link] Bundle your Experiments
PeterMcCluskey · 2019-01-18T23:22:08.660Z · comments (0)
[link] Curing the World of Men
Jacob Falkovich (Jacobian) · 2019-01-18T20:23:18.006Z · comments (4)
How could shares in a megaproject return value to shareholders?
ryan_b · 2019-01-18T18:36:34.916Z · comments (12)
Synthesising divergent preferences: an example in population ethics
Stuart_Armstrong · 2019-01-18T14:29:18.805Z · comments (8)
The Very Repugnant Conclusion
Stuart_Armstrong · 2019-01-18T14:26:08.083Z · comments (19)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th
habryka (habryka4) · 2019-01-17T20:27:17.619Z · comments (2)
Finance Followups
Jacob Falkovich (Jacobian) · 2019-01-17T17:54:44.256Z · comments (8)
Debate AI and the Decision to Release an AI
Chris_Leong · 2019-01-17T14:36:53.512Z · comments (18)
Anthropics is pretty normal
Stuart_Armstrong · 2019-01-17T13:26:22.929Z · comments (9)
Solving the Doomsday argument
Stuart_Armstrong · 2019-01-17T12:32:23.104Z · comments (16)
The questions and classes of SSA
Stuart_Armstrong · 2019-01-17T11:50:50.828Z · comments (0)
In SIA, reference classes (almost) don't matter
Stuart_Armstrong · 2019-01-17T11:29:26.131Z · comments (15)
Moscow LW meetup in "Nauchka" library
Alexander230 · 2019-01-17T10:37:34.892Z · comments (0)
Alignment Newsletter #41
Rohin Shah (rohinmshah) · 2019-01-17T08:10:01.958Z · comments (6)
Clothing For Men
Solnassant · 2019-01-17T00:23:50.567Z · comments (41)
Some Thoughts on My Psychiatry Practice
Laura B (Lara_Foster) · 2019-01-16T23:16:54.264Z · comments (43)
And My Axiom! Insights from 'Computability and Logic'
TurnTrout · 2019-01-16T19:48:47.388Z · comments (17)
The reward engineering problem
paulfchristiano · 2019-01-16T18:47:24.075Z · comments (3)
Buy shares in a megaproject
ryan_b · 2019-01-16T16:18:50.177Z · comments (10)
A Framework for Internal Debugging
Matt Goldenberg (mr-hire) · 2019-01-16T16:04:16.478Z · comments (3)
Book Summary: Consciousness and the Brain
Kaj_Sotala · 2019-01-16T14:43:59.202Z · comments (20)
What AI Safety Researchers Have Written About the Nature of Human Values
avturchin · 2019-01-16T13:59:31.522Z · comments (3)
[question] Is there a.. more exact.. way of scoring a predictor's calibration?
mako yass (MakoYass) · 2019-01-16T08:19:15.744Z · answers+comments (6)
[question] What are the components of intellectual honesty?
abramdemski · 2019-01-15T20:00:09.144Z · answers+comments (4)
[link] Critical Thinking in Medicine
Yoav Ravid · 2019-01-15T15:09:42.716Z · comments (0)
Human-AI Interaction
Rohin Shah (rohinmshah) · 2019-01-15T01:57:15.558Z · comments (10)
Anthropic probabilities: answering different questions
Stuart_Armstrong · 2019-01-14T18:50:56.086Z · comments (2)
A short insight I see everywhere
ArthurLidia · 2019-01-14T18:16:39.352Z · comments (1)
Anthropics: Full Non-indexical Conditioning (FNC) is inconsistent
Stuart_Armstrong · 2019-01-14T15:03:04.288Z · comments (4)
The Tether Theory and the Concrete, Subtle and Causal tiers
Elo · 2019-01-14T00:07:50.829Z · comments (4)
CDT=EDT=UDT
abramdemski · 2019-01-13T23:46:10.866Z · comments (16)
South Bay Meetup 1/27/19
David Friedman (david-friedman) · 2019-01-13T21:30:57.831Z · comments (0)
Towards formalizing universality
paulfchristiano · 2019-01-13T20:39:21.726Z · comments (19)
[question] When is CDT Dutch-Bookable?
abramdemski · 2019-01-13T18:54:12.070Z · answers+comments (2)
Directions and desiderata for AI alignment
paulfchristiano · 2019-01-13T07:47:13.581Z · comments (1)