LessWrong 2.0 Reader


Building up to an Internal Family Systems model
Kaj_Sotala · 2019-01-26T12:25:11.162Z · comments (86)
The 3 Books Technique for Learning a New Skill
Matt Goldenberg (mr-hire) · 2019-01-09T12:45:19.294Z · comments (48)
Book Summary: Consciousness and the Brain
Kaj_Sotala · 2019-01-16T14:43:59.202Z · comments (20)
Some Thoughts on My Psychiatry Practice
Laura B (Lara_Foster) · 2019-01-16T23:16:54.264Z · comments (43)
From Personal to Prison Gangs: Enforcing Prosocial Behavior
johnswentworth · 2019-01-24T18:07:33.262Z · comments (26)
Disentangling arguments for the importance of AI safety
Richard_Ngo (ricraz) · 2019-01-21T12:41:43.615Z · comments (23)
Sequence introduction: non-agent and multiagent models of mind
Kaj_Sotala · 2019-01-07T14:12:30.297Z · comments (15)
[link] Reframing Superintelligence: Comprehensive AI Services as General Intelligence
Rohin Shah (rohinmshah) · 2019-01-08T07:12:29.534Z · comments (77)
S-Curves for Trend Forecasting
Matt Goldenberg (mr-hire) · 2019-01-23T18:17:56.436Z · comments (23)
Book Review: The Structure Of Scientific Revolutions
Scott Alexander (Yvain) · 2019-01-09T07:10:02.152Z · comments (30)
[question] Why is so much discussion happening in private Google Docs?
Wei Dai (Wei_Dai) · 2019-01-12T02:19:19.332Z · answers+comments (22)
[question] What are the open problems in Human Rationality?
Raemon · 2019-01-13T04:46:38.581Z · answers+comments (93)
Less Competition, More Meritocracy?
Zvi · 2019-01-20T02:00:00.974Z · comments (19)
Announcement: AI alignment prize round 4 winners
cousin_it · 2019-01-20T14:46:47.912Z · comments (41)
Comments on CAIS
Richard_Ngo (ricraz) · 2019-01-12T15:20:22.133Z · comments (14)
Strategy is the Deconfusion of Action
ryan_b · 2019-01-02T20:56:28.124Z · comments (4)
AI safety without goal-directed behavior
Rohin Shah (rohinmshah) · 2019-01-07T07:48:18.705Z · comments (15)
Combat vs Nurture & Meta-Contrarianism
abramdemski · 2019-01-10T23:17:58.703Z · comments (12)
[link] "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros]
gwern · 2019-01-24T20:49:01.350Z · comments (52)
Will humans build goal-directed agents?
Rohin Shah (rohinmshah) · 2019-01-05T01:33:36.548Z · comments (43)
[question] Does anti-malaria charity destroy the local anti-malaria industry?
Viliam · 2019-01-05T19:04:57.601Z · answers+comments (16)
The Relationship Between Hierarchy and Wealth
sarahconstantin · 2019-01-23T02:00:00.467Z · comments (8)
Two More Decision Theory Problems for Humans
Wei Dai (Wei_Dai) · 2019-01-04T09:00:33.436Z · comments (14)
Megaproject management
ryan_b · 2019-01-11T17:08:37.308Z · comments (11)
Masculine Virtues
Jacob Falkovich (Jacobian) · 2019-01-30T16:03:56.000Z · comments (32)
What AI Safety Researchers Have Written About the Nature of Human Values
avturchin · 2019-01-16T13:59:31.522Z · comments (3)
Learning-Intentions vs Doing-Intentions
Ruby · 2019-01-01T22:22:39.364Z · comments (14)
Non-Consequentialist Cooperation?
abramdemski · 2019-01-11T09:15:36.875Z · comments (15)
Directions and desiderata for AI alignment
paulfchristiano · 2019-01-13T07:47:13.581Z · comments (1)
Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem)
Zvi · 2019-01-30T01:10:00.414Z · comments (15)
Book Recommendations: An Everyone Culture and Moral Mazes
sarahconstantin · 2019-01-10T21:40:04.163Z · comments (13)
A Framework for Internal Debugging
Matt Goldenberg (mr-hire) · 2019-01-16T16:04:16.478Z · comments (3)
Optimizing for Stories (vs Optimizing Reality)
Ruby · 2019-01-07T08:03:22.512Z · comments (11)
And My Axiom! Insights from 'Computability and Logic'
TurnTrout · 2019-01-16T19:48:47.388Z · comments (17)
Anthropics is pretty normal
Stuart_Armstrong · 2019-01-17T13:26:22.929Z · comments (9)
CDT=EDT=UDT
abramdemski · 2019-01-13T23:46:10.866Z · comments (16)
Can there be an indescribable hellworld?
Stuart_Armstrong · 2019-01-29T15:00:54.481Z · comments (19)
I want it my way!
nickhayes · 2019-01-04T18:08:43.955Z · comments (2)
Too Smart for My Own Good
isovector · 2019-01-22T17:51:10.960Z · comments (4)
[link] Littlewood's Law and the Global Media
gwern · 2019-01-12T17:46:09.753Z · comments (3)
[question] What are questions?
Pee Doom (DonyChristie) · 2019-01-09T07:37:30.246Z · answers+comments (17)
Human-AI Interaction
Rohin Shah (rohinmshah) · 2019-01-15T01:57:15.558Z · comments (10)
AlphaGo Zero and capability amplification
paulfchristiano · 2019-01-09T00:40:13.391Z · comments (23)
[question] What is a reasonable outside view for the fate of social movements?
jacobjacob · 2019-01-04T00:21:20.603Z · answers+comments (27)
Disadvantages of Card Rebalancing
Zvi · 2019-01-06T23:30:08.255Z · comments (5)
Starting to see 2 months later
Pausecafe · 2019-01-23T20:46:45.901Z · comments (3)
Alignment Newsletter #39
Rohin Shah (rohinmshah) · 2019-01-01T08:10:01.379Z · comments (2)
[question] What exercises go best with 3 blue 1 brown's Linear Algebra videos?
Raemon · 2019-01-01T21:29:37.599Z · answers+comments (12)
Following human norms
Rohin Shah (rohinmshah) · 2019-01-20T23:59:16.742Z · comments (10)
One Website To Rule Them All?
anna_macdonald · 2019-01-11T19:14:51.445Z · comments (23)