LessWrong 2.0 Reader


Sunset at Noon
Raemon · 2017-11-29T14:52:45.889Z · comments (23)
The Mad Scientist Decision Problem
Linda Linsefors · 2017-11-29T11:41:33.640Z · comments (20)
The Right to be Wrong
sarahconstantin · 2017-11-28T23:43:24.210Z · comments (10)
Any Good Criticism of Karl Popper's Epistemology?
Elliot_Temple · 2017-11-28T22:31:05.362Z · comments (32)
Free Speech as Legal Right vs. Ethical Value
ozymandias · 2017-11-28T16:49:38.540Z · comments (29)
Stable agent, subagent-unstable
Stuart_Armstrong · 2017-11-28T16:05:53.000Z · comments (None)
Reward learning summary
Stuart_Armstrong · 2017-11-28T15:55:08.000Z · comments (None)
[link] Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
avturchin · 2017-11-28T15:39:37.000Z · comments (None)
Big Advance in Infinite Ethics
bwest · 2017-11-28T15:10:47.396Z · comments (14)
USA v Progressive 1979 excerpt
RyanCarey · 2017-11-27T17:32:10.425Z · comments (2)
You Have the Right to Think
Zvi · 2017-11-27T02:10:00.348Z · comments (2)
Security Mindset and the Logistic Success Curve
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-11-26T15:58:23.127Z · comments (45)
An Intuitive Explanation of Inferential Distance
RichardJActon · 2017-11-26T14:13:36.512Z · comments (6)
Changing habits for open threads
Hazard · 2017-11-26T12:54:27.413Z · comments (4)
[link] Letter from Utopia: Talking to Nick Bostrom
morganism · 2017-11-25T22:19:48.345Z · comments (2)
Security Mindset and Ordinary Paranoia
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-11-25T17:53:18.049Z · comments (21)
The Darwin Results
Zvi · 2017-11-25T13:30:00.351Z · comments (10)
Some mind experiments
RST · 2017-11-25T12:46:47.713Z · comments (2)
[link] Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
turchin · 2017-11-25T11:44:51.077Z · comments (19)
Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]
avturchin · 2017-11-25T11:28:04.420Z · comments (12)
Communities you might join thread
whpearson · 2017-11-25T09:07:09.087Z · comments (13)
Unjustified ideas comment thread
MrRobot · 2017-11-24T20:15:20.065Z · comments (24)
Timeless Modesty?
abramdemski · 2017-11-24T11:12:46.869Z · comments (2)
Gears Level & Policy Level
abramdemski · 2017-11-24T07:17:51.525Z · comments (8)
List of civilisational inadequacy
ChristianKl · 2017-11-23T13:56:23.822Z · comments (41)
[link] Open Letter to MIRI + Tons of Interesting Discussion
curi · 2017-11-22T21:16:45.231Z · comments (162)
Open thread, November 21 - November 28, 2017
ChristianKl · 2017-11-22T19:32:01.522Z · comments (None)
Fire drill proposal
MrRobot · 2017-11-22T19:07:58.441Z · comments (7)
A Day in Utopia
ozymandias · 2017-11-22T16:57:26.982Z · comments (10)
Civility Is Never Neutral
ozymandias · 2017-11-22T16:54:06.248Z · comments (15)
Next narrow-AI challenge proposal
MrRobot · 2017-11-22T11:32:57.215Z · comments (4)
An Educational Curriculum
DragonGod · 2017-11-22T10:11:58.779Z · comments (6)
Catastrophe Mitigation Using DRL
Vanessa Kosoy (vanessa-kosoy) · 2017-11-22T05:54:42.000Z · comments (None)
For fantasy fans
MrRobot · 2017-11-22T04:27:43.220Z · comments (None)
[meta] Tags or Sub-Groups
Chris_Leong · 2017-11-21T23:28:52.901Z · comments (5)
Hero Licensing
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-11-21T21:13:36.019Z · comments (83)
[meta] Sorry for being less around, will be back soon
habryka (habryka4) · 2017-11-21T20:40:14.509Z · comments (1)
Project proposal: Rationality Cookbook
toonalfrink · 2017-11-21T14:34:01.537Z · comments (19)
The Archipelago Model of Community Standards
Raemon · 2017-11-21T03:21:07.679Z · comments (26)
Arbitrary Math Questions
ryan_b · 2017-11-21T01:18:47.430Z · comments (2)
The Darwin Pregame
Zvi · 2017-11-21T01:10:00.372Z · comments (6)
Hubris, Pride, and Arrogance
linkhyrule5 · 2017-11-20T20:32:43.563Z · comments (7)
[link] A behaviorist approach to building phenomenological bridges
Johannes_Treutlein · 2017-11-20T19:36:46.000Z · comments (None)
Hogwarts House Primaries
ozymandias · 2017-11-20T17:56:34.504Z · comments (16)
[link] My Philosophy of Intelligence Alignment
whpearson · 2017-11-19T14:44:18.097Z · comments (1)
[meta] Revisiting HPMOR
DragonGod · 2017-11-19T08:55:31.552Z · comments (None)
Tragedy of the Commons
DragonGod · 2017-11-18T20:30:57.841Z · comments (None)
[link] Asgardia - The Space Kingdom
morganism · 2017-11-18T20:17:50.326Z · comments (2)
Implications of a feelings-first metaphysics?
efficientfox · 2017-11-18T17:10:35.828Z · comments (2)