LessWrong 2.0 Reader



[question] The tech left behind
Leafcraft · 2019-03-12T14:47:16.217Z · answers+comments (22)
Retrospective on a quantitative productivity logging attempt
femtogrammar · 2019-03-22T02:31:42.598Z · comments (5)
The Game Theory of Blackmail
Linda Linsefors · 2019-03-22T17:44:36.545Z · comments (17)
To understand, study edge cases
shminux · 2019-03-02T21:18:41.198Z · comments (8)
[link] Blackmailers are privateers in the war on hypocrisy
Benquo · 2019-03-14T08:13:12.824Z · comments (21)
Verifying vNM-rationality requires an ontology
jeyoor · 2019-03-13T00:03:17.017Z · comments (5)
[meta] List of Q&A Assumptions and Uncertainties [LW2.0 internal document]
Ruby · 2019-03-29T23:55:41.168Z · comments (15)
[link] The Unexpected Philosophical Depths of the Clicker Game Universal Paperclips
Jayson_Virissimo · 2019-03-28T23:39:28.461Z · comments (3)
Alignment Newsletter #49
Rohin Shah (rohinmshah) · 2019-03-20T04:20:01.333Z · comments (1)
Review of Q&A [LW2.0 internal document]
Ruby · 2019-03-29T23:15:57.335Z · comments (11)
[question] How large is the harm from info-cascades? [Info-cascade series]
jacobjacob · 2019-03-13T10:55:38.872Z · answers+comments (2)
Rationality Retreat in Cologne Area, Germany, Spring 2019
ZeitPolizei · 2019-03-04T21:36:27.401Z · comments (0)
Being wrong in ethics
Stuart_Armstrong · 2019-03-29T11:28:55.436Z · comments (0)
[question] Asking for help teaching a critical thinking class.
Ronny Fernandez (ronny-fernandez) · 2019-03-07T02:15:38.208Z · answers+comments (9)
[question] What should we expect from GPT-3?
avturchin · 2019-03-21T14:28:37.702Z · answers+comments (2)
Humans aren't agents - what then for value learning?
Charlie Steiner · 2019-03-15T22:01:38.839Z · comments (14)
Announcing predictions
Nathan Galt · 2019-03-28T21:01:17.102Z · comments (0)
Smoothmin and personal identity
Stuart_Armstrong · 2019-03-08T15:16:28.980Z · comments (0)
[link] IRL 1/8: Inverse Reinforcement Learning and the problem of degeneracy
RAISE · 2019-03-04T13:11:45.287Z · comments (2)
Leaky Concepts
Elo · 2019-03-05T22:01:37.595Z · comments (2)
Ideas for an action coordination website
Yoav Ravid · 2019-03-08T13:32:06.756Z · comments (37)
[question] How large is the fallout area of the biggest cobalt bomb we can build?
habryka (habryka4) · 2019-03-17T05:50:13.848Z · answers+comments (8)
The low cost of human preference incoherence
Stuart_Armstrong · 2019-03-27T11:58:14.845Z · comments (5)
Book Review: The Eureka Factor
drossbucket · 2019-03-04T19:47:23.483Z · comments (2)
[link] Cryopreservation of Valia Zeldin
avturchin · 2019-03-17T19:15:36.510Z · comments (0)
Example population ethics: ordered discounted utility
Stuart_Armstrong · 2019-03-11T16:10:43.458Z · comments (15)
What Vibing Feels Like
Matt Goldenberg (mr-hire) · 2019-03-11T20:10:30.017Z · comments (17)
[link] Blegg Mode
Zack_M_Davis · 2019-03-11T15:04:20.136Z · comments (68)
Alignment Newsletter #47
Rohin Shah (rohinmshah) · 2019-03-04T04:30:11.524Z · comments (0)
[link] Unsolved research problems vs. real-world threat models
catherio · 2019-03-26T22:10:08.371Z · comments (2)
[question] If you wrote a letter to your future self every day, what would you put in it?
Pee Doom (DonyChristie) · 2019-03-12T09:42:37.185Z · answers+comments (3)
[meta] LW Update 2019-03-12 -- Bugfixes, small features
jimrandomh · 2019-03-12T21:56:40.109Z · comments (7)
[link] Exponential Secretary
Jacob Falkovich (Jacobian) · 2019-03-04T19:47:48.912Z · comments (0)
[question] Formalising continuous info cascades? [Info-cascade series]
Ben Pace (Benito) · 2019-03-13T10:55:46.133Z · answers+comments (5)
A Concrete Proposal for Adversarial IDA
evhub · 2019-03-26T19:50:34.869Z · comments (5)
The Politics of Age (the Young vs. the Old)
Martin Sustrik (sustrik) · 2019-03-24T06:40:04.359Z · comments (17)
Muqaata'a by Fahad Himsi (I.)
Martin Sustrik (sustrik) · 2019-03-10T15:10:00.962Z · comments (4)
Question: MIRI Corrigibility Agenda
algon33 · 2019-03-13T19:38:05.729Z · comments (11)
"Moral" as a preference label
Stuart_Armstrong · 2019-03-26T10:30:17.102Z · comments (1)
Alignment Newsletter #50
Rohin Shah (rohinmshah) · 2019-03-28T18:10:01.264Z · comments (2)
On the Regulation of Perception
lionhearted (Sebastian Marshall) (lionhearted) · 2019-03-09T16:28:19.887Z · comments (8)
[question] Distribution of info-cascades across fields? [Info-cascade series]
jacobjacob · 2019-03-13T10:55:17.194Z · answers+comments (2)
[question] How can we respond to info-cascades? [Info-cascade series]
jacobjacob · 2019-03-13T10:55:25.685Z · answers+comments (12)
Syntax vs semantics: alarm better example than thermostat
Stuart_Armstrong · 2019-03-04T12:43:58.280Z · comments (1)
Mysteries, identity, and preferences over non-rewards
Stuart_Armstrong · 2019-03-14T13:52:40.170Z · comments (1)
[Method] The light side of motivation: positive feedback-loop
Vincent B · 2019-03-26T14:05:12.533Z · comments (1)
[question] Is there a difference between uncertainty over your utility function and uncertainty over outcomes?
Chris_Leong · 2019-03-18T18:41:38.246Z · answers+comments (12)
[Link] OpenAI LP
Alexei · 2019-03-12T23:22:59.861Z · comments (0)
A Tale of Four Moralities
Sailor Vulcan · 2019-03-24T03:46:34.026Z · comments (9)
Microcosmographia excerpt
ioannes (ioannes_shade) · 2019-03-29T18:29:14.239Z · comments (0)