LessWrong 2.0 Reader

Discussion Thread: The AI Does Not Hate You by Tom Chivers
Ruby · 2019-06-17T23:43:00.297Z · comments (10)
[link] Reason isn't magic
Benquo · 2019-06-18T04:04:58.390Z · comments (19)
[question] Is the "business cycle" an actual economic principle?
Liron · 2019-06-18T14:52:00.348Z · answers+comments (15)
Magic Arena Bot Drafting
Zvi · 2019-06-18T16:00:00.402Z · comments (1)
[link] Reneging prosocially by Duncan Sabien
philip_b (crabman) · 2019-06-18T18:52:46.501Z · comments (5)
1hr talk: Intro to AGI safety
Steven Byrnes (steve2152) · 2019-06-18T21:41:29.371Z · comments (4)
[meta] LW2.0: Community, Culture, and Intellectual Progress
Ruby · 2019-06-19T20:25:08.682Z · comments (2)
[meta] LW2.0: Technology Platform for Intellectual Progress
Ruby · 2019-06-19T20:25:20.228Z · comments (5)
Arguing Definitions
Hazard · 2019-06-19T20:29:44.323Z · comments (1)
Words Aren't Type Safe
Hazard · 2019-06-19T20:34:23.699Z · comments (12)
Defending points you don't care about
Hazard · 2019-06-19T20:40:05.152Z · comments (6)
Should there be a header feature?
Chris_Leong · 2019-06-20T06:45:38.654Z · comments (2)
STRUCTURE: A Hazardous Guide to Words
Hazard · 2019-06-20T15:27:45.276Z · comments (7)
[meta] [question] LW Dev question: FB-style tagging?
Dr_Manhattan · 2019-06-20T19:19:45.807Z · answers+comments (1)
A case for strategy research: what it is and why we need more of it
Siebe · 2019-06-20T20:22:14.478Z · comments (19)
Let Values Drift
Gordon Seidoh Worley (gworley) · 2019-06-20T20:45:36.618Z · comments (19)
Should rationality be a movement?
Chris_Leong · 2019-06-20T23:09:10.555Z · comments (13)
Is your uncertainty resolvable?
Raemon · 2019-06-21T07:32:00.819Z · comments (15)
[link] Modeling AGI Safety Frameworks with Causal Influence Diagrams
Ramana Kumar (ramana-kumar) · 2019-06-21T12:50:08.233Z · comments (6)
Splitting Concepts
Hazard · 2019-06-21T16:03:11.177Z · comments (0)
Decisions are hard, words feel easier
Hazard · 2019-06-21T16:17:22.366Z · comments (4)
[link] "The Bitter Lesson", an article about compute vs human knowledge in AI
the gears to ascension (lahwran) · 2019-06-21T17:24:50.825Z · comments (14)
[link] The Hacker Learns to Trust
Ben Pace (Benito) · 2019-06-22T00:27:55.298Z · comments (18)
How hard is it for altruists to discuss going against bad equilibria?
abramdemski · 2019-06-22T03:42:24.416Z · comments (6)
The Foundational Toolbox for Life: Introduction
ExCeph · 2019-06-22T06:11:59.497Z · comments (2)
Podcast - Putanumonit on The Switch
Jacob Falkovich (Jacobian) · 2019-06-23T04:09:25.723Z · comments (2)
How does one get invited to the alignment forum?
Chris_Leong · 2019-06-23T09:39:20.042Z · comments (5)
Explaining "The Crackpot Bet"
glennonymous · 2019-06-24T15:17:59.126Z · comments (20)
[AN #58] Mesa optimization: what it is, and why we should care
Rohin Shah (rohinmshah) · 2019-06-24T16:10:01.330Z · comments (10)
Being the (Pareto) Best in the World
johnswentworth · 2019-06-24T18:36:45.929Z · comments (57)
Machine Learning Projects on IDA
Owain_Evans · 2019-06-24T18:38:18.873Z · comments (3)
Causal Reality vs Social Reality
Ruby · 2019-06-24T23:50:19.079Z · comments (96)
Pittsburgh SSC/RAT-adjacent meetup
fr8train · 2019-06-25T02:11:44.243Z · comments (0)
[question] What's up with self-esteem?
abramdemski · 2019-06-25T03:38:15.991Z · answers+comments (14)
Arbital Scrape V2
emmab · 2019-06-25T10:03:21.962Z · comments (6)
Research Agenda in reverse: what *would* a solution look like?
Stuart_Armstrong · 2019-06-25T13:52:48.934Z · comments (25)
[question] Only optimize to 95 %
Leo Shine (leo-shine) · 2019-06-25T20:50:27.908Z · answers+comments (9)
[link] Writing children's picture books
jessicata (jessica.liu.taylor) · 2019-06-25T21:43:45.578Z · comments (22)
Epistemic Spot Check: The Role of Deliberate Practice in the Acquisition of Expert Performance
Elizabeth (pktechgirl) · 2019-06-25T23:00:00.689Z · comments (9)
[question] What does the word "collaborative" mean in the phrase "collaborative truthseeking"?
Zack_M_Davis · 2019-06-26T05:26:42.295Z · answers+comments (23)
Apocalypse, corrupted
Stuart_Armstrong · 2019-06-26T13:46:05.548Z · comments (13)
Jordan Peterson on AI-FOOM
Bound_up · 2019-06-26T17:05:49.221Z · comments (7)
Embedded Agency: Not Just an AI Problem
johnswentworth · 2019-06-27T00:35:31.857Z · comments (10)
[question] What are principled ways for penalising complexity in practice?
Bucky · 2019-06-27T07:28:16.850Z · answers+comments (12)
[question] Is it good practice to write questions/comments on old posts you're trying to understand?
Liam Donovan (liam-donovan) · 2019-06-27T09:23:01.619Z · answers+comments (5)
[question] How to deal with a misleading conference talk about AI risk?
rmoehn · 2019-06-27T21:04:32.828Z · answers+comments (12)
[question] Why would "necro-ing" be a bad idea?
Nebu · 2019-06-28T02:21:43.537Z · answers+comments (0)
False assumptions and leaky abstractions in machine learning and AI safety
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-06-28T04:54:47.119Z · comments (3)
[link] Instead of "I'm anxious," try "I feel threatened"
Holly_Elmore · 2019-06-28T05:24:52.593Z · comments (19)
[link] GreaterWrong Arbital Viewer
clone of saturn · 2019-06-28T06:32:22.278Z · comments (9)