LessWrong 2.0 Reader


Hell Must Be Destroyed
algekalipso · 2018-12-06T04:11:19.417Z · comments (1)
Trivial Inconvenience Day (December 9th at 12 Noon PST)
namespace (ingres) · 2018-12-07T01:26:22.870Z · comments (1)
Assuming we've solved X, could we do Y...
Stuart_Armstrong · 2018-12-11T18:13:56.021Z · comments (16)
Systems Engineering and the META Program
ryan_b · 2018-12-20T20:19:25.819Z · comments (3)
You can be wrong about what you like, and you often are
Adam Zerner (adamzerner) · 2018-12-17T23:49:39.935Z · comments (21)
[link] Reinterpreting "AI and Compute"
habryka (habryka4) · 2018-12-25T21:12:11.236Z · comments (9)
[question] What precisely do we mean by AI alignment?
Gordon Seidoh Worley (gworley) · 2018-12-09T02:23:28.809Z · answers+comments (8)
A hundred Shakespeares
Stuart_Armstrong · 2018-12-11T23:11:48.668Z · comments (5)
On Disingenuity
Chris_Leong · 2018-12-26T17:08:47.138Z · comments (2)
[question] Can dying people "hold on" for something they are waiting for?
Raemon · 2018-12-27T19:53:35.436Z · answers+comments (7)
Peanut Butter
Jacob Falkovich (Jacobian) · 2018-12-03T19:30:56.907Z · comments (3)
[link] Book review: Artificial Intelligence Safety and Security
PeterMcCluskey · 2018-12-08T03:47:17.098Z · comments (3)
Conceptual Analysis for AI Alignment
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2018-12-30T00:46:38.014Z · comments (3)
Open and Welcome Thread December 2018
Ben Pace (Benito) · 2018-12-04T22:20:53.076Z · comments (23)
Akrasia is confusion about what you want
Gordon Seidoh Worley (gworley) · 2018-12-28T21:09:20.692Z · comments (7)
Testing Rationality Apps for Science
BayesianMind · 2018-12-24T10:46:20.229Z · comments (1)
Alignment Newsletter #37
Rohin Shah (rohinmshah) · 2018-12-17T19:10:01.774Z · comments (4)
[question] What is abstraction?
Adam Zerner (adamzerner) · 2018-12-15T08:36:01.089Z · answers+comments (11)
[question] What are some concrete problems about logical counterfactuals?
Chris_Leong · 2018-12-16T10:20:26.618Z · answers+comments (4)
Good arguments against "cultural appropriation"
Tyrrell_McAllister · 2018-12-18T17:23:52.900Z · comments (12)
[question] Why should I care about rationality?
TurnTrout · 2018-12-08T03:49:29.451Z · answers+comments (5)
Interpreting genetic testing
jefftk (jkaufman) · 2018-12-15T15:56:57.339Z · comments (1)
[link] Penalizing Impact via Attainable Utility Preservation
TurnTrout · 2018-12-28T21:46:00.843Z · comments (0)
Review: Slay the Spire
Zvi · 2018-12-09T20:40:01.616Z · comments (1)
[question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?
habryka (habryka4) · 2018-12-08T01:49:56.073Z · answers+comments (13)
Alignment Newsletter #36
Rohin Shah (rohinmshah) · 2018-12-12T01:10:01.398Z · comments (0)
[link] 1987 Sci-Fi Authors Timecapsule Predictions For 2012
namespace (ingres) · 2018-12-28T06:50:28.202Z · comments (3)
[question] In what ways are holidays good?
DanielFilan · 2018-12-28T00:42:06.849Z · answers+comments (19)
Card Collection and Ownership
Zvi · 2018-12-27T13:10:00.977Z · comments (8)
[question] Who's welcome to our LessWrong meetups?
ChristianKl · 2018-12-10T13:31:17.357Z · answers+comments (5)
[meta] Anyone use the "read time" on Post Items?
Raemon · 2018-12-01T23:16:23.249Z · comments (11)
[meta] LW Update 2018-12-06 – All Posts Page, Questions Page, Posts Item rework
Raemon · 2018-12-08T21:30:13.874Z · comments (1)
Is cognitive load a factor in community decline?
ryan_b · 2018-12-07T15:45:20.605Z · comments (6)
[link] [Video] Why Not Just: Think of AGI Like a Corporation? (Robert Miles)
habryka (habryka4) · 2018-12-23T21:49:06.438Z · comments (1)
Anthropic paradoxes transposed into Anthropic Decision Theory
Stuart_Armstrong · 2018-12-19T18:07:42.251Z · comments (23)
Anthropic probabilities and cost functions
Stuart_Armstrong · 2018-12-21T17:54:20.921Z · comments (1)
[question] Best arguments against worrying about AI risk?
Chris_Leong · 2018-12-23T14:57:09.905Z · answers+comments (16)
Benign model-free RL
paulfchristiano · 2018-12-02T04:10:45.205Z · comments (1)
COEDT Equilibria in Games
Diffractor · 2018-12-06T18:00:08.442Z · comments (0)
Alignment Newsletter #35
Rohin Shah (rohinmshah) · 2018-12-04T01:10:01.209Z · comments (0)
[question] Experiences of Self-deception
Bucky · 2018-12-18T11:10:26.965Z · answers+comments (3)
Figuring out what Alice wants: non-human Alice
Stuart_Armstrong · 2018-12-11T19:31:13.830Z · comments (17)
Bounded rationality abounds in models, not explicitly defined
Stuart_Armstrong · 2018-12-11T19:34:17.476Z · comments (9)
[question] Why should EA care about rationality (and vice-versa)?
Gordon Seidoh Worley (gworley) · 2018-12-09T22:03:58.158Z · answers+comments (13)
Fifteen Things I Learned From Watching a Game of Secret Hitler
Zvi · 2018-12-17T13:40:01.047Z · comments (6)
[question] What podcasts does the community listen to?
hristovassilev · 2018-12-14T15:40:31.414Z · answers+comments (6)
[link] Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets
shminux · 2018-12-16T06:37:13.623Z · comments (0)
Equivalence of State Machines and Coroutines
Martin Sustrik (sustrik) · 2018-12-18T04:40:00.750Z · comments (1)
Artifact Embraces Card Balance Changes
Zvi · 2018-12-26T13:10:00.384Z · comments (1)
Boston Secular Solstice
jefftk (jkaufman) · 2018-12-10T01:59:24.756Z · comments (0)