LessWrong 2.0 Reader



[link] Time Biases
PeterMcCluskey · 2019-01-12T21:35:54.276Z · comments (0)
Towards no-math, graphical instructions for prediction markets
ryan_b · 2019-01-04T16:39:58.479Z · comments (14)
One Website To Rule Them All?
anna_macdonald · 2019-01-11T19:14:51.445Z · comments (23)
[question] What are the components of intellectual honesty?
abramdemski · 2019-01-15T20:00:09.144Z · answers+comments (4)
[link] Supervising strong learners by amplifying weak experts
paulfchristiano · 2019-01-06T07:00:58.680Z · comments (1)
Anthropics: Full Non-indexical Conditioning (FNC) is inconsistent
Stuart_Armstrong · 2019-01-14T15:03:04.288Z · comments (4)
[meta] LW Update 2019-1-09 – Question Updates, UserProfile Sorting
Raemon · 2019-01-09T22:34:31.338Z · comments (2)
Prediction Contest 2018: Scores and Retrospective
jbeshir · 2019-01-27T17:20:15.245Z · comments (5)
[link] Visualizing the power of multiple step selection processes in JS: Galton's bean machine
gwern · 2019-01-12T17:58:34.584Z · comments (2)
Deconfusing Logical Counterfactuals
Chris_Leong · 2019-01-30T15:13:41.436Z · comments (16)
Towards formalizing universality
paulfchristiano · 2019-01-13T20:39:21.726Z · comments (19)
Learning with catastrophes
paulfchristiano · 2019-01-23T03:01:26.397Z · comments (9)
[question] Which textbook would you recommend to learn decision theory?
supermartingale · 2019-01-29T20:48:11.577Z · answers+comments (6)
Bay Area SSC Meetup (special guest Steve Hsu)
Scott Alexander (Yvain) · 2019-01-03T03:02:05.532Z · comments (0)
Wireheading is in the eye of the beholder
Stuart_Armstrong · 2019-01-30T18:23:07.143Z · comments (10)
Reward uncertainty
Rohin Shah (rohinmshah) · 2019-01-19T02:16:05.194Z · comments (3)
Dutch-Booking CDT
abramdemski · 2019-01-13T00:10:07.941Z · comments (6)
The reward engineering problem
paulfchristiano · 2019-01-16T18:47:24.075Z · comments (3)
Finance Followups
Jacob Falkovich (Jacobian) · 2019-01-17T17:54:44.256Z · comments (8)
The Very Repugnant Conclusion
Stuart_Armstrong · 2019-01-18T14:26:08.083Z · comments (19)
Río Grande: judgment calls
KatjaGrace · 2019-01-27T03:50:01.102Z · comments (5)
Ambitious vs. narrow value learning
paulfchristiano · 2019-01-12T06:18:21.747Z · comments (16)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th
habryka (habryka4) · 2019-01-17T20:27:17.619Z · comments (2)
Capability amplification
paulfchristiano · 2019-01-20T07:03:27.879Z · comments (8)
Open Thread January 2019
Raemon · 2019-01-09T20:25:02.716Z · comments (54)
[question] When is CDT Dutch-Bookable?
abramdemski · 2019-01-13T18:54:12.070Z · answers+comments (2)
Techniques for optimizing worst-case performance
paulfchristiano · 2019-01-28T21:29:53.164Z · comments (12)
[Speech] Worlds That Never Were
mingyuan · 2019-01-12T19:53:51.241Z · comments (0)
What is narrow value learning?
Rohin Shah (rohinmshah) · 2019-01-10T07:05:29.652Z · comments (3)
Do the best ideas float to the top?
Quinn (quinn-dougherty) · 2019-01-21T05:22:51.182Z · comments (11)
[question] Is there a.. more exact.. way of scoring a predictor's calibration?
mako yass (MakoYass) · 2019-01-16T08:19:15.744Z · answers+comments (6)
How much can value learning be disentangled?
Stuart_Armstrong · 2019-01-29T14:17:00.601Z · comments (30)
[question] Is Agent Simulates Predictor a "fair" problem?
Chris_Leong · 2019-01-24T13:18:13.745Z · answers+comments (19)
Alignment Newsletter #41
Rohin Shah (rohinmshah) · 2019-01-17T08:10:01.958Z · comments (6)
Towards equilibria-breaking methods
ryan_b · 2019-01-29T16:19:57.564Z · comments (3)
No surjection onto function space for manifold X
Stuart_Armstrong · 2019-01-09T18:07:26.157Z · comments (0)
Hierarchical system preferences and subagent preferences
Stuart_Armstrong · 2019-01-11T18:47:08.860Z · comments (2)
Alignment Newsletter #40
Rohin Shah (rohinmshah) · 2019-01-08T20:10:03.445Z · comments (2)
Anthropic probabilities: answering different questions
Stuart_Armstrong · 2019-01-14T18:50:56.086Z · comments (2)
[link] Why don't people use formal methods?
John_Maxwell (John_Maxwell_IV) · 2019-01-22T09:39:46.721Z · comments (5)
[link] December gwern.net newsletter
gwern · 2019-01-02T15:13:02.771Z · comments (0)
Freely Complying With the Ideal: A Theory of Happiness
Solnassant · 2019-01-27T12:28:59.810Z · comments (2)
Alignment Newsletter #42
Rohin Shah (rohinmshah) · 2019-01-22T02:00:02.082Z · comments (1)
Imitation learning considered unsafe?
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-01-06T15:48:36.078Z · comments (11)
[link] Evolution as Backstop for Reinforcement Learning: multi-level paradigms
gwern · 2019-01-12T17:45:35.485Z · comments (0)
[link] Bundle your Experiments
PeterMcCluskey · 2019-01-18T23:22:08.660Z · comments (0)
The human side of interaction
Rohin Shah (rohinmshah) · 2019-01-24T10:14:33.906Z · comments (5)
[question] Why not tool AI?
smithee · 2019-01-19T22:18:11.248Z · answers+comments (10)
[question] What are good ML/AI related prediction / calibration questions for 2019?
james_t · 2019-01-04T02:40:28.921Z · answers+comments (4)
mindlevelup 3 Year Review
[deleted] · 2019-01-09T06:36:01.090Z · comments (0)