LessWrong 2.0 Reader

"Other people are wrong" vs "I am right"
Buck · 2019-02-22T20:01:16.012Z · comments (19)
Humans Who Are Not Concentrating Are Not General Intelligences
sarahconstantin · 2019-02-25T20:40:00.940Z · comments (34)
Rule Thinkers In, Not Out
Scott Alexander (Yvain) · 2019-02-27T02:40:05.133Z · comments (62)
Thoughts on Human Models
Ramana Kumar (ramana-kumar) · 2019-02-21T09:10:43.943Z · comments (32)
Blackmail
Zvi · 2019-02-19T03:50:04.606Z · comments (55)
Unconscious Economics
jacobjacob · 2019-02-27T12:58:50.320Z · comments (30)
The Tale of Alice Almost: Strategies for Dealing With Pretty Good People
sarahconstantin · 2019-02-27T19:34:03.906Z · comments (6)
Epistemic Tenure
Scott Garrabrant · 2019-02-18T22:56:03.158Z · comments (27)
Probability space has 2 metrics
Donald Hobson (donald-hobson) · 2019-02-10T00:28:34.859Z · comments (11)
[question] How does Gradient Descent Interact with Goodhart?
Scott Garrabrant · 2019-02-02T00:14:51.673Z · answers+comments (19)
[meta] The Case for a Bigger Audience
John_Maxwell (John_Maxwell_IV) · 2019-02-09T07:22:07.357Z · comments (58)
RAISE is launching their MVP
toonalfrink · 2019-02-26T11:45:53.647Z · comments (1)
Pavlov Generalizes
abramdemski · 2019-02-20T09:03:11.437Z · comments (4)
Test Cases for Impact Regularisation Methods
DanielFilan · 2019-02-06T21:50:00.760Z · comments (5)
Some Thoughts on Metaphilosophy
Wei_Dai · 2019-02-10T00:28:29.482Z · comments (27)
How the MtG Color Wheel Explains AI Safety
Scott Garrabrant · 2019-02-15T23:42:59.637Z · comments (4)
Two Small Experiments on GPT-2
jimrandomh · 2019-02-21T02:59:16.199Z · comments (28)
When to use quantilization
RyanCarey · 2019-02-05T17:17:12.169Z · comments (5)
The Argument from Philosophical Difficulty
Wei_Dai · 2019-02-10T00:28:07.472Z · comments (31)
[question] If a "Kickstarter for Inadequate Equilibria" was built, do you have a concrete inadequate equilibrium to fix?
Raemon · 2019-02-21T21:32:56.366Z · answers+comments (39)
Arguments for moral indefinability
Richard_Ngo (ricraz) · 2019-02-12T10:40:01.226Z · comments (10)
[question] How does OpenAI's language model affect our AI timeline estimates?
jimrandomh · 2019-02-15T03:11:51.779Z · answers+comments (7)
[question] Why didn't Agoric Computing become popular?
Wei_Dai · 2019-02-16T06:19:56.121Z · answers+comments (22)
[question] How good is a human's gut judgement at guessing someone's IQ?
habryka (habryka4) · 2019-02-25T21:23:17.159Z · answers+comments (18)
Coherent behaviour in the real world is an incoherent concept
Richard_Ngo (ricraz) · 2019-02-11T17:00:25.665Z · comments (17)
Conclusion to the sequence on value learning
rohinmshah · 2019-02-03T21:05:11.631Z · comments (20)
Avoiding Jargon Confusion
Raemon · 2019-02-17T23:37:16.986Z · comments (35)
(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach
Ben Pace (Benito) · 2019-02-04T22:08:34.337Z · comments (5)
The Hamming Question
Raemon · 2019-02-08T19:34:33.993Z · comments (38)
[link] Learning preferences by looking at the world
rohinmshah · 2019-02-12T22:25:16.905Z · comments (10)
The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work
ozziegooen · 2019-02-14T16:21:13.564Z · comments (15)
Major Donation: Long Term Future Fund Application Extended 1 Week
habryka (habryka4) · 2019-02-16T23:30:11.243Z · comments (3)
Is voting theory important? An attempt to check my bias.
Jameson Quinn (jameson-quinn) · 2019-02-17T23:45:57.960Z · comments (14)
HCH is not just Mechanical Turk
William_S · 2019-02-09T00:46:25.729Z · comments (6)
Philosophy as low-energy approximation
Charlie Steiner · 2019-02-05T19:34:18.617Z · comments (20)
Urgent & important: How (not) to do your to-do list
bfinn · 2019-02-01T17:44:34.573Z · comments (15)
Knowing I’m Being Tricked is Barely Enough
Elizabeth (pktechgirl) · 2019-02-26T17:50:07.608Z · comments (10)
How to get value learning and reference wrong
Charlie Steiner · 2019-02-26T20:22:43.155Z · comments (2)
Some disjunctive reasons for urgency on AI risk
Wei_Dai · 2019-02-15T20:43:17.340Z · comments (24)
[meta] New versions of posts in "Map and Territory" and "How To Actually Change Your Mind" are up (also, new revision system)
habryka (habryka4) · 2019-02-26T03:17:28.065Z · comments (3)
[link] Is the World Getting Better? A brief summary of recent debate
ErickBall · 2019-02-06T17:38:43.631Z · comments (7)
[question] When should we expect the education bubble to pop? How can we short it?
jacobjacob · 2019-02-09T21:39:10.918Z · answers+comments (12)
[link] Drexler on AI Risk
PeterMcCluskey · 2019-02-01T05:11:01.008Z · comments (10)
EA grants available (to individuals)
Jameson Quinn (jameson-quinn) · 2019-02-07T15:17:38.921Z · comments (8)
Can HCH epistemically dominate Ramanujan?
zhukeepa · 2019-02-23T22:00:33.363Z · comments (4)
The RAIN Framework for Informational Effectiveness
ozziegooen · 2019-02-13T12:54:20.297Z · comments (12)
Complexity Penalties in Statistical Learning
michael_h · 2019-02-06T04:13:57.140Z · comments (3)
Constructing Goodhart
johnswentworth · 2019-02-03T21:59:53.785Z · comments (10)
What makes a good culture?
toonalfrink · 2019-02-05T13:31:57.792Z · comments (7)
Informal Post on Motivation
Ruby · 2019-02-23T23:35:14.430Z · comments (4)