LessWrong 2.0 Reader

Informal Post on Motivation
Ruby · 2019-02-23T23:35:14.430Z · comments (4)
Constructing Goodhart
johnswentworth · 2019-02-03T21:59:53.785Z · comments (10)
[question] Is LessWrong a "classic style intellectual world"?
Gordon Seidoh Worley (gworley) · 2019-02-26T21:33:37.736Z · answers+comments (6)
Make an appointment with your saner self
MalcolmOcean (malcolmocean) · 2019-02-08T05:05:49.784Z · comments (0)
Functional silence: communication that minimizes change of receiver's beliefs
chaosmage · 2019-02-12T21:32:27.015Z · comments (5)
[meta] [question] How important is it that LW has an unlimited supply of karma?
jacobjacob · 2019-02-11T01:41:51.797Z · answers+comments (9)
[question] What's your favorite LessWrong post?
pepe_prime · 2019-02-21T10:39:53.919Z · answers+comments (8)
Quantifying anthropic effects on the Fermi paradox
Lukas Finnveden (Lanrian) · 2019-02-15T10:51:04.298Z · comments (5)
Extraordinary ethics require extraordinary arguments
aaq · 2019-02-17T14:59:17.156Z · comments (6)
Kocherga's leaflet
berekuk · 2019-02-19T12:06:28.257Z · comments (2)
Predictive Reasoning Systems
ozziegooen · 2019-02-20T19:44:45.778Z · comments (2)
My atheism story
Pausecafe · 2019-02-04T14:33:51.043Z · comments (3)
[question] How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative?
Raemon · 2019-02-21T21:36:07.707Z · answers+comments (17)
Would I think for ten thousand years?
Stuart_Armstrong · 2019-02-11T19:37:53.591Z · comments (13)
Reinforcement Learning in the Iterated Amplification Framework
William_S · 2019-02-09T00:56:08.256Z · comments (12)
Alignment Newsletter #45
Rohin Shah (rohinmshah) · 2019-02-14T02:10:01.155Z · comments (2)
Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?"
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-02-06T19:09:20.809Z · comments (17)
AI Safety Prerequisites Course: Revamp and New Lessons
philip_b (crabman) · 2019-02-03T21:04:16.213Z · comments (5)
[question] When does introspection avoid the pitfalls of rumination?
rk · 2019-02-20T14:14:46.798Z · answers+comments (12)
Rationalist Vipassana Meditation Retreat
DreamFlasher · 2019-02-25T10:10:27.100Z · comments (2)
Ideas for Next Generation Prediction Technologies
ozziegooen · 2019-02-21T11:38:57.798Z · comments (25)
Three Kinds of Research Documents: Exploration, Explanation, Academic
ozziegooen · 2019-02-13T21:25:51.393Z · comments (18)
Can We Place Trust in Post-AGI Forecasting Evaluations?
ozziegooen · 2019-02-17T19:20:41.446Z · comments (16)
Decelerating: laser vs gun vs rocket
Stuart_Armstrong · 2019-02-18T23:21:46.294Z · comments (16)
Cooperation is for Winners
Jacob Falkovich (Jacobian) · 2019-02-15T14:58:08.949Z · comments (6)
[question] What are some bizarre theories based on anthropic reasoning?
Dr. Jamchie · 2019-02-03T18:48:17.237Z · answers+comments (13)
[question] If Rationality can be likened to a 'Martial Art', what would be the Forms?
Bae's Theorem (Senarin) · 2019-02-06T05:48:33.589Z · answers+comments (10)
Security amplification
paulfchristiano · 2019-02-06T17:28:19.995Z · comments (0)
So You Want to Colonize The Universe
Diffractor · 2019-02-27T10:17:50.427Z · comments (18)
[link] Impact Prizes as an alternative to Certificates of Impact
ozziegooen · 2019-02-20T00:46:25.912Z · comments (0)
So You Want To Colonize The Universe Part 3: Dust
Diffractor · 2019-02-27T10:20:14.780Z · comments (9)
Nuances with ascription universality
evhub · 2019-02-12T23:38:24.731Z · comments (1)
Automated Nomic Game 2
jefftk (jkaufman) · 2019-02-05T22:11:13.914Z · comments (2)
Fighting the allure of depressive realism
aaq · 2019-02-10T16:46:21.371Z · comments (2)
So You Want to Colonize The Universe Part 5: The Actual Design
Diffractor · 2019-02-27T10:23:28.424Z · comments (4)
Open Thread February 2019
ryan_b · 2019-02-07T18:00:45.772Z · comments (19)
Layers of Expertise and the Curse of Curiosity
Gyrodiot · 2019-02-12T23:41:45.980Z · comments (1)
On Long and Insightful Posts
Qria (qria) · 2019-02-13T03:52:20.249Z · comments (3)
Alignment Newsletter #44
Rohin Shah (rohinmshah) · 2019-02-06T08:30:01.424Z · comments (0)
So You Want to Colonize the Universe Part 2: Deep Time Engineering
Diffractor · 2019-02-27T10:18:18.209Z · comments (6)
[question] Native mental representations that give huge speedups on problems?
two-ox-heads · 2019-02-25T23:42:44.634Z · answers+comments (4)
[question] Is there an assurance-contract website in work?
Yoav Ravid · 2019-02-20T06:14:00.633Z · answers+comments (31)
[question] Who owns OpenAI's new language model?
ioannes (ioannes_shade) · 2019-02-14T17:51:26.367Z · answers+comments (9)
How to notice being mind-hacked
shminux · 2019-02-02T23:13:48.812Z · comments (22)
[link] So you want to be a wizard
NaiveTortoise (An1lam) · 2019-02-15T15:43:48.274Z · comments (0)
[link] January 2019 gwern.net newsletter
gwern · 2019-02-04T15:53:42.553Z · comments (0)
[link] Robin Hanson on Lumpiness of AI Services
DanielFilan · 2019-02-17T23:08:36.165Z · comments (2)
[question] Where to find Base Rates?
adam demirel · 2019-02-26T10:44:59.226Z · answers+comments (7)
[link] OpenAI on why we need social scientists
ioannes (ioannes_shade) · 2019-02-19T16:59:32.319Z · comments (3)
So You Want to Colonize The Universe Part 4: Velocity Changes and Energy
Diffractor · 2019-02-27T10:22:46.371Z · comments (9)