LessWrong 2.0 Reader

View: New · Old · Top

← previous page (newer posts) · next page (older posts) →

Open thread, October 16 - October 22, 2017
root · 2017-10-16T18:53:16.852Z · comments (24)
On the construction of beacons
Benquo · 2017-10-16T09:10:00.866Z · comments (9)
Multidimensional signaling
KatjaGrace · 2017-10-16T07:00:00.176Z · comments (16)
Against naming things, and so on
whales · 2017-10-15T23:48:05.600Z · comments (8)
Why no total winner?
Paul Crowley (ciphergoth) · 2017-10-15T22:01:37.920Z · comments (19)
Identities are [Subconscious] Strategies
Ruby · 2017-10-15T18:10:46.042Z · comments (1)
Building a civilisation-scale OODA loop for the problem of AGI
whpearson · 2017-10-15T17:56:55.582Z · comments (2)
The Fish-Head Monk
Zvi · 2017-10-14T12:10:00.359Z · comments (2)
Beginners’ Meditation
Zvi · 2017-10-14T12:10:00.338Z · comments (5)
Welcome to the World
Zvi · 2017-10-14T12:10:00.282Z · comments (3)
You can never be universally inclusive
Kaj_Sotala · 2017-10-14T11:30:04.250Z · comments (9)
[link] "Focusing," for skeptics.
Conor Moreton · 2017-10-14T07:00:00.000Z · comments (27)
[link] My Attempt to Justify The Principle of Insufficient Reason (PIR)
DragonGod · 2017-10-14T06:12:29.384Z · comments (1)
[meta] Activated social login, temporarily deactivated normal signup
habryka (habryka4) · 2017-10-14T03:59:11.873Z · comments (11)
Instrumental Rationality 5: Interlude II
[deleted] · 2017-10-14T02:05:37.208Z · comments (1)
Offloading Executive Functioning to Morality
weft · 2017-10-14T01:43:39.507Z · comments (6)
Oxford Prioritisation Project Review
[deleted] (tommsittler) · 2017-10-13T23:07:15.874Z · comments (6)
Rare Exception or Common Exception
weft · 2017-10-13T22:02:49.026Z · comments (3)
There's No Fire Alarm for Artificial General Intelligence
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-10-13T21:38:16.797Z · comments (72)
Creating Space to Cultivate Skill
hamnox · 2017-10-13T15:51:11.448Z · comments (8)
Humans can be assigned any values whatsoever...
Stuart_Armstrong · 2017-10-13T11:32:57.118Z · comments (36)
Humans can be assigned any values whatsoever...
Stuart_Armstrong · 2017-10-13T11:29:06.065Z · comments (6)
[link] Jonathan Blow - Introspection techniques for dealing with lack of motivation, malaise, depression
nBrown · 2017-10-13T07:21:25.559Z · comments (0)
[link] Universal Paperclips
Morendil · 2017-10-13T06:16:51.347Z · comments (0)
SSC Meetup: Bay Area 10/14
Scott Alexander (Yvain) · 2017-10-13T03:30:00.269Z · comments (0)
[meta] LesserWrong is now de facto the main site
Chris_Leong · 2017-10-13T01:19:35.386Z · comments (4)
Instrumental Rationality 4.3: Breaking Habits and Conclusion
[deleted] · 2017-10-12T23:11:18.127Z · comments (7)
[link] Alan Kay - Programming and Scaling
namespace (ingres) · 2017-10-12T14:46:19.979Z · comments (2)
Beauty as a signal (map)
turchin · 2017-10-12T10:02:15.628Z · comments (1)
Signal seeding
KatjaGrace · 2017-10-12T07:00:00.245Z · comments (5)
Instrumental Rationality 4.2: Creating Habits
[deleted] · 2017-10-12T02:25:06.007Z · comments (0)
Gnostic Rationality
Gordon Seidoh Worley (gworley) · 2017-10-11T21:44:22.144Z · comments (40)
[meta] Events section
the gears to ascension (lahwran) · 2017-10-11T16:24:41.356Z · comments (6)
Mini-conference "Near-term AI safety"
turchin · 2017-10-11T15:19:50.889Z · comments (1)
Mini-conference "Near-term AI safety"
avturchin · 2017-10-11T14:54:10.147Z · comments (3)
Winning is for Losers
Jacob Falkovich (Jacobian) · 2017-10-11T04:01:43.000Z · comments (12)
Centralisation shapes Idealists into Cynics
whpearson · 2017-10-10T20:01:27.691Z · comments (11)
[meta] 10/10/2017: Development update ("autosaving" & Intercom options)
habryka (habryka4) · 2017-10-10T19:15:12.804Z · comments (2)
What would convince you you'd won the lottery?
Stuart_Armstrong · 2017-10-10T13:45:44.996Z · comments (11)
Toy model of the AI control problem: animated version
Stuart_Armstrong · 2017-10-10T11:12:07.207Z · comments (2)
Toy model of the AI control problem: animated version
Stuart_Armstrong · 2017-10-10T11:06:41.518Z · comments (8)
Robustness as a Path to AI Alignment
abramdemski · 2017-10-10T08:14:31.054Z · comments (9)
Distinctions in Types of Thought
sarahconstantin · 2017-10-10T03:36:06.820Z · comments (24)
[link] Artificial Unintelligence
Gordon Seidoh Worley (gworley) · 2017-10-10T01:37:22.000Z · comments (0)
[link] Building a Community Institution In Five Hours A Week
spiralingintocontrol · 2017-10-09T21:12:29.119Z · comments (0)
Learnday
hamnox · 2017-10-09T19:15:07.080Z · comments (1)
Initial thoughts on assisted formatting of discussion posts
Ezra · 2017-10-09T18:46:47.479Z · comments (4)
[link] The Recognizing vs Generating Distinction
[deleted] · 2017-10-09T16:56:09.379Z · comments (2)
The Just World Hypothesis
michael_vassar2 · 2017-10-09T06:03:41.485Z · comments (9)
[meta] Double Crux Example: Should HPMOR be on the Front Page?
Raemon · 2017-10-09T03:50:55.531Z · comments (12)