LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

18-month follow-up on my self-concept work
Kaj_Sotala · 2018-12-18T17:40:03.941Z · comments (4)
Good arguments against "cultural appropriation"
Tyrrell_McAllister · 2018-12-18T17:23:52.900Z · comments (12)
[question] Experiences of Self-deception
Bucky · 2018-12-18T11:10:26.965Z · answers+comments (3)
2018 AI Alignment Literature Review and Charity Comparison
Larks · 2018-12-18T04:46:55.445Z · comments (26)
Equivalence of State Machines and Coroutines
Martin Sustrik (sustrik) · 2018-12-18T04:40:00.750Z · comments (1)
You can be wrong about what you like, and you often are
adamzerner · 2018-12-17T23:49:39.935Z · comments (21)
Alignment Newsletter #37
rohinmshah · 2018-12-17T19:10:01.774Z · comments (4)
A simple approach to 5-and-10
Gurkenglas · 2018-12-17T18:33:46.735Z · comments (10)
[link] In Defense of Finance
Jacob Falkovich (Jacobian) · 2018-12-17T16:59:11.149Z · comments (31)
Fifteen Things I Learned From Watching a Game of Secret Hitler
Zvi · 2018-12-17T13:40:01.047Z · comments (6)
Two Neglected Problems in Human-AI Safety
Wei_Dai · 2018-12-16T22:13:29.196Z · comments (24)
[link] Babble, Learning, and the Typical Mind Fallacy
NaiveTortoise (An1lam) · 2018-12-16T16:51:53.827Z · comments (0)
[question] What are some concrete problems about logical counterfactuals?
Chris_Leong · 2018-12-16T10:20:26.618Z · answers+comments (4)
The E-Coli Test for AI Alignment
johnswentworth · 2018-12-16T08:10:50.502Z · comments (24)
on wellunderstoodness
Quinn (quinn-dougherty) · 2018-12-16T07:22:19.250Z · comments (2)
[link] Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets
shminux · 2018-12-16T06:37:13.623Z · comments (0)
New edition of "Rationality: From AI to Zombies"
Rob Bensinger (RobbBB) · 2018-12-15T21:33:56.713Z · comments (24)
[link] Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability
avturchin · 2018-12-15T21:32:55.180Z · comments (3)
Argue Politics* With Your Best Friends
sarahconstantin · 2018-12-15T19:00:00.549Z · comments (6)
Interpreting genetic testing
jefftk (jkaufman) · 2018-12-15T15:56:57.339Z · comments (1)
[question] What is abstraction?
adamzerner · 2018-12-15T08:36:01.089Z · answers+comments (11)
Introducing the Longevity Research Institute
sarahconstantin · 2018-12-14T20:20:00.532Z · comments (11)
Player vs. Character: A Two-Level Model of Ethics
sarahconstantin · 2018-12-14T19:40:00.520Z · comments (27)
[question] How to reset my password?
hirvinen · 2018-12-14T16:18:02.936Z · answers+comments (1)
[question] What podcasts does the community listen to?
hristovassilev · 2018-12-14T15:40:31.414Z · answers+comments (6)
Meditations on Momentum
Richard Meadows (richard-meadows-1) · 2018-12-14T10:53:05.446Z · comments (30)
[question] Can I use Less Wrong branding in youtube videos?
Bae's Theorem (Senarin) · 2018-12-14T07:10:16.072Z · answers+comments (5)
Three AI Safety Related Ideas
Wei_Dai · 2018-12-13T21:32:25.415Z · comments (38)
An Extensive Categorisation of Infinite Paradoxes
Chris_Leong · 2018-12-13T18:36:53.972Z · comments (48)
The Bat and Ball Problem Revisited
drossbucket · 2018-12-13T07:16:30.017Z · comments (27)
Multi-agent predictive minds and AI alignment
Jan_Kulveit · 2018-12-12T23:48:03.155Z · comments (18)
[question] What went wrong in this interaction?
t3tsubo · 2018-12-12T19:59:57.790Z · answers+comments (8)
[link] Internet Search Tips: how I use Google/Google Scholar/Libgen
gwern · 2018-12-12T14:50:30.970Z · comments (0)
Zagreb Rationality Meetup
Roko Jelavić (roko-jelavic) · 2018-12-12T13:08:34.403Z · comments (0)
Should ethicists be inside or outside a profession?
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-12-12T01:40:13.298Z · comments (6)
Alignment Newsletter #36
rohinmshah · 2018-12-12T01:10:01.398Z · comments (0)
A hundred Shakespeares
Stuart_Armstrong · 2018-12-11T23:11:48.668Z · comments (4)
Norms of Membership for Voluntary Groups
sarahconstantin · 2018-12-11T22:10:00.975Z · comments (10)
Quantum immortality: Is decline of measure compensated by merging timelines?
avturchin · 2018-12-11T19:39:28.534Z · comments (8)
Bounded rationality abounds in models, not explicitly defined
Stuart_Armstrong · 2018-12-11T19:34:17.476Z · comments (9)
Figuring out what Alice wants: non-human Alice
Stuart_Armstrong · 2018-12-11T19:31:13.830Z · comments (17)
Assuming we've solved X, could we do Y...
Stuart_Armstrong · 2018-12-11T18:13:56.021Z · comments (16)
[question] Who's welcome to our LessWrong meetups?
ChristianKl · 2018-12-10T13:31:17.357Z · answers+comments (5)
[question] How Old is Smallpox?
Raemon · 2018-12-10T10:50:33.960Z · answers+comments (5)
LessWrong Tel Aviv: Civilizational Collapse
JoshuaFox · 2018-12-10T07:26:46.272Z · comments (0)
Boston Secular Solstice
jefftk (jkaufman) · 2018-12-10T01:59:24.756Z · comments (0)
[question] Why should EA care about rationality (and vice-versa)?
G Gordon Worley III (gworley) · 2018-12-09T22:03:58.158Z · answers+comments (13)
Measly Meditation Measurements
justinpombrio · 2018-12-09T20:54:46.781Z · comments (19)
Review: Slay the Spire
Zvi · 2018-12-09T20:40:01.616Z · comments (1)
[question] Instead of today's high-level languages, will programmers move to even higher-level, human-language-level programming?
manhobby · 2018-12-09T18:54:07.626Z · answers+comments (7)