LessWrong 2.0 Reader

[link] Luck based medicine: my resentful story of becoming a medical miracle
Elizabeth (pktechgirl) · 2022-10-16T17:40:03.702Z · comments (119)
Counterarguments to the basic AI x-risk case
KatjaGrace · 2022-10-14T13:00:05.903Z · comments (124)
So, geez there's a lot of AI content these days
Raemon · 2022-10-06T21:32:20.833Z · comments (140)
Introduction to abstract entropy
Alex_Altair · 2022-10-20T21:03:02.486Z · comments (78)
Lessons learned from talking to >100 academics about AI safety
Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T13:16:38.036Z · comments (17)
What does it take to defend the world against out-of-control AGIs?
Steven Byrnes (steve2152) · 2022-10-25T14:47:41.970Z · comments (47)
[link] The Social Recession: By the Numbers
antonomon · 2022-10-29T18:45:09.001Z · comments (29)
Decision theory does not imply that we get to have nice things
So8res · 2022-10-18T03:04:48.682Z · comments (58)
Why I think there's a one-in-six chance of an imminent global nuclear war
Max Tegmark (MaxTegmark) · 2022-10-08T06:26:40.235Z · comments (169)
[link] Six (and a half) intuitions for KL divergence
CallumMcDougall (TheMcDouglas) · 2022-10-12T21:07:07.796Z · comments (25)
Age changes what you care about
Dentin · 2022-10-16T15:36:36.148Z · comments (36)
AI Timelines via Cumulative Optimization Power: Less Long, More Short
jacob_cannell · 2022-10-06T00:21:02.447Z · comments (33)
Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley
maxnadeau · 2022-10-27T01:32:44.750Z · comments (14)
Warning Shots Probably Wouldn't Change The Picture Much
So8res · 2022-10-06T05:15:39.391Z · comments (42)
Niceness is unnatural
So8res · 2022-10-13T01:30:02.046Z · comments (20)
Mnestics
Jarred Filmer (4thWayWastrel) · 2022-10-23T00:30:11.159Z · comments (5)
[link] Why Weren't Hot Air Balloons Invented Sooner?
Lost Futures (aeviternity1) · 2022-10-18T00:41:43.156Z · comments (52)
Am I secretly excited for AI getting weird?
porby · 2022-10-29T22:16:52.592Z · comments (4)
Don't leave your fingerprints on the future
So8res · 2022-10-08T00:35:35.430Z · comments (46)
Actually, All Nuclear Famine Papers are Bunk
Lao Mein (derpherpize) · 2022-10-12T05:58:40.306Z · comments (37)
That one apocalyptic nuclear famine paper is bunk
Lao Mein (derpherpize) · 2022-10-12T03:33:32.488Z · comments (10)
Consider your appetite for disagreements
Adam Zerner (adamzerner) · 2022-10-08T23:25:44.096Z · comments (18)
Plans Are Predictions, Not Optimization Targets
johnswentworth · 2022-10-20T21:17:07.000Z · comments (20)
[link] Alignment 201 curriculum
Richard_Ngo (ricraz) · 2022-10-12T18:03:03.454Z · comments (3)
Analysis: US restricts GPU sales to China
aogara (Aidan O'Gara) · 2022-10-07T18:38:06.517Z · comments (58)
[link] Scaling Laws for Reward Model Overoptimization
leogao · 2022-10-20T00:20:06.920Z · comments (13)
Contra shard theory, in the context of the diamond maximizer problem
So8res · 2022-10-13T23:51:29.532Z · comments (19)
[link] Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small
KevinRoWang · 2022-10-28T23:55:44.755Z · comments (9)
How To Make Prediction Markets Useful For Alignment Work
johnswentworth · 2022-10-18T19:01:01.292Z · comments (18)
Transformative VR Is Likely Coming Soon
jimrandomh · 2022-10-13T06:25:38.852Z · comments (46)
A shot at the diamond-alignment problem
TurnTrout · 2022-10-06T18:29:10.586Z · comments (58)
[link] A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox
Henrik Karlsson (henrik-karlsson) · 2022-10-05T19:07:55.069Z · comments (12)
Why Balsa Research is Worthwhile
Zvi · 2022-10-10T13:50:00.950Z · comments (12)
Polysemanticity and Capacity in Neural Networks
Buck · 2022-10-07T17:51:06.686Z · comments (14)
«Boundaries», Part 3a: Defining boundaries as directed Markov blankets
Andrew_Critch · 2022-10-30T06:31:00.277Z · comments (20)
Untapped Potential at 13-18
belkarx · 2022-10-18T18:09:56.352Z · comments (53)
The Teacup Test
lsusr · 2022-10-08T04:25:16.461Z · comments (31)
[link] Paper: Discovering novel algorithms with AlphaTensor [Deepmind]
LawrenceC (LawChan) · 2022-10-05T16:20:11.984Z · comments (18)
More Recent Progress in the Theory of Neural Networks
jylin04 · 2022-10-06T16:57:10.023Z · comments (6)
"Normal" is the equilibrium state of past optimization processes
Alex_Altair · 2022-10-30T19:03:19.328Z · comments (5)
The heritability of human values: A behavior genetic critique of Shard Theory
geoffreymiller · 2022-10-20T15:51:35.786Z · comments (59)
I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too
chaosmage · 2022-10-15T12:41:52.504Z · comments (9)
Voting Theory Introduction
Scott Garrabrant · 2022-10-17T08:48:42.781Z · comments (8)
Resources that (I think) new alignment researchers should know about
Akash (akash-wasil) · 2022-10-28T22:13:36.537Z · comments (9)
Response to Katja Grace's AI x-risk counterarguments
Erik Jenner (ejenner) · 2022-10-19T01:17:54.728Z · comments (18)
The "you-can-just" alarm
Emrik (Emrik North) · 2022-10-08T10:43:23.977Z · comments (3)
Neural Tangent Kernel Distillation
Thomas Larsen (thomas-larsen) · 2022-10-05T18:11:54.687Z · comments (20)
Open Problem in Voting Theory
Scott Garrabrant · 2022-10-17T20:42:05.130Z · comments (16)
What does it mean for an AGI to be 'safe'?
So8res · 2022-10-07T04:13:05.176Z · comments (29)
Maximal Lotteries
Scott Garrabrant · 2022-10-17T08:54:09.001Z · comments (11)