LessWrong 2.0 Reader

Luna Lovegood and the Chamber of Secrets - Part 3
lsusr · 2020-12-01T12:43:42.647Z · comments (11)
AI Forecasting: One Year In
jsteinhardt · 2022-07-04T05:10:18.470Z · comments (12)
Third Time: a better way to work
bfinn · 2022-01-07T21:15:57.789Z · comments (76)
Sci-Hub sued in India
Connor_Flexman · 2021-11-13T23:12:40.559Z · comments (19)
Orexin and the quest for more waking hours
ChristianKl · 2022-09-24T19:54:56.207Z · comments (39)
Don't leave your fingerprints on the future
So8res · 2022-10-08T00:35:35.430Z · comments (48)
[link] Training on Documents About Reward Hacking Induces Reward Hacking
evhub · 2025-01-21T21:32:24.691Z · comments (13)
Gradual Disempowerment, Shell Games and Flinches
Jan_Kulveit · 2025-02-02T14:47:53.404Z · comments (33)
Creating a truly formidable Art
Valentine · 2021-10-14T04:39:16.641Z · comments (59)
Why I don't believe in the placebo effect
transhumanist_atom_understander · 2024-06-10T02:37:07.776Z · comments (22)
Book Launch: "The Carving of Reality," Best of LessWrong vol. III
Raemon · 2023-08-16T23:52:12.518Z · comments (22)
Conjecture: Internal Infohazard Policy
Connor Leahy (NPCollapse) · 2022-07-29T19:07:08.491Z · comments (6)
My simple AGI investment & insurance strategy
lc · 2024-03-31T02:51:53.479Z · comments (27)
Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn
Zvi · 2023-10-05T11:39:02.393Z · comments (29)
2019 AI Alignment Literature Review and Charity Comparison
Larks · 2019-12-19T03:00:54.708Z · comments (18)
Updatelessness doesn't solve most problems
Martín Soto (martinsq) · 2024-02-08T17:30:11.266Z · comments (44)
Utility ≠ Reward
Vlad Mikulik (vlad_m) · 2019-09-05T17:28:13.222Z · comments (24)
[link] Introducing Fatebook: the fastest way to make and track predictions
Adam B (adam-b) · 2023-07-11T15:28:13.798Z · comments (41)
Niceness is unnatural
So8res · 2022-10-13T01:30:02.046Z · comments (20)
AI Summer Harvest
Cleo Nardo (strawberry calm) · 2023-04-04T03:35:58.473Z · comments (10)
[question] Which things were you surprised to learn are not metaphors?
Eric Neyman (UnexpectedValues) · 2024-11-21T18:56:18.025Z · answers+comments (80)
Basic Facts about Language Model Internals
beren · 2023-01-04T13:01:35.223Z · comments (19)
Ngo and Yudkowsky on AI capability gains
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-18T22:19:05.913Z · comments (61)
Why Not Subagents?
johnswentworth · 2023-06-22T22:16:55.249Z · comments (52)
The case for training frontier AIs on Sumerian-only corpus
Alexandre Variengien (alexandre-variengien) · 2024-01-15T16:40:22.011Z · comments (15)
Ten people on the inside
Buck · 2025-01-28T16:41:22.990Z · comments (26)
Intergenerational trauma impeding cooperative existential safety efforts
Andrew_Critch · 2022-06-03T08:13:25.439Z · comments (29)
[link] "Can AI Scaling Continue Through 2030?", Epoch AI (yes)
gwern · 2024-08-24T01:40:32.929Z · comments (4)
Confused why a "capabilities research is good for alignment progress" position isn't discussed more
Kaj_Sotala · 2022-06-02T21:41:44.784Z · comments (27)
Misgeneralization as a misnomer
So8res · 2023-04-06T20:43:33.275Z · comments (22)
Only Asking Real Questions
jefftk (jkaufman) · 2022-04-14T15:50:02.970Z · comments (45)
Current AIs Provide Nearly No Data Relevant to AGI Alignment
Thane Ruthenis · 2023-12-15T20:16:09.723Z · comments (157)
Ten Levels of AI Alignment Difficulty
Sammy Martin (SDM) · 2023-07-03T20:20:21.403Z · comments (24)
[link] Even Superhuman Go AIs Have Surprising Failure Modes
AdamGleave · 2023-07-20T17:31:35.814Z · comments (22)
[link] Cohabitive Games so Far
mako yass (MakoYass) · 2023-09-28T15:41:27.986Z · comments (141)
[Closed] Job Offering: Help Communicate Infrabayesianism
abramdemski · 2022-03-23T18:35:16.790Z · comments (22)
Self-sacrifice is a scarce resource
mingyuan · 2020-06-28T05:08:05.010Z · comments (18)
We have to Upgrade
Jed McCaleb (jed-mccaleb) · 2023-03-23T17:53:32.222Z · comments (35)
Tools for keeping focused
benkuhn · 2020-08-05T02:10:08.707Z · comments (26)
Seemingly Popular Covid-19 Model is Obvious Nonsense
Zvi · 2020-04-11T23:10:00.594Z · comments (28)
Patient Observation
LoganStrohl (BrienneYudkowsky) · 2022-02-23T19:31:45.062Z · comments (4)
Near-mode thinking on AI
Olli Järviniemi (jarviniemi) · 2024-08-04T20:47:28.085Z · comments (8)
Moore's Law, AI, and the pace of progress
Veedrac · 2021-12-11T03:02:24.558Z · comments (38)
How I started believing religion might actually matter for rationality and moral philosophy
zhukeepa · 2024-08-23T17:40:47.341Z · comments (41)
COVID Skepticism Isn't About Science
jaspax · 2021-12-29T17:53:43.354Z · comments (76)
Circuits in Superposition: Compressing many small neural networks into one
Lucius Bushnaq (Lblack) · 2024-10-14T13:06:14.596Z · comments (8)
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables
johnswentworth · 2020-11-18T17:47:40.929Z · comments (49)
AGI safety from first principles: Introduction
Richard_Ngo (ricraz) · 2020-09-28T19:53:22.849Z · comments (18)
[link] Tales from Prediction Markets
ike · 2021-04-03T23:38:22.728Z · comments (15)
On infinite ethics
Joe Carlsmith (joekc) · 2022-01-31T07:04:44.244Z · comments (71)