LessWrong 2.0 Reader

Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points
TurnTrout · 2022-04-10T18:45:08.027Z · comments (7)
Meta Questions about Metaphilosophy
Wei Dai (Wei_Dai) · 2023-09-01T01:17:57.578Z · comments (80)
What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems?
johnswentworth · 2022-08-15T22:48:38.671Z · comments (18)
Omicron Post #4
Zvi · 2021-12-06T17:00:01.470Z · comments (66)
New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name
Zvi · 2020-06-23T12:20:00.788Z · comments (2)
Apologizing is a Core Rationalist Skill
johnswentworth · 2024-01-02T17:47:35.950Z · comments (42)
The Felt Sense: What, Why and How
Kaj_Sotala · 2020-10-05T15:57:50.545Z · comments (23)
Luna Lovegood and the Chamber of Secrets - Part 1
lsusr · 2020-11-26T03:02:19.617Z · comments (31)
Staying Split: Sabatini and Social Justice
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-06-08T08:32:58.633Z · comments (28)
[link] ARC Evals new report: Evaluating Language-Model Agents on Realistic Autonomous Tasks
Beth Barnes (beth-barnes) · 2023-08-01T18:30:57.068Z · comments (12)
OpenAI: Exodus
Zvi · 2024-05-20T13:10:03.543Z · comments (26)
AI: Practical Advice for the Worried
Zvi · 2023-03-01T12:30:00.703Z · comments (48)
Liability regimes for AI
Ege Erdil (ege-erdil) · 2024-08-19T01:25:01.006Z · comments (34)
Nate Soares' Life Advice
CatGoddess · 2022-08-23T02:46:43.369Z · comments (41)
Slack gives you space to notice/reflect on subtle things
Raemon · 2022-04-24T02:30:34.737Z · comments (18)
I'm leaving AI alignment – you better stay
rmoehn · 2020-03-12T05:58:37.523Z · comments (19)
Assume Bad Faith
Zack_M_Davis · 2023-08-25T17:36:32.678Z · comments (63)
[link] The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century
jasoncrawford · 2021-11-28T17:22:23.136Z · comments (36)
The Plan - 2023 Version
johnswentworth · 2023-12-29T23:34:19.651Z · comments (40)
Mental Mountains
Scott Alexander (Yvain) · 2019-11-27T05:30:02.107Z · comments (14)
[link] Arithmetic is an underrated world-modeling technology
dynomight · 2024-10-17T14:00:22.475Z · comments (33)
[question] Where do (did?) stable, cooperative institutions come from?
AnnaSalamon · 2020-11-03T22:14:09.322Z · answers+comments (72)
Shutting down AI is not enough. We need to destroy all technology.
Matthew Barnett (matthew-barnett) · 2023-04-01T21:03:24.448Z · comments (36)
AI x-risk, approximately ordered by embarrassment
Alex Lawsen (alex-lszn) · 2023-04-12T23:01:00.561Z · comments (7)
My takes on SB-1047
leogao · 2024-09-09T18:38:37.799Z · comments (8)
[link] GPT-4
nz · 2023-03-14T17:02:02.276Z · comments (149)
The Geometric Expectation
Scott Garrabrant · 2022-11-23T18:05:12.206Z · comments (21)
Comment on "Endogenous Epistemic Factionalization"
Zack_M_Davis · 2020-05-20T18:04:53.857Z · comments (8)
LoRA Fine-tuning Efficiently Undoes Safety Training from Llama 2-Chat 70B
Simon Lermen (dalasnoin) · 2023-10-12T19:58:02.119Z · comments (29)
How Long Can People Usefully Work?
lynettebye · 2020-04-03T21:35:11.525Z · comments (8)
Priors and Prejudice
MathiasKB (MathiasKirkBonde) · 2024-04-22T15:00:41.782Z · comments (31)
DeepMind is hiring for the Scalable Alignment and Alignment Teams
Rohin Shah (rohinmshah) · 2022-05-13T12:17:13.157Z · comments (34)
[link] The First Sample Gives the Most Information
Mark Xu (mark-xu) · 2020-12-24T20:39:04.936Z · comments (16)
Shard Theory in Nine Theses: a Distillation and Critical Appraisal
LawrenceC (LawChan) · 2022-12-19T22:52:20.031Z · comments (30)
The Apprentice Experiment
johnswentworth · 2021-06-10T03:29:27.257Z · comments (11)
Prizes for ELK proposals
paulfchristiano · 2022-01-03T20:23:25.867Z · comments (152)
Alignment research exercises
Richard_Ngo (ricraz) · 2022-02-21T20:24:16.375Z · comments (17)
[link] If you weren't such an idiot...
kave · 2024-03-02T00:01:37.314Z · comments (74)
The date of AI Takeover is not the day the AI takes over
Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-22T10:41:09.242Z · comments (32)
[link] Using axis lines for good or evil
dynomight · 2024-03-06T14:47:10.989Z · comments (39)
[link] Daniel Dennett has died (1942-2024)
kave · 2024-04-19T16:17:04.742Z · comments (5)
Cryonics signup guide #1: Overview
mingyuan · 2021-01-06T00:25:02.927Z · comments (34)
2023 Survey Results
Screwtape · 2024-02-16T22:24:28.132Z · comments (26)
"It's a 10% chance which I did 10 times, so it should be 100%"
egor.timatkov · 2024-11-18T01:14:27.738Z · comments (58)
Deep atheism and AI risk
Joe Carlsmith (joekc) · 2024-01-04T18:58:47.745Z · comments (22)
Understanding “Deep Double Descent”
evhub · 2019-12-06T00:00:10.180Z · comments (51)
OpenAI Launches Superalignment Taskforce
Zvi · 2023-07-11T13:00:06.232Z · comments (40)
[link] Vernor Vinge, who coined the term "Technological Singularity", dies at 79
Kaj_Sotala · 2024-03-21T22:14:14.699Z · comments (25)
[link] Alcohol, health, and the ruthless logic of the Asian flush
dynomight · 2021-06-04T18:14:08.797Z · comments (45)
Public-facing Censorship Is Safety Theater, Causing Reputational Damage
Yitz (yitz) · 2022-09-23T05:08:14.149Z · comments (42)