LessWrong 2.0 Reader



[Closed] Job Offering: Help Communicate Infrabayesianism
abramdemski · 2022-03-23T18:35:16.790Z · comments (22)
Tools for keeping focused
benkuhn · 2020-08-05T02:10:08.707Z · comments (26)
Confused why a "capabilities research is good for alignment progress" position isn't discussed more
Kaj_Sotala · 2022-06-02T21:41:44.784Z · comments (27)
[link] Even Superhuman Go AIs Have Surprising Failure Modes
AdamGleave · 2023-07-20T17:31:35.814Z · comments (22)
Seemingly Popular Covid-19 Model is Obvious Nonsense
Zvi · 2020-04-11T23:10:00.594Z · comments (28)
Assume Bad Faith
Zack_M_Davis · 2023-08-25T17:36:32.678Z · comments (56)
Only Asking Real Questions
jefftk (jkaufman) · 2022-04-14T15:50:02.970Z · comments (45)
Self-sacrifice is a scarce resource
mingyuan · 2020-06-28T05:08:05.010Z · comments (18)
We have to Upgrade
Jed McCaleb (jed-mccaleb) · 2023-03-23T17:53:32.222Z · comments (35)
Notice When People Are Directionally Correct
Chris_Leong · 2024-01-14T14:12:37.090Z · comments (8)
On infinite ethics
Joe Carlsmith (joekc) · 2022-01-31T07:04:44.244Z · comments (71)
Near-mode thinking on AI
Olli Järviniemi (jarviniemi) · 2024-08-04T20:47:28.085Z · comments (8)
Patient Observation
LoganStrohl (BrienneYudkowsky) · 2022-02-23T19:31:45.062Z · comments (4)
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables
johnswentworth · 2020-11-18T17:47:40.929Z · comments (49)
The Territory
LoganStrohl (BrienneYudkowsky) · 2022-02-15T18:56:36.992Z · comments (12)
[link] [Linkpost] Some high-level thoughts on the DeepMind alignment team's strategy
Vika · 2023-03-07T11:55:01.131Z · comments (13)
COVID Skepticism Isn't About Science
jaspax · 2021-12-29T17:53:43.354Z · comments (76)
[link] Introducing Fatebook: the fastest way to make and track predictions
Adam B (adam-b) · 2023-07-11T15:28:13.798Z · comments (40)
[link] Tales from Prediction Markets
ike · 2021-04-03T23:38:22.728Z · comments (15)
How I started believing religion might actually matter for rationality and moral philosophy
zhukeepa · 2024-08-23T17:40:47.341Z · comments (41)
Moore's Law, AI, and the pace of progress
Veedrac · 2021-12-11T03:02:24.558Z · comments (38)
Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn
Zvi · 2023-10-05T11:39:02.393Z · comments (29)
AGI safety from first principles: Introduction
Richard_Ngo (ricraz) · 2020-09-28T19:53:22.849Z · comments (18)
My Overview of the AI Alignment Landscape: A Bird's Eye View
Neel Nanda (neel-nanda-1) · 2021-12-15T23:44:31.873Z · comments (9)
Community Notes by X
NicholasKees (nick_kees) · 2024-03-18T17:13:33.195Z · comments (15)
One-layer transformers aren’t equivalent to a set of skip-trigrams
Buck · 2023-02-17T17:26:13.819Z · comments (11)
On the Diplomacy AI
Zvi · 2022-11-28T13:20:00.884Z · comments (29)
"Pivotal Acts" means something specific
Raemon · 2022-06-07T21:56:00.574Z · comments (23)
I left Russia on March 8
avturchin · 2022-03-10T20:05:59.650Z · comments (16)
Clarifying AI X-risk
zac_kenton (zkenton) · 2022-11-01T11:03:01.144Z · comments (24)
High Status Eschews Quantification of Performance
niplav · 2023-03-19T22:14:16.523Z · comments (36)
Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think
Zack_M_Davis · 2019-12-27T05:09:22.546Z · comments (43)
Long covid: probably worth avoiding—some considerations
KatjaGrace · 2022-01-16T11:46:52.087Z · comments (88)
Why Don't We Just... Shoggoth+Face+Paraphraser?
Daniel Kokotajlo (daniel-kokotajlo) · 2024-11-19T20:53:52.084Z · comments (55)
Capital Ownership Will Not Prevent Human Disempowerment
beren · 2025-01-05T06:00:23.095Z · comments (11)
Circuits in Superposition: Compressing many small neural networks into one
Lucius Bushnaq (Lblack) · 2024-10-14T13:06:14.596Z · comments (8)
The case for becoming a black-box investigator of language models
Buck · 2022-05-06T14:35:24.630Z · comments (20)
Shared reality: a key driver of human behavior
kdbscott · 2022-12-24T19:35:51.126Z · comments (25)
Pantheon Interface
NicholasKees (nick_kees) · 2024-07-08T19:03:51.681Z · comments (22)
Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems)
NunoSempere (Radamantis) · 2020-12-03T22:00:26.889Z · comments (28)
Limerence Messes Up Your Rationality Real Bad, Yo
Raemon · 2022-07-01T16:53:10.914Z · comments (42)
[link] Parkinson's Law and the Ideology of Statistics
Benquo · 2025-01-04T15:49:21.247Z · comments (6)
Warning Shots Probably Wouldn't Change The Picture Much
So8res · 2022-10-06T05:15:39.391Z · comments (42)
AI Alignment 2018-19 Review
Rohin Shah (rohinmshah) · 2020-01-28T02:19:52.782Z · comments (6)
Some background for reasoning about dual-use alignment research
Charlie Steiner · 2023-05-18T14:50:54.401Z · comments (22)
Don't leave your fingerprints on the future
So8res · 2022-10-08T00:35:35.430Z · comments (48)
Re-Examining LayerNorm
Eric Winsor (EricWinsor) · 2022-12-01T22:20:23.542Z · comments (12)
[link] The Hubinger lectures on AGI safety: an introductory lecture series
evhub · 2023-06-22T00:59:27.820Z · comments (0)
Insights from Euclid's 'Elements'
TurnTrout · 2020-05-04T15:45:30.711Z · comments (17)
ARC is hiring theoretical researchers
paulfchristiano · 2023-06-12T18:50:08.232Z · comments (12)