LessWrong 2.0 Reader


Elon Musk May Be Transitioning to Bipolar Type I
Cyborg25 · 2025-03-11T17:45:06.599Z · comments (22)
[link] Eukaryote Skips Town - Why I'm leaving DC
eukaryote · 2025-03-26T17:16:29.663Z · comments (1)
[link] Preparing for the Intelligence Explosion
fin · 2025-03-11T15:38:29.524Z · comments (17)
[link] AI for AI safety
Joe Carlsmith (joekc) · 2025-03-14T15:00:23.491Z · comments (13)
Pick two: concise, comprehensive, or clear rules
Screwtape · 2025-02-03T06:39:05.815Z · comments (27)
[link] OpenAI releases deep research agent
Seth Herd · 2025-02-03T12:48:44.925Z · comments (21)
Why does LW not put much more focus on AI governance and outreach?
Severin T. Seehrich (sts) · 2025-04-12T14:24:54.197Z · comments (31)
[link] AI for Epistemics Hackathon
Austin Chen (austin-chen) · 2025-03-14T20:46:34.250Z · comments (10)
[link] New Paper: Infra-Bayesian Decision-Estimation Theory
Vanessa Kosoy (vanessa-kosoy) · 2025-04-10T09:17:38.966Z · comments (4)
Evaluating “What 2026 Looks Like” So Far
Jonny Spicer (jonnyspicer) · 2025-02-24T18:55:27.373Z · comments (4)
PauseAI and E/Acc Should Switch Sides
WillPetillo · 2025-04-01T23:25:51.265Z · comments (6)
Fun With GPT-4o Image Generation
Zvi · 2025-03-26T19:50:03.270Z · comments (3)
Anti-Slop Interventions?
abramdemski · 2025-02-04T19:50:29.127Z · comments (33)
The principle of genomic liberty
TsviBT · 2025-03-19T14:27:57.175Z · comments (51)
[link] The machine has no mouth and it must scream
zef (uzpg) · 2025-03-08T16:40:46.755Z · comments (1)
100+ concrete projects and open problems in evals
Marius Hobbhahn (marius-hobbhahn) · 2025-03-22T15:21:40.970Z · comments (1)
MATS Applications + Research Directions I'm Currently Excited About
Neel Nanda (neel-nanda-1) · 2025-02-06T11:03:40.093Z · comments (7)
The Simplest Good
Jesse Hoogland (jhoogland) · 2025-02-02T19:51:14.155Z · comments (6)
Steelmanning heuristic arguments
Dmitry Vaintrob (dmitry-vaintrob) · 2025-04-13T01:09:33.392Z · comments (0)
The Semi-Rational Militar Firefighter
P. João (gabriel-brito) · 2025-03-04T12:23:37.253Z · comments (10)
[link] birds and mammals independently evolved intelligence
bhauth · 2025-04-08T20:00:05.100Z · comments (23)
I'm resigning as Meetup Czar. What's next?
Screwtape · 2025-04-02T00:30:42.110Z · comments (2)
Osaka
lsusr · 2025-02-26T13:50:24.102Z · comments (11)
[link] Yudkowsky on The Trajectory podcast
Seth Herd · 2025-01-24T19:52:15.104Z · comments (39)
[link] Thermodynamic entropy = Kolmogorov complexity
Aram Ebtekar (EbTech) · 2025-02-17T05:56:06.960Z · comments (12)
Introducing 11 New AI Safety Organizations - Catalyze's Winter 24/25 London Incubation Program Cohort
Alexandra Bos (AlexandraB) · 2025-03-10T19:26:11.017Z · comments (0)
LLM AGI will have memory, and memory changes alignment
Seth Herd · 2025-04-04T14:59:13.070Z · comments (9)
Language Models Use Trigonometry to Do Addition
Subhash Kantamneni (subhashk) · 2025-02-05T13:50:08.243Z · comments (1)
Will compute bottlenecks prevent a software intelligence explosion?
Tom Davidson (tom-davidson-1) · 2025-04-04T17:41:37.088Z · comments (2)
Disempowerment spirals as a likely mechanism for existential catastrophe
Raymond D · 2025-04-10T14:37:58.301Z · comments (7)
Kessler's Second Syndrome
Jesse Hoogland (jhoogland) · 2025-01-26T07:04:17.852Z · comments (2)
[link] Paper: Open Problems in Mechanistic Interpretability
Lee Sharkey (Lee_Sharkey) · 2025-01-29T10:25:54.727Z · comments (0)
Detect Goodhart and shut down
Jeremy Gillen (jeremy-gillen) · 2025-01-22T18:45:30.910Z · comments (21)
$500 Bounty Problem: Are (Approximately) Deterministic Natural Latents All You Need?
johnswentworth · 2025-04-21T20:19:30.808Z · comments (1)
Alignment can be the ‘clean energy’ of AI
Cameron Berg (cameron-berg) · 2025-02-22T00:08:30.391Z · comments (8)
AI 2027: Dwarkesh’s Podcast with Daniel Kokotajlo and Scott Alexander
Zvi · 2025-04-07T13:40:05.944Z · comments (2)
Maintaining Alignment during RSI as a Feedback Control Problem
beren · 2025-03-02T00:21:43.432Z · comments (6)
[link] Phoenix Rising
Metacelsus · 2025-03-09T11:53:52.618Z · comments (7)
AI CoT Reasoning Is Often Unfaithful
Zvi · 2025-04-04T14:50:05.538Z · comments (4)
Should you go with your best guess?: Against precise Bayesianism and related views
Anthony DiGiovanni (antimonyanthony) · 2025-01-27T20:25:26.809Z · comments (15)
A Problem to Solve Before Building a Deception Detector
Eleni Angelou (ea-1) · 2025-02-07T19:35:23.307Z · comments (9)
Selective modularity: a research agenda
cloud · 2025-03-24T04:12:44.822Z · comments (2)
[link] Google DeepMind: An Approach to Technical AGI Safety and Security
Rohin Shah (rohinmshah) · 2025-04-05T22:00:14.803Z · comments (12)
Going Nova
Zvi · 2025-03-19T13:30:01.293Z · comments (14)
Feedback loops for exercise (VO2Max)
Elizabeth (pktechgirl) · 2025-03-18T00:10:06.827Z · comments (11)
[link] How do we solve the alignment problem?
Joe Carlsmith (joekc) · 2025-02-13T18:27:27.712Z · comments (8)
HPMOR Anniversary Guide
Screwtape · 2025-02-22T16:17:25.093Z · comments (7)
The Uses of Complacency
sarahconstantin · 2025-04-21T18:50:02.725Z · comments (4)
[link] Gary Marcus now saying AI can't do things it can already do
Benjamin_Todd · 2025-02-09T12:24:11.954Z · comments (12)
Book Review: Affective Neuroscience
sarahconstantin · 2025-03-10T06:50:04.602Z · comments (8)