LessWrong 2.0 Reader


Economic Post-ASI Transition
[deleted] · 2025-01-01T22:37:31.722Z · comments (11)
[link] Genesis
PeterMcCluskey · 2024-12-31T22:01:17.277Z · comments (0)
Monthly Roundup #25: December 2024
Zvi · 2024-12-23T14:20:04.682Z · comments (3)
[link] AI safety content you could create
Adam Jones (domdomegg) · 2025-01-06T15:35:56.167Z · comments (0)
[link] Can o1-preview find major mistakes amongst 59 NeurIPS '24 MLSB papers?
Abhishaike Mahajan (abhishaike-mahajan) · 2024-12-18T14:21:03.661Z · comments (0)
[link] From the Archives: a story
Richard_Ngo (ricraz) · 2024-12-27T16:36:50.735Z · comments (1)
A Collection of Empirical Frames about Language Models
Daniel Tan (dtch1997) · 2025-01-02T02:49:05.965Z · comments (0)
[link] A primer on machine learning in cryo-electron microscopy (cryo-EM)
Abhishaike Mahajan (abhishaike-mahajan) · 2024-12-22T15:11:58.860Z · comments (0)
Incredibow
jefftk (jkaufman) · 2025-01-07T03:30:02.197Z · comments (3)
[link] We are in a New Paradigm of AI Progress - OpenAI's o3 model makes huge gains on the toughest AI benchmarks in the world
garrison · 2024-12-22T21:45:52.026Z · comments (3)
Everything you care about is in the map
Tahp · 2024-12-17T14:05:36.824Z · comments (27)
Reality is Fractal-Shaped
silentbob · 2024-12-17T13:52:16.946Z · comments (1)
Latent Adversarial Training (LAT) Improves the Representation of Refusal
alexandraabbas · 2025-01-06T10:24:53.419Z · comments (5)
[question] What is the most impressive game LLMs can play well?
Cole Wyeth (Amyr) · 2025-01-08T19:38:18.530Z · answers+comments (3)
[link] Policymakers don't have access to paywalled articles
Adam Jones (domdomegg) · 2025-01-05T10:56:11.495Z · comments (4)
Rebuttals for ~all criticisms of AIXI
Cole Wyeth (Amyr) · 2025-01-07T17:41:10.557Z · comments (7)
Computational functionalism probably can't explain phenomenal consciousness
EuanMcLean (euanmclean) · 2024-12-10T17:11:28.044Z · comments (34)
Most Minds are Irrational
Davidmanheim · 2024-12-10T09:36:33.144Z · comments (4)
[link] Building AI safety benchmark environments on themes of universal human values
Roland Pihlakas (roland-pihlakas) · 2025-01-03T04:24:36.186Z · comments (3)
[link] Don't Associate AI Safety With Activism
Eneasz · 2024-12-18T08:01:50.357Z · comments (15)
EC2 Scripts
jefftk (jkaufman) · 2024-12-10T03:00:01.906Z · comments (1)
Predicting AI Releases Through Side Channels
Reworr R (reworr-reworr) · 2025-01-07T19:06:41.584Z · comments (1)
Can we rescue Effective Altruism?
Elizabeth (pktechgirl) · 2025-01-09T16:40:02.405Z · comments (0)
An exhaustive list of cosmic threats
Jordan Stone (jordan-stone) · 2025-01-09T19:59:08.368Z · comments (1)
[link] o3 is not being released to the public. First they are only giving access to external safety testers. You can apply to get early access to do safety testing
KatWoods (ea247) · 2024-12-20T18:30:44.421Z · comments (0)
Coin Flip
XelaP (scroogemcduck1) · 2024-12-27T11:53:01.781Z · comments (0)
[question] What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question?
avturchin · 2024-12-26T11:08:23.545Z · answers+comments (2)
Doing Sport Reliably via Dancing
Johannes C. Mayer (johannes-c-mayer) · 2024-12-20T12:06:59.517Z · comments (0)
Re Hanson's Grabby Aliens: Humanity is not a natural anthropic sample space
Lorec · 2024-12-09T18:07:23.510Z · comments (32)
The case for pay-on-results coaching
Chipmonk · 2025-01-03T18:40:22.304Z · comments (3)
[link] The Genesis Project
aproteinengine · 2024-12-19T21:26:51.344Z · comments (0)
Boston Solstice 2024 Retrospective
jefftk (jkaufman) · 2024-12-29T15:40:05.095Z · comments (0)
Apply to the 2025 PIBBSS Summer Research Fellowship
DusanDNesic · 2024-12-24T10:25:12.882Z · comments (0)
Stop Making Sense
JenniferRM · 2024-12-23T05:16:12.428Z · comments (0)
[link] The Legacy of Computer Science
Johannes C. Mayer (johannes-c-mayer) · 2024-12-29T13:15:28.606Z · comments (0)
Notes on Altruism
David Gross (David_Gross) · 2024-12-29T03:13:09.444Z · comments (1)
I'm Writing a Book About Liberalism
Yoav Ravid · 2024-12-19T00:13:33.895Z · comments (6)
Zombies! Substance Dualist Zombies?
Ape in the coat · 2024-12-11T06:10:28.318Z · comments (7)
The low Information Density of Eliezer Yudkowsky & LessWrong
Felix Olszewski (quick-maths) · 2024-12-30T19:43:59.355Z · comments (7)
The first AGI may be a good engineer but bad strategist
Knight Lee (Max Lee) · 2024-12-09T06:34:54.082Z · comments (2)
Dmitry's Koan
Dmitry Vaintrob (dmitry-vaintrob) · 2025-01-10T04:27:30.346Z · comments (0)
Don't fall for ontology pyramid schemes
Lorec · 2025-01-07T23:29:46.935Z · comments (4)
How I saved 1 human life (in expectation) without overthinking it
Christopher King (christopher-king) · 2024-12-22T20:53:13.492Z · comments (0)
[link] NeuroAI for AI safety: A Differential Path
nz · 2024-12-16T13:17:12.527Z · comments (0)
Comparing the AirFanta 3Pro to the Coway AP-1512
jefftk (jkaufman) · 2024-12-16T01:40:01.522Z · comments (0)
Low-effort review of "AI For Humanity"
Charlie Steiner · 2024-12-11T09:54:42.871Z · comments (0)
[link] AISN #45: Center for AI Safety 2024 Year in Review
Corin Katzke (corin-katzke) · 2024-12-19T18:15:56.416Z · comments (0)
Robbin's Farm Sledding Route
jefftk (jkaufman) · 2024-12-21T22:10:01.175Z · comments (1)
Mid-Generation Self-Correction: A Simple Tool for Safer AI
MrThink (ViktorThink) · 2024-12-19T23:41:00.702Z · comments (0)
A pragmatic story about where we get our priors
Fiora from Rosebloom · 2025-01-02T10:16:54.019Z · comments (6)