LessWrong 2.0 Reader

Using Dangerous AI, But Safely?
habryka (habryka4) · 2024-11-16T04:29:20.914Z · comments (2)
OpenAI defected, but we can take honest actions
Remmelt (remmelt-ellen) · 2024-10-21T08:41:25.728Z · comments (16)
Everything you care about is in the map
Tahp · 2024-12-17T14:05:36.824Z · comments (27)
A Collection of Empirical Frames about Language Models
Daniel Tan (dtch1997) · 2025-01-02T02:49:05.965Z · comments (0)
Most Minds are Irrational
Davidmanheim · 2024-12-10T09:36:33.144Z · comments (4)
[link] Building AI safety benchmark environments on themes of universal human values
Roland Pihlakas (roland-pihlakas) · 2025-01-03T04:24:36.186Z · comments (3)
Proposal to increase fertility: University parent clubs
Fluffnutt (Pear) · 2024-11-18T04:21:26.346Z · comments (3)
Should you have children? All LessWrong posts about the topic
Sherrinford · 2024-11-26T23:52:44.113Z · comments (0)
[link] some questionable space launch guns
bhauth · 2024-10-13T22:52:26.418Z · comments (0)
[question] What is the most impressive game LLMs can play well?
Cole Wyeth (Amyr) · 2025-01-08T19:38:18.530Z · answers+comments (3)
[link] We are in a New Paradigm of AI Progress - OpenAI's o3 model makes huge gains on the toughest AI benchmarks in the world
garrison · 2024-12-22T21:45:52.026Z · comments (3)
Is Text Watermarking a lost cause?
egor.timatkov · 2024-10-01T16:20:51.113Z · comments (13)
[link] A primer on machine learning in cryo-electron microscopy (cryo-EM)
Abhishaike Mahajan (abhishaike-mahajan) · 2024-12-22T15:11:58.860Z · comments (0)
[link] Every niche event should also be a meetup
DMMF · 2024-11-19T20:47:50.053Z · comments (0)
Evolutionary prompt optimization for SAE feature visualization
neverix · 2024-11-14T13:06:49.728Z · comments (0)
Appealing to the Public
jefftk (jkaufman) · 2024-10-23T19:00:07.669Z · comments (0)
Predicting AI Releases Through Side Channels
Reworr R (reworr-reworr) · 2025-01-07T19:06:41.584Z · comments (1)
Hiring a writer to co-author with me (Spencer Greenberg for ClearerThinking.org)
spencerg · 2024-10-27T17:34:50.479Z · comments (0)
[question] Is there a CFAR handbook audio option?
FinalFormal2 · 2024-10-26T17:08:36.480Z · answers+comments (0)
Coin Flip
XelaP (scroogemcduck1) · 2024-12-27T11:53:01.781Z · comments (0)
[link] The Neruda Factory
jenn (pixx) · 2024-11-29T15:20:02.276Z · comments (1)
[link] o3 is not being released to the public. First they are only giving access to external safety testers. You can apply to get early access to do safety testing
KatWoods (ea247) · 2024-12-20T18:30:44.421Z · comments (0)
[link] A Little Depth Goes a Long Way: the Expressive Power of Log-Depth Transformers
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-11-20T11:48:14.170Z · comments (0)
The Alignment Mapping Program: Forging Independent Thinkers in AI Safety - A Pilot Retrospective
Alvin Ånestrand (alvin-anestrand) · 2025-01-10T16:22:16.905Z · comments (0)
Current Attitudes Toward AI Provide Little Data Relevant to Attitudes Toward AGI
Seth Herd · 2024-11-12T18:23:53.533Z · comments (2)
[link] INTELLECT-1 Release: The First Globally Trained 10B Parameter Model
Matrice Jacobine · 2024-11-29T23:05:00.108Z · comments (1)
Historical Net Worth
jefftk (jkaufman) · 2024-12-07T23:10:01.519Z · comments (0)
Beliefs and state of mind into 2025
RussellThor · 2025-01-10T22:07:01.060Z · comments (3)
EC2 Scripts
jefftk (jkaufman) · 2024-12-10T03:00:01.906Z · comments (1)
[question] What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question?
avturchin · 2024-12-26T11:08:23.545Z · answers+comments (2)
Don't fall for ontology pyramid schemes
Lorec · 2025-01-07T23:29:46.935Z · comments (6)
The case for pay-on-results coaching
Chipmonk · 2025-01-03T18:40:22.304Z · comments (3)
Re Hanson's Grabby Aliens: Humanity is not a natural anthropic sample space
Lorec · 2024-12-09T18:07:23.510Z · comments (32)
[link] Don't Associate AI Safety With Activism
Eneasz · 2024-12-18T08:01:50.357Z · comments (15)
Do simulacra dream of digital sheep?
EuanMcLean (euanmclean) · 2024-12-03T20:25:46.296Z · comments (36)
Doing Sport Reliably via Dancing
Johannes C. Mayer (johannes-c-mayer) · 2024-12-20T12:06:59.517Z · comments (0)
LifeKeeper Diaries: Exploring Misaligned AI Through Interactive Fiction
Tristan Tran (tristan-tran) · 2024-11-09T20:58:09.182Z · comments (5)
I'm Writing a Book About Liberalism
Yoav Ravid · 2024-12-19T00:13:33.895Z · comments (6)
Two arguments against longtermist thought experiments
momom2 (amaury-lorin) · 2024-11-02T10:22:11.311Z · comments (5)
Join a LessWrong Team for the Unaging System Challenge
Crissman · 2024-10-23T06:01:08.018Z · comments (5)
Zombies! Substance Dualist Zombies?
Ape in the coat · 2024-12-11T06:10:28.318Z · comments (7)
Chaos Theory in Ecology
Elizabeth (pktechgirl) · 2024-11-09T17:50:01.727Z · comments (4)
Apply to the 2025 PIBBSS Summer Research Fellowship
DusanDNesic · 2024-12-24T10:25:12.882Z · comments (0)
[link] Levers for Biological Progress - A Response to "Machines of Loving Grace"
Niko_McCarty (niko-2) · 2024-11-01T16:35:08.221Z · comments (0)
The Shape of Heaven
ejk64 · 2024-11-30T23:38:06.628Z · comments (1)
Electric Grid Cyberattack: An AI-Informed Threat Model
moonlightmaze · 2024-11-11T21:34:17.190Z · comments (0)
[link] The Genesis Project
aproteinengine · 2024-12-19T21:26:51.344Z · comments (0)
Facets and Social Networks
jefftk (jkaufman) · 2024-11-27T03:40:08.689Z · comments (1)
Stop Making Sense
JenniferRM · 2024-12-23T05:16:12.428Z · comments (0)
Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive)
Davidmanheim · 2024-11-26T08:00:04.810Z · comments (4)