LessWrong 2.0 Reader


Everything you care about is in the map
Tahp · 2024-12-17T14:05:36.824Z · comments (27)
[question] How useful would alien alignment research be?
Donald Hobson (donald-hobson) · 2025-01-23T10:59:22.330Z · answers+comments (5)
Current Attitudes Toward AI Provide Little Data Relevant to Attitudes Toward AGI
Seth Herd · 2024-11-12T18:23:53.533Z · comments (2)
Computational functionalism probably can't explain phenomenal consciousness
EuanMcLean (euanmclean) · 2024-12-10T17:11:28.044Z · comments (36)
Most Minds are Irrational
Davidmanheim · 2024-12-10T09:36:33.144Z · comments (4)
Undergrad AI Safety Conference
JoNeedsSleep (joanna-j-1) · 2025-02-19T03:43:47.969Z · comments (0)
Space-Faring Civilization density estimates and models - Review
Maxime Riché (maxime-riche) · 2025-02-27T11:44:21.101Z · comments (0)
The memorization-generalization spectrum and learning coefficients
Dmitry Vaintrob (dmitry-vaintrob) · 2025-01-28T16:53:24.628Z · comments (0)
Blackpool Applied Rationality Unconference 2025
Henry Prowbell · 2025-02-01T13:04:12.774Z · comments (0)
Should you have children? All LessWrong posts about the topic
Sherrinford · 2024-11-26T23:52:44.113Z · comments (0)
Seeing Through the Eyes of the Algorithm
silentbob · 2025-02-22T11:54:35.782Z · comments (1)
Don't fall for ontology pyramid schemes
Lorec · 2025-01-07T23:29:46.935Z · comments (8)
5,000 calories of peanut butter every week for 3 years straight
Declan Molony (declan-molony) · 2025-01-31T17:29:35.190Z · comments (8)
Defense Against the Dark Prompts: Mitigating Best-of-N Jailbreaking with Prompt Evaluation
Stuart_Armstrong · 2025-01-31T15:36:01.050Z · comments (2)
The case for pay-on-results coaching
Chipmonk · 2025-01-03T18:40:22.304Z · comments (3)
Coin Flip
XelaP (scroogemcduck1) · 2024-12-27T11:53:01.781Z · comments (0)
[question] Do you consider perfect surveillance inevitable?
samuelshadrach (xpostah) · 2025-01-24T04:57:48.266Z · answers+comments (34)
[question] What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question?
avturchin · 2024-12-26T11:08:23.545Z · answers+comments (2)
Predicting AI Releases Through Side Channels
Reworr R (reworr-reworr) · 2025-01-07T19:06:41.584Z · comments (1)
6 (Potential) Misconceptions about AI Intellectuals
ozziegooen · 2025-02-14T23:51:44.983Z · comments (11)
[link] A Little Depth Goes a Long Way: the Expressive Power of Log-Depth Transformers
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-11-20T11:48:14.170Z · comments (0)
Stop Making Sense
JenniferRM · 2024-12-23T05:16:12.428Z · comments (0)
Detecting AI Agent Failure Modes in Simulations
Michael Soareverix (michael-soareverix) · 2025-02-11T11:10:26.030Z · comments (0)
[link] o3 is not being released to the public. First they are only giving access to external safety testers. You can apply to get early access to do safety testing
KatWoods (ea247) · 2024-12-20T18:30:44.421Z · comments (0)
Doing Sport Reliably via Dancing
Johannes C. Mayer (johannes-c-mayer) · 2024-12-20T12:06:59.517Z · comments (0)
November-December 2024 Progress in Guaranteed Safe AI
Quinn (quinn-dougherty) · 2025-01-22T01:20:00.868Z · comments (0)
Half-baked idea: a straightforward method for learning environmental goals?
Q Home · 2025-02-04T06:56:31.813Z · comments (7)
[link] Don't Associate AI Safety With Activism
Eneasz · 2024-12-18T08:01:50.357Z · comments (15)
$300 Fermi Model Competition
ozziegooen · 2025-02-03T19:47:09.270Z · comments (14)
Appealing to the Public
jefftk (jkaufman) · 2024-10-23T19:00:07.669Z · comments (0)
[link] Constitutional Classifiers: Defending against universal jailbreaks (Anthropic Blog)
Archimedes · 2025-02-04T02:55:44.401Z · comments (0)
EC2 Scripts
jefftk (jkaufman) · 2024-12-10T03:00:01.906Z · comments (1)
Lecture Series on Tiling Agents #2
abramdemski · 2025-01-20T21:02:25.479Z · comments (0)
Do simulacra dream of digital sheep?
EuanMcLean (euanmclean) · 2024-12-03T20:25:46.296Z · comments (36)
[link] Training Data Attribution (TDA): Examining Its Adoption & Use Cases
Deric Cheng (deric-cheng) · 2025-01-22T15:40:13.393Z · comments (0)
[link] Rationalist Movie Reviews
Nicholas / Heather Kross (NicholasKross) · 2025-02-01T23:10:53.184Z · comments (2)
Hiring a writer to co-author with me (Spencer Greenberg for ClearerThinking.org)
spencerg · 2024-10-27T17:34:50.479Z · comments (0)
[link] INTELLECT-1 Release: The First Globally Trained 10B Parameter Model
Matrice Jacobine · 2024-11-29T23:05:00.108Z · comments (1)
[link] The Neruda Factory
jenn (pixx) · 2024-11-29T15:20:02.276Z · comments (1)
[question] Is there a CFAR handbook audio option?
FinalFormal2 · 2024-10-26T17:08:36.480Z · answers+comments (0)
Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive)
Davidmanheim · 2024-11-26T08:00:04.810Z · comments (4)
How different LLMs answered PhilPapers 2020 survey
Satron · 2025-01-27T21:41:12.334Z · comments (1)
Boston Solstice 2024 Retrospective
jefftk (jkaufman) · 2024-12-29T15:40:05.095Z · comments (0)
[link] Species as Canonical Referents of Super-Organisms
Yudhister Kumar (randomwalks) · 2024-10-18T07:49:52.944Z · comments (8)
[link] What About The Horses?
Maxwell Tabarrok (maxwell-tabarrok) · 2025-02-11T13:59:36.913Z · comments (17)
Two arguments against longtermist thought experiments
momom2 (amaury-lorin) · 2024-11-02T10:22:11.311Z · comments (5)
Apply to the 2025 PIBBSS Summer Research Fellowship
DusanDNesic · 2024-12-24T10:25:12.882Z · comments (0)
[link] Lazy Hasselback Pommes Anna
Brendan Long (korin43) · 2025-01-26T21:30:36.587Z · comments (18)
[link] Ideologies are slow and necessary, for now
Gabriel Alfour (gabriel-alfour-1) · 2024-12-23T01:57:47.153Z · comments (1)
[link] Levers for Biological Progress - A Response to "Machines of Loving Grace"
Niko_McCarty (niko-2) · 2024-11-01T16:35:08.221Z · comments (0)