LessWrong 2.0 Reader

The current state of RSPs
Zach Stein-Perlman · 2024-11-04T16:00:42.630Z · comments (2)
LessWrong audio: help us choose the new voice
PeterH · 2024-12-11T02:24:37.026Z · comments (0)
Complete Feedback
abramdemski · 2024-11-01T16:58:50.183Z · comments (7)
2024 Unofficial LW Community Census, Request for Comments
Screwtape · 2024-11-01T16:34:14.758Z · comments (32)
[link] NAO Updates, January 2025
jefftk (jkaufman) · 2025-01-10T03:37:36.698Z · comments (0)
[link] The unreasonable effectiveness of plasmid sequencing as a service
Abhishaike Mahajan (abhishaike-mahajan) · 2024-10-08T02:02:55.352Z · comments (2)
Evolution's selection target depends on your weighting
tailcalled · 2024-11-19T18:24:53.117Z · comments (22)
[link] debating buying NVDA in 2019
bhauth · 2025-01-04T05:06:54.047Z · comments (0)
Apply to the Cooperative AI PhD Fellowship by October 14th!
Lewis Hammond (lewis-hammond-1) · 2024-10-05T12:41:24.093Z · comments (0)
AI #98: World Ends With Six Word Story
Zvi · 2025-01-09T16:30:07.341Z · comments (1)
The Second Gemini
Zvi · 2024-12-17T15:50:06.373Z · comments (0)
[link] AI safety tax dynamics
owencb · 2024-10-23T12:18:32.243Z · comments (0)
The average rationalist IQ is about 122
Rockenots (Ekefa) · 2024-12-28T15:42:07.067Z · comments (23)
[link] Job Opening: SWE to help improve grant-making software
Ethan Ashkie (ethan-ashkie-1) · 2025-01-08T00:54:22.820Z · comments (1)
Plausibly Factoring Conjectures
Quinn (quinn-dougherty) · 2024-11-22T20:11:56.479Z · comments (1)
Geoffrey Hinton on the Past, Present, and Future of AI
Stephen McAleese (stephen-mcaleese) · 2024-10-12T16:41:56.796Z · comments (5)
[link] Recommendations for Technical AI Safety Research Directions
Sam Marks (samuel-marks) · 2025-01-10T19:34:04.920Z · comments (0)
Boston Secular Solstice 2024: Call for Singers and Musicians
jefftk (jkaufman) · 2024-11-15T13:50:07.827Z · comments (0)
Why Isn't Tesla Level 3?
jefftk (jkaufman) · 2024-12-11T14:50:01.159Z · comments (7)
[link] Genetically edited mosquitoes haven't scaled yet. Why?
alexey · 2024-12-30T21:37:32.942Z · comments (0)
[link] PCR retrospective
bhauth · 2024-12-26T21:20:56.484Z · comments (0)
Why I'm bearish on mechanistic interpretability: the shards are not in the network
tailcalled · 2024-09-13T17:09:25.407Z · comments (40)
[link] Miles Brundage: Finding Ways to Credibly Signal the Benignness of AI Development and Deployment is an Urgent Priority
Zach Stein-Perlman · 2024-10-28T17:00:18.660Z · comments (4)
[link] Hyperpolation
Gunnar_Zarncke · 2024-09-15T21:37:00.002Z · comments (6)
Elon Musk and Solar Futurism
transhumanist_atom_understander · 2024-12-21T02:55:28.554Z · comments (27)
[link] How to choose what to work on
jasoncrawford · 2024-09-18T20:39:12.316Z · comments (6)
[question] What should OpenAI do that it hasn't already done, to stop their vacancies from being advertised on the 80k Job Board?
WitheringWeights (EZ97) · 2024-10-21T13:57:30.934Z · answers+comments (0)
[question] Meal Replacements in 2025?
alkjash · 2025-01-06T15:37:25.041Z · answers+comments (9)
Magnitudes: Let's Comprehend the Incomprehensible!
joec · 2024-12-01T03:08:46.503Z · comments (8)
The absolute basics of representation theory of finite groups
Dmitry Vaintrob (dmitry-vaintrob) · 2025-01-08T09:47:13.136Z · comments (0)
[link] Human-AI Complementarity: A Goal for Amplified Oversight
rishubjain · 2024-12-24T09:57:55.111Z · comments (2)
A short project on Mamba: grokking & interpretability
Alejandro Tlaie (alejandro-tlaie-boria) · 2024-10-18T16:59:45.314Z · comments (0)
How Often Does Taking Away Options Help?
niplav · 2024-09-21T21:52:40.822Z · comments (7)
Text Posts from the Kids Group: 2018
jefftk (jkaufman) · 2024-11-23T12:50:05.325Z · comments (0)
Filled Cupcakes
jefftk (jkaufman) · 2024-11-26T03:20:08.504Z · comments (2)
Non-Obvious Benefits of Insurance
jefftk (jkaufman) · 2024-12-23T03:40:02.184Z · comments (5)
Alignment by default: the simulation hypothesis
gb (ghb) · 2024-09-25T16:26:00.552Z · comments (39)
[link] To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-09-19T16:13:55.835Z · comments (1)
Long Live the Usurper
pleiotroth · 2024-11-27T12:10:51.025Z · comments (0)
AXRP Episode 37 - Jaime Sevilla on Forecasting AI
DanielFilan · 2024-10-04T21:00:03.077Z · comments (3)
[link] Mechanistic Interpretability of Llama 3.2 with Sparse Autoencoders
PaulPauls · 2024-11-24T05:45:20.124Z · comments (3)
[question] What is the alpha in one bit of evidence?
J Bostock (Jemist) · 2024-10-22T21:57:09.056Z · answers+comments (13)
[link] Announcement: AI for Math Fund
sarahconstantin · 2024-12-05T18:33:13.556Z · comments (9)
AI Can be “Gradient Aware” Without Doing Gradient Hacking.
Sodium · 2024-10-20T21:02:10.754Z · comments (0)
Gwerns
Tomás B. (Bjartur Tómas) · 2024-11-16T14:31:57.791Z · comments (2)
AXRP Episode 38.3 - Erik Jenner on Learned Look-Ahead
DanielFilan · 2024-12-12T05:40:06.835Z · comments (0)
[link] I read every major AI lab’s safety plan so you don’t have to
sarahhw · 2024-12-16T18:51:38.499Z · comments (0)
Gell-Mann checks
Cleo Scrolls (cleo-scrolls) · 2024-09-26T22:45:43.569Z · comments (7)
A Generalization of the Good Regulator Theorem
Alfred Harwood · 2025-01-04T09:55:25.432Z · comments (5)
[link] It looks like there are some good funding opportunities in AI safety right now
Benjamin_Todd · 2024-12-22T12:41:02.151Z · comments (0)