LessWrong 2.0 Reader

next page (older posts) →

Do you need a better map of your myriad of maps to the territory?
CstineSublime · 2024-12-24T02:00:30.426Z · comments (0)
Panology
JenniferRM · 2024-12-23T21:40:14.540Z · comments (3)
Aristotle, Aquinas, and the Evolution of Teleology: From Purpose to Meaning.
Spiritus Dei (spiritus-dei) · 2024-12-23T19:37:58.788Z · comments (0)
People aren't properly calibrated on FrontierMath
cakubilo · 2024-12-23T19:35:44.467Z · comments (1)
Near- and medium-term AI Control Safety Cases
Martín Soto (martinsq) · 2024-12-23T17:37:48.860Z · comments (0)
[Rationality Malaysia] 2024 year-end meetup!
Doris Liew (doris-liew) · 2024-12-23T16:02:03.566Z · comments (0)
Printable book of some rationalist creative writing (from Scott A. & Eliezer)
CounterBlunder · 2024-12-23T15:44:31.437Z · comments (0)
Monthly Roundup #25: December 2024
Zvi · 2024-12-23T14:20:04.682Z · comments (2)
Exploring the petertodd / Leilan duality in GPT-2 and GPT-J
mwatkins · 2024-12-23T13:17:53.755Z · comments (0)
[question] What are the strongest arguments for very short timelines?
Kaj_Sotala · 2024-12-23T09:38:56.905Z · answers+comments (36)
Reduce AI Self-Allegiance by saying "he" instead of "I"
Knight Lee (Max Lee) · 2024-12-23T09:32:29.947Z · comments (1)
[link] Funding Case: AI Safety Camp 11
Remmelt (remmelt-ellen) · 2024-12-23T08:51:55.255Z · comments (0)
[link] What is compute governance?
Vishakha (vishakha-agrawal) · 2024-12-23T06:32:25.588Z · comments (0)
Stop Making Sense
JenniferRM · 2024-12-23T05:16:12.428Z · comments (0)
Hire (or become) a Thinking Assistant / Body Double
Raemon · 2024-12-23T03:58:42.061Z · comments (15)
Non-Obvious Benefits of Insurance
jefftk (jkaufman) · 2024-12-23T03:40:02.184Z · comments (2)
Vision of a positive Singularity
RussellThor · 2024-12-23T02:19:35.050Z · comments (0)
[link] Ideologies are slow and necessary, for now
Gabriel Alfour (gabriel-alfour-1) · 2024-12-23T01:57:47.153Z · comments (1)
Propaganda Is Everywhere—LLMs Are No Exception
Yanling Guo (yanling-guo) · 2024-12-23T01:39:03.777Z · comments (0)
[question] Has Anthropic checked if Claude fakes alignment for intended values too?
Maloew (maloew-valenar) · 2024-12-23T00:43:07.490Z · answers+comments (1)
Vegans need to eat just enough Meat - empirically evaluate the minimum amount of meat that maximizes utility
Johannes C. Mayer (johannes-c-mayer) · 2024-12-22T22:08:31.971Z · comments (20)
[link] We are in a New Paradigm of AI Progress - OpenAI's o3 model makes huge gains on the toughest AI benchmarks in the world
garrison · 2024-12-22T21:45:52.026Z · comments (3)
[link] My AI timelines
xpostah · 2024-12-22T21:06:41.722Z · comments (2)
A breakdown of AI capability levels focused on AI R&D labor acceleration
ryan_greenblatt · 2024-12-22T20:56:00.298Z · comments (2)
How I saved 1 human life (in expectation) without overthinking it
Christopher King (christopher-king) · 2024-12-22T20:53:13.492Z · comments (0)
Towards mutually assured cooperation
mikko (morrel) · 2024-12-22T20:46:21.965Z · comments (0)
Checking in on Scott's composition image bet with imagen 3
Dave Orr (dave-orr) · 2024-12-22T19:04:17.495Z · comments (0)
Woloch & Wosatan
JackOfAllTrades (JackOfAllSpades) · 2024-12-22T15:46:27.235Z · comments (0)
[link] A primer on machine learning in cryo-electron microscopy (cryo-EM)
Abhishaike Mahajan (abhishaike-mahajan) · 2024-12-22T15:11:58.860Z · comments (0)
Notes from Copenhagen Secular Solstice 2024
Søren Elverlin (soren-elverlin-1) · 2024-12-22T15:08:20.848Z · comments (0)
Proof Explained for "Robust Agents Learn Causal World Model"
Dalcy (Darcy) · 2024-12-22T15:06:16.880Z · comments (0)
subfunctional overlaps in attentional selection history implies momentum for decision-trajectories
Emrik (Emrik North) · 2024-12-22T14:12:49.027Z · comments (1)
[link] It looks like there are some good funding opportunities in AI safety right now
Benjamin_Todd · 2024-12-22T12:41:02.151Z · comments (0)
What o3 Becomes by 2028
Vladimir_Nesov · 2024-12-22T12:37:20.929Z · comments (10)
[link] The Alignment Simulator
Yair Halberstadt (yair-halberstadt) · 2024-12-22T11:45:55.220Z · comments (3)
Theoretical Alignment's Second Chance
lunatic_at_large · 2024-12-22T05:03:51.653Z · comments (0)
Orienting to 3 year AGI timelines
Nikola Jurkovic (nikolaisalreadytaken) · 2024-12-22T01:15:11.401Z · comments (17)
ARC-AGI is a genuine AGI test but o3 cheated :(
Knight Lee (Max Lee) · 2024-12-22T00:58:05.447Z · comments (2)
When AI 10x's AI R&D, What Do We Do?
Logan Riggs (elriggs) · 2024-12-21T23:56:11.069Z · comments (12)
AI as systems, not just models
Andy Arditi (andy-arditi) · 2024-12-21T23:19:05.507Z · comments (0)
Towards a Unified Interpretability of Artificial and Biological Neural Networks
jan_bauer · 2024-12-21T23:10:45.842Z · comments (0)
Robbin's Farm Sledding Route
jefftk (jkaufman) · 2024-12-21T22:10:01.175Z · comments (1)
AGI with RL is Bad News for Safety
Nadav Brandes (nadav-brandes) · 2024-12-21T19:36:03.970Z · comments (16)
Better difference-making views
MichaelStJules · 2024-12-21T18:27:45.552Z · comments (0)
[link] Review: Good Strategy, Bad Strategy
L Rudolf L (LRudL) · 2024-12-21T17:17:04.342Z · comments (0)
Last Line of Defense: Minimum Viable Shelters for Mirror Bacteria
Ulrik Horn (ulrik-horn) · 2024-12-21T08:28:14.860Z · comments (18)
Elon Musk and Solar Futurism
transhumanist_atom_understander · 2024-12-21T02:55:28.554Z · comments (27)
Good Reasons for Alts
jefftk (jkaufman) · 2024-12-21T01:30:03.113Z · comments (2)
[link] Updating on Bad Arguments
Guive (GAA) · 2024-12-21T01:19:15.686Z · comments (2)
[link] Bird's eye view: An interactive representation to see large collections of text "from above".
Alexandre Variengien (alexandre-variengien) · 2024-12-21T00:15:02.239Z · comments (4)