LessWrong 2.0 Reader

View: New · Old · Top

← previous page (newer posts) · next page (older posts) →

Will the AGIs be able to run the civilisation?
StanislavKrym · 2025-03-28T04:50:07.568Z · comments (2)
[question] Is AGI actually that likely to take off given the world energy consumption?
StanislavKrym · 2025-03-27T23:13:14.959Z · answers+comments (2)
[Linkpost] The value of initiating a pursuit in temporal decision-making
Gunnar_Zarncke · 2025-03-27T21:47:05.123Z · comments (0)
Alignment through atomic agents
micseydel · 2025-03-27T18:43:14.569Z · comments (0)
Machines of Stolen Grace
Riley Tavassoli (riley-tavassoli) · 2025-03-27T18:15:23.736Z · comments (0)
An argument for asexuality
filthy_hedonist (sid-kolichala) · 2025-03-27T18:08:48.624Z · comments (10)
On the plausibility of a “messy” rogue AI committing human-like evil
Jacob Griffith (Jacob.Griffith) · 2025-03-27T18:06:45.505Z · comments (0)
[link] AI Moral Alignment: The Most Important Goal of Our Generation
Ronen Bar (ronen-bar) · 2025-03-27T18:04:07.212Z · comments (0)
[link] Tracing the Thoughts of a Large Language Model
Adam Jermyn (adam-jermyn) · 2025-03-27T17:20:02.162Z · comments (22)
Computational Superposition in a Toy Model of the U-AND Problem
Adam Newgas (BorisTheBrave) · 2025-03-27T16:56:34.474Z · comments (2)
Mistral Large 2 (123B) exhibits alignment faking
Marc Carauleanu (Marc-Everin Carauleanu) · 2025-03-27T15:39:02.176Z · comments (4)
AIS Netherlands is looking for a Founding Executive Director (EOI form)
gergogaspar (gergo-gaspar) · 2025-03-27T15:30:18.444Z · comments (0)
AI #109: Google Fails Marketing Forever
Zvi · 2025-03-27T14:50:01.825Z · comments (12)
What life will be like for humans if aligned ASI is created
james oofou (james-oofou) · 2025-03-27T10:06:56.846Z · comments (6)
[link] What is scaffolding?
Vishakha (vishakha-agrawal) · 2025-03-27T09:06:35.403Z · comments (0)
Workflow vs interface vs implementation
Sniffnoy · 2025-03-27T07:38:49.109Z · comments (0)
Quick thoughts on the difficulty of widely conveying a non-stereotyped position
Sniffnoy · 2025-03-27T07:30:41.353Z · comments (0)
Doing principle-of-charity better
Sniffnoy · 2025-03-27T05:19:52.195Z · comments (1)
X as phenomenon vs as policy, Goodhart, and the AB problem
Sniffnoy · 2025-03-27T04:32:18.848Z · comments (0)
Consequentialism is for making decisions
Sniffnoy · 2025-03-27T04:00:07.020Z · comments (6)
Third-wave AI safety needs sociopolitical thinking
Richard_Ngo (ricraz) · 2025-03-27T00:55:30.548Z · comments (23)
[link] Knowledge, Reasoning, and Superintelligence
owencb · 2025-03-26T23:28:11.465Z · comments (0)
Many Common Problems are NP-Hard, and Why that Matters for AI
Andrew Keenan Richardson (qemqemqem) · 2025-03-26T21:51:17.960Z · comments (9)
Fun With GPT-4o Image Generation
Zvi · 2025-03-26T19:50:03.270Z · comments (3)
[link] I'm hiring a Research Assistant for a nonfiction book on AI!
garrison · 2025-03-26T19:46:43.923Z · comments (0)
[link] Automated Researchers Can Subtly Sandbag
gasteigerjo · 2025-03-26T19:13:26.879Z · comments (0)
[link] Negative Results for SAEs On Downstream Tasks and Deprioritising SAE Research (GDM Mech Interp Team Progress Update #2)
lewis smith (lsgos) · 2025-03-26T19:07:48.710Z · comments (12)
[link] AI companies should be safety-testing the most capable versions of their models
sjadler · 2025-03-26T19:03:41.790Z · comments (6)
[link] Conceptual Rounding Errors
Jan_Kulveit · 2025-03-26T19:00:31.549Z · comments (15)
Personal Agents: The First Step in Emergent AI Society
Andrey Seryakov (andrey-seryakov) · 2025-03-26T18:55:26.358Z · comments (0)
[link] Will AI R&D Automation Cause a Software Intelligence Explosion?
Tom Davidson (tom-davidson-1) · 2025-03-26T18:12:02.524Z · comments (3)
[link] Why Does Unemployment Happen?
Nicholas D. (nicholas-d) · 2025-03-26T18:02:51.433Z · comments (2)
Finding Emergent Misalignment
Jan Betley (jan-betley) · 2025-03-26T17:33:46.792Z · comments (0)
[link] Center on Long-Term Risk: Summer Research Fellowship 2025 - Apply Now
Tristan Cook · 2025-03-26T17:29:14.797Z · comments (0)
[link] Eukaryote Skips Town - Why I'm leaving DC
eukaryote · 2025-03-26T17:16:29.663Z · comments (1)
[link] Apply to become a Futurekind AI Facilitator or Mentor (deadline: April 10)
superbeneficiary · 2025-03-26T15:47:54.815Z · comments (0)
Language and My Frustration Continue in Our RSI
TristanTrim · 2025-03-26T14:13:21.994Z · comments (1)
[question] Would it be effective to learn a language to improve cognition?
Hruss (henry-russell) · 2025-03-26T10:17:56.357Z · answers+comments (7)
New AI safety treaty paper out!
otto.barten (otto-barten) · 2025-03-26T09:29:07.661Z · comments (2)
[link] Map of all 40 copyright suits v. AI in U.S.
Remmelt (remmelt-ellen) · 2025-03-26T07:57:58.976Z · comments (3)
[link] Gemini 2.5 Pro released
Yair Halberstadt (yair-halberstadt) · 2025-03-26T07:56:44.964Z · comments (0)
Probability Theory Fundamentals 102: Territory that Probability is in the Map of
Ape in the coat · 2025-03-26T06:40:57.913Z · comments (5)
Avoid the Counterargument Collapse
marknm · 2025-03-26T03:19:58.655Z · comments (3)
Luxemburg – ACX Meetups Everywhere Spring 2025
Roland (roland-1) · 2025-03-26T00:12:00.609Z · comments (0)
Madrid – ACX Meetups Everywhere Spring 2025
Pablo Villalobos (pablo-villalobos) · 2025-03-26T00:11:59.166Z · comments (0)
Groningen – ACX Meetups Everywhere Spring 2025
Herman van der Veer (herman-van-der-veer) · 2025-03-26T00:11:57.401Z · comments (0)
Pittsburgh – ACX Meetups Everywhere Spring 2025
MrJones (matan-shtepel) · 2025-03-26T00:11:56.025Z · comments (0)
Chico – ACX Meetups Everywhere Spring 2025
Ryan_Axtell · 2025-03-26T00:11:54.373Z · comments (0)
Munich – ACX Meetups Everywhere Spring 2025
Organizer (organizer) · 2025-03-26T00:11:53.202Z · comments (0)
Buenos Aires – ACX Meetups Everywhere Spring 2025
eitan sprejer (eitan-sprejer) · 2025-03-26T00:11:52.041Z · comments (0)