LessWrong 2.0 Reader

Join a LessWrong Team for the Unaging System Challenge
Crissman · 2024-10-23T06:01:08.018Z · comments (5)
Boston Solstice 2024 Retrospective
jefftk (jkaufman) · 2024-12-29T15:40:05.095Z · comments (0)
Zombies! Substance Dualist Zombies?
Ape in the coat · 2024-12-11T06:10:28.318Z · comments (7)
[link] The Genesis Project
aproteinengine · 2024-12-19T21:26:51.344Z · comments (0)
[link] Levers for Biological Progress - A Response to "Machines of Loving Grace"
Niko_McCarty (niko-2) · 2024-11-01T16:35:08.221Z · comments (0)
Facets and Social Networks
jefftk (jkaufman) · 2024-11-27T03:40:08.689Z · comments (1)
The low Information Density of Eliezer Yudkowsky & LessWrong
Felix Olszewski (quick-maths) · 2024-12-30T19:43:59.355Z · comments (7)
Importing Bluesky Comments
jefftk (jkaufman) · 2024-11-28T03:50:06.635Z · comments (0)
Backdoors have universal representations across large language models
Amirali Abdullah (amirali-abdullah) · 2024-12-06T22:56:33.519Z · comments (0)
[link] Disentangling Representations through Multi-task Learning
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-11-24T13:10:26.307Z · comments (1)
What can we learn from insecure domains?
Logan Zoellner (logan-zoellner) · 2024-11-01T23:53:30.066Z · comments (21)
[link] I, Token
Ivan Vendrov (ivan-vendrov) · 2024-11-25T02:20:35.629Z · comments (2)
How I saved 1 human life (in expectation) without overthinking it
Christopher King (christopher-king) · 2024-12-22T20:53:13.492Z · comments (0)
Secular Solstice Songbook Update
jefftk (jkaufman) · 2024-11-17T17:30:07.404Z · comments (2)
The first AGI may be a good engineer but bad strategist
Knight Lee (Max Lee) · 2024-12-09T06:34:54.082Z · comments (2)
Lenses of Control
WillPetillo · 2024-10-22T07:51:06.355Z · comments (0)
[question] How can we prevent AGI value drift?
Dakara (chess-ice) · 2024-11-20T18:19:24.375Z · answers+comments (5)
[link] NeuroAI for AI safety: A Differential Path
nz · 2024-12-16T13:17:12.527Z · comments (0)
[question] What epsilon do you subtract from "certainty" in your own probability estimates?
Dagon · 2024-11-26T19:13:46.795Z · answers+comments (6)
AXRP Episode 38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems
DanielFilan · 2024-11-14T07:00:06.977Z · comments (0)
Inverse Problems In Everyday Life
silentbob · 2024-10-15T11:42:30.276Z · comments (2)
Humans are (mostly) metarational
Yair Halberstadt (yair-halberstadt) · 2024-10-09T05:51:16.644Z · comments (6)
Dance Differentiation
jefftk (jkaufman) · 2024-11-15T02:30:07.694Z · comments (0)
The deepest atheist: Sam Altman
Trey Edwin (Paolo Vivaldi) · 2024-10-10T03:27:34.465Z · comments (2)
Is the mind a program?
EuanMcLean (euanmclean) · 2024-11-28T09:42:02.892Z · comments (60)
[link] [Linkpost] Building Altruistic and Moral AI Agent with Brain-inspired Affective Empathy Mechanisms
Gunnar_Zarncke · 2024-11-04T10:15:35.550Z · comments (0)
Crosspost: Developing the middle ground on polarized topics
juliawise · 2024-11-25T14:39:53.041Z · comments (16)
Comparing the AirFanta 3Pro to the Coway AP-1512
jefftk (jkaufman) · 2024-12-16T01:40:01.522Z · comments (0)
[link] Metaculus Is Open Source
ChristianWilliams · 2024-10-07T19:55:31.035Z · comments (0)
What You Can Give Instead of Advice
Karl Faulks (karl-faulks) · 2024-10-24T23:10:48.014Z · comments (2)
[question] Is AI alignment a purely functional property?
Roko · 2024-12-15T21:42:50.674Z · answers+comments (7)
[link] Is AI Hitting a Wall or Moving Faster Than Ever?
garrison · 2025-01-09T22:18:51.497Z · comments (2)
[question] How can humanity survive a multipolar AGI scenario?
Leonard Holloway (literally-best) · 2025-01-09T20:17:40.143Z · answers+comments (8)
Mid-Generation Self-Correction: A Simple Tool for Safer AI
MrThink (ViktorThink) · 2024-12-19T23:41:00.702Z · comments (0)
Low-effort review of "AI For Humanity"
Charlie Steiner · 2024-12-11T09:54:42.871Z · comments (0)
Registrations Open for 2024 NYC Secular Solstice & Megameetup
Joe Rogero · 2024-11-12T17:50:10.827Z · comments (0)
Curriculum of Ascension
andrew sauer (andrew-sauer) · 2024-11-07T23:54:18.983Z · comments (0)
[question] Why is Gemini telling the user to die?
Burny · 2024-11-18T01:44:12.583Z · answers+comments (1)
[question] What are your favorite books or blogs that are out of print, or whose domains have expired (especially if they also aren't on LibGen/Wayback/etc, or on Amazon)?
Arjun Panickssery (arjun-panickssery) · 2024-10-13T20:21:04.540Z · answers+comments (4)
A pragmatic story about where we get our priors
Fiora from Rosebloom · 2025-01-02T10:16:54.019Z · comments (6)
[link] Sam Altman's Business Negging
Julian Bradshaw · 2024-09-30T21:06:59.184Z · comments (0)
[link] AI Safety at the Frontier: Paper Highlights, September '24
gasteigerjo · 2024-10-02T09:49:00.357Z · comments (0)
Robbin's Farm Sledding Route
jefftk (jkaufman) · 2024-12-21T22:10:01.175Z · comments (1)
[link] AISN #45: Center for AI Safety 2024 Year in Review
Corin Katzke (corin-katzke) · 2024-12-19T18:15:56.416Z · comments (0)
Goal: Understand Intelligence
Johannes C. Mayer (johannes-c-mayer) · 2024-11-03T21:20:02.900Z · comments (19)
[link] The lying p value
kqr · 2024-11-12T06:12:59.934Z · comments (7)
Paraddictions: unreasonably compelling behaviors and their uses
Michael Cohn (michael-cohn) · 2024-11-22T20:53:59.479Z · comments (0)
PIBBSS Fellowship 2025: Bounties and Cooperative AI Track Announcement
DusanDNesic · 2025-01-09T14:23:47.027Z · comments (0)
(My) self-referential reason to believe in free will
jacek (jacek-karwowski) · 2025-01-06T23:35:02.809Z · comments (5)
Preliminary Thoughts on Flirting Theory
la .alis. (Diatom) · 2024-12-24T07:37:47.045Z · comments (6)