LessWrong 2.0 Reader


The first AGI may be a good engineer but bad strategist
Knight Lee (Max Lee) · 2024-12-09T06:34:54.082Z · comments (2)
A good way to build many air filters on the cheap
winstonBosan · 2024-12-08T01:47:58.236Z · comments (5)
On Contact, Part 1
james.lucassen · 2025-01-21T03:10:54.429Z · comments (0)
[link] NeuroAI for AI safety: A Differential Path
nz · 2024-12-16T13:17:12.527Z · comments (0)
Simple Steganographic Computation Eval - gpt-4o and gemini-exp-1206 can't solve it yet
Filip Sondej · 2024-12-19T15:47:05.512Z · comments (2)
The low Information Density of Eliezer Yudkowsky & LessWrong
Felix Olszewski (quick-maths) · 2024-12-30T19:43:59.355Z · comments (8)
Is the mind a program?
EuanMcLean (euanmclean) · 2024-11-28T09:42:02.892Z · comments (62)
How I saved 1 human life (in expectation) without overthinking it
Christopher King (christopher-king) · 2024-12-22T20:53:13.492Z · comments (0)
The Functionalist Case for Machine Consciousness: Evidence from Large Language Models
James Diacoumis (james-diacoumis) · 2025-01-22T17:43:41.215Z · comments (24)
[link] AISN #45: Center for AI Safety 2024 Year in Review
Corin Katzke (corin-katzke) · 2024-12-19T18:15:56.416Z · comments (0)
Robbin's Farm Sledding Route
jefftk (jkaufman) · 2024-12-21T22:10:01.175Z · comments (1)
Crosspost: Developing the middle ground on polarized topics
juliawise · 2024-11-25T14:39:53.041Z · comments (16)
[question] Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia?
SpectrumDT · 2025-01-28T14:29:47.034Z · answers+comments (15)
Learn to Develop Your Advantage
ReverendBayes (vedernikov-andrei) · 2025-01-29T22:06:00.641Z · comments (0)
[link] How do fictional stories illustrate AI misalignment?
Vishakha (vishakha-agrawal) · 2025-01-15T06:11:44.336Z · comments (4)
[question] Should you publish solutions to corrigibility?
rvnnt · 2025-01-30T11:52:05.983Z · answers+comments (12)
2. Skim the Manual: Intelligent Voluntary Cooperation
Allison Duettmann (allison-duettmann) · 2025-01-02T19:02:06.864Z · comments (1)
Nvidia doesn’t just sell shovels
winstonBosan · 2025-01-28T04:56:38.720Z · comments (4)
Low-effort review of "AI For Humanity"
Charlie Steiner · 2024-12-11T09:54:42.871Z · comments (0)
Mid-Generation Self-Correction: A Simple Tool for Safer AI
MrThink (ViktorThink) · 2024-12-19T23:41:00.702Z · comments (0)
[link] Progress links and short notes, 2025-01-13
jasoncrawford · 2025-01-13T18:35:21.426Z · comments (2)
Paper club: He et al. on modular arithmetic (part I)
Dmitry Vaintrob (dmitry-vaintrob) · 2025-01-13T11:18:44.738Z · comments (0)
Paraddictions: unreasonably compelling behaviors and their uses
Michael Cohn (michael-cohn) · 2024-11-22T20:53:59.479Z · comments (0)
Why I'm Pouring Cold Water in My Left Ear, and You Should Too
Maloew (maloew-valenar) · 2025-01-24T23:13:52.340Z · comments (0)
Near term discussions need something smaller and more concrete than AGI
ryan_b · 2025-01-11T18:24:58.283Z · comments (0)
[question] What are some scenarios where an aligned AGI actually helps humanity, but many/most people don't like it?
RomanS · 2025-01-10T18:13:11.900Z · answers+comments (6)
[link] Is AI Hitting a Wall or Moving Faster Than Ever?
garrison · 2025-01-09T22:18:51.497Z · comments (5)
Comparing the AirFanta 3Pro to the Coway AP-1512
jefftk (jkaufman) · 2024-12-16T01:40:01.522Z · comments (0)
[question] How can humanity survive a multipolar AGI scenario?
[deleted] · 2025-01-09T20:17:40.143Z · answers+comments (8)
[question] Is AI alignment a purely functional property?
Roko · 2024-12-15T21:42:50.674Z · answers+comments (7)
How Much to Give is a Pragmatic Question
jefftk (jkaufman) · 2024-12-24T04:20:01.480Z · comments (1)
[link] Linkpost: Rat Traps by Sheon Han in Asterisk Mag
Chris_Leong · 2024-12-03T03:22:45.424Z · comments (7)
[link] Markov's Inequality Explained
criticalpoints · 2025-01-08T00:31:55.125Z · comments (2)
[question] AI for medical care for hard-to-treat diseases?
CronoDAS · 2025-01-10T23:55:39.902Z · answers+comments (1)
[question] Who are the worthwhile non-European pre-Industrial thinkers?
Lorec · 2024-12-03T01:45:31.445Z · answers+comments (4)
Preliminary Thoughts on Flirting Theory
Alice Blair (Diatom) · 2024-12-24T07:37:47.045Z · comments (6)
[link] My AI timelines
samuelshadrach (xpostah) · 2024-12-22T21:06:41.722Z · comments (2)
No Internally-Crispy Mac and Cheese
jefftk (jkaufman) · 2024-12-20T03:20:01.798Z · comments (5)
[question] Is weak-to-strong generalization an alignment technique?
cloud · 2025-01-31T07:13:03.332Z · answers+comments (0)
Maximally Eggy Crepes
jefftk (jkaufman) · 2025-01-19T20:40:03.709Z · comments (0)
Approaches to Group Singing
jefftk (jkaufman) · 2025-01-01T12:50:01.877Z · comments (1)
A problem shared by many different alignment targets
ThomasCederborg · 2025-01-15T14:22:12.754Z · comments (10)
Commenting Patterns by Platform
jefftk (jkaufman) · 2024-12-01T11:50:06.932Z · comments (0)
[link] Anatomy of a Dance Class: A step by step guide
Nathan Young · 2025-01-26T18:02:04.974Z · comments (0)
Exploring the petertodd / Leilan duality in GPT-2 and GPT-J
mwatkins · 2024-12-23T13:17:53.755Z · comments (1)
Sideloading: creating a model of a person via LLM with very large prompt
avturchin · 2024-11-22T16:41:28.293Z · comments (4)
(My) self-referential reason to believe in free will
jacek (jacek-karwowski) · 2025-01-06T23:35:02.809Z · comments (6)
Book review: Range by David Epstein
PatrickDFarley · 2025-01-08T04:27:26.391Z · comments (0)
A Ground-Level Perspective on Capacity Building in International Development
Sean Aubin (sean-aubin) · 2025-01-05T20:36:54.308Z · comments (1)
Basics of Bayesian learning
Dmitry Vaintrob (dmitry-vaintrob) · 2025-01-14T10:00:46.000Z · comments (0)