LessWrong 2.0 Reader

We probably won't just play status games with each other after AGI
Matthew Barnett (matthew-barnett) · 2025-01-15T04:56:38.330Z · comments (12)
Inference-Time-Compute: More Faithful? A Research Note
James Chua (james-chua) · 2025-01-15T04:43:00.631Z · comments (7)
On the OpenAI Economic Blueprint
Zvi · 2025-01-15T14:30:06.773Z · comments (0)
Predict 2025 AI capabilities (by Sunday)
Jonas V (Jonas Vollmer) · 2025-01-15T00:16:05.034Z · comments (3)
Voluntary Salary Reduction
jefftk (jkaufman) · 2025-01-15T03:40:02.909Z · comments (2)
What Is The Alignment Problem?
johnswentworth · 2025-01-16T01:20:16.826Z · comments (1)
New, improved multiple-choice TruthfulQA
Owain_Evans · 2025-01-15T23:32:09.202Z · comments (1)
Feature request: comment bookmarks
dirk (abandon) · 2025-01-15T06:45:23.862Z · comments (2)
[link] LLMs for language learning
Benquo · 2025-01-15T14:08:54.620Z · comments (1)
Improving Our Safety Cases Using Upper and Lower Bounds
Yonatan Cale (yonatan-cale-1) · 2025-01-16T00:01:49.043Z · comments (0)
[link] Gaming TruthfulQA: Simple Heuristics Exposed Dataset Weaknesses
TurnTrout · 2025-01-16T02:14:35.098Z · comments (1)
Six Small Cohabitive Games
Screwtape · 2025-01-15T21:59:29.778Z · comments (0)
Unregulated Peptides: Does BPC-157 hold its promises?
ChristianKl · 2025-01-15T23:36:16.351Z · comments (2)
Everywhere I Look, I See Kat Woods
just_browsing · 2025-01-15T19:29:10.373Z · comments (5)
[link] How do fictional stories illustrate AI misalignment?
Vishakha (vishakha-agrawal) · 2025-01-15T06:11:44.336Z · comments (2)
[question] Where should one post to get into the training data?
keltan · 2025-01-15T00:41:19.405Z · answers+comments (4)
List of AI safety papers from companies, 2023–2024
Zach Stein-Perlman · 2025-01-15T18:00:30.242Z · comments (0)
[link] Looking for humanness in the world wide social
Itay Dreyfus (itay-dreyfus) · 2025-01-15T14:50:54.966Z · comments (0)
A problem shared by many different alignment targets
ThomasCederborg · 2025-01-15T14:22:12.754Z · comments (2)
Applications Open for the Cooperative AI Summer School 2025!
JesseClifton · 2025-01-15T18:16:26.700Z · comments (0)
A Novel Emergence of Meta-Awareness in LLM Fine-Tuning
rife (edgar-muniz) · 2025-01-15T22:59:46.321Z · comments (0)
LLMs are really good at k-order thinking (where k is even)
charlieoneill (kingchucky211) · 2025-01-15T20:43:00.623Z · comments (0)
Marx and the Machine
DAL · 2025-01-15T18:33:16.789Z · comments (2)
The Difference Between Prediction Markets and Debate (Argument) Maps
Jamie Joyce (jamie-joyce) · 2025-01-15T23:19:53.216Z · comments (3)
"Pick Two" AI Trilemma: Generality, Agency, Alignment.
Black Flag (robert-shala-1) · 2025-01-15T18:52:00.780Z · comments (0)
AI Alignment Meme Viruses
RationalDino · 2025-01-15T15:55:24.271Z · comments (0)
Playground and Willpower Problems
Emre (emre-2) · 2025-01-15T18:52:08.860Z · comments (0)
Code4Compassion 2025: a hackathon transforming animal advocacy through technology
superbeneficiary · 2025-01-15T18:31:35.185Z · comments (0)
Myths about Nonduality and Science by Gary Weber
Vadim Golub (a schizophrenic mind) · 2025-01-15T18:33:33.783Z · comments (0)