LessWrong 2.0 Reader


How to Make Superbabies
GeneSmith · 2025-02-19T20:39:38.971Z · comments (177)
Arbital has been imported to LessWrong
RobertM (T3t) · 2025-02-20T00:47:33.983Z · comments (23)
[link] A History of the Future, 2025-2040
L Rudolf L (LRudL) · 2025-02-17T12:03:58.355Z · comments (28)
Eliezer's Lost Alignment Articles / The Arbital Sequence
Ruby · 2025-02-20T00:48:10.338Z · comments (6)
The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better
Thane Ruthenis · 2025-02-21T20:15:11.545Z · comments (35)
AGI Safety & Alignment @ Google DeepMind is hiring
Rohin Shah (rohinmshah) · 2025-02-17T21:11:18.970Z · comments (11)
[link] Power Lies Trembling: a three-book review
Richard_Ngo (ricraz) · 2025-02-22T22:57:59.720Z · comments (3)
[question] Have LLMs Generated Novel Insights?
abramdemski · 2025-02-23T18:22:12.763Z · answers+comments (18)
Timaeus in 2024
Jesse Hoogland (jhoogland) · 2025-02-20T23:54:56.939Z · comments (1)
Dear AGI,
Nathan Young · 2025-02-18T10:48:15.030Z · comments (10)
[link] Thermodynamic entropy = Kolmogorov complexity
Aram Ebtekar (EbTech) · 2025-02-17T05:56:06.960Z · comments (12)
How might we safely pass the buck to AI?
joshc (joshua-clymer) · 2025-02-19T17:48:32.249Z · comments (52)
[link] The first RCT for GLP-1 drugs and alcoholism isn't what we hoped
dynomight · 2025-02-20T22:30:07.536Z · comments (3)
Alignment can be the ‘clean energy’ of AI
Cameron Berg (cameron-berg) · 2025-02-22T00:08:30.391Z · comments (1)
Go Grok Yourself
Zvi · 2025-02-19T20:20:09.371Z · comments (2)
Do models know when they are being evaluated?
Govind Pimpale (govind-pimpale) · 2025-02-17T23:13:22.017Z · comments (2)
On OpenAI’s Model Spec 2.0
Zvi · 2025-02-21T14:10:06.827Z · comments (2)
HPMOR Anniversary Guide
Screwtape · 2025-02-22T16:17:25.093Z · comments (0)
Proselytizing
lsusr · 2025-02-22T11:54:12.740Z · comments (0)
How accurate was my "Altered Traits" book review?
lsusr · 2025-02-18T17:00:55.584Z · comments (3)
AI #104: American State Capacity on the Brink
Zvi · 2025-02-20T14:50:06.375Z · comments (9)
[link] SuperBabies podcast with Gene Smith
Eneasz · 2025-02-19T19:36:49.852Z · comments (1)
ParaScope: Do Language Models Plan the Upcoming Paragraph?
NickyP (Nicky) · 2025-02-21T16:50:20.745Z · comments (0)
Abstract Mathematical Concepts vs. Abstractions Over Real-World Systems
Thane Ruthenis · 2025-02-18T18:04:46.717Z · comments (10)
Monthly Roundup #27: February 2025
Zvi · 2025-02-17T14:10:06.486Z · comments (3)
[link] The Takeoff Speeds Model Predicts We May Be Entering Crunch Time
johncrox · 2025-02-21T02:26:31.768Z · comments (0)
The case for the death penalty
Yair Halberstadt (yair-halberstadt) · 2025-02-21T08:30:41.182Z · comments (67)
[question] Take over my project: do computable agents plan against the universal distribution pessimistically?
Cole Wyeth (Amyr) · 2025-02-19T20:17:04.813Z · answers+comments (3)
Medical Roundup #4
Zvi · 2025-02-18T13:40:06.574Z · comments (2)
[link] New Report: Multi-Agent Risks from Advanced AI
Lewis Hammond (lewis-hammond-1) · 2025-02-23T00:32:29.534Z · comments (0)
[link] The Peeperi (unfinished) - By Katja Grace
Nathan Young · 2025-02-17T19:33:29.894Z · comments (0)
Judgements: Merging Prediction & Evidence
abramdemski · 2025-02-23T19:35:51.488Z · comments (0)
[question] What are the surviving worlds like?
KvmanThinking (avery-liu) · 2025-02-17T00:41:49.810Z · answers+comments (1)
Test of the Bene Gesserit
lsusr · 2025-02-23T11:51:10.279Z · comments (1)
Reflections on the state of the race to superintelligence, February 2025
Mitchell_Porter · 2025-02-23T13:58:07.663Z · comments (7)
Longtermist implications of aliens Space-Faring Civilizations - Introduction
Maxime Riché (maxime-riche) · 2025-02-21T12:08:42.403Z · comments (0)
List of most interesting ideas I encountered in my life, ranked
Lucien (lucien) · 2025-02-23T12:36:48.158Z · comments (3)
Inefficiencies in Pharmaceutical Research Practices
ErioirE (erioire) · 2025-02-22T04:43:09.147Z · comments (2)
[link] When should we worry about AI power-seeking?
Joe Carlsmith (joekc) · 2025-02-19T19:44:25.062Z · comments (0)
Undergrad AI Safety Conference
JoNeedsSleep (joanna-j-1) · 2025-02-19T03:43:47.969Z · comments (0)
Export Surplusses
lsusr · 2025-02-24T05:53:23.422Z · comments (1)
Seeing Through the Eyes of the Algorithm
silentbob · 2025-02-22T11:54:35.782Z · comments (1)
[link] Won't vs. Can't: Sandbagging-like Behavior from Claude Models
Joe Benton · 2025-02-19T20:47:06.792Z · comments (0)
Literature Review of Text AutoEncoders
NickyP (Nicky) · 2025-02-19T21:54:14.905Z · comments (1)
[link] Ascetic hedonism
dkl9 · 2025-02-17T15:56:30.267Z · comments (9)
[link] The Geometry of Linear Regression versus PCA
criticalpoints · 2025-02-23T21:01:33.415Z · comments (2)
MAISU - Minimal AI Safety Unconference
Linda Linsefors · 2025-02-21T11:36:25.202Z · comments (0)
Information throughput of biological humans and frontier LLMs
benwr · 2025-02-22T07:15:45.457Z · comments (0)
[link] US AI Safety Institute will be 'gutted,' Axios reports
Matrice Jacobine · 2025-02-20T14:40:13.049Z · comments (0)
Human-AI Relationality is Already Here
bridgebot (puppy) · 2025-02-20T07:08:22.420Z · comments (0)