LessWrong 2.0 Reader

[link] Review: Breaking Free with Dr. Stone
TurnTrout · 2024-12-18T01:26:37.730Z · comments (5)
Caring about excellence
owencb · 2024-07-22T14:24:37.892Z · comments (4)
AI #91: Deep Thinking
Zvi · 2024-11-21T14:30:06.930Z · comments (11)
[link] What Ketamine Therapy Is Like
Sable · 2024-11-11T11:09:08.602Z · comments (8)
Book a Time to Chat about Interp Research
Logan Riggs (elriggs) · 2024-12-03T17:27:46.808Z · comments (3)
[link] How quickly could robots scale up?
Benjamin_Todd · 2025-01-12T17:01:04.927Z · comments (22)
I'm open for projects (sort of)
cousin_it · 2024-04-18T18:05:01.395Z · comments (13)
Higher-effort summer solstice: What if we used AI (i.e., Angel Island)?
Rachel Shu (wearsshoes) · 2024-06-25T01:35:54.064Z · comments (9)
[link] MIRI's September 2024 newsletter
Harlan · 2024-09-16T18:15:40.785Z · comments (0)
Bounty for Evidence on Some of Palisade Research's Beliefs
benwr · 2024-09-23T20:01:20.917Z · comments (4)
AI #88: Thanks for the Memos
Zvi · 2024-10-31T15:00:07.412Z · comments (5)
Rapid capability gain around supergenius level seems probable even without intelligence needing to improve intelligence
Towards_Keeperhood (Simon Skade) · 2024-05-06T17:09:10.729Z · comments (16)
On OpenAI’s Model Spec
Zvi · 2024-06-21T13:00:03.014Z · comments (3)
[link] Metascience of the Vesuvius Challenge
Maxwell Tabarrok (maxwell-tabarrok) · 2024-03-30T12:02:38.978Z · comments (2)
AI as a powerful meme, via CGP Grey
TheManxLoiner · 2024-10-30T18:31:58.544Z · comments (8)
[link] Robin Hanson AI X-Risk Debate — Highlights and Analysis
Liron · 2024-07-12T21:31:02.222Z · comments (7)
AI #68: Remarkably Reasonable Reactions
Zvi · 2024-06-13T16:30:02.969Z · comments (11)
Humanity isn't remotely longtermist, so arguments for AGI x-risk should focus on the near term
Seth Herd · 2024-08-12T18:10:56.543Z · comments (10)
~80 Interesting Questions about Foundation Model Agent Safety
RohanS · 2024-10-28T16:37:04.713Z · comments (4)
Enriched tab is now the default LW Frontpage experience for logged-in users
Ruby · 2024-06-21T00:09:30.441Z · comments (27)
All The Latest Human tFUS Studies
sarahconstantin · 2024-08-09T22:20:04.561Z · comments (2)
Things I have been using LLMs for
Kaj_Sotala · 2025-01-20T14:20:02.600Z · comments (6)
Tear Down the Burren
jefftk (jkaufman) · 2025-02-04T03:40:02.767Z · comments (2)
Skepticism towards claims about the views of powerful institutions
tlevin (trevor) · 2025-02-13T07:40:52.257Z · comments (2)
HPMOR Anniversary Guide
Screwtape · 2025-02-22T16:17:25.093Z · comments (0)
Dream, Truth, & Good
abramdemski · 2025-02-24T16:59:05.045Z · comments (7)
Some costs of superposition
Linda Linsefors · 2024-03-03T16:08:20.674Z · comments (11)
Big Picture AI Safety: Introduction
EuanMcLean (euanmclean) · 2024-05-23T11:15:44.037Z · comments (7)
How to hire somebody better than yourself
lemonhope (lcmgcd) · 2024-08-28T08:12:53.450Z · comments (5)
Anthropic rewrote its RSP
Zach Stein-Perlman · 2024-10-15T14:25:12.518Z · comments (19)
AI Safety 101 : Capabilities - Human Level AI, What? How? and When?
markov (markovial) · 2024-03-07T17:29:53.260Z · comments (8)
AI #75: Math is Easier
Zvi · 2024-08-01T13:40:05.539Z · comments (25)
Minimal Motivation of Natural Latents
johnswentworth · 2024-10-14T22:51:58.125Z · comments (14)
Trustworthy and untrustworthy models
Olli Järviniemi (jarviniemi) · 2024-08-19T16:27:11.088Z · comments (3)
1. The CAST Strategy
Max Harms (max-harms) · 2024-06-07T22:29:13.005Z · comments (19)
[link] Michael Dickens' Caffeine Tolerance Research
niplav · 2024-09-04T15:41:53.343Z · comments (3)
New intro textbook on AIXI
Alex_Altair · 2024-05-11T18:18:50.945Z · comments (8)
An Introduction to AI Sandbagging
Teun van der Weij (teun-van-der-weij) · 2024-04-26T13:40:00.126Z · comments (13)
[link] How people stopped dying from diarrhea so much (& other life-saving decisions)
Writer · 2024-03-16T16:00:47.830Z · comments (0)
AI #54: Clauding Along
Zvi · 2024-03-07T16:00:05.066Z · comments (11)
Things Solenoid Narrates
Solenoid_Entity · 2024-04-12T23:57:16.169Z · comments (2)
AI #80: Never Have I Ever
Zvi · 2024-09-10T17:50:08.074Z · comments (20)
[question] "Deception Genre" What Books are like Project Lawful?
Double · 2024-08-28T17:19:52.172Z · answers+comments (20)
Principled Satisficing To Avoid Goodhart
JenniferRM · 2024-08-16T19:05:27.204Z · comments (2)
Apply to LASR Labs: a London-based technical AI safety research programme
Erin Robertson · 2024-04-09T17:34:06.847Z · comments (1)
AI #53: One More Leap
Zvi · 2024-02-29T16:10:04.049Z · comments (0)
[link] Why Georgism Lost Its Popularity
Zero Contradictions · 2024-07-20T15:08:41.469Z · comments (53)
The Gemini Incident Continues
Zvi · 2024-02-27T16:00:05.648Z · comments (6)
Higher-Order Forecasts
ozziegooen · 2024-05-22T21:49:42.802Z · comments (1)
[link] AI governance needs a theory of victory
Corin Katzke (corin-katzke) · 2024-06-21T16:15:46.560Z · comments (8)