LessWrong 2.0 Reader

[link] By default, capital will matter more than ever after AGI
L Rudolf L (LRudL) · 2024-12-28T17:52:58.358Z · comments (60)
Shallow review of technical AI safety, 2024
technicalities · 2024-12-29T12:01:14.724Z · comments (22)
Some arguments against a land value tax
Matthew Barnett (matthew-barnett) · 2024-12-29T15:17:00.740Z · comments (23)
Why I'm Moving from Mechanistic to Prosaic Interpretability
Daniel Tan (dtch1997) · 2024-12-30T06:35:43.417Z · comments (17)
2025 Prediction Thread
habryka (habryka4) · 2024-12-30T01:50:14.216Z · comments (13)
Is "VNM-agent" one of several options, for what minds can grow up into?
AnnaSalamon · 2024-12-30T06:36:20.890Z · comments (32)
[link] Began a pay-on-results coaching experiment, made $40,300 since July
Chipmonk · 2024-12-29T21:12:02.574Z · comments (11)
AI Assistants Should Have a Direct Line to Their Developers
Jan_Kulveit · 2024-12-28T17:01:58.643Z · comments (4)
What happens next?
Logan Zoellner (logan-zoellner) · 2024-12-29T01:41:33.685Z · comments (19)
[link] Learn to write well BEFORE you have something worth saying
eukaryote · 2024-12-29T23:42:31.906Z · comments (7)
Considerations on orca intelligence
Towards_Keeperhood (Simon Skade) · 2024-12-29T14:35:16.445Z · comments (4)
o3, Oh My
Zvi · 2024-12-30T14:10:05.144Z · comments (9)
Book Summary: Zero to One
bilalchughtai (beelal) · 2024-12-29T16:13:52.922Z · comments (1)
The average rationalist IQ is about 122
Rockenots (Ekefa) · 2024-12-28T15:42:07.067Z · comments (20)
[link] Why OpenAI’s Structure Must Evolve To Advance Our Mission
stuhlmueller · 2024-12-28T04:24:19.937Z · comments (1)
[link] The Legacy of Computer Science
Johannes C. Mayer (johannes-c-mayer) · 2024-12-29T13:15:28.606Z · comments (0)
[link] Impact in AI Safety Now Requires Specific Strategic Insight
MiloSal (milosal) · 2024-12-29T00:40:53.780Z · comments (1)
Boston Solstice 2024 Retrospective
jefftk (jkaufman) · 2024-12-29T15:40:05.095Z · comments (0)
The low Information Density of Eliezer Yudkowsky & LessWrong
Felix Olszewski (quick-maths) · 2024-12-30T19:43:59.355Z · comments (0)
[link] Predictions of Near-Term Societal Changes Due to Artificial Intelligence
Annapurna (jorge-velez) · 2024-12-29T14:53:57.176Z · comments (0)
[link] Corrigibility should be an AI's Only Goal
PeterMcCluskey · 2024-12-29T20:25:17.922Z · comments (1)
Does Claude Prioritize Some Prompt Input Channels Over Others?
keltan · 2024-12-29T01:21:26.755Z · comments (2)
Notes on Altruism
David Gross (David_Gross) · 2024-12-29T03:13:09.444Z · comments (0)
Dusty Hands and Geoarbitrage
Tomás B. (Bjartur Tómas) · 2024-12-30T19:52:51.787Z · comments (0)
[question] Has Someone Checked The Cold-Water-In-Left-Ear Thing?
Maloew (maloew-valenar) · 2024-12-28T20:15:35.951Z · answers+comments (0)
The Robot, the Puppet-master, and the Psychohistorian
WillPetillo · 2024-12-28T00:12:08.824Z · comments (1)
Linkpost: Look at the Water
J Bostock (Jemist) · 2024-12-30T19:49:04.107Z · comments (0)
No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate!
Christopher King (christopher-king) · 2024-12-28T16:05:47.037Z · comments (7)
Dishbrain and implications.
RussellThor · 2024-12-29T10:42:43.912Z · comments (0)
[question] Could my work, "Beyond HaHa" benefit the LessWrong community?
P. João (gabriel-brito) · 2024-12-29T16:14:13.497Z · answers+comments (0)
Teaching Claude to Meditate
Gordon Seidoh Worley (gworley) · 2024-12-29T22:27:44.657Z · comments (2)
[link] When do experts think human-level AI will be created?
Vishakha (vishakha-agrawal) · 2024-12-30T06:20:33.158Z · comments (0)
[link] Genetically edited mosquitoes haven't scaled yet. Why?
alexey · 2024-12-30T21:37:32.942Z · comments (0)
[link] World models I'm currently building
xpostah · 2024-12-30T08:26:16.972Z · comments (0)
The Great OpenAI Debate: Should It Stay ‘Open’ or Go Private?
Satya (satya-2) · 2024-12-30T01:14:28.329Z · comments (0)
Making LLMs safer is more intuitive than you think: How Common Sense and Diversity Improve AI Alignment
Jeba Sania (jeba-sania) · 2024-12-29T19:27:35.685Z · comments (0)
Emergence and Amplification of Survival
jgraves01 · 2024-12-28T23:52:47.893Z · comments (0)
Morality Is Still Demanding
utilistrutil · 2024-12-29T00:33:40.471Z · comments (2)
Action: how do you REALLY go about doing?
DDthinker · 2024-12-29T22:00:24.915Z · comments (0)
The Engineering Argument Fallacy: Why Technological Success Doesn't Validate Physics
Wenitte Apiou (wenitte-apiou) · 2024-12-28T00:49:53.300Z · comments (5)
Rejecting Anthropomorphic Bias: Addressing Fears of AGI and Transformation
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:48:47.583Z · comments (1)
AI Alignment, and where we stand.
afeller08 · 2024-12-29T14:08:47.276Z · comments (0)
The Misconception of AGI as an Existential Threat: A Reassessment
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:39:57.780Z · comments (0)