LessWrong 2.0 Reader


[question] How do we prepare for final crunch time?
Eli Tyre (elityre) · 2021-03-30T05:47:54.654Z · answers+comments (30)
Full-time AGI Safety!
Steven Byrnes (steve2152) · 2021-03-01T12:42:14.813Z · comments (3)
Why I'm joining Anthropic
evhub · 2023-01-05T01:12:13.822Z · comments (4)
[link] Manifold: If okay AGI, why?
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-03-25T22:43:53.820Z · comments (37)
Narrative Syncing
AnnaSalamon · 2022-05-01T01:48:45.889Z · comments (48)
[question] What are your greatest one-shot life improvements?
Mark Xu (mark-xu) · 2020-05-16T16:53:40.608Z · answers+comments (171)
Human takeover might be worse than AI takeover
Tom Davidson (tom-davidson-1) · 2025-01-10T16:53:27.043Z · comments (54)
Parable of the Dammed
johnswentworth · 2020-12-10T00:08:44.493Z · comments (29)
[link] Scott Aaronson is joining OpenAI to work on AI safety
peterbarnett · 2022-06-18T04:06:55.465Z · comments (31)
“Reframing Superintelligence” + LLMs + 4 years
Eric Drexler · 2023-07-10T13:42:09.739Z · comments (9)
Experiences raising children in shared housing
juliawise · 2021-12-21T17:09:05.008Z · comments (4)
We have achieved Noob Gains in AI
phdead · 2022-05-18T20:56:49.143Z · comments (20)
Honoring Petrov Day on LessWrong, in 2020
Ben Pace (Benito) · 2020-09-26T08:01:36.838Z · comments (100)
Deep learning models might be secretly (almost) linear
beren · 2023-04-24T18:43:28.188Z · comments (29)
[link] Popular education in Sweden: much more than you wanted to know
Henrik Karlsson (henrik-karlsson) · 2022-05-17T20:07:50.318Z · comments (3)
unRLHF - Efficiently undoing LLM safeguards
Pranav Gade (pranav-gade) · 2023-10-12T19:58:08.811Z · comments (15)
AMA: Paul Christiano, alignment researcher
paulfchristiano · 2021-04-28T18:55:39.707Z · comments (197)
[question] How Hard Would It Be To Make A COVID Vaccine For Oneself?
johnswentworth · 2020-12-21T16:19:10.415Z · answers+comments (29)
Soft optimization makes the value target bigger
Jeremy Gillen (jeremy-gillen) · 2023-01-02T16:06:50.229Z · comments (20)
Situating LessWrong in contemporary philosophy: An interview with Jon Livengood
Suspended Reason (suspended-reason) · 2020-07-01T00:37:00.695Z · comments (21)
Catching the Spark
LoganStrohl (BrienneYudkowsky) · 2021-01-30T23:23:57.383Z · comments (21)
2024 in AI predictions
jessicata (jessica.liu.taylor) · 2025-01-01T20:29:49.132Z · comments (3)
GPT-3 Catching Fish in Morse Code
Megan Kinniment (megan-kinniment) · 2022-06-30T21:22:49.054Z · comments (27)
The Dream Machine
sarahconstantin · 2024-12-05T00:00:05.796Z · comments (6)
8 examples informing my pessimism on uploading without reverse engineering
Steven Byrnes (steve2152) · 2023-11-03T20:03:50.450Z · comments (12)
Technology Changes Constraints
johnswentworth · 2020-01-25T23:13:17.428Z · comments (6)
Measuring hardware overhang
hippke · 2020-08-05T19:59:00.003Z · comments (14)
On the limits of idealized values
Joe Carlsmith (joekc) · 2021-06-22T02:10:50.073Z · comments (20)
Gears vs Behavior
johnswentworth · 2019-09-19T06:50:42.379Z · comments (14)
Announcing $5,000 bounty for (responsibly) ending malaria
lc · 2022-09-24T04:28:22.189Z · comments (40)
Making AIs less likely to be spiteful
Nicolas Macé (NicolasMace) · 2023-09-26T14:12:06.202Z · comments (4)
Can you get AGI from a Transformer?
Steven Byrnes (steve2152) · 2020-07-23T15:27:51.712Z · comments (40)
A new acausal trading platform: RobinShould
Matthew Barnett (matthew-barnett) · 2021-04-01T16:56:07.488Z · comments (5)
The o1 System Card Is Not About o1
Zvi · 2024-12-13T20:30:08.048Z · comments (5)
The Pearly Gates
lsusr · 2024-05-30T04:01:14.198Z · comments (6)
Am I secretly excited for AI getting weird?
porby · 2022-10-29T22:16:52.592Z · comments (4)
On Investigating Conspiracy Theories
Zvi · 2023-02-20T12:50:00.891Z · comments (38)
Touch reality as soon as possible (when doing machine learning research)
LawrenceC (LawChan) · 2023-01-03T19:11:58.915Z · comments (9)
ARC tests to see if GPT-4 can escape human control; GPT-4 failed to do so
Christopher King (christopher-king) · 2023-03-15T00:29:23.523Z · comments (22)
Covid 6/25: The Dam Breaks
Zvi · 2020-06-25T18:30:02.899Z · comments (21)
Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons
Hazard · 2023-03-12T05:25:26.496Z · comments (17)
Talent Needs of Technical AI Safety Teams
yams (william-brewer) · 2024-05-24T00:36:40.486Z · comments (65)
The Fountain of Health: a First Principles Guide to Rejuvenation
PhilJackson · 2023-01-07T18:34:12.094Z · comments (38)
Social status part 1/2: negotiations over object-level preferences
Steven Byrnes (steve2152) · 2024-03-05T16:29:07.143Z · comments (15)
The Pareto Best and the Curse of Doom
Screwtape · 2024-02-21T23:10:01.359Z · comments (21)
[link] In Defense of Attempting Hard Things, and my story of the Leverage ecosystem
Cathleen · 2021-12-17T23:08:20.150Z · comments (43)
[link] Why Weren't Hot Air Balloons Invented Sooner?
Lost Futures (aeviternity1) · 2022-10-18T00:41:43.156Z · comments (52)
Should CA, TX, OK, and LA merge into a giant swing state, just for elections?
Thomas Kwa (thomas-kwa) · 2024-11-06T23:01:48.992Z · comments (35)
[link] Anthropic release Claude 3, claims >GPT-4 Performance
LawrenceC (LawChan) · 2024-03-04T18:23:54.065Z · comments (41)
Why I funded PIBBSS
Ryan Kidd (ryankidd44) · 2024-09-15T19:56:33.018Z · comments (21)