LessWrong 2.0 Reader



Don't leave your fingerprints on the future
So8res · 2022-10-08T00:35:35.430Z · comments (46)
[link] Popular education in Sweden: much more than you wanted to know
Henrik Karlsson (henrik-karlsson) · 2022-05-17T20:07:50.318Z · comments (3)
[link] Explore More: A Bag of Tricks to Keep Your Life on the Rails
Shoshannah Tekofsky (DarkSym) · 2024-09-28T21:38:52.256Z · comments (5)
Soft optimization makes the value target bigger
Jeremy Gillen (jeremy-gillen) · 2023-01-02T16:06:50.229Z · comments (20)
Deconfusing Direct vs Amortised Optimization
beren · 2022-12-02T11:30:46.754Z · comments (19)
Honoring Petrov Day on LessWrong, in 2020
Ben Pace (Benito) · 2020-09-26T08:01:36.838Z · comments (100)
Mental health benefits and downsides of psychedelic use in ACX readers: survey results
RationalElf · 2021-10-25T22:55:09.522Z · comments (18)
[question] How Hard Would It Be To Make A COVID Vaccine For Oneself?
johnswentworth · 2020-12-21T16:19:10.415Z · answers+comments (29)
AI Alignment Metastrategy
Vanessa Kosoy (vanessa-kosoy) · 2023-12-31T12:06:11.433Z · comments (13)
AMA: Paul Christiano, alignment researcher
paulfchristiano · 2021-04-28T18:55:39.707Z · comments (197)
Experiences raising children in shared housing
juliawise · 2021-12-21T17:09:05.008Z · comments (4)
[link] Scott Aaronson is joining OpenAI to work on AI safety
peterbarnett · 2022-06-18T04:06:55.465Z · comments (31)
Efficient Dictionary Learning with Switch Sparse Autoencoders
Anish Mudide (anish-mudide) · 2024-07-22T18:45:53.502Z · comments (19)
“Reframing Superintelligence” + LLMs + 4 years
Eric Drexler · 2023-07-10T13:42:09.739Z · comments (9)
A List of 45+ Mech Interp Project Ideas from Apollo Research’s Interpretability Team
Lee Sharkey (Lee_Sharkey) · 2024-07-18T14:15:50.248Z · comments (18)
We have achieved Noob Gains in AI
phdead · 2022-05-18T20:56:49.143Z · comments (20)
AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
basil.halperin (bhalperin) · 2023-01-10T16:06:52.329Z · comments (44)
Deep learning models might be secretly (almost) linear
beren · 2023-04-24T18:43:28.188Z · comments (29)
GPT-3 Catching Fish in Morse Code
Megan Kinniment (megan-kinniment) · 2022-06-30T21:22:49.054Z · comments (27)
unRLHF - Efficiently undoing LLM safeguards
Pranav Gade (pranav-gade) · 2023-10-12T19:58:08.811Z · comments (15)
Goodhart's Law inside the human mind
Kaj_Sotala · 2023-04-17T13:48:13.183Z · comments (13)
Situating LessWrong in contemporary philosophy: An interview with Jon Livengood
Suspended Reason (suspended-reason) · 2020-07-01T00:37:00.695Z · comments (21)
[question] Why The Focus on Expected Utility Maximisers?
DragonGod · 2022-12-27T15:49:36.536Z · answers+comments (84)
[link] Manifold: If okay AGI, why?
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-03-25T22:43:53.820Z · comments (37)
[question] What are your greatest one-shot life improvements?
Mark Xu (mark-xu) · 2020-05-16T16:53:40.608Z · answers+comments (171)
On the limits of idealized values
Joe Carlsmith (joekc) · 2021-06-22T02:10:50.073Z · comments (20)
Catching the Spark
LoganStrohl (BrienneYudkowsky) · 2021-01-30T23:23:57.383Z · comments (21)
Parable of the Dammed
johnswentworth · 2020-12-10T00:08:44.493Z · comments (29)
Covid 6/25: The Dam Breaks
Zvi · 2020-06-25T18:30:02.899Z · comments (21)
Technology Changes Constraints
johnswentworth · 2020-01-25T23:13:17.428Z · comments (6)
[question] Which skincare products are evidence-based?
Vanessa Kosoy (vanessa-kosoy) · 2024-05-02T15:22:12.597Z · answers+comments (47)
Am I secretly excited for AI getting weird?
porby · 2022-10-29T22:16:52.592Z · comments (4)
Can you get AGI from a Transformer?
Steven Byrnes (steve2152) · 2020-07-23T15:27:51.712Z · comments (40)
ARC tests to see if GPT-4 can escape human control; GPT-4 failed to do so
Christopher King (christopher-king) · 2023-03-15T00:29:23.523Z · comments (22)
[link] When discussing AI risks, talk about capabilities, not intelligence
Vika · 2023-08-11T13:38:48.844Z · comments (7)
Book review: "A Thousand Brains" by Jeff Hawkins
Steven Byrnes (steve2152) · 2021-03-04T05:10:44.929Z · comments (18)
Announcing $5,000 bounty for (responsibly) ending malaria
lc · 2022-09-24T04:28:22.189Z · comments (40)
Bing chat is the AI fire alarm
Ratios · 2023-02-17T06:51:51.551Z · comments (63)
Awakening
lsusr · 2024-05-30T07:03:00.821Z · comments (79)
[link] Anthropic release Claude 3, claims >GPT-4 Performance
LawrenceC (LawChan) · 2024-03-04T18:23:54.065Z · comments (41)
Anthropic's Certificate of Incorporation
Zach Stein-Perlman · 2024-06-12T13:00:30.806Z · comments (4)
[link] In Defense of Attempting Hard Things, and my story of the Leverage ecosystem
Cathleen · 2021-12-17T23:08:20.150Z · comments (43)
The LessWrong 2022 Review
habryka (habryka4) · 2023-12-05T04:00:00.000Z · comments (43)
Why I funded PIBBSS
Ryan Kidd (ryankidd44) · 2024-09-15T19:56:33.018Z · comments (20)
On Investigating Conspiracy Theories
Zvi · 2023-02-20T12:50:00.891Z · comments (38)
Measuring hardware overhang
hippke · 2020-08-05T19:59:00.003Z · comments (14)
Talent Needs of Technical AI Safety Teams
yams (william-brewer) · 2024-05-24T00:36:40.486Z · comments (64)
[link] Why Weren't Hot Air Balloons Invented Sooner?
Lost Futures (aeviternity1) · 2022-10-18T00:41:43.156Z · comments (52)
[link] My Number 1 Epistemology Book Recommendation: Inventing Temperature
adamShimi · 2024-09-08T14:30:40.456Z · comments (18)
Monastery and Throne
Jacob Falkovich (Jacobian) · 2021-04-06T19:00:52.623Z · comments (42)