LessWrong 2.0 Reader

Why I no longer identify as transhumanist
Kaj_Sotala · 2024-02-03T12:00:04.389Z · comments (33)
It's OK to be biased towards humans
dr_s · 2023-11-11T11:59:16.568Z · comments (69)
[link] Building intuition with spaced repetition systems
Jacob G-W (g-w1) · 2024-05-12T15:49:04.860Z · comments (6)
Conditional prediction markets are evidential, not causal
philh · 2024-02-07T21:52:47.476Z · comments (10)
[link] Pay-on-results personal growth: first success
Chipmonk · 2024-09-14T03:39:12.975Z · comments (2)
Rationalists are missing a core piece for agent-like structure (energy vs information overload)
tailcalled · 2024-08-17T09:57:19.370Z · comments (9)
A "Bitter Lesson" Approach to Aligning AGI and ASI
RogerDearnaley (roger-d-1) · 2024-07-06T01:23:22.376Z · comments (39)
Calculating Natural Latents via Resampling
johnswentworth · 2024-06-06T00:37:42.127Z · comments (4)
The Best of Don’t Worry About the Vase
Zvi · 2023-12-13T12:50:02.510Z · comments (4)
Genetic fitness is a measure of selection strength, not the selection target
Kaj_Sotala · 2023-11-04T19:02:13.783Z · comments (43)
[link] [Closed] Agent Foundations track in MATS
Vanessa Kosoy (vanessa-kosoy) · 2023-10-31T08:12:50.482Z · comments (1)
On “first critical tries” in AI alignment
Joe Carlsmith (joekc) · 2024-06-05T00:19:02.814Z · comments (8)
[link] In Defense of Epistemic Empathy
Kevin Dorst · 2023-12-27T16:27:06.320Z · comments (19)
[link] Google Gemini Announced
Jacob G-W (g-w1) · 2023-12-06T16:14:07.192Z · comments (22)
[link] Book review: Xenosystems
jessicata (jessica.liu.taylor) · 2024-09-16T20:17:56.670Z · comments (18)
On Anthropic’s Sleeper Agents Paper
Zvi · 2024-01-17T16:10:05.145Z · comments (5)
What if a tech company forced you to move to NYC?
KatjaGrace · 2024-06-09T06:30:03.329Z · comments (22)
Cooperating with aliens and AGIs: An ECL explainer
Chi Nguyen · 2024-02-24T22:58:47.345Z · comments (8)
Dating Roundup #2: If At First You Don’t Succeed
Zvi · 2024-01-02T16:00:04.955Z · comments (29)
Math-to-English Cheat Sheet
nahoj · 2024-04-08T09:19:40.814Z · comments (5)
Ten Modes of Culture War Discourse
jchan · 2024-01-31T13:58:20.572Z · comments (15)
[link] Theories of Change for AI Auditing
Lee Sharkey (Lee_Sharkey) · 2023-11-13T19:33:43.928Z · comments (0)
[link] Come to Manifest 2024 (June 7-9 in Berkeley)
Saul Munn (saul-munn) · 2024-03-27T21:30:17.306Z · comments (2)
[link] the micro-fulfillment cambrian explosion
bhauth · 2023-12-04T01:15:34.342Z · comments (5)
Monthly Roundup #17: April 2024
Zvi · 2024-04-15T12:10:03.126Z · comments (4)
[link] OpenAI releases GPT-4o, natively interfacing with text, voice and vision
Martín Soto (martinsq) · 2024-05-13T18:50:52.337Z · comments (23)
[link] Unlocking Solutions—By Understanding Coordination Problems
James Stephen Brown (james-brown) · 2024-07-27T04:52:13.435Z · comments (4)
Safe Stasis Fallacy
Davidmanheim · 2024-02-05T10:54:44.061Z · comments (2)
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
leogao · 2023-12-16T05:39:10.558Z · comments (5)
[link] Land Reclamation is in the 9th Circle of Stagnation Hell
Maxwell Tabarrok (maxwell-tabarrok) · 2024-01-12T13:36:27.159Z · comments (6)
Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.
Chi Nguyen · 2024-02-23T06:10:05.881Z · comments (18)
[link] AI, centralization, and the One Ring
owencb · 2024-09-13T14:00:16.126Z · comments (11)
[Closed] PIBBSS is hiring in a variety of roles (alignment research and incubation program)
Nora_Ammann · 2024-04-09T08:12:59.241Z · comments (0)
AI #44: Copyright Confrontation
Zvi · 2023-12-28T14:30:10.237Z · comments (13)
[link] Questions are usually too cheap
Nathan Young · 2024-05-11T13:00:54.302Z · comments (19)
Self-Blinded L-Theanine RCT
niplav · 2023-10-31T15:24:57.717Z · comments (12)
AI #71: Farewell to Chevron
Zvi · 2024-07-04T13:40:05.905Z · comments (9)
AI #82: The Governor Ponders
Zvi · 2024-09-19T13:30:04.863Z · comments (8)
We are headed into an extreme compute overhang
devrandom · 2024-04-26T21:38:21.694Z · comments (33)
Trading off Lives
jefftk (jkaufman) · 2024-01-03T03:40:05.603Z · comments (12)
AI #50: The Most Dangerous Thing
Zvi · 2024-02-08T14:30:13.168Z · comments (4)
Fat Tails Discourage Compromise
niplav · 2024-06-17T09:39:16.489Z · comments (5)
[question] Can we get an AI to do our alignment homework for us?
Chris_Leong · 2024-02-26T07:56:22.320Z · answers+comments (33)
AI #40: A Vision from Vitalik
Zvi · 2023-11-30T17:30:08.350Z · comments (12)
Causal Graphs of GPT-2-Small's Residual Stream
David Udell · 2024-07-09T22:06:55.775Z · comments (7)
Human wanting
TsviBT · 2023-10-24T01:05:39.374Z · comments (1)
2022 (and All Time) Posts by Pingback Count
Raemon · 2023-12-16T21:17:00.572Z · comments (14)
How the AI safety technical landscape has changed in the last year, according to some practitioners
tlevin (trevor) · 2024-07-26T19:06:47.126Z · comments (6)
Per protocol analysis as medical malpractice
braces · 2024-01-31T16:22:21.367Z · comments (8)