LessWrong 2.0 Reader


An Analogy for Understanding Transformers
CallumMcDougall (TheMcDouglas) · 2023-05-13T12:20:25.688Z · comments (5)
How I apply (so-called) Non-Violent Communication
Kaj_Sotala · 2023-05-15T09:56:52.490Z · comments (25)
How I learned to stop worrying and love skill trees
junk heap homotopy (zrkrlc) · 2023-05-23T04:08:42.022Z · comments (2)
[link] Hell is Game Theory Folk Theorems
jessicata (jessica.liu.taylor) · 2023-05-01T03:16:03.247Z · comments (101)
AI #10: Code Interpreter and Geoff Hinton
Zvi · 2023-05-04T14:00:00.366Z · comments (7)
Result Of The Bounty/Contest To Explain Infra-Bayes In The Language Of Game Theory
johnswentworth · 2023-05-09T16:35:26.751Z · comments (0)
We learn long-lasting strategies to protect ourselves from danger and rejection
Richard_Ngo (ricraz) · 2023-05-16T16:36:08.398Z · comments (5)
AI #12: The Quest for Sane Regulations
Zvi · 2023-05-18T13:20:07.802Z · comments (12)
Brief notes on the Senate hearing on AI oversight
Diziet · 2023-05-16T22:29:33.167Z · comments (2)
What 2025 looks like
Ruby · 2023-05-01T22:53:15.783Z · comments (17)
New User's Guide to LessWrong
Ruby · 2023-05-17T00:55:49.814Z · comments (45)
Mob and Bailey
Screwtape · 2023-05-25T22:14:52.377Z · comments (15)
Some Thoughts on Virtue Ethics for AIs
peligrietzer · 2023-05-02T05:46:41.334Z · comments (7)
Residual stream norms grow exponentially over the forward pass
StefanHex (Stefan42) · 2023-05-07T00:46:02.658Z · comments (24)
Solving the Mechanistic Interpretability challenges: EIS VII Challenge 2
StefanHex (Stefan42) · 2023-05-25T15:37:54.593Z · comments (1)
Long Covid Risks: 2023 Update
Elizabeth (pktechgirl) · 2023-05-06T18:20:01.259Z · comments (9)
The Lightcone Theorem: A Better Foundation For Natural Abstraction?
johnswentworth · 2023-05-15T02:24:02.038Z · comments (25)
[link] Helping your Senator Prepare for the Upcoming Sam Altman Hearing
Tiago de Vassal (tiago-de-vassal) · 2023-05-14T22:45:43.341Z · comments (2)
Seeking (Paid) Case Studies on Standards
HoldenKarnofsky · 2023-05-26T17:58:57.042Z · comments (9)
[link] Orthogonal's Formal-Goal Alignment theory of change
Tamsin Leake (carado-1) · 2023-05-05T22:36:14.883Z · comments (12)
AI #11: In Search of a Moat
Zvi · 2023-05-11T15:40:00.462Z · comments (28)
[Linkpost] "Governance of superintelligence" by OpenAI
Daniel_Eth · 2023-05-22T20:15:25.327Z · comments (20)
[link] Turning off lights with model editing
Sam Marks (samuel-marks) · 2023-05-12T20:25:12.353Z · comments (5)
What does it take to ban a thing?
qbolec · 2023-05-08T11:00:51.133Z · comments (18)
Self-leadership and self-love dissolve anger and trauma
Richard_Ngo (ricraz) · 2023-05-22T22:30:06.650Z · comments (7)
Naturalist Collection
LoganStrohl (BrienneYudkowsky) · 2023-05-06T00:37:42.774Z · comments (7)
Some quotes from Tuesday's Senate hearing on AI
Daniel_Eth · 2023-05-17T12:13:04.449Z · comments (9)
Advice for interacting with busy people
Severin T. Seehrich (sts) · 2023-05-04T13:31:06.709Z · comments (4)
[New] Rejected Content Section
Ruby · 2023-05-04T01:43:19.547Z · comments (21)
An Impossibility Proof Relevant to the Shutdown Problem and Corrigibility
Audere (Ozzalus) · 2023-05-02T06:52:25.097Z · comments (13)
Announcing “Key Phenomena in AI Risk” (facilitated reading group)
Nora_Ammann · 2023-05-09T00:31:10.294Z · comments (4)
[link] Jaan Tallinn's 2022 Philanthropy Overview
jaan · 2023-05-14T15:35:04.297Z · comments (2)
The Compleat Cybornaut
ukc10014 · 2023-05-19T08:44:38.274Z · comments (2)
Avoiding xrisk from AI doesn't mean focusing on AI xrisk
Stuart_Armstrong · 2023-05-02T19:27:32.162Z · comments (7)
Idea: medical hypotheses app for mysterious chronic illnesses
riceissa · 2023-05-19T20:49:24.526Z · comments (8)
Systems that cannot be unsafe cannot be safe
Davidmanheim · 2023-05-02T08:53:35.115Z · comments (27)
How MATS addresses “mass movement building” concerns
Ryan Kidd (ryankidd44) · 2023-05-04T00:55:26.913Z · comments (9)
'Fundamental' vs 'applied' mechanistic interpretability research
Lee Sharkey (Lee_Sharkey) · 2023-05-23T18:26:18.174Z · comments (6)
[link] Google "We Have No Moat, And Neither Does OpenAI"
Chris_Leong · 2023-05-04T18:23:09.121Z · comments (28)
[link] TinyStories: Small Language Models That Still Speak Coherent English
Ulisse Mini (ulisse-mini) · 2023-05-28T22:23:30.560Z · comments (8)
Collective Identity
NicholasKees (nick_kees) · 2023-05-18T09:00:24.410Z · comments (12)
[link] The Office of Science and Technology Policy put out a request for information on A.I.
HiroSakuraba (hirosakuraba) · 2023-05-24T13:33:30.672Z · comments (4)
Reply to a fertility doctor concerning polygenic embryo screening
GeneSmith · 2023-05-29T21:50:20.854Z · comments (6)
Getting Your Eyes On
LoganStrohl (BrienneYudkowsky) · 2023-05-02T00:33:20.493Z · comments (11)
Resolving internal conflicts requires listening to what parts want
Richard_Ngo (ricraz) · 2023-05-19T00:04:20.451Z · comments (0)
Better debates
TsviBT · 2023-05-10T19:34:29.148Z · comments (7)
Explaining “Hell is Game Theory Folk Theorems”
electroswing · 2023-05-05T23:33:20.977Z · comments (21)
Naturalist Experimentation
LoganStrohl (BrienneYudkowsky) · 2023-05-10T04:28:32.013Z · comments (14)
Some Summaries of Agent Foundations Work
mattmacdermott · 2023-05-15T16:09:56.364Z · comments (1)
[link] The Treacherous Turn is finished! (AI-takeover-themed tabletop RPG)
Daniel Kokotajlo (daniel-kokotajlo) · 2023-05-22T05:49:28.145Z · comments (5)