LessWrong 2.0 Reader

[link] LLMs seem (relatively) safe
JustisMills · 2024-04-25T22:13:06.221Z · comments (24)
Calendar feature geometry in GPT-2 layer 8 residual stream SAEs
Patrick Leask (patrickleask) · 2024-08-17T01:16:53.764Z · comments (0)
Self-Blinded L-Theanine RCT
niplav · 2023-10-31T15:24:57.717Z · comments (12)
Per protocol analysis as medical malpractice
braces · 2024-01-31T16:22:21.367Z · comments (8)
Reflections on my first year of AI safety research
Jay Bailey · 2024-01-08T07:49:08.147Z · comments (3)
Gradient Descent on the Human Brain
Jozdien · 2024-04-01T22:39:24.862Z · comments (5)
Thiel on AI & Racing with China
Ben Pace (Benito) · 2024-08-20T03:19:18.966Z · comments (10)
AI #45: To Be Determined
Zvi · 2024-01-04T15:00:05.936Z · comments (4)
[link] Investigating an insurance-for-AI startup
L Rudolf L (LRudL) · 2024-09-21T15:29:10.083Z · comments (0)
Anthropical Paradoxes are Paradoxes of Probability Theory
Ape in the coat · 2023-12-06T08:16:26.846Z · comments (18)
[link] OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns
Seth Herd · 2023-11-20T14:20:33.539Z · comments (28)
Aligned AI is dual use technology
lc · 2024-01-27T06:50:10.435Z · comments (31)
Can we build a better Public Doublecrux?
Raemon · 2024-05-11T19:21:53.326Z · comments (6)
[link] The Long-Term Future Fund is looking for a full-time fund chair
Linch · 2023-10-05T22:18:53.720Z · comments (0)
[link] Breaking Circuit Breakers
mikes · 2024-07-14T18:57:20.251Z · comments (13)
BatchTopK: A Simple Improvement for TopK-SAEs
Bart Bussmann (Stuckwork) · 2024-07-20T02:20:51.848Z · comments (0)
Schelling points in the AGI policy space
mesaoptimizer · 2024-06-26T13:19:25.186Z · comments (2)
Announcing the Double Crux Bot
sanyer (santeri-koivula) · 2024-01-09T18:54:15.361Z · comments (8)
Towards a formalization of the agent structure problem
Alex_Altair · 2024-04-29T20:28:15.190Z · comments (5)
The Stochastic Parrot Hypothesis is debatable for the last generation of LLMs
Quentin FEUILLADE--MONTIXI (quentin-feuillade-montixi) · 2023-11-07T16:12:20.031Z · comments (20)
Pseudonymity and Accusations
jefftk (jkaufman) · 2023-12-21T19:20:19.944Z · comments (20)
AI #43: Functional Discoveries
Zvi · 2023-12-21T15:50:04.442Z · comments (26)
[link] S-Risks: Fates Worse Than Extinction
aggliu · 2024-05-04T15:30:36.666Z · comments (2)
[question] Intelligence Enhancement (Monthly Thread) 13 Oct 2023
Nicholas / Heather Kross (NicholasKross) · 2023-10-13T17:28:37.490Z · answers+comments (40)
Provably Safe AI: Worldview and Projects
bgold · 2024-08-09T23:21:02.763Z · comments (43)
Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It.
Andrew_Critch · 2024-09-11T04:41:24.872Z · comments (7)
Two LessWrong speed friending experiments
mikko (morrel) · 2024-06-15T10:52:26.081Z · comments (3)
[link] A model of research skill
L Rudolf L (LRudL) · 2024-01-08T00:13:12.755Z · comments (6)
The Shutdown Problem: Incomplete Preferences as a Solution
EJT (ElliottThornley) · 2024-02-23T16:01:16.378Z · comments (22)
[link] Bed Time Quests & Dinner Games for 3-5 year olds
Gunnar_Zarncke · 2024-06-22T07:53:38.989Z · comments (0)
AI #82: The Governor Ponders
Zvi · 2024-09-19T13:30:04.863Z · comments (8)
[link] The Good Balsamic Vinegar
jenn (pixx) · 2024-01-26T19:30:57.435Z · comments (4)
AI #52: Oops
Zvi · 2024-02-22T21:50:07.393Z · comments (9)
Llama Llama-3-405B?
Zvi · 2024-07-24T19:40:07.565Z · comments (9)
[link] how birds sense magnetic fields
bhauth · 2024-06-27T18:59:35.075Z · comments (4)
Does literacy remove your ability to be a bard as good as Homer?
Adrià Garriga-alonso (rhaps0dy) · 2024-01-18T03:43:14.994Z · comments (19)
The case for stopping AI safety research
catubc (cat-1) · 2024-05-23T15:55:18.713Z · comments (38)
Polysemantic Attention Head in a 4-Layer Transformer
Jett (jett) · 2023-11-09T16:16:35.132Z · comments (0)
[link] Towards shutdownable agents via stochastic choice
EJT (ElliottThornley) · 2024-07-08T10:14:24.452Z · comments (5)
[link] Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
Gunnar_Zarncke · 2024-05-16T13:09:39.265Z · comments (20)
Spatial attention as a “tell” for empathetic simulation?
Steven Byrnes (steve2152) · 2024-04-26T15:10:58.040Z · comments (12)
Rewilding the Gut VS the Autoimmune Epidemic
GGD · 2024-08-16T18:00:46.239Z · comments (0)
The Assumed Intent Bias
silentbob · 2023-11-05T16:28:03.282Z · comments (13)
Will 2024 be very hot? Should we be worried?
A.H. (AlfredHarwood) · 2023-12-29T11:22:50.200Z · comments (12)
OpenAI-Microsoft partnership
Zach Stein-Perlman · 2023-10-03T20:01:44.795Z · comments (19)
On OpenAI’s Preparedness Framework
Zvi · 2023-12-21T14:00:05.144Z · comments (4)
SAEs (usually) Transfer Between Base and Chat Models
Connor Kissane (ckkissane) · 2024-07-18T10:29:46.138Z · comments (0)
Measuring Structure Development in Algorithmic Transformers
Micurie (micurie) · 2024-08-22T08:38:02.140Z · comments (4)
n of m ring signatures
DanielFilan · 2023-12-04T20:00:06.580Z · comments (7)
Apply to the Conceptual Boundaries Workshop for AI Safety
Chipmonk · 2023-11-27T21:04:59.037Z · comments (0)