LessWrong 2.0 Reader

GPTs are Predictors, not Imitators
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-04-08T19:59:13.601Z · comments (90)
LW Team is adjusting moderation policy
Raemon · 2023-04-04T20:41:07.603Z · comments (182)
Hooray for stepping out of the limelight
So8res · 2023-04-01T02:45:31.397Z · comments (24)
Notes on Teaching in Prison
jsd · 2023-04-19T01:53:00.427Z · comments (12)
[link] [SEE NEW EDITS] No, *You* Need to Write Clearer
NicholasKross · 2023-04-29T05:04:01.559Z · comments (64)
On AutoGPT
Zvi · 2023-04-13T12:30:01.059Z · comments (47)
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-04-08T00:36:47.702Z · comments (39)
My Assessment of the Chinese AI Safety Community
Lao Mein (derpherpize) · 2023-04-25T04:21:19.274Z · comments (94)
My views on “doom”
paulfchristiano · 2023-04-27T17:50:01.415Z · comments (34)
Policy discussions follow strong contextualizing norms
Richard_Ngo (ricraz) · 2023-04-01T23:51:36.588Z · comments (61)
Catching the Eye of Sauron
Casey B. (Zahima) · 2023-04-07T00:40:46.556Z · comments (68)
[link] Orthogonal: A new agent foundations alignment organization
Tamsin Leake (carado-1) · 2023-04-19T20:17:14.174Z · comments (4)
Eliezer Yudkowsky’s Letter in Time Magazine
Zvi · 2023-04-05T18:00:01.670Z · comments (86)
If interpretability research goes well, it may get dangerous
So8res · 2023-04-03T21:48:18.752Z · comments (10)
Evolution provides no evidence for the sharp left turn
Quintin Pope (quintin-pope) · 2023-04-11T18:43:07.776Z · comments (62)
Giant (In)scrutable Matrices: (Maybe) the Best of All Possible Worlds
1a3orn · 2023-04-04T17:39:39.720Z · comments (35)
The basic reasons I expect AGI ruin
Rob Bensinger (RobbBB) · 2023-04-18T03:37:01.496Z · comments (72)
Transcript and Brief Response to Twitter Conversation between Yann LeCun and Eliezer Yudkowsky
Zvi · 2023-04-26T13:10:01.233Z · comments (50)
The ‘ petertodd’ phenomenon
mwatkins · 2023-04-15T00:59:47.142Z · comments (50)
A report about LessWrong karma volatility from a different universe
Ben Pace (Benito) · 2023-04-01T21:48:32.503Z · comments (7)
Killing Socrates
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-11T10:28:37.096Z · comments (144)
Talking publicly about AI risk
Jan_Kulveit · 2023-04-21T11:28:16.665Z · comments (8)
The Brain is Not Close to Thermodynamic Limits on Computation
DaemonicSigil · 2023-04-24T08:21:44.727Z · comments (58)
[April Fools'] Definitive confirmation of shard theory
TurnTrout · 2023-04-01T07:27:23.096Z · comments (7)
[link] grey goo is unlikely
bhauth · 2023-04-17T01:59:57.054Z · comments (109)
Davidad's Bold Plan for Alignment: An In-Depth Explanation
Charbel-Raphaël (charbel-raphael-segerie) · 2023-04-19T16:09:01.455Z · comments (29)
Agentized LLMs will change the alignment landscape
Seth Herd · 2023-04-09T02:29:07.797Z · comments (95)
Shutting down AI is not enough. We need to destroy all technology.
Matthew Barnett (matthew-barnett) · 2023-04-01T21:03:24.448Z · comments (36)
Could a superintelligence deduce general relativity from a falling apple? An investigation
titotal (lombertini) · 2023-04-23T12:49:44.288Z · comments (39)
A freshman year during the AI midgame: my approach to the next year
Buck · 2023-04-14T00:38:49.807Z · comments (14)
AI doom from an LLM-plateau-ist perspective
Steven Byrnes (steve2152) · 2023-04-27T13:58:10.973Z · comments (23)
The self-unalignment problem
Jan_Kulveit · 2023-04-14T12:10:12.151Z · comments (22)
Consider The Hand Axe
ymeskhout · 2023-04-08T01:31:44.614Z · comments (16)
Request to AGI organizations: Share your views on pausing AI progress
Akash (akash-wasil) · 2023-04-11T17:30:46.707Z · comments (11)
AI x-risk, approximately ordered by embarrassment
Alex Lawsen (alex-lszn) · 2023-04-12T23:01:00.561Z · comments (7)
Four mindset disagreements behind existential risk disagreements in ML
Rob Bensinger (RobbBB) · 2023-04-11T04:53:48.427Z · comments (12)
The Learning-Theoretic Agenda: Status 2023
Vanessa Kosoy (vanessa-kosoy) · 2023-04-19T05:21:29.177Z · comments (13)
AI Summer Harvest
Cleo Nardo (strawberry calm) · 2023-04-04T03:35:58.473Z · comments (10)
Misgeneralization as a misnomer
So8res · 2023-04-06T20:43:33.275Z · comments (22)
$250 prize for checking Jake Cannell's Brain Efficiency
Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-04-26T16:21:06.035Z · comments (170)
But why would the AI kill us?
So8res · 2023-04-17T18:42:39.720Z · comments (86)
Goodhart's Law inside the human mind
Kaj_Sotala · 2023-04-17T13:48:13.183Z · comments (13)
[link] Tuning your Cognitive Strategies
Raemon · 2023-04-27T20:32:06.337Z · comments (55)
[New LW Feature] "Debates"
Ruby · 2023-04-01T07:00:24.466Z · comments (34)
Contra Yudkowsky on AI Doom
jacob_cannell · 2023-04-24T00:20:48.561Z · comments (111)
Deep learning models might be secretly (almost) linear
beren · 2023-04-24T18:43:28.188Z · comments (28)
How could you possibly choose what an AI wants?
So8res · 2023-04-19T17:08:54.694Z · comments (19)
Should we publish mechanistic interpretability research?
Marius Hobbhahn (marius-hobbhahn) · 2023-04-21T16:19:40.514Z · comments (40)
[link] Financial Times: We must slow down the race to God-like AI
trevor (TrevorWiesinger) · 2023-04-13T19:55:26.217Z · comments (17)
10 reasons why lists of 10 reasons might be a winning strategy
trevor (TrevorWiesinger) · 2023-04-06T21:24:17.896Z · comments (7)