LessWrong 2.0 Reader


What do ML researchers think about AI in 2022?
KatjaGrace · 2022-08-04T15:40:05.024Z · comments (33)
What Do GDP Growth Curves Really Mean?
johnswentworth · 2021-10-07T21:58:15.121Z · comments (64)
[link] Gravity Turn
alkjash · 2021-08-16T19:20:06.748Z · comments (21)
How I buy things when Lightcone wants them fast
jacobjacob · 2022-09-26T05:02:09.003Z · comments (21)
Recursive Middle Manager Hell
Raemon · 2023-01-01T04:33:29.942Z · comments (45)
MIRI 2024 Mission and Strategy Update
Malo (malo) · 2024-01-05T00:20:54.169Z · comments (44)
[link] AI presidents discuss AI alignment agendas
TurnTrout · 2023-09-09T18:55:37.931Z · comments (22)
What are the results of more parental supervision and less outdoor play?
juliawise · 2023-11-25T12:52:29.986Z · comments (30)
Announcing Apollo Research
Marius Hobbhahn (marius-hobbhahn) · 2023-05-30T16:17:19.767Z · comments (11)
Ways I Expect AI Regulation To Increase Extinction Risk
1a3orn · 2023-07-04T17:32:48.047Z · comments (32)
Elements of Rationalist Discourse
Rob Bensinger (RobbBB) · 2023-02-12T07:58:42.479Z · comments (47)
CFAR Takeaways: Andrew Critch
Raemon · 2024-02-14T01:37:03.931Z · comments (62)
Thoughts on responsible scaling policies and regulation
paulfchristiano · 2023-10-24T22:21:18.341Z · comments (33)
Lessons learned from talking to >100 academics about AI safety
Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T13:16:38.036Z · comments (17)
Believing In
AnnaSalamon · 2024-02-08T07:06:13.072Z · comments (49)
ProjectLawful.com: Eliezer's latest story, past 1M words
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2022-05-11T06:18:02.738Z · comments (112)
Modern Transformers are AGI, and Human-Level
abramdemski · 2024-03-26T17:46:19.373Z · comments (89)
Moses and the Class Struggle
lsusr · 2022-04-01T11:55:04.911Z · comments (26)
ChatGPT can learn indirect control
Raymond D · 2024-03-21T21:11:06.649Z · comments (23)
What I would do if I wasn’t at ARC Evals
LawrenceC (LawChan) · 2023-09-05T19:19:36.830Z · comments (8)
Launching Lightspeed Grants (Apply by July 6th)
habryka (habryka4) · 2023-06-07T02:53:29.227Z · comments (41)
[link] Actually, Othello-GPT Has A Linear Emergent World Representation
Neel Nanda (neel-nanda-1) · 2023-03-29T22:13:14.878Z · comments (24)
[link] Introducing AI Lab Watch
Zach Stein-Perlman · 2024-04-30T17:00:12.652Z · comments (25)
[link] Cultivating a state of mind where new ideas are born
Henrik Karlsson (henrik-karlsson) · 2023-07-27T09:16:42.566Z · comments (18)
Long Covid Is Not Necessarily Your Biggest Problem
Elizabeth (pktechgirl) · 2021-09-01T07:20:05.374Z · comments (40)
[link] Orthogonal: A new agent foundations alignment organization
Tamsin Leake (carado-1) · 2023-04-19T20:17:14.174Z · comments (4)
Natural Abstractions: Key claims, Theorems, and Critiques
LawrenceC (LawChan) · 2023-03-16T16:37:40.181Z · comments (20)
What it's like to dissect a cadaver
Alok Singh (OldManNick) · 2022-11-10T06:40:05.776Z · comments (24)
Call For Distillers
johnswentworth · 2022-04-04T18:25:34.942Z · comments (43)
My current LK99 questions
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-08-01T22:48:00.733Z · comments (38)
Unifying Bargaining Notions (1/2)
Diffractor · 2022-07-25T00:28:27.572Z · comments (41)
I Converted Book I of The Sequences Into A Zoomer-Readable Format
dkirmani · 2022-11-10T02:59:04.236Z · comments (31)
An AI risk argument that resonates with NYTimes readers
Julian Bradshaw · 2023-03-12T23:09:20.458Z · comments (14)
Visible Homelessness in SF: A Quick Breakdown of Causes
alyssavance · 2022-05-25T01:40:43.768Z · comments (32)
Benign Boundary Violations
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2022-05-26T06:48:35.585Z · comments (84)
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
So8res · 2023-11-24T17:37:43.020Z · comments (83)
Enemies vs Malefactors
So8res · 2023-02-28T23:38:11.594Z · comments (60)
Worlds Where Iterative Design Fails
johnswentworth · 2022-08-30T20:48:29.025Z · comments (30)
Eliezer Yudkowsky’s Letter in Time Magazine
Zvi · 2023-04-05T18:00:01.670Z · comments (86)
Lightcone Infrastructure/LessWrong is looking for funding
habryka (habryka4) · 2023-06-14T04:45:53.425Z · comments (38)
Common knowledge about Leverage Research 1.0
BayAreaHuman · 2021-09-24T06:56:14.729Z · comments (212)
Funny Anecdote of Eliezer From His Sister
Daniel Birnbaum (daniel-birnbaum) · 2024-04-22T22:05:31.886Z · comments (5)
Yudkowsky and Christiano discuss "Takeoff Speeds"
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-22T19:35:27.657Z · comments (176)
[link] "How could I have thought that faster?"
mesaoptimizer · 2024-03-11T10:56:17.884Z · comments (31)
We're Not Ready: thoughts on "pausing" and responsible scaling policies
HoldenKarnofsky · 2023-10-27T15:19:33.757Z · comments (33)
GPT-4 Plugs In
Zvi · 2023-03-27T12:10:00.926Z · comments (47)
[link] The Lighthaven Campus is open for bookings
habryka (habryka4) · 2023-09-30T01:08:12.664Z · comments (18)
[link] Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy
garrison · 2024-02-10T19:52:55.191Z · comments (52)
A concrete bet offer to those with short AGI timelines
Matthew Barnett (matthew-barnett) · 2022-04-09T21:41:45.106Z · comments (117)
Thoughts on sharing information about language model capabilities
paulfchristiano · 2023-07-31T16:04:21.396Z · comments (34)