LessWrong 2.0 Reader

What Do GDP Growth Curves Really Mean?
johnswentworth · 2021-10-07T21:58:15.121Z · comments (64)
What do ML researchers think about AI in 2022?
KatjaGrace · 2022-08-04T15:40:05.024Z · comments (33)
The ‘strong’ feature hypothesis could be wrong
lewis smith (lsgos) · 2024-08-02T14:33:58.898Z · comments (19)
Recursive Middle Manager Hell
Raemon · 2023-01-01T04:33:29.942Z · comments (46)
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More
Ben Pace (Benito) · 2019-10-04T04:08:49.942Z · comments (61)
Catching the Eye of Sauron
Casey B. (Zahima) · 2023-04-07T00:40:46.556Z · comments (68)
Thoughts on responsible scaling policies and regulation
paulfchristiano · 2023-10-24T22:21:18.341Z · comments (33)
What I would do if I wasn’t at ARC Evals
LawrenceC (LawChan) · 2023-09-05T19:19:36.830Z · comments (10)
Book Review: Design Principles of Biological Circuits
johnswentworth · 2019-11-05T06:49:58.329Z · comments (24)
[link] Gravity Turn
alkjash · 2021-08-16T19:20:06.748Z · comments (21)
Modern Transformers are AGI, and Human-Level
abramdemski · 2024-03-26T17:46:19.373Z · comments (88)
MIRI 2024 Mission and Strategy Update
Malo (malo) · 2024-01-05T00:20:54.169Z · comments (44)
The Zettelkasten Method
abramdemski · 2019-09-20T13:15:10.131Z · comments (90)
[question] Coronavirus: Justified Practical Advice Thread
Ben Pace (Benito) · 2020-02-28T06:43:41.139Z · answers+comments (337)
Noticing Frame Differences
Raemon · 2019-09-30T01:24:20.435Z · comments (39)
[link] AI presidents discuss AI alignment agendas
TurnTrout · 2023-09-09T18:55:37.931Z · comments (23)
"Slow" takeoff is a terrible term for "maybe even faster takeoff, actually"
Raemon · 2024-09-28T23:38:25.512Z · comments (69)
Announcing Apollo Research
Marius Hobbhahn (marius-hobbhahn) · 2023-05-30T16:17:19.767Z · comments (11)
LLM Generality is a Timeline Crux
eggsyntax · 2024-06-24T12:52:07.704Z · comments (119)
CFAR Takeaways: Andrew Critch
Raemon · 2024-02-14T01:37:03.931Z · comments (64)
Lessons learned from talking to >100 academics about AI safety
Marius Hobbhahn (marius-hobbhahn) · 2022-10-10T13:16:38.036Z · comments (18)
[link] Orthogonal: A new agent foundations alignment organization
Tamsin Leake (carado-1) · 2023-04-19T20:17:14.174Z · comments (4)
Brute Force Manufactured Consensus is Hiding the Crime of the Century
Roko · 2024-02-03T20:36:59.806Z · comments (156)
Draft report on AI timelines
Ajeya Cotra (ajeya-cotra) · 2020-09-18T23:47:39.684Z · comments (56)
Ayn Rand’s model of “living money”; and an upside of burnout
AnnaSalamon · 2024-11-16T02:59:07.368Z · comments (58)
Superbabies: Putting The Pieces Together
sarahconstantin · 2024-07-11T20:40:05.036Z · comments (37)
Enemies vs Malefactors
So8res · 2023-02-28T23:38:11.594Z · comments (69)
Morality is Scary
Wei Dai (Wei_Dai) · 2021-12-02T06:35:06.736Z · comments (116)
DARPA Digital Tutor: Four Months to Total Technical Expertise?
SebastianG (JohnBuridan) · 2020-07-06T23:34:21.089Z · comments (22)
ChatGPT can learn indirect control
Raymond D · 2024-03-21T21:11:06.649Z · comments (27)
Utility Maximization = Description Length Minimization
johnswentworth · 2021-02-18T18:04:23.365Z · comments (44)
An overview of 11 proposals for building safe advanced AI
evhub · 2020-05-29T20:38:02.060Z · comments (36)
Updates and Reflections on Optimal Exercise after Nearly a Decade
romeostevensit · 2023-06-08T23:02:14.761Z · comments (56)
Eliezer Yudkowsky’s Letter in Time Magazine
Zvi · 2023-04-05T18:00:01.670Z · comments (86)
[link] Actually, Othello-GPT Has A Linear Emergent World Representation
Neel Nanda (neel-nanda-1) · 2023-03-29T22:13:14.878Z · comments (26)
An AI risk argument that resonates with NYTimes readers
Julian Bradshaw · 2023-03-12T23:09:20.458Z · comments (14)
Launching Lightspeed Grants (Apply by July 6th)
habryka (habryka4) · 2023-06-07T02:53:29.227Z · comments (41)
Why indoor lighting is hard to get right and how to fix it
Richard Korzekwa (Grothor) · 2020-10-28T04:46:22.672Z · comments (54)
Humans provide an untapped wealth of evidence about alignment
TurnTrout · 2022-07-14T02:31:48.575Z · comments (94)
[link] Understanding Shapley Values with Venn Diagrams
Carson L · 2024-12-06T21:56:43.960Z · comments (34)
[question] Have epistemic conditions always been this bad?
Wei Dai (Wei_Dai) · 2020-01-25T04:42:52.190Z · answers+comments (106)
Unifying Bargaining Notions (1/2)
Diffractor · 2022-07-25T00:28:27.572Z · comments (41)
Limits of Current US Prediction Markets (PredictIt Case Study)
aphyer · 2020-07-14T07:24:23.421Z · comments (50)
Thoughts on sharing information about language model capabilities
paulfchristiano · 2023-07-31T16:04:21.396Z · comments (44)
Visible Homelessness in SF: A Quick Breakdown of Causes
alyssavance · 2022-05-25T01:40:43.768Z · comments (32)
Power Buys You Distance From The Crime
Elizabeth (pktechgirl) · 2019-08-02T20:50:01.106Z · comments (75)
We've built Connected Papers - a visual tool for researchers to find and explore academic papers
discordy · 2020-06-08T18:44:56.467Z · comments (39)
[link] The Lighthaven Campus is open for bookings
habryka (habryka4) · 2023-09-30T01:08:12.664Z · comments (18)
What it's like to dissect a cadaver
Alok Singh (OldManNick) · 2022-11-10T06:40:05.776Z · comments (24)
Towards more cooperative AI safety strategies
Richard_Ngo (ricraz) · 2024-07-16T04:36:29.191Z · comments (133)