LessWrong 2.0 Reader

How to do theoretical research, a personal perspective
Mark Xu (mark-xu) · 2022-08-19T19:41:21.562Z · comments (6)
The Power to Teach Concepts Better
Liron · 2019-09-23T00:21:55.849Z · comments (22)
[link] LTFF and EAIF are unusually funding-constrained right now
Linch · 2023-08-30T01:03:30.321Z · comments (24)
Local Memes Against Geometric Rationality
Scott Garrabrant · 2022-12-21T03:53:28.196Z · comments (3)
Deceptive Alignment is <1% Likely by Default
DavidW (david-wheaton) · 2023-02-21T15:09:27.920Z · comments (29)
[link] Epistemic standards for “Why did it take so long to invent X?”
jasoncrawford · 2020-03-02T18:58:42.783Z · comments (16)
Retrospective on the 2022 Conjecture AI Discussions
Andrea_Miotti (AndreaM) · 2023-02-24T22:41:13.131Z · comments (5)
You can, in fact, bamboozle an unaligned AI into sparing your life
David Matolcsi (matolcsid) · 2024-09-29T16:59:43.942Z · comments (170)
AI #31: It Can Do What Now?
Zvi · 2023-09-28T16:00:01.919Z · comments (6)
Writing Causal Models Like We Write Programs
johnswentworth · 2020-05-05T18:05:38.339Z · comments (10)
PSA: if you are in Russia, probably move out ASAP
yagudin · 2022-03-03T20:01:07.041Z · comments (22)
We Should Prepare for a Larger Representation of Academia in AI Safety
Leon Lang (leon-lang) · 2023-08-13T18:03:19.799Z · comments (13)
«Boundaries», Part 3a: Defining boundaries as directed Markov blankets
Andrew_Critch · 2022-10-30T06:31:00.277Z · comments (20)
A few thoughts on the inner ring
KatjaGrace · 2021-01-21T03:40:15.253Z · comments (23)
Prediction Markets aren't Magic
SimonM · 2023-12-21T12:54:07.754Z · comments (29)
Shulman and Yudkowsky on AI progress
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-03T20:05:22.552Z · comments (16)
Covid 3/4: Declare Victory and Leave Home
Zvi · 2021-03-04T13:20:01.046Z · comments (28)
A Playbook for AI Risk Reduction (focused on misaligned AI)
HoldenKarnofsky · 2023-06-06T18:05:55.306Z · comments (41)
Review: Conor Moreton's "Civilization & Cooperation"
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2024-05-26T19:32:43.131Z · comments (8)
Swiss Political System: More than You ever Wanted to Know (II.)
Martin Sustrik (sustrik) · 2020-07-22T13:50:04.033Z · comments (24)
What is going on in the world?
KatjaGrace · 2021-01-17T11:30:12.275Z · comments (48)
[question] How do you assess the quality / reliability of a scientific study?
Eli Tyre (elityre) · 2019-10-29T14:52:57.904Z · answers+comments (41)
I'm trying out "asteroid mindset"
Alex_Altair · 2022-06-03T13:35:48.614Z · comments (5)
Covid 2/4: Safe and Effective Vaccines Aplenty
Zvi · 2021-02-04T14:50:01.552Z · comments (25)
The shoot-the-moon strategy
jchan · 2021-07-21T16:19:48.226Z · comments (18)
Coercion is an adaptation to scarcity; trust is an adaptation to abundance
Richard_Ngo (ricraz) · 2023-05-23T18:14:19.117Z · comments (11)
[link] Introducing METR's Autonomy Evaluation Resources
Megan Kinniment (megan-kinniment) · 2024-03-15T23:16:59.696Z · comments (0)
Current themes in mechanistic interpretability research
Lee Sharkey (Lee_Sharkey) · 2022-11-16T14:14:02.030Z · comments (2)
Partial value takeover without world takeover
KatjaGrace · 2024-04-05T06:20:03.961Z · comments (23)
Comparative Advantage is Not About Trade
johnswentworth · 2020-09-22T18:43:11.496Z · comments (26)
Bruce Wayne and the Cost of Inaction
TurnTrout · 2022-09-30T00:19:47.335Z · comments (5)
Book review: Rethinking Consciousness
Steven Byrnes (steve2152) · 2020-01-10T20:41:27.352Z · comments (56)
Omicron Post #6
Zvi · 2021-12-13T18:00:01.098Z · comments (30)
[link] A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox
Henrik Karlsson (henrik-karlsson) · 2022-10-05T19:07:55.069Z · comments (12)
Coordination Skills I Wish I Had For the Pandemic
Raemon · 2021-11-13T23:32:11.510Z · comments (9)
How Specificity Works
Liron · 2019-09-03T12:11:36.216Z · comments (47)
Becoming Unusually Truth-Oriented
abramdemski · 2020-01-03T01:27:06.677Z · comments (20)
[link] New report: Safety Cases for AI
joshc (joshua-clymer) · 2024-03-20T16:45:27.984Z · comments (13)
SlateStarCodex deleted because NYT wants to dox Scott
Rudi C (rudi-c) · 2020-06-23T07:51:30.859Z · comments (93)
The bullseye framework: My case against AI doom
titotal (lombertini) · 2023-05-30T11:52:31.194Z · comments (35)
story-based decision-making
bhauth · 2024-02-07T02:35:27.286Z · comments (11)
There is essentially one best-validated theory of cognition.
abramdemski · 2021-12-10T15:51:06.423Z · comments (33)
Public Call for Interest in Mathematical Alignment
Davidmanheim · 2023-11-22T13:22:09.558Z · comments (9)
LessWrong Community Weekend 2023 [Applications now closed]
Henry Prowbell · 2023-05-01T09:08:14.502Z · comments (0)
Gifts Which Money Cannot Buy
johnswentworth · 2020-11-04T19:37:57.451Z · comments (8)
[link] Linkpost: A Post Mortem on the Gino Case
Linch · 2023-10-24T06:50:42.896Z · comments (7)
AI Safety Needs Great Engineers
Andy Jones (andyljones) · 2021-11-23T15:40:18.358Z · comments (43)
Based Beff Jezos and the Accelerationists
Zvi · 2023-12-06T16:00:08.380Z · comments (29)
Omicron Post #9
Zvi · 2021-12-23T21:50:10.466Z · comments (11)
[link] Large Language Models can Strategically Deceive their Users when Put Under Pressure.
ReaderM · 2023-11-15T16:36:04.446Z · comments (8)