LessWrong 2.0 Reader

How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage
orthonormal · 2024-08-06T02:32:41.364Z · comments (30)
Seeing the Smoke
Jacob Falkovich (Jacobian) · 2020-02-28T18:26:58.839Z · comments (29)
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
So8res · 2023-11-24T17:37:43.020Z · comments (84)
Thoughts on “AI is easy to control” by Pope & Belrose
Steven Byrnes (steve2152) · 2023-12-01T17:30:52.720Z · comments (61)
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Zvi · 2021-11-15T03:50:01.141Z · comments (49)
Six economics misconceptions of mine which I've resolved over the last few years
Buck · 2020-07-13T03:01:43.717Z · comments (59)
Feedbackloop-first Rationality
Raemon · 2023-08-07T17:58:56.349Z · comments (66)
Two non-obvious lessons from microcovid.org
catherio · 2021-06-24T02:35:14.357Z · comments (4)
[link] Literature Review For Academic Outsiders: What, How, and Why
namespace (ingres) · 2020-05-09T22:58:46.580Z · comments (6)
Self-Other Overlap: A Neglected Approach to AI Alignment
Marc Carauleanu (Marc-Everin Carauleanu) · 2024-07-30T16:22:29.561Z · comments (48)
The EMH Aten't Dead
Richard Meadows (richard-meadows-1) · 2020-05-15T20:44:49.402Z · comments (87)
The impossible problem of due process
mingyuan · 2024-01-16T05:18:33.415Z · comments (64)
[link] Connor Leahy on Dying with Dignity, EleutherAI and Conjecture
Michaël Trazzi (mtrazzi) · 2022-07-22T18:44:19.749Z · comments (29)
[link] Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
likenneth · 2023-06-11T05:38:35.284Z · comments (4)
This might be the last AI Safety Camp
Remmelt (remmelt-ellen) · 2024-01-24T09:33:29.438Z · comments (34)
Cognitive Emulation: A Naive AI Safety Proposal
Connor Leahy (NPCollapse) · 2023-02-25T19:35:02.409Z · comments (46)
My tentative best guess on how EAs and Rationalists sometimes turn crazy
habryka (habryka4) · 2023-06-21T04:11:28.518Z · comments (110)
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain
Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-18T12:08:13.418Z · comments (86)
[question] Examples of Highly Counterfactual Discoveries?
johnswentworth · 2024-04-23T22:19:19.399Z · answers+comments (101)
Response to Aschenbrenner's "Situational Awareness"
Rob Bensinger (RobbBB) · 2024-06-06T22:57:11.737Z · comments (27)
Going Out With Dignity
Tomás B. (Bjartur Tómas) · 2021-07-09T22:07:59.169Z · comments (25)
My "2.9 trauma limit"
Raemon · 2023-07-01T19:32:14.805Z · comments (31)
Acausal normalcy
Andrew_Critch · 2023-03-03T23:34:33.971Z · comments (36)
Takeaways from one year of lockdown
mingyuan · 2021-03-01T03:53:20.228Z · comments (87)
AI alignment researchers don't (seem to) stack
So8res · 2023-02-21T00:48:25.186Z · comments (40)
[link] Technological stagnation: Why I came around
jasoncrawford · 2021-01-23T22:05:59.364Z · comments (79)
Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk
1a3orn · 2023-11-02T18:20:29.569Z · comments (79)
Zvi’s Thoughts on the Survival and Flourishing Fund (SFF)
Zvi · 2021-12-14T14:30:01.096Z · comments (65)
Tyranny of the Epistemic Majority
Scott Garrabrant · 2022-11-22T17:19:34.144Z · comments (13)
How my team at Lightcone sometimes gets stuff done
jacobjacob · 2022-09-19T05:47:06.787Z · comments (43)
On saving one's world
Rob Bensinger (RobbBB) · 2022-05-17T19:53:58.192Z · comments (4)
The ‘ petertodd’ phenomenon
mwatkins · 2023-04-15T00:59:47.142Z · comments (50)
Ruling Out Everything Else
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-27T21:50:39.545Z · comments (51)
Another medical miracle
Dentin · 2023-06-25T20:43:45.493Z · comments (48)
[link] Sam Altman fired from OpenAI
LawrenceC (LawChan) · 2023-11-17T20:42:30.759Z · comments (75)
[link] The Compendium, A full argument about extinction risk from AGI
adamShimi · 2024-10-31T12:01:51.714Z · comments (52)
Twiblings, four-parent babies and other reproductive technology
GeneSmith · 2023-05-20T17:11:23.726Z · comments (33)
My AI Model Delta Compared To Christiano
johnswentworth · 2024-06-12T18:19:44.768Z · comments (73)
What's Going on With OpenAI's Messaging?
ozziegooen · 2024-05-21T02:22:04.171Z · comments (13)
Speaking of Stag Hunts
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-06T08:20:34.967Z · comments (373)
Cryonics is free
Mati_Roy (MathieuRoy) · 2024-09-29T17:58:17.108Z · comments (42)
Grant applications and grand narratives
Elizabeth (pktechgirl) · 2023-07-02T00:16:25.129Z · comments (22)
Discontinuous progress in history: an update
KatjaGrace · 2020-04-14T00:00:02.478Z · comments (25)
Transcript and Brief Response to Twitter Conversation between Yann LeCunn and Eliezer Yudkowsky
Zvi · 2023-04-26T13:10:01.233Z · comments (51)
Killing Socrates
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-11T10:28:37.096Z · comments (146)
Two easy things that maybe Just Work to improve AI discourse
jacobjacob · 2024-06-08T15:51:18.078Z · comments (35)
OMMC Announces RIP
Adam Scholl (adam_scholl) · 2024-04-01T23:20:00.433Z · comments (5)
"Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity
Thane Ruthenis · 2023-12-16T20:08:39.375Z · comments (34)
Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI
Andrew_Critch · 2021-06-01T18:45:39.820Z · comments (26)
My Interview With Cade Metz on His Reporting About Slate Star Codex
Zack_M_Davis · 2024-03-26T17:18:05.114Z · comments (187)