LessWrong 2.0 Reader

[link] Moral Reality Check (a short story)
jessicata (jessica.liu.taylor) · 2023-11-26T05:03:18.254Z · comments (45)
A Year of AI Increasing AI Progress
TW123 (ThomasWoodside) · 2022-12-30T02:09:39.458Z · comments (3)
On Devin
Zvi · 2024-03-18T13:20:04.779Z · comments (34)
[link] Why has nuclear power been a flop?
jasoncrawford · 2021-04-16T16:49:15.789Z · comments (50)
All Possible Views About Humanity's Future Are Wild
HoldenKarnofsky · 2021-09-03T20:19:06.453Z · comments (37)
Leading The Parade
johnswentworth · 2024-01-31T22:39:56.499Z · comments (31)
Reshaping the AI Industry
Thane Ruthenis · 2022-05-29T22:54:31.582Z · comments (35)
OpenAI o1
Zach Stein-Perlman · 2024-09-12T17:30:31.958Z · comments (41)
K-complexity is silly; use cross-entropy instead
So8res · 2022-12-20T23:06:27.131Z · comments (54)
Could a superintelligence deduce general relativity from a falling apple? An investigation
titotal (lombertini) · 2023-04-23T12:49:44.288Z · comments (39)
AI coordination needs clear wins
evhub · 2022-09-01T23:41:48.334Z · comments (16)
Some (problematic) aesthetics of what constitutes good work in academia
Steven Byrnes (steve2152) · 2024-03-11T17:47:28.835Z · comments (12)
Selection Has A Quality Ceiling
johnswentworth · 2021-06-02T18:25:54.432Z · comments (18)
Moral public goods
paulfchristiano · 2020-01-26T00:10:01.803Z · comments (74)
Use Normal Predictions
Jan Christian Refsgaard (jan-christian-refsgaard) · 2022-01-09T15:01:44.934Z · comments (67)
Liability regimes for AI
Ege Erdil (ege-erdil) · 2024-08-19T01:25:01.006Z · comments (34)
Inner and outer alignment decompose one hard problem into two extremely hard problems
TurnTrout · 2022-12-02T02:43:20.915Z · comments (22)
Discussion: Challenges with Unsupervised LLM Knowledge Discovery
Seb Farquhar · 2023-12-18T11:58:39.379Z · comments (21)
Assessing Kurzweil predictions about 2019: the results
Stuart_Armstrong · 2020-05-06T13:36:18.788Z · comments (22)
OpenAI #10: Reflections
Zvi · 2025-01-07T17:00:07.348Z · comments (6)
The metaphor you want is "color blindness," not "blind spot."
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-02-14T00:28:55.119Z · comments (17)
Repeal the Jones Act of 1920
Zvi · 2024-11-27T15:00:06.801Z · comments (23)
[link] "Heretical Thoughts on AI" by Eli Dourado
DragonGod · 2023-01-19T16:11:56.567Z · comments (38)
What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems?
johnswentworth · 2022-08-15T22:48:38.671Z · comments (18)
Why I’m not into the Free Energy Principle
Steven Byrnes (steve2152) · 2023-03-02T19:27:52.309Z · comments (49)
Does davidad's uploading moonshot work?
jacobjacob · 2023-11-03T02:21:51.720Z · comments (35)
Authorities and Amateurs
jefftk (jkaufman) · 2020-03-25T03:40:02.208Z · comments (30)
[link] Decomposing Agency — capabilities without desires
owencb · 2024-07-11T09:38:48.509Z · comments (32)
POC || GTFO culture as partial antidote to alignment wordcelism
lc · 2023-03-15T10:21:47.037Z · comments (13)
The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate
Adam David Long (adam-david-long-1) · 2023-08-01T00:08:30.908Z · comments (30)
The U.S. is becoming less stable
lc · 2023-08-18T21:13:11.909Z · comments (68)
Redwood Research’s current project
Buck · 2021-09-21T23:30:36.989Z · comments (29)
LLMs for Alignment Research: a safety priority?
abramdemski · 2024-04-04T20:03:22.484Z · comments (24)
Give it a google
Adam Zerner (adamzerner) · 2020-12-29T05:30:39.133Z · comments (28)
[link] Comparative advantage and when to blow up your island
dynomight · 2020-09-12T06:20:36.622Z · comments (39)
The Information: OpenAI shows 'Strawberry' to feds, races to launch it
Martín Soto (martinsq) · 2024-08-27T23:10:18.155Z · comments (15)
The unexpected difficulty of comparing AlphaStar to humans
Richard Korzekwa (Grothor) · 2019-09-18T02:20:01.292Z · comments (36)
[link] Stanislav Petrov Quarterly Performance Review
Ricki Heicklen (bayesshammai) · 2024-09-26T21:20:11.646Z · comments (3)
0. CAST: Corrigibility as Singular Target
Max Harms (max-harms) · 2024-06-07T22:29:12.934Z · comments (12)
Supervise Process, not Outcomes
stuhlmueller · 2022-04-05T22:18:20.068Z · comments (9)
6 non-obvious mental health issues specific to AI safety
Igor Ivanov (igor-ivanov) · 2023-08-18T15:46:09.938Z · comments (24)
Trapped Priors As A Basic Problem Of Rationality
Scott Alexander (Yvain) · 2021-03-12T20:02:28.639Z · comments (33)
An Intuitive Guide to Garrabrant Induction
Mark Xu (mark-xu) · 2021-06-03T22:21:41.877Z · comments (20)
Dark Forest Theories
Raemon · 2023-05-12T20:21:49.052Z · comments (53)
Brain Efficiency Cannell Prize Contest Award Ceremony
Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-07-24T11:30:10.602Z · comments (12)
Algorithmic Improvement Is Probably Faster Than Scaling Now
johnswentworth · 2023-06-06T02:57:33.700Z · comments (25)
Learning By Writing
HoldenKarnofsky · 2022-02-22T15:50:19.452Z · comments (15)
Why I Am Not in Charge
Zvi · 2021-02-07T18:20:01.333Z · comments (23)
We're already in AI takeoff
Valentine · 2022-03-08T23:09:06.733Z · comments (119)
[link] Transformer Circuits
evhub · 2021-12-22T21:09:22.676Z · comments (4)