LessWrong 2.0 Reader

Conceding a short timelines bet early
Matthew Barnett (matthew-barnett) · 2023-03-16T21:49:35.903Z · comments (17)
Demand offsetting
paulfchristiano · 2021-03-21T18:20:05.090Z · comments (41)
"AI and Compute" trend isn't predictive of what is happening
alexlyzhov · 2021-04-02T00:44:46.671Z · comments (16)
On saying "Thank you" instead of "I'm Sorry"
Michael Cohn (michael-cohn) · 2024-07-08T03:13:50.663Z · comments (16)
[link] "AI achieves silver-medal standard solving International Mathematical Olympiad problems"
gjm · 2024-07-25T15:58:57.638Z · comments (38)
Will Capabilities Generalise More?
Ramana Kumar (ramana-kumar) · 2022-06-29T17:12:56.255Z · comments (39)
Sadly, FTX
Zvi · 2022-11-17T14:30:03.068Z · comments (18)
Think carefully before calling RL policies "agents"
TurnTrout · 2023-06-02T03:46:07.467Z · comments (38)
Nuclear war is unlikely to cause human extinction
Jeffrey Ladish (jeff-ladish) · 2020-11-07T05:42:24.380Z · comments (48)
How do we become confident in the safety of a machine learning system?
evhub · 2021-11-08T22:49:41.080Z · comments (5)
Externalized reasoning oversight: a research direction for language model alignment
tamera · 2022-08-03T12:03:16.630Z · comments (23)
Trust develops gradually via making bids and setting boundaries
Richard_Ngo (ricraz) · 2023-05-19T22:16:38.483Z · comments (12)
The Dark Arts
lsusr · 2023-12-19T04:41:13.356Z · comments (49)
Why it's so hard to talk about Consciousness
Rafael Harth (sil-ver) · 2023-07-02T15:56:05.188Z · comments (161)
The feeling of breaking an Overton window
AnnaSalamon · 2021-02-17T05:31:40.629Z · comments (29)
My current thoughts on the risks from SETI
Matthew Barnett (matthew-barnett) · 2022-03-15T17:18:19.722Z · comments (27)
Third Time: a better way to work
bfinn · 2022-01-07T21:15:57.789Z · comments (76)
Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning
Zack_M_Davis · 2020-06-07T07:52:09.143Z · comments (16)
Superintelligent AI is necessary for an amazing future, but far from sufficient
So8res · 2022-10-31T21:16:35.052Z · comments (48)
Why We Launched LessWrong.SubStack
Ben Pace (Benito) · 2021-04-01T06:34:00.907Z · comments (44)
AI Forecasting: One Year In
jsteinhardt · 2022-07-04T05:10:18.470Z · comments (12)
[New Feature] Support for Footnotes!
Ruby · 2022-01-04T07:35:21.500Z · comments (31)
AGI safety career advice
Richard_Ngo (ricraz) · 2023-05-02T07:36:09.044Z · comments (24)
Evaluations (of new AI Safety researchers) can be noisy
LawrenceC (LawChan) · 2023-02-05T04:15:02.117Z · comments (11)
Processor clock speeds are not how fast AIs think
Ege Erdil (ege-erdil) · 2024-01-29T14:39:38.050Z · comments (55)
Meadow Theory
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-03-09T17:13:21.817Z · comments (16)
A Dozen Ways to Get More Dakka
Davidmanheim · 2024-04-08T04:45:19.427Z · comments (11)
Conjecture: Internal Infohazard Policy
Connor Leahy (NPCollapse) · 2022-07-29T19:07:08.491Z · comments (6)
Luna Lovegood and the Chamber of Secrets - Part 3
lsusr · 2020-12-01T12:43:42.647Z · comments (11)
Good News, Everyone!
jbash · 2023-03-25T13:48:22.499Z · comments (23)
Why I don't believe in the placebo effect
transhumanist_atom_understander · 2024-06-10T02:37:07.776Z · comments (22)
Book Launch: "The Carving of Reality," Best of LessWrong vol. III
Raemon · 2023-08-16T23:52:12.518Z · comments (22)
Sci-Hub sued in India
Connor_Flexman · 2021-11-13T23:12:40.559Z · comments (19)
Creating a truly formidable Art
Valentine · 2021-10-14T04:39:16.641Z · comments (59)
Orexin and the quest for more waking hours
ChristianKl · 2022-09-24T19:54:56.207Z · comments (39)
Utility ≠ Reward
Vlad Mikulik (vlad_m) · 2019-09-05T17:28:13.222Z · comments (24)
Why Not Subagents?
johnswentworth · 2023-06-22T22:16:55.249Z · comments (50)
[question] Which things were you surprised to learn are not metaphors?
Eric Neyman (UnexpectedValues) · 2024-11-21T18:56:18.025Z · answers+comments (79)
AI Summer Harvest
Cleo Nardo (strawberry calm) · 2023-04-04T03:35:58.473Z · comments (10)
Updatelessness doesn't solve most problems
Martín Soto (martinsq) · 2024-02-08T17:30:11.266Z · comments (44)
Ngo and Yudkowsky on AI capability gains
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-18T22:19:05.913Z · comments (61)
My simple AGI investment & insurance strategy
lc · 2024-03-31T02:51:53.479Z · comments (27)
The case for training frontier AIs on Sumerian-only corpus
Alexandre Variengien (alexandre-variengien) · 2024-01-15T16:40:22.011Z · comments (15)
Basic Facts about Language Model Internals
beren · 2023-01-04T13:01:35.223Z · comments (19)
2019 AI Alignment Literature Review and Charity Comparison
Larks · 2019-12-19T03:00:54.708Z · comments (18)
Ten Levels of AI Alignment Difficulty
Sammy Martin (SDM) · 2023-07-03T20:20:21.403Z · comments (24)
Self-sacrifice is a scarce resource
mingyuan · 2020-06-28T05:08:05.010Z · comments (18)
What o3 Becomes by 2028
Vladimir_Nesov · 2024-12-22T12:37:20.929Z · comments (15)
Assume Bad Faith
Zack_M_Davis · 2023-08-25T17:36:32.678Z · comments (56)
Notice When People Are Directionally Correct
Chris_Leong · 2024-01-14T14:12:37.090Z · comments (8)