LessWrong 2.0 Reader



[S] D&D.Sci: All the D8a. Allllllll of it. Evaluation and Ruleset
aphyer · 2023-02-27T23:15:39.094Z · comments (7)
[link] The Birth and Death of Sydney — The Bayesian Conspiracy Podcast
moridinamael · 2023-02-27T22:36:59.691Z · comments (0)
A case for capabilities work on AI as net positive
Noosphere89 (sharmake-farah) · 2023-02-27T21:12:44.173Z · comments (37)
Beginning to feel like a conspiracy theorist
Joulebit (andreas-gravrok) · 2023-02-27T20:05:01.156Z · comments (25)
Some thoughts pointing to slower AI take-off
Bastiaan · 2023-02-27T19:53:41.678Z · comments (2)
Prediction Thread: Make Predictions About How Different Factors Affect AGI X-Risk.
MrThink (ViktorThink) · 2023-02-27T19:15:05.349Z · comments (8)
Counting-down vs. counting-up coherence
TsviBT · 2023-02-27T14:59:39.041Z · comments (4)
Eliezer is still ridiculously optimistic about AI risk
johnlawrenceaspden · 2023-02-27T14:21:25.239Z · comments (34)
Milk EA, Casu Marzu EA
jefftk (jkaufman) · 2023-02-27T14:00:06.354Z · comments (0)
[link] Normie response to Normie AI Safety Skepticism
Giulio (thesofakillers) · 2023-02-27T13:54:40.249Z · comments (1)
Fertility Rate Roundup #1
Zvi · 2023-02-27T13:30:01.015Z · comments (19)
Something Unfathomable: Unaligned Humanity and how we're racing against death with death
Yuli_Ban · 2023-02-27T11:37:36.112Z · comments (14)
[link] The idea of an "aligned superintelligence" seems misguided
ssadler · 2023-02-27T11:19:57.679Z · comments (7)
EA & LW Forum Weekly Summary (20th - 26th Feb 2023)
Zoe Williams (GreyArea) · 2023-02-27T03:46:39.842Z · comments (0)
[Simulators seminar sequence] #2 Semiotic physics - revamped
Jan (jan-2) · 2023-02-27T00:25:52.635Z · comments (23)
A Somewhat Functional Definition of Philosophy
Richard Henage (richard-henage) · 2023-02-27T00:25:29.561Z · comments (0)
Respect Chesterton-Schelling Fences
shminux · 2023-02-27T00:09:30.815Z · comments (17)
Curiosity as a Solution to AGI Alignment
Harsha G. (harsha-g) · 2023-02-26T23:36:55.541Z · comments (7)
[link] Learning How to Learn (And 20+ Studies)
maxa · 2023-02-26T22:46:55.031Z · comments (12)
[link] Bayesian Scenario: Snipers & Soldiers
abstractapplic · 2023-02-26T21:48:00.788Z · comments (8)
[link] NYT: Lab Leak Most Likely Caused Pandemic, Energy Dept. Says
trevor (TrevorWiesinger) · 2023-02-26T21:21:54.675Z · comments (9)
[link] [Link Post] Cyber Digital Authoritarianism (National Intelligence Council Report)
Phosphorous (everett-smith) · 2023-02-26T20:51:49.579Z · comments (2)
Reflections on Zen and the Art of Motorcycle Maintenance
LoganStrohl (BrienneYudkowsky) · 2023-02-26T20:46:29.553Z · comments (3)
Taboo "human-level intelligence"
Sherrinford · 2023-02-26T20:42:25.880Z · comments (7)
[link] [Link] Petition on brain preservation: Allow global access to high-quality brain preservation as an option rapidly after death
Mati_Roy (MathieuRoy) · 2023-02-26T15:56:55.656Z · comments (2)
Some thoughts on the cults LW had
Noosphere89 (sharmake-farah) · 2023-02-26T15:46:58.535Z · comments (28)
A library for safety research in conditioning on RLHF tasks
James Chua (james-chua) · 2023-02-26T14:50:56.762Z · comments (2)
The Preference Fulfillment Hypothesis
Kaj_Sotala · 2023-02-26T10:55:12.647Z · comments (62)
All of my grandparents were prodigies, I am extremely bored at Oxford University. Please let me intern/work for you!
politicalpersuasion · 2023-02-26T07:50:48.260Z · comments (7)
"Rationalist Discourse" Is Like "Physicist Motors"
Zack_M_Davis · 2023-02-26T05:58:29.249Z · comments (152)
[question] Ways to prepare to a vastly new world?
Annapurna (jorge-velez) · 2023-02-26T04:56:12.505Z · answers+comments (6)
Incentives and Selection: A Missing Frame From AI Threat Discussions?
DragonGod · 2023-02-26T01:18:13.487Z · comments (16)
A mechanistic explanation for SolidGoldMagikarp-like tokens in GPT2
MadHatter · 2023-02-26T01:10:33.785Z · comments (14)
Politics is the Fun-Killer
Adam Zerner (adamzerner) · 2023-02-25T23:29:41.072Z · comments (5)
Bayes is Out-Dated, and You’re Doing it Wrong
AnthonyRepetto · 2023-02-25T23:18:53.558Z · comments (43)
[question] Would more model evals teams be good?
Ryan Kidd (ryankidd44) · 2023-02-25T22:01:31.568Z · answers+comments (4)
Nod posts
Adam Zerner (adamzerner) · 2023-02-25T21:53:31.996Z · comments (8)
[link] Prediction market: Will John Wentworth's Gears of Aging series hold up in 2033?
tailcalled · 2023-02-25T20:15:11.535Z · comments (4)
Making Implied Standards Explicit
Logan Riggs (elriggs) · 2023-02-25T20:02:50.617Z · comments (0)
Two Reasons for no Utilitarianism
False Name (False Name, Esq.) · 2023-02-25T19:51:47.743Z · comments (3)
Cognitive Emulation: A Naive AI Safety Proposal
Connor Leahy (NPCollapse) · 2023-02-25T19:35:02.409Z · comments (45)
[Prediction] Humanity will survive the next hundred years
lsusr · 2023-02-25T18:59:57.845Z · comments (44)
The Caplan-Yudkowsky End-of-the-World Bet Scheme Doesn't Actually Work
lsusr · 2023-02-25T18:57:00.105Z · comments (14)
[link] The Practitioner's Path 2.0: the Empiricist Archetype
[deleted] · 2023-02-25T17:05:12.689Z · comments (0)
[question] Pink Shoggoths: What does alignment look like in practice?
Yuli_Ban · 2023-02-25T12:23:10.724Z · answers+comments (13)
Just How Hard a Problem is Alignment?
Roger Dearnaley · 2023-02-25T09:00:06.066Z · comments (1)
Buddhist Psychotechnology for Withstanding Apocalypse Stress
romeostevensit · 2023-02-25T03:11:18.735Z · comments (10)
[link] How to Read Papers Efficiently: Fast-then-Slow Three pass method
the gears to ascension (lahwran) · 2023-02-25T02:56:30.814Z · comments (4)
What kind of place is this?
Jim Pivarski (jim-pivarski) · 2023-02-25T02:14:03.011Z · comments (24)
Agents vs. Predictors: Concrete differentiating factors
evhub · 2023-02-24T23:50:40.322Z · comments (3)