LessWrong 2.0 Reader

South Bay Meetup
DavidFriedman · 2023-01-30T23:35:22.817Z · comments (0)
Peter Thiel's speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, 'anti-anti anti-anti-classical liberalism', Bostrom, LW, etc.
M. Y. Zuo · 2023-01-30T23:31:26.134Z · comments (33)
Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook)
Hastings (hastings-greer) · 2023-01-30T22:46:31.352Z · comments (0)
Humans Can Be Manually Strategic
Screwtape · 2023-01-30T22:35:39.010Z · comments (0)
Why I hate the "accident vs. misuse" AI x-risk dichotomy (quick thoughts on "structural risk")
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2023-01-30T18:50:17.613Z · comments (41)
2022 Unofficial LessWrong General Census
Screwtape · 2023-01-30T18:36:30.616Z · comments (31)
[link] Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023
the gears to ascension (lahwran) · 2023-01-30T17:37:48.882Z · comments (4)
What I mean by "alignment is in large part about making cognition aimable at all"
So8res · 2023-01-30T15:22:09.294Z · comments (24)
[link] The Energy Requirements and Feasibility of Off-World Mining
clans · 2023-01-30T15:07:59.872Z · comments (1)
[link] Whatever their arguments, Covid vaccine sceptics will probably never convince me
contrarianbrit · 2023-01-30T13:42:56.028Z · comments (10)
Simulacra Levels Summary
Zvi · 2023-01-30T13:40:00.774Z · comments (12)
A Few Principles of Successful AI Design
Vestozia (damien-lasseur) · 2023-01-30T10:42:25.960Z · comments (0)
Against Boltzmann mesaoptimizers
porby · 2023-01-30T02:55:12.041Z · comments (6)
How Likely is Losing a Google Account?
jefftk (jkaufman) · 2023-01-30T00:20:01.584Z · comments (11)
Model-driven feedback could amplify alignment failures
aogara (Aidan O'Gara) · 2023-01-30T00:00:28.647Z · comments (1)
Takeaways from calibration training
Loppukilpailija (jarviniemi) · 2023-01-29T19:09:30.815Z · comments (1)
Structure, creativity, and novelty
TsviBT · 2023-01-29T14:30:19.459Z · comments (4)
What is the ground reality of countries taking steps to recalibrate AI development towards Alignment first?
[deleted] · 2023-01-29T13:26:39.705Z · comments (6)
Compendium of problems with RLHF
Charbel-Raphaël (charbel-raphael-segerie) · 2023-01-29T11:40:53.147Z · comments (16)
[link] formal alignment: what it is, and some proposals
Tamsin Leake (carado-1) · 2023-01-29T11:32:33.239Z · comments (3)
My biggest takeaway from Redwood Research REMIX
Alok Singh (OldManNick) · 2023-01-29T11:00:07.499Z · comments (0)
EA novel published on Amazon
Timothy Underwood (timothy-underwood-1) · 2023-01-29T08:33:35.853Z · comments (0)
Reverse RSS Stats
jefftk (jkaufman) · 2023-01-29T03:40:01.216Z · comments (2)
Why and How to Graduate Early [U.S.]
Tego · 2023-01-29T01:28:32.029Z · comments (5)
Stop-gradients lead to fixed point predictions
Johannes Treutlein (Johannes_Treutlein) · 2023-01-28T22:47:35.008Z · comments (2)
[link] Eli Dourado AMA on the Progress Forum
jasoncrawford · 2023-01-28T22:18:11.801Z · comments (0)
LW Filter Tags (Rationality/World Modeling now promoted in Latest Posts)
Ruby · 2023-01-28T22:14:32.371Z · comments (4)
No Fire in the Equations
Carlos Ramirez (carlos-ramirez) · 2023-01-28T21:16:24.896Z · comments (4)
Optimality is the tiger, and annoying the user is its teeth
Christopher King (christopher-king) · 2023-01-28T20:20:33.605Z · comments (5)
On not getting contaminated by the wrong obesity ideas
Natália Coelho Mendonça (Natália Mendonça) · 2023-01-28T20:18:21.322Z · comments (65)
Advice I found helpful in 2022
Akash (akash-wasil) · 2023-01-28T19:48:23.160Z · comments (5)
The Knockdown Argument Paradox
Bryan Frances · 2023-01-28T19:23:02.678Z · comments (6)
Less Wrong/ACX Budapest Feb 4th Meetup
Richard Horvath · 2023-01-28T14:49:41.367Z · comments (0)
Reflections on Deception & Generality in Scalable Oversight (Another OpenAI Alignment Review)
Shoshannah Tekofsky (DarkSym) · 2023-01-28T05:26:49.866Z · comments (6)
A Simple Alignment Typology
Shoshannah Tekofsky (DarkSym) · 2023-01-28T05:26:36.660Z · comments (2)
[link] Spooky action at a distance in the loss landscape
Jesse Hoogland (jhoogland) · 2023-01-28T00:22:46.506Z · comments (4)
[link] WaPo: "Big Tech was moving cautiously on AI. Then came ChatGPT."
Julian Bradshaw · 2023-01-27T22:54:50.121Z · comments (5)
[link] Literature review of TAI timelines
Jsevillamol · 2023-01-27T20:07:38.186Z · comments (6)
[link] Scaling Laws Literature Review
Pablo Villalobos (pvs) · 2023-01-27T19:57:08.341Z · comments (1)
The role of Bayesian ML in AI safety - an overview
Marius Hobbhahn (marius-hobbhahn) · 2023-01-27T19:40:05.727Z · comments (6)
[link] to me, it's instrumentality that is alienating
Tamsin Leake (carado-1) · 2023-01-27T18:27:19.062Z · comments (0)
Assigning Praise and Blame: Decoupling Epistemology and Decision Theory
adamShimi · 2023-01-27T18:16:43.025Z · comments (5)
[question] How could humans dominate over a super intelligent AI?
Marco Discendenti (marco-discendenti) · 2023-01-27T18:15:55.760Z · answers+comments (8)
[link] ChatGPT understands language
philosophybear · 2023-01-27T07:14:42.790Z · comments (4)
Jar of Chocolate
jefftk (jkaufman) · 2023-01-27T03:40:03.163Z · comments (0)
Basics of Rationalist Discourse
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-01-27T02:40:52.739Z · comments (178)
The recent banality of rationality (and effective altruism)
CraigMichael · 2023-01-27T01:19:00.643Z · comments (7)
11 heuristics for choosing (alignment) research projects
Akash (akash-wasil) · 2023-01-27T00:36:08.742Z · comments (5)
A different observation of Vavilov Day
Elizabeth (pktechgirl) · 2023-01-26T21:50:01.571Z · comments (1)
All AGI Safety questions welcome (especially basic ones) [~monthly thread]
mwatkins · 2023-01-26T21:01:57.920Z · comments (80)