LessWrong 2.0 Reader


The problem with proportional extrapolation
pathos_bot · 2024-01-30T23:40:02.431Z · comments (0)
Counterfactual Mechanism Networks
StrivingForLegibility · 2024-01-30T20:30:00.954Z · comments (0)
Control vs Selection: Civilisation is best at control, but navigating AGI requires selection
VojtaKovarik · 2024-01-30T19:06:29.913Z · comments (1)
AI governance frames
[deleted] · 2024-01-30T18:18:35.083Z · comments (0)
Deciding What Project/Org to Start: A Guide to Prioritization Research
Alexandra Bos (AlexandraB) · 2024-01-30T18:13:59.184Z · comments (0)
[link] on neodymium magnets
bhauth · 2024-01-30T15:58:24.088Z · comments (6)
[question] Can we create self-improving AIs that perfect their own ethics?
Gabi QUENE · 2024-01-30T14:45:58.246Z · answers+comments (10)
Childhood and Education Roundup #4
Zvi · 2024-01-30T13:50:06.033Z · comments (10)
[link] Last call for submissions for TAIS 2024!
Blaine (blaine-rogers) · 2024-01-30T12:08:11.240Z · comments (0)
[question] Has anyone actually changed their mind regarding Sleeping Beauty problem?
Ape in the coat · 2024-01-30T08:34:43.904Z · answers+comments (50)
San Fernando Valley Rationality: February 15, 2024
Thomas Broadley (thomas-broadley) · 2024-01-30T04:40:32.067Z · comments (0)
The case for more ambitious language model evals
Jozdien · 2024-01-30T00:01:13.876Z · comments (30)
A short 'derivation' of Watanabe's Free Energy Formula
Wuschel Schulz (wuschel-schulz) · 2024-01-29T23:41:44.203Z · comments (6)
How important is AI hacking as LLMs advance?
Artyom Karpov (artkpv) · 2024-01-29T18:41:02.767Z · comments (0)
[link] LLM Psychometrics: A Speculative Approach to AI Safety
pskl · 2024-01-29T18:38:19.522Z · comments (4)
[question] How to write better?
TeaTieAndHat (Augustin Portier) · 2024-01-29T17:02:55.942Z · answers+comments (24)
Processor clock speeds are not how fast AIs think
Ege Erdil (ege-erdil) · 2024-01-29T14:39:38.050Z · comments (55)
Natural selection for ingame character build optimisation
Kongo Landwalker (kongo-landwalker) · 2024-01-29T11:34:55.761Z · comments (5)
Analogy Bank for AI Safety
utilistrutil · 2024-01-29T02:35:13.746Z · comments (0)
Minneapolis-St Paul ACX Article Club: Meditation and LSD
25Hour (aaron-kaufman) · 2024-01-29T01:24:14.764Z · comments (0)
Simple distribution approximation: When sampled 100 times, can language models yield 80% A and 20% B?
Teun van der Weij (teun-van-der-weij) · 2024-01-29T00:24:27.706Z · comments (5)
Why I take short timelines seriously
NicholasKees (nick_kees) · 2024-01-28T22:27:21.098Z · comments (29)
[link] Win Friends and Influence People Ch. 2: The Bombshell
gull · 2024-01-28T21:40:47.986Z · comments (13)
Riga ACX February 2024 Meetup: 2023 in Review
Anastasia (anastasia) · 2024-01-28T21:36:31.198Z · comments (0)
San Francisco ACX Meetup “First Saturday”
Nate Sternberg (nate-sternberg) · 2024-01-28T18:39:39.728Z · comments (1)
[link] Things You're Allowed to Do: At the Dentist
rbinnn · 2024-01-28T18:39:33.584Z · comments (16)
[question] What exactly did that great AI future involve again?
lemonhope (lcmgcd) · 2024-01-28T10:10:21.270Z · answers+comments (27)
[link] Palworld development blog post
bhauth · 2024-01-28T05:56:19.984Z · comments (12)
Virtually Rational - VRChat Meetup
Tomás B. (Bjartur Tómas) · 2024-01-28T05:52:36.934Z · comments (3)
[link] [Stanford Daily] Table Talk
sudo · 2024-01-28T03:15:23.968Z · comments (1)
AI Law-a-Thon
Iknownothing · 2024-01-28T02:30:09.737Z · comments (3)
[link] Chapter 1 of How to Win Friends and Influence People
gull · 2024-01-28T00:32:52.865Z · comments (5)
Don't sleep on Coordination Takeoffs
trevor (TrevorWiesinger) · 2024-01-27T19:55:26.831Z · comments (24)
Epistemic Hell
rogersbacon · 2024-01-27T17:13:09.578Z · comments (20)
[link] David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud.
Morpheus · 2024-01-27T13:21:05.068Z · comments (20)
Aligned AI is dual use technology
lc · 2024-01-27T06:50:10.435Z · comments (31)
Questions I’d Want to Ask an AGI+ to Test Its Understanding of Ethics
sweenesm · 2024-01-26T23:40:23.074Z · comments (6)
An Invitation to Refrain from Downvoting Posts into Net-Negative Karma
MikkW (mikkel-wilson) · 2024-01-26T20:13:49.209Z · comments (12)
[link] The Good Balsamic Vinegar
jenn (pixx) · 2024-01-26T19:30:57.435Z · comments (4)
The Perspective-based Explanation to the Reflective Inconsistency Paradox
dadadarren · 2024-01-26T19:00:38.920Z · comments (16)
To Boldly Code
StrivingForLegibility · 2024-01-26T18:25:59.525Z · comments (4)
Incorporating Mechanism Design Into Decision Theory
StrivingForLegibility · 2024-01-26T18:25:40.373Z · comments (4)
[link] Making every researcher seek grants is a broken model
jasoncrawford · 2024-01-26T16:06:26.688Z · comments (41)
Notes on Innocence
David Gross (David_Gross) · 2024-01-26T14:45:45.612Z · comments (21)
Stacked Laptop Monitor
jefftk (jkaufman) · 2024-01-26T14:10:05.297Z · comments (5)
[link] Surgery Works Well Without The FDA
Maxwell Tabarrok (maxwell-tabarrok) · 2024-01-26T13:31:29.968Z · comments (28)
[question] Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects?
Roman Leventov · 2024-01-26T09:49:30.836Z · answers+comments (5)
Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI
Jeremy Gillen (jeremy-gillen) · 2024-01-26T07:22:06.370Z · comments (60)
Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect
RogerDearnaley (roger-d-1) · 2024-01-26T03:58:16.573Z · comments (2)
Musings on Cargo Cult Consciousness
Gareth Davidson (gareth-davidson) · 2024-01-25T23:00:30.018Z · comments (11)