LessWrong 2.0 Reader

[link] Bing Chat is a Precursor to Something Legitimately Dangerous
Simon Berens (sberens) · 2023-03-01T01:36:17.490Z · comments (6)
Inside the mind of a superhuman Go model: How does Leela Zero read ladders?
Haoxing Du (haoxing-du) · 2023-03-01T01:47:20.660Z · comments (8)
Contract Fraud
jefftk (jkaufman) · 2023-03-01T03:10:01.047Z · comments (10)
Implied "utilities" of simulators are broad, dense, and shallow
porby · 2023-03-01T03:23:22.974Z · comments (7)
Predictions for shard theory mechanistic interpretability results
TurnTrout · 2023-03-01T05:16:48.043Z · comments (10)
An evening at a bar
yakimoff · 2023-03-01T06:40:42.252Z · comments (2)
[question] (Cryonics) can I be frozen before being near-death?
hollowing · 2023-03-01T06:44:58.532Z · answers+comments (16)
reflections on lockdown, two years out
mingyuan · 2023-03-01T06:58:38.176Z · comments (9)
Problems of people new to AI safety and my project ideas to mitigate them
Igor Ivanov (igor-ivanov) · 2023-03-01T09:09:02.681Z · comments (4)
Open & Welcome Thread — March 2023
niplav · 2023-03-01T09:30:09.639Z · comments (48)
My current thinking about ChatGPT @3QD [Gärdenfors, Wolfram, and the value of speculation]
Bill Benzon (bill-benzon) · 2023-03-01T10:50:10.511Z · comments (0)
AI: Practical Advice for the Worried
Zvi · 2023-03-01T12:30:00.703Z · comments (43)
Scoring forecasts from the 2016 “Expert Survey on Progress in AI”
PatrickL · 2023-03-01T14:41:53.112Z · comments (6)
Dealing with infinite entropy
Alex_Altair · 2023-03-01T15:01:40.400Z · comments (9)
Some Variants of Sleeping Beauty
Sylvester Kollin · 2023-03-01T16:51:58.318Z · comments (10)
Twin Cities ACX Meetup - Mar 2023
Timothy M. (timothy-bond) · 2023-03-01T16:54:13.046Z · comments (3)
[link] Timeline: The proximal origin of SARS-CoV-2
ChristianKl · 2023-03-01T17:02:45.699Z · comments (4)
Sunlight is yellow parallel rays plus blue isotropic light
Thomas Kehrenberg (thomas-kehrenberg) · 2023-03-01T17:58:02.706Z · comments (4)
Fighting without hope
Akash (akash-wasil) · 2023-03-01T18:15:05.188Z · comments (13)
[link] Call for Cruxes by Rhyme, a Longtermist History Consultancy
Lara (lara-1) · 2023-03-01T18:39:04.358Z · comments (0)
Taboo "compute overhang"
Zach Stein-Perlman · 2023-03-01T19:15:02.515Z · comments (8)
[link] Progress links and tweets, 2023-03-01
jasoncrawford · 2023-03-01T20:33:23.267Z · comments (2)
[link] OpenAI introduce ChatGPT API at 1/10th the previous $/token
Arthur Conmy (arthur-conmy) · 2023-03-01T20:48:51.636Z · comments (4)
To MIRI-style folk, you can't simulate the universe from the beginning
the gears to ascension (lahwran) · 2023-03-01T21:38:26.506Z · comments (19)
The Parable of the King and the Random Process
moridinamael · 2023-03-01T22:18:59.734Z · comments (22)
Learn the mathematical structure, not the conceptual structure
Adam Shai (adam-shai) · 2023-03-01T22:24:19.451Z · comments (35)
Extreme GDP growth is a bad operating definition of "slow takeoff"
lc · 2023-03-01T22:25:27.446Z · comments (1)
Human level AI can plausibly take over the world
anithite (obserience) · 2023-03-01T23:27:00.379Z · comments (12)
Clippy, the friendly paperclipper
Seth Herd · 2023-03-02T00:02:55.749Z · comments (11)
Don't Jump or I'll...
Double · 2023-03-02T02:58:43.058Z · comments (7)
Live Kingfisher Album?
jefftk (jkaufman) · 2023-03-02T03:40:01.767Z · comments (0)
Reflection Mechanisms as an Alignment Target - Attitudes on “near-term” AI
elandgre · 2023-03-02T04:29:47.741Z · comments (0)
[link] Job listing (closed): Sentience Institute is accepting applications for a researcher
michael_dello · 2023-03-02T04:40:31.698Z · comments (0)
[question] What are some sources related to big-picture AI strategy?
Jacob Watts (Green_Swan) · 2023-03-02T05:00:44.090Z · answers+comments (0)
[question] If I want to test how good I would be as an AI safety researcher alongside my full-time job (with the hope of it becoming my full-time career at some point), is this a good plan?
Malleable_shape · 2023-03-02T09:44:46.731Z · answers+comments (0)
[link] Joscha Bach on Synthetic Intelligence [annotated]
Roman Leventov · 2023-03-02T11:02:09.009Z · comments (1)
Payor's Lemma in Natural Language
Andrew_Critch · 2023-03-02T12:22:13.252Z · comments (0)
AI #2
Zvi · 2023-03-02T14:50:01.078Z · comments (18)
[question] Lost in the sauce
JungleTact1cs · 2023-03-02T16:58:20.266Z · answers+comments (12)
Why I’m not into the Free Energy Principle
Steven Byrnes (steve2152) · 2023-03-02T19:27:52.309Z · comments (48)
Results Prediction Thread About How Different Factors Affect AI X-Risk
MrThink (ViktorThink) · 2023-03-02T22:13:19.956Z · comments (0)
ACX/SSC/LW meetup
Épiphanie Gédéon (joy_void_joy) · 2023-03-02T23:37:48.458Z · comments (0)
Aspiring AI safety researchers should ~argmax over AGI timelines
Ryan Kidd (ryankidd44) · 2023-03-03T02:04:51.685Z · comments (8)
The Waluigi Effect (mega-post)
Cleo Nardo (strawberry calm) · 2023-03-03T03:22:08.619Z · comments (188)
[Fiction] The boy in the glass dome
Kaj_Sotala · 2023-03-03T07:50:03.578Z · comments (0)
Sydney can play chess and kind of keep track of the board state
Erik Jenner (ejenner) · 2023-03-03T09:39:52.439Z · comments (19)
[link] state of my alignment research, and what needs work
Tamsin Leake (carado-1) · 2023-03-03T10:28:34.225Z · comments (0)
A reply to Byrnes on the Free Energy Principle
Roman Leventov · 2023-03-03T13:03:48.990Z · comments (16)
[link] Robin Hanson’s latest AI risk position statement
Liron · 2023-03-03T14:25:04.545Z · comments (17)
Group Wiki Walk
Screwtape · 2023-03-03T15:10:53.024Z · comments (0)