LessWrong 2.0 Reader

neuron spike computational capacity
bhauth · 2023-05-01T00:28:45.532Z · comments (0)
Safety standards: a framework for AI regulation
joshc (joshua-clymer) · 2023-05-01T00:56:39.823Z · comments (0)
[link] Hell is Game Theory Folk Theorems
jessicata (jessica.liu.taylor) · 2023-05-01T03:16:03.247Z · comments (101)
[question] In AI Risk what is the base model of the AI?
jmh · 2023-05-01T03:25:11.821Z · answers+comments (1)
LessWrong Community Weekend 2023 [Applications now closed]
Henry Prowbell · 2023-05-01T09:08:14.502Z · comments (0)
[question] Where is all this evidence of UFOs?
Logan Zoellner (logan-zoellner) · 2023-05-01T12:13:33.706Z · answers+comments (42)
Socialist Democratic-Republic GAME: 12 Amendments to the Constitutions of the Free World
monkymind · 2023-05-01T13:13:46.109Z · comments (0)
The Rocket Alignment Problem, Part 2
Zvi · 2023-05-01T14:30:00.385Z · comments (20)
What Boston Can Teach Us About What a Woman Is
ymeskhout · 2023-05-01T15:34:49.997Z · comments (45)
[question] Does agency necessarily imply self-preservation instinct?
Mislav Jurić (mislav-juric) · 2023-05-01T16:06:03.489Z · answers+comments (8)
Distinguishing misuse is difficult and uncomfortable
lukehmiles (lcmgcd) · 2023-05-01T16:23:17.040Z · comments (3)
Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes
OliviaJ (olivia-jimenez-1) · 2023-05-01T16:47:41.655Z · comments (10)
In favor of steelmanning
jp · 2023-05-01T17:12:22.942Z · comments (6)
Budapest, Hungary – ACX Meetups Everywhere Spring 2023
Richard Horvath · 2023-05-01T17:36:55.929Z · comments (0)
The Apprentice Thread 2
hath · 2023-05-01T20:09:50.977Z · comments (19)
Geoff Hinton Quits Google
Adam Shai (adam-shai) · 2023-05-01T21:03:47.806Z · comments (14)
A[I] Zombie Apocalypse Is Already Upon Us
NickHarris (nickharris) · 2023-05-01T22:02:43.241Z · comments (4)
[question] Natural Selection vs Gradient Descent
CuriousApe11 · 2023-05-01T22:16:57.538Z · answers+comments (3)
What 2025 looks like
Ruby · 2023-05-01T22:53:15.783Z · comments (17)
Getting Your Eyes On
LoganStrohl (BrienneYudkowsky) · 2023-05-02T00:33:20.493Z · comments (11)
AXRP Episode 21 - Interpretability for Engineers with Stephen Casper
DanielFilan · 2023-05-02T00:50:07.045Z · comments (1)
Summaries of top forum posts (24th - 30th April 2023)
Zoe Williams (GreyArea) · 2023-05-02T02:30:59.447Z · comments (1)
Open & Welcome Thread - May 2023
Ruby · 2023-05-02T02:58:01.690Z · comments (41)
The Moral Copernican Principle
Legionnaire · 2023-05-02T03:25:40.142Z · comments (7)
Technological unemployment as another test for rationalist winning
RomanHauksson (r) · 2023-05-02T04:16:46.614Z · comments (5)
Some Thoughts on Virtue Ethics for AIs
peligrietzer · 2023-05-02T05:46:41.334Z · comments (7)
An Impossibility Proof Relevant to the Shutdown Problem and Corrigibility
Audere (Ozzalus) · 2023-05-02T06:52:25.097Z · comments (13)
AGI safety career advice
Richard_Ngo (ricraz) · 2023-05-02T07:36:09.044Z · comments (24)
Systems that cannot be unsafe cannot be safe
Davidmanheim · 2023-05-02T08:53:35.115Z · comments (27)
[link] Five Worlds of AI (by Scott Aaronson and Boaz Barak)
mishka · 2023-05-02T13:23:41.544Z · comments (5)
[question] What's the state of AI safety in Japan?
ChristianKl · 2023-05-02T17:06:17.024Z · answers+comments (1)
My best system yet: text-based project management
jt · 2023-05-02T17:44:49.478Z · comments (6)
[link] AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks
ozhang (oliver-zhang) · 2023-05-02T18:41:43.144Z · comments (0)
Avoiding xrisk from AI doesn't mean focusing on AI xrisk
Stuart_Armstrong · 2023-05-02T19:27:32.162Z · comments (7)
Does descaling a kettle help? Theory and practice
philh · 2023-05-02T20:20:04.401Z · comments (25)
[link] Are Emergent Abilities of Large Language Models a Mirage? [linkpost]
Matthew Barnett (matthew-barnett) · 2023-05-02T21:01:33.142Z · comments (19)
A Case for the Least Forgiving Take On Alignment
Thane Ruthenis · 2023-05-02T21:34:49.832Z · comments (82)
Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds
JakubK (jskatt) · 2023-05-02T22:50:59.867Z · comments (0)
"Copilot" type AI integration could lead to training data needed for AGI
anithite (obserience) · 2023-05-03T00:57:41.752Z · comments (0)
Understand how other people think: a theory of worldviews.
spencerg · 2023-05-03T03:57:17.710Z · comments (8)
Communication strategies for autism, with examples
stonefly · 2023-05-03T05:25:16.387Z · comments (2)
[question] How much do personal biases in risk assessment affect assessment of AI risks?
Gordon Seidoh Worley (gworley) · 2023-05-03T06:12:57.001Z · answers+comments (8)
Monthly Roundup #6: May 2023
Zvi · 2023-05-03T12:50:01.009Z · comments (12)
[link] Finding Neurons in a Haystack: Case Studies with Sparse Probing
wesg (wes-gurnee) · 2023-05-03T13:30:30.836Z · comments (5)
Slowing AI: Crunch time
Zach Stein-Perlman · 2023-05-03T15:00:12.495Z · comments (1)
Personhood is a Religious Belief
jan Sijan (tim-min) · 2023-05-03T16:16:05.745Z · comments (28)
[link] Progress links and tweets, 2023-05-03
jasoncrawford · 2023-05-03T16:23:12.241Z · comments (0)
[question] LessWrong exporting?
axiomAdministrator · 2023-05-03T18:34:48.064Z · answers+comments (3)
[question] List of notable people who believe in AI X-risk?
vlad.proex · 2023-05-03T18:46:38.756Z · answers+comments (4)