LessWrong 2.0 Reader

Announcing the Alignment of Complex Systems Research Group
Jan_Kulveit · 2022-06-04T04:10:14.337Z · comments (20)
I'm trying out "asteroid mindset"
Alex_Altair · 2022-06-03T13:35:48.614Z · comments (5)
My current take on Internal Family Systems “parts”
Kaj_Sotala · 2022-06-26T17:40:05.750Z · comments (11)
In defense of flailing, with foreword by Bill Burr
lc · 2022-06-17T16:40:32.152Z · comments (6)
Causal confusion as an argument against the scaling hypothesis
RobertKirk · 2022-06-20T10:54:05.623Z · comments (30)
Transcript of a Twitter Discussion on EA from June 2022
Zvi · 2022-06-06T13:50:01.164Z · comments (4)
I applied for a MIRI job in 2020. Here's what happened next.
ViktoriaMalyasova · 2022-06-15T19:37:49.797Z · comments (17)
Air Conditioner Test Results & Discussion
johnswentworth · 2022-06-22T22:26:26.643Z · comments (42)
Air Conditioner Repair
Zvi · 2022-06-27T12:40:01.514Z · comments (34)
[link] Reinventing the wheel
jasoncrawford · 2022-06-04T22:39:49.776Z · comments (13)
AI Training Should Allow Opt-Out
alyssavance · 2022-06-23T01:33:17.801Z · comments (13)
The “mind-body vicious cycle” model of RSI & back pain
Steven Byrnes (steve2152) · 2022-06-09T12:30:33.810Z · comments (30)
[link] A Quick List of Some Problems in AI Alignment As A Field
NicholasKross · 2022-06-21T23:23:31.719Z · comments (12)
Some reflections on the LW community after several months of active engagement
M. Y. Zuo · 2022-06-25T17:04:16.233Z · comments (40)
Feature request: voting buttons at the bottom?
Oliver Sourbut · 2022-06-24T14:41:55.268Z · comments (12)
Worked Examples of Shapley Values
lalaithion · 2022-06-24T17:13:43.095Z · comments (9)
Book Review: Talent
Zvi · 2022-06-03T20:10:02.425Z · comments (19)
Eliciting Latent Knowledge (ELK) - Distillation/Summary
Marius Hobbhahn (marius-hobbhahn) · 2022-06-08T13:18:51.114Z · comments (2)
Resources I send to AI researchers about AI safety
Vael Gates · 2022-06-14T02:24:58.897Z · comments (12)
How to pursue a career in technical AI alignment
Charlie Rogers-Smith (charlie.rs) · 2022-06-04T21:11:46.501Z · comments (1)
Epistemological Vigilance for Alignment
adamShimi · 2022-06-06T00:27:43.956Z · comments (11)
Half-baked AI Safety ideas thread
Aryeh Englander (alenglander) · 2022-06-23T16:11:54.207Z · comments (61)
Ten experiments in modularity, which we'd like you to run!
CallumMcDougall (TheMcDouglas) · 2022-06-16T09:17:28.955Z · comments (3)
[question] What’s the contingency plan if we get AGI tomorrow?
Yitz (yitz) · 2022-06-23T03:10:27.821Z · answers+comments (23)
Seven ways to become unstoppably agentic
[deleted] · 2022-06-26T17:39:08.424Z · comments (16)
How Do Selection Theorems Relate To Interpretability?
johnswentworth · 2022-06-09T19:39:59.988Z · comments (14)
Open Problems in AI X-Risk [PAIS #5]
Dan H (dan-hendrycks) · 2022-06-10T02:08:06.121Z · comments (6)
[question] Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?
P. · 2022-06-08T22:26:55.710Z · answers+comments (51)
[link] "Brain enthusiasts" in AI Safety
Jan (jan-2) · 2022-06-18T09:59:04.149Z · comments (5)
What's it like to have sex with Duncan?
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2022-06-17T02:32:36.210Z · comments (19)
[link] A short conceptual explainer of Immanuel Kant's Critique of Pure Reason
jessicata (jessica.liu.taylor) · 2022-06-03T01:06:32.394Z · comments (12)
Covid 6/2/22: Declining to Respond
Zvi · 2022-06-02T13:50:01.330Z · comments (10)
[link] Kurzgesagt – The Last Human (Youtube)
habryka (habryka4) · 2022-06-29T03:28:44.213Z · comments (7)
Paradigms of AI alignment: components and enablers
Vika · 2022-06-02T06:19:59.378Z · comments (4)
How To: A Workshop (or anything)
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2022-06-12T08:00:21.934Z · comments (13)
[link] OpenAI: Learning to Play Minecraft with Video PreTraining (VPT)
Aryeh Englander (alenglander) · 2022-06-23T16:29:19.323Z · comments (3)
How fast can we perform a forward pass?
jsteinhardt · 2022-06-10T23:30:10.341Z · comments (9)
The horror of what must, yet cannot, be true
Kaj_Sotala · 2022-06-02T10:20:04.575Z · comments (18)
Perils of optimizing in social contexts
owencb · 2022-06-16T17:40:46.843Z · comments (1)
Poorly-Aimed Death Rays
Thane Ruthenis · 2022-06-11T18:29:55.430Z · comments (5)
Child Contracting
jefftk (jkaufman) · 2022-06-26T02:30:04.904Z · comments (2)
Pitching an Alignment Softball
mu_(negative) · 2022-06-07T04:10:45.023Z · comments (13)
[link] Childcare: what the science says
Gunnar_Zarncke · 2022-06-24T21:45:23.406Z · comments (4)
Why so little AI risk on rationalist-adjacent blogs?
Grant Demaree (grant-demaree) · 2022-06-13T06:31:40.288Z · comments (23)
Our mental building blocks are more different than I thought
Marius Hobbhahn (marius-hobbhahn) · 2022-06-15T11:07:04.062Z · comments (11)
Summary of "AGI Ruin: A List of Lethalities"
Stephen McAleese (stephen-mcaleese) · 2022-06-10T22:35:48.500Z · comments (2)
Latent Adversarial Training
Adam Jermyn (adam-jermyn) · 2022-06-29T20:04:00.249Z · comments (12)
FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community
Darren McKee · 2022-06-15T18:08:42.754Z · comments (15)
[link] Blake Richards on Why he is Skeptical of Existential Risk from AI
Michaël Trazzi (mtrazzi) · 2022-06-14T19:09:26.783Z · comments (12)
[link] Tao, Kontsevich & others on HLAI in Math
interstice · 2022-06-10T02:25:38.341Z · comments (5)