LessWrong 2.0 Reader


And the word was "God"
pchvykov · 2022-08-30T21:13:02.846Z · comments (4)
Worlds Where Iterative Design Fails
johnswentworth · 2022-08-30T20:48:29.025Z · comments (30)
Inner Alignment via Superpowers
JamesH (AtlasOfCharts) · 2022-08-30T20:01:52.129Z · comments (13)
[link] ML Model Attribution Challenge [Linkpost]
aogara (Aidan O'Gara) · 2022-08-30T19:34:39.007Z · comments (0)
How likely is deceptive alignment?
evhub · 2022-08-30T19:34:25.997Z · comments (28)
Built-In Bundling For Faster Loading
jefftk (jkaufman) · 2022-08-30T19:20:03.145Z · comments (0)
[question] A bayesian updating on expert opinions
amarai (Maiwaaro23) · 2022-08-30T11:56:19.925Z · answers+comments (1)
[link] Any Utilitarianism Makes Sense As Policy
George3d6 · 2022-08-30T09:55:00.346Z · comments (6)
A gentle primer on caring, including in strange senses, with applications
Kaarel (kh) · 2022-08-30T08:05:12.333Z · comments (4)
[link] Modified Guess Culture
konstell (parsley) · 2022-08-30T02:30:05.990Z · comments (5)
[question] What is the best critique of AI existential risk arguments?
joshc (joshua-clymer) · 2022-08-30T02:18:42.463Z · answers+comments (11)
How to plan for a radically uncertain future?
Kerry · 2022-08-30T02:14:47.244Z · comments (35)
EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22')
Zoe Williams (GreyArea) · 2022-08-30T01:42:39.309Z · comments (4)
Can We Align a Self-Improving AGI?
Peter S. Park · 2022-08-30T00:14:45.655Z · comments (5)
On the nature of help - a framework for helping
Faustify (nikolay-blagoev) · 2022-08-29T20:42:02.877Z · comments (2)
Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should?
Gordon Seidoh Worley (gworley) · 2022-08-29T19:25:16.917Z · comments (6)
[question] How can I reconcile the two most likely requirements for humanities near-term survival.
Erlja Jkdf. (erlja-jkdf) · 2022-08-29T18:46:58.083Z · answers+comments (6)
*New* Canada AI Safety & Governance community
Wyatt Tessari L'Allié (wyatt-tessari-l-allie) · 2022-08-29T18:45:59.945Z · comments (0)
Are Generative World Models a Mesa-Optimization Risk?
Thane Ruthenis · 2022-08-29T18:37:13.811Z · comments (2)
Sequencing Intro
jefftk (jkaufman) · 2022-08-29T17:50:02.064Z · comments (3)
How Do AI Timelines Affect Existential Risk?
Stephen McAleese (stephen-mcaleese) · 2022-08-29T16:57:44.107Z · comments (9)
How might we align transformative AI if it’s developed very soon?
HoldenKarnofsky · 2022-08-29T15:42:08.985Z · comments (55)
[link] An Audio Introduction to Nick Bostrom
PeterH · 2022-08-29T08:50:10.739Z · comments (0)
Please Do Fight the Hypothetical
Lone Pine (conor-sullivan) · 2022-08-29T08:35:41.513Z · comments (6)
[link] Have you considered getting rid of death?
Willa (Eh_Yo_Lexa) · 2022-08-29T01:31:48.656Z · comments (19)
(My understanding of) What Everyone in Technical Alignment is Doing and Why
Thomas Larsen (thomas-larsen) · 2022-08-29T01:23:58.073Z · comments (89)
Breaking down the training/deployment dichotomy
Erik Jenner (ejenner) · 2022-08-28T21:45:49.687Z · comments (3)
More Clothes Over Time?
jefftk (jkaufman) · 2022-08-28T20:30:03.294Z · comments (1)
The Expanding Moral Cinematic Universe
Raemon · 2022-08-28T18:42:19.134Z · comments (9)
An Introduction to Current Theories of Consciousness
hohenheim · 2022-08-28T17:55:38.151Z · comments (44)
[Linkpost] Can lab-grown brains become conscious?
Jack R (Jack Ryan) · 2022-08-28T17:45:52.937Z · comments (3)
[link] Robert Long On Why Artificial Sentience Might Matter
Michaël Trazzi (mtrazzi) · 2022-08-28T17:30:34.410Z · comments (5)
[link] Artificial Moral Advisors: A New Perspective from Moral Psychology
David Gross (David_Gross) · 2022-08-28T16:37:07.626Z · comments (1)
Pronunciations
Solenoid_Entity · 2022-08-28T11:43:54.042Z · comments (7)
First thing AI will do when it takes over is get fission going
visiax · 2022-08-28T05:56:27.837Z · comments (0)
Who ordered alignment's apple?
Eleni Angelou (ea-1) · 2022-08-28T04:05:34.942Z · comments (3)
Sufficiently many Godzillas as an alignment strategy
142857 · 2022-08-28T00:08:02.666Z · comments (3)
[question] What would you expect a massive multimodal online federated learner to be capable of?
Aryeh Englander (alenglander) · 2022-08-27T17:31:07.153Z · answers+comments (4)
Basin broadness depends on the size and number of orthogonal features
CallumMcDougall (TheMcDouglas) · 2022-08-27T17:29:32.508Z · comments (21)
Informal semantics and Orders
Q Home · 2022-08-27T04:17:09.327Z · comments (10)
Help Understanding Preferences And Evil
Netcentrica · 2022-08-27T03:42:01.392Z · comments (7)
Contra Dance Contact Tracing
jefftk (jkaufman) · 2022-08-27T01:50:01.802Z · comments (0)
[link] Annual AGI Benchmarking Event
Lawrence Phillips · 2022-08-27T00:06:31.972Z · comments (3)
Is there a benefit in low capability AI Alignment research?
Letti (Adzusa) · 2022-08-26T23:51:44.623Z · comments (1)
AI Risk in Terms of Unstable Nuclear Software
Thane Ruthenis · 2022-08-26T18:49:53.726Z · comments (1)
Taking the parameters which seem to matter and rotating them until they don't
Garrett Baker (D0TheMath) · 2022-08-26T18:26:47.667Z · comments (48)
ACX Meetups Everywhere List
Scott Alexander (Yvain) · 2022-08-26T18:12:04.083Z · comments (1)
What's the Most Impressive Thing That GPT-4 Could Plausibly Do?
bayesed · 2022-08-26T15:34:51.675Z · comments (22)
[question] Is population collapse due to low birth rates a problem?
mukashi (adrian-arellano-davin) · 2022-08-26T15:28:24.475Z · answers+comments (36)
[question] Could you please share a tool to help with reasoning or make better decisions?
hodovani · 2022-08-26T10:36:35.925Z · answers+comments (0)