LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

How evolutionary lineages of LLMs can plan their own future and act on these plans
Roman Leventov · 2022-12-25T18:11:18.754Z · comments (16)
Accurate Models of AI Risk Are Hyperexistential Exfohazards
Thane Ruthenis · 2022-12-25T16:50:24.817Z · comments (38)
ChatGPT is our Wright Brothers moment
Ron J (ron-j) · 2022-12-25T16:26:21.890Z · comments (9)
The Meditation on Winter
Raemon · 2022-12-25T16:12:10.039Z · comments (3)
I've updated towards AI boxing being surprisingly easy
Noosphere89 (sharmake-farah) · 2022-12-25T15:40:48.104Z · comments (20)
Take 14: Corrigibility isn't that great.
Charlie Steiner · 2022-12-25T13:04:21.534Z · comments (3)
Simplified Level Up
jefftk (jkaufman) · 2022-12-25T13:00:01.546Z · comments (16)
Hyperfinite graphs ~ manifolds
Alok Singh (OldManNick) · 2022-12-25T12:24:11.910Z · comments (5)
Inconsistent math is great
Alok Singh (OldManNick) · 2022-12-25T03:20:39.074Z · comments (2)
A hundredth of a bit of extra entropy
Adam Scherlis (adam-scherlis) · 2022-12-24T21:12:41.517Z · comments (4)
Shared reality: a key driver of human behavior
kdbscott · 2022-12-24T19:35:51.126Z · comments (25)
Contra Steiner on Too Many Natural Abstractions
DragonGod · 2022-12-24T17:42:53.828Z · comments (6)
Three reasons to cooperate
paulfchristiano · 2022-12-24T17:40:01.114Z · comments (14)
Practical AI risk I: Watching large compute
Gustavo Ramires (gustavo-ramires) · 2022-12-24T13:25:25.848Z · comments (0)
Non-Elevated Air Purifiers
jefftk (jkaufman) · 2022-12-24T12:40:01.244Z · comments (2)
The Case for Chip-Backed Dollars
AnthonyRepetto · 2022-12-24T10:28:25.554Z · comments (1)
List #3: Why not to assume on prior that AGI-alignment workarounds are available
Remmelt (remmelt-ellen) · 2022-12-24T09:54:17.375Z · comments (1)
List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans
Remmelt (remmelt-ellen) · 2022-12-24T09:53:19.926Z · comments (0)
List #1: Why stopping the development of AGI is hard but doable
Remmelt (remmelt-ellen) · 2022-12-24T09:52:57.266Z · comments (11)
The case against AI alignment
andrew sauer (andrew-sauer) · 2022-12-24T06:57:53.405Z · comments (110)
Content and Takeaways from SERI MATS Training Program with John Wentworth
RohanS · 2022-12-24T04:17:21.094Z · comments (3)
Löb's Lemma: an easier approach to Löb's Theorem
Andrew_Critch · 2022-12-24T02:02:42.137Z · comments (16)
Durkon, an open-source tool for Inherently Interpretable Modelling
abstractapplic · 2022-12-24T01:49:58.684Z · comments (0)
[link] Issues with uneven AI resource distribution
User_Luke · 2022-12-24T01:18:00.295Z · comments (9)
Loose Threads on Intelligence
Shoshannah Tekofsky (DarkSym) · 2022-12-24T00:38:41.689Z · comments (3)
[question] If you factor out next token prediction, what are the remaining salient features of human cognition?
shminux · 2022-12-24T00:38:04.801Z · answers+comments (7)
[question] Why is "Argument Mapping" Not More Common in EA/Rationality (And What Objections Should I Address in a Post on the Topic?)
HarrisonDurland · 2022-12-23T21:58:18.870Z · answers+comments (5)
The Fear [Fiction]
Yitz (yitz) · 2022-12-23T21:21:00.535Z · comments (0)
To err is neural: select logs with ChatGPT
VipulNaik · 2022-12-23T20:26:41.702Z · comments (2)
AISER - AIS Europe Retreat
Carolin (carolin) · 2022-12-23T19:03:06.091Z · comments (0)
Two Truths and a Prediction Market
Screwtape · 2022-12-23T18:52:10.734Z · comments (0)
ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text
Milan W (weibac) · 2022-12-23T17:40:59.862Z · comments (4)
On sincerity
Joe Carlsmith (joekc) · 2022-12-23T17:13:09.478Z · comments (6)
[link] Epigenetics of the mammalian germline
Metacelsus · 2022-12-23T15:21:37.032Z · comments (0)
Boston Solstice Songs
jefftk (jkaufman) · 2022-12-23T13:00:02.527Z · comments (0)
Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve.
MrThink (ViktorThink) · 2022-12-23T12:52:24.621Z · comments (37)
"Search" is dead. What is the new paradigm?
shminux · 2022-12-23T10:33:35.596Z · comments (9)
[link] one-shot AI, delegating embedded agency and decision theory, and one-shot QACI
Tamsin Leake (carado-1) · 2022-12-23T04:40:31.880Z · comments (0)
[link] Article Review: Discovering Latent Knowledge (Burns, Ye, et al)
Robert_AIZI · 2022-12-22T18:16:05.161Z · comments (4)
Let’s think about slowing down AI
KatjaGrace · 2022-12-22T17:40:04.787Z · comments (183)
Some Notes on the mathematics of Toy Autoencoding Problems
Spencer Becker-Kahn · 2022-12-22T17:21:25.371Z · comments (1)
December 2022 updates and fundraising
AI Impacts (AI Imacts) · 2022-12-22T17:20:05.382Z · comments (1)
Covid 12/22/22: Reevaluating Past Options
Zvi · 2022-12-22T16:50:01.236Z · comments (2)
China Covid #4
Zvi · 2022-12-22T16:30:00.919Z · comments (2)
Racing through a minefield: the AI deployment problem
HoldenKarnofsky · 2022-12-22T16:10:07.694Z · comments (2)
Lead in Chocolate?
jefftk (jkaufman) · 2022-12-22T16:10:01.522Z · comments (6)
Response to Holden’s alignment plan
Alex Flint (alexflint) · 2022-12-22T16:08:53.188Z · comments (4)
Staring into the abyss as a core life skill
benkuhn · 2022-12-22T15:30:05.093Z · comments (20)
Secular Solstice for children
juliawise · 2022-12-22T14:33:46.168Z · comments (1)
Mental acceptance and reflection
remember · 2022-12-22T14:32:15.589Z · comments (1)