LessWrong 2.0 Reader



Intrinsic vs. Extrinsic Alignment
Alfonso Pérez Escudero (alfonso-perez-escudero) · 2023-06-01T01:06:40.733Z · comments (1)
Dancing to Positional Calling
jefftk (jkaufman) · 2023-06-01T02:40:06.292Z · comments (2)
An explanation of decision theories
metachirality · 2023-06-01T03:42:05.807Z · comments (4)
Re: The Crux List
Logan Zoellner (logan-zoellner) · 2023-06-01T04:48:24.320Z · comments (0)
Maximal Sentience: A Sentience Spectrum and Test Foundation
Snowyiu · 2023-06-01T06:45:12.616Z · comments (2)
General intelligence: what is it, what makes it hard, and will we have it soon?
homeopathicsyzygy · 2023-06-01T06:46:18.121Z · comments (0)
"LLMs Don't Have a Coherent Model of the World" - What it Means, Why it Matters
Davidmanheim · 2023-06-01T07:46:37.075Z · comments (2)
How will they feed us
meijer1973 · 2023-06-01T08:49:51.645Z · comments (3)
Short Remark on the (subjective) mathematical 'naturalness' of the Nanda--Lieberum addition modulo 113 algorithm
Spencer Becker-Kahn · 2023-06-01T11:31:37.796Z · comments (12)
Work dumber not smarter
lukehmiles (lcmgcd) · 2023-06-01T12:40:31.264Z · comments (17)
[link] Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.
Soroush Pour (soroush-pour) · 2023-06-01T13:38:16.444Z · comments (0)
Probably tell your friends when they make big mistakes
Chi Nguyen · 2023-06-01T14:30:35.579Z · comments (1)
Book Club: Thomas Schelling's "The Strategy of Conflict"
Optimization Process · 2023-06-01T15:29:36.135Z · comments (1)
[Request]: Use "Epilogenics" instead of "Eugenics" in most circumstances
GeneSmith · 2023-06-01T15:36:21.277Z · comments (49)
Don’t waste your time meditating on meditation retreats!
Anton Rodenhauser (anton-rodenhauser) · 2023-06-01T16:56:44.499Z · comments (7)
The unspoken but ridiculous assumption of AI doom: the hidden doom assumption
Christopher King (christopher-king) · 2023-06-01T17:01:49.088Z · comments (1)
Change my mind: Veganism entails trade-offs, and health is one of the axes
Elizabeth (pktechgirl) · 2023-06-01T17:10:02.075Z · comments (82)
[question] What are the arguments for/against FOOM?
FinalFormal2 · 2023-06-01T17:23:11.698Z · answers+comments (0)
Uncertainty about the future does not imply that AGI will go well
Lauro Langosco · 2023-06-01T17:38:09.619Z · comments (11)
[question] When does an AI become intelligent enough to become self-aware and power-seeking?
FinalFormal2 · 2023-06-01T18:09:20.027Z · answers+comments (1)
[link] Progress links and tweets, 2023-06-01
jasoncrawford · 2023-06-01T19:03:14.459Z · comments (3)
Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better?
1a3orn · 2023-06-01T19:36:48.351Z · comments (73)
Experimental Fat Loss
johnlawrenceaspden · 2023-06-01T20:26:18.296Z · comments (5)
self-improvement-executors are not goal-maximizers
bhauth · 2023-06-01T20:46:16.964Z · comments (0)
[link] Things I Learned by Spending Five Thousand Hours In Non-EA Charities
jenn (pixx) · 2023-06-01T20:48:03.940Z · comments (34)
Four levels of understanding decision theory
Max H (Maxc) · 2023-06-01T20:55:07.974Z · comments (11)
AI #14: A Very Good Sentence
Zvi · 2023-06-01T21:30:04.548Z · comments (30)
Safe AI and moral AI
William D'Alessandro (william-d-alessandro) · 2023-06-01T21:36:44.260Z · comments (0)
Open Source LLMs Can Now Actively Lie
Josh Levy (josh-levy) · 2023-06-01T22:03:36.662Z · comments (0)
[link] Outreach success: Intro to AI risk that has been successful
Michael Tontchev (michael-tontchev-1) · 2023-06-01T23:12:24.085Z · comments (8)
[link] Dreams of "Mathopedia"
NicholasKross · 2023-06-02T01:30:05.007Z · comments (16)
Think carefully before calling RL policies "agents"
TurnTrout · 2023-06-02T03:46:07.467Z · comments (35)
Product Recommendation: LessWrong dialogues with Recast
Bart Bussmann (Stuckwork) · 2023-06-02T08:05:41.497Z · comments (0)
Winning doesn't need to flow through increases in rationality
MichelJusten · 2023-06-02T12:05:26.615Z · comments (3)
Hallucinating Suction
Johannes C. Mayer (johannes-c-mayer) · 2023-06-02T14:16:34.676Z · comments (0)
The Control Problem: Unsolved or Unsolvable?
Remmelt (remmelt-ellen) · 2023-06-02T15:42:37.269Z · comments (46)
Limits to Learning: Rethinking AGI’s Path to Dominance
tangerine · 2023-06-02T16:43:25.635Z · comments (4)
[Replication] Conjecture's Sparse Coding in Toy Models
Hoagy · 2023-06-02T17:34:24.928Z · comments (0)
[question] Are computationally complex algorithms expensive to have, expensive to operate, or both?
Noosphere89 (sharmake-farah) · 2023-06-02T17:50:09.432Z · answers+comments (5)
A mind needn't be curious to reap the benefits of curiosity
So8res · 2023-06-02T18:00:06.947Z · comments (14)
AI should be used to find better morality
Jorterder (utebaypi) · 2023-06-02T20:38:04.587Z · comments (1)
Advice for Entering AI Safety Research
scasper · 2023-06-02T20:46:13.392Z · comments (2)
Thoughts on Dancing the Whole Dance: Positional Calling for Contra
jefftk (jkaufman) · 2023-06-02T20:50:07.536Z · comments (0)
Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program
Christopher King (christopher-king) · 2023-06-02T21:54:56.291Z · comments (4)
Proposal: labs should precommit to pausing if an AI argues for itself to be improved
NickGabs · 2023-06-02T22:31:55.321Z · comments (3)
[question] How could AIs 'see' each other's source code?
Kenny · 2023-06-02T22:41:20.107Z · answers+comments (45)
Unfaithful Explanations in Chain-of-Thought Prompting
miles · 2023-06-03T00:22:14.624Z · comments (8)
[link] The AGI Race Between the US and China Doesn’t Exist.
Eva_B · 2023-06-03T00:22:31.162Z · comments (14)
Upcoming AI regulations are likely to make for an unsafer world
shminux · 2023-06-03T01:07:35.921Z · comments (14)
[link] Terry Tao is hosting an "AI to Assist Mathematical Reasoning" workshop
junk heap homotopy (zrkrlc) · 2023-06-03T01:19:08.398Z · comments (1)