LessWrong 2.0 Reader


Be 40% Deferent To the Wisdom Of the Crowd
DirectedEvolution (AllAmericanBreakfast) · 2023-06-03T10:23:34.832Z · comments
Announcing AISafety.info's Write-a-thon (June 16-18) and Second Distillation Fellowship (July 3-October 2)
steven0461 · 2023-06-03T02:03:01.364Z · comments
[link] Terry Tao is hosting an "AI to Assist Mathematical Reasoning" workshop
junk heap homotopy (zrkrlc) · 2023-06-03T01:19:08.398Z · comments
Upcoming AI regulations are likely to make for an unsafer world
shminux · 2023-06-03T01:07:35.921Z · comments (3)
[link] The AGI Race Between the US and China Doesn’t Exist.
Eva_B · 2023-06-03T00:22:31.162Z · comments (8)
Unfaithful Explanations in Chain-of-Thought Prompting
miles · 2023-06-03T00:22:14.624Z · comments (1)
[question] How could AIs 'see' each other's source code?
Kenny · 2023-06-02T22:41:20.107Z · answers+comments (3)
Proposal: labs should precommit to pausing if an AI argues for itself to be improved
NickGabs · 2023-06-02T22:31:55.321Z · comments (1)
Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program
Christopher King (christopher-king) · 2023-06-02T21:54:56.291Z · comments
Thoughts on Dancing the Whole Dance: Positional Calling for Contra
jefftk (jkaufman) · 2023-06-02T20:50:07.536Z · comments
Advice for Entering AI Safety Research
scasper · 2023-06-02T20:46:13.392Z · comments (2)
AI should be used to find better morality
Jorterder (utebaypi) · 2023-06-02T20:38:04.587Z · comments
Co-found an incubator for independent AI Safety researchers
Alexandra Bos (AlexandraB) · 2023-06-02T18:02:33.983Z · comments (3)
A mind needn't be curious to reap the benefits of curiosity
So8res · 2023-06-02T18:00:06.947Z · comments (6)
[question] Are computationally complex algorithms expensive to have, expensive to operate, or both?
Noosphere89 (sharmake-farah) · 2023-06-02T17:50:09.432Z · answers+comments (4)
[Replication] Conjecture's Sparse Coding in Toy Models
Hoagy · 2023-06-02T17:34:24.928Z · comments
Limits to Learning: Rethinking AGI’s Path to Dominance
tangerine · 2023-06-02T16:43:25.635Z · comments (0)
The Control Problem: Unsolved or Unsolvable?
Remmelt (remmelt-ellen) · 2023-06-02T15:42:37.269Z · comments (2)
Hallucinating Suction
Johannes C. Mayer (johannes-c-mayer) · 2023-06-02T14:16:34.676Z · comments
Winning doesn't need to flow through increases in rationality
MichelJusten · 2023-06-02T12:05:26.615Z · comments (2)
Product Recommendation: LessWrong dialogues with Recast
Stuckwork · 2023-06-02T08:05:41.497Z · comments
EA/LW/SSC Argentina First Meeting!
daviddelauba (davidfriva) · 2023-06-02T06:12:02.166Z · comments
Think carefully before calling RL policies "agents"
TurnTrout · 2023-06-02T03:46:07.467Z · comments (15)
[link] Dreams of "Mathopedia"
NicholasKross · 2023-06-02T01:30:05.007Z · comments (10)
[link] Outreach success: Intro to AI risk that has been successful at Meta
Michael Tontchev (michael-tontchev-1) · 2023-06-01T23:12:24.085Z · comments (6)
Open Source LLMs Can Now Actively Lie
[deleted] · 2023-06-01T22:03:36.662Z · comments
Safe AI and moral AI
William D'Alessandro (william-d-alessandro) · 2023-06-01T21:36:44.260Z · comments
AI #14: A Very Good Sentence
Zvi · 2023-06-01T21:30:04.548Z · comments (20)
Four levels of understanding decision theory
Max H (Maxc) · 2023-06-01T20:55:07.974Z · comments (4)
[link] Things I Learned by Spending Five Thousand Hours In Non-EA Charities
jenn (pixx) · 2023-06-01T20:48:03.940Z · comments (10)
self-improvement-executors are not goal-maximizers
bhauth · 2023-06-01T20:46:16.964Z · comments
Experimental Fat Loss
johnlawrenceaspden · 2023-06-01T20:26:18.296Z · comments (4)
Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better?
1a3orn · 2023-06-01T19:36:48.351Z · comments (27)
[link] Progress links and tweets, 2023-06-01
jasoncrawford · 2023-06-01T19:03:14.459Z · comments (3)
[link] When betting, consider non-ergodicity and absorbing states
NinaR · 2023-06-01T18:51:48.973Z · comments (2)
[question] When does an AI become intelligent enough to become self-aware and power-seeking?
FinalFormal2 · 2023-06-01T18:09:20.027Z · answers+comments
Value Physics
GageSiebert (gagesiebert) · 2023-06-01T18:03:52.096Z · comments (1)
Uncertainty about the future does not imply that AGI will go well
Lauro Langosco · 2023-06-01T17:38:09.619Z · comments (6)
[question] What are the arguments for/against FOOM?
FinalFormal2 · 2023-06-01T17:23:11.698Z · answers+comments
Change my mind: Veganism entails trade-offs, and health is one of the axes
Elizabeth (pktechgirl) · 2023-06-01T17:10:02.075Z · comments (40)
The unspoken but ridiculous assumption of AI doom: the hidden doom assumption
Christopher King (christopher-king) · 2023-06-01T17:01:49.088Z · comments (1)
Don’t waste your time meditating on meditation retreats!
Anton Rodenhauser (anton-rodenhauser) · 2023-06-01T16:56:44.499Z · comments (4)
[Request]: Use "Epilogenics" instead of "Eugenics" in most circumstances
GeneSmith · 2023-06-01T15:36:21.277Z · comments (19)
Book Club: Thomas Schelling's "The Strategy of Conflict"
Optimization Process · 2023-06-01T15:29:36.135Z · comments
Probably tell your friends when they make big mistakes
Chi Nguyen · 2023-06-01T14:30:35.579Z · comments (1)
[link] Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.
Soroush Pour (soroush-pour) · 2023-06-01T13:38:16.444Z · comments
Work dumber not smarter
lukehmiles (lcmgcd) · 2023-06-01T12:40:31.264Z · comments (12)
Short Remark on the (subjective) mathematical 'naturalness' of the Nanda--Lieberum addition modulo 113 algorithm
Spencer Becker-Kahn · 2023-06-01T11:31:37.796Z · comments (7)
How will they feed us
meijer1973 · 2023-06-01T08:49:51.645Z · comments (3)
"LLMs Don't Have a Coherent Model of the World" - What it Means, Why it Matters
Davidmanheim · 2023-06-01T07:46:37.075Z · comments (2)