LessWrong 2.0 Reader



Humans, chimpanzees and other animals
gjm · 2023-05-30T23:53:08.295Z · comments (18)
The case for removing alignment and ML research from the training dataset
beren · 2023-05-30T20:54:36.596Z · comments (8)
Why Job Displacement Predictions are Wrong: Explanations of Cognitive Automation
Moritz Wallawitsch (moritz-wallawitsch) · 2023-05-30T20:43:49.615Z · comments (0)
PaLM-2 & GPT-4 in "Extrapolating GPT-N performance"
Lukas Finnveden (Lanrian) · 2023-05-30T18:33:40.765Z · comments (6)
RoboNet - A new internet protocol for AI
antoniomax · 2023-05-30T17:55:44.702Z · comments (1)
Why I don't think that the probability that AGI kills everyone is roughly 1 (but rather around 0.995).
Bastumannen (karl-roekaeus) · 2023-05-30T17:54:41.493Z · comments (0)
AI X-risk is a possible solution to the Fermi Paradox
magic9mushroom · 2023-05-30T17:42:22.421Z · comments (20)
[link] LIMA: Less Is More for Alignment
Ulisse Mini (ulisse-mini) · 2023-05-30T17:10:31.628Z · comments (6)
Boomerang - protocol to dissolve some commitment races
Filip Sondej · 2023-05-30T16:21:13.649Z · comments (10)
Announcing Apollo Research
Marius Hobbhahn (marius-hobbhahn) · 2023-05-30T16:17:19.767Z · comments (11)
Advice for new alignment people: Info Max
Jonas Hallgren · 2023-05-30T15:42:20.142Z · comments (4)
[question] Who is liable for AI?
jmh · 2023-05-30T13:54:45.776Z · answers+comments (4)
[link] AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI
Dan H (dan-hendrycks) · 2023-05-30T11:52:31.669Z · comments (0)
The bullseye framework: My case against AI doom
titotal (lombertini) · 2023-05-30T11:52:31.194Z · comments (35)
[link] Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures
Dan H (dan-hendrycks) · 2023-05-30T09:05:25.986Z · comments (77)
[link] Theoretical Limitations of Autoregressive Models
Gabriel Wu (gabriel-wu) · 2023-05-30T02:37:51.020Z · comments (1)
[link] A book review for “Animal Weapons” and cross-applying the lessons to x-risk
Habeeb Abdulfatah (habeeb-abdulfatah) · 2023-05-30T00:58:48.781Z · comments (1)
Without a trajectory change, the development of AGI is likely to go badly
Max H (Maxc) · 2023-05-29T23:42:16.511Z · comments (2)
Winners-take-how-much?
YonatanK (jonathan-kallay) · 2023-05-29T21:56:45.505Z · comments (2)
Reply to a fertility doctor concerning polygenic embryo screening
GeneSmith · 2023-05-29T21:50:20.854Z · comments (6)
Sentience matters
So8res · 2023-05-29T21:25:30.638Z · comments (96)
[link] Wikipedia as an introduction to the alignment problem
SoerenMind · 2023-05-29T18:43:47.247Z · comments (10)
[question] What are some of the best introductions/breakdowns of AI existential risk for those unfamiliar?
Isaac King (KingSupernova) · 2023-05-29T17:04:40.384Z · answers+comments (2)
Creating Flashcards with LLMs
Diogo Cruz · 2023-05-29T16:55:27.087Z · comments (3)
On the Impossibility of Intelligent Paperclip Maximizers
Michael Simkin (michael-simkin) · 2023-05-29T16:55:15.597Z · comments (5)
Minimum Viable Exterminator
Richard Horvath · 2023-05-29T16:32:59.055Z · comments (5)
An LLM-based “exemplary actor”
Roman Leventov · 2023-05-29T11:12:50.762Z · comments (0)
Aligning an H-JEPA agent via training on the outputs of an LLM-based "exemplary actor"
Roman Leventov · 2023-05-29T11:08:36.289Z · comments (10)
Gemini will bring the next big timeline update
p.b. · 2023-05-29T06:05:52.583Z · comments (6)
Proposed Alignment Technique: OSNR (Output Sanitization via Noising and Reconstruction) for Safer Usage of Potentially Misaligned AGI
sudo · 2023-05-29T01:35:51.616Z · comments (9)
Morality is Accidental & Self-Congratulatory
ymeskhout · 2023-05-29T00:40:59.354Z · comments (40)
[link] TinyStories: Small Language Models That Still Speak Coherent English
Ulisse Mini (ulisse-mini) · 2023-05-28T22:23:30.560Z · comments (8)
"Membranes" is better terminology than "boundaries" alone
Chipmonk · 2023-05-28T22:16:21.404Z · comments (12)
The king token
p.b. · 2023-05-28T19:18:16.539Z · comments (0)
Language Agents Reduce the Risk of Existential Catastrophe
cdkg · 2023-05-28T19:10:17.729Z · comments (14)
Devil's Advocate: Adverse Selection Against Conscientiousness
lionhearted (Sebastian Marshall) (lionhearted) · 2023-05-28T17:53:37.968Z · comments (2)
Reacts now enabled on 100% of posts, though still just experimenting
Ruby · 2023-05-28T05:36:40.953Z · comments (73)
[link] My AI Alignment Research Agenda and Threat Model, right now (May 2023)
Nicholas / Heather Kross (NicholasKross) · 2023-05-28T03:23:38.353Z · comments (0)
Kelly betting vs expectation maximization
MorgneticField (motred) · 2023-05-28T01:54:51.819Z · comments (33)
[link] Why and When Interpretability Work is Dangerous
Nicholas / Heather Kross (NicholasKross) · 2023-05-28T00:27:37.747Z · comments (7)
Twin Cities ACX Meetup - June 2023
Timothy M. (timothy-bond) · 2023-05-27T20:11:59.649Z · comments (1)
Project Idea: Challenge Groups for Alignment Researchers
Adam Zerner (adamzerner) · 2023-05-27T20:10:12.001Z · comments (0)
Introspective Bayes
False Name (False Name, Esq.) · 2023-05-27T19:35:52.856Z · comments (2)
Should Rational Animations invite viewers to read content on LessWrong?
Writer · 2023-05-27T19:26:47.606Z · comments (9)
[link] Who are the Experts on Cryonics?
Mati_Roy (MathieuRoy) · 2023-05-27T19:24:39.958Z · comments (9)
AI and Planet Earth are incompatible.
archeon · 2023-05-27T18:59:08.601Z · comments (2)
South Bay ACX/LW Meetup
IS (is) · 2023-05-27T17:25:14.159Z · comments (0)
Hands-On Experience Is Not Magic
Thane Ruthenis · 2023-05-27T16:57:10.531Z · comments (14)
Is Deontological AI Safe? [Feedback Draft]
Dan H (dan-hendrycks) · 2023-05-27T16:39:25.556Z · comments (15)
San Francisco ACX Meetup “First Saturday” June 3, 1 pm
guenael · 2023-05-27T13:58:33.715Z · comments (0)
next page (older posts) →