LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

Papers on protein design
alexlyzhov · 2023-05-27T01:18:51.773Z · comments (0)
D&D.Sci 5E: Return of the League of Defenders
aphyer · 2023-05-26T20:39:18.879Z · comments (11)
Seeking (Paid) Case Studies on Standards
HoldenKarnofsky · 2023-05-26T17:58:57.042Z · comments (9)
Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies
Rubi J. Hudson (Rubi) · 2023-05-26T17:44:35.575Z · comments (13)
Request: stop advancing AI capabilities
So8res · 2023-05-26T17:42:07.182Z · comments (23)
Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI
titotal (lombertini) · 2023-05-26T15:57:43.620Z · comments (20)
[link] The American Information Revolution in Global Perspective
jasoncrawford · 2023-05-26T12:39:10.288Z · comments (1)
Helio-Selenic Laser Telescope (in SPACE!?)
Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-05-26T11:24:26.504Z · comments (2)
[question] Why is violence against AI labs a taboo?
ArisC · 2023-05-26T08:00:59.314Z · answers+comments (63)
Where do you lie on two axes of world manipulability?
Max H (Maxc) · 2023-05-26T03:04:17.780Z · comments (15)
Some thoughts on automating alignment research
Lukas Finnveden (Lanrian) · 2023-05-26T01:50:20.099Z · comments (4)
[question] What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5?
Super AGI (super-agi) · 2023-05-26T01:43:47.845Z · answers+comments (15)
Before smart AI, there will be many mediocre or specialized AIs
Lukas Finnveden (Lanrian) · 2023-05-26T01:38:41.562Z · comments (4)
how humans are aligned
bhauth · 2023-05-26T00:09:20.626Z · comments (3)
[question] What vegan food resources have you found useful?
Elizabeth (pktechgirl) · 2023-05-25T22:46:24.994Z · answers+comments (6)
Mob and Bailey
Screwtape · 2023-05-25T22:14:52.377Z · comments (15)
Look At What's In Front Of You (Conclusion to The Nuts and Bolts of Naturalism)
LoganStrohl (BrienneYudkowsky) · 2023-05-25T19:00:44.835Z · comments (1)
[link] [Market] Will AI xrisk seem to be handled seriously by the end of 2026?
tailcalled · 2023-05-25T18:51:49.184Z · comments (2)
[question] What should my college major be if I want to do AI alignment research?
metachirality · 2023-05-25T18:23:43.442Z · answers+comments (7)
[link] Is behavioral safety "solved" in non-adversarial conditions?
Robert_AIZI · 2023-05-25T17:56:28.223Z · comments (8)
Book Review: How Minds Change
bc4026bd4aaa5b7fe (bc4026bd4aaa5b7fe0bdcd47da7a22b453953f990d35286b9d315a619b23667a) · 2023-05-25T17:55:32.218Z · comments (52)
Self-administered EMDR without a therapist is very useful for a lot of things!
Anton Rodenhauser (anton-rodenhauser) · 2023-05-25T17:54:13.389Z · comments (10)
[link] RecurrentGPT: a loom-type tool with a twist
mishka · 2023-05-25T17:09:37.844Z · comments (0)
The Genie in the Bottle: An Introduction to AI Alignment and Risk
Snorkelfarsan · 2023-05-25T16:30:50.998Z · comments (1)
AI #13: Potential Algorithmic Improvements
Zvi · 2023-05-25T15:40:04.953Z · comments (4)
Solving the Mechanistic Interpretability challenges: EIS VII Challenge 2
StefanHex (Stefan42) · 2023-05-25T15:37:54.593Z · comments (1)
Malthusian Competition (not as bad as it seems)
Logan Zoellner (logan-zoellner) · 2023-05-25T15:30:18.534Z · comments (11)
You Don't Always Need Indexes
jefftk (jkaufman) · 2023-05-25T14:20:07.901Z · comments (6)
Theories of Biological Inspiration
Eric Zhang (ChaseDanton) · 2023-05-25T13:07:10.972Z · comments (3)
Evaluating strategic reasoning in GPT models
phelps-sg · 2023-05-25T11:51:31.808Z · comments (1)
Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom)
RogerDearnaley (roger-d-1) · 2023-05-25T09:26:31.316Z · comments (3)
Alignment solutions for weak AI don't (necessarily) scale to strong AI
Michael Tontchev (michael-tontchev-1) · 2023-05-25T08:26:42.502Z · comments (0)
[question] What features would you like to see in a personal forecasting / prediction tracking app?
regnarg · 2023-05-25T08:18:57.961Z · answers+comments (0)
[link] Announcing the Confido app: bringing forecasting to everyone
regnarg · 2023-05-25T08:18:13.307Z · comments (2)
But What If We Actually Want To Maximize Paperclips?
snerx · 2023-05-25T07:13:47.700Z · comments (6)
Exploiting Newcomb's Game Show
carterallen · 2023-05-25T04:01:08.692Z · comments (2)
[link] DeepMind: Model evaluation for extreme risks
Zach Stein-Perlman · 2023-05-25T03:00:00.915Z · comments (11)
[link] Why I'm Not (Yet) A Full-Time Technical Alignment Researcher
NicholasKross · 2023-05-25T01:26:49.378Z · comments (21)
Two ideas for alignment, perpetual mutual distrust and induction
APaleBlueDot (menarul-alam) · 2023-05-25T00:56:33.041Z · comments (2)
Evaluating Evidence Reconstructions of Mock Crimes - Submission 2
Alan E Dunne · 2023-05-24T22:17:56.037Z · comments (1)
[link] [Linkpost] Interpretability Dreams
DanielFilan · 2023-05-24T21:08:17.254Z · comments (2)
[link] Rishi Sunak mentions "existential threats" in talk with OpenAI, DeepMind, Anthropic CEOs
Arjun Panickssery (arjun-panickssery) · 2023-05-24T21:06:31.726Z · comments (1)
If you're not a morning person, consider quitting allergy pills
Brendan Long (korin43) · 2023-05-24T20:11:07.131Z · comments (3)
[link] Adumbrations on AGI from an outsider
nicholashalden · 2023-05-24T17:41:13.048Z · comments (44)
Open Thread With Experimental Feature: Reactions
jimrandomh · 2023-05-24T16:46:39.367Z · comments (189)
[link] A rejection of the Orthogonality Thesis
ArisC · 2023-05-24T16:37:51.056Z · comments (11)
Aligned AI via monitoring objectives in AutoGPT-like systems
Paul Colognese (paul-colognese) · 2023-05-24T15:59:13.621Z · comments (4)
[link] The Office of Science and Technology Policy put out a request for information on A.I.
HiroSakuraba (hirosakuraba) · 2023-05-24T13:33:30.672Z · comments (4)
[link] ChatGPT (May 2023) on Designing Friendly Superintelligence
Mitchell_Porter · 2023-05-24T10:47:16.325Z · comments (0)
No - AI is just as energy-efficient as your brain.
Maxwell Clarke (maxwell-clarke) · 2023-05-24T02:30:24.526Z · comments (7)