LessWrong 2.0 Reader

next page (older posts) →

[question] Accuracy of arguments that are seen as ridiculous and intuitively false but don't have good counter-arguments
Christopher King (christopher-king) · 2023-04-29T23:58:24.012Z · answers+comments (39)
A Brief Introduction to ACI, 3.5: How ACI improves Active Inference
Akira Pyinya · 2023-04-29T23:51:11.236Z · comments (0)
A Guide to Forecasting AI Science Capabilities
Eleni Angelou (ea-1) · 2023-04-29T23:24:46.579Z · comments (1)
(A Failed Approach) From Precedent to Utility Function
Akira Pyinya · 2023-04-29T21:55:16.192Z · comments (2)
[link] Biological DOOM: a brief overview of biological computation
Metacelsus · 2023-04-29T20:02:25.257Z · comments (2)
Horizontal vs vertical generality
tailcalled · 2023-04-29T19:14:35.632Z · comments (9)
[question] Which technologies are stuck on initial adoption?
Vanessa Kosoy (vanessa-kosoy) · 2023-04-29T17:37:34.749Z · answers+comments (39)
Will GPT-5 be able to self-improve?
Nathan Helm-Burger (nathan-helm-burger) · 2023-04-29T17:34:48.028Z · comments (22)
Research agenda: Supervising AIs improving AIs
Quintin Pope (quintin-pope) · 2023-04-29T17:09:21.182Z · comments (5)
[link] The Many Stories of Science
amayhew (amayhew@uwo.ca) · 2023-04-29T16:49:08.964Z · comments (0)
One bit of observation can unlock many of optimization - but at what cost?
dr_s · 2023-04-29T10:53:03.969Z · comments (4)
[link] [SEE NEW EDITS] No, *You* Need to Write Clearer
Nicholas / Heather Kross (NicholasKross) · 2023-04-29T05:04:01.559Z · comments (65)
Programming an IFS for alternate uses
SilverFlame · 2023-04-29T00:25:38.980Z · comments (0)
Utopia for artificial minds
Ocracoke · 2023-04-28T22:52:13.843Z · comments (1)
Locating Fulcrum Experiences
LoganStrohl (BrienneYudkowsky) · 2023-04-28T20:14:03.644Z · comments (31)
"notkilleveryoneism" sounds dumb
bhauth · 2023-04-28T19:46:01.000Z · comments (27)
Conflict Resolution: the Game
CrimsonChin · 2023-04-28T19:32:13.637Z · comments (11)
LLMs and computation complexity
Jonathan Marcus (jonathan-marcus) · 2023-04-28T17:48:03.762Z · comments (29)
Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D
Nayanika (nayanika) · 2023-04-28T17:45:54.453Z · comments (0)
The Networked Memory Hierarchy Overhang
austinsilveria · 2023-04-28T17:44:09.215Z · comments (0)
[link] Hiring for the role of AI Fellow
Nidhi · 2023-04-28T17:42:28.257Z · comments (0)
[Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2
OliviaJ (olivia-jimenez-1) · 2023-04-28T16:02:00.060Z · comments (4)
[question] Realistic near-future scenarios of AI doom understandable for non-techy people?
RomanS · 2023-04-28T14:45:16.791Z · answers+comments (4)
The Social Alignment Problem
irving (judith) · 2023-04-28T14:16:17.825Z · comments (13)
[link] PhD Position: AI Interpretability in Berlin, Germany
Tiberius · 2023-04-28T13:44:25.448Z · comments (0)
[link] A Clearer Thinking tool that teaches you to use Internal Family Systems concepts
spencerg · 2023-04-28T13:42:35.063Z · comments (1)
[link] cyberpunk raccoons
bhauth · 2023-04-28T02:52:20.473Z · comments (7)
[link] AI safety logo design contest, due mid-May
Adrian Cipriani (adrian-cipriani-arts) · 2023-04-28T02:00:19.210Z · comments (0)
Dropout can create a privileged basis in the ReLU output model.
lewis smith (lsgos) · 2023-04-28T01:59:36.410Z · comments (3)
[link] Podcast with Divia Eden and Ronny Fernandez on the strong orthogonality thesis
DanielFilan · 2023-04-28T01:30:45.681Z · comments (1)
Simultaneous Footbass and Footdrums
jefftk (jkaufman) · 2023-04-28T00:50:01.697Z · comments (0)
[link] Tuning your Cognitive Strategies
Raemon · 2023-04-27T20:32:06.337Z · comments (57)
The LW crossroads of purpose
Caerulea-Lawrence (humm1lity) · 2023-04-27T19:53:58.311Z · comments (2)
Metaculus Event: Forecast Friday, April 28th at 12pm ET — Speed Forecasting Session! 
ChristianWilliams · 2023-04-27T19:50:44.343Z · comments (0)
Infrafunctions Proofs
Diffractor · 2023-04-27T19:25:44.903Z · comments (1)
Infrafunctions and Robust Optimization
Diffractor · 2023-04-27T19:25:11.662Z · comments (11)
What are the limits of superintelligence?
rainy · 2023-04-27T18:29:32.081Z · comments (3)
A Proposal for AI Alignment: Using Directly Opposing Models
Arne B (arne-123) · 2023-04-27T18:05:44.229Z · comments (5)
My views on “doom”
paulfchristiano · 2023-04-27T17:50:01.415Z · comments (35)
An International Manhattan Project for Artificial Intelligence
Glenn Clayton (glenn-clayton) · 2023-04-27T17:34:13.242Z · comments (2)
[link] Quote quiz: “drifting into dependence”
jasoncrawford · 2023-04-27T15:13:12.141Z · comments (6)
Second-Level Empiricism: Reframing the Two-Child Puzzle
Richard Henage (richard-henage) · 2023-04-27T15:04:22.842Z · comments (5)
[link] Interview with Paul Christiano: How We Prevent the AI’s from Killing us
Dalmert · 2023-04-27T14:39:49.571Z · comments (0)
AI #9: The Merge and the Million Tokens
Zvi · 2023-04-27T14:20:01.329Z · comments (8)
AI doom from an LLM-plateau-ist perspective
Steven Byrnes (steve2152) · 2023-04-27T13:58:10.973Z · comments (24)
Romance, misunderstanding, social stances, and the human LLM
Kaj_Sotala · 2023-04-27T12:59:09.229Z · comments (32)
[link] "A Note on the Compatibility of Different Robust Program Equilibria of the Prisoner's Dilemma"
the gears to ascension (lahwran) · 2023-04-27T07:34:20.722Z · comments (5)
[link] AI chatbots don't know why they did it
skybrian · 2023-04-27T06:57:09.239Z · comments (11)
[link] The Great Ideological Conflict: Intuitionists vs. Establishmentarians
Thoth Hermes (thoth-hermes) · 2023-04-27T01:49:52.732Z · comments (0)