LessWrong 2.0 Reader



Aiming for Convergence Is Like Discouraging Betting
Zack_M_Davis · 2023-02-01T00:03:21.315Z · comments (17)
Contra Dance Lengths
jefftk (jkaufman) · 2023-02-01T03:30:01.330Z · comments (0)
[link] The effect of horizon length on scaling laws
Jacob_Hilton · 2023-02-01T03:59:26.074Z · comments (2)
Abstraction As Symmetry and Other Thoughts
Numendil · 2023-02-01T06:25:53.338Z · comments (9)
Voting Results for the 2021 Review
Raemon · 2023-02-01T08:02:06.744Z · comments (10)
[link] Directed Babbling
Yudhister Kumar (randomwalks) · 2023-02-01T09:10:26.516Z · comments (1)
Subitisation of Self
vitaliya · 2023-02-01T09:18:49.784Z · comments (4)
Polis: Why and How to Use it
brook · 2023-02-01T14:03:04.658Z · comments (0)
[link] Trends in the dollar training cost of machine learning systems
Ben Cottier (ben-cottier) · 2023-02-01T14:48:54.065Z · comments (0)
Language Models can be Utility-Maximising Agents
Raymond D · 2023-02-01T18:13:34.694Z · comments (1)
More findings on Memorization and double descent
Marius Hobbhahn (marius-hobbhahn) · 2023-02-01T18:26:41.320Z · comments (2)
AI Safety Arguments: An Interactive Guide
Lukas Trötzmüller (Lukas T) · 2023-02-01T19:26:58.017Z · comments (0)
Schizophrenia as a deficiency in long-range cortex-to-cortex communication
Steven Byrnes (steve2152) · 2023-02-01T19:32:24.447Z · comments (29)
[link] Eli Lifland on Navigating the AI Alignment Landscape
ozziegooen · 2023-02-01T21:17:05.807Z · comments (1)
[link] Can we “cure” cancer?
jasoncrawford · 2023-02-01T22:03:44.989Z · comments (31)
Hinton: "mortal" efficient analog hardware may be learned-in-place, uncopyable
the gears to ascension (lahwran) · 2023-02-01T22:19:03.227Z · comments (3)
[link] Product safety is a poor model for AI governance
Richard Korzekwa (Grothor) · 2023-02-01T22:40:04.469Z · comments (0)
Exercise is Good, Actually
Gordon Seidoh Worley (gworley) · 2023-02-02T00:09:18.143Z · comments (27)
Focus on the places where you feel shocked everyone's dropping the ball
So8res · 2023-02-02T00:27:55.687Z · comments (61)
Predicting researcher interest in AI alignment
Vael Gates · 2023-02-02T00:58:01.120Z · comments (0)
“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers
Vael Gates · 2023-02-02T01:00:01.067Z · comments (1)
Interviews with 97 AI Researchers: Quantitative Analysis
Maheen Shermohammed (msherm) · 2023-02-02T01:01:32.087Z · comments (0)
A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers
Austin Witte (austin-witte) · 2023-02-02T01:02:00.980Z · comments (1)
A simple technique to reduce negative rumination
cranberry_bear · 2023-02-02T01:33:11.553Z · comments (0)
Hacked Account Spam
jefftk (jkaufman) · 2023-02-02T01:50:01.738Z · comments (5)
[link] How to export Android Chrome tabs to an HTML file in Linux (as of February 2023)
Adam Scherlis (adam-scherlis) · 2023-02-02T02:03:55.934Z · comments (3)
Retrospective on the AI Safety Field Building Hub
Vael Gates · 2023-02-02T02:06:52.722Z · comments (0)
[link] Progress links and tweets, 2023-02-01
jasoncrawford · 2023-02-02T02:25:21.021Z · comments (0)
Research agenda: Formalizing abstractions of computations
Erik Jenner (ejenner) · 2023-02-02T04:29:06.568Z · comments (10)
[link] Temporally Layered Architecture for Adaptive, Distributed and Continuous Control
Roman Leventov · 2023-02-02T06:29:21.137Z · comments (4)
You Don't Exist, Duncan
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-02-02T08:37:01.049Z · comments (107)
Epoch Impact Report 2022
Jsevillamol · 2023-02-02T13:09:00.659Z · comments (0)
Don't Judge a Tool by its Average Output
silentbob · 2023-02-02T13:42:28.994Z · comments (2)
You are probably not a good alignment researcher, and other blatant lies
junk heap homotopy (zrkrlc) · 2023-02-02T13:55:15.186Z · comments (16)
Covid 2/2/23: The Emergency Ends on 5/11
Zvi · 2023-02-02T14:00:00.572Z · comments (6)
Heritability, Behaviorism, and Within-Lifetime RL
Steven Byrnes (steve2152) · 2023-02-02T16:34:33.182Z · comments (3)
More findings on maximal data dimension
Marius Hobbhahn (marius-hobbhahn) · 2023-02-02T18:33:53.606Z · comments (1)
[question] How can I help inflammation-based nerve damage be temporary?
Optimization Process · 2023-02-02T19:20:19.823Z · answers+comments (4)
Andrew Huberman on How to Optimize Sleep
Leon Lang (leon-lang) · 2023-02-02T20:17:12.010Z · comments (6)
Normative vs Descriptive Models of Agency
mattmacdermott · 2023-02-02T20:28:28.701Z · comments (5)
Conditioning Predictive Models: Large language models as predictors
evhub · 2023-02-02T20:28:46.612Z · comments (4)
Conditioning Predictive Models: Outer alignment via careful conditioning
evhub · 2023-02-02T20:28:58.955Z · comments (13)
Halifax Monthly Meetup: Introduction to Effective Altruism
Ideopunk · 2023-02-02T21:10:12.992Z · comments (0)
Is AI risk assessment too anthropocentric?
Craig Mattson (craig-mattson) · 2023-02-02T21:34:57.515Z · comments (6)
[question] Monotonous Work
Gideon Bauer (gideon-bauer) · 2023-02-02T21:35:19.710Z · answers+comments (0)
What fact that you know is true but most people aren't ready to accept it?
lorepieri (lorenzo-rex) · 2023-02-03T00:06:42.460Z · comments (209)
I don't think MIRI "gave up"
Raemon · 2023-02-03T00:26:07.552Z · comments (64)
[question] What is the risk of asking a counterfactual oracle a question that already had its answer erased?
Chris_Leong · 2023-02-03T03:13:10.508Z · answers+comments (0)
Jordan Peterson: Guru/Villain
Bryan Frances · 2023-02-03T09:02:47.626Z · comments (6)
ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads]
Bill Benzon (bill-benzon) · 2023-02-03T10:35:12.110Z · comments (0)
next page (older posts) →