LessWrong 2.0 Reader




[link] What About The Horses?
Maxwell Tabarrok (maxwell-tabarrok) · 2025-02-11T13:59:36.913Z · comments (17)
Dream, Truth, & Good
abramdemski · 2025-02-24T16:59:05.045Z · comments (2)
Chaos Investments v0.31
Screwtape · 2025-02-08T06:53:22.959Z · comments (1)
Learn to Develop Your Advantage
ReverendBayes (vedernikov-andrei) · 2025-01-29T22:06:00.641Z · comments (1)
[link] Is there such a thing as an impossible protein?
Abhishaike Mahajan (abhishaike-mahajan) · 2025-01-24T17:12:01.174Z · comments (3)
Untrusted monitoring insights from watching ChatGPT play coordination games
jwfiredragon · 2025-01-29T04:53:33.125Z · comments (4)
[question] Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia?
SpectrumDT · 2025-01-28T14:29:47.034Z · answers+comments (24)
[link] Ascetic hedonism
dkl9 · 2025-02-17T15:56:30.267Z · comments (9)
The Functionalist Case for Machine Consciousness: Evidence from Large Language Models
James Diacoumis (james-diacoumis) · 2025-01-22T17:43:41.215Z · comments (24)
Alignment Can Reduce Performance on Simple Ethical Questions
Daan Henselmans (drhens) · 2025-02-03T19:35:42.895Z · comments (7)
What makes a theory of intelligence useful?
Cole Wyeth (Amyr) · 2025-02-20T19:22:29.725Z · comments (0)
[link] Visual Reference for Frontier Large Language Models
kenakofer · 2025-02-11T05:14:24.752Z · comments (0)
Detecting out of distribution text with surprisal and entropy
Sandy Fraser (alex-fraser) · 2025-01-28T18:46:46.977Z · comments (4)
Literature Review of Text AutoEncoders
NickyP (Nicky) · 2025-02-19T21:54:14.905Z · comments (1)
Rational Utopia: Multiversal AI Alignment, Steerable ASI, Ultimate Human Freedom (V. 3: Multiversal Ethics, Place ASI)
ank · 2025-02-11T03:21:40.899Z · comments (7)
Nvidia doesn’t just sell shovels
winstonBosan · 2025-01-28T04:56:38.720Z · comments (4)
[question] Should you publish solutions to corrigibility?
rvnnt · 2025-01-30T11:52:05.983Z · answers+comments (13)
Deep sparse autoencoders yield interpretable features too
Armaan A. Abraham (armaanabraham) · 2025-02-23T05:46:59.189Z · comments (3)
[link] Systematic Sandbagging Evaluations on Claude 3.5 Sonnet
farrelmahaztra · 2025-02-14T01:22:46.695Z · comments (0)
[link] Anatomy of a Dance Class: A step by step guide
Nathan Young · 2025-01-26T18:02:04.974Z · comments (0)
Local Trust
ben_levinstein (benlev) · 2025-02-24T19:53:26.953Z · comments (0)
[link] Keeping Capital is the Challenge
LTM · 2025-02-03T02:04:27.142Z · comments (2)
[question] Would anyone be interested in pursuing the Virtue of Scholarship with me?
japancolorado (russell-white) · 2025-02-02T04:02:27.116Z · answers+comments (2)
Nick Land: Orthogonality
lumpenspace (lumpen-space) · 2025-02-04T21:07:04.947Z · comments (37)
Intelligence as Privilege Escalation
Cole Wyeth (Amyr) · 2025-02-23T19:31:27.604Z · comments (0)
[question] Recommendations for Recent Posts/Sequences on Instrumental Rationality?
Benjamin Hendricks (benjamin-hendricks) · 2025-01-26T00:41:08.577Z · answers+comments (3)
MAISU - Minimal AI Safety Unconference
Linda Linsefors · 2025-02-21T11:36:25.202Z · comments (0)
[link] Forecasting AGI: Insights from Prediction Markets and Metaculus
Alvin Ånestrand (alvin-anestrand) · 2025-02-04T13:03:45.927Z · comments (0)
Information throughput of biological humans and frontier LLMs
benwr · 2025-02-22T07:15:45.457Z · comments (0)
Why I'm Pouring Cold Water in My Left Ear, and You Should Too
Maloew (maloew-valenar) · 2025-01-24T23:13:52.340Z · comments (0)
[link] A Bearish Take on AI, as a Treat
rats (cartier-gucciscarf) · 2025-02-10T19:22:30.593Z · comments (0)
Introducing the Coalition for a Baruch Plan for AI: A Call for a Radical Treaty-Making process for the Global Governance of AI
rguerreschi · 2025-01-30T15:26:09.482Z · comments (0)
New Foresight Longevity Bio & Molecular Nano Grants Program
Allison Duettmann (allison-duettmann) · 2025-02-04T00:28:30.147Z · comments (0)
[question] Journalism student looking for sources
pinkerton · 2025-02-04T18:58:49.740Z · answers+comments (3)
What does success look like?
Raymond D · 2025-01-23T17:48:35.618Z · comments (0)
Contra Dances Getting Shorter and Earlier
jefftk (jkaufman) · 2025-01-23T23:30:03.595Z · comments (0)
[link] Uncontrollable: A Surprisingly Good Introduction to AI Risk
PeterMcCluskey · 2025-01-24T04:30:37.499Z · comments (0)
[link] US AI Safety Institute will be 'gutted,' Axios reports
Matrice Jacobine · 2025-02-20T14:40:13.049Z · comments (0)
Human-AI Relationality is Already Here
bridgebot (puppy) · 2025-02-20T07:08:22.420Z · comments (0)
Seasonal Patterns in BIDA's Attendance
jefftk (jkaufman) · 2025-02-02T02:40:03.768Z · comments (0)
Using Prompt Evaluation to Combat Bio-Weapon Research
Stuart_Armstrong · 2025-02-19T12:39:00.491Z · comments (2)
The Clueless Sniper and the Principle of Indifference
Jim Buhler (jim-buhler) · 2025-01-27T11:52:57.978Z · comments (26)
[link] The current AI strategic landscape: one bear's perspective
Matrice Jacobine · 2025-02-15T09:49:13.120Z · comments (0)
Hopeful hypothesis, the Persona Jukebox.
Donald Hobson (donald-hobson) · 2025-02-14T19:24:35.514Z · comments (4)
I'm making a ttrpg about life in an intentional community during the last year before the Singularity
bgaesop · 2025-02-13T21:54:09.002Z · comments (2)
My Mental Model of AI Optimist Opinions
tailcalled · 2025-01-29T18:44:36.485Z · comments (7)
[link] Fertility Will Never Recover
Eneasz · 2025-01-30T01:16:41.332Z · comments (31)
[link] Forecasting newsletter #2/2025: Forecasting meetup network
NunoSempere (Radamantis) · 2025-02-09T18:07:51.514Z · comments (0)
The Human Alignment Problem for AIs
rife (edgar-muniz) · 2025-01-22T04:06:10.872Z · comments (5)
[question] A Simulation of Automation economics?
qbolec · 2025-02-10T08:11:04.424Z · answers+comments (1)