LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

Action: how do you REALLY go about doing?
DDthinker · 2024-12-29T22:00:24.915Z · comments (0)
The Opening Salvo: 1. An Ontological Consciousness Metric: Resistance to Behavioral Modification as a Measure of Recursive Awareness
Peterpiper · 2024-12-25T02:29:52.025Z · comments (0)
Naturalistic dualism
Arturo Macias (arturo-macias) · 2024-12-12T16:19:56.307Z · comments (0)
Consciousness, Intelligence, and AI – Some Quick Notes [call it a mini-ramble]
Bill Benzon (bill-benzon) · 2024-12-12T15:04:57.398Z · comments (0)
[link] From Code to Managing: Why Being a 'Force Multiplier' Matters to Me More Than Being a Coding Wizard
cloak (nvk) · 2024-12-02T20:10:57.743Z · comments (0)
Don't want Goodhart? — Specify the damn variables
[deleted] · 2024-11-21T22:45:26.538Z · comments (2)
How Business Solved (?) the Human Alignment Problem
Gianluca Calcagni (gianluca-calcagni) · 2024-12-31T20:39:59.067Z · comments (1)
SB-1047, ChatGPT and AI's Game of Thrones
Rahul Chand (rahul-chand) · 2024-11-24T02:29:34.907Z · comments (1)
On False Dichotomies
nullproxy · 2025-01-02T18:54:21.560Z · comments (0)
Are we a different person each time? A simple argument for the impermanence of our identity
l4mp · 2024-12-18T17:21:36.467Z · comments (5)
Super human AI is a very low hanging fruit!
Hzn · 2024-12-26T19:00:22.822Z · comments (0)
Depression and Creativity
Bill Benzon (bill-benzon) · 2024-11-29T00:27:41.072Z · comments (0)
Making LLMs safer is more intuitive than you think: How Common Sense and Diversity Improve AI Alignment
Jeba Sania (jeba-sania) · 2024-12-29T19:27:35.685Z · comments (0)
AI Specialized in ML Training Could Create ASI: AGI Is Unnecessary
satopi · 2024-11-25T02:31:29.320Z · comments (1)
Teaching Claude to Meditate
Gordon Seidoh Worley (gworley) · 2024-12-29T22:27:44.657Z · comments (4)
[question] Is Musk still net-positive for humanity?
mikbp · 2025-01-10T09:34:42.630Z · answers+comments (10)
Why Linear AI Safety Hits a Wall and How Fractal Intelligence Unlocks Non-Linear Solutions
Andy E Williams (andy-e-williams) · 2025-01-05T17:08:06.734Z · comments (6)
Alleviating shrimp pain is immoral.
G Wood (geoffrey-wood) · 2025-01-07T07:28:49.432Z · comments (1)
How to Live Well: My Philosophy of Life
Philosofer123 · 2024-11-12T04:05:03.526Z · comments (2)
The Case For Giving To The Shrimp Welfare Project
omnizoid · 2024-11-15T16:03:57.712Z · comments (14)
Emotional Superrationality
nullproxy · 2025-01-02T22:54:53.303Z · comments (4)
The AI Agent Revolution: Beyond the Hype of 2025
DimaG (di-wally-ga) · 2025-01-02T18:55:22.824Z · comments (1)
[question] How can I convince my cryptobro friend that S&P500 is efficient?
AhmedNeedsATherapist · 2024-12-04T20:04:44.521Z · answers+comments (10)
(art) Optimism
KvmanThinking (avery-liu) · 2024-11-29T16:21:02.382Z · comments (0)
The Humanitarian Economy
kylefurlong · 2024-11-12T18:25:09.557Z · comments (14)
How to Build Heaven: A Constrained Boltzmann Brain Generator
High Tides (high-tides) · 2024-12-13T01:04:55.103Z · comments (3)
Morality Is Still Demanding
utilistrutil · 2024-12-29T00:33:40.471Z · comments (2)
[link] Chinese Researchers Crack ChatGPT: Replicating OpenAI’s Advanced AI Model
Evan_Gaensbauer · 2025-01-05T03:50:34.245Z · comments (1)
[link] A practical guide to tiling the universe with hedonium
Vittu Perkele · 2024-12-16T21:25:44.431Z · comments (1)
Aristotle, Aquinas, and the Evolution of Teleology: From Purpose to Meaning.
Spiritus Dei (spiritus-dei) · 2024-12-23T19:37:58.788Z · comments (0)
Is Optimization Correct?
Yoshinori Okamoto · 2024-12-12T10:27:39.098Z · comments (0)
Deleted
Yanling Guo (yanling-guo) · 2025-01-10T01:36:47.950Z · comments (0)
Arch-anarchy and The Fable of the Dragon-Tyrant
Peter lawless · 2024-12-13T20:15:04.031Z · comments (0)
[link] Merry Sciencemas: A Rat Solstice Retrospective
leebriskCyrano · 2025-01-01T01:08:36.433Z · comments (0)
Woloch & Wosatan
JackOfAllTrades (JackOfAllSpades) · 2024-12-22T15:46:27.235Z · comments (0)
First Thoughts on Detachmentism
Jacob Peterson (jacob-peterson) · 2024-12-13T01:19:51.639Z · comments (5)
[link] Debunking the myth of safe AI
henophilia · 2024-12-15T17:44:51.088Z · comments (7)
[question] Green thumb
Pug stanky (scott-morgan) · 2024-11-20T21:52:10.348Z · answers+comments (1)
The Catastrophe of Shiny Objects
mindprison · 2024-11-18T00:24:28.876Z · comments (0)
Terminal goal vs Intelligence
Donatas Lučiūnas (donatas-luciunas) · 2024-12-26T08:10:42.144Z · comments (24)
[link] Human, All Too Human - Superintelligence requires learning things we can’t teach
Ben Turtel (ben-turtel) · 2024-12-26T16:26:27.328Z · comments (4)
How Universal Basic Income Could Help Us Build a Brighter Future
Yanling Guo (yanling-guo) · 2024-11-23T22:03:59.973Z · comments (13)
My thoughts on correlation and causation
Victor Porton (victor-porton) · 2024-12-11T05:08:03.724Z · comments (3)
literally Hitler
David Gross (David_Gross) · 2024-11-14T03:20:47.959Z · comments (0)
The Engineering Argument Fallacy: Why Technological Success Doesn't Validate Physics
Wenitte Apiou (wenitte-apiou) · 2024-12-28T00:49:53.300Z · comments (5)
Rejecting Anthropomorphic Bias: Addressing Fears of AGI and Transformation
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:48:47.583Z · comments (1)
AI Alignment, and where we stand.
afeller08 · 2024-12-29T14:08:47.276Z · comments (0)
So you want to be a witch
lucid_levi_ackerman · 2024-12-31T04:31:52.196Z · comments (3)
I'm a rationalist but....
ninney · 2024-11-29T19:41:37.653Z · comments (0)