LessWrong 2.0 Reader

Keeping self-replicating nanobots in check
Knight Lee (Max Lee) · 2024-12-09T05:25:45.898Z · comments (4)
[link] bending light
Recurrented (rachel-farley) · 2024-12-17T22:40:06.950Z · comments (4)
Walking Sue
Matthew McRedmond (matthew-mcredmond) · 2024-12-18T13:19:41.575Z · comments (5)
The CARLIN Method: Teaching AI How to Be Genuinely Funny
Greg Robison (grobison) · 2024-12-09T21:51:05.504Z · comments (0)
[question] fake alignment solutions????
KvmanThinking (avery-liu) · 2024-12-11T03:31:53.740Z · answers+comments (6)
[question] What are the main arguments against AGI?
Edy Nastase (edy-nastase) · 2024-12-24T15:49:03.196Z · answers+comments (6)
Madison Secular Solstice
svfritz · 2024-12-15T21:52:14.345Z · comments (0)
[Rationality Malaysia] 2024 year-end meetup!
Doris Liew (doris-liew) · 2024-12-23T16:02:03.566Z · comments (0)
Game Theory and Behavioral Economics in The Stock Market
Jaiveer Singh (jaiveer-singh) · 2024-12-24T18:15:55.468Z · comments (0)
PoMP and Circumstance: Introduction
benatkin · 2024-12-09T05:54:09.292Z · comments (1)
The "Everyone Can't Be Wrong" Prior causes AI risk denial but helped prehistoric people
Knight Lee (Max Lee) · 2025-01-09T05:54:43.395Z · comments (0)
building character isn't about willpower or sacrifice
dhruvmethi · 2024-12-19T18:17:18.521Z · comments (0)
Executive Director for AIS Brussels - Expression of interest
gergogaspar (gergo-gaspar) · 2024-12-19T09:19:42.138Z · comments (0)
Governance Course - Week 1 Reflections
la .alis. (Diatom) · 2025-01-09T04:48:27.502Z · comments (0)
Alienable (not Inalienable) Right to Buy
FlorianH (florian-habermacher) · 2025-01-01T12:19:03.691Z · comments (6)
[link] AGI is what generates evolutionarily fit and novel information
onur · 2025-01-01T09:22:55.841Z · comments (0)
[link] World models I'm currently building
xpostah · 2024-12-30T08:26:16.972Z · comments (0)
AIS Hungary is hiring a part-time Technical Lead! (Deadline: Dec 31st)
gergogaspar (gergo-gaspar) · 2024-12-17T14:12:36.988Z · comments (0)
[link] Expevolu, Part II: Buying land to create countries
Fernando · 2025-01-09T21:11:11.780Z · comments (0)
[link] Making progress bars for Alignment
Kabir Kumar (kabir-kumar) · 2025-01-03T21:25:58.292Z · comments (0)
Effective Altruism FAQ
omnizoid · 2024-12-16T16:27:16.044Z · comments (7)
[link] The MVO and The MVP
kwang · 2024-12-10T23:17:26.981Z · comments (0)
Ought We to Be Doing More Than We Are?
Jacob1 (JacobBowden) · 2025-01-09T18:12:32.149Z · comments (6)
[question] are IQ tests a good measure of intelligence?
KvmanThinking (avery-liu) · 2024-12-15T23:06:48.034Z · answers+comments (5)
[link] Variably compressibly studies are fun
dkl9 · 2024-12-16T16:00:13.910Z · comments (0)
Turing-Test-Passing AI implies Aligned AI
Roko · 2024-12-31T19:59:27.917Z · comments (28)
The Great OpenAI Debate: Should It Stay ‘Open’ or Go Private?
Satya (satya-2) · 2024-12-30T01:14:28.329Z · comments (0)
Emergence and Amplification of Survival
[deleted] · 2024-12-28T23:52:47.893Z · comments (0)
How i'm building my ai system, how it's going so far, and my thoughts on it
ollie_ · 2025-01-04T18:20:28.185Z · comments (3)
As We May Align
Gilbert C (gil-gil) · 2024-12-20T19:02:09.915Z · comments (0)
Actualism, asymmetry and extinction
MichaelStJules · 2025-01-07T16:02:31.610Z · comments (1)
Representing Irrationality in Game Theory
Larry Lee (larry-lee) · 2024-12-13T00:50:33.874Z · comments (3)
Many Worlds and the Problems of Evil
Jonah Wilberg (jrwilb@googlemail.com) · 2025-01-09T16:10:46.752Z · comments (2)
On False Dichotomies
nullproxy · 2025-01-02T18:54:21.560Z · comments (0)
Action: how do you REALLY go about doing?
DDthinker · 2024-12-29T22:00:24.915Z · comments (0)
How Business Solved (?) the Human Alignment Problem
Gianluca Calcagni (gianluca-calcagni) · 2024-12-31T20:39:59.067Z · comments (1)
Naturalistic dualism
Arturo Macias (arturo-macias) · 2024-12-12T16:19:56.307Z · comments (0)
The Opening Salvo: 1. An Ontological Consciousness Metric: Resistance to Behavioral Modification as a Measure of Recursive Awareness
Peterpiper · 2024-12-25T02:29:52.025Z · comments (0)
Consciousness, Intelligence, and AI – Some Quick Notes [call it a mini-ramble]
Bill Benzon (bill-benzon) · 2024-12-12T15:04:57.398Z · comments (0)
Super human AI is a very low hanging fruit!
Hzn · 2024-12-26T19:00:22.822Z · comments (0)
Are we a different person each time? A simple argument for the impermanence of our identity
l4mp · 2024-12-18T17:21:36.467Z · comments (5)
[question] Is Musk still net-positive for humanity?
mikbp · 2025-01-10T09:34:42.630Z · answers+comments (10)
Alleviating shrimp pain is immoral.
G Wood (geoffrey-wood) · 2025-01-07T07:28:49.432Z · comments (1)
Making LLMs safer is more intuitive than you think: How Common Sense and Diversity Improve AI Alignment
Jeba Sania (jeba-sania) · 2024-12-29T19:27:35.685Z · comments (0)
Teaching Claude to Meditate
Gordon Seidoh Worley (gworley) · 2024-12-29T22:27:44.657Z · comments (4)
Why Linear AI Safety Hits a Wall and How Fractal Intelligence Unlocks Non-Linear Solutions
Andy E Williams (andy-e-williams) · 2025-01-05T17:08:06.734Z · comments (6)
Emotional Superrationality
nullproxy · 2025-01-02T22:54:53.303Z · comments (4)
The AI Agent Revolution: Beyond the Hype of 2025
DimaG (di-wally-ga) · 2025-01-02T18:55:22.824Z · comments (1)
[link] Chinese Researchers Crack ChatGPT: Replicating OpenAI’s Advanced AI Model
Evan_Gaensbauer · 2025-01-05T03:50:34.245Z · comments (1)
How to Build Heaven: A Constrained Boltzmann Brain Generator
High Tides (high-tides) · 2024-12-13T01:04:55.103Z · comments (3)