LessWrong 2.0 Reader

[question] Has Someone Checked The Cold-Water-In-Left-Ear Thing?
Maloew (maloew-valenar) · 2024-12-28T20:15:35.951Z · answers+comments (0)
[question] Can private companies test LVTs?
Yair Halberstadt (yair-halberstadt) · 2025-01-02T11:08:07.352Z · answers+comments (0)
Linkpost: Look at the Water
J Bostock (Jemist) · 2024-12-30T19:49:04.107Z · comments (3)
A Collection of Empirical Frames about Language Models
Daniel Tan (dtch1997) · 2025-01-02T02:49:05.965Z · comments (0)
[question] Could my work, "Beyond HaHa" benefit the LessWrong community?
P. João (gabriel-brito) · 2024-12-29T16:14:13.497Z · answers+comments (0)
The Robot, the Puppet-master, and the Psychohistorian
WillPetillo · 2024-12-28T00:12:08.824Z · comments (2)
[link] Example of GPU-accelerated scientific computing with PyTorch
Tahp · 2025-01-01T23:01:04.606Z · comments (0)
No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate!
Christopher King (christopher-king) · 2024-12-28T16:05:47.037Z · comments (7)
Implications of Moral Realism on AI Safety
Myles H (zarsou9) · 2025-01-02T02:58:42.527Z · comments (0)
Dishbrain and implications.
RussellThor · 2024-12-29T10:42:43.912Z · comments (0)
[link] When do experts think human-level AI will be created?
Vishakha (vishakha-agrawal) · 2024-12-30T06:20:33.158Z · comments (0)
[question] 2025 Alignment Predictions
anaguma · 2025-01-02T05:37:36.912Z · answers+comments (3)
Teaching Claude to Meditate
Gordon Seidoh Worley (gworley) · 2024-12-29T22:27:44.657Z · comments (3)
[link] Alignment Is NOT All You Need
Adam Jones (domdomegg) · 2025-01-02T17:50:00.486Z · comments (0)
I Recommend More Training Rationales
Gianluca Calcagni (gianluca-calcagni) · 2024-12-31T14:06:44.007Z · comments (0)
[link] World models I'm currently building
xpostah · 2024-12-30T08:26:16.972Z · comments (0)
[link] AGI is what generates evolutionarily fit and novel information
onur · 2025-01-01T09:22:55.841Z · comments (0)
[link] Riffing on Machines of Loving Grace
an1lam · 2025-01-01T01:06:45.122Z · comments (0)
Alienable (not Inalienable) Right to Buy
FlorianH (florian-habermacher) · 2025-01-01T12:19:03.691Z · comments (3)
The Great OpenAI Debate: Should It Stay ‘Open’ or Go Private?
Satya (satya-2) · 2024-12-30T01:14:28.329Z · comments (0)
Emergence and Amplification of Survival
jgraves01 · 2024-12-28T23:52:47.893Z · comments (0)
Morality Is Still Demanding
utilistrutil · 2024-12-29T00:33:40.471Z · comments (2)
Action: how do you REALLY go about doing?
DDthinker · 2024-12-29T22:00:24.915Z · comments (0)
[link] Merry Sciencemas: A Rat Solstice Retrospective
leebriskCyrano · 2025-01-01T01:08:36.433Z · comments (0)
Making LLMs safer is more intuitive than you think: How Common Sense and Diversity Improve AI Alignment
Jeba Sania (jeba-sania) · 2024-12-29T19:27:35.685Z · comments (0)
Turing-Test-Passing AI implies Aligned AI
Roko · 2024-12-31T19:59:27.917Z · comments (27)
How Business Solved (?) the Human Alignment Problem
Gianluca Calcagni (gianluca-calcagni) · 2024-12-31T20:39:59.067Z · comments (1)
The Engineering Argument Fallacy: Why Technological Success Doesn't Validate Physics
Wenitte Apiou (wenitte-apiou) · 2024-12-28T00:49:53.300Z · comments (5)
Rejecting Anthropomorphic Bias: Addressing Fears of AGI and Transformation
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:48:47.583Z · comments (1)
AI Alignment, and where we stand.
afeller08 · 2024-12-29T14:08:47.276Z · comments (0)
So you want to be a witch
lucid_levi_ackerman · 2024-12-31T04:31:52.196Z · comments (3)
The Misconception of AGI as an Existential Threat: A Reassessment
Gedankensprünge (gedankenspruenge) · 2024-12-29T01:39:57.780Z · comments (0)