LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

[link] My Experience With A Magnet Implant
Vale · 2025-01-07T03:01:21.410Z · comments (2)
[link] An Epistemological Nightmare
Ariel Cheng (arielcheng218) · 2024-11-21T02:08:56.942Z · comments (0)
[link] Taking nonlogical concepts seriously
Kris Brown (kris-brown) · 2024-10-15T18:16:01.226Z · comments (5)
[link] The Problem with Reasoners by Aidan McLaughlin
t14n (tommy-nguyen-1) · 2024-11-25T20:24:26.021Z · comments (1)
[question] What actual bad outcome has "ethics-based" RLHF AI Alignment already prevented?
Roko · 2024-10-19T06:11:12.602Z · answers+comments (16)
2025 Q1 Pivotal Research Fellowship (Technical & Policy)
Tobias H (clearthis) · 2024-11-12T10:56:24.858Z · comments (0)
[link] [Linkpost] Hawkish nationalism vs international AI power and benefit sharing
jakub_krys (kryjak) · 2024-10-18T18:13:19.425Z · comments (5)
Ethical Implications of the Quantum Multiverse
Jonah Wilberg (jrwilb@googlemail.com) · 2024-11-18T16:00:20.645Z · comments (22)
Why modelling multi-objective homeostasis is essential for AI alignment (and how it helps with AI safety as well)
Roland Pihlakas (roland-pihlakas) · 2025-01-12T03:37:59.692Z · comments (0)
[link] Progress links and short notes, 2024-12-16
jasoncrawford · 2024-12-16T17:24:31.398Z · comments (0)
A small improvement to Wikipedia page on Pareto Efficiency
ektimo · 2024-11-18T02:13:49.151Z · comments (0)
[link] Sam Harris’s Argument For Objective Morality
Zero Contradictions · 2024-12-05T10:19:03.701Z · comments (5)
[question] Recommendations on communities that discuss AI applications in society
Annapurna (jorge-velez) · 2024-12-24T13:37:49.821Z · answers+comments (2)
[link] Deconstructing arguments against AI art
DMMF · 2024-12-27T19:40:13.015Z · comments (5)
[link] October 2024 Progress in Guaranteed Safe AI
Quinn (quinn-dougherty) · 2024-10-28T23:34:51.689Z · comments (0)
[link] AI Forecasting Benchmark: Congratulations to Q4 Winners + Q1 Practice Questions Open
ChristianWilliams · 2025-01-10T03:02:05.856Z · comments (0)
Enhancing Mathematical Modeling with LLMs: Goals, Challenges, and Evaluations
ozziegooen · 2024-10-28T21:44:42.352Z · comments (0)
The Robot, the Puppet-master, and the Psychohistorian
WillPetillo · 2024-12-28T00:12:08.824Z · comments (2)
Implications of Moral Realism on AI Safety
Myles H (zarsou9) · 2025-01-02T02:58:42.527Z · comments (1)
[link] Formalize the Hashiness Model of AGI Uncontainability
Remmelt (remmelt-ellen) · 2024-11-09T16:10:05.032Z · comments (0)
[question] How to cite LessWrong as an academic source?
PhilosophicalSoul (LiamLaw) · 2024-11-06T08:28:26.309Z · answers+comments (6)
[link] Model Integrity: MAI on Value Alignment
Jonas Hallgren · 2024-12-05T17:11:31.707Z · comments (11)
[question] Is malice a real emotion?
landscape_kiwi · 2024-12-01T23:47:55.823Z · answers+comments (5)
[question] Why does ChatGPT throw an error when outputting "David Mayer"?
Archimedes · 2024-12-01T00:11:53.690Z · answers+comments (9)
[link] Apply to attend a Global Challenges Project workshop in 2025!
LiamE (liam-epstein) · 2024-12-11T00:41:39.247Z · comments (0)
Another UFO Bet
codyz · 2024-11-01T01:55:27.301Z · comments (11)
Introducing the Evidence Color Wheel
Larry Lee (larry-lee) · 2024-12-14T16:08:38.744Z · comments (0)
1 What If We Rebuild Motivation with the Fermi ESTIMATion?
P. João (gabriel-brito) · 2024-12-17T07:46:40.547Z · comments (0)
[question] how to truly feel my beliefs?
KvmanThinking (avery-liu) · 2024-11-11T00:04:30.994Z · answers+comments (6)
How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthesizing pathogens
jeremtti · 2024-11-27T21:17:56.687Z · comments (0)
[link] AI Safety Newsletter #43: White House Issues First National Security Memo on AI. Plus, AI and Job Displacement, and AI Takes Over the Nobels
Corin Katzke (corin-katzke) · 2024-10-28T16:03:39.258Z · comments (0)
A short critique of Omohundro's "Basic AI Drives"
Soumyadeep Bose (soumyadeep-bose) · 2024-12-19T19:19:52.864Z · comments (0)
Replaceable Axioms give more credence than irreplaceable axioms
Yoav Ravid · 2024-12-20T00:51:13.578Z · comments (2)
Introducing Kairos: a new AI safety fieldbuilding organization (the new home for SPAR and FSP)
agucova · 2024-10-25T21:59:08.782Z · comments (0)
No one has the ball on 1500 Russian olympiad winners who've received HPMOR
Mikhail Samin (mikhail-samin) · 2025-01-12T11:43:36.560Z · comments (1)
Against Job Boards: Human Capital and the Legibility Trap
vaishnav92 · 2024-10-24T20:50:50.266Z · comments (1)
[link] What is compute governance?
Vishakha (vishakha-agrawal) · 2024-12-23T06:32:25.588Z · comments (0)
Reduce AI Self-Allegiance by saying "he" instead of "I"
Knight Lee (Max Lee) · 2024-12-23T09:32:29.947Z · comments (4)
[question] Why is neuron count of human brain relevant to AI timelines?
xpostah · 2024-12-24T05:15:58.839Z · answers+comments (7)
[link] Streamlining my voice note process
Vlad Sitalo (harcisis) · 2024-12-26T06:04:01.990Z · comments (1)
Beyond Gaussian: Language Model Representations and Distributions
Matt Levinson · 2024-11-24T01:53:38.156Z · comments (1)
[question] Why would ASI share any resources with us?
Satron · 2024-11-13T23:38:36.535Z · answers+comments (8)
Zombies among us
Declan Molony (declan-molony) · 2024-12-31T05:14:07.929Z · comments (3)
[link] Internal music player: phenomenology of earworms
dkl9 · 2024-11-14T23:29:48.383Z · comments (4)
[link] Example of GPU-accelerated scientific computing with PyTorch
Tahp · 2025-01-01T23:01:04.606Z · comments (0)
6. Increase Intelligence: Welcome AI Players
Allison Duettmann (allison-duettmann) · 2025-01-02T19:06:01.521Z · comments (1)
[link] How to Do a PhD (in AI Safety)
Lewis Hammond (lewis-hammond-1) · 2025-01-05T16:57:35.409Z · comments (0)
Speedrunning Rationality: Day II
aproteinengine · 2025-01-06T03:59:25.488Z · comments (3)
Generating Cognateful Sentences with Large Language Models
vkethana (vijay-k) · 2025-01-06T18:40:09.564Z · comments (0)
Agency overhang as a proxy for Sharp left turn
Eris (anton-zheltoukhov) · 2024-11-07T12:14:24.333Z · comments (0)