LessWrong 2.0 Reader

I Have A New Paper Out Arguing Against The Asymmetry And For The Existence of Happy People Being Very Good
omnizoid · 2024-11-21T17:21:41.426Z · comments (3)
Fractals to Quasiparticles
James Camacho (james-camacho) · 2024-11-26T20:19:29.675Z · comments (0)
[link] Anthropic teams up with Palantir and AWS to sell AI to defense customers
Matrice Jacobine · 2024-11-09T11:50:34.050Z · comments (0)
Testing "True" Language Understanding in LLMs: A Simple Proposal
MtryaSam · 2024-11-02T19:12:34.710Z · comments (2)
[link] Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-11-26T09:58:44.025Z · comments (0)
[link] AISN #44: The Trump Circle on AI Safety; Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems
Corin Katzke (corin-katzke) · 2024-11-19T16:36:40.501Z · comments (0)
Contra Musician Gender II
jefftk (jkaufman) · 2024-11-13T03:30:09.510Z · comments (0)
Value/Utility: A History
Lorec · 2024-11-19T23:01:39.167Z · comments (0)
Rethinking Laplace's Rule of Succession
Cleo Nardo (strawberry calm) · 2024-11-22T18:46:25.156Z · comments (5)
Valence Need Not Be Bounded; Utility Need Not Synthesize
Lorec · 2024-11-20T01:37:20.911Z · comments (0)
Quantum Immortality: A Perspective if AI Doomers are Probably Right
avturchin · 2024-11-07T16:06:08.106Z · comments (53)
The grass is always greener in the environment that shaped your values
Karl Faulks (karl-faulks) · 2024-11-17T18:00:15.852Z · comments (0)
[link] Catastrophic Cyber Capabilities Benchmark (3CB): Robustly Evaluating LLM Agent Cyber Offense Capabilities
Jonathan N (derpyplops) · 2024-11-05T01:01:08.083Z · comments (0)
[link] An Uncanny Moat
Adam Newgas (BorisTheBrave) · 2024-11-15T11:39:15.165Z · comments (0)
New UChicago Rationality Group
Noah Birnbaum (daniel-birnbaum) · 2024-11-08T21:20:34.485Z · comments (0)
Reflections on ML4Good
james__p · 2024-11-25T02:40:32.586Z · comments (0)
Importing Bluesky Comments
jefftk (jkaufman) · 2024-11-28T03:50:06.635Z · comments (0)
Consider tabooing "I think"
Adam Zerner (adamzerner) · 2024-11-12T02:00:08.433Z · comments (2)
Proactive 'If-Then' Safety Cases
Nathan Helm-Burger (nathan-helm-burger) · 2024-11-18T21:16:37.237Z · comments (0)
[question] Set Theory Multiverse vs Mathematical Truth - Philosophical Discussion
Wenitte Apiou (wenitte-apiou) · 2024-11-01T18:56:06.900Z · answers+comments (25)
Join my new subscriber chat
sarahconstantin · 2024-11-06T02:30:11.059Z · comments (0)
Ethical Implications of the Quantum Multiverse
Jonah Wilberg (jrwilb@googlemail.com) · 2024-11-18T16:00:20.645Z · comments (20)
[link] An Epistemological Nightmare
Ariel Cheng (arielcheng218) · 2024-11-21T02:08:56.942Z · comments (0)
Not all biases are equal - a study of sycophancy and bias in fine-tuned LLMs
jakub_krys (kryjak) · 2024-11-11T23:11:15.233Z · comments (0)
Do Deep Neural Networks Have Brain-like Representations?: A Summary of Disagreements
Joseph Emerson (joseph-emerson) · 2024-11-18T00:07:15.155Z · comments (0)
Implications—How Conscious Significance Could Inform Our Lives
James Stephen Brown (james-brown) · 2024-11-26T17:42:49.085Z · comments (0)
[link] Spherical cow
dkl9 · 2024-11-11T03:10:27.788Z · comments (0)
[link] The Problem with Reasoners by Aidan McLaughlin
t14n (tommy-nguyen-1) · 2024-11-25T20:24:26.021Z · comments (1)
Americans are fat and sick—and it’s their fault…right?
Declan Molony (declan-molony) · 2024-11-19T06:41:36.648Z · comments (3)
[link] Internal music player: phenomenology of earworms
dkl9 · 2024-11-14T23:29:48.383Z · comments (4)
[question] Why would ASI share any resources with us?
Satron · 2024-11-13T23:38:36.535Z · answers+comments (8)
[question] How to cite LessWrong as an academic source?
PhilosophicalSoul (LiamLaw) · 2024-11-06T08:28:26.309Z · answers+comments (6)
2025 Q1 Pivotal Research Fellowship (Technical & Policy)
Tobias H (clearthis) · 2024-11-12T10:56:24.858Z · comments (0)
Another UFO Bet
codyz · 2024-11-01T01:55:27.301Z · comments (11)
[link] LLMs Do Not Think Step-by-step In Implicit Reasoning
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-11-28T09:16:57.463Z · comments (0)
[question] how to truly feel my beliefs?
KvmanThinking (avery-liu) · 2024-11-11T00:04:30.994Z · answers+comments (6)
Agency overhang as a proxy for Sharp left turn
Eris (anton-zheltoukhov) · 2024-11-07T12:14:24.333Z · comments (0)
If I care about measure, choices have additional burden (+AI generated LW-comments)
avturchin · 2024-11-15T10:27:15.212Z · comments (11)
[link] Is P(Doom) Meaningful? Bayesian vs. Popperian Epistemology Debate
Liron · 2024-11-09T23:39:30.039Z · comments (0)
Educational CAI: Aligning a Language Model with Pedagogical Theories
Bharath Puranam (bharath-puranam) · 2024-11-01T18:55:26.993Z · comments (1)
Apply to be a mentor in SPAR!
agucova · 2024-11-05T21:32:45.797Z · comments (0)
How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthesizing pathogens
jeremtti · 2024-11-27T21:17:56.687Z · comments (0)
Scattered thoughts on what it means for an LLM to believe
TheManxLoiner · 2024-11-06T22:10:29.429Z · comments (4)
A small improvement to Wikipedia page on Pareto Efficiency
ektimo · 2024-11-18T02:13:49.151Z · comments (0)
Using Narrative Prompting to Extract Policy Forecasts from LLMs
Max Ghenis (MaxGhenis) · 2024-11-05T04:37:52.004Z · comments (0)
Project Adequate: Seeking Cofounders/Funders
Lorec · 2024-11-17T03:12:12.995Z · comments (7)
[link] Formalize the Hashiness Model of AGI Uncontainability
Remmelt (remmelt-ellen) · 2024-11-09T16:10:05.032Z · comments (0)
Bellevue Library Meetup - Nov 23
Cedar (xida-ren) · 2024-11-09T23:05:02.452Z · comments (3)
Theories With Mentalistic Atoms Are As Validly Called Theories As Theories With Only Non-Mentalistic Atoms
Lorec · 2024-11-12T06:45:26.039Z · comments (5)
A better “Statement on AI Risk?”
Knight Lee (Max Lee) · 2024-11-25T04:50:29.399Z · comments (4)