LessWrong 2.0 Reader

[question] Why would ASI share any resources with us?
Satron · 2024-11-13T23:38:36.535Z · answers+comments (8)
[question] How to cite LessWrong as an academic source?
PhilosophicalSoul (LiamLaw) · 2024-11-06T08:28:26.309Z · answers+comments (6)
Speedrunning Rationality: Day I
aproteinengine · 2025-01-04T14:28:49.220Z · comments (0)
Towards mutually assured cooperation
mikko (morrel) · 2024-12-22T20:46:21.965Z · comments (0)
Using LLM Search to Augment (Mathematics) Research
kaleb (geomaturge) · 2024-12-19T18:59:34.391Z · comments (0)
[link] Is P(Doom) Meaningful? Bayesian vs. Popperian Epistemology Debate
Liron · 2024-11-09T23:39:30.039Z · comments (0)
[link] AI Safety at the Frontier: Paper Highlights, December '24
gasteigerjo · 2025-01-11T22:54:02.625Z · comments (1)
Theories With Mentalistic Atoms Are As Validly Called Theories As Theories With Only Non-Mentalistic Atoms
Lorec · 2024-11-12T06:45:26.039Z · comments (5)
[link] How to Edit an Essay into a Solstice Speech?
Czynski (JacobKopczynski) · 2024-12-15T04:30:50.545Z · comments (1)
Apply to be a mentor in SPAR!
agucova · 2024-11-05T21:32:45.797Z · comments (0)
Agency overhang as a proxy for Sharp left turn
Eris (anton-zheltoukhov) · 2024-11-07T12:14:24.333Z · comments (0)
No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate!
Christopher King (christopher-king) · 2024-12-28T16:05:47.037Z · comments (8)
[link] Inescapably Value-Laden Experience—a Catchy Term I Made Up to Make Morality Rationalisable
James Stephen Brown (james-brown) · 2024-12-19T04:45:37.906Z · comments (0)
[question] Artificial V/S Organoid Intelligence
10xyz (10xyz-coder) · 2024-10-23T14:31:46.385Z · answers+comments (0)
Reducing x-risk might be actively harmful
MountainPath · 2024-11-18T14:25:07.127Z · comments (5)
[link] How long should political (and other) terms be?
ohmurphy · 2024-10-14T21:38:43.050Z · comments (0)
Project Adequate: Seeking Cofounders/Funders
Lorec · 2024-11-17T03:12:12.995Z · comments (7)
Why empiricists should believe in AI risk
Knight Lee (Max Lee) · 2024-12-11T03:51:17.979Z · comments (0)
Educational CAI: Aligning a Language Model with Pedagogical Theories
Bharath Puranam (bharath-puranam) · 2024-11-01T18:55:26.993Z · comments (1)
Fractals to Quasiparticles
James Camacho (james-camacho) · 2024-11-26T20:19:29.675Z · comments (0)
Morality as Cooperation Part I: Humans
DeLesley Hutchins (delesley-hutchins) · 2024-12-05T08:16:51.778Z · comments (0)
A proposal for iterated interpretability with known-interpretable narrow AIs
Peter Berggren (peter-berggren) · 2025-01-11T14:43:05.423Z · comments (0)
Using Narrative Prompting to Extract Policy Forecasts from LLMs
Max Ghenis (MaxGhenis) · 2024-11-05T04:37:52.004Z · comments (0)
Experiments are in the territory, results are in the map
Tahp · 2024-12-06T15:44:50.412Z · comments (1)
If I care about measure, choices have additional burden (+AI generated LW-comments)
avturchin · 2024-11-15T10:27:15.212Z · comments (11)
"Alignment at Large": Bending the Arc of History Towards Life-Affirming Futures
welfvh · 2024-12-03T21:17:56.466Z · comments (0)
4. Uphold Voluntarism: Physical Defense
Allison Duettmann (allison-duettmann) · 2025-01-02T19:04:53.675Z · comments (2)
Bellevue Library Meetup - Nov 23
Cedar (xida-ren) · 2024-11-09T23:05:02.452Z · comments (3)
Printable book of some rationalist creative writing (from Scott A. & Eliezer)
CounterBlunder · 2024-12-23T15:44:31.437Z · comments (0)
[link] OpenAI o1 + ChatGPT Pro release
anaguma · 2024-12-05T19:13:21.843Z · comments (0)
Scattered thoughts on what it means for an LLM to believe
TheManxLoiner · 2024-11-06T22:10:29.429Z · comments (4)
Logic vs intuition <=> algorithm vs ML
pchvykov · 2025-01-04T09:06:51.822Z · comments (0)
[link] World Models I'm Currently Building
temporary · 2024-12-15T16:29:08.287Z · comments (1)
Towards a Clever Hans Test: Unmasking Sentience Biases in Chatbot Interactions
glykokalyx · 2024-11-10T22:34:58.956Z · comments (0)
Linkpost: Look at the Water
J Bostock (Jemist) · 2024-12-30T19:49:04.107Z · comments (3)
Effects of Non-Uniform Sparsity on Superposition in Toy Models
Shreyans Jain (shreyans-jain) · 2024-11-14T16:59:43.234Z · comments (3)
Some Comments on Recent AI Safety Developments
testingthewaters · 2024-11-09T16:44:58.936Z · comments (0)
What are Emotions?
Myles H (zarsou9) · 2024-11-15T04:20:27.388Z · comments (13)
It is time to start war gaming for AGI
yanni kyriacos (yanni) · 2024-10-17T05:14:17.932Z · comments (1)
[question] Is there a known method to find others who came across the same potential infohazard without spoiling it to the public?
hive · 2024-10-17T10:47:05.099Z · answers+comments (6)
[question] is there a big dictionary somewhere with all your jargon and acronyms and whatnot?
KvmanThinking (avery-liu) · 2024-10-17T11:30:50.937Z · answers+comments (7)
Investing in Robust Safety Mechanisms is critical for reducing Systemic Risks
Tom DAVID (tom-david) · 2024-12-11T13:37:24.177Z · comments (3)
Activation Magnitudes Matter On Their Own: Insights from Language Model Distributional Analysis
Matt Levinson · 2025-01-10T06:53:02.228Z · comments (0)
[question] Are there ways to artificially fix laziness?
Aidar (aidar-toktargazin) · 2024-12-08T18:26:26.433Z · answers+comments (2)
Fred the Heretic, a GPT for poetry
Bill Benzon (bill-benzon) · 2024-12-08T16:52:07.660Z · comments (0)
Model Integrity
ryan.lowe · 2024-12-06T21:28:20.775Z · comments (1)
[link] Can AI improve the current state of molecular simulation?
Abhishaike Mahajan (abhishaike-mahajan) · 2024-12-06T20:22:31.685Z · comments (0)
More Growth, Melancholy, and MindCraft @3QD [revised and updated]
Bill Benzon (bill-benzon) · 2024-12-05T19:36:02.289Z · comments (0)
[link] Expevolu, a laissez-faire approach to country creation
Fernando · 2024-12-05T19:29:24.011Z · comments (4)
Are SAE features from the Base Model still meaningful to LLaVA?
Shan23Chen (shan-chen) · 2024-12-05T19:24:34.727Z · comments (0)