LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

2023 Prediction Evaluations
Zvi · 2024-01-08T14:40:07.377Z · comments (0)
[question] Why aren't Yudkowsky & Bostrom getting more attention now?
JoshuaFox · 2024-01-08T14:42:46.303Z · answers+comments (8)
[question] How to find translations of a book?
Viliam · 2024-01-08T14:57:18.172Z · answers+comments (8)
[link] Task vectors & analogy making in LLMs
Sergii (sergey-kharagorgiev) · 2024-01-08T15:17:58.992Z · comments (1)
When "yang" goes wrong
Joe Carlsmith (joekc) · 2024-01-08T16:35:50.607Z · comments (6)
[link] Hackathon and Staying Up-to-Date in AI
jacobhaimes · 2024-01-08T17:10:20.270Z · comments (0)
[link] Project ideas: Backup plans & Cooperative AI
Lukas Finnveden (Lanrian) · 2024-01-08T17:19:33.181Z · comments (0)
Is it justifiable for non-experts to have strong opinions about Gaza?
Yair Halberstadt (yair-halberstadt) · 2024-01-08T17:31:21.934Z · comments (12)
[link] A starter guide for evals
Marius Hobbhahn (marius-hobbhahn) · 2024-01-08T18:24:23.913Z · comments (2)
Better Anomia
jefftk (jkaufman) · 2024-01-08T18:40:05.236Z · comments (0)
Hiring decisions are not suitable for prediction markets
SimonM · 2024-01-08T21:11:14.304Z · comments (6)
[link] Brief Thoughts on Justifications for Paternalism
Srdjan Miletic (srdjan-miletic) · 2024-01-09T00:36:25.329Z · comments (0)
Learning Math in Time for Alignment
NicholasKross · 2024-01-09T01:02:37.446Z · comments (3)
Can Morality Be Quantified?
Julius (julius-1) · 2024-01-09T06:35:05.426Z · comments (0)
[link] Compensating for Life Biases
Jonathan Moregård (JonathanMoregard) · 2024-01-09T14:39:14.229Z · comments (6)
Uncertainty in all its flavours
Cleo Nardo (strawberry calm) · 2024-01-09T16:21:07.915Z · comments (6)
AI demands unprecedented reliability
Jono (lw-user0246) · 2024-01-09T16:30:12.095Z · comments (5)
Does AI risk “other” the AIs?
Joe Carlsmith (joekc) · 2024-01-09T17:51:47.020Z · comments (3)
Announcing the Double Crux Bot
sanyer (santeri-koivula) · 2024-01-09T18:54:15.361Z · comments (6)
2024 ACX Predictions: Blind/Buy/Sell/Hold
Zvi · 2024-01-09T19:30:06.388Z · comments (2)
Bent or Blunt Hoods?
jefftk (jkaufman) · 2024-01-09T20:10:11.545Z · comments (0)
Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor
RogerDearnaley (roger-d-1) · 2024-01-09T20:42:28.349Z · comments (8)
[question] What's the protocol for if a novice has ML ideas that are unlikely to work, but might improve capabilities if they do work?
drocta · 2024-01-09T22:51:23.291Z · answers+comments (2)
On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche
Zack_M_Davis · 2024-01-09T23:12:20.349Z · comments (31)
A discussion of normative ethics
Gordon Seidoh Worley (gworley) · 2024-01-09T23:29:11.467Z · comments (6)
Stop being surprised by the passage of time
duck_master · 2024-01-10T00:36:51.598Z · comments (1)
AI as a natural disaster
Neil (neil-warren) · 2024-01-10T00:42:39.616Z · comments (1)
[question] Questions about Solomonoff induction
mukashi (adrian-arellano-davin) · 2024-01-10T01:16:58.595Z · answers+comments (11)
Saving the world sucks
Defective Altruism (Elijah Bodden) · 2024-01-10T05:55:46.504Z · comments (29)
[question] rabbit (a new AI company) and Large Action Model (LAM)
MiguelDev (whitehatStoic) · 2024-01-10T13:57:48.371Z · answers+comments (3)
[link] "Dark Constitution" for constraining some superintelligences
Valentine · 2024-01-10T16:02:36.706Z · comments (9)
The true cost of fences
pleiotroth · 2024-01-10T19:01:40.384Z · comments (2)
Beware the suboptimal routine
jwfiredragon · 2024-01-10T19:02:52.563Z · comments (3)
An Actually Intuitive Explanation of the Oberth Effect
Isaac King (KingSupernova) · 2024-01-10T20:23:17.216Z · comments (33)
The Aspiring Rationalist Congregation
maia · 2024-01-10T22:52:54.298Z · comments (23)
[link] The Perceptron Controversy
Yuxi_Liu · 2024-01-10T23:07:23.341Z · comments (18)
Universal Love Integration Test: Hitler
Raemon · 2024-01-10T23:55:35.526Z · comments (65)
Trying to align humans with inclusive genetic fitness
peterbarnett · 2024-01-11T00:13:29.487Z · comments (5)
[question] Prediction markets are consistently underconfident. Why?
Sinclair Chen (sinclair-chen) · 2024-01-11T02:44:02.824Z · answers+comments (4)
Introduce a Speed Maximum
jefftk (jkaufman) · 2024-01-11T02:50:04.284Z · comments (28)
[link] AI-Generated Music for Learning
ethanmorse · 2024-01-11T04:11:58.545Z · comments (1)
Reprograming the Mind: Meditation as a Tool for Cognitive Optimization
Jonas Hallgren · 2024-01-11T12:03:41.763Z · comments (3)
Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI?
RogerDearnaley (roger-d-1) · 2024-01-11T12:56:29.672Z · comments (4)
An even deeper atheism
Joe Carlsmith (joekc) · 2024-01-11T17:28:31.843Z · comments (47)
A Benchmark for Decision Theories
StrivingForLegibility · 2024-01-11T18:54:40.300Z · comments (0)
Apply to the PIBBSS Summer Research Fellowship
Nora_Ammann · 2024-01-12T04:06:58.328Z · comments (1)
[link] introduction to solid oxide electrolytes
bhauth · 2024-01-12T05:35:49.878Z · comments (0)
Decent plan prize announcement (1 paragraph, $1k)
lukehmiles (lcmgcd) · 2024-01-12T06:27:44.495Z · comments (19)
A Chinese Room Containing a Stack of Stochastic Parrots
RogerDearnaley (roger-d-1) · 2024-01-12T06:29:50.788Z · comments (2)
What good is G-factor if you're dumped in the woods? A field report from a camp counselor.
Hastings (hastings-greer) · 2024-01-12T13:17:23.829Z · comments (22)