LessWrong 2.0 Reader

next page (older posts) →

Bayesian updating in real life is mostly about understanding your hypotheses
Max H (Maxc) · 2024-01-01T00:10:30.978Z · comments (4)
Rhythm Stage Setup Components
jefftk (jkaufman) · 2024-01-01T03:10:15.625Z · comments (4)
2023 in AI predictions
jessicata (jessica.liu.taylor) · 2024-01-01T05:23:42.514Z · comments (34)
Research Jan/Feb 2024
Stephen Fowler (LosPolloFowler) · 2024-01-01T06:02:47.785Z · comments (0)
A hermeneutic net for agency
TsviBT · 2024-01-01T08:06:30.289Z · comments (4)
[question] What should a non-genius do in the face of rapid progress in GAI to ensure a decent life?
kaler · 2024-01-01T08:22:18.620Z · answers+comments (16)
Stop talking about p(doom)
Isaac King (KingSupernova) · 2024-01-01T10:57:28.636Z · comments (22)
Deception Chess
Chris Land · 2024-01-01T15:40:15.804Z · comments (2)
The Act Itself: Exceptionless Moral Norms
JohnBuridan · 2024-01-01T16:06:47.670Z · comments (10)
[link] Metaculus Hosts ACX 2024 Prediction Contest
ChristianWilliams · 2024-01-01T16:38:49.210Z · comments (0)
[link] Aldix and the Book of Life
ville · 2024-01-01T17:23:30.698Z · comments (0)
Mech Interp Challenge: January - Deciphering the Caesar Cipher Model
CallumMcDougall (TheMcDouglas) · 2024-01-01T18:03:45.774Z · comments (0)
San Francisco ACX Meetup “First Saturday”
guenael · 2024-01-01T20:58:41.795Z · comments (1)
Twin Cities ACX Meetup - January 2024
Timothy M. (timothy-bond) · 2024-01-01T21:13:33.867Z · comments (2)
[link] Steering Llama-2 with contrastive activation additions
Nina Rimsky (NinaR) · 2024-01-02T00:47:04.621Z · comments (29)
Boston Solstice 2023 Retrospective
jefftk (jkaufman) · 2024-01-02T03:10:05.694Z · comments (0)
Are Metaculus AI Timelines Inconsistent?
Chris_Leong · 2024-01-02T06:47:18.114Z · comments (7)
AI Is Not Software
Davidmanheim · 2024-01-02T07:58:04.992Z · comments (29)
Looking for Reading Recommendations: Content Moderation, Power & Censorship
Joerg Weiss · 2024-01-02T11:37:30.585Z · comments (7)
Dating Roundup #2: If At First You Don’t Succeed
Zvi · 2024-01-02T16:00:04.955Z · comments (29)
OpenAI's Preparedness Framework: Praise & Recommendations
Akash (akash-wasil) · 2024-01-02T16:20:04.249Z · comments (1)
Cortés, AI Risk, and the Dynamics of Competing Conquerors
James_Miller · 2024-01-02T16:37:17.457Z · comments (2)
Apologizing is a Core Rationalist Skill
johnswentworth · 2024-01-02T17:47:35.950Z · comments (42)
Otherness and control in the age of AGI
Joe Carlsmith (joekc) · 2024-01-02T18:15:54.168Z · comments (0)
Gentleness and the artificial Other
Joe Carlsmith (joekc) · 2024-01-02T18:21:34.746Z · comments (33)
Focusing on Mal-Alignment
John Fisher (john-fisher) · 2024-01-02T19:51:01.316Z · comments (0)
Agent membranes and causal distance
Chipmonk · 2024-01-02T22:43:41.508Z · comments (3)
MonoPoly Restricted Trust
ymeskhout · 2024-01-02T23:02:55.066Z · comments (37)
Trading off Lives
jefftk (jkaufman) · 2024-01-03T03:40:05.603Z · comments (12)
[question] Terminology: <something>-ware for ML?
Oliver Sourbut · 2024-01-03T11:42:37.710Z · answers+comments (27)
[question] Does LessWrong make a difference when it comes to AI alignment?
PhilosophicalSoul (LiamLaw) · 2024-01-03T12:21:32.587Z · answers+comments (11)
What Helped Me - Kale, Blood, CPAP, X-tiamine, Methylphenidate
Johannes C. Mayer (johannes-c-mayer) · 2024-01-03T13:22:11.700Z · comments (12)
Johannes' Biography
Johannes C. Mayer (johannes-c-mayer) · 2024-01-03T13:27:19.329Z · comments (0)
[question] Theoretically, could we balance the budget painlessly?
Logan Zoellner (logan-zoellner) · 2024-01-03T14:46:04.753Z · answers+comments (12)
Copyright Confrontation #1
Zvi · 2024-01-03T15:50:04.850Z · comments (7)
Trivial Mathematics as a Path Forward
ACrackedPot · 2024-01-03T16:41:32.537Z · comments (2)
[link] Practically A Book Review: Appendix to "Nonlinear's Evidence: Debunking False and Misleading Claims" (ThingOfThings)
tailcalled · 2024-01-03T17:07:13.990Z · comments (25)
Safety First: safety before full alignment. The deontic sufficiency hypothesis.
Chipmonk · 2024-01-03T17:55:19.825Z · comments (3)
Agent membranes/boundaries and formalizing “safety”
Chipmonk · 2024-01-03T17:55:21.018Z · comments (46)
$300 for the best sci-fi prompt: the results
RomanS · 2024-01-03T19:10:19.353Z · comments (19)
Spirit Airlines Merger Play
sapphire (deluks917) · 2024-01-03T19:25:54.726Z · comments (12)
What’s up with LLMs representing XORs of arbitrary features?
Sam Marks (samuel-marks) · 2024-01-03T19:44:33.162Z · comments (61)
"Attitudes Toward Artificial General Intelligence: Results from American Adults 2021 and 2023" - call for reviewers (Seeds of Science)
rogersbacon · 2024-01-03T20:11:43.216Z · comments (0)
Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios
Hiroshi Yamakawa (hiroshi-yamakawa) · 2024-01-03T23:46:46.924Z · comments (0)
Fact Checking and Retaliation Against Sources
jefftk (jkaufman) · 2024-01-04T00:41:07.134Z · comments (2)
[link] Non-alignment project ideas for making transformative AI go well
Lukas Finnveden (Lanrian) · 2024-01-04T07:23:13.658Z · comments (1)
[question] Which investments for aligned-AI outcomes?
tailcalled · 2024-01-04T13:28:57.198Z · answers+comments (8)
Screen-supported Portable Monitor
jefftk (jkaufman) · 2024-01-04T13:50:06.179Z · comments (10)
AI #45: To Be Determined
Zvi · 2024-01-04T15:00:05.936Z · comments (4)
EAG Bay Area Satellite event: AI Institution Design Hackathon 2024
beatrice@foresight.org · 2024-01-04T15:02:13.582Z · comments (0)