LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

[question] Were there any ancient rationalists?
OliverHayman · 2024-05-03T18:26:52.228Z · answers+comments (3)
Key takeaways from our EA and alignment research surveys
Cameron Berg (cameron-berg) · 2024-05-03T18:10:41.416Z · comments (8)
[link] "AI Safety for Fleshy Humans" an AI Safety explainer by Nicky Case
habryka (habryka4) · 2024-05-03T18:10:12.478Z · comments (10)
AI Clarity: An Initial Research Agenda
Justin Bullock (justin-bullock) · 2024-05-03T13:54:22.894Z · comments (1)
Apply to ESPR & PAIR, Rationality and AI Camps for Ages 16-21
Anna Gajdova (anna-gajdova) · 2024-05-03T12:36:37.610Z · comments (0)
On precise out-of-context steering
Olli Järviniemi (jarviniemi) · 2024-05-03T09:41:25.093Z · comments (6)
LLM+Planners hybridisation for friendly AGI
installgentoo · 2024-05-03T08:40:18.128Z · comments (2)
Mechanistic Interpretability Workshop Happening at ICML 2024!
Neel Nanda (neel-nanda-1) · 2024-05-03T01:18:26.936Z · comments (4)
Weekly newsletter for AI safety events and training programs
Bryce Robertson (bryceerobertson) · 2024-05-03T00:33:29.418Z · comments (0)
CCS: Counterfactual Civilization Simulation
Pi Rogers (pi-rogers) · 2024-05-02T22:54:29.773Z · comments (0)
[link] Let's Design A School, Part 2.1 School as Education - Structure
Sable · 2024-05-02T22:04:30.435Z · comments (2)
Why I'm not doing PauseAI
Ariel Kwiatkowski (ariel-kwiatkowski) · 2024-05-02T22:00:35.207Z · comments (5)
AI #61: Meta Trouble
Zvi · 2024-05-02T18:40:03.242Z · comments (0)
[question] How would you navigate a severe financial emergency with no help or resources?
Tigerlily · 2024-05-02T18:27:51.329Z · answers+comments (21)
Why is AGI/ASI Inevitable?
DeathlessAmaranth · 2024-05-02T18:27:17.486Z · comments (6)
[link] AISN #34: New Military AI Systems Plus, AI Labs Fail to Uphold Voluntary Commitments to UK AI Safety Institute, and New AI Policy Proposals in the US Senate
aogara (Aidan O'Gara) · 2024-05-02T16:12:47.783Z · comments (0)
AI Salon: Trustworthy AI Futures #1
Ian Eisenberg (ian-eisenberg) · 2024-05-02T16:07:01.928Z · comments (0)
How to write Pseudocode and why you should
Johannes C. Mayer (johannes-c-mayer) · 2024-05-02T15:53:44.014Z · comments (5)
AI #62: Too Soon to Tell
Zvi · 2024-05-02T15:40:04.364Z · comments (8)
Whiteboard Program Tracing: Debug a Program Before You Have the Code
Johannes C. Mayer (johannes-c-mayer) · 2024-05-02T15:30:10.500Z · comments (0)
[question] Which skincare products are evidence-based?
Vanessa Kosoy (vanessa-kosoy) · 2024-05-02T15:22:12.597Z · answers+comments (43)
Q&A on Proposed SB 1047
Zvi · 2024-05-02T15:10:02.916Z · comments (5)
[question] What are the Activities that make up your Research Process?
Johannes C. Mayer (johannes-c-mayer) · 2024-05-02T15:01:01.437Z · answers+comments (0)
Please stop publishing ideas/insights/research about AI
Tamsin Leake (carado-1) · 2024-05-02T14:54:15.691Z · comments (56)
[question] How do you Select the Right Research Activity in the Right Moment?
Johannes C. Mayer (johannes-c-mayer) · 2024-05-02T14:45:30.428Z · answers+comments (1)
[question] Can stealth aircraft be detected optically?
Yair Halberstadt (yair-halberstadt) · 2024-05-02T07:47:00.101Z · answers+comments (25)
An explanation of evil in an organized world
KatjaGrace · 2024-05-02T05:20:06.240Z · comments (9)
[link] Why I am no longer thinking about/working on AI safety
jbkjr · 2024-05-02T05:08:56.813Z · comments (0)
[link] Silver Bulletin: For most people, politics is about fitting in
Gunnar_Zarncke · 2024-05-01T18:12:43.238Z · comments (4)
Launching applications for AI Safety Careers Course India 2024
Axiom_Futures · 2024-05-01T17:55:38.013Z · comments (1)
[question] Shane Legg's necessary properties for every AGI Safety plan
jacquesthibs (jacques-thibodeau) · 2024-05-01T17:15:41.233Z · answers+comments (12)
[link] KAN: Kolmogorov-Arnold Networks
Gunnar_Zarncke · 2024-05-01T16:50:58.124Z · comments (14)
Manifund Q1 Retro: Learnings from impact certs
Austin Chen (austin-chen) · 2024-05-01T16:48:33.140Z · comments (1)
ACX Covid Origins Post convinced readers
ErnestScribbler · 2024-05-01T13:06:20.818Z · comments (7)
LessWrong Community Weekend 2024, open for applications
UnplannedCauliflower · 2024-05-01T10:18:21.992Z · comments (1)
Take SCIFs, it’s dangerous to go alone
latterframe · 2024-05-01T08:02:38.067Z · comments (1)
AXRP Episode 30 - AI Security with Jeffrey Ladish
DanielFilan · 2024-05-01T02:50:04.621Z · comments (0)
Neuro/BCI/WBE for Safe AI Workshop
Allison Duettmann (allison-duettmann) · 2024-05-01T00:46:23.910Z · comments (0)
AGI: Cryptography, Security & Multipolar Scenarios Workshop
Allison Duettmann (allison-duettmann) · 2024-05-01T00:42:08.735Z · comments (1)
The formal goal is a pointer
Pi Rogers (pi-rogers) · 2024-05-01T00:27:38.856Z · comments (10)
Arch-anarchy: Theory and practice
Peter lawless · 2024-04-30T23:20:23.840Z · comments (0)
[link] "Open Source AI" is a lie, but it doesn't have to be
jacobhaimes · 2024-04-30T23:10:11.963Z · comments (1)
Questions for labs
Zach Stein-Perlman · 2024-04-30T22:15:55.362Z · comments (10)
Reality comprehensibility: are there illogical things in reality?
DDthinker · 2024-04-30T21:30:32.932Z · comments (0)
Mechanistically Eliciting Latent Behaviors in Language Models
Andrew Mack (andrew-mack) · 2024-04-30T18:51:13.493Z · comments (36)
[question] What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?
Jordan Arel · 2024-04-30T18:41:53.017Z · answers+comments (2)
Transcoders enable fine-grained interpretable circuit analysis for language models
Jacob Dunefsky (jacob-dunefsky) · 2024-04-30T17:58:09.982Z · comments (12)
[link] Announcing the 2024 Roots of Progress Blog-Building Intensive
jasoncrawford · 2024-04-30T17:37:28.457Z · comments (0)
The Intentional Stance, LLMs Edition
Eleni Angelou (ea-1) · 2024-04-30T17:12:29.005Z · comments (2)
[link] Introducing AI Lab Watch
Zach Stein-Perlman · 2024-04-30T17:00:12.652Z · comments (16)