LessWrong 2.0 Reader


[link] Succession
Richard_Ngo (ricraz) · 2023-12-20T19:25:03.185Z · comments (48)
[link] Metaculus Introduces Multiple Choice Questions
ChristianWilliams · 2023-12-20T19:00:05.156Z · comments (0)
Brighter Than Today Versions
jefftk (jkaufman) · 2023-12-20T18:20:04.785Z · comments (2)
Gaia Network: a practical, incremental pathway to Open Agency Architecture
Roman Leventov · 2023-12-20T17:11:43.843Z · comments (8)
On the future of language models
owencb · 2023-12-20T16:58:28.433Z · comments (17)
[Valence series] Appendix A: Hedonic tone / (dis)pleasure / (dis)liking
Steven Byrnes (steve2152) · 2023-12-20T15:54:17.131Z · comments (0)
Matrix completion prize results
paulfchristiano · 2023-12-20T15:40:04.281Z · comments (0)
[question] What's the minimal additive constant for Kolmogorov Complexity that a programming language can achieve?
Noosphere89 (sharmake-farah) · 2023-12-20T15:36:50.968Z · answers+comments (15)
[link] Legalize butanol?
bhauth · 2023-12-20T14:24:33.849Z · comments (20)
A short dialogue on comparability of values
cousin_it · 2023-12-20T14:08:29.650Z · comments (7)
Inside View, Outside View... And Opposing View
chaosmage · 2023-12-20T12:35:48.509Z · comments (1)
Heuristics for preventing major life mistakes
SK2 (lunchbox) · 2023-12-20T08:01:09.340Z · comments (2)
[link] What should be reified?
herschel (hrs) · 2023-12-20T04:52:53.826Z · comments (2)
[link] (In)appropriate (De)reification
herschel (hrs) · 2023-12-20T04:51:56.269Z · comments (1)
Escaping Skeuomorphism
Stuart Johnson (stuart-johnson) · 2023-12-20T03:51:00.489Z · comments (0)
Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning
So8res · 2023-12-19T23:39:59.689Z · comments (30)
[question] What are the best Siderea posts?
mike_hawke · 2023-12-19T23:07:59.027Z · answers+comments (2)
Meaning & Agency
abramdemski · 2023-12-19T22:27:32.123Z · comments (17)
[link] s/acc: Safe Accelerationism Manifesto
lorepieri (lorenzo-rex) · 2023-12-19T22:19:59.551Z · comments (5)
Don't Share Information Exfohazardous on Others' AI-Risk Models
Thane Ruthenis · 2023-12-19T20:09:06.244Z · comments (11)
[link] Paper: Tell, Don't Show: Declarative facts influence how LLMs generalize
Owain_Evans · 2023-12-19T19:14:26.423Z · comments (4)
[link] Interview: Applications w/ Alice Rigg
jacobhaimes · 2023-12-19T19:03:02.824Z · comments (0)
[link] How does a toy 2 digit subtraction transformer predict the sign of the output?
Evan Anders (evan-anders) · 2023-12-19T18:56:22.487Z · comments (0)
[link] Incremental AI Risks from Proxy-Simulations
kmenou · 2023-12-19T18:56:12.300Z · comments (0)
A proposition for the modification of our epistemology
JacobBowden · 2023-12-19T18:55:20.252Z · comments (2)
Goal-Completeness is like Turing-Completeness for AGI
Liron · 2023-12-19T18:12:29.947Z · comments (26)
SociaLLM: proposal for a language model design for personalised apps, social science, and AI safety research
Roman Leventov · 2023-12-19T16:49:51.966Z · comments (5)
Chording "The Next Right Thing"
jefftk (jkaufman) · 2023-12-19T15:40:07.029Z · comments (0)
Monthly Roundup #13: December 2023
Zvi · 2023-12-19T15:10:08.293Z · comments (5)
Effective Aspersions: How the Nonlinear Investigation Went Wrong
TracingWoodgrains (tracingwoodgrains) · 2023-12-19T12:00:23.529Z · comments (170)
[link] A Universal Emergent Decomposition of Retrieval Tasks in Language Models
Alexandre Variengien (alexandre-variengien) · 2023-12-19T11:52:27.354Z · comments (3)
Assessment of AI safety agendas: think about the downside risk
Roman Leventov · 2023-12-19T09:00:48.278Z · comments (1)
A Socratic Dialogue about Socratic Dialogues
Yoav Ravid · 2023-12-19T07:50:05.935Z · comments (0)
Constellations are Younger than Continents
Jeffrey Heninger (jeffrey-heninger) · 2023-12-19T06:12:40.667Z · comments (22)
The Dark Arts
lsusr · 2023-12-19T04:41:13.356Z · comments (49)
[link] When scientists consider whether their research will end the world
Harlan · 2023-12-19T03:47:06.645Z · comments (4)
[link] Is the far future inevitably zero sum?
Srdjan Miletic (srdjan-miletic) · 2023-12-19T01:45:44.626Z · comments (2)
The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda
Cameron Berg (cameron-berg) · 2023-12-18T20:35:01.569Z · comments (20)
The Shortest Path Between Scylla and Charybdis
Thane Ruthenis · 2023-12-18T20:08:34.995Z · comments (8)
[link] OpenAI: Preparedness framework
Zach Stein-Perlman · 2023-12-18T18:30:10.153Z · comments (23)
[Valence series] 5. “Valence Disorders” in Mental Health & Personality
Steven Byrnes (steve2152) · 2023-12-18T15:26:29.970Z · comments (7)
Discussion: Challenges with Unsupervised LLM Knowledge Discovery
Seb Farquhar · 2023-12-18T11:58:39.379Z · comments (21)
Interpreting the Learning of Deceit
RogerDearnaley (roger-d-1) · 2023-12-18T08:12:39.682Z · comments (9)
[link] Talk: "AI Would Be A Lot Less Alarming If We Understood Agents"
johnswentworth · 2023-12-17T23:46:32.814Z · comments (3)
[link] ∀: a story
Richard_Ngo (ricraz) · 2023-12-17T22:42:32.857Z · comments (1)
Reviving a 2015 MacBook
jefftk (jkaufman) · 2023-12-17T21:00:08.729Z · comments (0)
A Common-Sense Case For Mutually-Misaligned AGIs Allying Against Humans
Thane Ruthenis · 2023-12-17T20:28:57.854Z · comments (7)
[link] OpenAI, DeepMind, Anthropic, etc. should shut down.
Tamsin Leake (carado-1) · 2023-12-17T20:01:22.332Z · comments (48)
The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument
Štěpán Los (stepan-los) · 2023-12-17T19:11:31.953Z · comments (9)
What makes teaching math special
Viliam · 2023-12-17T14:15:01.136Z · comments (27)