LessWrong 2.0 Reader



The predictive power of dissipative adaptation
dr_s · 2023-12-17T14:01:31.568Z · comments (12)
[link] Linkpost: Francesca v Harvard
Linch · 2023-12-17T06:18:05.883Z · comments (5)
[link] Lessons from massaging myself, others, dogs, and cats
Chipmonk · 2023-12-17T04:28:40.080Z · comments (27)
The Serendipity of Density
jefftk (jkaufman) · 2023-12-17T03:50:04.824Z · comments (4)
Bounty: Diverse hard tasks for LLM agents
Beth Barnes (beth-barnes) · 2023-12-17T01:04:05.460Z · comments (31)
2022 (and All Time) Posts by Pingback Count
Raemon · 2023-12-16T21:17:00.572Z · comments (14)
"Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity
Thane Ruthenis · 2023-12-16T20:08:39.375Z · comments (34)
[link] Alignment work in anomalous worlds
Tamsin Leake (carado-1) · 2023-12-16T19:34:26.202Z · comments (4)
A visual analogy for text generation by LLMs?
Bill Benzon (bill-benzon) · 2023-12-16T17:58:57.121Z · comments (0)
Upgrading the AI Safety Community
trevor (TrevorWiesinger) · 2023-12-16T15:34:26.600Z · comments (9)
[link] cold aluminum for medicine
bhauth · 2023-12-16T14:38:03.260Z · comments (4)
Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem
Ansh Radhakrishnan (anshuman-radhakrishnan-1) · 2023-12-16T05:49:23.672Z · comments (3)
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
leogao · 2023-12-16T05:39:10.558Z · comments (5)
[link] Pope Francis shares thoughts on responsible AI development
corruptedCatapillar · 2023-12-16T03:49:18.516Z · comments (4)
Current AIs Provide Nearly No Data Relevant to AGI Alignment
Thane Ruthenis · 2023-12-15T20:16:09.723Z · comments (152)
Agglomeration of 'Ought'
DavidAndresBloom (davidandresbloom) · 2023-12-15T19:07:27.242Z · comments (1)
[link] Predicting the future with the power of the Internet (and pissing off Rob Miles)
Writer · 2023-12-15T17:37:29.695Z · comments (8)
[link] Progress links digest, 2023-12-15: Vitalik on d/acc, $100M+ in prizes, and more
jasoncrawford · 2023-12-15T15:52:34.588Z · comments (0)
"AI Alignment" is a Dangerously Overloaded Term
Roko · 2023-12-15T14:34:29.850Z · comments (98)
[Valence series] 4. Valence & Social Status
Steven Byrnes (steve2152) · 2023-12-15T14:24:41.040Z · comments (19)
[link] Contra Scott on Abolishing the FDA
Maxwell Tabarrok (maxwell-tabarrok) · 2023-12-15T14:00:17.247Z · comments (3)
[link] [Paper] Trajectories through semantic spaces in schizophrenia and the relationship to ripple bursts
bvbvbvbvbvbvbvbvbvbvbv · 2023-12-15T13:37:55.880Z · comments (0)
Takeaways from a Mechanistic Interpretability project on “Forbidden Facts”
Tony Wang (tw) · 2023-12-15T11:05:23.256Z · comments (8)
[link] Refinement of Active Inference agency ontology
Roman Leventov · 2023-12-15T09:31:21.514Z · comments (0)
EU policymakers reach an agreement on the AI Act
tlevin (trevor) · 2023-12-15T06:02:44.668Z · comments (7)
Where Does Adversarial Pressure Come From?
quetzal_rainbow · 2023-12-14T22:31:25.384Z · comments (1)
Epoch wise critical periods, and singular learning theory
Garrett Baker (D0TheMath) · 2023-12-14T20:55:32.508Z · comments (1)
[link] OpenAI Superalignment: Weak-to-strong generalization
Dalmert · 2023-12-14T19:47:24.347Z · comments (3)
Applications for EA Global are still open!
Eli_Nathan · 2023-12-14T19:10:55.595Z · comments (0)
[link] Personal Development System: Winning Repeatedly and Growing Effectively With The BIG4
Paul Rohde (paul-rohde-1) · 2023-12-14T18:49:06.199Z · comments (0)
[link] Introducing The ‘From Big Ideas To Real-World Results’: A Series for Effective Personal Development
Paul Rohde (paul-rohde-1) · 2023-12-14T18:49:06.186Z · comments (1)
[link] Talking With People Who Speak to Congressional Staffers about AI risk
Eneasz · 2023-12-14T17:55:50.606Z · comments (0)
[link] Bayesian Injustice
Kevin Dorst · 2023-12-14T15:44:08.664Z · comments (10)
AI #42: The Wrong Answer
Zvi · 2023-12-14T14:50:05.086Z · comments (6)
Some for-profit AI alignment org ideas
Eric Ho (eh42) · 2023-12-14T14:23:20.654Z · comments (19)
Mapping the semantic void: Strange goings-on in GPT embedding spaces
mwatkins · 2023-12-14T13:10:22.691Z · comments (31)
Categorical Organization in Memory: ChatGPT Organizes the 665 Topic Tags from My New Savanna Blog
Bill Benzon (bill-benzon) · 2023-12-14T13:02:33.073Z · comments (6)
Moral Mountains
Adam Zerner (adamzerner) · 2023-12-14T10:40:06.179Z · comments (10)
Update on Chinese IQ-related gene panels
Lao Mein (derpherpize) · 2023-12-14T10:12:21.212Z · comments (7)
Red Line Ashmont Train is Now Approaching
jefftk (jkaufman) · 2023-12-14T02:50:11.382Z · comments (2)
[link] Various AI doom pathways (and how likely they are)
Logan Zoellner (logan-zoellner) · 2023-12-14T00:54:09.424Z · comments (1)
[link] Are There Examples of Overhang for Other Technologies?
Jeffrey Heninger (jeffrey-heninger) · 2023-12-13T21:48:08.954Z · comments (50)
Is being sexy for your homies?
Valentine · 2023-12-13T20:37:02.043Z · comments (92)
[link] How bad is chlorinated water?
bhauth · 2023-12-13T18:00:12.640Z · comments (18)
[question] Suggestions for net positive LLM research
Cole Wyeth (Amyr) · 2023-12-13T17:29:11.666Z · answers+comments (6)
AI Control: Improving Safety Despite Intentional Subversion
Buck · 2023-12-13T15:51:35.982Z · comments (7)
The Busy Bee Brain
Bill Benzon (bill-benzon) · 2023-12-13T13:10:24.343Z · comments (0)
The Best of Don’t Worry About the Vase
Zvi · 2023-12-13T12:50:02.510Z · comments (4)
[question] Has anyone here investigated the occult community? It is curious to me that many magicians consider themselves empiricists.
SpectrumDT · 2023-12-13T11:09:14.375Z · answers+comments (10)
AI Views Snapshots
Rob Bensinger (RobbBB) · 2023-12-13T00:45:50.016Z · comments (61)