LessWrong 2.0 Reader

Survey - Psychological Impact of Long-Term AI Engagement
Manuela García (manuela-garcia) · 2024-09-17T17:31:38.383Z · comments (1)
Whimsical Thoughts on an AI Notepad: Exploring Non-Invasive Neural Integration via Viral and Stem Cell Pathways
Pug stanky (scott-morgan) · 2024-10-06T16:37:12.912Z · comments (2)
Electoral Systems
RedFishBlueFish (RedStateBlueState) · 2024-10-20T03:25:20.677Z · comments (0)
AGI & Consciousness - Joscha Bach
Rahul Chand (rahul-chand) · 2024-10-08T22:51:50.841Z · comments (0)
A Psychoanalytic Explanation of Sam Altman's Irrational Actions
Gabe · 2024-09-29T18:58:13.511Z · comments (3)
vgillioz's Shortform
vgillioz · 2024-10-09T19:31:21.441Z · comments (2)
Superposition through Active Learning Lens
akankshanc · 2024-09-17T17:32:56.583Z · comments (0)
Understanding incomparability versus incommensurability in relation to RLHF
artemiocobb · 2024-11-02T22:57:02.849Z · comments (1)
Scaling prediction markets with meta-markets
Dentosal (dentosal) · 2024-10-10T21:17:26.934Z · comments (0)
Longevity, AI, and Cognitive Research Hackathon @ MIT
ekkolápto (ekkolapto) · 2024-10-22T06:19:28.008Z · comments (0)
[link] A Heuristic Proof of Practical Aligned Superintelligence
Roko · 2024-10-11T05:05:58.262Z · comments (6)
Composition Circuits in Vision Transformers (Hypothesis)
phenomanon (ekg) · 2024-11-01T22:16:11.191Z · comments (0)
[question] A Coordination Cookbook?
azergante · 2024-11-10T23:20:34.843Z · answers+comments (0)
"Deep Galactic Chillout" a space to relax during SF tech week & meet wholesome, fun people
Jared Phillip Mantell (jared-phillip-mantell) · 2024-10-11T19:50:49.299Z · comments (0)
[question] What does it mean for an event or observation to have probability 0 or 1 in Bayesian terms?
Noosphere89 (sharmake-farah) · 2024-09-17T17:28:52.731Z · answers+comments (22)
[link] The Explore vs. Exploit Dilemma
nathanjzhao · 2024-10-14T06:20:25.526Z · comments (0)
How Should We Measure Intelligence Models: Why Use Frequency of Elemental Information Operations
hwj20 · 2024-10-24T16:54:19.096Z · comments (0)
Introduction to Choice set Misspecification in Reward Inference
Rahul Chand (rahul-chand) · 2024-10-29T22:57:34.310Z · comments (0)
Modeling AI-driven occupational change over the next 10 years and beyond
2120eth · 2024-11-12T04:58:26.741Z · comments (0)
Breakthroughs, Neurodivergence, and Working Outside the System
j_passeri · 2024-10-14T16:54:16.617Z · comments (3)
Extending the Off-Switch Game: Toward a Robust Framework for AI Corrigibility
OwenChen · 2024-09-25T20:38:22.928Z · comments (0)
AIS Hungary Operations Officer role, Deadline: 2024 October 6th
gergogaspar (gergo-gaspar) · 2024-09-25T13:54:25.077Z · comments (0)
Antonym Heads Predict Semantic Opposites in Language Models
Jake Ward (jake-ward) · 2024-11-15T15:32:14.102Z · comments (0)
Pausing for what?
MountainPath · 2024-10-21T20:12:29.732Z · comments (1)
[question] Reinforcement Learning: Essential Step Towards AGI or Irrelevant?
Double · 2024-10-17T03:37:04.635Z · answers+comments (0)
understanding bureaucracy
dhruvmethi · 2024-10-17T17:55:27.336Z · comments (2)
Factoring P(doom) into a bayesian network
Joseph Gardi (joseph-gardi-1) · 2024-10-17T17:55:37.455Z · comments (0)
Sampling Effects on Strategic Behavior in Supervised Learning Models
Phil Bland · 2024-09-24T07:44:41.677Z · comments (0)
Emergent Authorship: Creativity à la Communing
gswonk · 2024-09-14T19:02:07.635Z · comments (0)
Monosemanticity & Quantization
Rahul Chand (rahul-chand) · 2024-10-22T22:57:53.529Z · comments (0)
[link] Intelligence explosion: a rational assessment.
p4rziv4l · 2024-09-30T21:17:35.675Z · comments (0)
AGI Farm
Rahul Chand (rahul-chand) · 2024-10-01T04:29:58.606Z · comments (0)
Not Just For Therapy Chatbots: The Case For Compassion In AI Moral Alignment Research
kenneth_diao · 2024-09-30T18:37:20.409Z · comments (0)
Exploring Decomposability of SAE Features
Vikram_N (viknat) · 2024-09-30T18:28:09.348Z · comments (0)
[link] A Nonconstructive Existence Proof of Aligned Superintelligence
Roko · 2024-09-12T03:20:09.531Z · comments (78)
AGI Fermi Paradox
jrincayc (nerd_gatherer) · 2024-10-27T20:14:54.490Z · comments (2)
Redefining Tolerance: Beyond Popper's Paradox
mindprison · 2024-11-03T22:23:44.688Z · comments (0)
AI alignment via civilizational cognitive updates
AtillaYasar (atillayasar) · 2024-11-10T09:33:35.023Z · comments (10)
[link] AI: The Philosopher's Stone of the 21st Century
HNX · 2024-10-11T16:55:57.021Z · comments (2)
A Rational Company - Seeking Advisors
AlignmentOptimizer · 2024-09-21T19:51:22.476Z · comments (1)
Amplify is hiring! Work with us to support field-building initiatives through digital marketing
gergogaspar (gergo-gaspar) · 2024-09-10T08:56:42.822Z · comments (1)
I just can't agree with AI safety. Why am I wrong?
Ya Polkovnik (yurii-burak-1) · 2024-09-13T17:48:22.245Z · comments (5)
'Meta', 'mesa', and mountains
Lorec · 2024-10-31T17:25:53.635Z · comments (0)
'Estimat - Values and Data’s For Starters'- A Necessary Proposal?
Gabriel Brito (gabriel-brito) · 2024-11-14T14:37:57.692Z · comments (0)
[question] I = W/T?
HNX · 2024-10-12T15:15:36.806Z · answers+comments (3)
A Different Angle on Thinking Balance
Gabriel Brito (gabriel-brito) · 2024-11-06T23:46:49.668Z · comments (4)
Exploring the Platonic Representation Hypothesis Beyond In-Distribution Data
rokosbasilisk · 2024-10-20T08:40:04.404Z · comments (2)
A metaphor: what "green lights" for AGI would look like
Lorec · 2024-10-23T23:24:43.812Z · comments (6)
Moscow – ACX Meetups Everywhere Fall 2024
red-hara · 2024-09-20T23:03:16.028Z · comments (0)
Retrieval Augmented Genesis II — Holy Texts Semantics Analysis
João Ribeiro Medeiros (joao-ribeiro-medeiros) · 2024-10-26T17:00:56.328Z · comments (0)