LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

[link] Some Preliminary Notes on the Promise of a Wisdom Explosion
Chris_Leong · 2024-10-31T09:21:11.623Z · comments (0)
[question] Have we seen any "ReLU instead of sigmoid-type improvements" recently
KvmanThinking (avery-liu) · 2024-11-23T03:51:52.984Z · answers+comments (4)
MIT FutureTech are hiring a Product and Data Visualization Designer
peterslattery · 2024-11-13T14:48:06.167Z · comments (0)
A better “Statement on AI Risk?”
Knight Lee (Max Lee) · 2024-11-25T04:50:29.399Z · comments (4)
Root node of my posts
AtillaYasar (atillayasar) · 2024-11-19T20:09:02.973Z · comments (0)
Breaking beliefs about saving the world
Oxidize · 2024-11-15T00:46:03.693Z · comments (3)
Optimizing Problem-Solving Strategies Through Prediction Markets
patrik-cihal · 2024-11-22T19:58:32.930Z · comments (0)
Workshop Report: Why current benchmarks approaches are not sufficient for safety?
Tom DAVID (tom-david) · 2024-11-26T17:20:47.453Z · comments (0)
Jakarta ACX December 2024 Meetup
Aud (aud) · 2024-11-19T15:01:31.101Z · comments (0)
'Meta', 'mesa', and mountains
Lorec · 2024-10-31T17:25:53.635Z · comments (0)
Should you increase AI alignment funding, or increase AI regulation?
Knight Lee (Max Lee) · 2024-11-26T09:17:01.809Z · comments (0)
[question] A Coordination Cookbook?
azergante · 2024-11-10T23:20:34.843Z · answers+comments (0)
Zaragoza ACX/LW Meetup
Fernand0 · 2024-11-25T06:56:12.321Z · comments (0)
Composition Circuits in Vision Transformers (Hypothesis)
phenomanon (ekg) · 2024-11-01T22:16:11.191Z · comments (0)
Introduction to Choice set Misspecification in Reward Inference
Rahul Chand (rahul-chand) · 2024-10-29T22:57:34.310Z · comments (0)
Understanding incomparability versus incommensurability in relation to RLHF
artemiocobb · 2024-11-02T22:57:02.849Z · comments (1)
AI alignment via civilizational cognitive updates
AtillaYasar (atillayasar) · 2024-11-10T09:33:35.023Z · comments (10)
Launching a 5-day Intro to Transformative AI course
bluedotimpact · 2024-11-22T17:45:05.304Z · comments (0)
[question] Will Orion/Gemini 2/Llama-4 outperform o1
LuigiPagani (luigipagani) · 2024-11-18T21:15:55.953Z · answers+comments (3)
Gothenburg LW/ACX meetup
Stefan (stefan-1) · 2024-11-24T19:40:52.215Z · comments (0)
Truth Terminal: A reconstruction of events
crvr.fr (crdevio) · 2024-11-17T23:51:21.279Z · comments (1)
Modeling AI-driven occupational change over the next 10 years and beyond
2120eth · 2024-11-12T04:58:26.741Z · comments (0)
A Different Angle on Thinking Balance
Gabriel Brito (gabriel-brito) · 2024-11-06T23:46:49.668Z · comments (4)
AGI Fermi Paradox
jrincayc (nerd_gatherer) · 2024-10-27T20:14:54.490Z · comments (2)
[question] What changes should happen in the HHS?
ChristianKl · 2024-11-20T11:04:01.778Z · answers+comments (12)
Redefining Tolerance: Beyond Popper's Paradox
mindprison · 2024-11-03T22:23:44.688Z · comments (0)
'Values and Data’s For Starters'- A Necessary Proposal?
Gabriel Brito (gabriel-brito) · 2024-11-14T14:37:57.692Z · comments (0)
SB-1047, ChatGPT and AI's Game of Thrones
Rahul Chand (rahul-chand) · 2024-11-24T02:29:34.907Z · comments (1)
GPT-8 may not be ASI
rvzlxax409 · 2024-11-04T19:31:39.758Z · comments (1)
The Manufactured Crisis: How Society Is Willingly Tying Its Own Noose
PROPHET (prophet) · 2024-11-22T17:45:27.941Z · comments (2)
Going Beyond "immaturity"
moisentinel · 2024-11-05T20:51:48.162Z · comments (2)
Testing "True" Language Understanding in LLMs: A Simple Proposal
MtryaSam · 2024-11-02T19:12:34.691Z · comments (0)
AI Specialized in ML Training Could Create ASI: AGI Is Unnecessary
satopi · 2024-11-25T02:31:29.320Z · comments (1)
[link] The Case Against Moral Realism
Zero Contradictions · 2024-11-07T10:14:26.269Z · comments (10)
How I Learned That You Should Push Children Into Ponds
omnizoid · 2024-11-11T14:20:23.080Z · comments (3)
Urbit New England Meetup
Conquerer Cohen (conquerer-cohen) · 2024-11-10T17:56:04.083Z · comments (0)
Don't want Goodhart? — Specify the damn variables
Ян Лютнев (yan-lyutnev) · 2024-11-21T22:45:26.538Z · comments (0)
[question] Green thumb
Pug stanky (scott-morgan) · 2024-11-20T21:52:10.348Z · answers+comments (1)
How to Live Well: My Philosophy of Life
Philosofer123 · 2024-11-12T04:05:03.526Z · comments (2)
The Cartesian Crisis
mindprison · 2024-11-01T23:02:45.775Z · comments (2)
Rebutting Every Objection To Giving To The Shrimp Welfare Project
omnizoid · 2024-11-22T16:12:48.868Z · comments (0)
The Case For Giving To The Shrimp Welfare Project
omnizoid · 2024-11-15T16:03:57.712Z · comments (13)
What AI safety researchers can learn from Mahatma Gandhi
Lysandre Terrisse · 2024-11-08T19:49:14.397Z · comments (0)
The Humanitarian Economy
kylefurlong · 2024-11-12T18:25:09.557Z · comments (14)
How Universal Basic Income Could Help Us Build a Brighter Future
Yanling Guo (yanling-guo) · 2024-11-23T22:03:59.973Z · comments (9)
Reality as Category-Theoretic State Machines: A Mathematical Framework
Wenitte Apiou (wenitte-apiou) · 2024-11-02T21:04:22.743Z · comments (0)
AI timelines don't account for base rate of tech progress
rvzlxax409 · 2024-11-04T19:31:39.732Z · comments (2)
The Catastrophe of Shiny Objects
mindprison · 2024-11-18T00:24:28.876Z · comments (0)
Alignment is not intelligent
Donatas Lučiūnas (donatas-luciunas) · 2024-11-25T06:59:24.048Z · comments (5)
We can survive
Oxidize · 2024-11-04T19:33:23.816Z · comments (7)