LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

Planes are still decades away from displacing most bird jobs
guzey · 2022-11-25T16:49:32.344Z · comments (13)
[link] Take part in our giant study of cognitive abilities and get a customized report of your strengths and weaknesses!
spencerg · 2022-11-25T16:28:52.538Z · comments (1)
Guardian AI (Misaligned systems are all around us.)
Jessica Rumbelow (jessica-cooper) · 2022-11-25T15:55:43.939Z · comments (6)
Intuitions by ML researchers may get progressively worse concerning likely candidates for transformative AI
Viktor Rehnberg (viktor.rehnberg) · 2022-11-25T15:49:29.426Z · comments (0)
[link] Refining the Sharp Left Turn threat model, part 2: applying alignment techniques
Vika · 2022-11-25T14:36:08.948Z · comments (9)
[question] Who holds all the USDT?
ChristianKl · 2022-11-25T11:58:30.163Z · answers+comments (6)
Fair Collective Efficient Altruism
Jobst Heitzig · 2022-11-25T09:38:18.586Z · comments (1)
[question] If humanity one day discovers that it is a form of disease that threatens to destroy the universe, should it allow itself to be shut down?
shminux · 2022-11-25T08:27:14.740Z · answers+comments (12)
[link] Could a single alien message destroy us?
Writer · 2022-11-25T07:32:24.889Z · comments (23)
How do I start a programming career in the West?
Lao Mein (derpherpize) · 2022-11-25T06:37:12.237Z · comments (7)
The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building
peterslattery · 2022-11-25T03:45:30.203Z · comments (0)
Less Successful Cider Adventures
jefftk (jkaufman) · 2022-11-25T01:50:03.366Z · comments (1)
Gliders in Language Models
Alexandre Variengien (alexandre-variengien) · 2022-11-25T00:38:11.565Z · comments (11)
On Kelly and altruism
philh · 2022-11-24T23:40:04.095Z · comments (6)
Open technical problem: A Quinean proof of Löb's theorem, for an easier cartoon guide
Andrew_Critch · 2022-11-24T21:16:43.879Z · comments (35)
[question] Historical examples of people gaining unusual cognitive abilities?
NicholasKross · 2022-11-24T19:01:14.105Z · answers+comments (2)
Corrigibility Via Thought-Process Deference
Thane Ruthenis · 2022-11-24T17:06:39.058Z · comments (5)
Geometric Exploration, Arithmetic Exploitation
Scott Garrabrant · 2022-11-24T15:36:30.334Z · comments (4)
What I Learned Running Refine
adamShimi · 2022-11-24T14:49:59.366Z · comments (5)
Covid 11/24/22: Thanks for Good Health
Zvi · 2022-11-24T13:00:01.750Z · comments (4)
[question] Dumb and ill-posed question: Is conceptual research like this MIRI paper on the shutdown problem/Corrigibility "real"
joraine · 2022-11-24T05:08:01.984Z · answers+comments (11)
Clarifying wireheading terminology
leogao · 2022-11-24T04:53:23.925Z · comments (6)
LW Beta Feature: Side-Comments
jimrandomh · 2022-11-24T01:55:31.578Z · comments (47)
Against "Classic Style"
Cleo Nardo (strawberry calm) · 2022-11-23T22:10:50.422Z · comments (30)
South Bay ACX/LW Meetup
IS (is) · 2022-11-23T22:05:35.278Z · comments (0)
Meme Dialects
jefftk (jkaufman) · 2022-11-23T21:30:05.008Z · comments (1)
[question] When do you visualize (or not) while doing math?
Alex_Altair · 2022-11-23T20:15:20.885Z · answers+comments (9)
When AI solves a game, focus on the game's mechanics, not its theme.
Cleo Nardo (strawberry calm) · 2022-11-23T19:16:07.333Z · comments (7)
The Geometric Expectation
Scott Garrabrant · 2022-11-23T18:05:12.206Z · comments (19)
"Far Coordination"
DragonGod · 2022-11-23T17:14:41.830Z · comments (17)
Conjecture Second Hiring Round
Connor Leahy (NPCollapse) · 2022-11-23T17:11:42.524Z · comments (0)
Conjecture: a retrospective after 8 months of work
Connor Leahy (NPCollapse) · 2022-11-23T17:10:23.510Z · comments (9)
Against a General Factor of Doom
Jeffrey Heninger (jeffrey-heninger) · 2022-11-23T16:50:04.229Z · comments (19)
[link] Injecting some numbers into the AGI debate - by Boaz Barak
Jsevillamol · 2022-11-23T16:10:33.671Z · comments (0)
Notes on an Experiment with Markets
Jeffrey Heninger (jeffrey-heninger) · 2022-11-23T16:10:05.573Z · comments (0)
Announcing AI safety Mentors and Mentees
Marius Hobbhahn (marius-hobbhahn) · 2022-11-23T15:21:12.636Z · comments (7)
Ex nihilo
Hopkins Stanley (Heideana) · 2022-11-23T14:38:29.272Z · comments (0)
Human-level Diplomacy was my fire alarm
Lao Mein (derpherpize) · 2022-11-23T10:05:36.127Z · comments (15)
Sets of objectives for a multi-objective RL agent to optimize
Ben Smith (ben-smith) · 2022-11-23T06:49:45.236Z · comments (0)
Simulators, constraints, and goal agnosticism: porbynotes vol. 1
porby · 2022-11-23T04:22:25.748Z · comments (2)
Rationalist Town Hall: FTX Fallout Edition (RSVP Required)
Ben Pace (Benito) · 2022-11-23T01:38:25.516Z · comments (13)
Feeling Old: Leaving your 20s in the 2020s
squidious · 2022-11-22T22:50:01.195Z · comments (3)
Brute-forcing the universe: a non-standard shot at diamond alignment
Martín Soto (martinsq) · 2022-11-22T22:36:36.599Z · comments (2)
Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility
Akash (akash-wasil) · 2022-11-22T22:19:09.419Z · comments (20)
ACX Zurich November Meetup
MB (mb-1) · 2022-11-22T21:41:46.662Z · comments (0)
Human-level Full-Press Diplomacy (some bare facts).
Cleo Nardo (strawberry calm) · 2022-11-22T20:59:18.155Z · comments (7)
[question] How does late-2022 COVID transmissibility drop over time?
Daniel Dewey (daniel-dewey) · 2022-11-22T19:54:50.422Z · answers+comments (2)
AI will change the world, but won’t take it over by playing “3-dimensional chess”.
boazbarak · 2022-11-22T18:57:29.604Z · comments (98)
[link] Progress links and tweets, 2022-11-22
jasoncrawford · 2022-11-22T17:39:06.498Z · comments (0)
Tyranny of the Epistemic Majority
Scott Garrabrant · 2022-11-22T17:19:34.144Z · comments (13)