LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

Is AI Progress Impossible To Predict?
alyssavance · 2022-05-15T18:30:12.103Z · comments (39)
We Choose To Align AI
johnswentworth · 2022-01-01T20:06:23.307Z · comments (16)
OpenAI: The Battle of the Board
Zvi · 2023-11-22T17:30:04.574Z · comments (82)
What Are You Tracking In Your Head?
johnswentworth · 2022-06-28T19:30:06.164Z · comments (81)
Sazen
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2022-12-21T07:54:51.415Z · comments (83)
My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI
Andrew_Critch · 2023-05-24T00:02:08.836Z · comments (39)
Don't die with dignity; instead play to your outs
Jeffrey Ladish (jeff-ladish) · 2022-04-06T07:53:05.172Z · comments (59)
Notes on Teaching in Prison
jsd · 2023-04-19T01:53:00.427Z · comments (12)
Guide to rationalist interior decorating
mingyuan · 2023-06-19T06:47:13.704Z · comments (45)
Toni Kurz and the Insanity of Climbing Mountains
GeneSmith · 2022-07-03T20:51:58.429Z · comments (67)
The Base Rate Times, news through prediction markets
vandemonian · 2023-06-06T17:42:56.718Z · comments (39)
Humans are very reliable agents
alyssavance · 2022-06-16T22:02:10.892Z · comments (35)
Gentleness and the artificial Other
Joe Carlsmith (joekc) · 2024-01-02T18:21:34.746Z · comments (33)
We don’t trade with ants
KatjaGrace · 2023-01-10T23:50:11.476Z · comments (108)
OpenAI: Facts from a Weekend
Zvi · 2023-11-20T15:30:06.732Z · comments (158)
Accidentally Load Bearing
jefftk (jkaufman) · 2023-07-13T16:10:00.806Z · comments (14)
[link] Scale Was All We Needed, At First
Gabriel Mukobi (gabe-mukobi) · 2024-02-14T01:49:16.184Z · comments (31)
12 interesting things I learned studying the discovery of nature's laws
Ben Pace (Benito) · 2022-02-19T23:39:47.841Z · comments (40)
Express interest in an "FHI of the West"
habryka (habryka4) · 2024-04-18T03:32:58.592Z · comments (39)
The 6D effect: When companies take risks, one email can be very powerful.
scasper · 2023-11-04T20:08:39.775Z · comments (40)
A Brief Introduction to Container Logistics
Vitor · 2021-11-11T15:58:11.510Z · comments (22)
[link] Where do your eyes go?
alkjash · 2021-09-19T22:43:47.491Z · comments (22)
Basics of Rationalist Discourse
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-01-27T02:40:52.739Z · comments (180)
Omicron Variant Post #1: We’re F***ed, It’s Never Over
Zvi · 2021-11-26T19:00:00.988Z · comments (95)
Constellations are Younger than Continents
Jeffrey Heninger (jeffrey-heninger) · 2023-12-19T06:12:40.667Z · comments (22)
"Carefully Bootstrapped Alignment" is organizationally hard
Raemon · 2023-03-17T18:00:09.943Z · comments (22)
Changing the world through slack & hobbies
Steven Byrnes (steve2152) · 2022-07-21T18:11:05.636Z · comments (13)
[link] Thoughts on seed oil
dynomight · 2024-04-20T12:29:14.212Z · comments (79)
On green
Joe Carlsmith (joekc) · 2024-03-21T17:38:56.295Z · comments (33)
Safetywashing
Adam Scholl (adam_scholl) · 2022-07-01T11:56:33.495Z · comments (20)
AI Timelines
habryka (habryka4) · 2023-11-10T05:28:24.841Z · comments (74)
So, geez there's a lot of AI content these days
Raemon · 2022-10-06T21:32:20.833Z · comments (140)
The Plan
johnswentworth · 2021-12-10T23:41:39.417Z · comments (78)
Sexual Abuse attitudes might be infohazardous
Pseudonymous Otter · 2022-07-19T18:06:43.956Z · comments (71)
[link] [SEE NEW EDITS] No, *You* Need to Write Clearer
NicholasKross · 2023-04-29T05:04:01.559Z · comments (64)
AI alignment is distinct from its near-term applications
paulfchristiano · 2022-12-13T07:10:04.407Z · comments (21)
larger language models may disappoint you [or, an eternally unfinished draft]
nostalgebraist · 2021-11-26T23:08:56.221Z · comments (31)
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
AnnaSalamon · 2022-06-09T02:12:35.151Z · comments (62)
Omicron: My Current Model
Zvi · 2021-12-28T17:10:00.629Z · comments (72)
Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)
Chris Scammell (chris-scammell) · 2023-05-10T19:04:21.138Z · comments (53)
Discussion with Nate Soares on a key alignment difficulty
HoldenKarnofsky · 2023-03-13T21:20:02.976Z · comments (38)
Ngo and Yudkowsky on alignment difficulty
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-15T20:31:34.135Z · comments (148)
[link] Paul Christiano named as US AI Safety Institute Head of AI Safety
Joel Burget (joel-burget) · 2024-04-16T16:22:06.937Z · comments (55)
Dear Self; we need to talk about ambition
Elizabeth (pktechgirl) · 2023-08-27T23:10:04.720Z · comments (25)
Visible Thoughts Project and Bounty Announcement
So8res · 2021-11-30T00:19:08.408Z · comments (103)
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-04-08T00:36:47.702Z · comments (39)
On AutoGPT
Zvi · 2023-04-13T12:30:01.059Z · comments (47)
[link] My PhD thesis: Algorithmic Bayesian Epistemology
Eric Neyman (UnexpectedValues) · 2024-03-16T22:56:59.283Z · comments (14)
[link] DeepMind: Generally capable agents emerge from open-ended play
Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-27T14:19:13.782Z · comments (53)
The LessWrong Team is now Lightcone Infrastructure, come work with us!
habryka (habryka4) · 2021-10-01T01:20:33.411Z · comments (71)