LessWrong 2.0 Reader

next page (older posts) →

LessWrong's (first) album: I Have Been A Good Bing
habryka (habryka4) · 2024-04-01T07:33:45.242Z · comments (174)
Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
GeneSmith · 2023-12-12T18:14:51.438Z · comments (168)
I would have shit in that alley, too
Declan Molony (declan-molony) · 2024-06-18T04:41:06.545Z · comments (134)
Transformers Represent Belief State Geometry in their Residual Stream
Adam Shai (adam-shai) · 2024-04-16T21:16:11.377Z · comments (100)
Failures in Kindness
silentbob · 2024-03-26T21:30:11.052Z · comments (60)
Reliable Sources: The Story of David Gerard
TracingWoodgrains (tracingwoodgrains) · 2024-07-10T19:50:21.191Z · comments (53)
The Best Tacit Knowledge Videos on Every Subject
Parker Conley (parker-conley) · 2024-03-31T17:14:31.199Z · comments (143)
There is way too much serendipity
Malmesbury (Elmer of Malmesbury) · 2024-01-19T19:37:57.068Z · comments (56)
How I got 4.2M YouTube views without making a single video
Closed Limelike Curves · 2024-09-03T03:52:33.025Z · comments (36)
[link] My hour of memoryless lucidity
Eric Neyman (UnexpectedValues) · 2024-05-04T01:40:56.717Z · comments (35)
Notifications Received in 30 Minutes of Class
tanagrabeast · 2024-05-26T17:02:20.989Z · comments (16)
[link] Thoughts on seed oil
dynomight · 2024-04-20T12:29:14.212Z · comments (128)
Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety)
Andrew_Critch · 2024-06-14T00:16:47.850Z · comments (34)
[link] [April Fools' Day] Introducing Open Asteroid Impact
Linch · 2024-04-01T08:14:15.800Z · comments (29)
Shallow review of live agendas in alignment & safety
technicalities · 2023-11-27T11:10:27.464Z · comments (69)
MIRI 2024 Communications Strategy
Gretta Duleba (gretta-duleba) · 2024-05-29T19:33:39.169Z · comments (202)
Social Dark Matter
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-11-16T20:00:00.000Z · comments (115)
[link] Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
evhub · 2024-01-12T19:51:01.021Z · comments (95)
You don't know how bad most things are nor precisely how they're bad.
Solenoid_Entity · 2024-08-04T14:12:54.136Z · comments (48)
Speaking to Congressional staffers about AI risk
Akash (akash-wasil) · 2023-12-04T23:08:52.055Z · comments (23)
Gentleness and the artificial Other
Joe Carlsmith (joekc) · 2024-01-02T18:21:34.746Z · comments (33)
Non-Disparagement Canaries for OpenAI
aysja · 2024-05-30T19:20:13.022Z · comments (51)
Would catching your AIs trying to escape convince AI developers to slow down or undeploy?
Buck · 2024-08-26T16:46:18.872Z · comments (69)
OpenAI: The Battle of the Board
Zvi · 2023-11-22T17:30:04.574Z · comments (83)
AI Timelines
habryka (habryka4) · 2023-11-10T05:28:24.841Z · comments (94)
[link] Scale Was All We Needed, At First
Gabe M (gabe-mukobi) · 2024-02-14T01:49:16.184Z · comments (32)
My AI Model Delta Compared To Yudkowsky
johnswentworth · 2024-06-10T16:12:53.179Z · comments (100)
Universal Basic Income and Poverty
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2024-07-26T07:23:50.151Z · comments (128)
The 6D effect: When companies take risks, one email can be very powerful.
scasper · 2023-11-04T20:08:39.775Z · comments (42)
80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly)
Raemon · 2024-07-03T20:34:50.741Z · comments (71)
OpenAI: Facts from a Weekend
Zvi · 2023-11-20T15:30:06.732Z · comments (165)
Express interest in an "FHI of the West"
habryka (habryka4) · 2024-04-18T03:32:58.592Z · comments (41)
Leaving MIRI, Seeking Funding
abramdemski · 2024-08-08T18:32:20.387Z · comments (19)
[link] "No-one in my org puts money in their pension"
Tobes (tobias-jolly) · 2024-02-16T18:33:28.996Z · comments (16)
On green
Joe Carlsmith (joekc) · 2024-03-21T17:38:56.295Z · comments (35)
Getting 50% (SoTA) on ARC-AGI with GPT-4o
ryan_greenblatt · 2024-06-17T18:44:01.039Z · comments (49)
Constellations are Younger than Continents
Jeffrey Heninger (jeffrey-heninger) · 2023-12-19T06:12:40.667Z · comments (22)
The case for ensuring that powerful AIs are controlled
ryan_greenblatt · 2024-01-24T16:11:51.354Z · comments (66)
The Great Data Integration Schlep
sarahconstantin · 2024-09-13T15:40:02.298Z · comments (16)
[link] My PhD thesis: Algorithmic Bayesian Epistemology
Eric Neyman (UnexpectedValues) · 2024-03-16T22:56:59.283Z · comments (14)
[link] Paul Christiano named as US AI Safety Institute Head of AI Safety
Joel Burget (joel-burget) · 2024-04-16T16:22:06.937Z · comments (58)
The hostile telepaths problem
Valentine · 2024-10-27T15:26:53.610Z · comments (48)
The 101 Space You Will Always Have With You
Screwtape · 2023-11-29T04:56:40.240Z · comments (20)
Raising children on the eve of AI
juliawise · 2024-02-15T21:28:07.737Z · comments (47)
Overview of strong human intelligence amplification methods
TsviBT · 2024-10-08T08:37:18.896Z · comments (141)
My Clients, The Liars
ymeskhout · 2024-03-05T21:06:36.669Z · comments (85)
Ilya Sutskever and Jan Leike resign from OpenAI [updated]
Zach Stein-Perlman · 2024-05-15T00:45:02.436Z · comments (95)
The Best Lay Argument is not a Simple English Yud Essay
J Bostock (Jemist) · 2024-09-10T17:34:28.422Z · comments (15)
Principles for the AGI Race
William_S · 2024-08-30T14:29:41.074Z · comments (13)
Truthseeking is the ground in which other principles grow
Elizabeth (pktechgirl) · 2024-05-27T01:09:20.796Z · comments (16)