LessWrong 2.0 Reader

Basics of Rationalist Discourse
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-01-27T02:40:52.739Z · comments (180)
Constellations are Younger than Continents
Jeffrey Heninger (jeffrey-heninger) · 2023-12-19T06:12:40.667Z · comments (22)
"Carefully Bootstrapped Alignment" is organizationally hard
Raemon · 2023-03-17T18:00:09.943Z · comments (22)
AI Timelines
habryka (habryka4) · 2023-11-10T05:28:24.841Z · comments (74)
[link] [SEE NEW EDITS] No, *You* Need to Write Clearer
NicholasKross · 2023-04-29T05:04:01.559Z · comments (64)
Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)
Chris Scammell (chris-scammell) · 2023-05-10T19:04:21.138Z · comments (53)
Dear Self; we need to talk about ambition
Elizabeth (pktechgirl) · 2023-08-27T23:10:04.720Z · comments (25)
Discussion with Nate Soares on a key alignment difficulty
HoldenKarnofsky · 2023-03-13T21:20:02.976Z · comments (38)
On AutoGPT
Zvi · 2023-04-13T12:30:01.059Z · comments (47)
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-04-08T00:36:47.702Z · comments (39)
The 101 Space You Will Always Have With You
Screwtape · 2023-11-29T04:56:40.240Z · comments (20)
My Assessment of the Chinese AI Safety Community
Lao Mein (derpherpize) · 2023-04-25T04:21:19.274Z · comments (94)
Munk AI debate: confusions and possible cruxes
Steven Byrnes (steve2152) · 2023-06-27T14:18:47.694Z · comments (21)
[link] I hired 5 people to sit behind me and make me productive for a month
Simon Berens (sberens) · 2023-02-05T01:19:39.182Z · comments (81)
My views on “doom”
paulfchristiano · 2023-04-27T17:50:01.415Z · comments (34)
Yes, It's Subjective, But Why All The Crabs?
johnswentworth · 2023-07-28T19:35:36.741Z · comments (15)
Book Review: Going Infinite
Zvi · 2023-10-24T15:00:02.251Z · comments (109)
Alignment Implications of LLM Successes: a Debate in One Act
Zack_M_Davis · 2023-10-21T15:22:23.053Z · comments (50)
My Model Of EA Burnout
LoganStrohl (BrienneYudkowsky) · 2023-01-25T17:52:42.770Z · comments (49)
Thoughts on the impact of RLHF research
paulfchristiano · 2023-01-25T17:23:16.402Z · comments (101)
[link] More information about the dangerous capability evaluations we did with GPT-4 and Claude.
Beth Barnes (beth-barnes) · 2023-03-19T00:25:39.707Z · comments (54)
UFO Betting: Put Up or Shut Up
RatsWrongAboutUAP · 2023-06-13T04:05:32.652Z · comments (207)
Deep Deceptiveness
So8res · 2023-03-21T02:51:52.794Z · comments (58)
Policy discussions follow strong contextualizing norms
Richard_Ngo (ricraz) · 2023-04-01T23:51:36.588Z · comments (61)
Self-driving car bets
paulfchristiano · 2023-07-29T18:10:01.112Z · comments (41)
You Don't Exist, Duncan
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-02-02T08:37:01.049Z · comments (107)
[link] Sum-threshold attacks
TsviBT · 2023-09-08T17:13:37.044Z · comments (52)
Lessons On How To Get Things Right On The First Try
johnswentworth · 2023-06-19T23:58:09.605Z · comments (56)
[link] AGI in sight: our look at the game board
Andrea_Miotti (AndreaM) · 2023-02-18T22:17:44.364Z · comments (135)
Catching the Eye of Sauron
Casey B. (Zahima) · 2023-04-07T00:40:46.556Z · comments (68)
Announcing MIRI’s new CEO and leadership team
Gretta Duleba (gretta-duleba) · 2023-10-10T19:22:11.821Z · comments (52)
Recursive Middle Manager Hell
Raemon · 2023-01-01T04:33:29.942Z · comments (45)
[link] AI presidents discuss AI alignment agendas
TurnTrout · 2023-09-09T18:55:37.931Z · comments (22)
Elements of Rationalist Discourse
Rob Bensinger (RobbBB) · 2023-02-12T07:58:42.479Z · comments (47)
What are the results of more parental supervision and less outdoor play?
juliawise · 2023-11-25T12:52:29.986Z · comments (30)
Announcing Apollo Research
Marius Hobbhahn (marius-hobbhahn) · 2023-05-30T16:17:19.767Z · comments (11)
Ways I Expect AI Regulation To Increase Extinction Risk
1a3orn · 2023-07-04T17:32:48.047Z · comments (32)
Thoughts on responsible scaling policies and regulation
paulfchristiano · 2023-10-24T22:21:18.341Z · comments (33)
What I would do if I wasn’t at ARC Evals
LawrenceC (LawChan) · 2023-09-05T19:19:36.830Z · comments (8)
Launching Lightspeed Grants (Apply by July 6th)
habryka (habryka4) · 2023-06-07T02:53:29.227Z · comments (41)
[link] Cultivating a state of mind where new ideas are born
Henrik Karlsson (henrik-karlsson) · 2023-07-27T09:16:42.566Z · comments (18)
[link] Actually, Othello-GPT Has A Linear Emergent World Representation
Neel Nanda (neel-nanda-1) · 2023-03-29T22:13:14.878Z · comments (24)
[link] Orthogonal: A new agent foundations alignment organization
Tamsin Leake (carado-1) · 2023-04-19T20:17:14.174Z · comments (4)
Natural Abstractions: Key claims, Theorems, and Critiques
LawrenceC (LawChan) · 2023-03-16T16:37:40.181Z · comments (20)
My current LK99 questions
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-08-01T22:48:00.733Z · comments (38)
Enemies vs Malefactors
So8res · 2023-02-28T23:38:11.594Z · comments (60)
Eliezer Yudkowsky’s Letter in Time Magazine
Zvi · 2023-04-05T18:00:01.670Z · comments (86)
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
So8res · 2023-11-24T17:37:43.020Z · comments (83)
An AI risk argument that resonates with NYTimes readers
Julian Bradshaw · 2023-03-12T23:09:20.458Z · comments (14)
Lightcone Infrastructure/LessWrong is looking for funding
habryka (habryka4) · 2023-06-14T04:45:53.425Z · comments (38)