LessWrong 2.0 Reader

The 6D effect: When companies take risks, one email can be very powerful.
scasper · 2023-11-04T20:08:39.775Z · comments (40)
Omicron Variant Post #1: We’re F***ed, It’s Never Over
Zvi · 2021-11-26T19:00:00.988Z · comments (95)
Reality-Revealing and Reality-Masking Puzzles
AnnaSalamon · 2020-01-16T16:15:34.650Z · comments (57)
Are we in an AI overhang?
Andy Jones (andyljones) · 2020-07-27T12:48:05.196Z · comments (106)
Constellations are Younger than Continents
Jeffrey Heninger (jeffrey-heninger) · 2023-12-19T06:12:40.667Z · comments (22)
"Carefully Bootstrapped Alignment" is organizationally hard
Raemon · 2023-03-17T18:00:09.943Z · comments (22)
Is Rationalist Self-Improvement Real?
Jacob Falkovich (Jacobian) · 2019-12-09T17:11:03.337Z · comments (78)
Changing the world through slack & hobbies
Steven Byrnes (steve2152) · 2022-07-21T18:11:05.636Z · comments (13)
On green
Joe Carlsmith (joekc) · 2024-03-21T17:38:56.295Z · comments (33)
Your Dog is Even Smarter Than You Think
StyleOfDog · 2021-05-01T05:16:09.821Z · comments (108)
Express interest in an "FHI of the West"
habryka (habryka4) · 2024-04-18T03:32:58.592Z · comments (38)
AI Timelines
habryka (habryka4) · 2023-11-10T05:28:24.841Z · comments (74)
Safetywashing
Adam Scholl (adam_scholl) · 2022-07-01T11:56:33.495Z · comments (20)
So, geez there's a lot of AI content these days
Raemon · 2022-10-06T21:32:20.833Z · comments (140)
larger language models may disappoint you [or, an eternally unfinished draft]
nostalgebraist · 2021-11-26T23:08:56.221Z · comments (31)
The Plan
johnswentworth · 2021-12-10T23:41:39.417Z · comments (78)
AI alignment is distinct from its near-term applications
paulfchristiano · 2022-12-13T07:10:04.407Z · comments (21)
Sexual Abuse attitudes might be infohazardous
Pseudonymous Otter · 2022-07-19T18:06:43.956Z · comments (71)
[link] [SEE NEW EDITS] No, *You* Need to Write Clearer
NicholasKross · 2023-04-29T05:04:01.559Z · comments (64)
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
AnnaSalamon · 2022-06-09T02:12:35.151Z · comments (62)
Omicron: My Current Model
Zvi · 2021-12-28T17:10:00.629Z · comments (72)
Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)
Chris Scammell (chris-scammell) · 2023-05-10T19:04:21.138Z · comments (53)
Ngo and Yudkowsky on alignment difficulty
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-11-15T20:31:34.135Z · comments (148)
Dear Self; we need to talk about ambition
Elizabeth (pktechgirl) · 2023-08-27T23:10:04.720Z · comments (25)
Discussion with Nate Soares on a key alignment difficulty
HoldenKarnofsky · 2023-03-13T21:20:02.976Z · comments (38)
What cognitive biases feel like from the inside
chaosmage · 2020-01-03T14:24:22.265Z · comments (32)
Visible Thoughts Project and Bounty Announcement
So8res · 2021-11-30T00:19:08.408Z · comments (103)
[link] My PhD thesis: Algorithmic Bayesian Epistemology
Eric Neyman (UnexpectedValues) · 2024-03-16T22:56:59.283Z · comments (14)
CFAR Participant Handbook now available to all
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2020-01-03T15:43:44.618Z · comments (40)
[link] Paul Christiano named as US AI Safety Institute Head of AI Safety
Joel Burget (joel-burget) · 2024-04-16T16:22:06.937Z · comments (48)
On AutoGPT
Zvi · 2023-04-13T12:30:01.059Z · comments (47)
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2023-04-08T00:36:47.702Z · comments (39)
[link] DeepMind: Generally capable agents emerge from open-ended play
Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-27T14:19:13.782Z · comments (53)
The LessWrong Team is now Lightcone Infrastructure, come work with us!
habryka (habryka4) · 2021-10-01T01:20:33.411Z · comments (71)
Notes from "Don't Shoot the Dog"
juliawise · 2021-04-02T16:34:46.170Z · comments (11)
The 101 Space You Will Always Have With You
Screwtape · 2023-11-29T04:56:40.240Z · comments (20)
[link] Strong Evidence is Common
Mark Xu (mark-xu) · 2021-03-13T22:04:40.538Z · comments (49)
Munk AI debate: confusions and possible cruxes
Steven Byrnes (steve2152) · 2023-06-27T14:18:47.694Z · comments (21)
My Assessment of the Chinese AI Safety Community
Lao Mein (derpherpize) · 2023-04-25T04:21:19.274Z · comments (93)
[link] Thoughts on seed oil
dynomight · 2024-04-20T12:29:14.212Z · comments (70)
New Scaling Laws for Large Language Models
1a3orn · 2022-04-01T20:41:17.665Z · comments (22)
[link] I hired 5 people to sit behind me and make me productive for a month
Simon Berens (sberens) · 2023-02-05T01:19:39.182Z · comments (81)
How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
Collin (collin-burns) · 2022-12-15T18:22:40.109Z · comments (39)
Embedded Interactive Predictions on LessWrong
Amandango · 2020-11-20T18:35:32.089Z · comments (88)
Failures in Kindness
silentbob · 2024-03-26T21:30:11.052Z · comments (26)
Common misconceptions about OpenAI
Jacob_Hilton · 2022-08-25T14:02:26.257Z · comments (142)
The ground of optimization
Alex Flint (alexflint) · 2020-06-20T00:38:15.521Z · comments (80)
My views on “doom”
paulfchristiano · 2023-04-27T17:50:01.415Z · comments (33)
Jailbreaking ChatGPT on Release Day
Zvi · 2022-12-02T13:10:00.860Z · comments (77)
[link] "No-one in my org puts money in their pension"
Tobes (tobias-jolly) · 2024-02-16T18:33:28.996Z · comments (7)