LessWrong 2.0 Reader



[Intro to brain-like-AGI safety] 15. Conclusion: Open problems, how to help, AMA
Steven Byrnes (steve2152) · 2022-05-17T15:11:12.397Z · comments (10)
RationalWiki on face masks
Viliam · 2021-01-15T01:55:49.836Z · comments (27)
Working Out in VR Really Works
Yonatan Cale (yonatan-cale-1) · 2022-04-03T18:42:37.514Z · comments (28)
Learning the prior
paulfchristiano · 2020-07-05T21:00:01.192Z · comments (28)
On attunement
Joe Carlsmith (joekc) · 2024-03-25T12:47:34.856Z · comments (8)
Modelling Transformative AI Risks (MTAIR) Project: Introduction
Davidmanheim · 2021-08-16T07:12:22.277Z · comments (0)
Exercise is Good, Actually
Gordon Seidoh Worley (gworley) · 2023-02-02T00:09:18.143Z · comments (27)
SolidGoldMagikarp III: Glitch token archaeology
mwatkins · 2023-02-14T10:17:51.495Z · comments (32)
The Darwin Game
lsusr · 2020-10-09T10:19:10.516Z · comments (131)
Everything Wrong with Roko's Claims about an Engineered Pandemic
EZ97 · 2024-02-22T15:59:08.439Z · comments (10)
Coherence arguments imply a force for goal-directed behavior
KatjaGrace · 2021-03-26T16:10:04.936Z · comments (25)
[link] Biological Anchors: The Trick that Might or Might Not Work
Scott Alexander (Yvain) · 2023-08-12T00:53:30.159Z · comments (3)
How to train your own "Sleeper Agents"
evhub · 2024-02-07T00:31:42.653Z · comments (11)
Don't Double-Crux With Suicide Rock
Zack_M_Davis · 2020-01-01T19:02:55.707Z · comments (30)
Reframing the evolutionary benefit of sex
paulfchristiano · 2019-09-14T17:00:01.184Z · comments (21)
Comments on OpenPhil's Interpretability RFP
paulfchristiano · 2021-11-05T22:36:04.733Z · comments (5)
More Christiano, Cotra, and Yudkowsky on AI progress
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-06T20:33:12.164Z · comments (28)
An artificially structured argument for expecting AGI ruin
Rob Bensinger (RobbBB) · 2023-05-07T21:52:54.421Z · comments (26)
Against Active Shooter Drills
Zvi · 2022-06-16T13:40:02.690Z · comments (30)
Survey of NLP Researchers: NLP is contributing to AGI progress; major catastrophe plausible
Sam Bowman (sbowman) · 2022-08-31T01:39:54.533Z · comments (6)
Announcing the Alignment of Complex Systems Research Group
Jan_Kulveit · 2022-06-04T04:10:14.337Z · comments (20)
Reveal Culture
MalcolmOcean (malcolmocean) · 2020-07-25T03:36:28.525Z · comments (18)
How to do theoretical research, a personal perspective
Mark Xu (mark-xu) · 2022-08-19T19:41:21.562Z · comments (6)
Meaning & Agency
abramdemski · 2023-12-19T22:27:32.123Z · comments (17)
Ten Minutes with Sam Altman
lsusr · 2021-12-28T07:32:59.140Z · comments (11)
Improving the Welfare of AIs: A Nearcasted Proposal
ryan_greenblatt · 2023-10-30T14:51:35.901Z · comments (5)
The “mind-body vicious cycle” model of RSI & back pain
Steven Byrnes (steve2152) · 2022-06-09T12:30:33.810Z · comments (32)
Bad names make you open the box
Adam Zerner (adamzerner) · 2021-06-09T03:19:14.107Z · comments (51)
Defining alignment research
Richard_Ngo (ricraz) · 2024-08-19T20:42:29.279Z · comments (23)
[link] Linkpost: They Studied Dishonesty. Was Their Work a Lie?
Linch · 2023-10-02T08:10:51.857Z · comments (12)
A reply to Agnes Callard
Vaniver · 2020-06-28T03:25:27.378Z · comments (36)
Code Generation as an AI risk setting
Not Relevant (not-relevant) · 2022-04-17T22:27:38.097Z · comments (16)
New page: Integrity
Zach Stein-Perlman · 2024-07-10T15:00:41.050Z · comments (3)
Quotes from Leopold Aschenbrenner’s Situational Awareness Paper
Zvi · 2024-06-07T11:40:03.981Z · comments (10)
The Unilateralist’s “Curse” Is Mostly Good
David Hornbein · 2020-04-13T22:48:22.589Z · comments (16)
Setting the Zero Point
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-12-09T06:06:25.873Z · comments (43)
When Someone Tells You They're Lying, Believe Them
ymeskhout · 2023-07-14T00:31:48.168Z · comments (3)
[Book Review] "Sorceror's Apprentice" by Tahir Shah
lsusr · 2021-11-20T11:29:21.361Z · comments (11)
Circular Reasoning
abramdemski · 2024-08-05T18:10:32.736Z · comments (36)
Just admit that you’ve zoned out
joec · 2024-06-04T02:51:27.594Z · comments (22)
Internal Double Crux
CFAR!Duncan (CFAR 2017) · 2022-07-22T04:34:54.719Z · comments (15)
The Track Record of Futurists Seems ... Fine
HoldenKarnofsky · 2022-06-30T19:40:18.893Z · comments (25)
Opening Session Tips & Advice
CFAR!Duncan (CFAR 2017) · 2022-07-25T03:57:49.731Z · comments (3)
Beyond micromarriages
Richard_Ngo (ricraz) · 2022-03-11T03:40:01.463Z · comments (13)
Why Simulator AIs want to be Active Inference AIs
Jan_Kulveit · 2023-04-10T18:23:35.101Z · comments (8)
A Taijitu symbol for Moloch and Slack
B Jacobs (Bob Jacobs) · 2020-05-25T20:03:44.447Z · comments (3)
Access to powerful AI might make computer security radically easier
Buck · 2024-06-08T06:00:19.310Z · comments (14)
Meetups as Institutions for Intellectual Progress
mingyuan · 2019-09-17T05:23:08.004Z · comments (26)
Deliberate Play
jsteinhardt · 2021-10-24T02:50:16.947Z · comments (4)
Fixed Point: a love story
Richard_Ngo (ricraz) · 2023-07-08T13:56:54.807Z · comments (2)