LessWrong 2.0 Reader



[question] How likely do you think worse-than-extinction type fates to be?
span1 · 2022-08-01T04:08:06.293Z · answers+comments (3)
On akrasia: starting at the bottom
seecrow · 2022-08-01T04:08:12.187Z · comments (2)
A Word is Worth 1,000 Pictures
Kully · 2022-08-01T04:08:28.199Z · comments (0)
Polaris, Five-Second Versions, and Thought Lengths
CFAR!Duncan (CFAR 2017) · 2022-08-01T07:14:16.429Z · comments (12)
[question] Which intro-to-AI-risk text would you recommend to...
Sherrinford · 2022-08-01T09:36:11.733Z · answers+comments (1)
Meditation course claims 65% enlightenment rate: my review
KatWoods (ea247) · 2022-08-01T11:25:37.017Z · comments (33)
[question] Is there any writing about prompt engineering for humans?
Alex Hollow · 2022-08-01T12:52:01.930Z · answers+comments (8)
Technical AI Alignment Study Group
Eric K · 2022-08-01T18:33:36.758Z · comments (0)
Letter from leading Soviet Academicians to party and government leaders of the Soviet Union regarding signs of decline and structural problems of the economic-political system (1970)
M. Y. Zuo · 2022-08-01T22:35:08.750Z · comments (10)
Turbocharging
CFAR!Duncan (CFAR 2017) · 2022-08-02T00:01:23.148Z · comments (3)
[question] Would quantum immortality mean subjective immortality?
n0ah · 2022-08-02T04:54:45.231Z · answers+comments (10)
Thinking without priors?
Q Home · 2022-08-02T09:17:45.622Z · comments (0)
[question] I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn't kill us all. What are my best options, where can I make the most difference per dollar?
lumenwrites · 2022-08-02T12:08:46.674Z · answers+comments (9)
[link] Progress links and tweets, 2022-08-02
jasoncrawford · 2022-08-02T17:03:51.605Z · comments (0)
(Summary) Sequence Highlights - Thinking Better on Purpose
qazzquimby (torendarby@gmail.com) · 2022-08-02T17:45:26.859Z · comments (3)
Againstness
CFAR!Duncan (CFAR 2017) · 2022-08-02T19:29:14.221Z · comments (7)
What are the Red Flags for Neural Network Suffering? - Seeds of Science call for reviewers
rogersbacon · 2022-08-02T22:37:59.448Z · comments (6)
Two-year update on my personal AI timelines
Ajeya Cotra (ajeya-cotra) · 2022-08-02T23:07:48.698Z · comments (60)
Law-Following AI 4: Don't Rely on Vicarious Liability
Cullen (Cullen_OKeefe) · 2022-08-02T23:26:00.426Z · comments (2)
[question] How does one recognize information and differentiate it from noise?
M. Y. Zuo · 2022-08-03T03:57:35.432Z · answers+comments (29)
Open & Welcome Thread - Aug/Sep 2022
Thomas · 2022-08-03T10:22:53.266Z · comments (32)
Externalized reasoning oversight: a research direction for language model alignment
tamera · 2022-08-03T12:03:16.630Z · comments (23)
[link] Survey: What (de)motivates you about AI risk?
Daniel_Friedrich (Hominid Dan) · 2022-08-03T19:17:35.822Z · comments (0)
[link] Announcing Squiggle: Early Access
ozziegooen · 2022-08-03T19:48:16.727Z · comments (7)
[question] Some doubts about Non Superintelligent AIs
aditya malik (aditya-malik) · 2022-08-03T19:55:45.454Z · answers+comments (4)
Transformer language models are doing something more general
Numendil · 2022-08-03T21:13:42.472Z · comments (6)
Precursor checking for deceptive alignment
evhub · 2022-08-03T22:56:44.626Z · comments (0)
Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination
Alex Lintz (alex-lintz) · 2022-08-03T23:15:22.652Z · comments (0)
[question] How do I know if my first post should be a post, or a question?
Nathan1123 · 2022-08-04T01:46:15.312Z · answers+comments (4)
Clapping Lower
jefftk (jkaufman) · 2022-08-04T02:10:04.509Z · comments (7)
Surprised by ELK report's counterexample to Debate, IDA
Evan R. Murphy · 2022-08-04T02:12:15.139Z · comments (0)
High Reliability Orgs, and AI Companies
Raemon · 2022-08-04T05:45:34.928Z · comments (7)
Covid 8/4/22: Rebound
Zvi · 2022-08-04T11:20:01.482Z · comments (0)
Interpretability isn’t Free
Joel Burget (joel-burget) · 2022-08-04T15:02:54.842Z · comments (1)
What do ML researchers think about AI in 2022?
KatjaGrace · 2022-08-04T15:40:05.024Z · comments (33)
Socratic Ducking, OODA Loops, Frame-by-Frame Debugging
CFAR!Duncan (CFAR 2017) · 2022-08-04T17:44:25.007Z · comments (1)
[question] AI alignment: Would a lazy self-preservation instinct be sufficient?
BrainFrog · 2022-08-04T17:53:08.507Z · answers+comments (4)
[question] Would "Manhattan Project" style be beneficial or deleterious for AI Alignment?
Just Learning · 2022-08-04T19:12:44.560Z · answers+comments (1)
[link] Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”
eukaryote · 2022-08-04T20:37:59.388Z · comments (15)
Running a Basic Meetup
Screwtape · 2022-08-04T21:49:40.878Z · comments (1)
The Pragmascope Idea
johnswentworth · 2022-08-04T21:52:15.206Z · comments (19)
Monthly Shorts 7/22
Celer · 2022-08-04T22:30:18.832Z · comments (0)
Calibration Trivia
Screwtape · 2022-08-04T22:31:00.955Z · comments (9)
Cambist Booking
Screwtape · 2022-08-04T22:40:51.361Z · comments (3)
Convergence Towards World-Models: A Gears-Level Model
Thane Ruthenis · 2022-08-04T23:31:33.448Z · comments (1)
The Falling Drill
Screwtape · 2022-08-05T00:08:26.069Z · comments (3)
Two Kids Crosswise
jefftk (jkaufman) · 2022-08-05T02:40:04.924Z · comments (3)
$20K In Bounties for AI Safety Public Materials
Dan H (dan-hendrycks) · 2022-08-05T02:52:47.729Z · comments (9)
An attempt to understand the Complexity of Values
Dalton Mabery (dalton-mabery) · 2022-08-05T04:43:25.495Z · comments (0)
Deontology and Tool AI
Nathan1123 · 2022-08-05T05:20:14.647Z · comments (5)
next page (older posts) →