LessWrong 2.0 Reader


[link] Kallipolis, USA
Daniel Kokotajlo (daniel-kokotajlo) · 2023-04-01T02:06:52.827Z · comments (1)
Hooray for stepping out of the limelight
So8res · 2023-04-01T02:45:31.397Z · comments (24)
My Model of Gender Identity
Iris of Rosebloom (Qualia Inqueria) · 2023-04-01T03:03:16.662Z · comments (4)
The Significance of "Alignment vs Progress: The AI Rap Showdown" in the AI Safety Discourse
Jonathan Grant (jonathan-grant) · 2023-04-01T03:26:45.471Z · comments (0)
An Average Dialogue
NicholasKross · 2023-04-01T04:01:50.998Z · comments (0)
[question] What Are Your Preferences Regarding The FLI Letter?
JenniferRM · 2023-04-01T04:52:46.887Z · answers+comments (122)
Keep Making AI Safety News
RedFishBlueFish (RedStateBlueState) · 2023-04-01T06:27:21.540Z · comments (6)
[New LW Feature] "Debates"
Ruby · 2023-04-01T07:00:24.466Z · comments (34)
[April Fools'] Definitive confirmation of shard theory
TurnTrout · 2023-04-01T07:27:23.096Z · comments (7)
New Alignment Research Agenda: Massive Multiplayer Organism Oversight
TsviBT · 2023-04-01T08:02:13.474Z · comments (1)
Campaign for AI Safety: Please join me
Nik Samoylov (nik-samoylov) · 2023-04-01T09:32:11.907Z · comments (9)
[link] Singularities against the Singularity: Announcing Workshop on Singular Learning Theory and Alignment
Jesse Hoogland (jhoogland) · 2023-04-01T09:58:22.764Z · comments (0)
Some thought experiments on digital consciousness
rorygreig (rorygreig100) · 2023-04-01T11:45:12.964Z · comments (13)
[question] Is this true? paulg: [One special thing about AI risk is that people who understand AI well are more worried than people who understand it poorly]
tailcalled · 2023-04-01T11:59:45.038Z · answers+comments (5)
Proposal: Butt bumps as a default for physical greetings
Adam Zerner (adamzerner) · 2023-04-01T12:48:30.554Z · comments (23)
The frozen neutrality
ProgramCrafter (programcrafter) · 2023-04-01T12:58:40.873Z · comments (0)
[question] Transformer trained on its own content?
Micromegas · 2023-04-01T15:08:49.776Z · answers+comments (0)
AI community building: EliezerKart
Christopher King (christopher-king) · 2023-04-01T15:25:05.151Z · comments (0)
How security and cryptography can aid AI safety [sequence]
Allison Duettmann (allison-duettmann) · 2023-04-01T16:28:18.946Z · comments (0)
Introducing AlignmentSearch: An AI Alignment-Informed Conversational Agent
BionicD0LPH1N (jumeaux200) · 2023-04-01T16:39:09.643Z · comments (14)
AI infosec: first strikes, zero-day markets, hardware supply chains, adoption barriers
Allison Duettmann (allison-duettmann) · 2023-04-01T16:44:01.592Z · comments (0)
The Plan: Put ChatGPT in Charge
Sven Nilsen (bvssvni) · 2023-04-01T17:23:07.980Z · comments (3)
Quaker Practice for the Aspiring Rationalist
maia · 2023-04-01T19:32:03.051Z · comments (4)
Ho Chi Minh ACX Meetup
cygnus · 2023-04-01T19:41:17.828Z · comments (0)
AI Safety via Luck
Jozdien · 2023-04-01T20:13:55.346Z · comments (7)
Draft: Inferring minimizers
Alex_Altair · 2023-04-01T20:20:48.676Z · comments (0)
Repairing the Effort Asymmetry
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-01T20:23:34.261Z · comments (11)
Why I Think the Current Trajectory of AI Research has Low P(doom) - LLMs
GaPa (ga-pa-pa-la) · 2023-04-01T20:35:18.248Z · comments (1)
A policy guaranteed to increase AI timelines
Richard Korzekwa (Grothor) · 2023-04-01T20:50:14.832Z · comments (1)
Shutting down AI is not enough. We need to destroy all technology.
Matthew Barnett (matthew-barnett) · 2023-04-01T21:03:24.448Z · comments (36)
A Confession about the LessWrong Team
Ruby · 2023-04-01T21:47:11.572Z · comments (5)
A report about LessWrong karma volatility from a different universe
Ben Pace (Benito) · 2023-04-01T21:48:32.503Z · comments (7)
Policy discussions follow strong contextualizing norms
Richard_Ngo (ricraz) · 2023-04-01T23:51:36.588Z · comments (61)
Analysis of GPT-4 competence in assessing complex legal language: Example of Bill C-11 of the Canadian Parliament. - Part 1
M. Y. Zuo · 2023-04-02T00:01:11.133Z · comments (2)
Some lesser-known megaproject ideas
Linch · 2023-04-02T01:14:54.293Z · comments (4)
Pessimism about AI Safety
Max_He-Ho · 2023-04-02T07:43:11.723Z · comments (1)
[question] When should a neural network-based approach for plant control systems be preferred over a traditional control method?
Bob Guran (the-wizzart) · 2023-04-02T10:18:13.924Z · answers+comments (0)
[link] Ask First
intellectronica · 2023-04-02T10:45:13.289Z · comments (1)
Transparency for Generalizing Alignment from Toy Models
Johannes C. Mayer (johannes-c-mayer) · 2023-04-02T10:47:03.742Z · comments (3)
Ultimate ends may be easily hidable behind convergent subgoals
TsviBT · 2023-04-02T14:51:23.245Z · comments (4)
AISC 2023, Progress Report for March: Team Interpretable Architectures
Robert Kralisch (nonmali-1) · 2023-04-02T16:19:07.857Z · comments (0)
[question] Is there a LessWrong-adjacent place to hire freelancers/seek freelance work?
nonzerosum · 2023-04-02T16:39:49.659Z · answers+comments (3)
Talkbox Bagpipe Drones
jefftk (jkaufman) · 2023-04-02T18:50:04.921Z · comments (0)
Exposure to Lizardman is Lethal
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-02T18:57:43.750Z · comments (96)
[question] What could EA's new name be?
trevor (TrevorWiesinger) · 2023-04-02T19:25:22.740Z · answers+comments (20)
Fine-insured bounties as AI deterrent
Virtual Instinct (virtual-instinct) · 2023-04-02T19:44:52.320Z · comments (0)
"a dialogue with myself concerning eliezer yudkowsky" (not author)
the gears to ascension (lahwran) · 2023-04-02T20:12:32.584Z · comments (18)
[question] Scared about the future of AI
eitan weiss (eitan-weiss) · 2023-04-02T20:37:46.414Z · answers+comments (0)
GPT-4 capable of limited recursive improvement?
Boris Kashirin (boris-kashirin) · 2023-04-02T21:38:22.959Z · comments (3)
[link] Orthogonality is Expensive
DragonGod · 2023-04-03T00:43:34.566Z · comments (3)