LessWrong 2.0 Reader


Horizontal and Vertical Integration
Jeffrey Heninger (jeffrey-heninger) · 2023-07-01T01:15:26.987Z · comments (1)
Farewell to Pablo Stafforini
daviddelauba (davidfriva) · 2023-07-01T02:44:17.219Z · comments (0)
EA/LW/SSC Argentina Group!
daviddelauba (davidfriva) · 2023-07-01T02:47:22.379Z · comments (0)
Predicting: Quick Start
duck_master · 2023-07-01T03:43:50.379Z · comments (3)
Ateliers: But what is an Atelier?
Stephen Fowler (LosPolloFowler) · 2023-07-01T05:57:19.510Z · comments (2)
Micro Habits that Improve One’s Day
silentbob · 2023-07-01T10:53:57.280Z · comments (9)
[link] Elements of Computational Philosophy, Vol. I: Truth
Paul Bricman (paulbricman) · 2023-07-01T11:44:22.154Z · comments (6)
[Linkpost] A shared linguistic space for transmitting our thoughts from brain to brain in natural conversations
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2023-07-01T13:57:56.021Z · comments (2)
[ASoT] GPT2 Steering & The Tuned Lens
Ulisse Mini (ulisse-mini) · 2023-07-01T14:12:49.784Z · comments (0)
[link] Forum Karma: view stats and find highly-rated comments for any LW user
Max H (Maxc) · 2023-07-01T15:36:28.881Z · comments (16)
[link] Alpha
Erich_Grunewald · 2023-07-01T16:05:55.940Z · comments (2)
My "2.9 trauma limit"
Raemon · 2023-07-01T19:32:14.805Z · comments (31)
An Introduction, an Overview of my personal resources, and how one might make use of them
ProofBySonnet (redheadbros) · 2023-07-01T21:00:21.093Z · comments (6)
Grant applications and grand narratives
Elizabeth (pktechgirl) · 2023-07-02T00:16:25.129Z · comments (20)
Using (Uninterpretable) LLMs to Generate Interpretable AI Code
Joar Skalse (Logical_Lunatic) · 2023-07-02T01:01:53.846Z · comments (9)
[link] faster latent diffusion
bhauth · 2023-07-02T01:30:17.035Z · comments (8)
Negativity enhances positivity
Adam Zerner (adamzerner) · 2023-07-02T02:47:12.201Z · comments (7)
LLMs, Batches, and Emergent Episodic Memory
Lao Mein (derpherpize) · 2023-07-02T07:55:04.368Z · comments (4)
Through a panel, darkly: a case study in internet BS detection
jchan · 2023-07-02T13:40:48.186Z · comments (7)
[link] Consider giving money to people, not projects or organizations
Nina Rimsky (NinaR) · 2023-07-02T14:33:29.160Z · comments (30)
How Smart Are Humans?
Joar Skalse (Logical_Lunatic) · 2023-07-02T15:46:55.309Z · comments (19)
Why it's so hard to talk about Consciousness
Rafael Harth (sil-ver) · 2023-07-02T15:56:05.188Z · comments (152)
Shall We Throw A Huge Party Before AGI Bids Us Adieu?
GeorgeMan (jozinko-kovacik) · 2023-07-02T17:56:48.372Z · comments (6)
Going Crazy and Getting Better Again
Evenstar · 2023-07-02T18:55:25.790Z · comments (10)
Quantitative cruxes in Alignment
Martín Soto (martinsq) · 2023-07-02T20:38:18.534Z · comments (0)
Sources of evidence in Alignment
Martín Soto (martinsq) · 2023-07-02T20:38:34.089Z · comments (0)
VC Theory Overview
Joar Skalse (Logical_Lunatic) · 2023-07-02T22:45:59.974Z · comments (2)
Meta-rationality and frames
Richard_Ngo (ricraz) · 2023-07-03T00:33:20.355Z · comments (2)
Frames in context
Richard_Ngo (ricraz) · 2023-07-03T00:38:52.078Z · comments (9)
[link] Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?
gwern · 2023-07-03T00:48:47.131Z · comments (54)
My Alignment Timeline
NicholasKross · 2023-07-03T01:04:07.935Z · comments (0)
My Central Alignment Priority (2 July 2023)
NicholasKross · 2023-07-03T01:46:01.764Z · comments (1)
Marginal charity
Pat Myron (pat-myron) · 2023-07-03T02:13:12.943Z · comments (1)
6/23
Celer · 2023-07-03T06:30:07.676Z · comments (0)
Complex Signs Bad
Evenstar · 2023-07-03T13:09:18.695Z · comments (2)
Monthly Roundup #8: July 2023
Zvi · 2023-07-03T13:20:05.614Z · comments (4)
[link] A Subtle Selection Effect in Overconfidence Studies
Kevin Dorst · 2023-07-03T14:43:11.287Z · comments (0)
[question] What in your opinion is the biggest open problem in AI alignment?
tailcalled · 2023-07-03T16:34:09.698Z · answers+comments (35)
Security, Cryptography AI Workshop in SF
Allison Duettmann (allison-duettmann) · 2023-07-03T19:01:19.865Z · comments (0)
Ten Levels of AI Alignment Difficulty
Sammy Martin (SDM) · 2023-07-03T20:20:21.403Z · comments (12)
Should you announce your bets publicly?
Ege Erdil (ege-erdil) · 2023-07-04T00:11:11.386Z · comments (1)
[link] Mechanistic Interpretability is Being Pursued for the Wrong Reasons
Cole Wyeth (Amyr) · 2023-07-04T02:17:10.347Z · comments (0)
[link] Two Percolation Puzzles
Adam Scherlis (adam-scherlis) · 2023-07-04T05:34:19.441Z · comments (14)
[question] The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it?
mikes · 2023-07-04T09:33:51.849Z · answers+comments (39)
Rational Unilateralists Aren't So Cursed
Sami Petersen (sami-petersen) · 2023-07-04T12:19:12.048Z · comments (5)
Twitter Twitches
Zvi · 2023-07-04T13:00:06.111Z · comments (9)
My Time As A Goddess
Evenstar · 2023-07-04T13:14:36.736Z · comments (5)
[link] What I Think About When I Think About History
Jacob G-W (g-w1) · 2023-07-04T14:02:26.066Z · comments (4)
[link] AI teams will probably be more superintelligent than individual AIs
Robert_AIZI · 2023-07-04T14:06:49.881Z · comments (1)
AI labs' statements on governance
Zach Stein-Perlman · 2023-07-04T16:30:01.624Z · comments (0)