LessWrong 2.0 Reader

[question] Which of our online writings was used to train GPT-3?
Mati_Roy (MathieuRoy) · 2021-10-30T21:52:08.706Z · answers+comments (3)
Why the Problem of the Criterion Matters
Gordon Seidoh Worley (gworley) · 2021-10-30T20:44:00.143Z · comments (9)
Budapest Less Wrong/ SSC
Timothy Underwood (timothy-underwood-1) · 2021-10-30T18:27:27.045Z · comments (0)
Quick general thoughts on suffering and consciousness
Rob Bensinger (RobbBB) · 2021-10-30T18:05:59.612Z · comments (46)
Must true AI sleep?
YimbyGeorge (mardukofbabylon) · 2021-10-30T16:47:46.234Z · comments (1)
How Much is a Sweet?
jefftk (jkaufman) · 2021-10-30T15:50:05.758Z · comments (6)
God Is Great
Mahdi Complex (mahdi-complex) · 2021-10-30T13:03:20.998Z · comments (7)
We Live in a Post-Scarcity Society
lsusr · 2021-10-30T12:05:35.267Z · comments (22)
Tell the Truth
lsusr · 2021-10-30T10:27:42.996Z · comments (40)
A Roadmap to a Post-Scarcity Economy
lorepieri (lorenzo-rex) · 2021-10-30T09:04:29.479Z · comments (3)
Start with a Title
lsusr · 2021-10-30T08:59:08.208Z · comments (4)
SSC/Lesswrong San Diego Meetup
CitizenTen · 2021-10-30T00:15:41.324Z · comments (1)
Unlock the Door
lincolnquirk · 2021-10-29T23:45:43.273Z · comments (5)
[link] Naval Ravikant and Chris Dixon Didn't Explain Any Web3 Use Cases
Liron · 2021-10-29T21:54:50.184Z · comments (0)
[TL;DR] "Training for the New Alpinism" by Steve House and Scott Johnston
lsusr · 2021-10-29T21:20:00.451Z · comments (1)
True Stories of Algorithmic Improvement
johnswentworth · 2021-10-29T20:57:13.638Z · comments (8)
Goodhart's Imperius
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-29T20:19:42.291Z · comments (6)
A system of infinite ethics
Chantiel · 2021-10-29T19:37:42.828Z · comments (60)
Stuart Russell and Melanie Mitchell on Munk Debates
Alex Flint (alexflint) · 2021-10-29T19:13:58.244Z · comments (4)
A very crude deception eval is already passed
Beth Barnes (beth-barnes) · 2021-10-29T17:57:29.475Z · comments (6)
On the Universal Distribution
Joe Carlsmith (joekc) · 2021-10-29T17:50:15.849Z · comments (4)
[link] Google announces Pathways: new generation multitask AI Architecture
Ozyrus · 2021-10-29T11:55:21.797Z · comments (1)
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness
J Bostock (Jemist) · 2021-10-29T11:09:20.559Z · comments (120)
Leadership
lsusr · 2021-10-29T07:29:54.610Z · comments (4)
Truthful and honest AI
abergal · 2021-10-29T07:28:36.225Z · comments (1)
Interpretability
abergal · 2021-10-29T07:28:02.650Z · comments (13)
Techniques for enhancing human feedback
abergal · 2021-10-29T07:27:46.700Z · comments (0)
Measuring and forecasting risks
abergal · 2021-10-29T07:27:32.836Z · comments (0)
Request for proposals for projects in AI alignment that work with deep learning systems
abergal · 2021-10-29T07:26:58.754Z · comments (0)
My current thinking on money and low carb diets
Adam Zerner (adamzerner) · 2021-10-29T06:50:38.543Z · comments (17)
[question] What are fiction stories related to AI alignment?
Mati_Roy (MathieuRoy) · 2021-10-29T02:59:52.920Z · answers+comments (22)
[question] How to generate idea/solutions to solve a problem?
warrenjordan · 2021-10-29T00:53:15.941Z · answers+comments (5)
[link] Forecasting progress in language models
Matthew Barnett (matthew-barnett) · 2021-10-28T20:40:59.897Z · comments (6)
[AN #168]: Four technical topics for which Open Phil is soliciting grant proposals
Rohin Shah (rohinmshah) · 2021-10-28T17:20:03.387Z · comments (0)
Better and Worse Ways of Stating SIA
dadadarren · 2021-10-28T16:04:22.333Z · comments (0)
Recommending Understand, a Game about Discerning the Rules
MondSemmel · 2021-10-28T14:53:16.901Z · comments (53)
Covid 10/28: An Unexpected Victory
Zvi · 2021-10-28T14:50:01.072Z · comments (37)
An Unexpected Victory: Container Stacking at the Port of Long Beach
Zvi · 2021-10-28T14:40:00.497Z · comments (41)
Save the kid, ruin the suit; Acceptable utility exchange rates; Distributed utility calculations; Civic duties matter
spkoc · 2021-10-28T11:51:52.057Z · comments (8)
Voting for people harms people
CraigMichael · 2021-10-28T08:29:13.075Z · comments (6)
[link] Selfishness, preference falsification, and AI alignment
jessicata (jessica.liu.taylor) · 2021-10-28T00:16:47.051Z · comments (28)
Ruling Out Everything Else
Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-27T21:50:39.545Z · comments (51)
[link] They don't make 'em like they used to
jasoncrawford · 2021-10-27T19:44:47.098Z · comments (84)
Hegel vs. GPT-3
Bezzi · 2021-10-27T05:55:18.296Z · comments (21)
[link] Everything Studies on Cynical Theories
DanielFilan · 2021-10-27T01:31:20.608Z · comments (5)
Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace
Henry Prowbell · 2021-10-26T18:54:49.386Z · comments (8)
X-Risk, Anthropics, & Peter Thiel's Investment Thesis
Jackson Wagner · 2021-10-26T18:50:03.300Z · comments (1)
[question] Would the world be a better place if we all agreed to form a world government next Monday?
idontwanttodie · 2021-10-26T18:14:17.432Z · answers+comments (5)
Don't Use the "God's-Eye View" in Anthropic Problems.
dadadarren · 2021-10-26T13:47:53.386Z · comments (1)
Impressive vs honest signaling
Adam Zerner (adamzerner) · 2021-10-26T07:16:24.478Z · comments (12)
next page (older posts) →