LessWrong 2.0 Reader


[question] Collaborators for Effective Thesis project wanted
DavidJanku · 2020-06-19T20:38:38.617Z · answers+comments (1)
Are All Existential Risks Equivalent to a Lack of General Collective Intelligence? And is GCI therefore the Most Important Human Innovation in the History and Immediate Future of Mankind?
williamsae · 2020-06-19T20:38:51.456Z · comments (0)
The ground of optimization
Alex Flint (alexflint) · 2020-06-20T00:38:15.521Z · comments (80)
The point of a memory palace
DirectedEvolution (AllAmericanBreakfast) · 2020-06-20T01:00:41.975Z · comments (14)
[link] Relevant pre-AGI possibilities
Daniel Kokotajlo (daniel-kokotajlo) · 2020-06-20T10:52:00.257Z · comments (7)
[question] ‘Maximum’ level of suffering?
Anirandis · 2020-06-20T14:05:14.423Z · answers+comments (16)
Coronavirus and Rents
jefftk (jkaufman) · 2020-06-20T16:20:03.133Z · comments (3)
Types of Knowledge
Elizabeth (pktechgirl) · 2020-06-20T17:20:02.229Z · comments (11)
When is it Wrong to Click on a Cow?
[deleted] · 2020-06-20T18:23:07.420Z · comments (19)
Insights Over Frameworks
[deleted] · 2020-06-20T19:52:18.602Z · comments (6)
How to make a predicted AI alignment event/meeting calendar
rmoehn · 2020-06-20T22:43:43.901Z · comments (0)
The affect heuristic and studying autocracies
Tim Liptrot (rockthecasbah) · 2020-06-21T04:07:21.061Z · comments (3)
My weekly review habit
benkuhn · 2020-06-21T14:40:02.607Z · comments (5)
[question] Are Humans Fundamentally Good?
Nyarlathotep (LoganCole) · 2020-06-21T16:29:56.976Z · answers+comments (24)
The Bright Side of rationalization
Alexey Smirnov (alexey-smirnov) · 2020-06-21T16:30:58.466Z · comments (2)
[link] Training our humans on the wrong dataset
George3d6 · 2020-06-21T17:17:07.267Z · comments (5)
Plausible cases for HRAD work, and locating the crux in the "realism about rationality" debate
riceissa · 2020-06-22T01:10:23.757Z · comments (15)
You have become the supreme dictator of the United States.
Nyarlathotep (LoganCole) · 2020-06-22T01:45:10.278Z · comments (10)
Do Women Like Assholes?
Jacob Falkovich (Jacobian) · 2020-06-22T02:14:43.503Z · comments (10)
Fight the Power
Jacob Falkovich (Jacobian) · 2020-06-22T02:19:39.042Z · comments (3)
Neural Basis for Global Workspace Theory
Hazard · 2020-06-22T04:19:15.947Z · comments (9)
The Queen of the Damned
Slimepriestess (Hivewired) · 2020-06-22T04:50:02.730Z · comments (0)
Deriving General Relativity
lsusr · 2020-06-22T09:52:09.554Z · comments (0)
AI Benefits Post 1: Introducing “AI Benefits”
Cullen (Cullen_OKeefe) · 2020-06-22T16:59:22.605Z · comments (3)
The Indexing Problem
johnswentworth · 2020-06-22T19:11:53.626Z · comments (2)
News ⊂ Advertising
lsusr · 2020-06-22T19:19:29.948Z · comments (18)
[question] Question on GPT-3 Excel Demo
Zhitao Hou (zhitao-hou) · 2020-06-22T20:31:12.494Z · answers+comments (1)
Locality of goals
adamShimi · 2020-06-22T21:56:01.428Z · comments (8)
Prediction = Compression [Transcript]
alkjash · 2020-06-22T23:54:22.170Z · comments (13)
Slate Star Codex Deleted
Jeremy_Nixon · 2020-06-23T07:21:09.733Z · comments (0)
SlateStarCodex deleted because NYT wants to dox Scott
Rudi C (rudi-c) · 2020-06-23T07:51:30.859Z · comments (93)
Personal experience of coffee as nootropic
ryan wong (ryan-wong) · 2020-06-23T12:00:54.843Z · comments (6)
New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name
Zvi · 2020-06-23T12:20:00.788Z · comments (2)
Old-world Politics Fallacy
Rudi C (rudi-c) · 2020-06-23T12:32:55.946Z · comments (0)
[META] Building a rationalist communication system to avoid censorship
Donald Hobson (donald-hobson) · 2020-06-23T14:12:49.354Z · comments (33)
[question] Requesting feedback/advice: what Type Theory to study for AI safety?
rvnnt · 2020-06-23T17:03:32.452Z · answers+comments (4)
Modelling Continuous Progress
Sammy Martin (SDM) · 2020-06-23T18:06:47.474Z · comments (3)
[question] What is "Instrumental Corrigibility"?
joebernstein · 2020-06-23T20:24:16.371Z · answers+comments (1)
[question] How do you Murphyjitsu essentially risky activities?
9eB1 · 2020-06-23T21:09:54.593Z · answers+comments (7)
[question] Has anyone explored deliberate infection with milder coronavirus strains for cross-immunity?
polyamorousRobot · 2020-06-23T22:23:41.659Z · answers+comments (2)
Half-Baked Products and Idea Kernels
Liron · 2020-06-24T01:00:20.466Z · comments (9)
The Dark Miracle of Optics
Suspended Reason (suspended-reason) · 2020-06-24T03:09:29.874Z · comments (5)
[question] Does NYT have policies?
Douglas_Knight · 2020-06-24T04:06:57.556Z · answers+comments (0)
Models, myths, dreams, and Cheshire cat grins
Stuart_Armstrong · 2020-06-24T10:50:57.683Z · comments (7)
Linkpost: M21 Review: We Have Normality
Zvi · 2020-06-24T16:10:00.775Z · comments (0)
[AN #105]: The economic trajectory of humanity, and what we might mean by optimization
Rohin Shah (rohinmshah) · 2020-06-24T17:30:02.977Z · comments (3)
Abstraction, Evolution and Gears
johnswentworth · 2020-06-24T17:39:42.563Z · comments (11)
[question] Can I archive content from lesswrong.com on the wayback machine (internet archive, archive.org) ?
Nicolas Lacombe (nicolas-lacombe) · 2020-06-24T18:15:21.866Z · answers+comments (3)
[question] What's the name for that plausible deniability thing?
[deleted] · 2020-06-24T18:42:54.705Z · answers+comments (3)
Quick Look #1 Diophantus of Alexandria
JohnBuridan · 2020-06-24T19:12:14.672Z · comments (0)