LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

Thoughts on Retrieving Knowledge from Neural Networks
Jaime Ruiz · 2019-08-23T16:41:35.818Z · comments (2)
Algorithmic Similarity
LukasM · 2019-08-23T16:39:47.706Z · comments (10)
Soft takeoff can still lead to decisive strategic advantage
Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T16:39:31.317Z · comments (47)
Moscow LW meetup in "Nauchka" library
Alexander230 · 2019-08-23T12:40:25.788Z · comments (0)
[link] OpenGPT-2: We Replicated GPT-2 Because You Can Too
avturchin · 2019-08-23T11:32:43.191Z · comments (0)
Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces
Louis_Brown · 2019-08-23T11:11:52.802Z · comments (29)
Metalignment: Deconfusing metaethics for AI alignment.
Guillaume Corlouer (Tancrede) · 2019-08-23T10:25:38.756Z · comments (7)
[question] A basic probability question
shminux · 2019-08-23T07:13:10.995Z · answers+comments (3)
Towards an Intentional Research Agenda
romeostevensit · 2019-08-23T05:27:53.843Z · comments (8)
[question] Why are people so optimistic about superintelligence?
bipolo · 2019-08-23T04:25:16.542Z · answers+comments (3)
Vague Thoughts and Questions about Agent Structures
loriphos · 2019-08-23T04:01:17.932Z · comments (3)
Formalising decision theory is hard
Lukas Finnveden (Lanrian) · 2019-08-23T03:27:24.757Z · comments (19)
Creating Environments to Design and Test Embedded Agents
lukehmiles (lcmgcd) · 2019-08-23T03:17:33.265Z · comments (5)
Tabooing 'Agent' for Prosaic Alignment
Hjalmar_Wijk · 2019-08-23T02:55:57.591Z · comments (10)
Vaniver's View on Factored Cognition
Vaniver · 2019-08-23T02:54:00.915Z · comments (4)
Redefining Fast Takeoff
VojtaKovarik · 2019-08-23T02:15:16.369Z · comments (1)
[question] Does Agent-like Behavior Imply Agent-like Architecture?
Scott Garrabrant · 2019-08-23T02:01:09.651Z · answers+comments (8)
The Commitment Races problem
Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T01:58:19.669Z · comments (56)
Analysis of a Secret Hitler Scenario
jaek · 2019-08-23T01:24:17.006Z · comments (6)
Thoughts from a Two Boxer
jaek · 2019-08-23T00:24:49.692Z · comments (11)
Deconfuse Yourself about Agency
VojtaKovarik · 2019-08-23T00:21:24.548Z · comments (9)
Logical Optimizers
Donald Hobson (donald-hobson) · 2019-08-22T23:54:35.773Z · comments (4)
Towards a mechanistic understanding of corrigibility
evhub · 2019-08-22T23:20:57.134Z · comments (26)
Response to Glen Weyl on Technocracy and the Rationalist Community
John_Maxwell (John_Maxwell_IV) · 2019-08-22T23:14:58.690Z · comments (9)
[question] Why so much variance in human intelligence?
Ben Pace (Benito) · 2019-08-22T22:36:55.499Z · answers+comments (28)
Logical Counterfactuals and Proposition graphs, Part 1
Donald Hobson (donald-hobson) · 2019-08-22T22:06:01.764Z · comments (0)
Time Travel, AI and Transparent Newcomb
johnswentworth · 2019-08-22T22:04:55.908Z · comments (7)
Embedded Naive Bayes
johnswentworth · 2019-08-22T21:40:05.972Z · comments (6)
Intentional Bucket Errors
Scott Garrabrant · 2019-08-22T20:02:11.357Z · comments (6)
Computational Model: Causal Diagrams with Symmetry
johnswentworth · 2019-08-22T17:54:11.274Z · comments (29)
[AN #62] Are adversarial examples caused by real but imperceptible features?
Rohin Shah (rohinmshah) · 2019-08-22T17:10:01.959Z · comments (10)
Implications of Quantum Computing for Artificial Intelligence Alignment Research
Jsevillamol · 2019-08-22T10:33:27.502Z · comments (3)
Body Alignment & Balance. Our Midline Anatomy & the Median Plane.
leggi · 2019-08-22T10:24:59.156Z · comments (6)
[question] Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans?
[deleted] · 2019-08-22T09:07:07.533Z · answers+comments (11)
Markets are Universal for Logical Induction
johnswentworth · 2019-08-22T06:44:56.532Z · comments (2)
Announcement: Writing Day Today (Thursday)
Ben Pace (Benito) · 2019-08-22T04:48:38.086Z · comments (5)
Western Massachusetts SSC meetup #15
a_lieb · 2019-08-22T00:53:41.618Z · comments (0)
Call for contributors to the Alignment Newsletter
Rohin Shah (rohinmshah) · 2019-08-21T18:21:31.113Z · comments (0)
Two senses of “optimizer”
Joar Skalse (Logical_Lunatic) · 2019-08-21T16:02:08.985Z · comments (41)
Paradoxical Advice Thread
Hazard · 2019-08-21T14:50:51.465Z · comments (10)
Three Levels of Motivation
DragonGod · 2019-08-21T09:24:39.164Z · comments (1)
Odds are not easier
MrMind · 2019-08-21T08:34:22.286Z · comments (6)
GPT-2: 6-Month Follow-Up
[deleted] · 2019-08-21T05:06:52.461Z · comments (1)
Lana Wachowski is doing a new Matrix movie
mako yass (MakoYass) · 2019-08-21T00:47:40.521Z · comments (3)
[question] What authors consistently give accurate pictures of complex topics they discuss?
seez · 2019-08-21T00:09:52.057Z · answers+comments (3)
[question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-08-20T21:45:12.118Z · answers+comments (27)
[question] Has Moore's Law actually slowed down?
Matthew Barnett (matthew-barnett) · 2019-08-20T19:18:41.488Z · answers+comments (6)
[link] Cerebras Systems unveils a record 1.2 trillion transistor chip for AI
avturchin · 2019-08-20T14:36:24.935Z · comments (4)
Lisbon SSC Meetup #1
tamkin&popkin · 2019-08-20T12:20:47.424Z · comments (0)
Self-supervised learning & manipulative predictions
Steven Byrnes (steve2152) · 2019-08-20T10:55:51.804Z · comments (14)