LessWrong 2.0 Reader


[link] Letter from Utopia: Talking to Nick Bostrom
morganism · 2017-11-25T22:19:48.345Z · comments (2)
An Educational Curriculum
DragonGod · 2017-11-22T10:11:58.779Z · comments (6)
[link] Intent of Experimenters; Halting Procedures; Frequentists vs. Bayesians
curi · 2017-11-04T19:13:46.762Z · comments (0)
[link] Intercellular competition and the inevitability of multicellular aging
Gunnar_Zarncke · 2017-11-04T12:32:54.879Z · comments (3)
Cooperative model knob-turning
Gust · 2017-11-03T16:18:48.229Z · comments (0)
[link] Simple refutation of the ‘Bayesian’ philosophy of science
curi · 2017-11-01T06:54:20.510Z · comments (34)
[meta] Linking to Other Resources
Chris_Leong · 2017-11-08T06:33:50.114Z · comments (0)
[link] Artificial intelligence and the stability of markets
fortyeridania · 2017-11-15T02:17:12.401Z · comments (7)
Clarify Your No's
Chris_Leong · 2017-11-07T07:25:46.881Z · comments (2)
Stable agent, subagent-unstable
Stuart_Armstrong · 2017-11-28T16:04:02.612Z · comments (0)
Problems as dragons and papercuts
Elo · 2017-11-03T01:41:53.922Z · comments (1)
[link] My Philosophy of Intelligence Alignment
whpearson · 2017-11-19T14:44:18.097Z · comments (1)
Set, Game, Match
ryan_b · 2017-11-09T23:06:53.672Z · comments (2)
Yet another failed utopia
Kazi Siddiqui (kazi-siddiqui) · 2017-11-02T00:04:55.900Z · comments (2)
On Inconvenient Truth
PDV · 2017-11-10T01:21:41.840Z · comments (4)
Mixed-Strategy Ratifiability Implies CDT=EDT
abramdemski · 2017-11-15T04:22:11.000Z · comments (0)
[link] The Little Dragon is Dead
SquirrelInHell · 2017-11-06T21:24:04.529Z · comments (2)
[meta] Remove Intercom?
skybrian · 2017-11-07T04:35:26.399Z · comments (2)
Self-confidence as a time saving tactic
devas · 2017-11-06T10:37:56.940Z · comments (4)
[link] Announcing the AI Alignment Prize
cousin_it · 2017-11-04T11:44:19.000Z · comments (0)
Fables grow around missed natural experiments
MaryCh · 2017-11-10T21:42:26.686Z · comments (15)
Being Likened to a Trolley Problem
Portodiovilla · 2017-11-02T17:49:04.341Z · comments (0)
November 2017 Media Thread
ArisKatsaris · 2017-11-02T00:35:26.761Z · comments (32)
[link] Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
turchin · 2017-11-25T11:44:51.077Z · comments (19)
[link] Towards an Axiology Approach to AI Alignment
Gordon Seidoh Worley (gworley) · 2017-11-15T02:04:14.000Z · comments (0)
[link] Ethical priorities for neurotech and AI - Nature
morganism · 2017-11-15T09:08:38.309Z · comments (0)
Rationalising humans: another mugging, but not Pascal's
Stuart_Armstrong · 2017-11-15T12:07:05.000Z · comments (0)
Looking for Recommendations RE UDT vs. bounded computation / meta-reasoning / opportunity cost?
IAFF-User-111 (Imported-IAFF-User-111) · 2017-11-08T22:58:55.000Z · comments (1)
[link] Senelda - Reputed IT training Academy in Chennai
IAFF-User-271 (Imported-IAFF-User-271) · 2017-11-07T13:20:17.000Z · comments (0)
Catastrophe Mitigation Using DRL
Vanessa Kosoy (vanessa-kosoy) · 2017-11-17T15:38:18.000Z · comments (0)
Kolmogorov complexity makes reward learning worse
Stuart_Armstrong · 2017-11-06T20:08:02.000Z · comments (0)
[link] Asgardia - The Space Kingdom
morganism · 2017-11-18T20:17:50.326Z · comments (2)
[meta] Revisiting HPMOR
DragonGod · 2017-11-19T08:55:31.552Z · comments (0)
Bias in rationality is much worse than noise
Stuart_Armstrong · 2017-11-06T11:08:38.000Z · comments (0)
Normative assumptions: regret
Stuart_Armstrong · 2017-11-06T10:59:48.000Z · comments (0)
Our values are underdefined, changeable, and manipulable
Stuart_Armstrong · 2017-11-06T10:59:38.000Z · comments (0)
Learning values, or defining them?
Stuart_Armstrong · 2017-11-06T10:59:28.000Z · comments (0)
[link] AGI
curi · 2017-11-05T20:20:56.338Z · comments (3)
[link] Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
avturchin · 2017-11-28T15:39:37.000Z · comments (0)
Reward learning summary
Stuart_Armstrong · 2017-11-28T15:55:08.000Z · comments (1)
Stable agent, subagent-unstable
Stuart_Armstrong · 2017-11-28T16:05:53.000Z · comments (0)
If creating FAI is easy, it might be automatic
MrRobot · 2017-11-11T19:59:54.169Z · comments (2)
[link] Military AI as a Convergent Goal of Self-Improving AI
turchin · 2017-11-13T11:25:39.407Z · comments (6)
[link] Military AI as a Convergent Goal of Self-Improving AI
avturchin · 2017-11-13T12:09:45.000Z · comments (0)
A short poem I found
MrRobot · 2017-11-15T06:25:06.736Z · comments (0)
Implications of a feelings-first metaphysics?
efficientfox · 2017-11-18T17:10:35.828Z · comments (2)
Tragedy of the Commons
DragonGod · 2017-11-18T20:30:57.841Z · comments (0)
Consider motivated snobbery
zlrth · 2017-11-11T15:49:57.490Z · comments (1)
Is FB optimized to kill my time?
MrRobot · 2017-11-02T05:51:19.694Z · comments (5)
Next narrow-AI challenge proposal
MrRobot · 2017-11-22T11:32:57.215Z · comments (4)