LessWrong 2.0 Reader

[link] Please Help Metaculus Forecast COVID-19
AABoyles · 2020-02-14T17:31:16.488Z · comments (0)
The Reasonable Effectiveness of Mathematics or: AI vs sandwiches
Vanessa Kosoy (vanessa-kosoy) · 2020-02-14T18:46:39.280Z · comments (8)
Synthesizing amplification and debate
evhub · 2020-02-05T22:53:56.940Z · comments (10)
Refactoring EMH – Thoughts following the latest market crash
Alex_Shleizer · 2020-02-28T15:53:28.298Z · comments (16)
Reasons for Excitement about Impact of Impact Measure Research
TurnTrout · 2020-02-27T21:42:18.903Z · comments (8)
[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting
jacobjacob · 2020-02-02T12:39:06.563Z · comments (6)
On unfixably unsafe AGI architectures
Steven Byrnes (steve2152) · 2020-02-19T21:16:19.544Z · comments (8)
We Want MoR (HPMOR Discussion Podcast) Completes Book One
moridinamael · 2020-02-19T00:34:15.864Z · comments (0)
Subagents and impact measures, full and fully illustrated
Stuart_Armstrong · 2020-02-24T13:12:05.014Z · comments (14)
Training Regime Day 1: What is applied rationality?
Mark Xu (mark-xu) · 2020-02-15T21:03:32.685Z · comments (7)
Pessimism About Unknown Unknowns Inspires Conservatism
michaelcohen (cocoa) · 2020-02-03T14:48:14.824Z · comments (2)
Continuous Improvement: Insights from 'Topology'
TurnTrout · 2020-02-22T21:58:01.584Z · comments (4)
[question] At what point should CFAR stop holding workshops due to COVID-19?
Adam Scholl (adam_scholl) · 2020-02-25T09:59:17.910Z · answers+comments (11)
[question] How do you survive in the humanities?
polymathwannabe · 2020-02-20T15:19:10.598Z · answers+comments (20)
Curiosity Killed the Cat and the Asymptotically Optimal Agent
michaelcohen (cocoa) · 2020-02-20T17:28:41.955Z · comments (15)
Draft: Models of Risks of Delivery Under Coronavirus
Elizabeth (pktechgirl) · 2020-02-28T04:10:01.310Z · comments (5)
Attainable Utility Preservation: Scaling to Superhuman
TurnTrout · 2020-02-27T00:52:49.970Z · comments (22)
[AN #87]: What might happen as deep learning scales even further?
Rohin Shah (rohinmshah) · 2020-02-19T18:20:01.664Z · comments (0)
Other versions of "No free lunch in value learning"
Stuart_Armstrong · 2020-02-25T14:25:00.613Z · comments (0)
How Low Should Fruit Hang Before We Pick It?
TurnTrout · 2020-02-25T02:08:52.630Z · comments (9)
Why SENS makes sense
emanuele ascani (emanuele-ascani) · 2020-02-22T16:28:42.025Z · comments (2)
Training Regime Day 8: Noticing
Mark Xu (mark-xu) · 2020-02-22T19:47:03.898Z · comments (7)
Response to Oren Etzioni's "How to know if artificial intelligence is about to destroy civilization"
Daniel Kokotajlo (daniel-kokotajlo) · 2020-02-27T18:10:11.129Z · comments (5)
Resource on alcohol problems
juliawise · 2020-02-27T18:17:56.947Z · comments (3)
Will AI undergo discontinuous progress?
Sammy Martin (SDM) · 2020-02-21T22:16:59.424Z · comments (21)
[link] On the falsifiability of hypercomputation, part 2: finite input streams
jessicata (jessica.liu.taylor) · 2020-02-17T03:51:57.238Z · comments (7)
Long Now, and Culture vs Artifacts
Raemon · 2020-02-03T21:49:25.367Z · comments (3)
Where's the Turing Machine? A step towards Ontology Identification
adamShimi · 2020-02-26T17:10:53.054Z · comments (0)
Training Regime Day 2: Searching for bugs
Mark Xu (mark-xu) · 2020-02-16T17:16:32.606Z · comments (2)
[link] On the falsifiability of hypercomputation
jessicata (jessica.liu.taylor) · 2020-02-07T08:16:07.268Z · comments (4)
Gricean communication and meta-preferences
Charlie Steiner · 2020-02-10T05:05:30.079Z · comments (0)
[Link and commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
MichaelA · 2020-02-16T19:56:15.963Z · comments (4)
[link] Time Binders
Slimepriestess (Hivewired) · 2020-02-24T09:55:55.672Z · comments (10)
[question] What to make of Aubrey de Grey's prediction?
Rafael Harth (sil-ver) · 2020-02-28T19:25:18.027Z · answers+comments (18)
Training Regime Day 3: Tips and Tricks
Mark Xu (mark-xu) · 2020-02-17T18:53:24.808Z · comments (5)
[link] Philosophical self-ratification
jessicata (jessica.liu.taylor) · 2020-02-03T22:48:46.985Z · comments (13)
Twenty-three AI alignment research project definitions
rmoehn · 2020-02-03T22:21:49.089Z · comments (0)
Making Sense of Coronavirus Stats
jmh · 2020-02-20T15:12:51.292Z · comments (28)
Training Regime Day 4: Murphyjitsu
Mark Xu (mark-xu) · 2020-02-18T17:33:12.523Z · comments (0)
If I were a well-intentioned AI... III: Extremal Goodhart
Stuart_Armstrong · 2020-02-28T11:24:23.090Z · comments (0)
Mapping downside risks and information hazards
MichaelA · 2020-02-20T14:46:30.259Z · comments (0)
[Productivity] How not to use "Important // Not Urgent"
aaq · 2020-02-17T23:42:31.178Z · comments (0)
Predictive coding and motor control
Steven Byrnes (steve2152) · 2020-02-23T02:04:57.442Z · comments (3)
Simulation of technological progress (work in progress)
Daniel Kokotajlo (daniel-kokotajlo) · 2020-02-10T20:39:34.620Z · comments (9)
Taking the Outgroup Seriously
Davis_Kingsley · 2020-02-16T13:23:42.645Z · comments (8)
Wireheading and discontinuity
Michele Campolo · 2020-02-18T10:49:42.030Z · comments (4)
Goal-directed = Model-based RL?
adamShimi · 2020-02-20T19:13:51.342Z · comments (10)
Absent coordination, future technology will cause human extinction
Jeffrey Ladish (jeff-ladish) · 2020-02-03T21:52:55.764Z · comments (12)
Memetic downside risks: How ideas can evolve and cause harm
MichaelA · 2020-02-25T19:47:18.237Z · comments (3)
Abstract Plans Lead to Failure
Chris_Leong · 2020-02-27T21:20:11.554Z · comments (0)