LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

Shanghai LessWrong Meetup
Mindey · 2019-01-23T12:59:28.613Z · comments (10)
One-step hypothetical preferences
Stuart_Armstrong · 2019-01-23T15:14:52.063Z · comments (6)
S-Curves for Trend Forecasting
Matt Goldenberg (mr-hire) · 2019-01-23T18:17:56.436Z · comments (23)
Edinburgh SSC meetup ϛ
Hamish Peter Todd (hamish-peter-todd) · 2019-01-23T18:36:17.108Z · comments (0)
Starting to see 2 months later
Pausecafe · 2019-01-23T20:46:45.901Z · comments (3)
Rationality Café No. 9 - The Sequences, Part 1; Section C b
thegreatnick · 2019-01-23T22:41:10.193Z · comments (0)
Allowing a formal proof system to self improve while avoiding Lobian obstacles.
Donald Hobson (donald-hobson) · 2019-01-23T23:04:43.524Z · comments (4)
South Bay Meetup
David Friedman (david-friedman) · 2019-01-24T05:29:20.454Z · comments (0)
The human side of interaction
Rohin Shah (rohinmshah) · 2019-01-24T10:14:33.906Z · comments (5)
[question] In what way has the generation after us "gone too far"?
Elo · 2019-01-24T10:22:34.063Z · answers+comments (3)
[question] Is Agent Simulates Predictor a "fair" problem?
Chris_Leong · 2019-01-24T13:18:13.745Z · answers+comments (19)
From Personal to Prison Gangs: Enforcing Prosocial Behavior
johnswentworth · 2019-01-24T18:07:33.262Z · comments (26)
Thoughts on reward engineering
paulfchristiano · 2019-01-24T20:15:05.251Z · comments (30)
[link] "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros]
gwern · 2019-01-24T20:49:01.350Z · comments (52)
[question] For what do we need Superintelligent AI?
avturchin · 2019-01-25T15:01:01.772Z · answers+comments (18)
Future directions for narrow value learning
Rohin Shah (rohinmshah) · 2019-01-26T02:36:51.532Z · comments (4)
Building up to an Internal Family Systems model
Kaj_Sotala · 2019-01-26T12:25:11.162Z · comments (86)
[link] "Forecasting Transformative AI: An Expert Survey", Gruetzemacher et al 2019
gwern · 2019-01-27T02:34:57.214Z · comments (0)
Río Grande: judgment calls
KatjaGrace · 2019-01-27T03:50:01.102Z · comments (5)
Confessions of an Abstraction Hater
Martin Sustrik (sustrik) · 2019-01-27T05:50:01.066Z · comments (4)
Freely Complying With the Ideal: A Theory of Happiness
Solnassant · 2019-01-27T12:28:59.810Z · comments (2)
Prediction Contest 2018: Scores and Retrospective
jbeshir · 2019-01-27T17:20:15.245Z · comments (5)
"The Unbiased Map"
JohnBuridan · 2019-01-27T19:08:10.051Z · comments (1)
Practical Considerations Regarding Political Polarization
joshuabecker · 2019-01-27T22:26:15.048Z · comments (0)
[link] Solomonoff induction and belief in God
Berkeley Beetle · 2019-01-28T03:01:04.584Z · comments (3)
[question] How would one go about defining the ideal personality compatibility test?
digitalcaffeine · 2019-01-28T03:02:04.777Z · answers+comments (4)
A small example of one-step hypotheticals
Stuart_Armstrong · 2019-01-28T16:12:02.722Z · comments (0)
Quantum Neural Net and You
ScrubbyBubbles · 2019-01-28T18:42:01.456Z · comments (4)
"Giftedness" and Genius, Crucial Differences
ArthurLidia · 2019-01-28T20:22:37.664Z · comments (0)
[Link] Did AlphaStar just click faster?
aogara (Aidan O'Gara) · 2019-01-28T20:23:08.196Z · comments (14)
Techniques for optimizing worst-case performance
paulfchristiano · 2019-01-28T21:29:53.164Z · comments (12)
How much can value learning be disentangled?
Stuart_Armstrong · 2019-01-29T14:17:00.601Z · comments (30)
Can there be an indescribable hellworld?
Stuart_Armstrong · 2019-01-29T15:00:54.481Z · comments (19)
Towards equilibria-breaking methods
ryan_b · 2019-01-29T16:19:57.564Z · comments (3)
[question] Which textbook would you recommend to learn decision theory?
supermartingale · 2019-01-29T20:48:11.577Z · answers+comments (6)
The Question Of Perception
The Arkon · 2019-01-29T20:59:23.450Z · comments (18)
Alignment Newsletter #43
Rohin Shah (rohinmshah) · 2019-01-29T21:10:02.373Z · comments (2)
Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem)
Zvi · 2019-01-30T01:10:00.414Z · comments (15)
Deconfusing Logical Counterfactuals
Chris_Leong · 2019-01-30T15:13:41.436Z · comments (16)
Masculine Virtues
Jacob Falkovich (Jacobian) · 2019-01-30T16:03:56.000Z · comments (32)
Wireheading is in the eye of the beholder
Stuart_Armstrong · 2019-01-30T18:23:07.143Z · comments (10)
[question] Applied Rationality podcast - feedback?
Bae's Theorem (Senarin) · 2019-01-31T01:46:36.703Z · answers+comments (12)
The role of epistemic vs. aleatory uncertainty in quantifying AI-Xrisk
David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-01-31T06:13:35.321Z · comments (6)
Cambridge (UK) SSC meetup
thisheavenlyconjugation · 2019-01-31T11:45:04.398Z · comments (0)
Reliability amplification
paulfchristiano · 2019-01-31T21:12:18.591Z · comments (3)
Small hope for less bias and more practability
ArthurLidia · 2019-01-31T22:09:29.357Z · comments (0)
[question] Why is this utilitarian calculus wrong? Or is it?
EconomicModel · 2019-01-31T23:57:55.780Z · answers+comments (21)
Boundaries - A map and territory experiment. [post-rationality]
Elo · 2019-02-01T02:08:35.372Z · comments (14)
[link] Drexler on AI Risk
PeterMcCluskey · 2019-02-01T05:11:01.008Z · comments (10)
What is Wrong?
Inyuki · 2019-02-01T12:02:13.023Z · comments (2)