LessWrong 2.0 Reader

Feature Selection
Zack_M_Davis · 2021-11-01T00:22:29.993Z · comments (24)
The Anxious Philosopher King
Jacob Falkovich (Jacobian) · 2021-11-01T16:05:25.660Z · comments (1)
Money Stuff
Jacob Falkovich (Jacobian) · 2021-11-01T16:08:02.700Z · comments (18)
Comparing Options For Safer Events
jefftk (jkaufman) · 2021-11-01T16:30:04.983Z · comments (19)
[link] Is genetics "dark"?
David Hugh-Jones (david-hugh-jones) · 2021-11-01T18:23:33.584Z · comments (12)
What is the most evil AI that we could build, today?
ThomasJ · 2021-11-01T19:58:38.743Z · comments (14)
Miami ACX November Meetup
eric135 · 2021-11-01T20:12:29.025Z · comments (0)
[question] May I ask for some help getting started with crypto?
SlainLadyMondegreen · 2021-11-01T21:52:05.236Z · answers+comments (12)
Open & Welcome Thread November 2021
Ruby · 2021-11-01T23:43:55.006Z · comments (35)
Great-Filter Hard-Step Math, Explained Intuitively
Daniel_Eth · 2021-11-01T23:45:52.634Z · comments (7)
Effective Evil
lsusr · 2021-11-02T00:26:29.910Z · comments (7)
[question] Does it make sense to get 2 flu vaccines?
dawangy · 2021-11-02T01:48:35.732Z · answers+comments (9)
[link] EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised
gwern · 2021-11-02T02:32:41.856Z · comments (52)
Notes on Shame
David Gross (David_Gross) · 2021-11-02T04:33:23.861Z · comments (6)
2020 PhilPapers Survey Results
Rob Bensinger (RobbBB) · 2021-11-02T05:00:13.859Z · comments (50)
[Book Review] "The Bell Curve" by Charles Murray
lsusr · 2021-11-02T05:49:22.198Z · comments (134)
[question] Can homo-sapiens sustain an economy parallel to AI's?
qbolec · 2021-11-02T07:03:08.129Z · answers+comments (2)
Models Modeling Models
Charlie Steiner · 2021-11-02T07:08:44.848Z · comments (5)
Vaccine Requirements, Age, and Fairness
jefftk (jkaufman) · 2021-11-02T12:10:07.790Z · comments (7)
Forecasting Newsletter: October 2021.
NunoSempere (Radamantis) · 2021-11-02T14:07:23.771Z · comments (0)
Why we need prosocial agents
Akbir Khan (akbir-khan) · 2021-11-02T15:19:57.424Z · comments (0)
Transcript: "You Should Read HPMOR"
TurnTrout · 2021-11-02T18:20:53.161Z · comments (12)
Austin LW/SSC Meetup: Roots of Progress Crossover
SilasBarta · 2021-11-02T23:12:16.079Z · comments (1)
[question] What's the difference between newer Atari-playing AI and the older Deepmind one (from 2014)?
Raemon · 2021-11-02T23:36:26.389Z · answers+comments (8)
Paths Forward: Scaling the Sharing of Information and Solutions
Zvi · 2021-11-02T23:50:01.283Z · comments (20)
[question] Why isn't there more rationalist punk rock?
CraigMichael · 2021-11-03T01:06:34.586Z · answers+comments (1)
Rockville Dinner Meetup
rocurley · 2021-11-03T02:15:33.251Z · comments (0)
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]
habryka (habryka4) · 2021-11-03T18:22:58.879Z · comments (4)
The Opt-Out Clause
Raymond D · 2021-11-03T22:02:53.680Z · comments (49)
[question] If I have a strong preference for remote work, should I focus my career on AI or on blockchain?
Malleable_shape · 2021-11-03T23:03:37.217Z · answers+comments (1)
Depositions and Rationality
Pablo Repetto (pablo-repetto-1) · 2021-11-04T00:10:21.908Z · comments (13)
Baby Sister Numbers
jefftk (jkaufman) · 2021-11-04T01:10:09.551Z · comments (28)
Covid 11/4: The After Times
Zvi · 2021-11-04T15:20:01.362Z · comments (33)
[question] Are there any essays on what scares us? A study of fear, so-to-speak.
sunokthinks · 2021-11-04T21:12:59.126Z · answers+comments (6)
[question] How does one learn to create models?
Conor (conor) · 2021-11-05T02:57:42.056Z · answers+comments (1)
Y2K: Successful Practice for AI Alignment
Darmani · 2021-11-05T06:09:18.591Z · comments (5)
[link] Disagreeables and Assessors: Two Intellectual Archetypes
ozziegooen · 2021-11-05T09:05:07.056Z · comments (10)
[External Event] 2022 IEEE International Conference on Assured Autonomy (ICAA) - submission deadline extended
Aryeh Englander (alenglander) · 2021-11-05T15:29:58.224Z · comments (0)
[question] Summary of the sequences / Lesson plans for rationality
Space L Clottey (TheTrueSquidward) · 2021-11-05T17:22:49.439Z · answers+comments (4)
Modeling the impact of safety agendas
Ben Cottier (ben-cottier) · 2021-11-05T19:46:04.656Z · comments (6)
[link] Drug addicts and deceptively aligned agents - a comparative analysis
Jan (jan-2) · 2021-11-05T21:42:48.993Z · comments (2)
How should we compare neural network representations?
jsteinhardt · 2021-11-05T22:10:18.677Z · comments (0)
Comments on OpenPhil's Interpretability RFP
paulfchristiano · 2021-11-05T22:36:04.733Z · comments (5)
Nightclubs in Heaven?
J Bostock (Jemist) · 2021-11-05T23:28:19.461Z · comments (3)
Study Guide
johnswentworth · 2021-11-06T01:23:09.552Z · comments (48)
Concentration of Force
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T08:20:18.991Z · comments (23)
Speaking of Stag Hunts
[DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-06T08:20:34.967Z · comments (373)
CFAR, responsibility and bureaucracy
ChristianKl · 2021-11-06T14:53:28.290Z · comments (1)
Substack Ho?
Zvi · 2021-11-06T16:50:01.012Z · comments (17)
[link] Chu are you?
Adele Lopez (adele-lopez-1) · 2021-11-06T17:39:45.332Z · comments (9)
next page (older posts) →