LessWrong 2.0 Reader

next page (older posts) →

[link] 0 and 1 aren't probabilities
Alok Singh (OldManNick) · 2023-01-01T00:09:27.647Z · comments (4)
Localhost Security Messaging
jefftk (jkaufman) · 2023-01-01T02:20:01.515Z · comments (3)
Why Free Will is NOT an illusion
Akira Pyinya · 2023-01-01T02:29:18.351Z · comments (16)
[link] Investing for a World Transformed by AI
PeterMcCluskey · 2023-01-01T02:47:06.004Z · comments (19)
[link] The Roots of Progress's 2022 in review
jasoncrawford · 2023-01-01T02:54:23.187Z · comments (2)
[link] Challenge to the notion that anything is (maybe) possible with AGI
Remmelt (remmelt-ellen) · 2023-01-01T03:57:04.213Z · comments (4)
Recursive Middle Manager Hell
Raemon · 2023-01-01T04:33:29.942Z · comments (45)
What percent of people work in moral mazes?
Raemon · 2023-01-01T04:33:43.890Z · comments (9)
[link] Summary of 80k's AI problem profile
JakubK (jskatt) · 2023-01-01T07:30:22.177Z · comments (0)
woke offline, anti-woke online
Yair Halberstadt (yair-halberstadt) · 2023-01-01T08:24:39.748Z · comments (12)
A Löbian argument pattern for implicit reasoning in natural language: Löbian party invitations
Andrew_Critch · 2023-01-01T17:39:59.643Z · comments (8)
[link] Better New Year's Goals through Aligning the Elephant and the Rider
moridinamael · 2023-01-01T17:54:48.386Z · comments (0)
[question] Would it be good or bad for the US military to get involved in AI risk?
Grant Demaree (grant-demaree) · 2023-01-01T19:02:30.892Z · answers+comments (12)
Thoughts On Expanding the AI Safety Community: Benefits and Challenges of Outreach to Non-Technical Professionals
Yashvardhan Sharma (yashvardhan-sharma) · 2023-01-01T19:21:33.010Z · comments (4)
The Thingness of Things
TsviBT · 2023-01-01T22:19:08.026Z · comments (35)
Fun math facts about 2023
Adam Scherlis (adam-scherlis) · 2023-01-01T23:38:22.165Z · comments (6)
Sail Over Mountains of ICE...
AnthonyRepetto · 2023-01-02T00:27:31.340Z · comments (51)
My first year in AI alignment
Alex_Altair · 2023-01-02T01:28:03.470Z · comments (10)
[question] How can total world index fund growth outpace money supply growth over the long term?
pando · 2023-01-02T05:33:40.620Z · answers+comments (7)
Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI
tavurth · 2023-01-02T06:16:53.686Z · comments (3)
Pacing: inexplicably good
KatjaGrace · 2023-01-02T08:30:10.158Z · comments (6)
Belief Bias: Bias in Evaluating AGI X-Risks
Remmelt (remmelt-ellen) · 2023-01-02T08:59:08.713Z · comments (1)
A kernel of Lie theory
Alok Singh (OldManNick) · 2023-01-02T09:20:06.098Z · comments (8)
The ultimate limits of alignment will determine the shape of the long term future
beren · 2023-01-02T12:47:01.419Z · comments (2)
[link] Opportunity Cost Blackmail
adamShimi · 2023-01-02T13:48:51.811Z · comments (11)
Induction heads - illustrated
CallumMcDougall (TheMcDouglas) · 2023-01-02T15:35:20.550Z · comments (8)
Results from the AI testing hackathon
Esben Kran (esben-kran) · 2023-01-02T15:46:44.491Z · comments (0)
Soft optimization makes the value target bigger
Jeremy Gillen (jeremy-gillen) · 2023-01-02T16:06:50.229Z · comments (20)
Why don't Rationalists use bidets?
Lakin (ChrisLakin) · 2023-01-02T17:42:09.900Z · comments (33)
Prediction Markets for Science
Vaniver · 2023-01-02T17:55:12.808Z · comments (7)
On the Importance of Open Sourcing Reward Models
elandgre · 2023-01-02T19:01:52.993Z · comments (5)
Large language models can provide "normative assumptions" for learning human preferences
Stuart_Armstrong · 2023-01-02T19:39:00.569Z · comments (12)
Preparing for Less Privacy
jefftk (jkaufman) · 2023-01-02T20:30:01.624Z · comments (1)
[link] Review of “Make People Better”
Metacelsus · 2023-01-02T20:30:13.218Z · comments (0)
MAKE IT BETTER (a poetic demonstration of the banality of GPT-3)
rogersbacon · 2023-01-02T20:47:11.348Z · comments (2)
[question] Default Sort for Shortforms is Very Bad; How Do I Change It?
DragonGod · 2023-01-02T21:50:12.779Z · answers+comments (0)
Knottiness
abramdemski · 2023-01-02T22:13:12.752Z · comments (4)
MacArthur BART (Filk)
Gordon Seidoh Worley (gworley) · 2023-01-02T22:50:04.248Z · comments (1)
Linear Algebra Done Right, Axler
David Udell · 2023-01-02T22:54:58.724Z · comments (6)
[Simulators seminar sequence] #1 Background & shared assumptions
Jan (jan-2) · 2023-01-02T23:48:50.298Z · comments (4)
The need for speed in web frameworks?
Adam Zerner (adamzerner) · 2023-01-03T00:06:15.737Z · comments (2)
200 COP in MI: Exploring Polysemanticity and Superposition
Neel Nanda (neel-nanda-1) · 2023-01-03T01:52:46.044Z · comments (6)
Status quo bias; System justification: Bias in Evaluating AGI X-Risks
Remmelt (remmelt-ellen) · 2023-01-03T02:50:50.722Z · comments (0)
[link] SF Severe Weather Warning
stavros · 2023-01-03T06:04:54.943Z · comments (3)
On the naturalistic study of the linguistic behavior of artificial intelligence
Bill Benzon (bill-benzon) · 2023-01-03T09:06:42.295Z · comments (0)
Is recursive self-alignment possible?
No77e (no77e-noi) · 2023-01-03T09:15:21.304Z · comments (5)
[question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?
Mikhail Samin (mikhail-samin) · 2023-01-03T10:21:26.853Z · answers+comments (3)
How to eat potato chips while typing
KatjaGrace · 2023-01-03T11:50:05.816Z · comments (12)
Whisper's Wild Implications
Ollie J (Rividus) · 2023-01-03T12:17:28.600Z · comments (6)
Monthly Roundup #2
Zvi · 2023-01-03T12:50:01.214Z · comments (3)