Posts

Cheap Model → Big Model design 2023-11-19T22:50:15.017Z
Maxwell Peterson's Highlighted Posts 2022-04-08T01:34:57.006Z
Practical use of the Beta distribution for data analysis 2022-04-03T07:34:26.483Z
The median and mode use less information than the mean does 2022-04-01T21:25:20.916Z
Towards trying to feel consistently energized 2022-04-01T20:46:23.774Z
Are there any preventive steps someone can take after being exposed to strep throat? 2022-02-14T03:01:53.625Z
Is there a good way to read deep into LW comment histories on mobile? 2022-01-17T19:02:31.140Z
Maxwell Peterson's Shortform 2022-01-16T18:56:59.316Z
Activated Charcoal for Hangover Prevention: Way more than you wanted to know 2022-01-10T19:26:54.907Z
Finding the Central Limit Theorem in Bayes' rule 2021-11-27T05:48:06.161Z
How should an interviewer evaluate management skills in a candidate? 2021-10-05T18:25:51.746Z
An analysis of the Less Wrong D&D.Sci 4th Edition game 2021-10-04T00:03:44.279Z
How should dance venues best protect the drinks of attendees? 2021-09-20T19:32:06.594Z
What weird treatments should people who lost their taste from COVID try? 2021-07-30T02:51:25.598Z
For reducing caffeine dependence, does daily maximum matter more, or does total daily intake matter more? 2021-07-09T15:40:03.872Z
Are there reasons to think mixing vaccines is dangerous? 2021-06-03T22:36:35.588Z
What is the best chemistry textbook? 2021-05-11T02:39:20.341Z
How can I protect my bank account from large, surprise withdrawals? 2021-02-22T18:57:46.784Z
Use conditional probabilities to clear up error rate confusion 2021-01-17T08:27:38.137Z
Netflix's "Start-Up" and sincere work dramatization 2020-12-25T05:32:46.547Z
Probability theory implies Occam's razor 2020-12-18T07:48:17.030Z
How long does it take to become Gaussian? 2020-12-08T07:23:41.725Z
Convolution as smoothing 2020-11-25T06:00:07.611Z
The central limit theorem in terms of convolutions 2020-11-21T04:09:44.145Z
Examples of Measures 2020-11-15T01:44:39.593Z
Where can I find good explanations of the central limit theorems for people with a Bayesian background? 2020-11-13T16:36:01.611Z
Frequentist practice incorporates prior information all the time 2020-11-07T20:43:30.781Z
"model scores" is a questionable concept 2020-11-06T03:19:45.196Z

Comments

Comment by Maxwell Peterson (maxwell-peterson) on are IQ tests a good measure of intelligence? · 2024-12-16T20:16:58.624Z · LW · GW

Guesses: people see it as too 101 of a question; people think it’s too controversial / has been done to death many years ago; one guy with a lot of karma hates the whole concept and strong-downvoted it

I think the 101 idea is most likely. But I don’t think it’s a bad question, so I’ve upvoted it.

Comment by Maxwell Peterson (maxwell-peterson) on Benito's Shortform Feed · 2024-12-15T21:31:22.604Z · LW · GW

Years ago, a coworker and I were on a project with a guy we both thought was a total dummy, and worse, a dummy who talked all the time in meetings. We rarely expressed our opinion on this guy openly to each other - the coworker and I didn't know each other well enough to be comfortable talking a lot of trash - but once, when discussing him privately after yet another useless meeting, my coworker drew in breath, sighed, looked at me, and said: “I’m sure he’s a great father.” We both laughed, and I still remember this as one of the most cutting insults I’ve heard.

Comment by Maxwell Peterson (maxwell-peterson) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2024-12-14T22:47:34.335Z · LW · GW

Cheers!

Comment by Maxwell Peterson (maxwell-peterson) on Logan Riggs's Shortform · 2024-12-05T05:11:34.517Z · LW · GW

I’d guess that weekend dips come from office workers, since they rarely work on weekends, but students often do homework on weekends.

Comment by Maxwell Peterson (maxwell-peterson) on Social events with plausible deniability · 2024-11-18T22:20:00.865Z · LW · GW

If OP were advocating banning normal parties, in favor of only having cancellable parties, I would agree with this comment.

Comment by Maxwell Peterson (maxwell-peterson) on Seven lessons I didn't learn from election day · 2024-11-17T16:22:02.602Z · LW · GW

Appreciate it! Cheers.

Comment by Maxwell Peterson (maxwell-peterson) on Seven lessons I didn't learn from election day · 2024-11-15T17:53:25.292Z · LW · GW

A good post, of interest to all across the political spectrum, marred by the mistake, at the end, of becoming explicitly politically opinionated and saying bad things about those who voted differently than OP.

Comment by Maxwell Peterson (maxwell-peterson) on The central limit theorem in terms of convolutions · 2024-10-31T02:25:01.976Z · LW · GW

The integral was incorrect! Fixed now, thanks! Also added the (f * g)(x) to the equality for those who find that notation better (I've just discovered that GPT-4o prefers it too). Cheers!
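
(For anyone reading this without the post open: the equality in question is presumably the standard definition of convolution,

$$(f * g)(x) = \int_{-\infty}^{\infty} f(t)\,g(x - t)\,dt$$

which is what I mean by the (f * g)(x) notation.)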

Comment by Maxwell Peterson (maxwell-peterson) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-25T18:49:47.916Z · LW · GW

Yes, I’m not so sure either about the stockfish-pawns point.

In Michael Redmond’s AlphaGo vs AlphaGo series on YouTube, he often finds that the winning AI carelessly loses points in the endgame. It might have a lead of 1.5 or 2.5 points, 20 moves before the game ends; but by the time the game ends, it has played enough suboptimal moves to make itself win by 0.5 - the smallest possible margin.

It never causes itself to lose with these lazy moves; it only reduces its margin of victory. Redmond theorizes, and I agree, that this is because the objective is to win, not to maximize point differential, and at such a late stage of the game, its victory is certain regardless.

This is still a little strange - the suboptimal moves do not sacrifice points to reduce variance, so it’s not like it’s raising p(win). But it just doesn’t care either way; a win is a win.

There are Go AI that are trained with the objective of maximizing point difference. I am told they are quite vicious, in a way that AlphaGo isn’t. But the most famous Go AI in our timeline turned out to be the more chill variant.

Comment by maxwell-peterson on [deleted post] 2024-05-24T15:25:51.130Z

The quip about souls feels unnecessary and somehow grates on me. Something about putting an atheism zinger into the tag for cooking… feels off.

Comment by Maxwell Peterson (maxwell-peterson) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T17:16:57.729Z · LW · GW

Would you be willing to share your ethnicity? Even as simple as “Asian / not Asian”?

Comment by Maxwell Peterson (maxwell-peterson) on Matt Goldenberg's Short Form Feed · 2024-02-24T22:30:07.537Z · LW · GW

I do think it has some of that feeling to me, yeah. I had to re-read the entire thing 3 or 4 times to understand what it meant. My best guesses as to why:

I felt whiplashed by transitions like “be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism”, since I don’t know him or that type of Humanism, but the sentence structure suggests to me that I am expected to know these. A possible rewrite could perhaps be “There are two projects I know of that aim to create a belief system that works with, instead of against, technology. The first is Marc Gafni; he calls his ‘Cosmo-Erotic Humanism’…”

There are some places where I feel a colon would be better than a comma. I’m not sure how important these are, but they would help slow down the pace of the writing:

“increasingly let go of faith in higher powers as a tenet of our central religion: secular humanism.”

“But this is crumbling: the cold philosophy”

While minor punctuation differences like this are usually not too important, the way you wrote gives me a sense of, like, too much happening too fast: “wow, this is a ton of information delivered extremely quickly, and I don’t know what Apollonian means, I don’t know who Gafni is, or what dataism is…” So maybe slowing down the pace with stronger punctuation like colons is more important than it would otherwise be?

Also, phrases like “our central religion is secular humanism” and “mystical true wise core” read as very Woo. I can see where both are coming from - and I’ve read a lot of Woo - but I think many readers would bounce off these phrases. They can still be communicated, but perhaps something like “in place of religion, many have turned to Secular Humanism. Secular humanism says that X, Y, Z, but has no concept of a higher power. That means the core motivation that…”

(To be honest I’ve forgotten what secular humanism is, so this was another phrase that added to my feeling of everything moving too fast, and me being lost).

There are some typos too.

So maybe I’d advise making the overall piece of writing slower, by giving more set-up each time you introduce a term readers are likely to be unfamiliar with. On the other hand, that’s a hassle, and probably annoying to do in every note, if you write on this topic often. But it’s the best I’ve got!

Comment by Maxwell Peterson (maxwell-peterson) on E.T. Jaynes Probability Theory: The logic of Science I · 2023-12-29T17:05:50.759Z · LW · GW

I read this book in 2020, and the way this post serves as a refresher and different look at it is great.

I think there might be some mistakes in the log-odds section?

The orcs example starts:

We now want to consider the hypothesis that we were attacked by orcs, the prior odds are 10:1

Then there is a 1/3 wall-destruction rate, so orcs should be more likely in the posterior, but the post says:

There were 20 destroyed walls and 37 intact walls… corresponding to 1:20 odds that the orcs did it.

We started at 10:1 (likely that it’s orcs?), then saw evidence suggesting orcs, and ended up with a posterior quite against orcs. Which doesn’t seem right. I was thinking maybe “10:1” for the prior should be “1:10”, but even then, going from 1:10 in the prior to 1:20 in the posterior, when orcs are evidenced, doesn’t work either.
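
To make the concern concrete, here’s a minimal sketch of the odds form of Bayes’ rule in Python; the 1/10 likelihood for the non-orc hypothesis is a number I made up, since I don’t have the post’s exact figures in front of me:

```python
def update_odds(prior_odds, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior odds = prior odds * likelihood ratio (odds form of Bayes' rule)."""
    return prior_odds * (p_evidence_given_h / p_evidence_given_not_h)

# Prior odds 10:1 in favor of orcs, as stated in the post.
# Hypothetical likelihoods: orcs destroy a wall 1/3 of the time, non-orcs 1/10 of the time.
posterior = update_odds(10 / 1, 1 / 3, 1 / 10)
print(posterior)  # ~33.3 -- evidence favoring orcs should push the odds up, not down toward 1:20
```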

All that said, I just woke up, so it’s possible I’m all wrong!

Comment by Maxwell Peterson (maxwell-peterson) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2023-12-18T22:58:17.840Z · LW · GW

In Korea, every convenience store sells “hangover preventative” and “hangover cure” drinks, with pop idols on the label. Then you come back to America, and the instant you say “hangover preventative”, people look at you like you’re crazy, as if no such thing could possibly exist or help. I wonder how we got this way!

Comment by Maxwell Peterson (maxwell-peterson) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2023-12-18T17:07:56.167Z · LW · GW

Thanks for your review! I've updated the post to make the medications warning be in italicized bold, in the third paragraph of the post, and included the nutrient warning more explicitly as well.

Comment by Maxwell Peterson (maxwell-peterson) on Information warfare historically revolved around human conduits · 2023-08-30T21:24:49.388Z · LW · GW

Thank you!

Comment by Maxwell Peterson (maxwell-peterson) on Information warfare historically revolved around human conduits · 2023-08-30T14:47:14.264Z · LW · GW

“(although itiots might still fall for the "I'm an idiot like you" persona such as Donald Trump, Tucker Carlson, and particularly Alex Jones).”

This line is too current-culture-war for LessWrong. I began to argue with it in this comment, before deleting what I wrote, and limiting myself to this.

Comment by Maxwell Peterson (maxwell-peterson) on Dating Roundup #1: This is Why You’re Single · 2023-08-29T19:12:37.281Z · LW · GW

It changed to be much more swipe-focused. It’s been 5 years since I used it, but even in 2018, I remember being surprised at how much it had changed. Apparently now even open messaging is gone, and you need to have someone Like you before you can message them, though I haven’t actually checked this.

Comment by Maxwell Peterson (maxwell-peterson) on Finding the Central Limit Theorem in Bayes' rule · 2023-07-10T16:13:43.610Z · LW · GW

Yes, agree - I've looked into non-identical distributions in previous posts, and found that identicality isn't important, but I haven't looked at non-independence at all. I agree that dependent chains, like the books example, are an open question!

Comment by Maxwell Peterson (maxwell-peterson) on Man in the Arena · 2023-07-01T22:02:53.492Z · LW · GW

Love this! Definitely belongs on LessWrong. High-quality sci-fi that relates to social dynamics? Very relevant! I’ve been away from the site for a while, tiring of the content, but am glad I scrolled and saw this today.

Comment by Maxwell Peterson (maxwell-peterson) on Will the growing deer prion epidemic spread to humans? Why not? · 2023-06-28T06:08:52.140Z · LW · GW

Enjoyed this! Very well written. The two arrow graphs, where the second has everything squished down to the bottom, are especially charming.

Comment by Maxwell Peterson (maxwell-peterson) on Super-Luigi = Luigi + (Luigi - Waluigi) · 2023-03-17T18:44:11.667Z · LW · GW

I don’t think the problem is this big if you’re trying to control one specific model. Given an RLHF’d model, equipped with a specific system prompt (e.g. helpful, harmless assistant), you have either one or a small number of luigis, and therefore around the same number of waluigis - right?

Comment by Maxwell Peterson (maxwell-peterson) on How long does it take to become Gaussian? · 2023-01-08T16:31:04.247Z · LW · GW

Good question!

Comment by Maxwell Peterson (maxwell-peterson) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2022-11-21T21:49:00.985Z · LW · GW

Hmm! I’m not sure about this. The patient in the linked paper received hemodialysis (which, I think, manually takes the methanol out) before his body could get around to metabolizing it into formaldehyde and formic acid. For someone who doesn’t receive hemodialysis, I think the methanol would still have to be metabolized at some point, even if that metabolism is much delayed? In which case the same toxic effects of formaldehyde and formic acid would hit, just much later.

Comment by Maxwell Peterson (maxwell-peterson) on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-17T23:10:59.222Z · LW · GW

Thanks!

Comment by Maxwell Peterson (maxwell-peterson) on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-16T20:20:51.843Z · LW · GW

What form/brand/dose do you take the ketone esters in?

Comment by Maxwell Peterson (maxwell-peterson) on Consider your appetite for disagreements · 2022-10-09T21:22:04.872Z · LW · GW

I think the poker example is OK, and paragraphs like

“The second decision point was when the flop was dealt and you faced a bet. This time you decided to fold. Maybe that wasn't the best play though. Maybe you should have called. Maybe you should have raised. Again, the goal of hand review is to figure this out.”

made sense to me. But the terminology in the dialogue was very tough: button, Rainbow, LAGgy, bdfs, AX, nut flush, nitty - I understood none of these. (I’ve played poker now and then, but never studied it). So keeping the example but translating it a bit further to more widely-used language (if possible) might be good.

Comment by Maxwell Peterson (maxwell-peterson) on Are c-sections underrated? · 2022-10-02T05:56:42.631Z · LW · GW

Very interesting! I work in health insurance and we try to encourage vaginal delivery and discourage C-sections; the other side you present here is a surprise. Good stuff.

Comment by Maxwell Peterson (maxwell-peterson) on The Redaction Machine · 2022-09-23T04:46:16.188Z · LW · GW

Very very good. The full power of science fiction - taking the concept of the redaction machines and finding this many interesting consequences of them, and fitting them into the story - really good.

Comment by Maxwell Peterson (maxwell-peterson) on Covid 9/22/22: The Joe Biden Sings · 2022-09-22T21:21:41.952Z · LW · GW

You don’t need to call tails to explore whether tails is possible, though - the information gain of a coin flip is the same whether you call heads or tails.
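
A toy illustration of what I mean, using entropy as the measure of information (that framing is an assumption on my part):

```python
import math

def entropy_bits(probs):
    """Shannon entropy of an outcome distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip carries 1 bit of information, and nothing in the
# calculation depends on whether you called heads or tails beforehand.
print(entropy_bits([0.5, 0.5]))  # 1.0
```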

Comment by Maxwell Peterson (maxwell-peterson) on What are some alternatives to Shapley values which drop additivity? · 2022-08-11T05:13:04.650Z · LW · GW

I’m not the asker, but I think I get where they’re coming from. For a long time, linear and logistic regression were the king & queen of modeling. Then the introduction of non-linear models like random forests and gradient boosters made us far more able to fit difficult data. So the original question has me wondering if there’s a similar possible gain in going from linearity to non-linearity in interpretability algorithms.
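
As a sketch of the gap I mean (made-up data, sklearn defaults): a straight line can’t capture a relationship that a gradient booster picks up easily.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=2000)  # non-linear target

print(LinearRegression().fit(X, y).score(X, y))           # near 0 -- a line can't track a parabola
print(GradientBoostingRegressor().fit(X, y).score(X, y))  # near 1 -- the booster fits it easily
```

That’s the kind of jump I’m wondering about for interpretability methods too.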

Comment by Maxwell Peterson (maxwell-peterson) on I’ve become a medical mystery and I don’t know how to effectively get help · 2022-07-09T13:17:08.670Z · LW · GW

I agree with the encouragement to look harder for a sooner TMJ appointment. ADHD testing has similar waits now - looking in May, I was told everyone was booked up till September. But I lucked out, and the first testing doctor I talked to had just had some people cancel appointments, and nobody on his waitlist was responding, so I ended up seeing him a week later, in June, instead of in September. So there are opportunities for luck like this around. And this is without me looking out of state.

Comment by Maxwell Peterson (maxwell-peterson) on I’ve become a medical mystery and I don’t know how to effectively get help · 2022-07-09T13:14:22.994Z · LW · GW

I like the trigger point idea. OP should note too that there are injection treatments for trigger points: https://www.webmd.com/pain-management/guide/trigger-point-injection

Comment by Maxwell Peterson (maxwell-peterson) on How do I use caffeine optimally? · 2022-06-22T19:29:20.197Z · LW · GW

I just quit caffeine a month ago after years of daily dependence on it, and I feel better than I did on it. I now limit myself to 100mg a week. The dependence had a consistent moderate negative effect on my life, so I’d recommend people be very careful to avoid dependence.

Comment by Maxwell Peterson (maxwell-peterson) on Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc · 2022-06-04T20:38:36.412Z · LW · GW

Sure - there are plenty of cases where a pair of interactions isn’t interesting. In the ImageNet context, you’ll probably care more about screening-off behavior at more abstract levels.

For example, maybe you find that, in your trained network, a hidden representation that seems to correspond to “trunk” isn’t very predictive of the class “tree”. And that one that looks like “leaves” is predictive of “tree”. It’d be useful to know if the reason “trunk” isn’t predictive is that “leaves” screens it off. (This could happen if all the tree trunks in your training images come with leaves in the frame).
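
Here’s a minimal sketch of the kind of check I mean, with simulated data standing in for the hypothetical “leaves”/“trunk”/“tree” activations (the names and numbers are just for illustration): if adding “trunk” on top of “leaves” doesn’t improve prediction of “tree”, that’s evidence “leaves” screens it off.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
leaves = rng.normal(size=n)
trunk = leaves + rng.normal(scale=0.3, size=n)     # trunks only show up alongside leaves
tree = leaves + rng.normal(scale=0.5, size=n) > 0  # the class is driven by leaves directly

def cv_accuracy(features):
    X = np.column_stack(features)
    return cross_val_score(GradientBoostingClassifier(), X, tree, cv=5).mean()

print(cv_accuracy([leaves]))         # baseline with leaves alone
print(cv_accuracy([leaves, trunk]))  # about the same: leaves screens off trunk
print(cv_accuracy([trunk]))          # decent on its own, because trunk proxies for leaves
```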

Of course, the causality parts of the above analysis don’t address the “how should you assign labels in the first place” problem that the post is most focused on! I’m just saying both the ML parts and the causality parts work well in concert, and are not opposing methods.

Comment by Maxwell Peterson (maxwell-peterson) on Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc · 2022-06-04T13:56:08.079Z · LW · GW

This post does a sort of head-to-head comparison of causal models and deep nets. But I view the relationship between them differently - they’re better together! The causal framework gives us the notion of “screening off”, which is missing from the ML/deep learning framework. Screening-off turns out to be useful in analyzing feature importance.

A workflow that 1) uses a complex modern gradient booster or deep net to fit the data, then 2) uses causal math to interpret the features - which are most important, which screen off which - is really nice. [This workflow requires fitting multiple models, on different sets of variables, so it’s not just fit a single model in step 1), analyze it in step 2), done].
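
A sketch of that two-step workflow, assuming the shap package and using made-up data and variable names:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=3000)

# Step 1: fit a flexible black-box model.
model = GradientBoostingRegressor().fit(X, y)

# Step 2a: SHAP values for per-feature importance.
shap_values = shap.TreeExplainer(model).shap_values(X)
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per feature

# Step 2b: refit on subsets of variables to check screening-off --
# e.g. does dropping the third feature cost any predictive power?
print(model.score(X, y))
print(GradientBoostingRegressor().fit(X[:, :2], y).score(X[:, :2], y))
```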

Causal math lacks the ability to auto-fit complex functions, and ML-without-causality lacks the ability to measure things like “which variables screen off which”. Causality tools, paired with modern feature-importance measures like SHAP values, help us interpret black-box models.

Comment by Maxwell Peterson (maxwell-peterson) on Thought experiment: Imagine you were assigned to help a random person in your community become as peaceful and joyful as the most peaceful and joyful person you'd ever met. What would you try? · 2022-05-09T19:31:34.742Z · LW · GW

From my personal experience, I would have them take up one or two competitive arts.

Timing yourself to improve your personal best at, say, running, does not count. Running on a track against, or in a longer race against, a small handful of people that you can potentially beat does count, although I would lean toward recommending something where you have to deal with the counter-moves of your opponent. Boxing or kickboxing training that includes sparring against others counts; doing boxing training at a gym that does not do sparring doesn’t count. Playing chess or Go counts. Basketball, hockey, soccer, etc, all count. Playing online competitive video games technically counts under this definition, but I’m excluding it; those mostly make me feel bad.

Having an opponent that will challenge you with counter-moves, and do their best to get one over on you, but who you can beat if you train and try hard enough, has no substitute. Winning against someone who has put everything into the fight gives confidence that you can apply all over the place. Plus it feels great.

My experience: I’ve spent the past two years running and lifting. These mean I look great physically, and am healthy, and get the exercise endorphins and stuff. But they didn’t meet the competitive need! I’ve recently gotten back into boxing at a sparring gym. The competitive aspect of being in the ring, trying to best the other guy, is something I’ve really been missing. It also directs my training at a real concrete purpose, instead of the colder “increase the weight / increase running speed” metric-tracking approach to those forms of exercise.

I also play Go, and it used to serve this purpose well in my life.

(This competitiveness stuff might be more important for men than it is for women - I’m not sure. I’d definitely give this advice to a man, and I’d give it as a ‘maybe’ to a woman. Of course, women can get a lot of value from competition; I’m just not sure if the lack of it would gnaw at them the way it was gnawing at me.)

Comment by Maxwell Peterson (maxwell-peterson) on The glorious energy boost I've gotten by abstaining from coffee · 2022-05-07T21:54:03.758Z · LW · GW

I have pretty bad energy-level problems and have been looking for more things to try to fix them. I’d always thought quitting caffeine would make it so I could attain my current energy levels without caffeine; it never occurred to me that energy levels after quitting could be higher. So this is very interesting. Thanks for sharing.

Comment by Maxwell Peterson (maxwell-peterson) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2022-05-07T19:50:37.124Z · LW · GW

That’s great! Happy to hear it - thanks for reporting back, especially in such detail.

Comment by Maxwell Peterson (maxwell-peterson) on Pop Culture Alignment Research and Taxes · 2022-04-27T19:40:49.590Z · LW · GW

Whoops. That’s a big mistake on my part. Appreciate the correction.

Comment by Maxwell Peterson (maxwell-peterson) on Exploring toy neural nets under node removal. Section 1. · 2022-04-21T19:57:30.556Z · LW · GW

Thanks! I’ll give it a read

Comment by Maxwell Peterson (maxwell-peterson) on If everything is genetic, then nothing is genetic - Understanding the phenotypic null hypothesis · 2022-04-21T06:00:51.026Z · LW · GW

I am having trouble reconciling “a low signal:noise ratio biases the effects, often towards zero” with the result in the final section, where you say

“the genetic correlation has ended up much bigger than the environmental correlation. This happened due to the measurement error; if it was not for the measurement error, they would be of similar magnitudes.”

In the second statement, the noise (measurement error) was high, so there’s a low signal:noise ratio - is that right? If so, doesn’t the first statement suggest the genetic correlation should be biased towards zero, instead of being inflated?
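
For reference, here is the attenuation-toward-zero effect I have in mind from the first statement, as a toy simulation (not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)                   # true correlation about 0.71

noisy_y = y + rng.normal(scale=2.0, size=n)  # add heavy measurement error to y

print(np.corrcoef(x, y)[0, 1])        # ~0.71
print(np.corrcoef(x, noisy_y)[0, 1])  # ~0.41 -- noise pulls the observed correlation toward zero
```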

Comment by Maxwell Peterson (maxwell-peterson) on Pop Culture Alignment Research and Taxes · 2022-04-18T03:42:25.663Z · LW · GW

Thanks!

Comment by Maxwell Peterson (maxwell-peterson) on Pop Culture Alignment Research and Taxes · 2022-04-17T19:37:33.866Z · LW · GW

Wait! There’s doubts about the Tay story? I didn’t know that, and have failed to turn up anything in a few different searches just now. Can you say more, or drop a link if you have one?

Comment by Maxwell Peterson (maxwell-peterson) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2022-04-17T05:20:13.927Z · LW · GW

Glad you’re trying it! Let me know if you end up feeling like it helps

Comment by Maxwell Peterson (maxwell-peterson) on Pop Culture Alignment Research and Taxes · 2022-04-17T01:11:50.325Z · LW · GW

A quibble: Amazon’s resume evaluator discriminated against women who went to women’s colleges, or were in women’s clubs. This is different from discriminating against women in general! I feel like this is an important difference. Women’s colleges, in particular, are not rated very highly among colleges overall. Knowing someone went to a women’s college means you also know they didn’t go to MIT, or Berkeley, or any of the many good state universities. I brought this up to a female friend who went to Columbia; she said Columbia had a women’s college, but that it was a bit of a meme at broader Columbia for not being a very good school. Googling a bit now, I find there are either 31 or “less than 50” women’s colleges in the US, and that many are liberal arts colleges. If “women’s college” is a proxy variable for “liberal arts college”, that’s a good reason to ding people for listing a women’s college. Most women do not go to women’s colleges! And I’d bet almost none of the best STEM women went to a women’s college.

A prediction: if they included an explicit gender variable in the resume predictor, a candidate being female would carry much less of a penalty (if there was even a penalty) than a candidate having gone to a women’s college.

Another “prediction”, although it’s pushing the term “prediction”, since it can’t be evaluated: in a world where there were less than 50 men’s colleges in the US, and most were liberal arts, that world’s Amazon resume rater would penalize having gone to a men’s college.

Comment by Maxwell Peterson (maxwell-peterson) on The Parable Of The Talents · 2022-04-15T22:56:23.383Z · LW · GW

Downvoted: There are multiple problems, and different people can work on different ones. Pointing to one problem and saying it should be addressed isn’t the same as saying work should be halted on all the other ones.

I also think that acting as if the Bostrom quote is about saving Canadians in particular is a bad misreading. Bostrom is using the population of Canada to give a sense for the size of the problem, not to call for a focus on Canadian aging. Cures for aging would probably be invented in the west, but could then be extended to Africa and Asia - people die from aging in those regions, too.

I think your objection in the final paragraph, about death being a good thing, is a reasonable one - it’s certainly a popular belief. But your first two paragraphs are… arguing dirty.

Comment by Maxwell Peterson (maxwell-peterson) on Exploring toy neural nets under node removal. Section 1. · 2022-04-15T19:31:49.271Z · LW · GW

This is super cool. I’d have thought this was a great post if it was just the content of the video, so the additional analysis is, like, super great.

Comment by Maxwell Peterson (maxwell-peterson) on The median and mode use less information than the mean does · 2022-04-14T06:10:11.118Z · LW · GW

Gotcha - thanks.

Comment by Maxwell Peterson (maxwell-peterson) on Print Books of Scott Alexander's Writing · 2022-04-05T22:00:40.057Z · LW · GW

Both the first and third links in the Overview section of the repo are links to the Goddess of Everything Else book - I think the first link is meant to go to a different book instead?