I trained a booster (LightGBM) and used it to look for nonlinearity in the items - basically I made one ICE plot per item. From this I discovered the following nonlinearities:
Unicorns were the big thing - if you submit enough Unicorn Horns, you seem to get a discount or credit on your taxes. Perhaps they are medicinal, and there is a shortage. This happens at 5 horns, and submitting more than 5 doesn't get any further discount.
There was also some discounting going on with Cockatrice Eyes, but it was more confusing: in one view of mine, it looked like the tax was bigger at 0 of them, smaller at 1, bigger at 2, smaller at 3, etc., oscillating.
Dragon, Lich, and Zombie parts looked mostly linear though.
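For anyone curious what the per-item ICE approach looks like, here's a rough self-contained sketch. The data and tax rule below are synthetic stand-ins I made up to mirror the findings (the real inputs would be the scenario's historical records), and I've swapped in scikit-learn's gradient booster so it runs without LightGBM installed:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
items = ["cockatrice_eye", "dragon_head", "lich_skull", "unicorn_horn", "zombie_hand"]

# Synthetic stand-in data; the real inputs would be the scenario's records.
X = rng.integers(0, 10, size=(500, len(items))).astype(float)
# Toy tax rule echoing the findings: dragon heads costly, >=5 horns give a flat credit.
y = 5 * X[:, 1] + X[:, 2] + 0.5 * X[:, 4] - 4 * (X[:, 3] >= 5)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

def ice_curves(model, X, col, grid, n=30):
    """One predicted-tax curve per sampled row, varying only column `col`."""
    curves = []
    for row in X[:n]:
        Xs = np.tile(row, (len(grid), 1))
        Xs[:, col] = grid
        curves.append(model.predict(Xs))
    return np.array(curves)  # shape (n, len(grid))

grid = np.arange(0.0, 10.0)
horn_curves = ice_curves(model, X, col=3, grid=grid)  # unicorn_horn
# The mean over curves (a partial-dependence-style summary) should show
# the step down at 5 horns that the toy rule planted:
print(horn_curves.mean(axis=0)[4] - horn_curves.mean(axis=0)[5])
```

Plotting each row of `horn_curves` against `grid` gives the one-ICE-plot-per-item view described above.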
There are a number of tax submissions for which the assessed tax was zero. Even property as large as [1 cockatrice eye, 1 lich skull, 6 zombie hands] had a zero-tax entry. So I took the strategy of starting by copying the zero-tax historical records, where I could, for three of the adventurers. For the fourth, Dragon Heads always incur a big chunk of tax, so I gave the final adventurer all the Dragon Heads, as well as 5 Unicorn Horns and an odd number of Cockatrice Eyes, to offset them.
Then from there I poked around and tried to ride the gradient downward manually. I arrived at:
1: {2 Lich Skull, 8 Zombie Hand} [for 3 gp 6 sp = 3.6 tax]
2: {1 Cockatrice Eye, 1 Dragon Head, 1 Unicorn Horn} [0.0]
3: {1 Dragon Head, 1 Unicorn Horn} [4.2]
4: {3 Cockatrice Eye, 2 Dragon Head, 3 Lich Skull, 5 Unicorn Horn} [19.2]
For a total tax of 27 gp 0 sp.
From this poking around, I've started to feel like maybe one Unicorn Horn can cancel a Dragon Head, or something? I couldn't get a proper black-box optimization program working, so it was just my manual optimization at the end that got me from 32.0 down to 27.0. There is probably a bit of room for progress.
(Haven't yet read what others wrote).
Cool setup! Haven't done one of these for a few years, and I enjoyed it a lot.
I did have a terrible time trying to get a black-box optimizer running - the hard constraints on the sums seemed to be mostly not a thing in optimizer packages? I'm interested in the thoughts of someone who knows more about black-box optimization like genetic algorithms, simulated annealing, or whatever, and if they think they'd be suitable for a problem like this.
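One thought I've had since: maybe the way around the hard sum constraints is to bake them into the move operator instead of asking the optimizer to respect them - e.g., simulated annealing where each move shifts one unit of one item between two adventurers, so the totals are invariant by construction. A rough sketch, with made-up totals and a made-up stand-in for the black-box tax function:

```python
import math
import random

random.seed(0)
N_ADVENTURERS, N_ITEMS = 4, 5
totals = [4, 3, 5, 7, 8]  # hypothetical fixed per-item totals

# Start from any allocation satisfying the totals.
alloc = [[0] * N_ITEMS for _ in range(N_ADVENTURERS)]
for item, total in enumerate(totals):
    for _ in range(total):
        alloc[random.randrange(N_ADVENTURERS)][item] += 1

def tax(bundle):
    # Stand-in for the black box; the real one is the scenario's assessor.
    return sum((i + 1) * q for i, q in enumerate(bundle)) - 4 * (bundle[3] >= 5)

def total_tax(alloc):
    return sum(tax(b) for b in alloc)

cur = total_tax(alloc)
for step in range(20000):
    temp = 2.0 * (1 - step / 20000) + 1e-9  # linear cooling schedule
    # Move: shift one unit of one item between adventurers (totals invariant).
    item = random.randrange(N_ITEMS)
    src, dst = random.sample(range(N_ADVENTURERS), 2)
    if alloc[src][item] == 0:
        continue
    alloc[src][item] -= 1
    alloc[dst][item] += 1
    new = total_tax(alloc)
    if new <= cur or random.random() < math.exp((cur - new) / temp):
        cur = new  # accept (always if better, sometimes if worse)
    else:
        alloc[src][item] += 1  # revert
        alloc[dst][item] -= 1

print(cur)
```

Since every move conserves each item's total, the hard constraints never need to appear in the optimizer at all.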
Posting my findings in the comment below.
I suspect that, to many readers, what gives urgency to the Krome claims is that two people have allegedly died at the facility. For example, the fourth link OP provides is an instagram video with the caption “people are dying under ICE detainment in Miami”.
The two deceased are Genry Ruiz Guillen and Maksym Chernyak. ICE has published death reports for both:
https://www.ice.gov/doclib/foia/reports/ddr-GenryRuizGuillen.pdf
https://www.ice.gov/doclib/foia/reports/ddrMaksymChernyak.pdf
Notably, Mr. Ruiz-Guillen was transferred to medical and psychiatric facilities multiple times, and my read of the timeline is that he was in the custody of various hospitals from December 11 up through his January 23 death, i.e. over a month separates his death and his time at Krome. (It’s possible I’m reading this wrong so let me know if others have a different read). Ruiz-Guillen was transferred to hospital a month before inauguration day.
Chernyak’s report is much shorter and I don’t know what to make of it. Hemorrhagic stroke is hypothesized. He died February 20.
These are fairly detailed timelines. Ruiz-Guillen’s in particular involves many parties (a normal hospital, a psychiatric hospital, different doctors), so it would be a pretty bold fabrication.
You said:
>the fact that we haven't seen definitive evidence against the allegations is significant evidence in favour of their veracity.
But “detainees are dying because of overcrowding and lack of water” is an allegation made by one of OP’s links, and these timelines and symptoms, especially Ruiz-Guillen’s, are evidence against it.
When something is true, I desire to believe it’s true. When something is false, I desire to believe it’s false. This is the proper epistemics. If your epistemic goals are different, then they’re different. But “If the accused is in power, increase the probability estimate” is not how good epistemics are achieved.
Tangent here, just occurred to me while writing. The correct adjustment might be in the other direction: there are way more accusations against people in power, so part of the problem when considering them is: how do you keep your False Discovery Rate low? Like, if your neighbor is accused of a crime, he probably did it. But top politicians are accused of crimes every week, and many of those aren’t real, or aren’t criminal. And most or all False Discovery Rate adjustments lower the estimated probability of each instance. (Tangent over).
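To put made-up numbers on that tangent (these are purely illustrative, not empirical estimates), the odds-form Bayes update shows how the accusation base rate dominates:

```python
# Illustrative only: how the rate of accusations-against-the-innocent
# changes what a given accusation should do to your probability estimate.
def posterior_guilt(p_guilty, p_accused_if_guilty, p_accused_if_innocent):
    num = p_guilty * p_accused_if_guilty
    den = num + (1 - p_guilty) * p_accused_if_innocent
    return num / den

# Neighbor: accusations are rare unless something actually happened.
print(posterior_guilt(0.01, 0.8, 0.001))  # high posterior
# Politician: accused constantly whether guilty or not.
print(posterior_guilt(0.01, 0.9, 0.5))    # posterior barely moves
```

Same prior, same accusation - but when the innocent are accused half the time anyway, the accusation carries almost no information.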
I think you may have a case about how one’s decision theory should adjust based on power and risk. Something like “I think there’s a 15% chance this is true, but if it were, it would be really bad, so 15% is high enough that I think we should investigate”. But taking that decision theory thought process, and using it to speak as if the 15% thing has a greater-than-50% probability, for example, isn’t correct.
The Krome thing is all rumor - looking into it, you see numeric estimates like
>According to its official figures, there are 605 people detained at Krome, although the capacity is 581. While ICE is looking for ways to increase its current detention capacity of 40,000 nationwide to 100,000, lawyers and activists estimate the real number is much higher. Some speak of double the capacity, others of up to 4,000.
“Activists and [activist] lawyers say number is huge” is not news, and shouldn’t dumbfound the reader.
The water claim is also weird. I tried watching one of the instagram links, and it shared so much stylistically with mind-killing videos I remember from the BLM era that I had to turn it off.
Like, maybe some of this stuff is true. I don’t have evidence against. But when I was deeply involved with the protest scene in 2014-2015, I remember every arrest being an opportunity for claiming major mistreatment. Everything from the way police carried resisting arrestees, to when and if arrestees were made to change into jail uniforms, was spread frantically on social media as clear examples of mistreatment.
Once, when I was arrested, and we were being transported to the larger jail via van, the other arrestee (to be clear: not related to protests) being transported with me banged his head on the metal separating grate repeatedly, presumably with the idea of later accusing the police of beating him.
I’d always scoffed at police claims about detainees hurting themselves to get social ammunition, but I’ve ridden in a police van once in my life, and saw this. So now I think detainees often tell very tall tales.
All this isn’t to say “this proves your links are false”. But rather to say this is a low standard of evidence. I think it would be really bad if people started just dumping rumors and accusations on LessWrong whenever those accusations pointed at politicians they already didn’t like.
Social media posts by activists are mind-killing. Like, take a look at previous posts by that instagram account in the post: many are about celebrities, or her breakup, but when the videos are political, they are pretty clearly pro-migrant and anti-trump. “Partisan social media account” is typically not the best information source for rationalists.
That’s an interesting example. The CEO I had in mind while writing this was a buff guy with a very force-of-will kind of character, but he appreciated such questions.
I guess all our examples were non-public, company-only meetings. I don’t know the Musk example you describe, but since we know about it, I’m guessing it was more public? Or was it secretly recorded and leaked later?
Good point!
Yup - from the release page a week ago:
>Web search is available now in feature preview for all paid Claude users in the United States. Support for users on our free plan and more countries is coming soon.
It feels like people mainly gain status from making posts, not comments. And it’s harder to make a post that primarily points out a lack of skepticism / care. For example, while I am here disagreeing via a comment, doing so will be negligible to my status. I’d be better off posting about some interesting topic… but posting is harder!
The line "Your browser does not support the video tag." appears multiple times in this post for me, on both Chrome and Safari.
Thank you!
No clear findings, no. However, the biggest period at which I shook the feeling was when I returned to work after a 3-month leave, and began working on an LLM Agent in early 2022 (back when that was very new and very exciting, instead of a thing that’s everywhere like today). I was up and excited and energetic for at least a month straight, and I think longer than that.
Now I’m back to finding work somewhat uninteresting, and also back to being tired. So one theory that is always lurking in my head now is: am I tired because I am bored? Some additional evidence: I began playing poker with friends in person recently, and have not once been tired while at poker night. Nor did I have many tiredness issues while on vacation in Japan.
I don’t think this is the whole story, but I think it’s more of the story than I appreciated 3 years ago.
I find the evidence being asserted unclear. Is the entire thought here based on what hours of the day he’s posting on X? Is it rather the content of his X posts that is the strongest indication? Or is it what Musk has said in his recent televised appearances? I’ve found him reserved and even-spoken in the clips I’ve watched, though I don’t read his X posts, so I am having trouble understanding why you think this in the first place.
I quite enjoyed the fan-written sequel Significant Digits: https://www.anarchyishyperbole.com/p/significant-digits.html?m=1
Yesterday, I realized in my conversations with Claude over the past week or so, I don’t think it’s talked about how much of a genius I am, perhaps not even once. I remember in the fall it would do this all the time. Maybe there’s been an update?
The Elections panel on OP’s image says “combat disinformation”, so while you’re technically right, I think Christian’s “fighting election misinformation” rephrasing is close enough to make no difference.
Well okay then :)! You giving a disagree-vote makes a lot of sense. Thanks for explaining.
I am not sure what people are disagreeing with here. The only factual claims I see are “the preexisting chain of command is incompetent or corrupt”, which I agree with (on incompetence), that “the president has a lot of power”, “is supposed to control all the agencies”, and “if the new CEO of a private company…”. None of these seem incorrect to me. I’ve strong-upvoted in both ways.
I had seen recommendations for T3/T4 on twitter to help with low energy, and even purchased some, but haven’t taken it. I hadn’t considered that the thyroid might respond by shrinking, and now think that that’s a worrying intervention! So I’m glad I read this - thank you.
Oh… wait a minute! I looked up the Principle of Indifference, to try and find stronger assertions on when it should or shouldn’t be used, and was surprised to see what it actually means! Wikipedia:
>The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence (or "degrees of belief") equally among all the possible outcomes under consideration. In Bayesian probability, this is the simplest non-informative prior.
So I think the superior is wrong to call it “principle of indifference”! You are the one arguing for indifference: “it could hit anywhere in a radius around the targets, and we can’t say more” is POI. “It is more likely to hit the adult you aimed at” is not POI! It’s an argument about the tendency of errors to cancel.
Error cancelling tends to produce Gaussian distributions. POI gives uniform distributions.
I still think I agree with the superior that it’s marginally more likely to hit the target aimed for, but now I disagree with them that this assertion is POI.
If you meant specifically negative secrets, about clandestine acts, I don’t have anything, but MrBeast’s document that new employees are given when they join his company surprised me. It’s 30+ pages of excellent, specific advice, as well as clear directions about how MrBeast videos are different and thus employees must think and act differently than they would at any other production company.
The clarity of it, and the density of information, makes it hands-down the best work document I’ve ever read, and having read many in my 10 years of corporate work - all of them less clear, all of them less interesting - I was impressed. There is a tendency in work documents, at least in corporations, to 1) care more about style than substance and 2) adhere to a style that is stifled, banal, and obsessively adheres to Textbook Dictionary English and Grammar. MrBeast just writes how he talks, and it works.
I’d never considered watching any MrBeast video, I thought they were for stupid people, but after being impressed by the document I gave them a try, and now I’ve watched many and enjoyed them.
https://cdn.prod.website-files.com/6623bf84e83241ec49b548e4/66edaa19db6e9359bb92931f_How-To-Succeed-At-MrBeast-Production%20(2).pdf
Having previously been supremely convinced of this way of thinking by reading The Last Psychiatrist, and having lived by it for the last few years, I do now suspect it’s possible to take it too far.
I think the desire for status - the goal of being able to say and think “I am this type of person”, and be recognized for it - is a part of the motivation system. As you say, some (most?) take it too far. But if one truly excises this way of thinking from themselves, they’ve kind of… excised part of their motivational system!
I think you’ve anticipated this point, because you say
>”But what if I really want what’s in the mines? Sometimes you have to do unpleasant work to get what you want.” Absolutely, but when you’re honest about it, you’ll correctly recognize those situations as life-consuming work, and that’ll affect how you relate to the task. You’ll say, “I want to find three pieces of gold,” instead of saying, “I want to work in the mines.” And so you won’t expect to feel alive or rejuvenated or joyful from the work itself.
But I’m not so sure that replacement maintains the motivation that the identity-based motivation gives. For example, when I was training for and fighting in amateur boxing, I trained and ran all the time. During runs, especially sprinting, where I was tired and wanted to quit, I would say out loud (or even yell) things like “this is easy for me because I’m a fighter!” If I had instead been thinking “this sucks but I have to do it anyway because the payoff is worth it”, that would have felt a good deal less motivating.
Or, I don’t know, maybe in marital and relationship fidelity. I am not a person who cheats; it would be a stain on my soul if I cheated. “I must never be a person who cheats.” This makes it easy to not cheat. This identity-based rule works well for me, and I don’t think replacing it with non-identity thinking would be safe for me personally.
But again I think I do agree! And most normal people would probably be better off from the advice to do more things they want to do, and less things they want to have done. But to readers who have already gone down this road… don’t feel like you need to take it all the way! Preserving some identity-based motivation is good and important.
Interesting! I also agree with the superior, but I can see where your intuition might be coming from: if we drop a bouncy ball in the middle of a circle, there will be some bounce to it, and maybe the bounce will always be kinda large, so there might be good reason to think it ending up at rest in the very center is less likely than it ending up off-center. For the sniper’s bullet, however, I think it’s different.
Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back - but only makes sense to do that if you agree that the random walk model is appropriate in the first place.
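In the meantime, here's roughly the simulation I have in mind, assuming the random-walk model (many small independent perturbations accumulating over the flight; all the specific numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_steps, step_sd = 50_000, 64, 0.125  # made-up flight parameters

# Each shot's endpoint: the sum of many small independent 2D perturbations.
endpoints = rng.normal(0.0, step_sd, size=(n_shots, n_steps, 2)).sum(axis=1)

def hit_rate(center, radius=0.3):
    """Fraction of shots landing within `radius` of `center`."""
    d = np.linalg.norm(endpoints - np.asarray(center), axis=1)
    return (d < radius).mean()

at_aim = hit_rate((0.0, 0.0))   # small disc at the aim point
off_aim = hit_rate((1.0, 0.0))  # equal-size disc one endpoint-sd off
print(at_aim > off_aim)  # True: density piles up at the aim point
```

The sum of many small independent steps is approximately Gaussian around the aim point, so an equal-size disc at the aim point catches more endpoints than one off-center - which is the superior's claim, just not via the Principle of Indifference.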
>The semicircular canals track changes in your head’s orientation. The otoliths track which way is down. But why not just combine them? Why did they evolve to be separate?
Here’s an idea.
The body is completely obsessed with inferring its state of poisonedness, and uses the inner ear’s orientation sensors to help infer this. This is why car / sea / VR sickness exist. Since inferring poisonedness quickly is important, so the body can start forcing itself to throw up, having two sensors is better because… it’s more… fault-tolerant? Not sure. But maybe there’s something here.
Ahh. The correlations being dependent on inputs, but things appearing random to Alice and Bob, does seem trickier than whatever I was imagining was meant by quantum randomness/uncertainty. Don't fully have my head around it yet, but this difference seems important. Thanks!
Ahh. One is uncertain which world they’re in. This feels like it could address it neatly. Thanks!
Strong-downvoted.
She’s all over the EA and AI-related subreddits: /r/singularity, /r/artificial, /r/ArtificialIntelligence, /r/ChatGPT, /r/OpenAI, /r/Futurology
In other words, everywhere but here. Since that’s the case, it would be better to take your fight to those places. Ms. Woods’s only post on Less Wrong in the past year was a short notice about o3 safety testing sign-ups, which was unobjectionable.
I don’t like the vibes.
I was thinking the same thing. This post badly, badly clashes with the vibe of Less Wrong. I think you should delete it, and repost to a site in which catty takedowns are part of the vibe. Less Wrong is not the place for it.
I expect our intuitions about objective randomness would clash quite violently! My own intuition revolts at even the phrase itself :)
I looked into it a bit, and understand it to mean that one must accept one of:
- Physical uncertainty exists
- Non-locality exists
- Quantum mechanics is wrong

Is that breakdown correct?
Been thinking about your answer here, and still can’t decide if I should view this as solving the conundrum, or just renaming it. If that makes sense?
Are the weights of quantum configurations, though they may not be probabilities, similar enough in concept to still imply that physical, irreducible uncertainty exists?
I’ve phrased this badly (part of why it took me so long to actually write it) but maybe you see the question I’m waving at?
Hm - reading Ben’s linked comment, it seems to me that the thrust is that negative probabilities must be admitted. But I don’t understand how that is related to the map vs. territory / probability-in-the-mind-or-physical distinction?
Like, “one must modify the relevant functions to allow negative probabilities” seems consistent with “probability is in the mind”, since functions are a part of the map, but it seems you consider it a counterexample! So I find myself confused.
>In other words, you don’t need reality to be i.i.d.; you simply need to structure your beliefs in a way that allows an “as if” i.i.d. interpretation.
I think I view exchangeability vs. iid slightly differently. In my view, the “independence” part of iid is just way too strong, and is not required in most of the places people scatter the acronym “iid”.
For example, say you are catching fish in a lake, and you know only bass and carp live in the lake, and that there are a ton of fish in it, but not how many of each, and you’re trying to estimate the proportion of carp as you catch fish.
When I catch a carp, the probability that my next catch is a carp goes up. So my probability is dependent on my previous catches - that’s why I can learn things about the proportion! If they were indeed independent, then I couldn’t learn anything. But happily, the correct requirement is not independence, but exchangeability, so I can still update my beliefs as I see more fish.
However, I may just be confused about “iid” as classicists use it, since I never properly learned classic statistics. Interested in what you think about the difference between the two in this example.
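To make the fish example concrete, here's a quick sketch under a uniform Beta(1,1) prior on the carp proportion (my modeling assumption, not anything from your post):

```python
from fractions import Fraction

def p_next_carp(catches, a=1, b=1):
    """Posterior predictive P(next catch is carp) under a Beta(a, b) prior.
    `catches` is a list of 1 (carp) and 0 (bass)."""
    carp = sum(catches)
    return Fraction(a + carp, a + b + len(catches))

print(p_next_carp([]))         # 1/2 before any data
print(p_next_carp([1]))        # 2/3 after one carp: not independent!
print(p_next_carp([1, 0, 1]))  # 3/5 - and the order...
print(p_next_carp([0, 1, 1]))  # 3/5 - ...doesn't matter: exchangeable
```

Catching a carp raises the probability the next catch is a carp (so the catches aren't independent), but any reordering of the same catches gives the same answer (so they are exchangeable) - which is exactly what lets the updating work.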
Thanks!
Thanks for putting this together!
I have a vague memory of a post saying that taking zinc early, while the virus was replicating in the upper respiratory tract, was much more important than taking it later, because by then it would have spread all over the body and the zinc couldn’t get to it, or something like this. So I tend to take a couple early on and then stop. But it sounds like you don’t consider that difference important.
Is it your current (Not asking you to do more research!) impression that it’s useful to take zinc throughout the illness?
The post is an advertisement, without other content. I think a post of that type should only be on the site if it comes with some meat - an excerpt, at least. (And even then I’m not sure). The reader can’t even look up or read the book yet if he wanted to!
(There is a quote of the thesis of the book, but the text is stuff I’ve been rereading for years now. It feels like someone is always telling me liberalism is under threat recently.)
Interesting! The current Sonnet 3.5 agrees (for equivalent concentrations), for the same reason you've described, and I was about to update the essay with a correction, but then 4o argued that 1. formaldehyde is metabolized much more quickly, so has little time to do damage or build up, and 2. that it considers formic acid's inhibition of a critical enzyme (cytochrome c oxidase) in the mitochondrial electron transport chain to be pretty bad.
Or maybe a better summary of 4o's argument is "In equivalent concentrations, formaldehyde is worse, but the differences in rapidity of metabolization mean formic acid builds up more and causes more damage in real-life scenarios."
So I've linked your comment in the relevant section, sort of waving my hands and succumbing to both-sides-ism. Interested in what you think about the rapidity-of-metabolization argument.
Guesses: people see it as too 101 of a question; people think it’s too controversial / has been done to death many years ago; one guy with a lot of karma hates the whole concept and strong-downvoted it
I think the 101 idea is most likely. But I don’t think it’s a bad question, so I’ve upvoted it.
Years ago, a coworker and I were on a project with a guy we both thought was a total dummy, and worse, a dummy who talked all the time in meetings. We rarely expressed our opinion on this guy openly to each other - me and the coworker didn’t know each other well enough to be comfortable talking a lot of trash - but once, when discussing him privately after yet another useless meeting, my coworker drew in breath, sighed, looked at me, and said: “I’m sure he’s a great father.” We both laughed, and I still remember this as one of the most cutting insults I’ve heard.
Cheers!
I’d guess that weekend dips come from office workers, since they rarely work on weekends, but students often do homework on weekends.
If OP were advocating banning normal parties, in favor of only having cancellable parties, I would agree with this comment.
Appreciate it! Cheers.
A good post, of interest to readers all across the political spectrum, marred by the mistake at the end of becoming explicitly politically opinionated and saying bad things about those who voted differently than OP.
The integral was incorrect! Fixed now, thanks! Also added the (f * g)(x) to the equality for those who find that notation better (I've just discovered that GPT-4o prefers it too). Cheers!
Yes, I’m not so sure either about the stockfish-pawns point.
In Michael Redmond’s AlphaGo vs AlphaGo series on YouTube, he often finds the winning AI carelessly loses points in the endgame. It might have a lead of 1.5 or 2.5 points, 20 moves before the game ends; but by the time the game ends, has played enough suboptimal moves to make itself win by 0.5 - the smallest possible margin.
It never causes itself to lose with these lazy moves; only reduces its margin of victory. Redmond theorizes, and I agree, that this is because the objective is to win, not maximize point differential, and at such a late stage of the game, its victory is certain regardless.
This is still a little strange - the suboptimal moves do not sacrifice points to reduce variance, so it’s not like it’s raising p(win). But it just doesn’t care either way; a win is a win.
There are Go AI that are trained with the objective of maximizing point difference. I am told they are quite vicious, in a way that AlphaGo isn’t. But the most famous Go AI in our timeline turned out to be the more chill variant.
The quip about souls feels unnecessary and somehow grates on me. Something about putting an atheism zinger into the tag for cooking… feels off.
Would you be willing to share your ethnicity? Even as simple as “Asian / not Asian”?
I do think it has some of that feeling to me, yeah. I had to re-read the entire thing 3 or 4 times to understand what it meant. My best guesses as to why:
I felt whiplashed on transitions like “be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism”, since I don’t know him or that type of Humanism, but the sentence structure suggests to me that I am expected to know these. A possible rewrite could perhaps be “There are two projects I know of that aim to create a belief system that works with, instead of against, technology. The first is Marc Gafni; he calls his ‘Cosmo-Erotic Humanism’…”
There are some places I feel a colon would be better than a comma. Though I’m not sure how important these are, it would help slow down the pace of the writing:
“increasingly let go of faith in higher powers as a tenet of our central religion: secular humanism.” “But this is crumbling: the cold philosophy”
While minor punctuation differences like this are usually not too important, the way you wrote gives me a sense of, like, too much happening too fast: “wow, this is a ton of information delivered extremely quickly, and I don’t know what appolonian means, I don’t know who Gafni is, or what dataism is…” So maybe slowing down the pace with stronger punctuation like colons is more important than it would otherwise be?
Also, phrases like “our central religion is secular humanism” and “mystical true wise core” read as very Woo. I can see where both are coming from - I’ve read a lot of Woo myself - but I think many readers would bounce off these phrases. They can still be communicated, but perhaps with something like “in place of religion, many have turned to Secular Humanism. Secular humanism says that X, Y, Z, but has no concept of a higher power. That means the core motivation that…”
(To be honest I’ve forgotten what secular humanism is, so this was another phrase that added to my feeling of everything moving too fast, and me being lost).
There are some typos too.
So maybe I’d advise making the overall piece of writing slower, by giving more set-up each time you introduce a term readers are likely to be unfamiliar with. On the other hand, that’s a hassle, and probably annoying to do in every note, if you write on this topic often. But it’s the best I’ve got!
I read this book in 2020, and the way this post serves as a refresher and different look at it is great.
I think there might be some mistakes in the log-odds section?
The orcs example starts:
>We now want to consider the hypothesis that we were attacked by orcs, the prior odds are 10:1
Then there is a 1/3 wall-destruction rate, so orcs should be more likely in the posterior, but the post says:
>There were 20 destroyed walls and 37 intact walls… corresponding to 1:20 odds that the orcs did it.
We started at 10:1 (likely that it’s orcs?), then saw evidence suggesting orcs, and ended up with a posterior quite against orcs. Which doesn’t seem right. I was thinking maybe “10:1” for the prior should be “1:10”, but even then, going from 1:10 in the prior to 1:20 in the posterior, when orcs are evidenced, doesn’t work either.
All that said, I just woke up, so it’s possible I’m all wrong!
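For concreteness, here's the direction I'd expect the arithmetic to go, using the quoted 10:1 prior and an illustrative likelihood ratio (the 1/10 non-orc destruction rate is my invention for the example, not from the post):

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = Fraction(10, 1)  # 10:1 for orcs, as quoted
# Say orcs destroy walls at rate 1/3 and other causes at 1/10 (made up);
# then each destroyed wall multiplies the odds for orcs by (1/3)/(1/10).
lr_destroyed = Fraction(1, 3) / Fraction(1, 10)
print(posterior_odds(prior, lr_destroyed))  # 100/3: odds for orcs go UP
```

Whatever the exact likelihood ratio, evidence favoring orcs has a ratio above 1 and can only push the odds further toward orcs - it can't take 10:1 down to 1:20, which is why the quoted numbers look inconsistent to me.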
In Korea every convenience store sells “hangover preventative”, “hangover cure drink”, with pop idols on the label. Then you come back to America and the instant you say “hangover preventative”, people look at you crazy, like no such thing could possibly exist or help. I wonder how we got this way!
Thanks for your review! I've updated the post to make the medications warning be in italicized bold, in the third paragraph of the post, and included the nutrient warning more explicitly as well.