Comments

Comment by Idan Arye on Being Moral is an end goal. · 2022-03-09T18:22:57.937Z · LW · GW

Doesn't society already consider it immoral to go to crowded places untested when you suspect you have COVID? This is not just about a specific detail of this specific story - one important function of morality is preventing humans from convincing themselves that the thing they want to do is the utilitarian choice. We decided that going untested is immoral precisely because of people like Alice, who avoid testing themselves for such reasons.

Instead of morality, I think what Alice seeks here is deniability. If Alice does not take the test, she can convince herself (and possibly others?) that the probability of her sore throat indicating COVID is as low as she wants. No one else can really tell how bad it was - certainly not days later, when the thing is discovered - she may even claim that she felt nothing unusual. She is still immoral, but she can at least convince herself that she has done nothing wrong.

Comment by Idan Arye on Nuanced and Extreme Countersignaling · 2022-01-25T16:48:47.440Z · LW · GW

I interpret the engagement with conservative ideas Scott's describing a little more straightforwardly. Lots of people are inundated with Mrs. Grundy leftist takes on social media. They're smart enough to try and figure out what they really think. So they say things like “Oh, I heard about that guy in South Carolina. Instead of knee-jerk condemnation, let’s try to form some general principles out of it and see what it teaches us about civil society.”

This isn't countersignaling. It's just signaling. This isn't making fun of anybody, and it's calling for straightforward civil discourse in terms nobody could possibly mistake for anything else.

I notice that a key characteristic of signaling vs countersignaling is that for regular signaling you are paying with resources, while for countersignaling you are paying with risk. That is - the credibility of regular signals is derived from false signals being more costly than true signals, so it's harder to justify sending them falsely. The credibility of countersignals comes from the risk of the signal backfiring, which should have a greater probability when the signal is false, deterring false signalers.

Calling for civil discourse is not a signal that requires much cost to send. So if it were a regular signal, it would be at most a virtue signal. But even though it comes "in terms nobody could possibly mistake for anything else", it can still backfire. If you are perceived as a person who often gets emotional in discussions and engages in personal attacks and general demagogy, calling for civil discourse can paint you as a hypocrite who only wants civil discourse when it fits their agenda. If you consistently discuss civilly, the risk of that happening is much lower.
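
To make the cost-vs-risk distinction concrete, here is a toy expected-payoff comparison. All the numbers are made-up assumptions, just to show the two mechanics:

# Toy comparison of the two credibility mechanisms described above.
# All numbers are illustrative assumptions, not measurements.
benefit = 10          # value of being believed

# Regular signaling: credibility comes from cost.
cost_true, cost_false = 3, 12   # faking is assumed to be more costly
print("regular, true: ", benefit - cost_true)    # 7  -> worth sending
print("regular, false:", benefit - cost_false)   # -2 -> not worth faking

# Countersignaling: credibility comes from backfire risk.
backfire_penalty = 20
p_backfire_true, p_backfire_false = 0.05, 0.6    # faking backfires more often
print("counter, true: ", benefit - p_backfire_true * backfire_penalty)   # 9
print("counter, false:", benefit - p_backfire_false * backfire_penalty)  # -2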

So, this may not be a strict, by-definition countersignal - after all, the naive interpretation of the signal is exactly what you are trying to signal - but I still find its mechanics much closer to countersignals than to traditional signals.

Comment by Idan Arye on NFTs, Coin Collecting, and Expensive Paintings · 2022-01-24T21:07:33.233Z · LW · GW

Rarity, by its very nature, cannot be too abundant. The more plentiful it becomes, the more it loses its defining property. There is only one original Mona Lisa, but every NFT project spits out a combinatorial number of images all built from a small number of assets and pretends they are all rare.

Each NFT is indeed unique, but since there are tens of thousands of similarly unique NFTs - most of them are not really rare. One could claim that rare paintings are the same - that if NFTs are not rare because there are other NFTs, then by the same logic the Mona Lisa should also not be rare because there are other paintings. But the Mona Lisa's rarity is real, because no other painting is valued like it. The second most valuable painting, The Last Supper, is also very rare and valuable - but it's not the Mona Lisa.

If you try to make the same claim about NFTs - e.g. "Bored Ape #4444 is rare, but it is not Bored Ape #5555" - I'd reply that this claim can be reversed - "Bored Ape #5555 is rare, but it is not Bored Ape #4444". This doesn't work for the Mona Lisa and The Last Supper - "The Mona Lisa is rare, but it's not The Last Supper" doesn't make sense, because the Mona Lisa is the most famous painting in the world. The Last Supper is more famous than any painting other than the Mona Lisa, which makes it still very rare and valuable, and while the ordering is not always precise and can vary from evaluator to evaluator, there can usually be a rough estimation of "about <ballpark> other paintings are at least as famous as this one". The lower that estimation - the rarer and more valuable the painting.

Are there NFTs that are more rare than any other NFT? Probably. But there can't be that many of them - certainly not as many as the NFT advocates pretend there are.

And this is enough to explain why they are looked down upon, but not enough to explain why they are ridiculed and even hated.

My hypothesis is that the hatred they have been receiving recently is related to Facebook's big announcement about the metaverse!

Rarity is a finite resource, but that does not mean its total amount is some physical constant. The thing that determines how much rarity there is to split between the rare things is the number of people interested in it, and the amount of resources and attention they are willing to put into that market (that's, of course, a simplification - because there are many intersecting markets. But it's good enough for this discussion). In order to get more people into NFTs and increase the value of existing NFTs (assuming the demand increases faster than supply), the NFT investors need to get non-investors into that market.

Facebook coined the term "metaverse", and changed their name to indicate how serious they are about it, but there is still very little consensus about what exactly the metaverse is going to be. The NFT people see this as an opportunity - they want to push for metaverse assets to be managed by NFTs. They are already working on getting profile pictures (how are profile pictures related to the metaverse? Well, we can't agree on what the metaverse is, so could you really say they are not?) to be NFT verified. They want the VR aspects of the metaverse to also be NFTs - so if you get a sword in a game, you get the NFT of it, so that it's registered on the blockchain as yours. Same for virtual land, same for everything else.

And these are all things that people just assumed would be free. Or at least nearly free. And there is no reason for them not to be free. But the NFT guys want you to pay for your virtual things, spending more computing power on verifying your ownership than on their graphics and physics, just so some other avatar won't have the same instance of a virtual shirt as your avatar.

This is an obvious scam that people can see from miles away - and they are pushing back by attacking NFTs and crypto on social networks.

Comment by Idan Arye on SSC Journal Club: AI Timelines · 2021-10-28T15:31:51.732Z · LW · GW

The headline result: the researchers asked experts for their probabilities that we would get AI that was “able to accomplish every task better and more cheaply than human workers”. The experts thought on average there was a 50% chance of this happening by 2062 – and a 10% chance of it happening by 2026!

But on its own this is a bit misleading. They also asked by what year “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”. The experts thought on average that there was a 50% chance of this happening by 2139, and a 20% chance of it happening by 2037.

As the authors point out, these two questions are basically the same – they were put in just to test if there was any framing effect. The framing effect was apparently strong enough to shift the median date of strong human-level AI from 2062 to 2139. This makes it hard to argue AI experts actually have a strong opinion on this.

These are not the same.

The first question sounds like an AGI - a single AI that can just do anything we tell it to do (or anything it decides to do?) without any further development effort by humans. We'll just need to provide a reasonably specified description of the task, and the AI will learn on its own how to do it - by deducing it from the laws of physics, or by consuming existing learning resources made for humans, or by trial and error, or whatever.

The second question does not require AGI - it's about regular AIs. It requires that for whatever task done by humans, it would be possible to build an AI that does it better and more cheaply. No research into the unknown would need to be done - just utilization of established theory, techniques, and tools - but you would still need humans to develop and build that specific AI.

So, the questions are very different, and different answers to them are expected, but... shouldn't one expect the latter to happen sooner than the former?

Comment by Idan Arye on Self-Integrity and the Drowning Child · 2021-10-27T21:04:11.584Z · LW · GW

I see. So essentially demandingness is not about how strong the demand is but about how much is being demanded?

Comment by Idan Arye on Self-Integrity and the Drowning Child · 2021-10-27T20:39:26.112Z · LW · GW

I think the key to the drowning child parable is the ability of others to judge you. I can't judge you for not donating a huge portion of your income to charity, because then you'll bring up the fact that I don't donate a huge portion of my own income to charity. Sure, there are people who do donate that much, but they are few enough that it is still socially safe to not donate. But I can judge you for not saving the child, because you can't challenge me for not saving them - I was not there. This means that not saving the child poses a risk to your social status, which can greatly tilt the utility balance in favor of saving them.

Comment by Idan Arye on Self-Integrity and the Drowning Child · 2021-10-27T20:31:11.259Z · LW · GW

Could you clarify what you mean by "demandingness"? Because according to my understanding the drowning child should be more demanding than donating to AMF, because the situation demands that you sacrifice to rescue them, unlike AMF, which does not place any specific demands on you personally. So I assume you mean something else?

Comment by Idan Arye on A Modern Myth · 2021-10-11T09:30:42.686Z · LW · GW

If Heracles was staring at Hermes' back, shouldn't he have noticed the Eagle eating his liver?

Comment by Idan Arye on The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible · 2021-10-07T16:58:43.966Z · LW · GW

Wait - but if you can use population control to manipulate the global utility just by changing the statistical weights, isn't it plain average utilitarianism instead of the fancier negative preference kind?

Comment by Idan Arye on Burdens · 2021-10-05T16:38:21.398Z · LW · GW

This also relates to your thrive/survive theory. A society in extreme survive mode cannot tolerate "burdens" - it needs 100% of the populace to contribute. Infants may be a special exception for the few years until they can start contributing, but other than that, if you can't work - for whatever reason - you die, because if society has to allocate more utility to you than you can give back, it loses utility and dies. This is extreme survive mode; there is no utility to spare.

As we move thriveward, we get more and more room for "burdens". We don't want to leave our disabled and elderly to die once they are no longer useful - we only had to do that in extreme survive mode, but now that we have some surplus we want to use it to avoid casting away people who can't work.

This presents us with a problem - if we can support a small number of people who can't work, it means we can also support a small number of people who don't want to work. Whether or not it's true, the ruling assumption to this very day is that, if left unchecked, enough lazy people will take up that opportunity that the few still willing to work will crumble under their weight.

So we need to create mechanisms for selecting the people who will get more support than they contribute. At first it's easy - we don't have that much slack anyway, so we just pick the obvious cases, like elders and the visibly disabled. These things are very hard to fake. But eventually we run out of those, and can afford to give slack to less and less obvious disabilities, and even to people who just ran out of luck - e.g. lost their job and are having trouble getting a new one, or need to stay home to take care of family members.

And these things are much easier to fake.

So we do still try to identify these lazy people and make them work, but we also employ deterrents to make faking less desirable. Lower living conditions are a naturally occurring deterrent, and on top of that society adds shame and lower social status. If you legitimately can't work, there is not much you can do about it, so you suffer through these deterrents. If you are just lazy, it might be better to work anyway, because while not working won't get you killed, it will still get you scornful looks, disrespect, and that shameful feeling of being a burden on society.

This has false negatives and false positives, of course, but overall it was a good enough filter to let society live and prosper without throwing out too many unfortunate members.

But... thanks to this mechanism, working became a virtue.

This was useful for quite a while, but it makes it harder to move on. If it's shameful not to work, and everyone who doesn't have a special condition has to work, then society needs to guarantee enough work for everyone, or we'll have a problem. Instead of having to conserve the little slack we have and carefully distribute it, we now need to find ways to get rid of all that slack, because people need to feel useful.

(note that this is a first world problem. Humanity is spread out on the thrive/survive axis, and there are many places where you still need to work to survive, and not just to feel good about yourself)

Some of the methods we use to achieve that are beneficial (as long as they don't screw up, as they sometimes do) - letting kids study until somewhere in their twenties, letting people retire while they still have some vitality left, letting people have days off and vacations, etc. But there are also wastes for the sake of waste, like workfare or overproducing, which we only do because work is a virtue and we need to be virtuous.

At some point technology will get far enough that we'll be able to allow a majority of the populace to not work. Some say we are already there. So we need to get out of this mentality fast - because we can't let too many people feel like they are a burden on society.

I'm... not really sure how that "virtue" can be rooted out...

Comment by Idan Arye on Burdens · 2021-10-05T13:26:16.451Z · LW · GW

I came to a similar conclusion from a different angle. Instead of the past, I considered the future - specifically the future of automation. There is a popular pessimistic scenario of machines taking up human jobs making everyone - save for the tycoons who own the machines - unable to provide for themselves. This prediction is criticized by pointing out that automation in the past created better jobs, replacing the ones it took away. Which is countered by claiming that past automation was mainly replacing our muscles, but now we are working on automation that replaces our brains, which will make humans completely obsolete. And now that I read this post, I realize that these better jobs created by automation left many people behind, so wouldn't better automation leave even more people behind?

So developing automation has ethical problems - even if it benefits society as a whole, is it really okay to sacrifice all these people to attain it?

My ethical framework is based on Pareto efficiency - solutions are only morally acceptable if they are Pareto improvements. I wouldn't call it "fully consistent", because it raises the question of "Pareto improvement compared to what?" and by cleverly picking the baseline you can make anything moral or immoral as you wish. But if you can hand-wave that fundamental issue away it forms this vague basic principle:

A solution where everyone benefits is preferable to a solution where some are harmed, even if the total utility of the latter is higher than that of the former.

Sometimes the difference in total utility is very big, and it seems like a waste to throw away all that utility. Luckily real life is not a simple game theory scenario with a fixed and very small number of strategies and outcomes. We have many tools to create new strategies or just modify existing ones. And if we have one outcome that generates a huge surplus at the expense of some people, we can just take some of that surplus and give it to them, to create a new outcome where we have it all - every individual is better off and the total utility is greatly increased.

Even if a solution without the surplus division can result in more utility overall, I'd still prefer to divide the surplus just so no one will have to get hurt.

And this is where UBI comes in - use a small portion of that great utility surplus we get from automation to make sure even the people who lose their jobs end up at a net benefit.
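
As a toy numeric illustration of that surplus-division idea (the payoffs and the transfer size are made-up assumptions):

# Toy illustration, numbers assumed: redistribution turns a high-total
# outcome that harms some players into a Pareto improvement.
status_quo = {"workers": 10, "owners": 10}          # total 20
automation = {"workers": 2,  "owners": 40}          # total 42, workers harmed
transfer = 12                                       # e.g. a UBI-style payment
with_ubi = {"workers": automation["workers"] + transfer,
            "owners":  automation["owners"] - transfer}

print(with_ubi)                # {'workers': 14, 'owners': 28}
print(sum(with_ubi.values()))  # 42: total surplus kept, everyone better off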

But if we apply this to the future, why not apply it to the present as well? Use the same principle for the people who already got hurt due to automation?

Comment by Idan Arye on EA Hangout Prisoners' Dilemma · 2021-10-03T17:30:22.681Z · LW · GW

Why ? The participants may have a preference for one nonprofit over the other, but surely - all else being equal - they should prefer their less favorite nonprofit to get money over it getting nothing.

I'd go even farther - this is charity, so instead of a social outcome which is the sum of the players' utility, the individual utilities here are applications of the players' value functions to the social outcome. Even if you prefer one nonprofit over the other - do you prefer it enough to relinquish these extra $100? Do you think your favorite charity can do more with $100 than your second favorite can do with $200?

I don't think so. We have  here - and overall .

For most games it's clear what counts as cooperation and what counts as defection. For BoS - not so much. Your classification relies on that labeling (otherwise you could switch W with Z and X with Y), and since we can't use them here I'll just fix the labeling so that cooperation is always the strategy that, when chosen by both players, is better than the other strategy when chosen by both.

So - in BoS, cooperation is doing what you already wanted to do, and you hope for your spouse to defect. The order is , which is not exactly our case but closer than any other game I can think of.
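
To make the ordering concrete, here is a toy BoS payoff matrix under the labeling I fixed above - cooperation is going to your own favorite event. The numbers are my own illustrative assumptions:

# A toy payoff matrix for Battle of the Sexes under the labeling fixed above:
# "cooperate" = go to your own favorite event. All numbers are illustrative.
payoffs = {
    # (row move, column move): (row payoff, column payoff)
    ("C", "C"): (1, 1),  # both insist on their own favorite - apart, mild loss
    ("D", "D"): (0, 0),  # both give in - apart again, nobody gets their pick
    ("C", "D"): (3, 2),  # column gives in - together at row's favorite
    ("D", "C"): (2, 3),  # row gives in - together at column's favorite
}

# cooperate-cooperate beats defect-defect, matching the fixed labeling:
assert payoffs[("C", "C")] > payoffs[("D", "D")]
# each player does best when they cooperate and the spouse defects:
assert payoffs[("C", "D")][0] == max(p[0] for p in payoffs.values())
assert payoffs[("D", "C")][1] == max(p[1] for p in payoffs.values())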

Comment by Idan Arye on EA Hangout Prisoners' Dilemma · 2021-09-30T12:23:58.306Z · LW · GW

You also need to only permit people who took part in the negotiations to launch nukes. Otherwise newcomers could just nuke without anyone having had a chance to establish a precommitment to retaliate against them.

Comment by Idan Arye on EA Hangout Prisoners' Dilemma · 2021-09-29T17:20:08.182Z · LW · GW

Oh. Good point. Maybe it would be interesting to do a version where you can't retract a counter nuke after the original nuke's 20 minutes.

Either way, I think the 20 minute rule is important for even talking about precommitting. Without it, people can chat and make contracts all they want, only for someone completely uninvolved in the conversation to suddenly post a nuke comment.

Comment by Idan Arye on EA Hangout Prisoners' Dilemma · 2021-09-29T16:41:03.484Z · LW · GW

The ability to cancel launches makes the game effectively simultaneous, because it means you can't commit (at least not under the explicit rules).

Comment by Idan Arye on EA Hangout Prisoners' Dilemma · 2021-09-29T15:19:23.529Z · LW · GW

If we are looking for a known game structure with a formal name, I'd say it's Battle of the Sexes: a defect-cooperate outcome is preferable to both defect-defect and cooperate-cooperate, but each side would rather be the defector in that outcome.

Comment by Idan Arye on Why didn't we find katas for rationality? · 2021-09-15T00:18:10.913Z · LW · GW

I wonder, though - maybe there are some rationality skills that do benefit from repetitive practice? Overcoming bias comes to mind - even after you recognize a bias, it sometimes still takes mental energy to resist its temptation. Maybe katas could help there?

Comment by Idan Arye on Moloch's Toolbox (1/2) · 2021-09-02T18:55:12.615Z · LW · GW

visitor:  Hold on, I think my cultural translator is broken. You used that word “doctor” and my translator spit out a long sequence of words for Examiner plus Diagnostician plus Treatment Planner plus Surgeon plus Outcome Evaluator plus Student Trainer plus Business Manager. Maybe it’s stuck and spitting out the names of all the professions associated with medicine.

This actually sounds a bit similar to how Scott Alexander described hospital pipelines. Sure - real life is not as efficient as the visitor's homeworld, and medical doctors still go through maybe too much training, but there is still specialization going on, with each professional dealing with what they are proficient at.

Comment by Idan Arye on Death by a Single Cut · 2021-08-29T19:12:58.363Z · LW · GW

Beliefs are quantitative, not qualitative. The more evidence you pile in favor of a claim, the stronger your confidence in it should be. Observing that there is no monkey is much stronger evidence than the geography-based argument, and it's probably enough, but the belief is not binary, so having both arguments should result in a higher probability than having just one argument, no matter how much stronger that single argument is.

In practice, think about it this way - what if the monkey heard you coming and managed to hide so well that you couldn't find it even after looking? This is a very unlikely scenario, but still a possibility - and it's less likely to happen in the Pacific Northwest than in, say, India. So the geographic argument reduces the probability of the hidden monkey scenario - even if only by a little bit - and thus increases the overall probability of having a monkeyless closet.
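
In odds form, the update looks like this (the likelihood ratios are made-up assumptions; the point is only that they multiply):

# Illustrative odds-form Bayes update: likelihood ratios from independent
# pieces of evidence multiply, so even a weak argument shifts the posterior.
prior_odds = 1 / 100   # assumed prior odds of a monkey in the closet

lr_looked = 1 / 50     # looked and saw no monkey: strong evidence against
lr_geography = 1 / 2   # Pacific Northwest, not India: weak evidence against

posterior_strong_only = prior_odds * lr_looked
posterior_both = prior_odds * lr_looked * lr_geography

print(posterior_strong_only)  # 0.0002
print(posterior_both)         # 0.0001 - strictly lower with both arguments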

Comment by Idan Arye on A Philosopher Walks Into A Coffee Shop · 2021-08-23T13:14:33.596Z · LW · GW

Rene Descartes goes up to the counter. “I’ll have a scone,” he says. “Would you like juice with that?” asks the barista. “I think not,” says Descartes, and he ceases to exist.

I can't believe you missed an opportunity to do an "I drink, therefore I am" joke...

Comment by Idan Arye on When Programmers Don't Understand Code, Don't Blame The User · 2021-08-19T15:25:46.248Z · LW · GW

I think this lesson extends beyond the scope of programming, even beyond the more general scope of technology. We should not be too humble before complicated, hard-to-understand things. We should not be too quick to assume the fault is in our inability to comprehend them. We should always consider the possibility that it's their fault for being needlessly complicated, or even just plain nonsense.

I've seen some essays (often in the area of philosophy and/or religion) that - I believe - try to take advantage of that humility. They support their argument with cryptic, cumbersome and confusing reasoning that seems to me like an attempt to force their would-be challengers to give up on the discourse for failing to understand it. Their supporters, of course, can remain - they are not trying to disprove the argument, so they don't really need to understand it.

To fight this mentality, we need to give more credit to ourselves. Is the person making the argument smarter than us? Maybe. Does their intelligence exceed our own so much that they can create coherent arguments we cannot understand no matter how hard we try? Very unlikely. Maybe not outright impossible, but the probability is low enough that we should insist on the argument being flawed even when they try to convince us we simply fail to understand it.

Comment by Idan Arye on The Case Of The Suffocating Woman · 2021-08-05T16:42:35.989Z · LW · GW

(on the other hand, the inability to condition yourself seems relevant here. It seems like the brain might be not be controlling for whether something is reasonable, but only for whether something is produced by yourself. So maybe exercise counts because it’s under your control, but waterboarding doesn’t count because it isn’t. I wonder if anyone has ever tried letting someone waterboard themselves and giving them the on-off switch for the waterboarding device. Was Hitchens’ experience close enough to this to count? Why would this be different from letting someone hold their breath, which doesn’t produce the same level of panic?)

 

Hypothesis: the difference is in the failure mode.

If you hold your breath, you can always choose (assuming you are not underwater or in some other environment that prevents proper breathing) to stop holding your breath and save yourself from suffocating. If you are being waterboarded by friendly demonstrators, you can say the safe word and save yourself from "drowning". These may seem the same, but they are not. In both cases, the longer you hold out the weaker you get - but the way that weakening affects your ability to stop the ordeal is very different.

The longer you hold your breath, the harder it gets to keep holding it - until at some point you are no longer able to hold it and are forced to breathe. Even if you can keep holding your breath past that point - you are just going to pass out, and then you'll switch to autopilot and breathe automatically. Unless you suffer from Ondine's curse, failure will not kill you.

With waterboarding, if you become too weak to properly signal the "torturers" to stop - they won't stop. Sure, in Hitchens' case they noticed that he passed out and stopped. Because they were professionals. But this is probably too high-level for your subconscious and your body to rely on - as far as they care, failure can mean death. Holding out as long as you can is no longer a safe option - so your body will scream at you to stop it as soon as possible.

Comment by Idan Arye on "If and Only If" Should Be Spelled "Ifeff" · 2021-07-16T22:51:09.476Z · LW · GW

I'd argue that people who are not familiar with "iff" are usually unfamiliar with its full version "if and only if" as well, and, unaware of the need for such a distinction, tend to treat regular "if" as bidirectional. These two mistakes will cancel each other out, and they won't miss anything key.
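
For reference, the two readings as truth tables (a minimal sketch; p and q are arbitrary propositions):

# Truth tables for one-directional "if" vs the biconditional "iff".
# A reader who treats plain "if" as bidirectional reads the first column
# as the second - which happens to be what "iff" actually meant.
for p in (False, True):
    for q in (False, True):
        p_if_q = (not q) or p    # "p if q", i.e. q -> p
        p_iff_q = (p == q)       # "p if and only if q"
        print(f"p={p!s:5} q={q!s:5}  if: {p_if_q!s:5}  iff: {p_iff_q!s:5}")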

Comment by Idan Arye on The Control Group Is Out Of Control · 2021-06-23T16:38:59.977Z · LW · GW

January 2021 witnessed the GameStop short squeeze, where many small investors, self-organized via Reddit, bought a stock in order to hold it and cause financial damage to several hedge funds that had shorted it. It was all over the news, and was eventually defused when the brokerage companies sold their clients' stocks without their consent.

This resolution triggered great outrage. The traders and their supporters claimed that hedge funds had been toying with the economy for a long time, ruining companies and the families who depended on them, and it was considered okay because they played by the rules. Now that the common folk played by the same rules - the rules were changed so that they cannot play.

(To be fair - the brokerage companies that sold their clients' stocks did have legal standing to do so. But this is just an anecdote for my main point, so I'd rather not delve into this technicality)

This post was written years before that, but the sentiment is timeless. Is it really okay to constantly change the rules of science just to deny access to a certain group?

Comment by Idan Arye on Debunked And Well-Refuted · 2021-06-16T23:23:23.975Z · LW · GW

If you've never acknowledged that other study, there is a possibility that you'll consider it objectively once introduced to it.

Comment by Idan Arye on Don't Sell Your Soul · 2021-04-09T01:06:28.147Z · LW · GW

Section IV, clause A:

Buyer and Seller agree that the owner of the Soul may possess, claim, keep, store, offer, transfer, or make use of it in whole or in part in any manner that they see fit to do so, conventional or otherwise, including (but not limited to) the purposes described in this Section (IV). Example uses of the Soul which would be permitted under these terms include (but are not limited to):

  • ...
  • Long term storage, usage, or preservation of the Soul in a state which would prevent it from taking the course of development, evolution, or relocation it may otherwise take naturally or due to the actions or material status of the Seller.

Am I interpreting it wrong, or is this clause permitting the buyer to kill the seller?

Comment by Idan Arye on Strong Evidence is Common · 2021-03-14T18:58:41.736Z · LW · GW

Isn't that the information density for sentences? With all the conjunctions, and with the limited number of different words that can appear in each place in a sentence, it's not that surprising we only get 1.1 bits per letter. But names should be more information-dense - maybe not the full 4.7 (because some names just don't make sense), but at least 2 bits per letter, maybe even 3?

I don't know where to find (or how to handle) a big list of full names, so I'm settling for the (probably partial) lists of first names from https://www.galbithink.org/names/us200.htm (picked because the plaintext format is easy to process). I wrote a small script: https://gist.github.com/idanarye/fb75e5f813ddbff7d664204607c20321 
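
For reference, here is a minimal sketch of the kind of computation involved - my reconstruction from the output below, with the download and parsing omitted, assuming we already have a name-to-count mapping:

import math

def entropy_stats(counts):
    """counts: mapping from name to number of occurrences (assumed input)."""
    total = sum(counts.values())
    probs = {name: c / total for name, c in counts.items()}

    # Shannon entropy of the name distribution, in bits per name.
    entropy_per_name = -sum(p * math.log2(p) for p in probs.values())
    # Average name length, weighted by frequency.
    avg_length = sum(len(name) * p for name, p in probs.items())
    entropy_per_letter = entropy_per_name / avg_length
    print(f"Entropy per letter: {entropy_per_letter}")

    # Surprisal of each name, and the length an "ideal" encoding would need.
    for name, p in sorted(probs.items(), key=lambda kv: -kv[1])[:5]:
        bits = -math.log2(p)
        ideal = math.ceil(bits / entropy_per_letter)
        print(f"{name}: {bits:.2f} bits, ideal length {ideal}, actual {len(name)}")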

When I run it on the list of female names from the 1990s I get this:

$ ./names_entropy.py https://www.galbithink.org/names/s1990f.txt
Entropy per letter: 1.299113499617074

Any of the 5 rarest name are 1:7676.4534883720935
Bits for rarest name: 12.906224226276189
Rarest name needs to be 10 letters long
Rarest names are between 4 and 7 letters long

#1 Most frequent name is Christin, which is 8 letters long
Christin is worth 5.118397576228959 bits
Christin would needs to be 4 letters long

#2 Most frequent name is Mary, which is 4 letters long
Mary is worth 5.380839995073667 bits
Mary would needs to be 5 letters long

#3 Most frequent name is Ashley, which is 6 letters long
Ashley is worth 5.420441711983749 bits
Ashley would needs to be 5 letters long

#4 Most frequent name is Jesse, which is 5 letters long
Jesse is worth 5.4899422055346445 bits
Jesse would needs to be 5 letters long

#5 Most frequent name is Alice, which is 5 letters long
Alice is worth 5.590706018293878 bits
Alice would needs to be 5 letters long

And when I run it on the list of male names from the 1990s I get this:

$ ./names_entropy.py https://www.galbithink.org/names/s1990m.txt
Entropy per letter: 1.3429318549784128

Any of the 11 rarest name are 1:14261.4
Bits for rarest name: 13.799827993443198
Rarest name needs to be 11 letters long
Rarest names are between 4 and 8 letters long

#1 Most frequent name is John, which is 4 letters long
John is worth 5.004526222833823 bits
John would needs to be 4 letters long

#2 Most frequent name is Michael, which is 7 letters long
Michael is worth 5.1584658860672485 bits
Michael would needs to be 4 letters long

#3 Most frequent name is Joseph, which is 6 letters long
Joseph is worth 5.4305677416620135 bits
Joseph would needs to be 5 letters long

#4 Most frequent name is Christop, which is 8 letters long
Christop is worth 5.549228103371756 bits
Christop would needs to be 5 letters long

#5 Most frequent name is Matthew, which is 7 letters long
Matthew is worth 5.563161441124633 bits
Matthew would needs to be 5 letters long

So the information density is about 1.3 bits per letter. Higher than 1.1, but not nearly as high as I expected. But - the rarest names in these lists are about 1:14k, not 1:1m like the OP's estimation. Then again - I'm only looking at given names; surnames tend to be more diverse. But that would also give them higher entropy, so instead of trying to figure out how to scale everything, let's just go with the given names, which I have numbers for (for simplicity, assume these lists I found are complete)

So - the rare names are about half as long as the number of letters required to represent them. The frequent names are anywhere between the number of letters required to represent them and twice that amount. I guess that is to be expected - names are not optimized to be an ideal representation, after all. But my point is that the amount of evidence needed here is not orders of magnitude bigger than the amount of information you gain from hearing the name.

Actually, due to what entropy is supposed to represent, on average the amount of information needed is exactly the amount of information contained in the name.

Comment by Idan Arye on Strong Evidence is Common · 2021-03-14T12:49:05.898Z · LW · GW

The prior odds that someone’s name is “Mark Xu” are generously 1:1,000,000. Posterior odds of 20:1 implies that the odds ratio of me saying “Mark Xu” is 20,000,000:1, or roughly 24 bits of evidence. That’s a lot of evidence.

There are 26 letters in the English alphabet. Even if, for simplicity, our encoding ignores word boundaries and message endings, that's log2(26) ≈ 4.7 bits per letter, so hearing you say "Mark Xu" (6 letters) is about 28.2 bits of evidence in total - more than the 24 bits required.
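
A quick check of the arithmetic, using this naive fixed-length encoding:

import math

# Naive fixed-length code over 26 letters, ignoring letter frequencies.
bits_per_letter = math.log2(26)          # ~4.70 bits
print(bits_per_letter * len("MarkXu"))   # 6 letters -> ~28.2 bits
print(math.log2(20_000_000))             # ~24.25 bits of evidence needed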

Of course - my encoding is flawed. An optimal encoding should assign "Mark Xu" fewer bits than, say, "Rqex Gh" - even though they both have the same number of letters. And "Maria Rodriguez" should be assigned an even shorter message, even though it has more than twice the letters of "Mark Xu".

Measuring the amount of information given in messages is not as easy to do on actual real life cases as it is in theory...

Comment by Idan Arye on Defending the non-central fallacy · 2021-03-12T23:44:00.922Z · LW · GW

Realistically, how high would the tax burden have to be for you to accept those costs of secession?

France's 75% supertax, in effect in 2013-2014, made rich people secede, so we can take that as a supremum on the minimal tax burden that can make people secede. Of course - France's rich didn't have to go live in the woods - they had the option of moving to other countries. Also, they did not have the option of not going to any country, because all the land on Earth is divided between countries.

I agree that the main benefit for the rich of remaining under the state's rule and paying taxes is being able to do business with its citizens. And of course - being able to pass through the land - otherwise they won't be able to physically do said business. So the core question is:

Does the state have the right to prevent its citizens from doing business with whoever they want?

They exercise that power - that's a fact. They send the police to stop businesses that are not licensed by the state. But should this be considered an act of violence, or an act of protecting their property?

Comment by Idan Arye on Defending the non-central fallacy · 2021-03-12T00:53:11.322Z · LW · GW

I think there is some academic merit in taking this example to the extreme and assuming that the rich person is responsible for 100% of the community's resources, that they alone can fund its entire activity, and that if they secede the community is left with nothing. They can't protect people in their streets because they can't afford a police force. They can't punish criminals because they can't afford a prison. They may be left with their old roads, but without maintenance those quickly wear out while the rich person can build new ones. Their permission to do business means nothing because they have no means to enforce it (no police) - they can't even mount a credible embargo, because the rich person is the only one who can offer jobs and the only one who has goods to sell, so the incentive to break the embargo is huge. The rich person has all the power and zero incentive to give in to the community, which would take that power away and give back their "fair share" of it in return.

Of course - this extreme scenario never happens in real life, because in real life there are always alternatives. There are more rich people, to begin with, so no single rich person can hold all the power. People can start their own business, breaking the 100% dependency on the rich class from our example. And - maybe most importantly - modern society has a huge middle class that holds (as a socioeconomic class) a considerable share of the power.

So, a real-life rich person cannot have the full Shapley value of our hypothetical rich person, and the poor people's Shapley value is more than zero. Still - a rich person's Shapley value is much, much higher than a poor person's, and therefore there is a point where taxation is heavy enough to make it worthwhile for them to secede.
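
For intuition, here is a toy Shapley-value computation for a three-player version of this scenario. The characteristic function is a made-up assumption, just to show how the asymmetry falls out:

from itertools import permutations

# Toy coalition game: one "rich" player supplies all productive capital;
# poor players contribute a little labor on top. Values are illustrative.
players = ["rich", "poor1", "poor2"]

def value(coalition):
    # The community produces 100 only if the rich player participates;
    # each poor member then adds 10 of labor value on top of that.
    if "rich" not in coalition:
        return 0
    return 100 + 10 * (len(coalition) - 1)

def shapley(player):
    # Average marginal contribution over all join orders.
    total = 0
    perms = list(permutations(players))
    for order in perms:
        before = set(order[:order.index(player)])
        total += value(before | {player}) - value(before)
    return total / len(perms)

for p in players:
    print(p, shapley(p))   # rich: 110.0, poor1: 5.0, poor2: 5.0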

Comment by Idan Arye on Defending the non-central fallacy · 2021-03-11T22:55:39.722Z · LW · GW

I was replying to ShemTealeaf's claim that the rich person still has an incentive to stay - remaining under the protection of the community's court system. I was arguing that what the rich person needs from the community's court system is not its resources (which the rich person was providing anyway, and which would dry up once they secede) but its social norms - the people's agreement to respect its laws, which means they would not attack the rich person. My point is that if the rich person's incentive to stay is to not get robbed and killed by the community - then we can't really say that they are allowed to opt out.

Of course - if the poor people who remain in the community will not attack the rich person once they leave - then they are indeed allowed to opt out, but in that case their incentive to stay is gone.

Comment by Idan Arye on Defending the non-central fallacy · 2021-03-11T20:15:46.582Z · LW · GW

In this hypothetical scenario, the rich person was the sole source of funding for the community's services. Once they opt out, the community will no longer be able to pay the police, and since all the police salaries came from the rich person's pockets - the rich person will be able to use the same amount of money previously used to pay the police force to finance their own private security.

Same for all the other services the community was providing.

Of course, the community will still have all the infrastructure and equipment that was purchased with the rich person's taxes in the past, and the rich person will start with nothing - but this is just a temporary setback. In a few years the rich person will build new infrastructure and the community's infrastructure will not hold for long if they keep using it without being able to afford its maintenance.

This leaves us with the core community service the rich person was enjoying. The only service that does not (directly) cost money to provide. Social norms.

As you said - once the rich person opts out of the community, the members of the community are no longer obliged to refrain from robbing or killing them. And they have an incentive to do so. They may no longer be able to pay their police in the long run, but it'll take some time for all the cops to quit and it'll take some time for the rich person to build their own security force (unless they have prepared it in advance? They probably have), so if they act quickly enough they can launch an attack and have a good chance at winning. And even if they get delayed and the balance of armed forces shifts - large enough masses of poor people can take down the rich with their armed guards.

So this is what's going to stop the rich person from opting out. The threat of violence if they do so. In that light - can we still say they are allowed to opt out?

Comment by Idan Arye on Defending the non-central fallacy · 2021-03-10T16:43:27.967Z · LW · GW

Most[1] logical fallacies are obvious when arranged in their pattern, but when you encounter them in the wild they are usually transformed by rhetoric to mask that pattern. The "lack of rhetorical skills", then, may not be bad argumentation by itself - but it does help expose it. If a pickpocket is caught in the act, it won't help them to claim that they were only caught because they were not dexterous enough, and that it's unfair to put someone in jail for lack of skill. The fact remains that they tried to steal, and it would still be a crime if they were proficient enough to succeed. Similarly, just because one's rhetorical skills are not good enough to mask a bad argument does not make it a good argument.

A more important implication of my take on the nature of logical fallacies is that it is not enough to show that an argument fits the fallacy's pattern - the important part of countering it is showing how, when rearranged in that pattern, the argument loses its power to convince. If it still makes sense even in that form, the accusation of fallacy has failed.

Note that in all of Scott's examples, he never just said "X is a noncentral member of Y" and left it at that. He always said "we usually hate Y because most of its members share the trait Z, but X is not Z and only happens to be in Y because of some other trait W, which we don't have such strong feeling about".

So, if we take your first example (the one about eating meat) and fully rearrange it according to the noncentral fallacy pattern - not only with X and Y but also with Z and W - the counter-argument would look something like this:

It's true that animal farming (X) is technically cruelty (Y), but the central members of cruelty are things like torture and child abuse. What these things have in common is that they hurt humans (Z), and this is the reason why we should frown upon cruelty. Animal farming does not share that trait. Animal farming is only included in the cruelty category because it involves involuntary suffering (W) - a trait that we don't really care about.

Does this breakdown make the original argument lose its punch? Not really. Certainly not as much as breaking down the "MLK was a criminal" argument into the noncentral fallacy pattern makes that argument lose its punch. Here, at most, the breakdown exposes the underlying reasoning, and shifts the discussion from "whether or not meat is technically cruelty" to "to what extent do animals deserve to be protected from involuntary suffering".

Which is a good thing. I believe the goal of noticing logical fallacies is not to directly disprove claims, but to strip them of their rhetorical dressing and expose the actual argument underneath. That underlying argument can be bad, or it can be good - but it needs to be exposed before it can be properly discussed.


  1. I say "most", but the only exception I can think of is the proving too much fallacy. And even then - that's only because there is no common template like other fallacies have. But that doesn't mean that arguments that exhibit that fallacy cannot be transformed to expose it - in this case, to normalize the fallacy one has to reshape it into a form where the claim, instead of being a critical part of its logic, is just a placeholder that can contain anything and still make the same amount of sense.

    So, there is still a normal form involved. But instead of a normal form for the fallacy, the proving too much fallacy is about finding the normal form of the specific argument you are trying to expose the fallacy in, and showing how that form can be used to prove too much. I guess this makes the proving too much fallacy a meta-fallacy? ↩︎

Comment by Idan Arye on Privacy vs proof of character · 2021-02-28T22:46:23.371Z · LW · GW

If Alice can sacrifice her privacy to prove her loyalty, she'll be forced to do so to avoid losing to Bob - who already sacrificed his privacy to prove his loyalty and not lose to Alice. They both sacrificed their privacy to get an advantage over each other, and ended up without any relative advantage gained. Moloch wins.

Comment by Idan Arye on Coincidences are Improbable · 2021-02-24T19:53:57.048Z · LW · GW

Coincidences can be evidence for correlation, and therefore evidence for causation, as long as one remembers that evidence - like more things than most people feel comfortable with - is quantitative, not qualitative. A single coincidence, or even multiple coincidences, can make a causation less improbable - but it can still be considered very improbable until we get much more evidence.

Comment by Idan Arye on Oliver Sipple · 2021-02-20T21:53:19.671Z · LW · GW

Manslaughter? Probably not - you did not contribute to that person's death. You are, however, guilty of:

  1. Desecration of the corpse.
  2. Obstructing the work of the sanitation workers (it's too late for paramedics) who can't remove the body from the road because of the endless stream of cars running over it.
  3. You probably didn't count 100k vehicles running over that body. A bystander who stayed there for a couple of days could have, but since you are one of the drivers, you probably only witnessed a few cars running over that person - so as far as you know, there is a slim chance they are still alive.

I may be taking the allegory too far here, but I feel these offenses map quite well. Starting from the last - being able to know that all the damage is done. In Sipple's case, this is history, so it's easy to know that all the damage was already done. He can't be outed again. His family will not be harassed again by their community, and will not estrange him again. His life will not be ruined again, and he will not die again.

Up next - interfering with the efforts to make things better. Does this really happen here? I don't think so. On the contrary - talking about this, establishing that this is wrong, can help prevent this from happening to other people. And it's better to talk about cases from the past, where all the damage is already done, than about current cases that still have damage potential.

This leaves us with the final issue - respecting the dead. Which is probably the main issue, so I could have just skipped the other two points, but I took the trouble of writing them so I might as well impose on you the trouble of reading them. Are we really disrespecting Oliver Sipple by talking about him?

Sipple did not want to be outed because he did not want his family to know and he did not want his employer to know. They all know now, but even after they were originally told about this, Sipple probably did not want them to be constantly reminded and harassed about it. But... are discussions about this bringing reporters to his surviving family members? I doubt it. This issue is no longer about his sexual orientation - it's about journalism ethics now, and there is no point in interviewing his parents and asking them what they think about their son being gay.

Given all that - I don't think talking about this case should be considered as a violation of Sipple's wish to not be outed.

Comment by Idan Arye on Oliver Sipple · 2021-02-20T12:26:35.310Z · LW · GW

Is pulling the lever after the trolley had passed still a murder?

Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 11 · 2020-12-28T14:50:59.653Z · LW · GW

Even if you could tell - Voldemort was Obliviated while knocked out and then transfigured before having the chance to wake up, so there never was an opportunity to verify that the Obliviation worked.

Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 6 · 2020-12-10T16:25:45.232Z · LW · GW

I don't think so - the Vow is not an electric collar that shocks Harry every time he tries to destroy the world. This would invite ways to try and outsmart the Vow. Remember - the allegory here is to AI alignment. The Vow is not just giving Harry deterrents - it modifies his internal reasoning and values so that he would avoid world destruction.

Comment by Idan Arye on The Incomprehensibility Bluff · 2020-12-07T17:35:50.794Z · LW · GW

One thing to keep in mind is that even if it does seem likely that the suspected bluffer is smarter and more knowledgeable than you, the bar for actually working on the subject is higher than the bar for understanding a discussion about it. So even if you are not qualified enough to be an X researcher or an X lecturer, you should still be able to understand a lecture about X.

Even if the gap between you two is so great that they can publish papers on the subject and you can't even understand a simple lecture, you should still be able to understand some of that lecture. Maybe you can't follow the entire derivation of an equation but you can understand the intuition behind it. Maybe you get lost in some explanation but can understand an alternative example.

Yes - it is possible that you are so stupid and so ignorant, and that the other person is such a brilliant expert, that even with your sincere effort to understand and their sincere effort to explain as simply as possible, you still can't understand a single bit of it because the subject really is that complicated. But at this point the likelihood of this scenario, with all these conditions, is low enough that you should seriously consider the option that they are just bluffing.

Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 5 · 2020-12-06T01:55:02.484Z · LW · GW

By the way, I wouldn't be surprised if "the end of the world" is Moody's stock response to "what's the worst that could happen?" in any context.

(this is no longer a spoiler, so we no longer need to hide it)

I'm not sure about that. That could be Harry's stock response - "there was always a slight probability for the end of the world and this suggestion will not completely eliminate that probability". But Moody's? I would expect him to quickly make a list of all the things that could go wrong for each suggested course of action.

Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 5 · 2020-12-05T16:21:37.540Z · LW · GW

Are potential HPMOR spoilers acceptable in the comments here? I'm not really sure - the default is to assume they aren't, but the fanfic itself contains some, so to be sure I'll hide it just in case:

Can Harry really discuss the idea of destroying the world so casually? Shouldn't his Unbreakable Vow compel him to avoid anything that could contribute to it, and abandon the idea of building the hospital without a permit as soon as Moody jokes (is that the correct term when talking about Moody?) about it causing the end of the world?

Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-04T21:56:43.016Z · LW · GW

I notice we are seeing Luna getting ridiculed for her reputation rather than directly for her actions. Even when it's clear how her reputation is a result of her actions - for example, they laugh at her for having an imaginary pet, but not once have we seen other students looking at her weird when she interacts with Wanda.

Is this intentional? Because we are getting this story from Luna's PoV? Does she consider her reputation unjustified because her behavior does not seem weird to her?

Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 3 · 2020-12-01T21:13:13.284Z · LW · GW

I'm a bit surprised the twins had the patience and concentration to sit with Luna and help her go over the map over and over.

Comment by Idan Arye on Extortion beats brinksmanship, but the audience matters · 2020-11-17T15:54:30.030Z · LW · GW

Wouldn't increasing the number of offenders improve the effectiveness of brinkmanship compared to extortion? Since the victim is only bound by a deal with the offender, they can surrender and still reject future deals from the other potential offenders. This makes surrendering safer, and therefore more attractive, compared to extortion, where surrendering to one extortionist would invite more extortion.

Comment by Idan Arye on Bayesians vs. Barbarians · 2020-11-08T14:51:41.666Z · LW · GW

The moral of Ends Don't Justify Means (Among Humans) was that even if philosophical thought experiments demonstrate scenarios where ethical rules should be abandoned for the greater good, real-life cases are not as clear cut, and we should still obey these moral rules, because humans cannot be trusted when they claim that <unethical plan> really does maximize the expected utility - we cannot be trusted when we say "this is the only way", and we cannot be trusted when we say "this is better than the alternative".

I think this may be the source of the repulsion we all feel toward the idea of selecting soldiers by lottery and forcing them to fight with drugs and threats of execution. Yes, dying in a war is better than being conquered by the barbarians - I'd rather fight and risk death if the alternative is to get slaughtered anyway, together with my loved ones, after being tortured - and if the only way to avoid that is to abandon all ethics, then so be it.

But...

Even in a society of rationalists, the leaders are still humans. Not benevolent ("friendly" is not enough here) superintelligent perfect Bayesian AIs. Can we really trust them that this is the only way to win? Can we really trust them to relinquish that power once the war is over? Will living under the barbarians' rule be worse than living in a (formerly?) rationalist society that resorted to totalitarianism? Are the barbarians really going to invade us in the first place?

Governments lie about such things in order to grab more power. We have ethics for a reason - it is far too dangerous to rationalize that we are too rational to be bound by these ethics.

Comment by Idan Arye on Purchase Fuzzies and Utilons Separately · 2020-11-04T17:00:47.309Z · LW · GW

I may be straying from your main point here, but...

Could you really utilize these 60 seconds in a better, more specialized way? Not any block of 60 seconds - these specific 60 seconds, that happened during your walk.

Had you not encountered that open trunk, would you have opened your laptop in the middle of that walk and started working on a world-changing idea or an important charity plan? Unlikely - if that were the case, you would already be sitting somewhere working on it. You went out for a walk, not for work.

Would you, had you not encountered that open trunk, have finished your walk 60 seconds earlier, gone to sleep 60 seconds earlier, woken up 60 seconds earlier, started your workday 60 seconds earlier, and by doing all that moved those 60 seconds to connect with your regular productivity time? This is probably not the case either - if it were, that would mean you intentionally used those hard-earned fuzzies as an excuse to deliberately take one minute off your workday, and that would take a small-mindedness you do not seem to possess.

No - that act was an Action of Opportunity. Humans don't usually have a schedule so tight and so accurate that every lost minute messes it up. There is room for leeway, where you can fit such gestures without compromising your specialized work.

Comment by Idan Arye on Why Our Kind Can't Cooperate · 2020-11-03T09:00:43.540Z · LW · GW

Should arguers be encouraged, then, to not write all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments?

This requires either refraining from fully exploring the subject (so that you don't think of all the arguments you could) or outright omitting arguments you have thought of. Not exactly Dark Side, but not fully Light Side either...

Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T21:01:14.130Z · LW · GW

The difference can be quite large. If we get the results first, we can come up with Fake Explanations for why the masks were only 20% effective in the experiments when in reality they are 75% effective. If we do the prediction first, we wouldn't predict 20% effectiveness. We wouldn't predict that our experiment would "fail". Our theory says masks are effective, so we would predict 75% to begin with, and when we get the results it'll put a big dent in our theory. As it should.

Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T16:24:15.554Z · LW · GW

Maybe "destroying the theory" was not a good choice of words - the theory will more likely be "demoted" to the stature of "very good approximation". Like gravity. But the distinction I'm trying to make here is between super-accurate sciences like physics that give exact predictions and still-accurate-but-not-as-physics fields. If medicine says masks are 99% effective, and they were not effective for 100 out of 100 patients, the theory still assigned a probability of  that this would happen. You need to update it, but you don't have to "throw it out". But if physics says a photon should fire and it didn't fire - then the theory is wrong. Your model did not assign any probability at all to the possibility of the photon not firing.

And before anyone brings 0 And 1 Are Not Probabilities, remember that in the real world:

  • There is a probability that the photon fired and our instruments missed it.
  • There is a probability that we unknowingly failed to set up or confirm the conditions that our theory required in order for the photon to fire.
  • We do not assign 100% probability to our theory being correct, and we can just throw it out to avoid Laplace throwing us to hell for our negative infinite score.

This means that the falsifying evidence, on its own, does not destroy the theory. But it can still weaken it severely. And my point (which I've detoured too far from) is that a perfect Bayesian should arrive at the same final posterior no matter at which stage they apply the evidence.
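
A quick sketch of that order-independence: posterior odds are the prior odds times a product of likelihood ratios, and products don't depend on the order of their factors:

from itertools import permutations

# Sketch: Bayesian updating is commutative - multiplying the same
# likelihood ratios in any order yields the same posterior odds.
prior_odds = 1.0
likelihood_ratios = [0.25, 3.0, 0.1]   # arbitrary illustrative evidence

posteriors = set()
for order in permutations(likelihood_ratios):
    odds = prior_odds
    for lr in order:
        odds *= lr
    posteriors.add(round(odds, 12))

print(posteriors)  # a single value: the order of updates doesn't matter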