Comments
Taking your Tetris example: sure, 6KB seems small -- as long as you restrict yourself to the space of all possible programs for the Game Boy or whichever platform the example was taken from. But if your goal is to encode Tetris for a computer engineer who has no knowledge of the Game Boy, you will have to include, at the very least, the documentation on the CPU ISA, the hardware architecture of the device, and the quirks of its I/O hardware. That would already bring the "size of Tetris" to tens of megabytes. Describing it for a person from the 1950s, I suspect, would require a decent chunk of the Internet in addition.
I don't think this makes the comparison fairer. For bacteria, doesn't that mean you'd have to include descriptions of DNA, amino acids, proteins in general, and everything known about the specific proteins used by the bacteria, etc.? You quickly end up with a decent chunk of the Internet as well.
Kolmogorov complexity is not about how much background knowledge or computational effort was required to produce some output from first principles. It is about how much, given infinite knowledge and time, you can compress a complete description of the output. Which maybe means it's not the right metric to use here...
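For reference, the formal definition: relative to a fixed universal machine U, the Kolmogorov complexity of a string x is the length of the shortest program that prints it,

K_U(x) = min{ |p| : U(p) = x },

and by the invariance theorem, switching to a different universal machine changes this by at most an additive constant. All the platform-specific background knowledge gets absorbed into that constant, which is exactly the background-knowledge question at issue here.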
I don't think it affects the essence of your argument, but I would say that you cannot get a good estimate of the Kolmogorov complexity of Word or other modern software from binary size. The Kolmogorov complexity of Word should properly be the size of the smallest binary that would execute in a way indistinguishable from Word. There are very good reasons to think that the existing Word binary is significantly larger than that.
Modern software development practices optimize for a combination of factors in which binary size has very little weight. Development and maintenance time and cost are usually the biggest factors; absence of bugs and performance are relatively smaller concerns in most cases; and size is not a factor at all except in some special cases.
Sometimes human programmers do optimize primarily for size, and the tricks they come up with have a similar vibe to some of the biology tricks described in the post. For a charming example from early, constrained computer systems, see http://www.catb.org/jargon/html/story-of-mel.html. For a community of people doing this type of thing for toy problems, see https://codegolf.stackexchange.com. If you just want to be dazzled by how much people can pack into 4KB binaries when they are really trying, look at https://www.youtube.com/playlist?list=PLjxyPjW-DeNWennaPMEPBDoj5GFTj3rbz.
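As a toy illustration of the genre (my own, not from any of those links), here is the same function written for readability and then golfed purely for character count:

```python
# Readable version: sum of the squares of the even numbers in a list.
def sum_even_squares(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Golfed version: identical behavior, optimized only for source length.
f=lambda l:sum(n*n for n in l if n%2<1)

assert sum_even_squares([1, 2, 3, 4]) == f([1, 2, 3, 4]) == 20
```

The demoscene and code-golf tricks go far beyond this, of course, but the flavor is the same: every character earns its keep, at the cost of legibility.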
It would be great to prevent it, but it also seems very hard. Is there anything short of an international agreement with serious teeth that would have a decent chance of doing it? I suppose US-only legislation could maybe delay it for a few years and would be worth doing, but even that seems a very big lift in the current climate.
Really fantastic primer! I have been meaning to learn more about DeFi and this was a perfect intro.
For someone who wants to learn more, not just on the investing/trading side but also about the development of smart contracts, does anybody know of good resources other than the many links in the article?
Are there good books on the topic? Or tutorials?
What about subreddits or Discord servers? People to follow on Twitter?
I get what you are saying. You have convinced me that the following two statements are contradictory:
- Axiom of Independence: preferring A to B implies preferring ApC to BpC for any p and C (spelled out below).
- The variance and higher moments of utility matter, not just the expected value.
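Spelling out the first statement, with XpY denoting the compound lottery that yields X with probability p and Y with probability 1 − p:

A ≻ B implies ApC ≻ BpC, for every p in (0, 1] and every lottery C.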
My confusion is that intuitively it seems both must be true for a rational agent, but I guess my intuition is just wrong.
Thanks for your comments, they were very illuminating.
I think you are not allowed to refer explicitly to utility in the options.
I was going to answer that I can easily reword my example to not explicitly mention any utility values, but when I tried to do that, it very quickly led to something where it is obvious that u(A) = u(C). I guess my rewording was basically going through the steps of the proof of the VNM theorem.
I am still not sure I am convinced by your objection, as I don't think there's anything self-referential in my example, but that did give me some pause.
The tricky bit is the question of whether this also applies to one-shot problems or not.
This is the crux. It seems to me that the expected utility framework means that if you prefer A to B in a one-time choice, then you must also prefer n repetitions of A to n repetitions of B, because the fact that you have larger variance for n = 1 does not matter. This seems intuitively wrong to me.
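As a quick sketch of why repetition feels different (my own toy numbers: take A to be a sure gain of 1 and B a 50/50 gamble between 0 and 2, so their expected utilities match):

```python
import random

def simulate_b(n, trials=20_000):
    """Mean and standard deviation of the total utility from n repetitions of B,
    where B pays 0 or 2 with equal probability; n repetitions of A pay exactly n."""
    totals = [sum(random.choice((0, 2)) for _ in range(n)) for _ in range(trials)]
    mean = sum(totals) / trials
    std = (sum((t - mean) ** 2 for t in totals) / trials) ** 0.5
    return mean, std

for n in (1, 10, 100):
    mean, std = simulate_b(n)
    # B's mean total matches A's guaranteed total of n, but the spread per
    # repetition shrinks like 1/sqrt(n): repetition washes the variance out.
    print(f"n={n}: mean={mean:.2f} (A pays exactly {n}), std per repetition={std / n:.3f}")
```

Expected utility theory says the choice should not depend on n, even though at n = 1 the outcome is maximally spread and at n = 100 the total is tightly concentrated around its mean.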
Thanks, I looked at the discussion you linked with interest. I think I understand my confusion a little better, but I am still confused.
I can walk through the proof of the VNM theorem and see where the independence axiom comes in and how it leads to u(A) = u(B) in my example. The axiom of independence itself feels unassailable to me, and I am not quite sure this is a strong enough argument against it. Maybe having a more direct argument from the axiom of independence to an unintuitive result would be more convincing.
Maybe the answer is to read Dawes's book; thanks for the reference.
I find it confusing that the only thing that matters to a rational agent is the expectation of utility, i.e., that the details of the probability distribution of utilities do not matter.
I understand that the VNM theorem proves that from seemingly reasonable axioms, but on the other hand it seems to me that there is nothing irrational about having different risk preferences. Consider the following two scenarios:
- A: you gain utility 1 with probability 1
- B: you gain utility 0 with probability 1/2 or utility 2 with probability 1/2
According to expected utility, it is irrational to be anything but indifferent between A and B. This seems wrong to me. I can even go a bit further; consider a third option:
- C: you gain utility 0.9 with probability 1
Expected utility says it is irrational to prefer C to B, but this seems perfectly reasonable to me: it's optimizing for the worst case instead of the average case. Is there a direct way of showing that preferring C to B is irrational?
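For concreteness, the expected utilities here are

E[u(A)] = 1
E[u(B)] = 0.5·0 + 0.5·2 = 1
E[u(C)] = 0.9

so expected utility is indifferent between A and B and strictly prefers B to C, even though B is the only option with any variance: Var[u(B)] = 0.5·(0 − 1)² + 0.5·(2 − 1)² = 1.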