You can hate cruelty without hating the people who cause it
Considering that Harry might destroy the world, and that this might be the very way he does it, why not let Hermione take care of them?
Regardless of other differences in their utility functions, Harry and Voldemort both want the world not to be destroyed, and both consider this of the utmost priority.
Aumann's agreement theorem means that, as they are both rationalists, they should be able to reach the same opinion on the best course of action to prevent that. Harry was willing to sacrifice himself earlier to save others.
Harry is allowed to convince Voldemort to keep him in a coma to kill later. He just has to "evade immediate death", even if there is no hope of survival afterwards.
How about simply telling Voldemort that he doesn't have a complete model of Time, and giving him examples until one is found that Voldemort wouldn't have predicted? Suggest to Voldemort that he keep Harry in a coma until he has done more experiments with Time to derive its nature, and then kill Harry without waking him up.
I mean that the existing Unbreakable Vow Harry has just been bound by could perhaps be used for something else.
Thoughts:
- Can the Unbreakable Vow be leveraged for unbreakable precommitments?
- Harry knows that the horcruxes will eventually be destroyed, through the heat death of the universe if nothing else, and could use this to tell Voldemort something like "if you kill me, you will die" in Parseltongue.
Whether two infinite cardinalities are equal is determined by whether a bijection exists between sets of those cardinalities. In this case, if interpreted as cardinalities, both infinities would be equal.
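A standard illustration (not from the thread): the naturals and the even naturals have the same cardinality, since

$$f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n$$

is a bijection, even though the even numbers are a proper subset of the naturals.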
Also, the order in which you sum the terms in a series can matter. See here: https://en.wikipedia.org/wiki/Alternating_series#Rearrangements
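As a concrete illustration of the rearrangement point (my own toy script, not from the linked article): the alternating harmonic series 1 − 1/2 + 1/3 − … sums to ln 2 in its usual order, but taking two positive terms for every negative one pushes the partial sums toward (3/2) ln 2.

```python
import math

def standard_order(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... in the usual order."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Same terms, but in blocks of (two positive terms, one negative term)."""
    total = 0.0
    pos, neg = 1, 2          # next odd and even denominators to use
    for _ in range(n_blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

print(standard_order(100_000))          # ~0.6931, i.e. ln 2
print(rearranged(100_000))              # ~1.0397, i.e. (3/2) ln 2
print(math.log(2), 1.5 * math.log(2))   # reference values
```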
I was reading about the St. Petersburg paradox.
I was wondering how you compare two games that both have infinite expected value. The obvious approach would seem to be to take the limit of the difference of their expected values as one includes less and less likely outcomes in the calculation.
Is there any existing research on this?
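Here is a rough sketch of the comparison I have in mind (toy payoff functions made up for illustration, not from any paper): the classic St. Petersburg game versus a variant that pays 1 extra at every outcome. Both expected values diverge, but the difference of the truncated expected values converges as the truncation point grows.

```python
# Sketch of comparing two infinite-expected-value games by truncation.
# The game ends at round k (first tails on flip k) with probability 2**-k.
# payoff_a is the classic St. Petersburg payoff; payoff_b adds a flat bonus
# of 1 to every outcome (a made-up variant for illustration).

def truncated_ev(payoff, n_rounds):
    """Expected value counting only the n_rounds most likely outcomes."""
    return sum(2.0 ** -k * payoff(k) for k in range(1, n_rounds + 1))

payoff_a = lambda k: 2 ** k        # classic St. Petersburg
payoff_b = lambda k: 2 ** k + 1    # same payoffs plus 1

for n in (5, 10, 20, 40):
    diff = truncated_ev(payoff_b, n) - truncated_ev(payoff_a, n)
    print(n, diff)
# Each truncated EV grows without bound (roughly n), but their difference
# approaches 1, suggesting game B is worth about 1 more than game A.
```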
I'm confused, because I had always thought it would be the exact opposite. To predict your observational history given a description of the universe, Solomonoff induction needs to find you in it. The more special you are, the easier you are to find, and thus the easier it is to find your observational history.
Truly random data is incompressible in the average case, by the pigeonhole principle.
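A quick version of the counting argument (my paraphrase): there are $2^n$ bit strings of length $n$, but only

$$\sum_{k=0}^{n-1} 2^k = 2^n - 1$$

strings of length strictly less than $n$, so no lossless code can shorten every length-$n$ string, and the fraction of length-$n$ strings that can be shortened by at least $k$ bits is less than $2^{1-k}$.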
Solomonoff induction still tries, though. It assumes there is always more signal in the noise. I'm not sure how you would justify stopping that search; how can you ever be certain there isn't some complex signal we just haven't found yet?
But you should end up with a bunch of theories of similar Kolmogorov complexity.
Requested question: "How much money have you donated to organizations aiming to reduce x-risk other than MIRI/CFAR?"
This is ambiguous. Is it you who are aiming to reduce x-risk, or the organizations that are aiming to reduce x-risk?
For example, someone could donate to a malaria charity because they believe this somehow reduces x-risk, even though the charity's own goal is not reducing x-risk.
Vampire uses run-time algorithm specialisation, according to Wikipedia:
> A number of efficient indexing techniques are used to implement all major operations on sets of terms and clauses. Run-time algorithm specialisation is used to accelerate forward matching.
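For flavour, here is a toy sketch of what run-time algorithm specialisation means in this context (my own illustration, not Vampire's actual code or term representation): analyse a pattern once and build a matcher function specialised to it, instead of re-interpreting the pattern on every match attempt.

```python
# Toy term representation: a variable is an uppercase string ("X"), a constant
# is a lowercase string ("a"), and a compound term is a tuple ("f", arg1, ...).
# This ignores repeated variables and much else a real prover has to handle.

def specialise(pattern):
    """Analyse `pattern` once and return a matcher specialised to it."""
    if isinstance(pattern, str) and pattern.isupper():
        # Variable: matches anything, binding the variable to the term.
        return lambda term: {pattern: term}
    if isinstance(pattern, tuple):
        head, arity = pattern[0], len(pattern)
        arg_matchers = [specialise(arg) for arg in pattern[1:]]

        def matcher(term):
            # Head symbol and arity were fixed at specialisation time.
            if not (isinstance(term, tuple) and term[0] == head
                    and len(term) == arity):
                return None
            bindings = {}
            for match_arg, arg in zip(arg_matchers, term[1:]):
                sub = match_arg(arg)
                if sub is None:
                    return None
                bindings.update(sub)
            return bindings

        return matcher
    # Constant: matches only itself.
    return lambda term: {} if term == pattern else None

# Specialise once, then match many candidate terms against the same pattern,
# which is the "forward matching" situation the quote refers to.
match_f_X_a = specialise(("f", "X", "a"))
print(match_f_X_a(("f", ("g", "b"), "a")))   # {'X': ('g', 'b')}
print(match_f_X_a(("f", "c", "d")))          # None
```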