Comments

Comment by EliasHasle on SSA rejects anthropic shadow, too · 2023-11-17T12:27:05.579Z · LW · GW

"Conditional on being in a billion-human universe, your probability of having an index between 1 and 1 billion is 1 in 1 billion, and your probability of having any other index is 0. Conditional on being in a trillion-human universe, your probability of having an index between 1 and 1 trillion is 1 in 1 trillion, and your probability of having any other index is 0."

Ehm... Huh? I would say that:

Conditional on being in a billion-human universe, your probability of having an index between 1 and 1 billion is 1, and your probability of having any other index is 0. Conditional on being in a trillion-human universe, your probability of having an index between 1 and 1 trillion is 1, and your probability of having any other index is 0. Also, conditional on being in a trillion-human universe, your probability of having an index between 1 and 1 billion is 1 in a thousand.

That way, the probabilities respect the conditions, and add up to 1 as they should.

"SSA first samples a universe (which, in this case, contains only one world), then samples a random observer in the universe. It samples a universe of each type with ¼ probability. There are, however, two subtypes of type-1 or type-2 universes, namely, ones with nuclear extinction or not. It samples a nuclear extinction type-1 universe with ¼ * 99% probability, a non nuclear extinction type-1 universe with ¼ * 1% probability, a nuclear extinction type-2 universe with ¼ * 10% probability, and a non nuclear extinction type-2 universe with ¼ * 90% probability."

This is also confusing to me, as the resulting "probabilities" add up to only ½, not 1.
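For concreteness, a quick check of both points in plain Python (a sketch; the numbers are copied from the quoted passages, the arithmetic is mine):

```python
# Checking both points above.
billion, trillion = 10**9, 10**12

# My reading of the conditionals: given a billion-human universe, an index
# in 1..1e9 is certain; given a trillion-human universe, an index in the
# first billion has probability 1/1000.
print(billion / billion)   # 1.0
print(billion / trillion)  # 0.001

# The quoted SSA subtype probabilities:
subtypes = [0.25 * 0.99, 0.25 * 0.01, 0.25 * 0.10, 0.25 * 0.90]
print(sum(subtypes))       # 0.5 -- they do not add up to 1
```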

Comment by EliasHasle on EfficientZero: How It Works · 2023-06-02T13:16:30.808Z · LW · GW

If repetitions arise in sampling merely because of high conditional probability given an initial "misstep", they should be avoidable by an MCTS that sought to maximize the unconditional probability of the output sequence (or rather, the probability conditional upon its input, but not upon its own prior output). After entering the "trap" once or a few times, it would simply avoid the unfortunate misstep in subsequent "playouts". From my understanding, that is.
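A toy illustration of what I mean (my own construction, not EfficientZero's actual machinery; the chain and the numbers are made up, and exhaustive enumeration stands in for the tree search):

```python
# Greedy step-by-step decoding falls into a repetition "trap" whose repeats
# each have high *conditional* probability, while searching over whole
# sequences for maximal *joint* probability avoids the initial misstep.
from itertools import product

trans = {  # P(next | current); "S" is the start symbol
    "S": {"A": 0.6, "B": 0.4},
    "A": {"A": 0.5, "B": 0.25, "C": 0.25},  # repeating A is the local argmax
    "B": {"A": 0.05, "B": 0.05, "C": 0.9},
    "C": {"A": 0.05, "B": 0.05, "C": 0.9},
}

def joint(seq):
    p, cur = 1.0, "S"
    for tok in seq:
        p, cur = p * trans[cur].get(tok, 0.0), tok
    return p

def greedy(horizon):
    seq, cur = [], "S"
    for _ in range(horizon):
        cur = max(trans[cur], key=trans[cur].get)
        seq.append(cur)
    return seq

g = greedy(4)
best = max(product("ABC", repeat=4), key=joint)
print(g, joint(g))              # ['A', 'A', 'A', 'A'] 0.075  (the trap)
print(list(best), joint(best))  # ['B', 'C', 'C', 'C'] 0.2916 (avoids it)
```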

Comment by EliasHasle on Attainable Utility Preservation: Scaling to Superhuman · 2023-04-26T14:23:32.551Z · LW · GW

I kind of fell overboard when V popped up in the equations without being introduced first.

Comment by EliasHasle on Avoiding Side Effects in Complex Environments · 2023-04-26T13:19:49.750Z · LW · GW

I went into the idea of evaluating on future state representations here: https://www.lesswrong.com/posts/5kurn5W62C5CpSWq6/avoiding-side-effects-in-complex-environments#bFLrwnpjq6wY3E39S (Not sure it is wise, though.)

Comment by EliasHasle on Avoiding Side Effects in Complex Environments · 2023-04-26T13:16:37.758Z · LW · GW

It seems like the method is sensitive to the ranges of the game reward and the auxiliary penalty. In real life, I suppose one would have to clamp the "game" reward to let the impact penalty dominate even when massive gains are foreseen from a big-impact course of action?
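Something like this minimal sketch, where the names and constants are hypothetical rather than taken from the paper:

```python
def shaped_reward(task_reward: float, impact_penalty: float,
                  cap: float = 1.0, penalty_scale: float = 2.0) -> float:
    # Clamping means that beyond `cap`, no foreseen gain can outweigh
    # the impact penalty, however massive the gain.
    return min(task_reward, cap) - penalty_scale * impact_penalty
```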

Comment by EliasHasle on Avoiding Side Effects in Complex Environments · 2023-04-26T13:13:17.874Z · LW · GW

What if the encoding difference penalty were applied after a counterfactual rollout of no-ops following the candidate action (or the no-op)? Couldn't that detect "butterfly effects" of small impactful actions, avoiding "salami slicing" exploits? (See the sketch below.)

Building upon this thought, how about comparing mutated policies to a base policy by sampling possible futures to generate distributions of the encodings up to the farthest step, and penalizing divergence from the base policy?

Or just train a sampling policy by gradient descent, using a Monte Carlo Tree Search that penalizes actions which alter the future encodings compared to a pure no-op policy.
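A rough sketch of the first suggestion, assuming a hypothetical world-model interface (simulate), a learned state encoder (encode), and a designated no-op action; none of this is from the paper's implementation:

```python
import numpy as np

def delayed_impact_penalty(simulate, encode, state, action, noop, horizon=10):
    # Counterfactual branch pair: take the action vs. do nothing.
    s_act, s_ref = simulate(state, action), simulate(state, noop)
    for _ in range(horizon):  # roll out no-ops, letting "butterfly effects" unfold
        s_act, s_ref = simulate(s_act, noop), simulate(s_ref, noop)
    # Penalize divergence between the encodings of the two counterfactual
    # futures, so small-but-impactful actions are still caught.
    return float(np.linalg.norm(encode(s_act) - encode(s_ref)))
```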

Comment by EliasHasle on The Optimizer's Curse and How to Beat It · 2023-04-26T10:44:02.321Z · LW · GW

I must admit that I did not understand everything in the paper, but I think this excerpt summarizes a crucial point:

"The key issue here is proper conditioning. The unbiasedness of the value estimates V_i discussed in §1 is unbiasedness conditional on mu. In contrast, we might think of the revised estimates ^v_i as being unbiased conditional on V. At the time we optimize and make the decision, we know V but we do not know mu, so proper conditioning dictates that we work with distributions and estimates conditional on V."

The proposed "solution" converts n independent evaluations into n evaluations (estimates) that respect the selection process, but, as far as I can tell, they still rest on prior value estimates and on prior knowledge about the uncertainty of those estimates... Which means the "solution" at best limits the introduction of optimizer bias, and at worst... masks old mistakes?
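To make the "proper conditioning" concrete, here is a minimal numerical sketch using the textbook normal-normal shrinkage estimator as a stand-in (not necessarily the paper's exact procedure): the naive maximum overstates the true value of the selected choice, while the estimate conditioned on V does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 300, 2000
prior_mean, prior_sd, noise_sd = 0.0, 1.0, 1.0
k = prior_sd**2 / (prior_sd**2 + noise_sd**2)   # shrinkage factor

naive_gap, bayes_gap = [], []
for _ in range(trials):
    mu = rng.normal(prior_mean, prior_sd, n)    # true values
    V = mu + rng.normal(0.0, noise_sd, n)       # unbiased given mu
    v_hat = prior_mean + k * (V - prior_mean)   # unbiased given V
    i = int(np.argmax(V))                       # the "winning" choice
    naive_gap.append(V[i] - mu[i])              # optimizer's curse: > 0 on average
    bayes_gap.append(v_hat[i] - mu[i])          # properly conditioned: ~ 0
print(np.mean(naive_gap), np.mean(bayes_gap))   # roughly 2.0 vs 0.0 here
```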

Comment by EliasHasle on The Optimizer's Curse and How to Beat It · 2023-04-26T09:25:38.527Z · LW · GW

The big problem arises when the number of choices is huge and sparsely explored, such as when optimizing a neural network.

But suppose we restrict ourselves to n superficially evaluated choices, with known estimate variance in each evaluation and independent errors/noise. If, as in realistic cases like Monte Carlo Tree Search, we are then allowed to perform some additional "measurements" to narrow down the uncertainty, it is wise to scrutinize the high-expectation choices most: in a way, trying to "falsify" their greatness, while increasing the certainty of their greatness if the falsification "fails". This is the effect of using heuristics like the Upper Confidence Bound for experiment/branch selection.

UCB is also described as "optimism in the face of uncertainty", which kind of defeats the point I am making if it is deployed as the final decision policy. What I mean is that in research, preparation and planning (with tree search in perfect-information games as a formal example where UCB can be applied), one should put a lot of effort into finding out whether the seemingly best choice (of path, policy, etc.) really is that good, and then make a final choice that penalizes remaining uncertainty.
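A sketch of what I mean, assuming n choices with independent Gaussian noise: explore with UCB ("optimism"), but make the final choice with a lower confidence bound that penalizes remaining uncertainty (the constants are arbitrary):

```python
import math, random

def ucb_then_lcb(true_means, budget=2000, c=2.0):
    n = len(true_means)
    counts, sums = [0] * n, [0.0] * n
    for t in range(1, budget + 1):
        # UCB1 selection: empirical mean plus exploration bonus.
        def ucb(i):
            if counts[i] == 0:
                return float("inf")
            return sums[i] / counts[i] + c * math.sqrt(math.log(t) / counts[i])
        i = max(range(n), key=ucb)
        sums[i] += random.gauss(true_means[i], 1.0)  # noisy evaluation
        counts[i] += 1
    # Final decision: pessimistic, penalizing remaining uncertainty.
    def lcb(i):
        return sums[i] / counts[i] - c * math.sqrt(math.log(budget) / counts[i])
    return max(range(n), key=lcb)

print(ucb_then_lcb([0.0, 0.5, 1.0]))  # usually picks index 2
```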

I would like to throw in a Wikipedia article on a relevant topic, which I came across while reading about the related "Winner's curse": https://en.wikipedia.org/wiki/Order_statistic

The math for order statistics is quite neat as long as the variables are independently sampled from the same distribution. In real life, "sadly", choice evaluations may not all come from the same distribution... Rather, they are by definition conditional upon the choices. (https://en.wikipedia.org/wiki/Bapat%E2%80%93Beg_theorem provides a kind of solution, in the form of an intractable colossus of a calculation.) That is not to say that no valuable/informative approximations can be found.
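As a quick simulated illustration of the winner's-curse flavor here: the expected maximum of n i.i.d. standard-normal "evaluations" grows with n, even though every true value is 0 (the printed numbers are Monte Carlo estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (1, 10, 100, 1000):
    maxima = rng.standard_normal((5000, n)).max(axis=1)
    print(n, round(float(maxima.mean()), 2))  # ~0.0, ~1.54, ~2.51, ~3.24
```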

Comment by EliasHasle on Ukraine #4: Prediction Market Movement Modeling · 2023-01-02T10:46:22.375Z · LW · GW

I think "Metaculus" is a pun on "meta", "calculus" and "meticulous".

"WW3" and "28 years passing" are similarly dangerous "events" for the individual gambler. Why invest with a long-term perspective if there is a significant probability that you eventually cannot harvest... Crucially, the probability of not harvesting the reward may be a lot higher in a "force majeure" situation like WW3, even if one stays alive. But on the other hand, an early WW3 would chop off a lot of the individual existential risk associated with 28 years passing. 🤔 I think there could be major biases here, also possibly affected by "doomsday" propaganda centered on exaggerated "nuclear winter" predictions.

The possibility of outright manipulation of prediction markets should be considered thoroughly when the fates of Ukraine, NATO, Russia and Putin are at stake. If the cost of maintaining "favorable yet mutually consistent" predictions is within, say, a few hundred million dollars per year, it could look like a good investment to a state actor, as long as the enemy has no similar opposing operation going on...

Comment by EliasHasle on Maximizing life universally · 2014-02-10T08:40:10.960Z · LW · GW
Comment by EliasHasle on Maximizing life universally · 2014-02-08T19:47:36.937Z · LW · GW
Comment by EliasHasle on Maximizing life universally · 2014-02-08T10:45:17.006Z · LW · GW
Comment by EliasHasle on Maximizing life universally · 2014-02-08T10:31:58.250Z · LW · GW
Comment by EliasHasle on Maximizing life universally · 2014-02-07T18:43:13.284Z · LW · GW
Comment by EliasHasle on Maximizing life universally · 2014-02-07T15:16:15.595Z · LW · GW
Comment by EliasHasle on Maximizing life universally · 2014-02-07T13:27:02.482Z · LW · GW
Comment by EliasHasle on How big of an impact would cleaner political debates have on society? · 2014-02-07T12:10:59.841Z · LW · GW

But among the people who care and who have a favorite before the campaigning period, would not some change their minds if they saw their favorite being intellectually humiliated on TV? (For the first time ever, that is.)

Comment by EliasHasle on Welcome to Less Wrong! (5th thread, March 2013) · 2014-02-07T11:04:41.665Z · LW · GW