Posts

BurntVictory's Shortform 2019-08-24T14:11:30.024Z

Comments

Comment by BurntVictory on BurntVictory's Shortform · 2019-08-24T14:11:30.164Z · LW · GW

The Kitty Genovese Equation

Someone's in trouble. You can hear them from your apartment, but you can't tell if any of your neighbors are already rushing down, or already calling the police. It's time sensitive, and you've got to decide now: is it worth spending those precious minutes, or not?

Let's define our variables:

Cost to victim of nobody helping: V

Cost to each bystander of intervening: c

Number of bystanders: N. (Since c < V, for N = 1 it's always right to intervene.)


Analysis:

Suppose the bystanders all simultaneously and independently decide whether to intervene, each intervening with probability p. Then expected world-utility is

U(p) = -V(1-p)^N - Npc.

Utility is maximized when dU/dp = 0; in other words, when NV(1-p)^(N-1) = Nc, i.e. when (1-p)^(N-1) = c/V. Let r = c/V. Then we have the optimal probability of not helping, 1-p = r^(1/(N-1)).

One interesting implication of our solution is that the probability that the victim isn't helped, (1-p)^N, equals r^(N/(N-1)). Since the exponent N/(N-1) falls from 2 toward 1 as N grows, this means P(not helped) starts small at r^2 for N = 2 and rapidly rises toward r.
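A quick sketch of this solution in code (the variable names are my own):

```python
import numpy as np

def optimal_skip_prob(N, r):
    """Optimal probability of NOT intervening: 1 - p = r ** (1 / (N - 1))."""
    return r ** (1.0 / (N - 1))

def p_not_helped(N, r):
    """P(victim gets no help) when everyone plays the optimum: r ** (N / (N - 1))."""
    return optimal_skip_prob(N, r) ** N

# Numeric check that the closed form really maximizes U(p) = -V(1-p)^N - Npc,
# here with V = 1, c = r, N = 7, r = 1e-6.
N, r = 7, 1e-6
p = np.linspace(0.0, 1.0, 200001)
U = -(1 - p) ** N - N * p * r
print(1 - p[np.argmax(U)], optimal_skip_prob(N, r))  # both come out ~0.1
```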


Examples:

Suppose intervening would cost a minute, and the victim would live 2 years longer on average if you intervened. Then r is about one in a million (2 years is roughly 10^6 minutes). Once you get to seven bystanders, it's optimal to not intervene 10% of the time. 2^20 is about a million, so with 21 bystanders it's optimal for each to take a 50-50 shot at helping.

If r is a mere 1/10, you get there six times as fast: a 10% chance to not help at N = 2, 50% around N = 4-5, and a whopping 75% chance around N = 9.
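Plugging the example numbers back into the formula as a sanity check:

```python
# Optimal P(don't help) = r ** (1 / (N - 1)) for the example values above.
for r, Ns in [(1e-6, [7, 21]), (0.1, [2, 4, 5, 9])]:
    for N in Ns:
        print(f"r = {r:g}, N = {N}: P(don't help) = {r ** (1 / (N - 1)):.2f}")
# r = 1e-06: N = 7 gives 0.10, N = 21 gives 0.50
# r = 0.1:   N = 2 gives 0.10, N = 4 gives 0.46, N = 5 gives 0.56, N = 9 gives 0.75
```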


Application:

This was inspired by friends' varied willingness to intervene in public disputes, and my own experience worrying about how to respond to potential crises around me. Of course, in real life we have a lot of uncertainty around V and around other people's c, and we can often wait and observe if someone goes to help. For situations where decisions are pretty simultaneous, though, it would be interesting to see how well people's responses line up with the r^(1/(N-1)) curve.

Comment by BurntVictory on Buying Value, not Price · 2019-04-30T00:27:06.044Z · LW · GW

The LessWrongy framework I'm familiar with would say that value = expected utility, so it takes potential downsides into account. You're not risk-averse wrt your VNM utility function, but computing that utility function is hard in practice, and EV calculations can benefit from some consideration of the tail-risks.

Comment by BurntVictory on The Game Theory of Blackmail · 2019-03-25T18:30:45.048Z · LW · GW

Schelling's The Strategy of Conflict seems very relevant here; a major focus is precommitment as a bargaining tool. See here for an old review by cousin_it.

Iterated chicken seems fine to test, just as a spinoff of the IPD that maps to slightly different situations. (I believe that the iterated game of mutually modeling each other's single-shot strategy is different from iterating the game itself, so I don't think Abram's post necessarily implies that iterated chicken is relevant to ASI blackmail solutions.)

Speaking of iterated games, one natural form of blackmail is for the blackmailee to pay an income stream to the blackmailer; that way, at each time-step they're paying their fair price for the good of [not having their secret revealed between time t and time t+1]. Here's a well-cited paper that discusses this idea in the context of nuclear brinksmanship: Schwarz & Sonin 2007.

Comment by BurntVictory on Privacy · 2019-03-18T01:28:51.849Z · LW · GW

It's true the net effect is low to first order, but you're neglecting second-order effects. If premia are important enough, people will feel compelled to Goodhart the proxies used to set them until those proxies lose much of their meaning.

Given the linked siderea post, maybe this is not very true for insurance in particular. I agree that wasn't a great example.

Slack-wise, uh: choices are bad. Really bad. Keep the sabbath. These are some intuitions I suspect are at play here. I'm not interested in a detailed argument hashing out whether we should believe that these outweigh other factors in practice across whatever range of scenarios, because it seems like it would take a lot of time/effort for me to actually build good models here, and opportunity costs are a thing. I just want to point out that these ideas seem relevant for correctly interpreting Zvi's position.

Comment by BurntVictory on Privacy · 2019-03-16T01:34:55.594Z · LW · GW

The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If "judge" just meant "make inferences about", why would it be bad?

As Raemon says, knowing that others are making correct inferences about your behavior means you can't relax. No, idk, watching soap operas, because that's an indicator of being less likely to repay your loans, and your premia go up. There's an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi's explored in his previous posts, and that's part of how I'm interpreting what he's saying here.

But it also helps in knowing who's exploiting them! Why does it give more advantages to the "bad" side?

Sure, but doesn't it help me against them too?

You don't want to spend your precious time on blackmailing random jerks, probably. So at best, now some of your income goes toward paying a white-hat blackmailer to fend off the black-hats. (Unclear what the market for that looks like. Also, black-hatters can afford to specialize in unblackmailability; it comes up much more often for them than the average person.) You're right, though, that it's possible to have an equilibrium where deterrence dominates and the black-hatting incentives are low, in which case maybe the white-hat fees are low and now you have a white-hat deterrent. So this isn't strictly bad, though my instinct is that it's bad in most plausible cases.

Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won't negotiate with them when they actually will, and less privacy predictably changes this opinion?

That's a fair point! A couple of counterpoints: I think risk-aversion of 'terrorists' helps. There's also a point about second-order effects again; the easier it is to blackmail/extort/etc., the more people can afford to specialize in it and reap economies of scale.

Perhaps the optimal set of norms for these people is "there are no rules, do what you want". If you can improve on that, than that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn't necessary.

Eh, sure. My guess is that Zvi is making a statement about norms as they are likely to exist in human societies with some level of intuitive-similarity to our own. I think the useful question here is like "is it possible to instantiate norms s.t. norm-violations are ~all ethical-violations". (we're still discussing the value of less privacy/more blackmail, right?) No-rule or few-rule communities could work for this, but I expect it to be pretty hard to instantiate them at large scale. So sure, this does mean you could maybe build a small local community where blackmail is easy. That's even kind of just what social groups are, as Zvi notes; places where you can share sensitive info because you won't be judged much, nor attacked as a norm-violator. Having that work at super-Dunbar level seems tough.

Comment by BurntVictory on Privacy · 2019-03-15T23:30:41.540Z · LW · GW

I found this pretty useful--Zvi's definitely reflecting a particular, pretty negative view of society and strategy here. But I disagree with some of your inferences, and I think you're somewhat exaggerating the level of gloom-and-doom implicit in the post.

>Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it's worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.

No, this isn't bare repetition. I agree with Raemon that "judge" here means something closer to one of its standard usages, "to make inferences about". Though it also fits with the colloquial "deem unworthy for baring [understandable] flaws", which is also a thing that would happen with blackmail and could be bad.

>Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)

I can imagine a couple things going on here? One, if the world is a place where many more vulnerabilities are known, this incentivizes more people to specialize in exploiting those vulnerabilities. Two, as a flawed human there are probably some stressors against which you can't credibly play the "won't negotiate with terrorists" card.


>Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)

I think the assumption is these are ~baseline humans we're talking about, and most human brains can't hold norms of sufficient sophistication to capture true ethical law, and are also biased in ways that will sometimes strain against reflectively-endorsed ethics (e.g. they're prone to using constrained circles of moral concern rather than universality).


>Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn't 100%)

This part of the post reminded me of (the SSC review of) Seeing Like a State, which makes a similar point; surveying and 'rationalizing' farmland, taking a census, etc. = legibility = taxability. "all of them" does seem like hyperbole here. I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.

Comment by BurntVictory on Question: MIRI Corrigbility Agenda · 2019-03-15T00:58:18.281Z · LW · GW

The CHAI reading list is also fairly out of date (last updated April 2017) but has a few more papers, especially if you go to the top and select [3] or [4] so it shows lower-priority ones.

(And in case others haven't seen it, here's the MIRI reading guide for learning agent foundations.)

Comment by BurntVictory on AI development incentive gradients are not uniformly terrible · 2019-03-14T19:53:42.670Z · LW · GW

Oh wait, yeah, this is just an example of the general principle "when you're optimizing for xy, and you have a limited budget with linear costs on x and y, the optimal allocation is to spend equal amounts on both."

Formally, you can show this via Lagrange-multiplier optimization, using the Lagrangian L = xy - λ(ax + by - M), where a and b are the unit costs and M is the budget. Setting the partials equal to zero gets you y = λa and x = λb, and you recover the linear constraint function ax + by = M. So ax = by = M/2: equal spending on each. (Alternatively, just substituting the constraint into xy and optimizing directly works, but I like Lagrange multipliers.)

In this case, we want to maximize , which is equivalent to optimizing . Let's define , so we're optimizing .

Our constraint function is defined by the tradeoff between and . , so . , so .

Rearranging gives the constraint function . This is indeed linear, with a total 'budget' of .5 and a p-coefficient of 1. So by the above theorem we should have .
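A quick numeric check of the equal-spend principle (a, b, and M here are my own placeholder names for the unit costs and the budget; the specific numbers are arbitrary):

```python
import numpy as np

# Maximize x*y subject to a*x + b*y = M by brute force along the budget line.
a, b, M = 1.0, 3.0, 0.5

x = np.linspace(0.0, M / a, 100001)  # all feasible x on the budget line
y = (M - a * x) / b                  # y is then determined by the constraint
best = np.argmax(x * y)
print(a * x[best], b * y[best])      # both ~M/2 = 0.25: spend equally on x and y
```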

Comment by BurntVictory on AI development incentive gradients are not uniformly terrible · 2019-03-14T03:03:21.292Z · LW · GW

I think your solution to "reckless rivals" might be wrong? I think you mistakenly put a multiplier of q instead of a p on the left-hand side of the inequality. (The derivation of the general inequality checks out, though, and I like your point about discontinuous effects of capacity investment when you assume that the opponent plays a known pure strategy.)

I'll use slightly different notation from yours, to avoid overloading p and q. (This ends up not mattering because of linearity, but eh.) Let be the initial probabilities for winning and safety|winning. Let be the capacity variable, and without loss of generality let start at and end at . Then and . So , so . And , so .

Therefore, the left-hand side of the inequality, , equals . At the initial point , this simplifies to .

Let's assume . The relative safety of the other project is , which at simplifies to .

Thus we should commit more to capacity when , or , or . This is a little weird, but makes a bit more intuitive sense to me than or mattering.

Comment by BurntVictory on Drexler on AI Risk · 2019-02-12T22:48:01.959Z · LW · GW

Yeah, I worry that competitive pressure could convince people to push for unsafe systems. Military AI seems like an especially risky case. Military goals are harder to specify than "maximize portfolio value", but there are probably reasonable proxies, and as AI gets more capable and more widely used there's a strong incentive to get ahead of the competition.

Comment by BurntVictory on The Pavlov Strategy · 2019-01-24T17:38:13.452Z · LW · GW

Yeah, I think you’re right.* So it actually looks the same as the “TFTWF accidentally defects” case.

*assuming we specify TFTWF as “defect against DD, cooperate otherwise”. I don’t see a reasonable alternate definition. I think you’re right that defecting against DC is bad, and if we go to 3-memory, defecting against DDC while cooperating with DCD seems bad too.** Sarah can’t be assuming the latter, anyway, because the “TFTWF accidentally defects” case would look different.

**there might be some fairly reasonably-behaved variant that’s like “defect if >=2 of 3 past moves were D”, but that seems like a) probably bad since I just made it up and b) not what’s being discussed here.
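For concreteness, here's how I'd code up the two specifications discussed above (my own sketch; `history` is the list of the opponent's past moves, most recent last):

```python
def tftwf(history):
    """'Defect against DD, cooperate otherwise': defect only if the opponent's
    last two moves were both D."""
    return "D" if history[-2:] == ["D", "D"] else "C"

def tftwf_variant(history):
    """Footnote variant: defect if at least 2 of the opponent's last 3 moves were D."""
    return "D" if history[-3:].count("D") >= 2 else "C"

print(tftwf(["D", "C"]))                # C: a single defection is forgiven
print(tftwf(["C", "D", "D"]))           # D: responds to DD
print(tftwf_variant(["D", "C", "D"]))   # D: 2 of the last 3 moves were D
```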

Comment by BurntVictory on Do the best ideas float to the top? · 2019-01-22T22:16:51.386Z · LW · GW

I liked the playful writing here.

Maybe I'm being dumb, but I feel like spelling out some of your ideas would have been useful. (Or maybe you're just playing with ~pre-rigor intuitions, and I'm overthinking this.)

I think "float to the top" could plausibly mean:

A. In practice, human nature biases us towards treating these ideas as if they were true.

B. Ideal reasoning implies that these ideas should be treated as if they were true.

C. By postulate, these ideas end up reaching fixation in society. [Which then implies things about what members of society can and can't recognize, e.g. the existence of AIXI-like actors.]

Likewise, what level do you want a NAT to be implemented at? Personal behavior? Structure of group blog sites? Social norms?

Comment by BurntVictory on Less Competition, More Meritocracy? · 2019-01-22T18:59:11.194Z · LW · GW

I'll echo the other commenters in saying this was interesting and valuable, but also (perhaps necessarily) left me to cross some significant inferential gaps. The biggest for me were in going from game-descriptions to equilibria. Maybe this is just a thing that can't be made intuitive to people who haven't solved it out? But I think that, e.g., graphs of the kinds of distributions you get in different cases would have helped me, at least.

I also had to think for a bit about what assumptions you were making here:

A more rigorous or multi-step process could have only done so much. To get better information, they would have had to add a different kind of test. That would risk introducing bad noise.

A very naive model says additional tests -> uncorrelated noise -> less noise in the average.

More realistically, we can assume that some dimensions of quality are easier to Goodhart than others, and you don't know which are which beforehand. But then, how do you know your initial choice of test isn't Goodhart-y? And even if the Goodhart noise is much larger than the true variation in skill, it seems like you can aggregate scores in a way that would allow you to make use of the information from the different tests without being bamboozled. (Depending on your use-case, you could take the average of a concave function of the scores, or use quantiles, or take the min score, etc.)
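To illustrate the aggregation point with a toy simulation (all the numbers here are made up): candidates have a true skill, each test score adds noise, and half the candidates successfully Goodhart one randomly chosen test. Selecting on the min score across tests is harder to bamboozle than selecting on the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_tests = 1000, 3

skill = rng.normal(0, 1, n_candidates)
scores = skill[:, None] + rng.normal(0, 0.5, (n_candidates, n_tests))

# Half the candidates successfully Goodhart one randomly chosen test.
goodharters = np.where(rng.random(n_candidates) < 0.5)[0]
scores[goodharters, rng.integers(0, n_tests, goodharters.size)] += 3.0

for name, agg in [("mean", scores.mean(axis=1)), ("min", scores.min(axis=1))]:
    selected = np.argsort(agg)[-100:]     # take the top 100 by this aggregate
    print(name, skill[selected].mean())   # average true skill of those selected
# The min-selected group typically ends up with higher average true skill.
```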

In reality, though, you usually have some idea what dimensions are important for the job. Maybe it's something like PCA, with the noise/signal ratio of dimensions decreasing as you go down the list of components. Then that decrease, plus marginal costs of more tests, means that there is some natural stopping point. I guess that makes sense, but it took a bit for me to get there. Is that what you were thinking?

Comment by BurntVictory on The Vulnerable World Hypothesis (by Bostrom) · 2018-11-10T03:06:24.752Z · LW · GW

A similar concept is the idea of offense-defense balance in international relations. eg, large stockpiles of nuclear weapons strongly favor “defense” (well, deterrence) because it’s prohibitively costly to develop the capacity to reliably destroy the enemy’s second-strike forces. Note the caveats there: at sufficient resource levels, and given constraints imposed by other technologies (eg inability to detect nuclear subs).

Allan Dafoe and Ben Garfinkel have a paper out on how techs tend to favor offense at low investment and defense at high investment. (That is, the resource ratio R at which an attacker with resources RD has an X% chance of defeating a defender with resources D tends to decrease with D down to a local minimum, then increase.)

(On mobile, will link later.)

Comment by BurntVictory on Mark Eichenlaub: How to develop scientific intuition · 2018-10-27T20:20:50.393Z · LW · GW

Well, it’s nonequilibrium, so pressure isn’t even at each layer of water any more...

When I picture this happening, there’s a pulse of high-pressure water below the rock. If you froze the rock’s motion while keeping its force on the water below it, I think the pulse would eventually equilibrate out of existence as water flowed to the side? Or if I imagine a fluid with strong drag forces on the rock, but which flows smoothly itself, it again seems plausible that the pressure equilibrates at the bottom.

(More confident in the first para than the second one.)

Comment by BurntVictory on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-30T02:46:49.878Z · LW · GW

Hey, noticed what might be errors in your lesion chart: No lesion, no cancer should give +1m utils in both cases. And your probabilities don't add to 1. Including p(lesion) explicitly doesn't meaningfully change the EV difference, so eh. However, my understanding is that the core of the lesion problem is recognizing that p(lesion) is independent of smoking; EYNS seems to say the same. Might be worth including it to make that clearer?

(I don't know much about decision theory, so maybe I'm just confused.)
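To spell out the independence point with a toy calculation (the probabilities and the smoking utility below are placeholders I made up; only the "+1m utils" for no cancer comes from the thread):

```python
# Smoking lesion with p(lesion) independent of the decision to smoke.
p_lesion = 0.2                          # placeholder value
p_cancer_given_lesion = 0.9             # cancer depends only on the lesion
p_cancer_given_no_lesion = 0.0
u_no_cancer, u_cancer = 1_000_000, 0    # "+1m utils" for no cancer
u_smoke = 1_000                         # small direct utility of smoking (placeholder)

def expected_utility(smoke: bool) -> float:
    p_cancer = (p_lesion * p_cancer_given_lesion
                + (1 - p_lesion) * p_cancer_given_no_lesion)
    health = p_cancer * u_cancer + (1 - p_cancer) * u_no_cancer
    return health + (u_smoke if smoke else 0)

# Because p(lesion) doesn't depend on smoking, the EV difference is just u_smoke,
# whatever value p_lesion takes.
print(expected_utility(True) - expected_utility(False))  # 1000.0
```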

Comment by BurntVictory on Anthropics: A Short Note on the Fission Riddle · 2018-07-29T23:21:01.306Z · LW · GW

I think what avturchin is getting at is that when you say “there is a 1/3 chance your memory is false and a 1/3 chance you are the original”, you’re implicitly conditioning only on “being one of the N total clones”, ignoring the extra information “do you remember the last split” which provides a lot of useful information. That is, if each clone fully conditioned on the information available to them, you’d get 0-.5-.5 as subjective probabilities due to your step 2.

If that’s not what you’re going for, it seems like maybe the probability you’re calculating is “probability that, given you’re randomly (uniformly) assigned to be one of the N people, you’re the original”. But then that’s obviously 1/N regardless of memory shenanigans.

If you think this is not what you’re saying, then I’m confused.

Comment by BurntVictory on An Intuitive Explanation of Solomonoff Induction · 2012-07-13T00:00:53.232Z · LW · GW

The idea of reducing hypotheses to bitstrings (i.e., programs to be run on a universal Turing machine) actually helped me a lot in understanding something about science that hindsight had previously cheapened for me. Looking back on the founding of quantum mechanics, it's easy to say "right, they should have abandoned their idea of particles existing as point objects with definite position and adopted the concept and language of probability distributions, rather than assuming a particle really exists and is just 'hidden' by the wavefunction." But the scientists of the day had a programming language in their heads where "particle" was a basic object and probability was something complicated that you had to build up--the optimization process of science had arrived at a local maximum in the landscape of possible languages to describe the world.

I realize this is a pretty simple insight, but I'm glad the article gave me a way to better understand this.