Simple Rules of Law

2019-05-19T00:10:01.124Z · score: 28 (9 votes)

Tales from the Highway

2019-05-12T19:40:00.862Z · score: 13 (6 votes)
Comment by zvi on Tales From the American Medical System · 2019-05-10T11:30:01.807Z · score: 8 (5 votes) · LW · GW

I am confused why anyone would believe that this post is attempting to pass an ITT. It isn't.

It's also giving the doctor the benefit of the doubt in important ways that jimrandomh seems confident are unlikely to be accurate - in particular, that the doctor's justification for such frequent and copious appointments is concern for the patient, and has no profit/fraud motive of any kind.

Tales From the American Medical System

2019-05-10T00:40:00.768Z · score: 52 (28 votes)
Comment by zvi on Nash equilibriums can be arbitrarily bad · 2019-05-07T16:18:09.516Z · score: 10 (2 votes) · LW · GW

Is there a name for this type of equilibrium, where a player can pre-commit such that the best response leaves the first player very well-off, but not quite optimally well-off? What about if the pre-commitment is to a mixed strategy (e.g. consider the version of this game where the player who gave the larger number gets paid nothing)?
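To make the pre-commitment structure concrete, here is a minimal sketch in Python. The payoff rule is an assumption on my part (a traveler's-dilemma-style game: both players receive the lower of the two named numbers, the strictly lower bidder gets a bonus B, the strictly higher bidder pays B), chosen to illustrate the shape of the question rather than to reproduce the post's exact game:

```python
# Illustrative sketch of pre-commitment in a number-naming game.
# Assumed payoff (my choice, not taken from the post): both players
# receive the lower of the two named numbers; the strictly lower
# bidder gets a bonus B, the strictly higher bidder pays a penalty B.

N, B = 100, 2  # assumed legal range 0..N and bonus size

def payoff(mine, theirs):
    low = min(mine, theirs)
    if mine < theirs:
        return low + B
    if mine > theirs:
        return low - B
    return low

def best_response(committed):
    """Best reply to an opponent pre-committed to naming `committed`."""
    return max(range(N + 1), key=lambda k: payoff(k, committed))

reply = best_response(100)
print(reply, payoff(reply, 100), payoff(100, reply))  # 99 101 97
```

Under these assumed parameters, committing to 100 draws a best response of 99: the responder nets 101 and the committer 97 - very well-off, but short of the 100 they'd get if both named the maximum.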

Comment by zvi on Dishonest Update Reporting · 2019-05-06T12:12:36.917Z · score: 3 (3 votes) · LW · GW

Yes, that seems right, if it can be used as the sole criterion, and be properly normalized for the time frames and questions involved. There are big second-level Goodhart traps lying in wait if people care about this metric.

Comment by zvi on Dishonest Update Reporting · 2019-05-05T10:45:30.844Z · score: 10 (6 votes) · LW · GW

Right. I kinda implied it was part of the solution but didn't say it explicitly enough, and may edit.

The problem for implementation, of course, is that explaining your reasoning is toxic in worlds with the models we describe. It's the opposite of not taking positions, staying hidden and destroying records. It opens you up to being blamed for any aspect of your reasoning. That's pretty terrible. It's doubly terrible if you're in any sort of double-think equilibrium (see SSC here). Because now, you can't explain your reasoning.

Comment by zvi on Dishonest Update Reporting · 2019-05-04T23:56:44.463Z · score: 7 (4 votes) · LW · GW

A key active ingredient here seems to be that exact ability to disguise your true position. Even if someone knows your trades, they don't know why you did them. You could have a different fair value (probability estimate), you could be hedging risk, you could expect the price to move in a direction without thinking that move is going to be accurate, and so on.

By not requiring the trader to be pinned down to anything (except profit and loss) we potentially extract more information.

And all of that applies to non-prediction markets, too.

Comment by zvi on Dishonest Update Reporting · 2019-05-04T23:50:11.078Z · score: 6 (3 votes) · LW · GW

Agreed. Changed to 'Dishonest Update Reporting.'

Comment by zvi on Dishonest Update Reporting · 2019-05-04T23:48:21.345Z · score: 4 (6 votes) · LW · GW

I think it's definitely not dishonest to actually update too slowly versus what would be ideal. As you say, almost everyone does it.

What's dishonest is for Bob to think 50% and say 70% (or 75%) because it will look better.
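To make the dishonesty concrete with a toy calculation (my illustration, not something from the original thread): under a proper scoring rule like the log score, reporting your true belief maximizes your own expected score, so Bob's misreport is strictly worse by his own lights and only "looks better" to an audience that isn't scoring properly.

```python
import math

# Sketch (my illustration, not from the comment): the log score is a
# proper scoring rule, so reporting your true belief maximizes your
# own expected score. Bob believes 50%; reporting 70% is strictly
# worse in expectation by his own lights.

def expected_log_score(belief, report):
    # Expected score of announcing `report`, with the expectation
    # taken under the forecaster's true belief.
    return belief * math.log(report) + (1 - belief) * math.log(1 - report)

print(expected_log_score(0.5, 0.5))  # -0.693, the best Bob can expect
print(expected_log_score(0.5, 0.7))  # -0.780, strictly worse
```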

Dishonest Update Reporting

2019-05-04T14:10:00.742Z · score: 55 (14 votes)
Comment by zvi on Habryka's Shortform Feed · 2019-05-03T18:34:04.684Z · score: 18 (6 votes) · LW · GW

This feature is important to me. It might turn out to be a dud, but I would be excited to experiment with it. If it was available in a way that was portable to other websites as well, that would be even more exciting to me (e.g. I could do this in my base blog).

Note that this feature can be used for more than forecasting. One key use case on Arbital was to see who was willing to endorse or disagree with various claims relevant to the post, and to what extent. That seemed very useful.

I don't think having internal betting markets is going to add enough value to justify the costs involved. Especially since it both can't be real money (for legal reasons, etc) and can't not be real money if it's going to do what it needs to do.

Comment by zvi on Asymmetric Justice · 2019-04-30T20:14:31.512Z · score: 11 (5 votes) · LW · GW

Robin seems to have run smack into the reasonably obvious "slavery is bad, so anything that could be seen as justifying slavery, or excusing slavery, is also bad to say even if true" thing. It's not that he isn't sincere, it's that it seems like he should have figured this one out by now. I am confused by his confusion, and wish he'd spend his points more efficiently.

The Asymmetric Justice model, whereby you are as bad as the worst thing you've done, would seem to cover this reasonably well at first glance - "Owned a slave" is very bad, and "Owned a slave but didn't force them into it" doesn't score a different number of points, because "Owned a slave" is the salient biggest bad, in addition to (or rather than) "Forced someone into slavery."

There's also the enrichment that, past a certain point, things just get marked as 'evil' or 'bad' and in many contexts, past that point, it doesn't matter, because you score points by condemning them and are guilty alongside them if you defend them, and pointing out truth counts as defending, and lies or bad arguments against them count as condemning. But that all seems... elementary? Is any of this non-obvious? Actually asking.

Comment by zvi on Blackmail · 2019-04-29T12:57:46.886Z · score: 5 (3 votes) · LW · GW

Yes. This is a viable strategy people use in the real world. It is often called "get ahead of the story."

Comment by zvi on On The London Mulligan · 2019-04-29T12:56:48.125Z · score: 2 (1 votes) · LW · GW

That's how it works out there in the real world. There's a big cost to change and a bigger cost to reversing change. Plus, the idea is to give us fewer dead games. If they gave us that, then took it away, that seems quite bad.

If the whole thing is subtle, it won't be undone.

If it's obvious (e.g. dead games actually go up, not down) then it would perhaps be undone.

Comment by zvi on Speculations on Duo Standard · 2019-04-29T12:54:47.080Z · score: 2 (1 votes) · LW · GW

It varies. Sometimes, they are meant to illustrate broader points, but mostly they are about the object-level, as is this one. Deck guides are always object level.

I'll give thought to whether this makes sense to do at my main blog (which also has tags but not in a way that would be helpful here). If not, the question is whether to go in and edit the posts manually. I would be 100% fine with moderators adding such a tag where it was appropriate; if we didn't copy everything over to LW automatically, I wouldn't be copying MtG posts here.

Comment by zvi on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T12:15:03.141Z · score: 17 (9 votes) · LW · GW

I worry that these studies in support of free speech are narrowly defining free speech as 'allowed to speak' rather than lack of social and economic punishment for speaking one's mind.

I also worry that the reason free speech looks supported in Murray's study is that he's asking about the things people wanted to censor in 1970, as opposed to the things they want to censor now. E.g. imagine the graph for someone against homosexuality, or in favor of religion, or for big crack-downs on communists. The consensus view on these now, among moderates, would have been subject to censorship in 1970.

I feel a lack of free speech on some issues, but actually zero of that is coming from the threat of government intervention or even corporate censorship; rather, it comes from worry about social, economic or reputational retaliation.

Comment by zvi on Habryka's Shortform Feed · 2019-04-29T12:08:47.879Z · score: 8 (4 votes) · LW · GW

Which is definitely better than it expiring, and 24h batching is better than instantaneous feedback (unless you were going to check posts individually for information already, in which case things are already quite bad). It's not obvious to me what encouraging daily checks here is doing for discourse as opposed to being a Skinner box.

Comment by zvi on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T12:06:57.654Z · score: 16 (6 votes) · LW · GW

I have seen the term used positively in the Trump era. My guess is that this is a reaction to "this is bad" becoming a rhetorical point, which makes others respond that it is good.

Whereas before that, the term had been abandoned due to its negative connotations. Part of my model of this is that people support censoring specific things but are against censoring in general. Just like they say the government/corporation spends too much but are individually in favor of every government program and against firing anyone.

Comment by zvi on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T12:01:45.097Z · score: 9 (6 votes) · LW · GW

I gotta love this quote from their website:

As the information war escalates, we believe more than ever that our responsibility is to provide an advanced, reliable disinformation solution to national security agencies, responsible leaders, and trusted brands.

The ambiguity between "solution to disinformation" and "solution in the form of disinformation" is delicious.

They say this is only to be used on manipulative or disinformation campaigns:

Based on data from our monitoring system, New Knowledge analysts provide the tools and support that companies need to disrupt manipulative online campaigns and maintain brand integrity. No system integration required. No private data collected.

I have no idea why what they are offering would be an asymmetric weapon. Nor do I think that 'get very good at detecting and understanding manipulative social media campaigns' is a strategy likely to lead to non-manipulative counter-strategies at a profit-maximizing corporation.

I can see why it might be better at disruption than creation, like many things. This might be one of the few places that makes me feel a little better.

Comment by zvi on Asymmetric Justice · 2019-04-29T00:25:32.745Z · score: 6 (3 votes) · LW · GW

No need to apologize for focusing on points of disagreement. And I'm grateful for the commentary and confusion, because it pointed to important questions about how to have good discourse and caused me to notice something I do frequently that is likely a mistake. It's like finally having an editor, in the good way.

I'm not on the moderation team, but my perspective is that the two goals overlap and are fully compatible but largely distinct and need to be optimized for in different ways (see Tale of Alice Almost). And this is the situation in which you get a conflict between them, because norms are messy and you can't avoid what happens in hard mode threads bleeding into other places.

Comment by zvi on Habryka's Shortform Feed · 2019-04-28T23:32:36.723Z · score: 9 (5 votes) · LW · GW

If people are checking karma changes constantly and getting emotional validation or pain from the result, that seems like a bad result. And yes, the whole 'one -2 and three +17s feels like everyone hates me' thing is real, can confirm.

Comment by zvi on The Forces of Blandness and the Disagreeable Majority · 2019-04-28T23:13:09.256Z · score: 38 (11 votes) · LW · GW

(Epistemic Status: Quick brainstorm slash free-form just-write-it exercise. This wants to be a post, but I want to throw it out as a comment quickly first and see if it sounds right.)

Could we tie this directly in with Asymmetric Justice?

If you are a big thing you are being evaluated primarily on the basis of what horrible things you've done, and reap little of the relative benefit from the brilliant things. If you're going to then enable many weird offensive things, that's a losing plan. Even if the group is a huge win on net, some of them will be bad and get you in a lot of trouble.

If you are a small thing, and want to do one weird thing as the only thing, you have a chance that it turns out all right at least with respect to those you are appealing to with your newspaper, blog or what have you. So you can gain the benefits of exploration, free expression, creation of knowledge and so on.

If you are a medium-size thing doing correlated weird things, which are weird and offensive in the same way, then again your risk is contained, because if they're sufficiently correlated, it's all one thing, so you won't reliably be evaluated as bad and can again get the benefits of your one thing. But it also means that in order to do that, you need to be consistent. No violating your group's party lines so they evaluate you as just. And of course you need to support free speech to avoid being shut down yourself by the "moderates."

So what happens? "The center" or "moderates" trying to hold is the biggest thing, has to worry about all sides judging it asymmetrically, and so is forced to come out in favor of blandness. Since a big thing like capitalism or a major corporation or the government interacts with tons of stuff, enough to get blamed for it, it needs to censor that stuff in order not to be found guilty. Hence increasing polarization and uniformity on all sides.

And in parallel, as a moderate proposing policies and law, you can accuse a whole class of things of being bad because one of them is bad with respect to one thing, and thus make the case that one must censor.

Which means this "moderate center" isn't actually anything of the sort. It's a third power with very little popular support trying to cram things down our throats, because they understand our point scoring systems better than we do - and only partly because they had a large role in engineering those systems. And they are responding to their own incentives.

You actually get the whole dynamic from first principles.

Individual people are small, can and want to take risks, feel increasingly censored for increasingly stupid reasons, and become more pro-free-speech. Large powerful things that want to appeal to multiple sides race with each other to be bigger censors so they can avoid being found guilty, and scapegoat the other moderate powerful things they're struggling with for power, along with everyone else who they can directly censor to gain the upper hand as a group. Ideally they'd like to censor any attempt to portray things accurately or create clarity or common knowledge at all, because the people hate the censorship and they distrust power and the more information they find out, the bigger the negative points they'll assign to every big powerful thing. This creates a tacit (at least) conspiracy of the powerful against all communication, coordination and creation of common knowledge on anything that might matter. A general opposition to reason and competence seems to logically follow.

Does that sound right?

Comment by zvi on Asymmetric Justice · 2019-04-28T14:31:36.909Z · score: 14 (4 votes) · LW · GW

Right.

I did change the post on the blog as well, not only the LW version, to the new version. This wasn't a case of 'I shouldn't have to change this but Raemon is being dense' but rather 'I see two of the best people on this site focusing on this one sentence in massively distracting ways, so I'm clearly doing something wrong here,' and reaching the conclusion that this is how humans read articles, so this line needs to go. And indeed, to draw a clear distinction between the posts where I am doing pure model building and the posts with action calls.

I got frustrated because it feels like this is an expensive sacrifice that shouldn't be necessary. And because I was worried that this was an emergent pattern and dilemma against clarity, where if your call to clarity hints at a call to action people focus on the call to action, and if you don't call to action then people (especially outside of LW) say "That was nice and all but you didn't tell me what to do with that so what's the point?" and/or therefore forget what was said. And the whole issue of calls to action vs. clarity has been central to some private discussions recently, where very high-level rationalists have repeatedly reacted to calls for clarity as if they were calls to action, in patterns that seem optimized for preventing clarity and common knowledge. All of which I'm struggling to figure out how to explain.

There's also the gaslighting thing where people do politics while pretending they're not doing that, then accuse anyone who calls them out on it of doing politics (and then, of course, the worry where it goes deeper and someone accuses someone of accusing someone of playing politics, which can be very true and important but more frequently is next-level gaslighting).

We also need to do a better job of figuring out how to do things that require a lot of groundwork - to teach the hard mode advanced class. There was a time when everyone was expected to have read the sequences and understand them, which helped a lot here. But at the time, I was actively terrified of commenting let alone posting, so it certainly wasn't free.

Comment by zvi on Asymmetric Justice · 2019-04-28T14:10:36.410Z · score: 10 (2 votes) · LW · GW

As I noted in my other reply, on reflection I was definitely overly frustrated when replying here and it showed. I need to be better about that. And yes, this helps me understand where you're coming from.

Responding to the concerns:

1) It is in part a coordination problem - everyone gets benefits if there is agreement on an answer, versus disagreement among two equally useful/correct potential responses. But it's certainly not a pure coordination problem. It isn't obvious to me whether, given everyone else has coordinated on an incorrect answer, it is beneficial or harmful to you to find the correct answer (let's ignore here the question of what answer is right or wrong). You get to make your local incentives better, improve your map and understanding, set an example that can help people realize they're coordinating in the wrong place, and people you want to be associating with are more inclined to associate with you (because they see you taking a stand for the right things, and would be willing to coordinate with you on the new answer, and on improving maps and incentives in general, and play fewer games that are primarily about coordination and political group dynamics...) and so on.

There is also the distinction between, (A) I am going to internally model what gets points in a better way, and try to coordinate with and encourage and help things that tend towards positive points over those with negative points, and (B) I am going to act as if everyone else is going to go along with this, or expect them to, or get into fights over this beyond trying to convince them. I'm reasonably confident that doing (A) is a good idea if you're right, and can handle the mental load of having a model different from the model you believe that others are using.

But even if we accept that, in some somewhat-local sense, failure to coordinate means the individual gets a worse payoff while the benefits are diffused without too much expectation of a shift in equilibrium happening soon, this seems remarkably similar to many decisions of the form "do rationality or philosophy on this." Unless one gets intrinsic benefit from being right or exploring the questions, one is at best doing a lot of underpaid work, and probably just making oneself worse off. Yet here we are.

I am also, in general, willing to bite the bullet that the best answer I know about to coordination problems where there is a correct coordination point, and the group is currently getting it wrong, and the cost of getting it wrong seems high compared to the cost of some failures of coordination, and you have enough slack to do it, is to give the 'right' answer rather than the coordination answer. And to encourage such a norm.

2) Agree that I wasn't trying at all to rule this out. There are a bunch of obvious benefits to groups and to individuals of using asymmetric systems, some of which I've pointed to in these comments. To that extent, I don't think you can entirely avoid such systems, and I wouldn't propose tearing down the entire fence. A lot of my model of these situations is that such evolutionary-style systems are very lossy, leading to their being used in situations they weren't intended for, like evaluating economic systems or major corporations, or people you don't have any context on. And also they are largely designed for dealing with political coalitions and scapegoating in worlds where such things are super important and being done by others, often as the primary cause of cognition. And all these systems have to assume that you're working without the kind of logical reasoning we're using here, and care a lot that having one model, acting as if others have another, and when needed acting according to that other model, is expensive and hard, and that others who notice you have a unique model will by default seek to scapegoat you for it - which is the main reason why such problems are coordination problems, and so on. That sort of thing.

3) The goal of the conclusion/modeling game from the perspective of the group, I think we'd agree, is often to (i) coordinate on conclusions enough to act, (ii) on the answer that is best for the group, subject to needing to coordinate. I was speaking of the goal from the perspective of the individual. When I individually decide what is just, what am I doing? (a) One possibility is that I am mostly worried about things like my social status and position in the group, and whether others will praise or blame me, or scapegoat me. My view on what is just won't change what is rewarded or punished by the group much, one might say, since I am only one of a large group. Or (b) one can be primarily concerned with what is just or what norms of justice would provide the right incentives, figure that out, and try to convince others and act on that basis to the extent possible. Part of that is figuring out what answers would be stable/practical to implement/practical to get to, although ideally one would first figure out the range of what solutions do what and then pick the best practical answer.

Agreed that it would be good to have better understanding of where coordination might land, especially once we get to the point of wanting to coordinate on landing in a new place.

Comment by zvi on Asymmetric Justice · 2019-04-28T13:28:56.540Z · score: 2 (1 votes) · LW · GW

Fair enough, I don't think this needs to go deeper. I agree this was criticism rather than blame. I got more frustrated than I should have been in this spot as I explained exactly what I was thinking at the time, and this seemed to be making things worse by creating a clearer target, or something. I dunno.

Comment by zvi on Asymmetric Justice · 2019-04-27T21:35:57.664Z · score: 20 (4 votes) · LW · GW

Top-level note that the last line of this post was previously "Asymmetric systems of judgment are systems for opposing all action."

It was changed because people I respect took this as an indication that this was either in the call-to-action genre, or was a hybrid of the call-to-action and call-to-clarity genres, or was suggesting that this one action was a solution to the problem, or something. See Wei Dai's top-level comment and its thread for details.

It felt very Copenhagen Interpretation - I'd interacted with the problem of what to do about it and thus was to blame for not doing more or my solution being incomplete.

To avoid this distraction, it was removed with a wrapping-up line that doesn't do that. I am very worried about the forces that caused me to have to do that, and also at least somewhat worried about the forces that made me feel the need to include the line in the first place, and hope to have a post on such issues some time soon.

I am grateful that this was pointed out because it feels like it is pointing to an important problem that is getting worse.

Comment by zvi on Asymmetric Justice · 2019-04-27T21:18:38.780Z · score: 16 (5 votes) · LW · GW

I do not think we have no idea what to do about it. Creating common knowledge of a mistake, and ceasing to make that mistake yourself, are both doing something about it. If the problem is a coordination game then coordination to create common knowledge of the mistake seems like the obvious first move.

Comment by zvi on Asymmetric Justice · 2019-04-27T21:16:45.438Z · score: 29 (6 votes) · LW · GW

I am confused why it is unreasonable to suggest to people that, as a first step to correcting a mistake, that they themselves stop making it. I don't think that 'I individually would suffer so much from not making this mistake that I require group coordination to stop making it' applies here.

And in general, I worry that the line of reasoning that goes "group rationality problems are usually coordination problems, so it usually doesn't help much to tell people to individually 'do the right thing'" leads (as it seems to be doing directly in this case) to the suggestion that it is now unreasonable to suggest someone might do the right thing on their own, in addition to any efforts to make that a better plan or to assist with abilities to coordinate.

I'd also challenge the idea that only the group's conclusions on what is just matter, or that the goal of forming conclusions about what is just is to reach the same conclusion as the group, meaning that justice becomes 'that which the group chooses to coordinate on.' And where one's cognition is primarily about figuring out where the coordination is going to land, rather than what would in fact be just.

This isn't a PD situation. You are individually better off if you provide good incentives to those around you to behave in just fashion, and your cognitive map is better if you can properly judge what is good and bad and what to offer your support to and encourage, and what to oppose and discourage.

To the extent group coordination is required, then the solution is in fact to do what all but one sentence of the post is clearly aiming to do, explain and create clarity and common knowledge.

Comment by zvi on Asymmetric Justice · 2019-04-27T21:07:25.900Z · score: 29 (9 votes) · LW · GW

Fine. I'm convinced now. The line has been replaced by a summary-style line that is clearly not a call to action.

The pattern seems to be, if one spends 1600 words on analysis, then one sentence suggesting one might aim to avoid the mistakes pointed out in the analysis, then one is viewed as "doing two things" and/or being a call to action, and then is guilty if the call-to-action isn't sufficiently well specified and doesn't give concrete explicit paths to making progress that seem realistic and to fit people's incentives and so on?

Which itself seems like several really big problems, and an illustration of the central point of this piece!

Call to action, and the calling thereof, is an action, and thus makes one potentially blameworthy in various ways for being insufficient, whereas having no call to action would have been fine. You've interacted with the problem, and thus by CIE are responsible for not doing more. So one must not interact with the problem in any real way, and ensure that one isn't daring to suggest anything get done.

Comment by zvi on Asymmetric Justice · 2019-04-26T21:05:31.167Z · score: 12 (4 votes) · LW · GW

I also see, looking back upon it now, that this was kind of supposed to be a call for literally any action whatsoever, as opposed to striving to take as little action as possible. Or at least, I can read it like that quite easily - one needs to not strive to be the 'perfect' person in the form of someone who didn't do anything actively wrong.

Which would then be the most call-to-action of all the calls-to-action, since it is literally a Call To Action.

Comment by zvi on Asymmetric Justice · 2019-04-26T21:03:07.597Z · score: 4 (2 votes) · LW · GW

So, yeah. There's that. In terms of what I was thinking at the time, I'll quote my comment above:

But this is also of the type of thing that I do when I'm analyzing my game play choices after a match of Magic, where I come up with all sorts of explanations and deep lines of possibility and consideration that were never in my conscious analysis at the time. At the time it was more something like, this needs a conclusion, I've shown the problems with this thing, this seems like a way to wrap things up and maybe get people to think about doing the thing less and spotting/discounting it more, which would be good.

Your reaction points out a way this could be bad. By taking a call-for-clarity piece, and finishing it with a sentence that implies one might want to take action of some kind, one potentially makes a reader classify the whole thing as a call-to-action. Which is natural, since the default is to assume calls-for-clarity are failed calls-for-action, because who would bother calling for clarity? Doesn't seem worth one's time.

Which means that such things might indeed be quite bad, and to be avoided. If people end up going 'oh, I'm being asked to do less X' and therefore forget about the model of X being presented, that's a big loss.

The cost is twofold, then:

1. It becomes harder to form a good ending. You can't just delete that line without substituting another ending.

2. If we can't put an incidental/payoff call to implied action into an analysis piece, then the concrete steps this suggests won't get taken. People might think 'this is interesting' but not know what to do with it, and thus discard the presented model as unworthy of their brain space.

Which means this gets pretty muddled and it's not obvious which way this should go.

Comment by zvi on Asymmetric Justice · 2019-04-26T20:56:33.179Z · score: 2 (1 votes) · LW · GW

First point:

Is it worth the bandwidth to get into the weeds on this? To me, saying "we currently have mechanisms with which to solve X" matters little if X is not being solved in this way. I certainly don't see how 'put all the downside on the researcher' could possibly be matched, since you're certainly not going to give them most or all of the upside - again we don't even come close to doing that for drugs that can be sold at monopoly prices, and that's before giving everyone along the way their cuts.

Second:

I have at least some reasons, of varying degrees of being good reasons. The best reason I can think of for why the asymmetry is good would be that the alternative opens the door for lots of larger manipulations, and might put even greater burdens on people to constantly point out the good things they're doing, to collect all the points from them to offset where they get docked or otherwise score highly. Whereas now you only have to avoid bad things being pointed out. Or alternatively, that when people claim good things they have obvious bad incentives to do that, so you're inclined to not believe them. And that we don't have time to find all the context, and need to act on simple heuristics due to limited compute. And in some places, the willingness to *ever* do a sufficiently bad thing is very strong evidence of additional bad things, and we need to maintain a strong norm of always punishing an action to maintain a strong norm against that action.

Also potentially important is that if you let things get fuzzy, those with power will use that fuzziness to enhance their own power. When needed, they'll find ways to give themselves points to offset any bad things they're caught doing. You need a way to stop this and bring them down.

And so on.

So in some places it becomes structurally necessary to have a no-excuses (or only local and well-specified excuses like self defense) approach. But there are entire cultural groups who use this as the generic evaluate-thing algorithm and that's terrible.

That's why I chose the phrasing "aim higher" rather than telling people "don't do that." I don't think one can entirely eliminate such systems at this time at a reasonable price.

But this is also of the type of thing that I do when I'm analyzing my game play choices after a match of Magic, where I come up with all sorts of explanations and deep lines of possibility and consideration that were never in my conscious analysis at the time. At the time it was more something like, this needs a conclusion, I've shown the problems with this thing, this seems like a way to wrap things up and maybe get people to think about doing the thing less and spotting/discounting it more, which would be good.

(I will continue this line of thought down below in another reply)

Comment by zvi on Asymmetric Justice · 2019-04-26T12:47:52.128Z · score: 16 (4 votes) · LW · GW

I think of requiring scientists to get liability insurance as actually an example of the problem - a scientist that makes a breakthrough will probably capture almost none of the benefits (as a percentage of total generated surplus) even if it makes them famous and well-off. Even a full patent grant is going to be only the short-term monopoly profits.

Whereas a scientist who makes a series of trivial advances allowing publication of papers might often capture more than all of the net benefits, or there might not even be net benefits. Thus, one of several reasons for very few attempts at breakthroughs. If you allowed better capture of the upside then it would make sense to make them own more downside.

I do agree that we also have situations where the reverse happens.

The intention of the last line was: avoid using asymmetric mental point systems except where structurally necessary - and to serve as a conclusion. But the intention was to inform people and give a word to a concept that I could build upon, primarily, rather than a call for action.

It is important that calls for clarity without calls for action not be seen as failures to carefully elaborate a call for action. And in fact LW explicitly favors informing over calls for action and I've had posts (correctly) not promoted to main because they were too much of a call-for-action.

Comment by zvi on Asymmetric Justice · 2019-04-25T21:30:33.303Z · score: 5 (3 votes) · LW · GW

I think what you are pointing at is more heroic responsibility, unless you think that being unaware of something by choice actually lets you off the hook. I'm guessing you think it doesn't? If you think it does then say more.

The Good Place's ability to assign (at least in my book) shockingly accurate point totals to actions is the best case for the existence of objective morality I've ever seen, but yes we're all fully aware it is fiction. I'm using it as a way to illustrate a mode of thinking, and to recommend a great show, nothing more.

Comment by zvi on Asymmetric Justice · 2019-04-25T20:42:20.804Z · score: 12 (3 votes) · LW · GW

I'm actually going to remove the example as unneeded, as it's caused two distinct comments, one of which pointed out it's not working right and one of which challenged its assumptions. It's a distraction that isn't worth it, and a waste of space. So thank you for pointing that out.

To respond directly, one who takes on a share of tail risk needs to enjoy a share of the generic upside, so the carpenter would get a small equity stake in the house if this was a non-trivial risk. Alternatively, we could simply accept a small distortion in the construction of houses in favor of being 'too safe' and favoring carpenters who don't have children. Or we could think this punishment is simply way too large compared to what is needed to do the job.
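As a back-of-envelope sketch of how small that equity stake could be (all numbers invented for illustration; this ignores risk aversion, which would push the stake higher and only strengthens the point that some share of the upside is required):

```python
# Back-of-envelope sketch with invented numbers: the equity stake only
# needs to cover the carpenter's expected tail cost, so for rare
# catastrophes it can be small (risk aversion would push it higher).

p_collapse  = 1e-4       # assumed probability the house collapses
tail_loss   = 5_000_000  # assumed personal cost to the carpenter if it does
house_value = 500_000    # assumed value of the finished house

expected_tail_cost = p_collapse * tail_loss             # 500.0
fair_equity_share  = expected_tail_cost / house_value   # 0.001, i.e. 0.1%
print(expected_tail_cost, fair_equity_share)
```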

Comment by zvi on Asymmetric Justice · 2019-04-25T20:36:39.595Z · score: 15 (5 votes) · LW · GW

I say seemingly absurd to point out that, to my and many other ears, the statement seems upon first encounter to be absurd. And of course, the idea that it can’t be ethical to consume anything at all in any way at all, when lack of at least some consumption is death, does seem like it’s allowed to be absurd. Of course, also: Some absurd things are true!

I also think the statement is very wrong - that even the default consumption pattern is ethical as I see things (although not under some other reasonable ways of seeing things), and that an engineered-to-be-ethical one is ethical under the other reasonable ways as well, such that for any given system there exists such an engineered method.

This is because I don’t think it is reasonable to apply different order-of-magnitude calculations to second and higher order benefits and harms from actions in complex systems, and I have a much more benign view of those higher order effects than those making this statement. The main error is upstream of the statement.

That doesn’t mean one doesn’t have an affirmative duty to work to make things better, somewhere, in some way. But one must structure that as the ability for actions to be good, and the best score to not be zero (e.g. the perfect person isn’t the person who fails to interact with the system).

[This discussion in particular risks going outside LW-appropriate bounds and so should likely be continued on the blog, if it continues]

Comment by zvi on Asymmetric Justice · 2019-04-25T20:35:37.719Z · score: 22 (9 votes) · LW · GW

(Replying to the last two paragraphs)

Agreed. Several things one could say here.

1. It is not common knowledge that the level-4 simulacrum of justice is a level-4 simulacrum. Or even that it is not a level-1. There are people honestly trying to do level-1 justice using a mostly level-4 simulacrum, or a mix of all levels, etc. I feel like this error was present and somewhat ubiquitous, for various reasons good and bad, long before L-4 took over the areas in question, and its origin often *was* usefully thought of as a technical error. Its final one-winged-angel form is something else.
2. Even if something is not a technical error in the sense that no one was trying to solve a given technical problem, it is still true in many cases, including this one, that it claims that it *is* trying to solve the problem. Pointing out that it’s doing a piss-poor job of that can create knowledge or ideally common knowledge that allows the remaining lower-level players to identify and coordinate against it, or at least avoid making the mistake in their own thinking and realize what they are up against.
3. It can lead to potential ways out. One can imagine forcing common knowledge of being L-4 accelerating a reversion. Language has been destroyed, so anyone who cares about the object level can now exit and start again, and the system of levels (and perhaps The System, if it’s too linked to not be doomed) can collapse. That seems good. Alternatively, it can create value for the game piece of claiming that everything else is a simulacrum and thus one can invest substantial resources in creating something that is protected (at least for now) from that, to compete. Or, it can free the L-1 players from not only confusion but feeling bad about playing the game being played, since once there is only a game board, the game itself becomes the object level – that which no longer has *any* link to reality on the original level has its own distinct reality, and you can operate on that object level, and kind of start again with the new meanings of words.
4. Yes! These people ARE hopelessly perverse! And also, a sufficient amount of such pressures also makes them stupid because they don’t have any words or accurate information to think with! That’s in addition to being situationally constrained and habituated. These are not exclusive things.

In general, I have the instinct that pointing out that things *would be* technical errors if they were part of a proposed technical solution to the problem they claim to be solving, is a useful thing to do to help create common knowledge / knowledge.

Comment by zvi on Asymmetric Justice · 2019-04-25T20:35:09.219Z · score: 13 (4 votes) · LW · GW

Endorse following that link above to simulacra level 1, for anyone following this.

One would think that it would also be powerful (at level 4) to create common knowledge of your *lack* of ability to interact with or help with a thing, which can be assisted by the creation of common knowledge blaming someone else. And in fact I do think we observe a lot of attempts to create common “knowledge” (air quotes because the information in question is often incomplete, misleading or outright false) about who is to blame for various things.

It is also reasonable in some sense, at that point, to put a large multiplier on bad things for which we establish common knowledge if we expect that most bad things do not become common knowledge, to the extent that one might be judged to be as bad as the worst established action.

Which in turn results in anything and anyone that is under sufficient hostile scrutiny, and has taken a bunch of actions, being seen as bad.

The Copenhagen Interpretation actually is perverse and is quite bad, whether or not it is a locally reasonable action in some cases for people on L-2 or higher.

One of the big advantages, to me, of TCI is that in addition to explaining specific behaviors very well in many cases, it also points out that the people involved can’t be L-1 players, and since most people agree with TCI, most people aren’t L-1.

Of course, it is rather silly to think that no one in the community is making honest mistakes about what deserve praise or blame; in addition to any and all dishonest ‘mistakes’ there are constant important honest ones as well. So hanging on to a pure L-1 perspective has its own problems even with only L-1 players, before a war into L-2.

There’s a ton of hostile action but you don’t need it to generate a lot of the same results anyway at lower magnitudes.

Comment by zvi on Asymmetric Justice · 2019-04-25T20:34:28.159Z · score: 11 (3 votes) · LW · GW

Noting that I also replied to Benquo's comments back at the original post (he posted them in both places): https://thezvi.wordpress.com/2019/04/25/asymmetric-justice/. I will cross-post the 'first wave' of replies here but may or may not post subsequent waves should they exist.

Comment by zvi on Asymmetric Justice · 2019-04-25T20:33:39.553Z · score: 10 (2 votes) · LW · GW

I am curious if that line ever actually got enforced.

I don’t think that, in practice, houses collapse all that often, or that preventing that is that expensive. So it’s more like (I’m completely guessing, I know nothing else about Babylonian architecture), there was more of an emphasis on things that don’t fall down over other properties. What you do is ban flimsy housing, but the main cost of housing lies elsewhere.

Asymmetric Justice

2019-04-25T16:00:01.106Z · score: 118 (41 votes)

Counterfactuals about Social Media

2019-04-22T12:20:00.476Z · score: 53 (19 votes)

Reflections on Duo Standard

2019-04-18T23:20:01.037Z · score: 8 (1 votes)

Reflections on the Mythic Invitational

2019-04-17T11:50:00.315Z · score: 11 (3 votes)
Comment by zvi on Do you like bullet points? · 2019-03-26T19:42:40.358Z · score: 8 (4 votes) · LW · GW

How does your brain respond to italics? I've been using italics aggressively but bold only in extreme cases, for related reasons.

Deck Guide: Biomancer’s Familiar

2019-03-26T15:20:00.420Z · score: 5 (4 votes)
Comment by zvi on What failure looks like · 2019-03-22T18:06:54.363Z · score: 15 (5 votes) · LW · GW

Is this future AI catastrophe? Or is this just a description of current events - a general, gradual collapse?

This seems like what is happening now, and has been for a while. Existing ML systems are clearly making Part I problems, already quite bad before ML was a thing at all, much worse, to the extent that I don't see much ability left in our civilization to get anything that can't be measured in a short-term feedback loop - even in spaces like this, appeals to non-measurable or non-explicit concerns are a near-impossible sell.

Part II problems are not yet coming from ML systems, exactly, but we certainly have algorithms that are effectively optimized and selected for the ability to gain influence; the algorithm gains influence, which causes people to care about it and feed into it, causing it to get more. If we get less direct in the metaphor, we get the same thing with memetics, culture, life strategies, corporations, media properties and so on. The emphasis on choosing winners, being 'on the right side of history', supporting those who are good at getting support. OP notes that this happens in non-ML situations explicitly, and there's no clear dividing line in any case.

So if there is another theory that says this has already happened, what would one do next?

Comment by zvi on Rest Days vs Recovery Days · 2019-03-21T14:04:03.040Z · score: 19 (10 votes) · LW · GW

This distinction seems super valuable. What I find most interesting is that I would have labeled what OP calls Rest as Recovery, and what it calls Recovery as Rest...

Comment by zvi on Privacy · 2019-03-17T17:20:12.604Z · score: 19 (6 votes) · LW · GW

I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage deeper into the thread.

Implication: it's bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)

>> What I'm primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that - you get signaling equilibria, as Wei Dai notes in his top-level comment. I believe that this in many situations is much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn't both common knowledge of this and agreement on what is and isn't just). The primary concern isn't whether someone can expect to be on-net punished or rewarded, but how behaviors are changed.

We need people there with us who won’t judge us. Who won’t use information against us.

Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it's worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.

>> Judge here means to react to information about someone or their actions or thoughts largely by updating their view of the person - to not have to worry (as much, at least) about how things make you seem. The second sentence is a second claim, that we also need them not to use the information against us. I did not intend for the second to seem to be part of the first.

A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.

Implication (in the context of the overall argument): a general reduction in privacy wouldn't lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.

>> That doesn't follow at all, and I'm confused why you think that it does. I'm saying that when I try to design a norm system from scratch in order to be compatible with full non-contextual strong enforcement, I don't see a way to do that. Not that things wouldn't change - I'm sure they would.

There are also known dilemmas where any action taken would be a norm violation of a sacred value.

Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won't adjust even when this is obvious.

>> The system of norms is messy, which is different than corrupt. Different norms conflict. Yes, the system is corrupt, but that's not required for this to be a problem. Concrete example, chosen to hopefully be not controversial: Either turn away the expensive sick child patient, or risk bankrupting the hospital.

Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.

Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it's better not to know that in concrete detail.

>> Consider the literal example of sausage being made. The central problem is not that people are afraid the sausage makers will strike back at them. The problem is knowing reduces one's ability to enjoy sausage. Alternatively, it might force one to stop enjoying sausage.

>> Another important dynamic is that we want to enforce a norm that X is bad and should be minimized. But sometimes X is necessary. So we'd rather not be too reminded of the X that is necessary in some situations where we know X must occur, to avoid weakening the norm against X elsewhere, and because we don't want to penalize those doing X where it is necessary as we would instinctively do if we learned too much detail.

We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.

Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.

>> OK, this one's just straight up correct if you remove the unjust regime part. Also, I am married with children.

But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.

Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)

>> As I noted above, my model of norms is that they are, even at their best, messy ways of steering behavior, and generally just norms will in some circumstances push towards incorrect action in ways the norm system would cause people to instinctively punish. In such cases it is sometimes correct to violate the norm system, even if it is as just a system as one could hope for. And yes, in some of those cases, it would be good to hide that this was done, to avoid weakening norms (including by allowing such cases to go unpunished, thus enabling otherwise stronger punishment).

If others know exactly what resources we have, they can and will take all of them.

Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn't 100% [EDIT: see Ben's comment, the actual rate of extraction is higher than the marginal tax rate])

>> This is not primarily a statement about The Powers That Be or any particular bad guys. I think this is inherent in how people and politics operate, and what happens when one has many conflicting would-be sacred values. Of course, it is also a statement that when gangsters do go after you, it is important that they not know, and there is always worry about potential gangsters on many levels whether or not they have won. Often the thing taking all your resources is not a bad guy - e.g. expensive medical treatments, or in-need family members, etc etc.

If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.

Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)

>> Often on the margin more information is helpful. But complete information is highly dangerous. And in my experience, most systems in an interesting equilibrium where good things happen sustain that partly with fuzziness and uncertainty - the idea that obeying the spirit of the rules and working towards the goals and good things gets rewarded, other action gets punished, in uncertain ways. There need to be unknowns in the system. Competitions where every action by other agents is known are one-player games about optimization and exploitation.

World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.

Implication (in context): strategic ambiguity isn't just necessary for us given our circumstances, it's necessary in general, even if we lived in a surveillance state. (Huh?)

>> Strategic ambiguity is necessary for the surveillance state so that people can't do everything the state didn't explicitly punish/forbid. It is necessary for those living in the state, because the risk of revolution, the we're-not-going-to-take-it-anymore moment, helps keep such places relatively livable versus places where there is no such fear. It is important that the state not know exactly what will cause the people to rise up, or it will treat them exactly as badly as it can without triggering that. And of course I was also talking explicitly about things like 'if you cross that border we will be at war' - there are times when you want to be 100% clear that there will be war (e.g. NATO) and others where you want to be 100% unclear (e.g. Taiwan).

To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn't drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.

>> I hope this cleared things up. And of course, you can disagree with many, most or even all my arguments and still not think we should radically reduce privacy. Radical changes don't default to being a good idea if someone gives invalid arguments against them!

Comment by zvi on Privacy · 2019-03-17T16:48:57.124Z · score: 2 (1 votes) · LW · GW

I replied to this comment on my blog (https://thezvi.wordpress.com/2019/03/15/privacy/#comment-3827)

Privacy

2019-03-15T20:20:00.269Z · score: 79 (26 votes)

Speculations on Duo Standard

2019-03-14T14:30:00.343Z · score: 10 (6 votes)

New York Restaurants I Love: Pizza

2019-03-12T12:10:01.002Z · score: 11 (6 votes)
Comment by zvi on On The London Mulligan · 2019-03-07T14:01:38.398Z · score: 2 (1 votes) · LW · GW

They would not change it back.

On The London Mulligan

2019-03-05T21:30:00.662Z · score: 5 (6 votes)
Comment by zvi on Blackmail · 2019-02-20T22:31:27.716Z · score: 2 (1 votes) · LW · GW

Yes. Long post is long and I didn't want to throw out arguments about particular reveals to show this - in particular, we all think the cost of that should be zero in that case, and we all know it often very much isn't. And I didn't want anyone to think I was relying on that.

Comment by zvi on Blackmail · 2019-02-20T22:29:56.460Z · score: 3 (2 votes) · LW · GW

I could have worded it to make this more clear but I think the point stands when clarified/understood - the proximate goal of the blackmail release is to be harmful, whereas the proximate goal of the gossip might or might not be.

If others agree it is misleading I will make this more explicit.

Comment by zvi on Blackmail · 2019-02-20T22:05:55.416Z · score: 4 (3 votes) · LW · GW

Yes. It's doing a few things, and that's a lot of it.

Blackmail

2019-02-19T03:50:04.606Z · score: 67 (28 votes)

New York Restaurants I Love: Breakfast

2019-02-14T13:10:01.072Z · score: 9 (7 votes)

Minimize Use of Standard Internet Food Delivery

2019-02-10T19:50:00.866Z · score: -15 (3 votes)
Comment by zvi on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-31T17:39:10.902Z · score: 5 (3 votes) · LW · GW

We're not out. Certainly we're not out of games - e.g. Magic: The Gathering. Which would be a big leap.

For actual basic board games, the one I want to see is Stratego, actually; the only issue is I don't know if there are humans who have bothered to master it.

Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem)

2019-01-30T01:10:00.414Z · score: 47 (20 votes)
Comment by zvi on Announcement: AI alignment prize round 4 winners · 2019-01-28T13:52:48.560Z · score: 6 (3 votes) · LW · GW

Important not to let the perfect be the enemy of the good. There's almost certainly a better way to find mentors, but this would be far better than not doing anything, so I'd say that if you can't find an actionable better option within (let's say) a month, you should just do it. Or just do it now and replace with better method when you find one.

Comment by zvi on Less Competition, More Meritocracy? · 2019-01-23T14:17:12.589Z · score: 4 (2 votes) · LW · GW

In that particular case, I would have chosen different names that likely would have resonated better, but felt it was important not to change the paper's chosen labels, even though they seemed not great. That might have been an error.

Their explanation is that the question is: will the weaker candidates concede that they are weaker than the strong ones and let the strong ones all win, or will they challenge the stronger candidates?

Suggestions for other ways to make this more clear are appreciated. I'd like to be able to write things like this in a way that people actually read and benefit from.

Game Analysis Index

2019-01-21T15:30:00.371Z · score: 13 (4 votes)
Comment by zvi on Announcement: AI alignment prize round 4 winners · 2019-01-20T16:22:20.381Z · score: 26 (12 votes) · LW · GW

I want to post a marker here that if I don't write up my lessons learned from the prize process within the next month, people should bug me about that until I do.

Less Competition, More Meritocracy?

2019-01-20T02:00:00.974Z · score: 81 (24 votes)

Disadvantages of Card Rebalancing

2019-01-06T23:30:08.255Z · score: 33 (7 votes)

Advantages of Card Rebalancing

2019-01-01T13:10:02.224Z · score: 9 (2 votes)

Card Rebalancing and Economic Considerations in Digital Card Games

2018-12-31T17:00:00.547Z · score: 14 (5 votes)

Card Balance and Artifact

2018-12-28T13:10:00.323Z · score: 9 (2 votes)

Card Collection and Ownership

2018-12-27T13:10:00.977Z · score: 19 (5 votes)

Artifact Embraces Card Balance Changes

2018-12-26T13:10:00.384Z · score: 12 (3 votes)

Fifteen Things I Learned From Watching a Game of Secret Hitler

2018-12-17T13:40:01.047Z · score: 13 (8 votes)

Review: Slay the Spire

2018-12-09T20:40:01.616Z · score: 14 (9 votes)

Prediction Markets Are About Being Right

2018-12-08T14:00:00.281Z · score: 81 (26 votes)

Review: Artifact

2018-11-22T15:00:01.335Z · score: 21 (8 votes)

Preschool: Much Less Than You Wanted To Know

2018-11-20T19:30:01.155Z · score: 65 (21 votes)

Deck Guide: Burning Drakes

2018-11-13T19:40:00.409Z · score: 9 (2 votes)

Octopath Traveler: Spoiler-Free Review

2018-11-05T17:50:00.986Z · score: 12 (4 votes)

Linkpost: Arena’s New Opening Hand Rule Has Huge Implications For How We Play the Game

2018-11-01T12:30:00.810Z · score: 13 (4 votes)

The Art of the Overbet

2018-10-19T14:00:00.518Z · score: 58 (25 votes)

The Kelly Criterion

2018-10-15T21:20:03.430Z · score: 60 (28 votes)

Additional arguments for NIMBY

2018-10-11T20:40:05.547Z · score: 35 (11 votes)

Eternal: The Exit Interview

2018-10-10T16:50:02.776Z · score: 12 (3 votes)

Apply for Emergent Ventures

2018-09-13T21:50:00.295Z · score: 45 (17 votes)

On Robin Hanson’s Board Game

2018-09-08T17:10:00.263Z · score: 55 (17 votes)

You Play to Win the Game

2018-08-30T14:10:00.279Z · score: 26 (10 votes)

Unknown Knowns

2018-08-28T13:20:00.982Z · score: 105 (46 votes)

Chris Pikula Belongs in the Magic Hall of Fame

2018-08-22T21:10:00.448Z · score: 28 (17 votes)

Subsidizing Prediction Markets

2018-08-17T15:40:00.653Z · score: 99 (27 votes)

Tidying One’s Room

2018-08-16T13:50:00.303Z · score: 39 (13 votes)

Prediction Markets: When Do They Work?

2018-07-26T12:30:00.565Z · score: 116 (43 votes)

Who Wants The Job?

2018-07-22T14:00:00.296Z · score: 23 (15 votes)

Simplicio and Sophisticus

2018-07-22T13:30:00.333Z · score: 42 (19 votes)

Why Destructive Value Capture?

2018-06-18T12:20:00.407Z · score: 40 (19 votes)

Front Row Center

2018-06-11T13:50:00.237Z · score: 32 (19 votes)

Simplified Poker Conclusions

2018-06-09T21:50:00.400Z · score: 63 (19 votes)