Posts

Three on Two: Temur Walkers, Elk Blade, Goblin Blade and Dino Blade 2019-11-14T16:20:00.523Z · score: 10 (2 votes)
Ban the London Mulligan 2019-11-11T11:10:00.443Z · score: 14 (6 votes)
Artifact: What Went Wrong? 2019-10-08T12:10:01.019Z · score: 33 (9 votes)
Free Money at PredictIt? 2019-09-26T16:10:00.587Z · score: 50 (21 votes)
Timer Toxicities 2019-09-22T12:10:00.701Z · score: 42 (12 votes)
Free-to-Play Games: Three Key Trade-Offs 2019-09-10T12:10:00.440Z · score: 54 (16 votes)
Who To Root For: 2019 College Football Edition 2019-09-06T08:10:00.314Z · score: 11 (3 votes)
Dual Wielding 2019-08-27T14:10:00.715Z · score: 53 (27 votes)
Mistake Versus Conflict Theory of Against Billionaire Philanthropy 2019-08-01T13:10:01.408Z · score: 28 (23 votes)
Everybody Knows 2019-07-02T12:20:00.646Z · score: 74 (25 votes)
Magic Arena Bot Drafting 2019-06-18T16:00:00.402Z · score: 18 (6 votes)
Press Your Luck 2019-06-15T15:30:00.702Z · score: 14 (7 votes)
Some Ways Coordination is Hard 2019-06-13T13:00:00.443Z · score: 46 (10 votes)
Moral Mazes and Short Termism 2019-06-02T11:30:00.348Z · score: 63 (19 votes)
Quotes from Moral Mazes 2019-05-30T11:50:00.489Z · score: 87 (27 votes)
Laws of John Wick 2019-05-24T15:20:00.322Z · score: 21 (9 votes)
More Notes on Simple Rules 2019-05-21T14:50:00.305Z · score: 34 (10 votes)
Simple Rules of Law 2019-05-19T00:10:01.124Z · score: 53 (15 votes)
Tales from the Highway 2019-05-12T19:40:00.862Z · score: 15 (7 votes)
Tales From the American Medical System 2019-05-10T00:40:00.768Z · score: 55 (31 votes)
Dishonest Update Reporting 2019-05-04T14:10:00.742Z · score: 55 (14 votes)
Asymmetric Justice 2019-04-25T16:00:01.106Z · score: 148 (54 votes)
Counterfactuals about Social Media 2019-04-22T12:20:00.476Z · score: 54 (20 votes)
Reflections on Duo Standard 2019-04-18T23:20:01.037Z · score: 8 (1 votes)
Reflections on the Mythic Invitational 2019-04-17T11:50:00.315Z · score: 11 (3 votes)
Deck Guide: Biomancer’s Familiar 2019-03-26T15:20:00.420Z · score: 5 (4 votes)
Privacy 2019-03-15T20:20:00.269Z · score: 79 (26 votes)
Speculations on Duo Standard 2019-03-14T14:30:00.343Z · score: 10 (6 votes)
New York Restaurants I Love: Pizza 2019-03-12T12:10:01.002Z · score: 11 (6 votes)
On The London Mulligan 2019-03-05T21:30:00.662Z · score: 5 (6 votes)
Blackmail 2019-02-19T03:50:04.606Z · score: 74 (29 votes)
New York Restaurants I Love: Breakfast 2019-02-14T13:10:01.072Z · score: 9 (7 votes)
Minimize Use of Standard Internet Food Delivery 2019-02-10T19:50:00.866Z · score: -21 (4 votes)
Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) 2019-01-30T01:10:00.414Z · score: 47 (20 votes)
Game Analysis Index 2019-01-21T15:30:00.371Z · score: 13 (4 votes)
Less Competition, More Meritocracy? 2019-01-20T02:00:00.974Z · score: 81 (24 votes)
Disadvantages of Card Rebalancing 2019-01-06T23:30:08.255Z · score: 33 (7 votes)
Advantages of Card Rebalancing 2019-01-01T13:10:02.224Z · score: 9 (2 votes)
Card Rebalancing and Economic Considerations in Digital Card Games 2018-12-31T17:00:00.547Z · score: 14 (5 votes)
Card Balance and Artifact 2018-12-28T13:10:00.323Z · score: 9 (2 votes)
Card Collection and Ownership 2018-12-27T13:10:00.977Z · score: 20 (6 votes)
Artifact Embraces Card Balance Changes 2018-12-26T13:10:00.384Z · score: 12 (3 votes)
Fifteen Things I Learned From Watching a Game of Secret Hitler 2018-12-17T13:40:01.047Z · score: 14 (9 votes)
Review: Slay the Spire 2018-12-09T20:40:01.616Z · score: 14 (9 votes)
Prediction Markets Are About Being Right 2018-12-08T14:00:00.281Z · score: 83 (27 votes)
Review: Artifact 2018-11-22T15:00:01.335Z · score: 21 (8 votes)
Preschool: Much Less Than You Wanted To Know 2018-11-20T19:30:01.155Z · score: 68 (23 votes)
Deck Guide: Burning Drakes 2018-11-13T19:40:00.409Z · score: 9 (2 votes)
Octopath Traveler: Spoiler-Free Review 2018-11-05T17:50:00.986Z · score: 12 (4 votes)
Linkpost: Arena’s New Opening Hand Rule Has Huge Implications For How We Play the Game 2018-11-01T12:30:00.810Z · score: 13 (4 votes)

Comments

Comment by zvi on Ban the London Mulligan · 2019-11-14T22:41:54.560Z · score: 2 (1 votes) · LW · GW

Anything this complicated is a non-starter. I do think its heart is in the right place, but it needs to be kept simple.

Comment by zvi on Everybody Knows · 2019-09-27T14:17:46.365Z · score: 2 (1 votes) · LW · GW

It's all pretty great. Agreed that the additional verses are on point, but I didn't want to go on too long.

Comment by zvi on Free Money at PredictIt? · 2019-09-26T22:20:52.348Z · score: 2 (1 votes) · LW · GW

One weird implication is that if *enough* people do this and force the market back into line with 100% combined probability, you could then close out of your positions with no losses and only wins, and still make a profit. Quirky.

Comment by zvi on Free Money at PredictIt? · 2019-09-26T22:18:17.635Z · score: 6 (5 votes) · LW · GW

Basically you can deposit for free. You pay 5% to withdraw. Net winnings cost 10%. Capital lock-up is your max loss on a given market; the bet limit is $850 of liability on each contract, regardless of what you have in other contracts. The market is always live. You always use limit orders, which you can cancel at any time.

Rules for some contracts are kind of weird, so if it matters, read 'em.

Comment by zvi on Free Money at PredictIt? · 2019-09-26T22:16:32.621Z · score: 2 (1 votes) · LW · GW

I've edited this in the OP; mods are requested to re-input the post.

Comment by zvi on Free Money at PredictIt? · 2019-09-26T22:09:52.937Z · score: 2 (1 votes) · LW · GW

It would be really weird if they charged you 10% on your net winnings and didn't tie up the capital to pay that fee, but that is what the written rules imply. If that were true and the issue never got corrected, you would pay back most of the cash. 112% would still technically be a win (and you get a full win if the field comes in, of course), but it's quite the tiny profit.

The 5% is another issue if you plan to move things in and out frequently; I've been rolling wins. Rossry's right that you don't put money in just to sell 95s, win, and withdraw it, but you can do lots of 95s in succession (and probably should, from a pure maximizing perspective).

And I certainly haven't been doing full arbitrage, in general, so there's that.

It all fits the basic hypothesis of 'things need to be extremely wrong before they are worth fixing.' I will edit to make sure people understand the fee issue properly.
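For concreteness, here is a minimal sketch of the arithmetic being discussed in this thread. It assumes the 10% fee comes out of the winnings on each winning contract separately and that withdrawals cost 5%; the prices and the exact fee treatment are illustrative assumptions, not actual PredictIt data:

```python
# Hypothetical sketch: buy one "No" share on every contract in a market where
# exactly one contract resolves Yes. Prices and fee treatment are assumptions.

def arb_profit(no_prices, winnings_fee=0.10, withdrawal_fee=0.05):
    """Return (gross profit, worst-case profit after the winnings fee,
    cash recovered if you then withdraw everything)."""
    cost = sum(no_prices)  # capital tied up: you pay the "No" price on each contract
    worst_profit = None
    for yes_index in range(len(no_prices)):  # consider each contract resolving Yes
        winnings = sum((1 - p) * (1 - winnings_fee)  # 10% taken from each winning contract
                       for i, p in enumerate(no_prices) if i != yes_index)
        loss = no_prices[yes_index]  # "No" shares on the Yes contract expire worthless
        profit = winnings - loss
        if worst_profit is None or profit < worst_profit:
            worst_profit = profit
    gross = (len(no_prices) - 1) - cost  # fee-free profit; same whichever contract hits
    cash_out = (cost + worst_profit) * (1 - withdrawal_fee)  # 5% haircut on withdrawal
    return gross, worst_profit, cash_out

# Eight contracts whose implied "Yes" probabilities sum to 112%:
print(arb_profit([0.95, 0.94, 0.92, 0.90, 0.88, 0.85, 0.80, 0.64]))
# -> roughly (0.12, 0.013, 6.55): a 12-cent gross edge per set of shares shrinks to
#    about a penny after the 10% fee, and withdrawing immediately would cost more
#    than that, which is why rolling wins rather than cycling deposits matters.
```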

Comment by zvi on Free Money at PredictIt? · 2019-09-26T22:03:00.313Z · score: 3 (2 votes) · LW · GW

I haven't seen documentation on this, but I know that it does this, and it seems to be based on the right thing to do - which is that it ties up capital equal to the maximum possible loss.

Comment by zvi on Timer Toxicities · 2019-09-25T23:04:53.820Z · score: 5 (2 votes) · LW · GW

Agreed with this. Similar to how I was willing to play the Paperclipper clicker game based on knowing it had an endpoint: it was terribly distracting for a few days, and then it was a good memory to look back upon. Whereas a real clicker that doesn't end... shudder.

This game feels like it's going to be very life-toxic for its 30 days, *but* then it's fine, and it sounds like quite an experience. So it's something worth doing if you can spend 30 days like that. I don't think I can afford to check it out, but it sounds like it could be pretty cool.

Comment by zvi on A Critique of Functional Decision Theory · 2019-09-15T17:25:01.734Z · score: 15 (8 votes) · LW · GW

I am deeply confused how someone who is taking decision theory seriously can accept Guaranteed Payoffs as correct. I'm even more confused how it can seem so obvious that anyone violating it has a fatal problem.

Under certainty, this is assuming CDT is correct, when CDT seems to have many problems other than certainty. We can use Vaniver's examples above, use a reliable insurance agent to remove any uncertainty, or take any number of classic problems without any uncertainty (or remove it), and see that such an agent loses - e.g. Parfit's Hitchhiker in the case where the driver has 100% accuracy.

Comment by zvi on Free-to-Play Games: Three Key Trade-Offs · 2019-09-11T13:11:03.701Z · score: 7 (4 votes) · LW · GW

Exactly. This seems to be a popular model of design, where mostly nothing beyond checking back in periodically will ever be the long term limiting factor if you are playing in any reasonable way. The game that inspired this post does not make this mistake, but it does a similar thing where it offers rewards to everyone over time that dwarf anything a player can otherwise accomplish in their first few weeks - you still have to play to utilize what they give you, but mostly you're stuck with their gifts until reasonably far in.

Comment by zvi on Free-to-Play Games: Three Key Trade-Offs · 2019-09-11T13:07:37.255Z · score: 5 (3 votes) · LW · GW

Eternal fails the third trade-off by using a Hearthstone economy - your cards can't be traded. It does a good job with the other two. The most obnoxious thing it has is 'win one game a day to get a pack' which is not too bad. Contrast that with e.g. Villiam's comment above.

Comment by zvi on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-09-06T11:35:52.243Z · score: 2 (1 votes) · LW · GW

In some sense that list is rather exhaustive because it includes "know anything" and "do anything" as goals that are helped, and that pretty much includes everything. But in that sense, the list is not useful. In the sense that the list is useful, it seems woefully incomplete. And it's tricky to know what level to respond on. Most centrally, this seems like an example of the utilitarian failure mode of reducing the impact of a policy to the measured, proven direct impact of that policy, as a default (while still getting a result that is close to equal to 'helps with everything, everywhere, that matters at all').

"Increased ability to think" would be one potential fourth category. If truth is not being told because it's not in one's interest to do so, there is strong incentive to destroy one's own ability to think. If one was looking to essentially accept the error of 'only point to the measurable/observable directly caused effects.'

Part of me is screaming "do we really need a post explaining why it is good when people say that which is, when they believe that would be relevant or useful, and bad when they fail to do so, or say that which is not?"

Comment by zvi on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-09-06T11:23:38.867Z · score: 2 (1 votes) · LW · GW

Curious how that experiment ended. I think this type of rule is healthy in general (e.g. rate-limiting how often one checks and responds), and I'm doing my best to follow a similar one.

Comment by zvi on Who To Root For: 2019 College Football Edition · 2019-09-06T11:20:27.227Z · score: 7 (4 votes) · LW · GW

Special note for the LW copy of this post: This is obviously not written with LW in mind at all. If you want to respond in the full spirit of the post you may consider doing so at the original post instead ( https://thezvi.wordpress.com/2019/09/06/who-to-root-for-2019-college-football-edition/ ). If you have LW-style things to say on the topic or related topics, leaving them here would make more sense. I will strive to check back here every so often, but not too often.

At some point I will write a more fundamentals-level guide on sports. This isn't it; it does not attempt to explain the basic football knowledge the sport needs you to have, and mostly assumes you know it. Encouraging that post and similar posts to exist raises the chances they do exist, without lowering (much) the chances of other posts happening.

Finally, I do have more LW-centric stuff in the pipeline. I'll get there. I just don't have the focus/time for it right now, whereas this type of thing can be done in a different mode and trades against entertainment time.

Comment by zvi on Dual Wielding · 2019-08-28T11:31:58.001Z · score: 5 (3 votes) · LW · GW

My lord, that's genius. I'm not sure it's necessary given I would want to dual wield anyway for other reasons, but if not, seems obviously correct.

Comment by zvi on Dual Wielding · 2019-08-28T11:30:13.925Z · score: 5 (5 votes) · LW · GW

Mad respect for this position. I do try to be at zero phones on me whenever it makes sense to do so, but alas my life doesn't really allow this. Also I listen to a lot of podcasts, which I don't regret at all.

Comment by zvi on Dual Wielding · 2019-08-28T11:27:17.511Z · score: 3 (2 votes) · LW · GW

If you have zero battery life issues, and the other stuff never comes up, then probably not worth it. What I noticed was that worrying about running low takes up brain space long before running out becomes an issue, but if it's taking up zero brain space? Neat.

I'm guessing you use your phone differently. It's probably healthy.

Comment by zvi on Am I going for a job interview with a woo pusher? · 2019-08-25T23:07:46.081Z · score: 15 (6 votes) · LW · GW

Right. Remember that interviews are two-sided! They are evaluating you, and you are evaluating them as well. Go in with the attitude that if they have an issue with you being concerned about whether their thing is real, then it's not a place you want to work, so you want to be open about your doubts and see if they can prove them wrong.

Comment by zvi on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-16T11:05:58.109Z · score: 9 (4 votes) · LW · GW

Clarification question: Is this default to B over A meant to apply to the population at large, or to people who are in our orbits?

It seems like your model here actually views A as more likely than B in general but thinks EA/rationality at higher levels constitutes an exception, despite your observation of many cases of A in that place.

Comment by zvi on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-16T11:02:11.863Z · score: 20 (7 votes) · LW · GW

My model of politically motivated reasoning is that it usually feels reasonable to the person at the time. So does reasoning that is not so motivated. Noticing that you feel the view is reasonable isn't even strong evidence that you weren't doing this, let alone that others aren't doing it.

This also matches my experience - the times when I have noticed I used politically motivated reasoning, it seemed reasonable to me until this was pointed out.

Comment by zvi on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-16T10:56:21.271Z · score: 4 (2 votes) · LW · GW

How do you figure out good policies, or convince others of the need for such policies, without pointing out the problem with current policies? If that is not possible, how does one point them out without being seen as accusing individuals of wrongdoing?

Comment by zvi on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-16T10:53:00.543Z · score: 41 (11 votes) · LW · GW

Possibly clearer version of what Jessica is saying:

Imagine three levels of explanation: Straightforward to you, straightforward to those without motivated cognition, straightforward even to those with strong motivated cognition.

It is reasonable to say that getting from level 1 to level 2 is often a hard problem, and that it is on you to solve that problem.

It is not reasonable, if you want clarity to win, to say that level 2 is insufficient and you must reach level 3. It certainly isn't reasonable to notice that level 2 has been reached, but level 3 has not, and thus judge the argument insufficient and a failure. It would be reasonable to say that reaching level 3 would be *better* and suggest ways of doing so.

If you don't want clarity to win, and instead you want to accomplish specific goals that require convincing specific people that have motivated cognition, you're on a different quest. Obfuscation has already won, because you are being held to higher standards and doing more work, and rewarding those who have no desire to understand for their failure to understand. Maybe you want to pay that price in context, but it's important to realize what you've lost.


Comment by zvi on Benito's Shortform Feed · 2019-08-07T20:54:21.082Z · score: 4 (2 votes) · LW · GW

Agreed it's a learned skill and it's hard. I think it's also just necessary. I notice that the best conversations I have about difficult to describe things definitely don't involve making everything explicit, and they involve a lot of 'do you understand what I'm saying?' and 'tell me if this resonates' and 'I'm thinking out loud, but maybe'.

And then I have insights that I find helpful, and I can't figure out how to write them up, because they'd need to be explicit, and they aren't, so damn. Or even, I try to have a conversation with someone else (in some recent cases, you) and share these types of things, and it feels like I have zero idea how to get into a frame where any of it will make any sense or carry any weight, even when the other person is willing to listen even by what would normally be strong standards.

Sometimes this turns into a post or sequence that ends up explaining some of the thing? I dunno.

Comment by zvi on Benito's Shortform Feed · 2019-08-07T20:48:41.165Z · score: 6 (3 votes) · LW · GW

Interesting. It seemed in the above exchanges like both Ben and you were acting as if this was a request to make your frames explicit to the other person, rather than a request to know what the frame was yourself and then share it if that seemed like a good idea.

I think for now I still endorse that making my frame fully explicit even to myself is not a reasonable ask, slash is effectively a request to simplify my frame in ways likely to be unhelpful. But it's a lot more plausible as a hypothesis.

Comment by zvi on Benito's Shortform Feed · 2019-08-07T13:22:36.488Z · score: 30 (10 votes) · LW · GW

I find "keep everything explicit" to often be a power move designed to make non-explicit facts irrelevant and non-admissible. This often goes along with burden of proof. I make a claim (real example of this dynamic happening, at an unconference under Chatham house rules: That pulling people away from their existing community has real costs that hurt those communities), and I was told that, well, that seems possible, but I can point to concrete benefits of taking them away, so you need to be concrete and explicit about what those costs are, or I don't think we should consider them.

Thus, the burden of proof was put upon me, to show (1) that people central to communities were being taken away, (2) that those people being taken away hurt those communities, (3) in particular measurable ways, (4) that then would impact direct EA causes. And then we would take the magnitude of effect I could prove using only established facts and tangible reasoning, and multiply them together, to see how big this effect was.

I cooperated with this because I felt like the current estimate of this cost for this person was zero, and I could easily raise that, and that was better than nothing, but this simply is not going to get this person to understand my actual model, ever, at all, or properly update. This person is listening on one level, and that's much better than nothing, but they're not really listening curiously, or trying to figure the world out. They are holding court to see if they are blameworthy for not being forced off of their position, and doing their duty as someone who presents as listening to arguments, of allowing someone who disagrees with them to make their case under the official rules of utilitarian evidence.

Which, again, is way better than nothing! But is not the thing we're looking for, at all.

I've felt this way in conversations with Ray recently, as well. Where he's willing and eager to listen to explicit stuff, but if I want to change his mind, then (de facto) I need to do it with explicit statements backed by admissible evidence in this court. Ray's version is better, because there are ways I can at least try to point to some forms of intuition or implicit stuff, and see if it resonates, whereas in the above example, I couldn't, but it's still super rough going.

Another problem is that if you have Things One Cannot Explicitly Say Or Consider, but which one believes are important, which I think basically everyone importantly does these days, then being told to only make explicit claims makes it impossible to make many important claims. You can't both follow 'ignore unfortunate correlations and awkward facts that exist' and 'reach proper Bayesian conclusions.' The solution of 'let the considerations be implicit' isn't great, but it can often get the job done if allowed to.

My private conversations with Ben have been doing a very good job, especially recently, of doing the dig-around-for-implicit-things and make-explicit-the-exact-thing-that-needs-it jobs.

Given Ray is writing a whole sequence, I'm inclined to wait until that goes up fully before responding in long form, but there seems to be something crucial missing from the explicitness approach.

Comment by zvi on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T20:22:43.727Z · score: 16 (10 votes) · LW · GW

Further engagement with the comments here seems likely to just be demon thread bait slash be a much bigger distraction and time sink than I would like, without accomplishing much, so I'm going to withdraw from further engagement on this post. Those who do wish to discuss things are free to contact me privately, or comment on the original blog post.

Posting this here both as a public commitment to myself to not comment further, and so no one expects further responses.

Comment by zvi on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T20:15:41.692Z · score: 9 (5 votes) · LW · GW

I agree that it is often right to speak up loudly in favor of a good thing. "Yay Billionaire Philanthropy" is very different than the double negative "Against Against".

But I also have a strong policy against speaking up about someone or something being Wrong On The Internet when I want it to get less attention rather than more. To the extent that I worry/worried my response post here violated that rule. It might well have done so and therefore been a mistake.

Also the thing where we keep on insisting on pretending there's good faith and good intentions everywhere and taking the arguments at face value and being cautious about bias arguments and not calling people liars or frauds and etc etc? I kind of figured "people who oppose it when those with surplus they don't need use it to help with things that need help" would be a good place to point out such behaviors and see what happens. Get some valuable data.


Comment by zvi on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T18:59:51.139Z · score: 7 (6 votes) · LW · GW

Backfire specifically in the sense that, like you, others also gain the belief that there is more and broader support for the anti-philanthropy position than they thought, and that the cool in-group people are taking the argument seriously. And the question of whether to be against the thing gets more publicity. Thus, resulting in more anti-philanthropic actions and momentum. Claiming there's a serious debate about whether to take group action to scapegoat a disliked group is not likely, all things being equal, to cause less trouble. I thought this was pretty explicit?

I agree there was useful content there, and certainly would have suggested making those points another way if this was going to not get posted.

Comment by zvi on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T18:55:59.070Z · score: 8 (7 votes) · LW · GW

In particular, I am not convinced Reich isn't in the class “people opposed to all private actions not under ‘democratic control’” for actually impactful values of private action. Sure, he thinks it's fine to have tiny private actions that effectively add up to democratic control, but he is concerned that we can't vote a foundation's president out of office if we don't like what the foundation is doing, he talks a lot about the democratic vs. anti-democratic frame, and so on. I don't know him, but from the quotes I have to work with here...

Comment by zvi on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T18:53:28.451Z · score: 2 (5 votes) · LW · GW

You say you've given up understanding the number of basically good people who disagree with things you think are obvious and morally obligatory.

I suspect there's a big confusion about what 'basically good' means here; I'm making a note of it for future posting, but moving past that for now:

When you examine specific cases of such disagreements happening, what do you find, and how often? (I keep writing possible things, but on reflection, avoiding anchoring you is better.)

Comment by zvi on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T18:42:50.436Z · score: 29 (12 votes) · LW · GW

Good data. All right. Fair enough. We'll see if it actually convinces either of them to dial it down, but at least you're live, and that's pretty good, although there's the risk that this still mostly just gives the whole thing more attention. If it works on the particular people you're addressing by name in the article, and they therefore dial it down, and this is the dominant effect, then that's a win. One can hope.

From looking briefly specifically at Reich in more detail, it appears to me that he's both making a nuanced case that the current tax and organizational structures we set aside for charitable works are bad designs, and using simple anti-billionaire and anti-action rhetoric - raising the alarm that someone might use their resources to (literally) do something that you couldn't (literally) vote against, or that someone might "seek to influence public policy," or your quote of “ask everyone involved to bend over in gratitude for her benevolence and genius in sprinkling around some social benefits.”

Here's the thing. You can be a conflict theorist and a basically good person. And when I look at what Reich is actually doing, that seems to be what's going on, to me - he's modeling this as a conflict between democracy and private interests, and not much caring about the things you care about because they don't impact the relevant conflicts. I don't know the politics or history of the whole thing, so I could be completely off base.

(The reddit comment seems like it's from someone who was basically in agreement before but nervous about things one is right to be nervous about, and happy to have that nervousness put into perspective. I do agree there's non-zero value in preaching to the choir on occasion. )

As for posts with concerns in the future, I'd be happy to join the group that reads such posts and offers thoughts. One thing I like about that is it feels (to me, and to those I share edits with) much easier to push back against things one disagrees with, when something isn't finalized. You're welcome to join my group too, if you'd like.

Comment by Zvi on [deleted post] 2019-07-18T18:48:37.302Z

I'm out on additional long form here in written form (as opposed to phone/Skype/Hangout) but I want to highlight this:

"It feels like you keep repeating the 101 arguments and I want to say 'I get them, I really get them, you're boring me' -- can you instead engage with why I think we can't use 'but I'm saying true things' as free license to say anything in way whatsoever? That this doesn't get you a space where people discuss truth freely."

I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven't engaged with the fact that this might be false. It's quite frustrating.

I also note that there seems to be something like "impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I'm punishing are probably bad actors, because who else would be impolite?" Which is Parable of the Lightning stuff.

(If you want more detail on my position, I endorse Jessica's Dialogue on Appeals to Consequences).

Comment by Zvi on [deleted post] 2019-07-17T21:37:22.702Z

Ah again, thanks for clarifying that.

Comment by Zvi on [deleted post] 2019-07-17T17:36:21.589Z

Ah. That's my bad for conflating my mental concept of "POINTS!" (a reference mostly to the former At Midnight show, which I've generalized) with points in the form of Karma points. I think of generic 'points' as the vague mental accounting people do with respect to others by default. When I say I shouldn't have to say 'points' I meant that I shouldn't have to say words, but I certainly also meant I shouldn't have to literally give you actual points!

And yeah, the whole metaphor is already a sign that things are not where we'd like them to be.

Comment by Zvi on [deleted post] 2019-07-17T13:18:29.553Z

Short version:

I don't think the above is a reasonable statement of my position.

The above doesn't think of true statements made here mostly in terms of truth-seeking; it thinks of words as mostly a form of social game playing aimed at causing particular world effects - as methods of attack requiring "regulation."

I don't think that the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or a place I'd want to be.

Comment by Zvi on [deleted post] 2019-07-17T13:14:20.560Z

Echo Jessica's comments (we disagree in general about politeness but her comments here seem fully accurate to me).

I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn't normally mention such things, but in context I expect you would want to know this.

Knowing that this is the logic behind your position, if this was the logic behind moderation at Less Wrong and that moderation had teeth (as in, I couldn't just effectively ignore it and/or everyone else was following such principles), I would abandon the website as a lost cause. You can't think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have 'negative consequences', then... we're done, right?

We all agree that if someone is bullying, harassing or trolling as their purpose and using 'speaking truth' as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.

The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is... well, I notice I am confused if that isn't a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words. Perhaps something more like this:

It should be presumed that saying true things in order to improve people's models, and to get people to take actions better aligned with their goals and avoid doing things based on false expectations of what results those actions would have, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue that one speaks the truth even if one's voice trembles, and without first charting out in detail all the potential consequences unless there is some obvious reason for big worry, which is a notably rare exception (please don't respond with 'what if you knew how to build an unsafe AGI or a biological weapon' or something), is also very important. That this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.

On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions with the resources they extract were good. If you disagree, let's talk about that. But also often not that. Often it's just, side effects and unintended consequences are a thing, and sometimes things don't benefit from particular additional truth.

That's life. Sometimes those consequences are bad, and I do not completely subscribe to "that which can be destroyed by the truth should be" because I think that the class of things that could be so destroyed is... rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn't say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people's time, so don't do that! Or it would give a false impression even though the statement is true, so again, don't do that. In both cases, additional words may be a good idea to prevent this.

Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can't find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.

But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it's cheap to do so, especially close-by harm. But hurting people's ability to say X in general, or this X in particular, and be heard, is big harm.

If it's not particularly efficient to prevent Z, though, and Y>Z, I shouldn't have to then prevent Z.

I shouldn't be legally liable for Z, in the sense that I can be punished for Z. I also shouldn't be punished for Z in all cases where someone else thinks Z>Y.

Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.

Or if I damn well knew or should have known Z happens and Z>>Y, and then... maybe? Sometimes? It gets weird. Full legal theories get complex.

If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure they were lying but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don't want to stand anywhere near where that's the case.

Also important here is that we were talking about an example where the 'bad effect' was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn't an obviously bad effect! It's a by-default good effect to do this. If resources were being extracted under false pretenses, it's good to prevent that, even if the resources were being spent on [good thing]. If you don't think that, again, I'm confused why this website is interesting to you, please explain.

I also can't escape the general feeling that there's a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we've established what we all are, and 'now we're talking price.' Except, no.

The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.

If I need to do another long-form exchange like this, I think we'd need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.

Comment by Zvi on [deleted post] 2019-07-17T12:20:13.418Z

Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one's accurate speech - in an inevitably Asymmetric Justice / CIE fashion - seems like it is effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences of true speech that one may not like and might want to avoid does not mean it is unreasonable to point out that the subclass of such consequences that seems to be in play in these examples is a subclass much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.

And echo Jessica that it's not reasonable to say that all of this is voluntary within the frame you're offering, if the response to not doing it is to not be welcome, or to be socially punished. Regardless of what standards one chooses.

Comment by Zvi on [deleted post] 2019-07-17T12:13:32.275Z

I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and this points in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.

At some point I hope to write a virtue ethics sequence, but it's super hard to describe it in written form, and every time I think about it I assume that even if I do get it across, people who speak better philosopher will technically pick anything I say to pieces and all that, and I get an ugh field around the whole operation, and assume it won't really work at getting people to reconsider. Alas.

Comment by zvi on Integrity and accountability are core parts of rationality · 2019-07-17T11:25:02.248Z · score: 21 (4 votes) · LW · GW

Agree strongly with this decomposition of integrity. They're definitely different (although correlated) things.

My biggest disagreement with this model is that the first form (structurally integrated models) seems to me to be something broader? Something like, you have structurally integrated models of how things work and what matters to you, and take the actions suggested by the models to achieve what matters to you based on how things work?

Need to think through this in more detail. One can have what one might call integrity of thought without what one might call integrity of action based on that thought - you have the models, but others/you can't count on you to act on them. And you can have integrity of action without integrity of thought, in the sense that you can be counted on to perform certain actions in certain circumstances, in which case you'll do them whether or not it makes any sense, but you can at least be counted on. Or you can have both.

And I agree you have to split integrity of action into keeping promises when you make them slash following one's own code, and keeping to the rules of the system slash following others' codes, especially codes that determine what is blameworthy. To me, that third special case isn't integrity. It's often a good thing, but it's a different thing - it counts as integrity if and only if one is following those rules because of one's own code saying one should follow the outside code. We can debate under what circumstances that is or isn't the right code, and should.

So I think for now I have it as Integrity-1 (Integrity of Thought) and Integrity-2 (Integrity of Action), and a kind of False-Integrity-3 (Integrity of Blamelessness) that is worth having a name for, and tracking who has and doesn't have it in what circumstances to what extent, like the other two, but isn't obviously something it's better to increase than decrease by default. Whereas Integrity-1 is by default to be increased, as is Integrity-2, and if you disagree with that, this implies to me there's a conflict causing you to want others to be less effective, or you're otherwise trying to do extraction or be zero sum.

Comment by Zvi on [deleted post] 2019-07-15T22:04:52.267Z

(5) Splitting for threading.

Wow, this got longer than I expected. Hopefully it is an opportunity to grok the perspective I'm coming from a lot better, which is why I'm trying a bunch of different approaches. I do hope this helps, and helps you appreciate why a lot of the stuff going on lately has been so worrying to some of us.

Anyway, I still have to give a response to Ray's comment, so here goes.

Agree with his (1) that it comes across as politics-in-a-bad-way, but disagree that this is due to the simulacrum level, except insofar as the simulacrum level causes us to demand sickeningly political statements. I think it's because that answer is sickeningly political! It's saying "First, let me pay tribute to those who assume the title of Doer of Good or Participant in Nonprofit, whose status we can never lower and must only raise. Truly they are the worthy ones among us who always hold the best of intentions. Now, my lords, may I petition the King to notice that your Doers of Good seem to be slaughtering people out there in the name of the faith and kingdom, and perhaps ask politely, in light of the following evidence that they're slaughtering all these people, that you consider having them do less of that?"

I mean, that's not fair. But it's also not all that unfair, either.

(2) we strongly agree.

Pacifists who say "we should disband the military" may or may not be making the mistake of not appreciating the military - they may appreciate it but also think it has big downsides or is no longer needed. And while I currently think the answer is "a lot," I don't know to what extent the military should be appreciated.

As for appreciation of people's efforts, I appreciate the core fact of effort of any kind, towards anything at all, as something we don't have enough of, and which is generally good. But if that effort is an effort towards things I dislike, especially things that are in bad faith, then it would be weird to say I appreciated that particular effort. There are times I very much don't appreciate it. And I think that some major causes and central actions in our sphere are in fact doing harm, and those engaged in them are engaging in them in bad faith and have largely abandoned the founding principles of the sphere. I won't name them in print, but might in conversation.

So I don't think there's a missing mood, exactly. But even if there was, and I did appreciate that, there is something about just about everyone I appreciate, and things about them I don't, and I don't see why I'm reiterating things 'everybody knows' are praiseworthy, as praiseworthy, as a sacred incantation before I am permitted to petition the King with information.

That doesn't mean that I wouldn't reward people who tried to do something real, with good intentions, more often than I would be inclined not to. Original proposal #1 is sickeningly political. Original proposal #2 is also sickeningly political. Original proposal #3 will almost always be better than both of them. That does not preclude it being wise to often do something between #1 and #3 (#1 gives maybe 60% of its space to genuflections, #2 gives maybe 70% of its space to insults, #3 gives 0% to either, and I think my default would be more like 10% to genuflections if I thought intentions were mostly good?).

But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying "I see you trying to do a thing! I think it's harmful and you should stop." and you saying "oops!" should net you points without me having to say "POINTS!"

Comment by Zvi on [deleted post] 2019-07-15T21:20:28.214Z

(4) Splitting for threading.

Pure answer / summary.

The nature of this "should" is that status evaluations are not why I am sharing the information. Nor are they my responsibility, nor would it be wise to make them my responsibility as the price of sharing information. And given I am sharing true and relevant information, any updates are likely to be accurate.

The meta-ethical framework I'm using is almost always a combination of Timeless Decision Theory and virtue ethics. Since you asked.

I believe it is virtuous, and good decision theory, to share true and relevant information, to try to create clarity. I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens. I do believe it is not virtuous or good decision theory to, while doing so, structure one's information in order to score political points, so don't do that. But it's also not virtuous or good decision theory to carefully always avoid changing the points noted on the scoreboard, regardless of events.

The power of this "should" is that I'm denying the legitimacy of coercing me into doing something in order to maintain someone else's desire for social frame control. If you want to force me to do that in order to tell you true things in a neutral way, the burden is on you to tell me why "should" attaches here, and why doing so would lead to good outcomes, be virtuous and/or be good decision theory.

The reason I want to point out that people are doing something I think is bad? Varies. Usually it is so we can know this and properly react to this information. Perhaps we can convince those people to stop, or deal with the consequences of those actions, or what not. Or the people doing it can know this and perhaps consider whether they should stop. Or we want to update our norms.

But the questions here in that last paragraph seem to imply that I should shape my information sharing primarily based on what I expect the social reaction to my statements to be, rather than share my information in order to improve people's maps and create clarity. That's rhetoric, not discourse, no?

Comment by Zvi on [deleted post] 2019-07-15T21:05:41.663Z

(3) (Splitting for threading)

Sharing true information, or doing anything at all, will cause people to update.

Some of those updates will cause some probabilities to become less accurate.

Is it therefore my responsibility to prevent this, before I am permitted to share true information? Before I do anything? Am I responsible in an Asymmetric Justice fashion for every probability estimate change and status evaluation delta in people's heads? Have I become entwined with your status, via the Copenhagen Interpretation, and am I now responsible for it? What does anything even have to do with anything?

Should I have to worry about how my information telling you about Bayesian probability impacts the price of tea in China?

Why should the burden be on me to explain "should" here, anyway? I'm not claiming a duty; I'm claiming a negative, a lack of duty - I'm saying I do not, by sharing information, thereby take on the burden of preventing all negative consequences of that information to individuals in the form of others making Bayesian updates, to the extent of having to prevent them.

Whether or not I appreciate their efforts, or wish them higher or lower status! Even if I do wish them higher status, it should not be my priority in the conversation to worry about that.

Thus, if you think that I should be responsible, then I would turn the question around, and ask you what normative/meta-ethical framework you are invoking. Because the burden here seems not to be on me, unless you think that the primary thing we do when we communicate is we raise and lower the status of people. In which case, I have better ways of doing that than being here at LW and so do you!

Comment by Zvi on [deleted post] 2019-07-15T20:59:46.723Z

(2) (Splitting these up to allow threading)

Sharing true information will cause people to update.

If they update in a way that causes your status to become lower, why should we presume that this update is a mistake?

If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, would not a proper Bayesian expect me to do that, and thus use my praise only as evidence of the degree to which I think others should update negatively on the basis of the information I share later?

If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, but only some of the time, what is going on there? Am I being forced to make a public declaration of whether I wish you to be raised or lowered in status? Am I being forced to acknowledge that you belong to a protected class of people whose status one is not allowed to lower in public? Am I worried about being labeled as biased against groups you belong to if I am seen as sufficiently negative towards you? (E.g. "I appreciate all the effort you have put in towards various causes, I think that otherwise you're a great person and I'm a big fan of [people of the same reference group] and support all their issues and causes, but I feel you should know that I really wish you hadn't shot me in the face. Twice.")


Comment by Zvi on [deleted post] 2019-07-15T20:51:46.858Z

(1) Glad you asked! Appreciate the effort to create clarity.

Let's start off with the recursive explanation, as it were, and then I'll give the straightforward ones.

I say that because I actually do appreciate the effort, and I actually do want to avoid lowering your status for asking, or making you feel punished for asking. It's a great question to be asking if you don't understand, or are unsure if you understand or not, and you want to know. If you're confused about this, and especially if others are as well, it's important to clear it up.

Thus, I choose to expend effort to line these things up the way I want them lined up, in a way that I believe reflects reality and creates good incentives. Because the information that you asked should raise your status, not lower your status. It should cause people, including you, to do a Bayesian update that you are praiseworthy, not blameworthy. Whereas I worry, in context, that you or others would do the opposite if I answered in a way that implied I thought it was a stupid question, or was exasperated by having to answer, and so on.

On the other hand, if I believed that you damn well knew the answer, even unconsciously, and were asking in order to place upon me the burden of proof via creation of a robust ethical framework justifying not caring primarily about people's social reactions rather than creation of clarity, lest I cede that I and others have the moral burden of maintaining the status relations others desire as their primary motivation when sharing information. Or if I thought the point was to point out that I was using "should", which many claim is a word that indicates entitlement or sloppy thinking and an attempt to bully, and thus one should ignore the information content in favor of this error. Or if in general I did not think this question was asked in good faith?

Then I might or might not want to answer the question and give the information, and I might or might not think it worthwhile to point out the mechanisms I was observing behind the question, but I certainly would not want to prevent others from observing your question and its context, and performing a proper Bayesian update on you and what your status and level of blame/praise should be, according to their observations.

(And no, really, I am glad you asked and appreciate the effort, in this case. But: I desire to be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to be glad you asked, and I desire to not be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to not be glad you asked. Let me not become attached to beliefs I may not want. And I desire to tell you true things. Etc. Amen.)


Comment by zvi on Everybody Knows · 2019-07-04T20:56:52.778Z · score: 5 (3 votes) · LW · GW

I agree that these are (sometimes) legitimate things to do, and that people often use the 'everybody knows' framing to do them implicitly. But I think that using this framing, rather than saying the thing more explicitly, is useful for those trying to do other things, and counter-productive for those trying to do the exact things you are describing, unless they also want to do other things.

Comment by zvi on Everybody Knows · 2019-07-04T20:48:59.767Z · score: 3 (2 votes) · LW · GW

For #1, the reason we do that is exactly because it is likely that not everyone in the room knows (even though they really should if they are in the room) and the people who don't know are going to be lost if you don't tell them. And certainly not everyone knows there are 20 amino acids (e.g. I didn't know that and will doubtless not remember it tomorrow).

I find your example in #2 to be on point: I am highly confident that far from everyone knows what happens if trash bags are left outside the dumpster. I actually had another mode in the post at one point, to describe the form "I thought that everyone knew X, but it turned out I was wrong," because in my experience that's how this actually comes up.


Comment by zvi on Raemon's Scratchpad · 2019-07-03T12:19:59.920Z · score: 10 (8 votes) · LW · GW

Also important to note that 'learn Calculus this week' is a thing a person can do fairly easily without being some sort of math savant.

(Presumably not the full 'know how to do all the particular integrals and be able to ace the final' perhaps, but definitely 'grok what the hell this is about and know how to do most problems that one encounters in the wild, and where to look if you find one that's harder than that.' To ace the final you'll need two weeks.)

Comment by zvi on Causal Reality vs Social Reality · 2019-06-26T12:19:58.292Z · score: 17 (7 votes) · LW · GW

The cases Scott talks about are individuals clamoring for symbolic action in social reality in the aid of individuals that they want to signal they care about. It's quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don't ask 'what would cause people I love to die less often' at all, which my model says is because that question doesn't even parse to them.

Comment by zvi on 2013 Survey Results · 2019-06-23T18:44:56.762Z · score: 2 (1 votes) · LW · GW

Noting that this was suggested to me by the algorithm, and presumably shouldn't be eligible for that.

Comment by zvi on Recommendation Features on LessWrong · 2019-06-19T15:48:32.560Z · score: 4 (2 votes) · LW · GW

A 'remind me what recommendations you've given me recently' list being available to be clicked on might be nice?