Posts

Some suggestions (desperate please, even) 2017-11-09T23:14:14.427Z

Comments

Comment by Jiro on Can you control the past? · 2021-09-02T16:19:50.529Z · LW · GW

The obvious answer to this is that under these circumstances, you don't control your hand or anything else either.

Comment by Jiro on Can you control the past? · 2021-08-30T21:51:04.253Z · LW · GW

This has been voted down too much.

I think it's a pretty good objection. What does it even mean to say that you can "control" your duplicate if we are postulating that what you and your duplicate do is a deterministic function of your current states? What does it even mean to say that you can control or decide anything under these circumstances?

Comment by Jiro on Improving capital gains taxes · 2021-07-15T21:48:55.227Z · LW · GW

The simple answer is “most Americans think rich people should pay higher capital gains taxes.”

And how's that a reason? Most Americans couldn't articulate any of the reasons why capital gains taxes might be low to begin with, let alone a coherent set of principles that would lead to raising them but not to other things they wouldn't support. If they can't comprehend something, we shouldn't pay attention to their feelings about it.

Comment by Jiro on [Link] Musk's non-missing mood · 2021-07-14T15:58:46.407Z · LW · GW

The idea of a missing mood, from following the link to Bryan Caplan's article, seems to amount to two ideas:

  1. "I think it has more costs than other people think, so even if someone thinks the benefits outweigh the costs, if they're not taking the costs seriously enough, they have a missing mood."
  2. "I think it has more benefits than other people think, so even if someone thinks the costs outweigh the benefits, if they're not taking the benefits seriously enough, they have a missing mood."

These are, of course, two sides of the same coin and have the same problem: You're assuming that the first half of your position (costs in case 1, benefits in case 2) is not only correct, but so obviously correct that nobody can reasonably disagree with it; if someone acts as if they don't believe it, there must be some other explanation. This is better than assuming your entire position is correct, but it's still poor epistemic hygiene. For instance, both the military hawks example (case 1) and the immigration example (case 2) fail if your opponent doesn't value non-Americans very much, so there are lower costs or benefits, respectively.

Beware of starting with disagreement and concluding insincerity.

Comment by Jiro on You are allowed to edit Wikipedia · 2021-07-08T19:02:12.962Z · LW · GW

"Edits may be rejected for reasons that are not reasonably predictable by the editor and unrelated to the quality of the edit" is, in fact, a model of how edits are treated. And it's useful in deciding whether editing is likely to work.

Comment by Jiro on You are allowed to edit Wikipedia · 2021-07-08T17:38:39.866Z · LW · GW

That doesn’t mean they weren’t collateral damage.

"Your edit can be rejected because of collateral damage from some conflict you are not a party to" is an instance of "benign edits are rejected for arcane reasons", not a refutation of it.

Comment by Jiro on The Unexpected Hanging Paradox · 2021-06-28T20:48:39.891Z · LW · GW

Consider an unexpected hanging without the extra days: The judge tells you "I am going to hang you on Monday, but the day of the hanging is not something you will be able to predict."

The prisoner follows the same reasoning as in the unexpected hanging: the surprise hanging can't be Monday, because he would then know when it is, and it's not a surprise. The judge then hangs him on Monday, and of course it's a surprise. In other words, the unexpected hanging paradox doesn't require the extra days at all.

If by "surprise" you mean "can't logically be proven", then the judge's statement is equivalent to "X is true, but you cannot prove X". From that statement, everyone except you can prove X, and you cannot.
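
One way to sketch this formally (my own formalization, not anything from the original post): write the judge's statement as

$$S \;\equiv\; X \,\wedge\, \neg\mathrm{Prov}_{\text{you}}(X),$$

where $X$ = "you will be hanged on Monday". Anyone else who accepts $S$ gets $X$ immediately by conjunction elimination. But if you accept $S$ and derive $X$ from it, you make $\mathrm{Prov}_{\text{you}}(X)$ true, contradicting the second conjunct; so you cannot consistently accept $S$ and prove $X$. This is the same shape as Gödel's "true but unprovable" sentences.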

Comment by Jiro on The Point of Trade · 2021-06-24T22:09:20.805Z · LW · GW

All right, suppose that all of us liked exactly the same objects exactly the same amount. This obliterates the poorly-written-textbook’s reason for “trade”.

This is nonsensical, because how much an individual person likes something is not a constant. If everyone liked oranges just as much as apples, trade would still be possible, because the 1st orange is preferable to the 99th apple and the 1st apple is preferable to the 99th orange.
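
A quick numerical sketch of the point (the logarithmic utility function is made up for illustration; all that matters is diminishing marginal utility):

```python
import math

# Both agents have the *same* preferences: diminishing marginal utility
# in each good. (Hypothetical utility function; only its shape matters.)
def utility(apples, oranges):
    return math.log(1 + apples) + math.log(1 + oranges)

# One agent starts with all the apples, the other with all the oranges.
print(utility(100, 0), utility(0, 100))  # ~4.62 each before trading
# After trading 50 apples for 50 oranges:
print(utility(50, 50), utility(50, 50))  # ~7.86 each -- both better off
```

Identical preferences plus differing endowments are enough for trade; nothing about the argument requires people to like different things.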

Comment by Jiro on Reply to Nate Soares on Dolphins · 2021-06-21T17:20:55.757Z · LW · GW

Human intelligence counts as "gained characteristics not shared by the others".

I think you're thinking that it doesn't count as a lot of divergence, but "a list of divergences with few items" doesn't mean "not a lot of divergence". Human intelligence has an effect on the environment and capabilities of humans that is equal to or greater than the effect of the differences between birds and reptiles.

Comment by Jiro on Reply to Nate Soares on Dolphins · 2021-06-17T01:24:10.619Z · LW · GW

Your comment seems to me to assume that Scott thinks there would be nothing very wrong with a definition of “fish” that included whales only because that’s something he has to think in order to remain consistent while classifying transgender people the way they feel they should be classified.

Believing things for multiple reasons is a thing (despite the LW idea of a true rejection, as if people only have one reason for everything). Moreover, people aren't perfectly rational machines, and motivated reasoning is a thing. I certainly think that needing to believe it for the sake of transgendered people is a large component of why he believes it, and that he probably wouldn't otherwise believe it, even if it's not the only reason why.

Comment by Jiro on Am I anti-social if I get vaccinated now? · 2021-06-14T03:43:41.978Z · LW · GW

Vaccines that are already delivered to your country are not going to get shipped elsewhere.

This seems to fail under acausal reasoning.

Comment by Jiro on Reply to Nate Soares on Dolphins · 2021-06-14T03:25:07.123Z · LW · GW

What changed? Surely if “cognitively useful categories should carve reality at the joints, and dolphins being fish doesn’t do that” was good philosophy in 2008, it should still be good philosophy in 2021?

Scott Alexander's essay uses the example of fish versus whales to argue that transgender people should be classified by whatever sex they claim to be rather than classified by biological sex. This essay came out after 2008 and before 2021. And Scott Alexander is about as influential here as Yudkowsky.

In other words, what changed is that asserting that it makes sense to classify dolphins as fish is now something you need to assert for political purposes.

Edit: I missed the reference to gender issues. But I think it may explain why Yudkowsky and rationalists in general have changed their mind, regardless of why anyone in particular here has.

Comment by Jiro on Assessing Interest in Group Trip to Secure Panamanian Residency [Imminent Rules Change] · 2021-06-07T22:48:32.455Z · LW · GW

If you propose a course of action which a normal person would find profoundly weird, I suggest that Chesterton's fence applies, and you should figure out why a normal person would object to it. Then articulate why it is usually beneficial to avoid such things, before you decide that this one time, the normal person is wrong and you really should go after the thing that he avoids.

And the answer is not going to be "because he's a normal person and so he keeps missing twenty dollar bills in the street".

Comment by Jiro on Often, enemies really are innately evil. · 2021-06-07T22:40:29.793Z · LW · GW

By this reasoning almost nothing normally described as a terminal value is a terminal value. "He robs banks because he wants money because getting money makes him feel good".

Comment by Jiro on Often, enemies really are innately evil. · 2021-06-07T22:37:29.869Z · LW · GW

I think the context is that many people say that there is no such thing as evil and advocate for some actions and against other actions based on that. Just pointing out that they are recommending harmful things is valuable.

Ruling out certain classes of responses is useful even if there is still more than one possibility remaining and it's still hard to pick the right one.

If someone told you not to use homeopathy to cure disease, would you respond that they haven't explained how you should cure disease?

Comment by Jiro on Which animals can suffer? · 2021-06-02T02:00:56.975Z · LW · GW

Presumably people think that at some point an AI is able to suffer. So why wouldn't a neural network be able to suffer?

Comment by Jiro on Covid 5/20: The Great Unmasking · 2021-05-21T22:46:54.420Z · LW · GW

Admitting one is wrong and correcting errors needs to be rewarded and encouraged rather than punished and piled onto.

What is happening now is the opposite of admitting one is wrong. It's not as if the CDC said "sorry, you could have gone without a mask for the last month, we'll try not to make that mistake again".

Comment by Jiro on The Reebok effect · 2021-05-21T22:40:49.029Z · LW · GW

This implies that advertisers would be better off occasionally violating such assumptions (such as saying "of the top five" when they were in the top four) often enough to weaken the inferences viewers can make, by enough to benefit the advertisers.

Of course, the coordination problem in doing this is hard, but there are several ways around it (and not all of them just involve advertisers directly colluding with each other).
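
A toy model of how the inference gets diluted (the uniform prior and all the numbers here are made-up illustrations):

```python
# Viewers hear "one of the top five" and infer the advertiser's true rank.

def posterior_given_top5_claim(p_understate):
    # Under the "tightest true claim" convention, only the rank-5 advertiser
    # says "top five". If rank-4 advertisers also say it with probability
    # p_understate (hypothetical parameter), the claim pins down less.
    likelihood = {5: 1.0, 4: p_understate}
    total = sum(likelihood.values())
    return {rank: round(p / total, 2) for rank, p in likelihood.items()}

print(posterior_given_top5_claim(0.0))  # {5: 1.0, 4: 0.0}: "top five" = exactly fifth
print(posterior_given_top5_claim(0.5))  # {5: 0.67, 4: 0.33}: inference weakened
```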

Comment by Jiro on There is no No Evidence · 2021-05-20T22:09:27.387Z · LW · GW

If we treat the “is” in Absence of Evidence is Evidence of Absence as an “implies” (which it seems to me to be) and then apply modus tollens to it, we get “if you don’t have evidence of absence, you don’t have absence of evidence” and it is precisely this bullshit that Zvi is calling. If you have evidence of absence, say so.

No. The "is" in that statement doesn't mean "implies"; the "is evidence of" does.

Modus tollens then gives you "if you don't have absence, you don't have absence of evidence", which is not subject to the semantic tricks in the post.
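
Written out with "is evidence of" as the arrow: from $A \to B$, where $A$ = absence of evidence and $B$ = absence, modus tollens gives $\neg B \to \neg A$ ("no absence, therefore no absence of evidence"), not the quoted version.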

Comment by Jiro on In Defence of Spock · 2021-04-27T19:11:02.583Z · LW · GW

Not all predictions are equal. So I don't think that the fact that Spock makes a lot of offscreen, untelevised predictions means much; the predictions we see onscreen are not only the most dramatic, they are also the cases where predictions are the most important. As such, we should weight them much higher in analyzing Spock's competence than his offscreen predictions.

I could predict 365 sunrises and one asteroid impact for the next year, and if I were wrong about the asteroid impact, that would be a good reason to call me a bad predictor, even if the other 99.7% of my predictions were correct.
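
As a sketch of the scoring point (the asteroid's weight of 100,000 is an arbitrary stand-in for "very high stakes"):

```python
# (event, was_correct, importance weight -- hypothetical stakes)
predictions = [("sunrise", True, 1.0)] * 365 + [("asteroid", False, 100_000.0)]

accuracy = sum(ok for _, ok, _ in predictions) / len(predictions)
weighted = (sum(w for _, ok, w in predictions if ok)
            / sum(w for _, _, w in predictions))

print(f"{accuracy:.1%}")  # 99.7% -- looks like a superb predictor
print(f"{weighted:.1%}")  # 0.4% -- nearly all the stakes were on the miss
```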

And using a novel as a source for canon is dubious. You can do all the Sherlockian "pretend the novel really happened" you want, but onscreen episodes weren't written taking that novel into account.

Comment by Jiro on Covid 4/1: Vaccine Passports · 2021-04-06T16:12:18.083Z · LW · GW

Not allowing unused doses to go to ineligible people is legitimate. It's one of those cases where something is a Pareto improvement in the immediate situation, but creates bad incentives.

If you allow unused doses to go to other people, you'll end up with corruption where doses are deliberately held back from eligible people because someone paid under the table to have doses set aside as "leftover doses".

It's similar to why some organizations have a policy of making sure their discarded equipment goes into a landfill rather than letting employees take it: that way it's not possible to bribe someone to throw out good equipment for you to take. That sucks if you could have used the discarded equipment, but it's good if you're someone affected by fraud, since it eliminates the incentive for fraud.

Comment by Jiro on Covid 4/1: Vaccine Passports · 2021-04-06T15:52:42.657Z · LW · GW

You left out one: the possibility that people will be deliberately denied COVID vaccination passports for reasons other than not having a COVID vaccination. It's like when Facebook denied "verified" status to people who were verified, but who said politically inconvenient things.

This is going to be a risk when the passport is "private" but becomes ubiquitous, especially when encouraged by the government.

Comment by Jiro on Open, Free, Safe: Choose Two · 2021-03-22T06:13:22.782Z · LW · GW

The problem (or at least a problem) with seeing moderation this way is that moderators who are aware of these concepts at all tend to treat criticism of arbitrary moderation as criticism for not being open--in other words, once you accept that openness is a bad idea, pretty much anything the moderators do becomes justified on that basis.

Comment by Jiro on Defending the non-central fallacy · 2021-03-15T09:06:33.478Z · LW · GW

The reader can probably fill in the blanks: just as we wouldn’t accept the justification “but that guy was evil” as a good one in the case of ordinary murder, maybe we should also be skeptical when the state makes the same excuse for capital punishment.

I wouldn't accept "that guy was evil" in the case of ordinary murder, because "is unjustified" is baked into the definition of "murder", not because that would be a non-central example. I might accept it in the case of killing rather than murder.

Furthermore, I'd point to the fact that many arguments against capital punishment prove too much. Would you accept "that guy was evil" in the case of ordinary kidnapping? Does that mean that we should be skeptical when the state makes this argument for prison?

Comment by Jiro on The Power & Tragedy of Names · 2020-11-09T17:56:41.345Z · LW · GW

Tell this to the people who named GIMP.

Comment by Jiro on Expansive translations: considerations and possibilities · 2020-10-25T16:15:27.334Z · LW · GW

The fact that people have different understanding of the same texts and have to "translate" them through an inferential distance is a necessary evil. Just because something is a necessary evil doesn't mean it's good, and certainly doesn't mean that we should be fine with deliberately creating more of it.

Comment by Jiro on A full explanation to Newcomb's paradox. · 2020-10-12T20:31:56.964Z · LW · GW

Under some circumstances, it seems that option 4 would result in the predictor trying to solve the Halting Problem since figuring out your best option may in effect involve simulating the predictor.

(Of course, you wouldn't be simulating the entire predictor, but you may be simulating enough of the predictor's chain of reasoning that the predictor essentially has to predict itself in order to predict you.)
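
A toy illustration of the regress (not a workable predictor, just the shape of the problem):

```python
def predictor(the_agent):
    # The predictor "predicts" by running the agent it was given.
    return the_agent(predictor)

def agent(the_predictor):
    # The agent "decides" by running the predictor it was given.
    return the_predictor(agent)

# Uncommenting this recurses until RecursionError; neither call can return:
# predictor(agent)
```

A predictor guaranteed to return an answer for every agent, including agents that consult the predictor, would be deciding questions equivalent to the Halting Problem.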

Comment by Jiro on Inaccessible finely tuned RNG in humans? · 2020-10-10T21:17:55.208Z · LW · GW

Generate several "random" numbers in your head, trying to generate them randomly but falling prey to the usual biases of doing so. Then add them together and take the sum mod X to produce a result that is more like a real random number.
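
A quick simulation of why this works (the bias toward 3 and 7 is an arbitrary stand-in for human non-randomness):

```python
from collections import Counter
import random

X = 10

def sloppy_number():
    # Stand-in for a biased "head-generated" digit (hypothetical bias).
    return random.choices(range(X), weights=[1, 1, 1, 5, 1, 1, 1, 5, 1, 1])[0]

single = Counter(sloppy_number() for _ in range(100_000))
combined = Counter(sum(sloppy_number() for _ in range(4)) % X
                   for _ in range(100_000))

print(single)    # strongly peaked at 3 and 7
print(combined)  # much closer to uniform over 0..9
```

Each extra summand convolves the bias with itself, pushing the distribution mod X toward uniform.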

Comment by Jiro on Some elements of industrial literacy · 2020-10-10T21:13:47.490Z · LW · GW

Remember the original post about epistemic learned helplessness: making people literate in some things may be bad. Not understanding an area prevents people from doing good in it, but it also prevents them from falling prey to scams and fallacies in that same area.

You might want the average person to fail to get excited about a 6% increase in battery energy density, because if too many people get excited about such things, the politicians, media machines, and advertisers will do their best to exploit this little bit of knowledge to extract money from the general public while producing as few actual improvements to energy density as possible. I'm sure you could name plenty of issues where the public understands that they are important without having the breadth of knowledge not to fall for "we have to do something, it's important!"

Comment by Jiro on Weird Things About Money · 2020-10-05T03:03:41.502Z · LW · GW

Small bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate.

Comment by Jiro on Weird Things About Money · 2020-10-05T03:02:55.027Z · LW · GW

Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. And at some point, even the hassle of just trying to figure out whether the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold, because you gain from not having overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.

The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.

Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can't exploit such fixed costs to money pump someone.
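
A sketch with made-up numbers (the $0.05 "overhead" stands in for hassle, decision costs, and the occasional ATM trip):

```python
def bet_value(stake, p_win, payout, overhead):
    # Expected winnings minus a fixed cost that doesn't scale with the stake.
    expected_winnings = p_win * payout - (1 - p_win) * stake
    return expected_winnings - overhead

# 60% chance to win $0.10 against a $0.10 stake, with $0.05 of overhead:
print(bet_value(0.10, 0.6, 0.10, 0.05))  # -0.03: decline, despite good odds
# The same odds at a $100 stake, same overhead:
print(bet_value(100, 0.6, 100, 0.05))    # +19.95: well worth taking
```

This is why a blanket "no bets under a dollar" policy can be rational without implying anything strange about the utility of money itself.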

Comment by Jiro on Words and Implications · 2020-10-05T02:58:24.498Z · LW · GW

“I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”

Statements can have inconsistent truth values. The logical analysis done by the jester is wrong because the jester is assuming that the statements are either true or false. This assumption is unwarranted, and given the actual box contents, the statements aren't true or false.

In other words, it isn't that the jester correctly analyzed the logic of the inscriptions but came to grief because the result has no connection to the real world; the jester incorrectly analyzed the logic of the inscriptions. If he had done so correctly, he would have figured out that the contents of the boxes could be anything.

Comment by Jiro on A few misconceptions surrounding Roko's basilisk · 2020-09-11T23:03:26.692Z · LW · GW

not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right

This is misleading. Eliezer didn't believe that the exact argument made by Roko was correct, but he believed that some version of the Basilisk could be, and he wanted to suppress it to avoid giving superintelligences tools to hurt us.

Comment by Jiro on Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia · 2020-03-11T18:36:30.527Z · LW · GW

This is similar to the simulation hypothesis, and in fact is sometimes used as a response to the simulation hypothesis.

Comment by Jiro on Potential Ways to Fight Mazes · 2020-01-29T21:33:12.204Z · LW · GW

Consider this recent column by the excellent Matt Levine. It vividly describes the conflict between engineering, which requires people communicate information and keep accurate records, and the legal system and public relations, which tell you that keeping accurate records is insane.

It certainly sounds like a contradiction, but the spin that article puts on it is unconvincing:

In other words, if you are trying to build a good engineering culture, you might want to encourage your employees to send hyperbolic, overstated, highly quotable emails to a broad internal distribution list when they object to a decision. On the other hand your lawyers, and your public relations people, will obviously and correctly tell you that that is insane: If anything goes wrong, those emails will come out, and the headlines will say “Designed by Clowns,”

This argument is essentially "truth is bad".

We try to pretend that making problems sound worse than they really are, in order to compel action, is not lying. But it really is. This complaint sounds like "we want to get the benefits of lying, but not the harm". If you're overstating a problem in order to get group A to act in ways that they normally wouldn't, don't be surprised if group B also reacts in ways that they normally wouldn't, even if A's reaction helps you and B's reaction hurts you. The core of the problem is not that B gets to hear it; the core of the problem is that you're being deceitful, even if you're exaggerating something that does contain some truth.

(Also, this will result in a ratchet where every decision that engineers object to is always the worst, most disastrous decision ever, because if your goal is to get someone to listen, you should always describe the current problem as the worst problem ever.)

Comment by Jiro on Reality-Revealing and Reality-Masking Puzzles · 2020-01-22T22:20:36.252Z · LW · GW

The epistemic immune system serves a purpose--some things are very difficult to reason out in full, and some pitfalls are easy to fall into unknowingly. If you were a perfect reasoner, of course, this wouldn't matter, but the epistemic immune system is necessary because you're not a perfect reasoner. You're running on corrupted hardware, and you've just proposed dumping the error-checking that protects you from flaws in the corrupted hardware.

And saying "we should disable them if they get in the way of accurate beliefs" is, to mix metaphors, like saying "we should dispense with the idea of needing a warrant for the police to search your house, as long as you're guilty". Everyone thinks their own beliefs are accurate; saying "we should get rid of our epistemic immune system if it gets in the way of accurate beliefs" is equivalent to getting rid of it all the time.

Comment by Jiro on What's an important (new) idea you haven't had time to argue for yet? · 2019-12-10T22:19:07.823Z · LW · GW

Under what circumstances do you get people telling you they are fine? That doesn't happen to me very much--"I'm fine" as part of normal conversation does not literally mean that they are fine.

Comment by Jiro on Experiments and Consent · 2019-11-11T20:23:30.121Z · LW · GW

“if it’s ok to do A or B then it’s fine to run an experiment on A vs B”

Allowing A and B, and allowing an experiment on A vs. B, may create different incentives, and these incentives may be different enough to change whether we should allow the experiment versus allowing A and B.

Comment by Jiro on What do the baby eaters tell us about ethics? · 2019-10-10T21:58:40.745Z · LW · GW

Luckily for you, there definitely exists a rule that tells you the best possible move to play for every given configuration of pieces—the rule that tells you the move that maximizes the probability of victory (or, since draws exist and may be acceptable, the move that minimizes the probability of defeat).

If your opponent is a perfect player, each move has a 0% or 100% probability of victory. You can only maximize it in a trivial sense.

If your opponent is an imperfect player, your best move is the one that maximizes the probability of victory given your opponent's pattern of imperfection. Depending on what this pattern is, this may also mean that each move has a 0% or 100% probability of victory.

Comment by Jiro on Bets and updating · 2019-10-10T21:43:31.664Z · LW · GW

Your process of deciding what to do may at some point include simulating Omega and Omicron. If so, this means that when Omega and Omicron are simulating you, they are now trying to solve the Halting Problem. I am skeptical that Omega or Omicron can solve the Halting Problem.

Comment by Jiro on Asymmetric Justice · 2019-09-26T21:34:34.321Z · LW · GW

I would suggest that this is ameliorated by the following:

  1. Nobody actually believes that you are to blame for every bad consequence of things you do, no matter how indirect. A conscientious person is expected to research and know some of the indirect consequences of his actions, but this expectation doesn't go out to infinity.

  2. While you don't get credit for unintended good consequences in general, you do get such credit in some situations. Specifically, if the good consequence is associated with a bad consequence, you are allowed to get credit for the good consequence and trade it off against the bad consequence. If I buy a tomato, bad consequences of this (someone else can't get one) are balanced off against good consequences (the store knows to order extra tomatoes next week) because they are both part of the same process. On the other hand, I can't offset a murder by saving two drowning victims, because the acts are not entwined and I could do one without doing the other.

Comment by Jiro on Troll Bridge · 2019-08-23T19:12:21.858Z · LW · GW

How can you (in general) conclude something by examining the source code of an agent, without potentially implicating the Halting Problem?

Comment by Jiro on Raemon's Shortform · 2019-07-23T16:01:48.593Z · LW · GW

I think there's a difference between "Most of the IRS tax code is reasonable" and "Most of the instances where the IRS tax code does something are instances where it does reasonable things." Not all parts of the tax code are used equally often. Furthermore, most unreasonable instances of a lot of things will be rare as a percentage of the whole because there is a large set of uncontroversial background uses. For instance, consider a completely corrupt politician who takes bribes--he's not going to be taking a bribe for every decision he makes and most of the ones he does make will be uncontroversial things like "approve $X for this thing which everyone thinks should be approved anyway".

Comment by Jiro on Integrity and accountability are core parts of rationality · 2019-07-16T20:51:13.109Z · LW · GW

“I want employees to ask themselves whether they are willing to have any contemplated act appear the next day on the front page of their local paper—to be read by their spouses, children and friends—with the reporting done by an informed and critical reporter.”

Leaving out "parents" gets rid of some of the obvious objections, but even then, I don't want my children to know about my sexual fetishes. Other objections may include, for instance, letting your friends know that you voted for someone who they think will ruin the country. And I certainly wouldn't want rationalist-but-unpopular opinions I hold to be on the front page of the local paper to be seen by everyone. (Go ahead, see what happens when the front page of the newspaper announces that you think you should kill a fat man to stop a trolley.) This aphorism amounts to "never compartmentalize your life", which doesn't seem very justifiable.

Comment by Jiro on Everybody Knows · 2019-07-05T15:00:31.472Z · LW · GW

Bob does not know X. That’s why Alice is telling Bob in the first place.

Conversational phrases aren't supposed to be interpreted literally. "Everybody knows" never means "literally every single person knows". This is about equivalent to complaining that people say "you're welcome" when the person really wouldn't be welcome under some circumstances.

Don't be the literal Internet guy who thinks this way.

Comment by Jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T21:26:49.939Z · LW · GW

I think the word “unbiased” there may be a typo; your statement would make a lot more sense if the word you meant to put there was actually “biased”.

I meant "unbiased" in scare quotes: typical newsfeeds that are claimed to be unbiased in the real world (but may actually not be).

Comment by Jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T19:51:17.950Z · LW · GW

Typical unbiased newsfeeds in the real world are created by organizations with bias who have an interest in spreading biased news. It could, of course, be that this was about a rare instance where this was not the case, but the odds are against it.

Comment by Jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T18:09:26.798Z · LW · GW

Manipulative newsfeeds aren't an example of an AI becoming manipulative when the human just wanted it to be unbiased. They're an example of an AI becoming manipulative when the human also wanted it to be manipulative, but didn't want to be too obvious about it.

Comment by Jiro on [deleted post] 2019-06-12T22:03:52.195Z

Don't Gödel sentences rebut the ideas of groundedness or of creating a system where self-referential sentences are blocked? Their existence means that you can create something that behaves as a self-referential sentence and has the associated paradoxes while using only normal arithmetic and without a "this sentence".
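
For reference, a sketch of the standard result (the diagonal lemma, stated from memory): for any arithmetic formula $\varphi(x)$ there is a sentence $G$ such that

$$\vdash\; G \leftrightarrow \varphi(\ulcorner G \urcorner),$$

where $\ulcorner G \urcorner$ is the numeral for the Gödel number of $G$. Taking $\varphi(x) = \neg\mathrm{Prov}(x)$ yields a sentence that behaves exactly like "this sentence is not provable", even though it is an ordinary statement about numbers with no indexical "this sentence" anywhere in it.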