Posts

Some suggestions (desperate pleas, even) 2017-11-09T23:14:14.427Z · score: 15 (6 votes)

Comments

Comment by jiro on A full explanation to Newcomb's paradox. · 2020-10-12T20:31:56.964Z · score: 2 (1 votes) · LW · GW

Under some circumstances, it seems that option 4 would result in the predictor trying to solve the Halting Problem since figuring out your best option may in effect involve simulating the predictor.

(Of course, you wouldn't be simulating the entire predictor, but you may be simulating enough of the predictor's chain of reasoning that the predictor essentially has to predict itself in order to predict you.)

Comment by jiro on Inaccessible finely tuned RNG in humans? · 2020-10-10T21:17:55.208Z · score: 2 (1 votes) · LW · GW

Generate several "random" numbers in your head; each will fall prey to the usual biases of mental number generation. Then add them together and take the sum mod X to produce a result that is much closer to a real random number.
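A quick sketch of why this works (the bias weights below are invented for illustration, not measured from real humans): summing several independent biased picks and reducing mod X smooths the distribution toward uniform, because each extra term convolves away more of the bias.

```python
import random
from collections import Counter

X = 10

def biased_pick():
    # Stand-in for a human's biased "random" digit: people tend to
    # overselect 7 and avoid 0, for example (weights are illustrative).
    return random.choices(range(X), weights=[1, 3, 3, 4, 3, 3, 3, 8, 3, 2])[0]

def combined_pick(n=5):
    # Sum several biased picks and reduce mod X; each extra term
    # smooths the distribution toward uniform.
    return sum(biased_pick() for _ in range(n)) % X

counts_single = Counter(biased_pick() for _ in range(100_000))
counts_combined = Counter(combined_pick() for _ in range(100_000))
print(sorted(counts_single.items()))    # visibly lopsided
print(sorted(counts_combined.items()))  # close to 10,000 per digit
```

The single-pick counts are heavily skewed toward 7, while the combined counts differ from uniform only by sampling noise.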

Comment by jiro on Some elements of industrial literacy · 2020-10-10T21:13:47.490Z · score: 2 (1 votes) · LW · GW

Remember the original post about epistemic learned helplessness: making people literate in some areas may be bad. Ignorance prevents people from doing good in those areas, but it also prevents them from falling prey to scams and fallacies in the same areas.

You might want the average person to fail to get excited about a 6% increase in battery energy density, because if too many people get excited about such things, the politicians, media machines, and advertisers will do their best to exploit this little bit of knowledge to extract money from the general public while producing as few actual improvements to energy density as possible. I'm sure you could name plenty of issues where the public understands that they are important without having the breadth of knowledge to not fall for "we have to do something, it's important!"

Comment by jiro on Weird Things About Money · 2020-10-05T03:03:41.502Z · score: 2 (1 votes) · LW · GW

Small bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate.

Comment by jiro on Weird Things About Money · 2020-10-05T03:02:55.027Z · score: 9 (5 votes) · LW · GW

Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. At some point, even the hassle of figuring out whether the bet is a good deal dwarfs the utility gained from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold, because you gain from avoiding the overhead. Even if such a policy makes you lose out on some good bets, you also spend less overhead on bad bets, which makes up for that loss.

The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.

Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can't exploit such fixed costs to money pump someone.
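The point about fixed costs dominating small bets comes down to one line of arithmetic (the 5% edge and $0.50 overhead below are illustrative numbers, not anything from the original discussion):

```python
def net_value(stake, edge=0.05, fixed_cost=0.50):
    # Expected winnings scale with the stake; the overhead of
    # evaluating and settling the bet does not.
    return edge * stake - fixed_cost

# A $100 bet with a 5% edge clears a $0.50 overhead...
print(net_value(100))
# ...while a $1 bet with the same percentage edge is a net loss.
print(net_value(1))
```

Because the edge term shrinks linearly with the stake while the overhead stays constant, every favorable-odds bet has some stake below which taking it is a mistake.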

Comment by jiro on Words and Implications · 2020-10-05T02:58:24.498Z · score: 2 (1 votes) · LW · GW

“I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”

Statements can have inconsistent truth values. The logical analysis done by the jester is wrong because the jester is assuming that the statements are either true or false. This assumption is unwarranted, and given the actual box contents, the statements aren't true or false.

In other words, it's not that the jester correctly analyzed the logic of the inscriptions but was tripped up because logic has no connection to the real world; the jester incorrectly analyzed the logic of the inscriptions. If he had analyzed it correctly, he would have figured out that the contents of the boxes could be anything.

Comment by jiro on A few misconceptions surrounding Roko's basilisk · 2020-09-11T23:03:26.692Z · score: 2 (1 votes) · LW · GW

not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right

This is misleading. Eliezer didn't believe that the exact argument made by Roko was correct, but he believed that some version of the Basilisk could be and wanted to suppress it to avoid giving tools to superintelligences to hurt us.

Comment by jiro on Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia · 2020-03-11T18:36:30.527Z · score: 2 (1 votes) · LW · GW

This is similar to the simulation hypothesis, and in fact is sometimes used as a response to the simulation hypothesis.

Comment by jiro on Potential Ways to Fight Mazes · 2020-01-29T21:33:12.204Z · score: 11 (2 votes) · LW · GW

Consider this recent column by the excellent Matt Levine. It vividly describes the conflict between engineering, which requires people communicate information and keep accurate records, and the legal system and public relations, which tell you that keeping accurate records is insane.

It certainly sounds like a contradiction, but the spin that article puts on it is unconvincing:

In other words, if you are trying to build a good engineering culture, you might want to encourage your employees to send hyperbolic, overstated, highly quotable emails to a broad internal distribution list when they object to a decision. On the other hand your lawyers, and your public relations people, will obviously and correctly tell you that that is insane: If anything goes wrong, those emails will come out, and the headlines will say “Designed by Clowns,”

This argument is essentially "truth is bad".

We try to pretend that making problems sound worse than they really are, in order to compel action, is not lying. But it really is. This complaint sounds like "we want to get the benefits of lying, but not the harm". If you're overstating a problem in order to get group A to act in ways that they normally wouldn't, don't be surprised if group B also reacts in ways that they normally wouldn't, even if A's reaction helps you and B's reaction hurts you. The core of the problem is not that B gets to hear it; the core of the problem is that you're being deceitful, even if you're exaggerating something that does contain some truth.

(Also, this will result in a ratchet where every decision that engineers object to is always the worst, most disastrous, decision ever, because if your goal is to get someone to listen, you should always describe the current problem as the worst problem ever.)

Comment by jiro on Reality-Revealing and Reality-Masking Puzzles · 2020-01-22T22:20:36.252Z · score: 11 (7 votes) · LW · GW

The epistemic immune system serves a purpose--some things are very difficult to reason out in full and some pitfalls are easy to fall in unknowingly. If you were a perfect reasoner, of course, this wouldn't matter, but the epistemic immune system is necessary because you're not a perfect reasoner. You're running on corrupted hardware, and you've just proposed dumping the error-checking that protects you from flaws in the corrupted hardware.

And saying "we should disable them if they get in the way of accurate beliefs" is, to mix metaphors, like saying "we should dispense with the idea of needing a warrant for the police to search your house, as long as you're guilty". Everyone thinks their own beliefs are accurate; saying "we should get rid of our epistemic immune system if it gets in the way of accurate beliefs" is equivalent to getting rid of it all the time.

Comment by jiro on What's an important (new) idea you haven't had time to argue for yet? · 2019-12-10T22:19:07.823Z · score: 5 (4 votes) · LW · GW

Under what circumstances do you get people telling you they are fine? That doesn't happen to me very much--"I'm fine" as part of normal conversation does not literally mean that they are fine.

Comment by jiro on Experiments and Consent · 2019-11-11T20:23:30.121Z · score: 6 (3 votes) · LW · GW

“if it’s ok to do A or B then it’s fine to run an experiment on A vs B”

Allowing A and B, and allowing an experiment on A vs. B, may create different incentives, and these incentives may be different enough to change whether we should allow the experiment versus allowing A and B.

Comment by jiro on What do the baby eaters tell us about ethics? · 2019-10-10T21:58:40.745Z · score: 3 (2 votes) · LW · GW

Luckily for you, there definitely exists a rule that tells you the best possible move to play for every given configuration of pieces—the rule that tells you the move that maximizes the probability of victory (or since draws exist and may be acceptable, the move that minimizes the probability of defeat).

If your opponent is a perfect player, each move has a 0% or 100% probability of victory. You can only maximize it in a trivial sense.

If your opponent is an imperfect player, your best move is the one that maximizes the probability of victory given your opponent's pattern of imperfection. Depending on what this pattern is, this may also mean that each move has a 0% or 100% probability of victory.

Comment by jiro on Bets and updating · 2019-10-10T21:43:31.664Z · score: -2 (3 votes) · LW · GW

Your process of deciding what to do may at some point include simulating Omega and Omicron. If so, this means that when Omega and Omicron are simulating you, they are now trying to solve the Halting Problem. I am skeptical that Omega or Omicron can solve the Halting Problem.

Comment by jiro on Asymmetric Justice · 2019-09-26T21:34:34.321Z · score: 4 (2 votes) · LW · GW

I would suggest that this is ameliorated by the following:

  1. Nobody actually believes that you are to blame for every bad consequence of things you do, no matter how indirect. A conscientious person is expected to research and know some of the indirect consequences of his actions, but this expectation doesn't go out to infinity.

  2. While you don't get credit for unintended good consequences in general, you do get such credit in some situations. Specifically, if the good consequence is associated with a bad consequence, you are allowed to get credit for the good consequence and trade it off against the bad consequence. If I buy a tomato, bad consequences of this (someone else can't get one) are balanced off against good consequences (the store knows to order extra tomatoes next week) because they are both part of the same process. On the other hand, I can't offset a murder by saving two drowning victims, because the acts are not entwined and I could do one without doing the other.

Comment by jiro on Troll Bridge · 2019-08-23T19:12:21.858Z · score: 1 (3 votes) · LW · GW

How can you (in general) conclude something by examining the source code of an agent, without potentially implicating the Halting Problem?

Comment by jiro on Raemon's Shortform · 2019-07-23T16:01:48.593Z · score: 4 (2 votes) · LW · GW

I think there's a difference between "Most of the IRS tax code is reasonable" and "Most of the instances where the IRS tax code does something are instances where it does reasonable things." Not all parts of the tax code are used equally often. Furthermore, most unreasonable instances of a lot of things will be rare as a percentage of the whole because there is a large set of uncontroversial background uses. For instance, consider a completely corrupt politician who takes bribes--he's not going to be taking a bribe for every decision he makes and most of the ones he does make will be uncontroversial things like "approve $X for this thing which everyone thinks should be approved anyway".

Comment by jiro on Integrity and accountability are core parts of rationality · 2019-07-16T20:51:13.109Z · score: 5 (3 votes) · LW · GW

I want employees to ask themselves whether they are willing to have any contemplated act appear the next day on the front page of their local paper—to be read by their spouses, children and friends—with the reporting done by an informed and critical reporter.”

Leaving out "parents" gets rid of some of the obvious objections, but even then, I don't want my children to know about my sexual fetishes. Other objections may include, for instance, letting your friends know that you voted for someone who they think will ruin the country. And I certainly wouldn't want rationalist-but-unpopular opinions I hold to be on the front page of the local paper to be seen by everyone (Go ahead, see what happens when the front page of the newspaper announces that you think that you should kill a fat man to stop a trolley.) This aphorism amounts to "never compartmentalize your life" which doesn't seem very justifiable.

Comment by jiro on Everybody Knows · 2019-07-05T15:00:31.472Z · score: 7 (4 votes) · LW · GW

Bob does not know X. That’s why Alice is telling Bob in the first place.

Conversational phrases aren't supposed to be interpreted literally. "Everybody knows" never means "literally every single person knows". This is about equivalent to complaining that people say "you're welcome" when the person really wouldn't be welcome under some circumstances.

Don't be the literal Internet guy who thinks this way.

Comment by jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T21:26:49.939Z · score: 2 (1 votes) · LW · GW

I think the word “unbiased” there may be a typo; your statement would make a lot more sense if the word you meant to put there was actually “biased”.

I meant "unbiased" in scare quotes: typical newsfeeds that are claimed to be unbiased in the real world (but may not actually be).

Comment by jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T19:51:17.950Z · score: 2 (1 votes) · LW · GW

Typical "unbiased" newsfeeds in the real world are created by biased organizations that have an interest in spreading biased news. It could, of course, be that this was a rare instance where that was not the case, but the odds are against it.

Comment by jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T18:09:26.798Z · score: 6 (6 votes) · LW · GW

Manipulative newsfeeds aren't an example of an AI becoming manipulative when the human just wanted it to be unbiased. They're an example of an AI becoming manipulative when the human also wanted it to be manipulative, but didn't want to be too obvious about it.

Comment by jiro on Logic, Buddhism, and the Dialetheia · 2019-06-12T22:03:52.195Z · score: 2 (1 votes) · LW · GW

Don't Gödel sentences rebut the idea of groundedness, or of creating a system where self-referential sentences are blocked? Their existence means that you can create something that behaves as a self-referential sentence, with the associated paradoxes, using only ordinary arithmetic and without any "this sentence".

Comment by jiro on Drowning children are rare · 2019-05-31T22:33:47.352Z · score: 3 (8 votes) · LW · GW

I would not, in fact, save a drowning child.

Or rather, I'd save a central example of a drowning child, but I wouldn't save a drowning child under literally all circumstances, and I think most people wouldn't either. A drowning child in a scenario similar to the ones Singer uses the analogy for would be something like an endless series of drowning children in front of me, with an individually small but cumulatively large cost to saving each one. Under those circumstances, I would not save every drowning child, or even try to maximize the number of drowning children I save.

Comment by jiro on [AN #56] Should ML researchers stop running experiments before making hypotheses? · 2019-05-23T23:07:17.465Z · score: 2 (1 votes) · LW · GW

It seems to me that if you expect that the results of your experiment can be useful in and generalized to other situations, then it has to be possible to replicate it. Or to put it another way, if the principle you discovered is useful for more than running the same program with a different seed, shouldn't it be possible to test it by some means other than running the same program with a different seed?

Comment by jiro on By default, avoid ambiguous distant situations · 2019-05-23T16:25:46.276Z · score: 4 (2 votes) · LW · GW

the pre-brainwashed person had preferences about their future selves

That would qualify as

for instance, you might think that forcibly changing preferences is different from creating a being with unusual preferences

Also, it's possible for people to have preferences about either their descendants, or about other sentient beings, just like they have preferences about their future selves. In fact, I would suggest that pretty much all the opposition to the idea is because people have preferences about their descendants or about other sentient beings. Again, it may be useful to spell out why you think those preferences merit less respect than preferences about one's future self.

(Note that some answers to this require making assumptions about how to aggregate preferences, which are themselves serious points of disagreement. For instance, you might say that if you create a lot of slaves, the preferences of that large number should have a large weight. Such assumptions can also be questioned, and most people would question them.)

Comment by jiro on [AN #56] Should ML researchers stop running experiments before making hypotheses? · 2019-05-22T19:17:59.664Z · score: 11 (2 votes) · LW · GW

Instead of preregistering all experiments, maybe researchers could run experiments and observe results, formulate a theory, and then preregister an experiment that would test the theory—but in this case I would expect that researchers end up “preregistering” experiments that are very similar to the experiments that generated the theory, such that the results are very likely to come out in support of the theory.

Why would you expect this? Assuming you are not suggesting "what if the researchers lie and say they did the experiment again when they didn't", then doing a similar experiment again is called "replication". If the initial result was caused by p-hacking, then the similar experiment won't support the theory. This is why we do replication.

Also, I notice the term "p-hacking" appears nowhere in your post.

Comment by jiro on By default, avoid ambiguous distant situations · 2019-05-22T19:12:26.863Z · score: 1 (2 votes) · LW · GW

Consider a similar situation without creating a race: some wizard brainwashes an existing person into becoming a willing slave. Is it moral to thwart the preferences of the brainwashed person by not enslaving him, or by forcibly modifying his brain to desire freedom again? Most people would say yes.

You might argue that there is a difference (for instance, you might think that forcibly changing preferences is different from creating a being with unusual preferences) but it may be useful to spell out those differences and distinguish between objections that are affected by those differences and objections which are not.

Comment by jiro on Tales From the American Medical System · 2019-05-10T19:54:04.561Z · score: -9 (5 votes) · LW · GW

My friend explains again that he does not have the time to see any doctor the next day

He had the time to see a doctor. He didn't have the time to see a doctor without disrupting his life, which isn't the same thing as actually not having the time to see the doctor. And the fact that seeing a doctor disrupts his life is his own fault for delaying the appointment.

The doctor shouldn't be required to alter his procedure just because doing so would alleviate the consequences of the patient's own decisions.

Comment by jiro on Tales From the American Medical System · 2019-05-10T19:41:08.014Z · score: 4 (2 votes) · LW · GW

The conceptual gap between a standard use of a poison from a hardware store and a deadly use is much larger than the gap between a standard use and a deadly use of a medication, so I would expect far more tragedies to come from the medication than from the hardware store poison.

Nobody's going to self-diagnose and inject themselves with poison from a hardware store.

Comment by jiro on Episode 3 of Tsuyoku Naritai! (the 'becoming stronger podcast): Nike Timers · 2019-05-05T16:23:08.807Z · score: 3 (2 votes) · LW · GW

Calling this "Tsuyoku Naritai" is marginally better than calling your image editing program "GIMP". The name signals something really unfavorable to a lot of people (weeabooism in this case). And yes, I know it comes from an essay by Eliezer. He still seems to have gotten the phrase from shonen anime. (Also, if he did, it counts as appealing to fictional evidence.)

Geeks like to ignore the name "GIMP" because after all, the name of something has no relation to how it functions so, of course, you should never shy away from something just because of its name if its functionality is good, right? Which is a way of thinking that ignores the real world.

Comment by jiro on Counterspells · 2019-04-30T14:32:43.001Z · score: 4 (2 votes) · LW · GW

That isn't enough, though. First of all, some of what I said applies directly to the quality of the argument--someone could be sincere, but biased, and I may have a reason to avoid arguments based on personal experience or personal expertise from him about certain subjects, without completely avoiding conversation with him. Second, what I said applies when you're arguing with person A (who you can have a discussion with) and they're referencing person B (who you can't), and you want to dismiss the reference to B--in the example above, someone is referring back to the argument made by a senator, but he is not the senator himself.

Comment by jiro on The Forces of Blandness and the Disagreeable Majority · 2019-04-30T05:33:43.591Z · score: 18 (7 votes) · LW · GW

Since the 1970’s, Americans have become more tolerant of allowing people with controversial views to speak in public

The question of whether Americans have become more tolerant of speech is about recent changes, not changes since the 1970s.

There's also the problem of how speech is classified. Those figures show that the tolerance for letting racists speak has gone down recently, which may be concerning--but it's much more concerning if more things get moved into the "racist" category, which seems to be happening. Also, I don't see "sexists" or any other category aside from "racists" that is hated by the left, and it's quite possible that adding more such categories would show more downturns.

Comment by jiro on Counterspells · 2019-04-29T22:09:57.622Z · score: 4 (2 votes) · LW · GW

If there's something wrong with the senator's argument, you should say what it is; and if there isn't, what difference does it make that he's a senator?

Finding things wrong with an argument is not effort-free. The fact that someone may be biased may in some cases be enough to make me not want to spend the effort. Furthermore, most real-life arguments are not purely logical deductions and involve a certain amount of trusting that the other person has presented facts honestly and in a way that is not one-sided or based on motivated reasoning, especially when perceptions and personal experience are involved.

There's also a certain chance that someone will sneak a bad argument by me simply because I am human and imperfect at analyzing arguments. I can minimize the chance of this without causing other problems if I only argue with people who are relatively unbiased.

It matters much more whether [person] is wrong or right than what their tone is.

No, it doesn't. Imagine replacing "abusive tone" with "breaks the windows of my house". Whether someone is right or wrong is unrelated to whether he breaks the windows of my house, but I'd probably call the police and ignore his arguments. Abusive tone is negative utility for me, and I'm not interested in getting negative utility when I can avoid it.

Comment by jiro on Wirehead your Chickens · 2018-06-27T15:16:15.605Z · score: 8 (2 votes) · LW · GW

There are two related but separate ideas. One is that if you want to find out if someone is harmed by X, you need to consider whether they would prefer X in a base state, even if X affects their preferences. Another is that if you want to find out if someone is harmed by X, you need to consider what they would prefer if they knew about and understood X, even if they don't.

Modifying an animal to have a smaller brain falls in the second category; pretty much any being who can understand the concept would consider it harmful to be modified to have a smaller brain, so it should also be considered harmful for beings who don't understand the concept. It may also fall in the first category if you try to argue "their reduced brain capacity will prevent them from knowing what they're missing by having reduced brain capacity". Modifying it so that it enjoys pain falls in the second category for the modification, and the first category for considering whether the pain is harmful.

Comment by jiro on Wirehead your Chickens · 2018-06-25T21:14:58.209Z · score: 5 (2 votes) · LW · GW

Most non-rationalists think that whether doing Y on target X is good depends on whether X would prefer Y in a base state where X is unaltered by Y and is aware of the possibility of Y, even if having Y would change his perception or is completely concealed from his perception.

If you're going to create animals who want to be eaten (or who enjoy actions that would otherwise cause suffering), you need to assess whether this is good or bad based on whether a base state animal with unaltered desires would want to be eaten or would want to be subject to those actions. If you're going to amputate animals' body parts, you need to consider whether a base state animal with those parts would want them amputated.

The proposals above all fail this standard.

Comment by jiro on Why Destructive Value Capture? · 2018-06-20T22:31:06.039Z · score: 13 (3 votes) · LW · GW

Generally, people have a heuristic of "if this is straightforwardly and immediately harmful, I'm going to be very skeptical about claims that contradict that." And this is not just because they're stubbornly being irrational--it's because it's a lot easier to make a mistake or be convinced by sophistry when looking at long indirect chains of causation than at direct ones.

The straightforward and immediate effect of not trying to sell a seat is that you lose money because you forego the possible income from selling that seat. It is possible that ripping out those seats has secondary effects that cumulatively result in you making more money anyway. But actually doing that calculation is hard (and your original post did not do the calculation--it speculated instead), and you are limited in how well you can assess the correctness of such a calculation. It ends up becoming a form of epistemic learned helplessness where the correct thing to do is to massively discount arguments for doing things that straightforwardly harm you.

Comment by jiro on Resolving the Dr Evil Problem · 2018-06-17T16:57:37.094Z · score: 7 (1 votes) · LW · GW

"I am a stubborn git who would destroy the Earth and ignore the possibility of cloning, even if such an action produces negative utility for me" is just another way of saying "I have precommitted to destroying the Earth".

Comment by jiro on The Curious Prisoner Puzzle · 2018-06-17T16:53:36.756Z · score: 7 (1 votes) · LW · GW

The whole thing is basically the Monty Hall problem.
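For readers who don't remember the reference: in the classic three-door Monty Hall setup, switching wins about 2/3 of the time, which a short simulation (assuming the standard formulation, where the host always opens a non-prize door) can confirm:

```python
import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # roughly 1/3 vs 2/3
```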

Comment by jiro on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2018-06-06T20:14:35.722Z · score: 7 (3 votes) · LW · GW

Given how people actually act, a norm of "no literal falsehoods, but you can say deceptive but literally true things" will encourage deception in a way that "no deception unless really necessary" will not. "It's literally true, so it isn't lying" will easily slip to "it's literally true, so it isn't very deceptive", which will lead to people being more willing to deceive.

It's also something that only Jedi, certain religious believers, autists, Internet rationalists, and a few other odd groups would think is a good idea. "It isn't lying because what I said was literally true" is a proposition that most people see as sophistry.

Comment by jiro on Expressive Vocabulary · 2018-06-06T19:44:51.288Z · score: 5 (2 votes) · LW · GW

I don't know perfectly well what someone means when they say the dip is full of chemicals. I know roughly what they mean, but I can't figure out exactly what they mean, or even know if they have a consistent or thought out definition at all.

When telling them that the dip contains dihydrogen monoxide, I am not being pedantic; I am saying "the plain meaning of what you are saying doesn't make sense. And any not-plain meanings are beyond my ability to guess, so could you please tell me what you're really trying to say?"

Comment by jiro on Against accusing people of motte and bailey · 2018-06-04T01:46:28.084Z · score: 51 (15 votes) · LW · GW

If different people in the group make sensible and crazy interpretations, and you're arguing with someone who claims to be making only the sensible interpretation, I'd expect that that person would at least be willing to

1) admit that other members of the group are saying things that are crazy. They don't have to preemptively say it ahead of time, but they could at least say it when they are challenged on it.

2) treat known crazy-talking people as crazy-talking people, rather than glossing over their craziness in the interests of group solidarity.

I'm also very suspicious when the person with the reasonable interpretation benefits too much from the existence of (and the failure to challenge) the person with the crazy interpretation. His refusal to condemn the other guy then looks suspicious. The term for this is "good cop, bad cop", and the fact that we already have a term for it should hint that it actually happens.

And finally, sometimes as a practical matter, it's necessary to go against the bad cops. If the motte is some kind of reasonable objection to James Damore, and the bailey is "Damore said (list of things he didn't actually say)" and the bailey is all over the media and Internet and is used to attack engineers, that bailey is the one to be concerned about and the one to focus most of my effort against. It's not just argument, it's argument in service of a goal, in this case, not to be stomped on by people using baileys.

Comment by jiro on A tentative solution to a certain mythological beast of a problem · 2018-05-09T18:42:09.558Z · score: 2 (2 votes) · LW · GW

Similarly, our history is marked by an evolution from a disregard for living creatures that impeded our survival, to a respect of other living creatures and life (abolishment of slavery, vegan, better treatment of creatures, etc). With sentience comes greater respect for life and self actualization.

That seems to imply that as society advances, abortion will be prohibited, at least at stages where the fetus has as much mental capacity as an animal.

Comment by jiro on Predicting Future Morality · 2018-05-09T18:35:49.380Z · score: 10 (3 votes) · LW · GW

Unfortunately, many attempts to figure this out end up as "I believe X is good morality, but not a lot of people do it. Well, everyone who disagrees with me about X is obviously biased by the fact that doing X is difficult. If doing X were easy, they would all be enlightened and recognize that I am correct about X." Is there something that you don't personally already think is moral, where changing circumstances would lead more people to think it's moral in the future?

Also, it's worth looking at the past as well and seeing what things did not change even though the theory you are using to predict changes in the future seems like it would predict them.

Comment by jiro on Naming the Nameless · 2018-03-24T16:54:06.392Z · score: 12 (6 votes) · LW · GW

Replying to the intro topic instead of to the actual topic: “light-contrast, minimalist elegance” is exactly what the lesserwrong interface is not.

One reason sites have this problem is that designers want to be Doing Something. Nobody gets a promotion based on a web interface that is good because it's easy to ignore. Nobody gets the same satisfaction or praise from making a boring interface as from making an exciting one. Because nobody wants to not be noticed, even if the best interface is one that doesn't have to be noticed.

Comment by jiro on Caring less · 2018-03-19T09:51:38.869Z · score: 5 (3 votes) · LW · GW

Some things are central examples of caring (caring about the homeless), and other things are noncentral examples of caring (caring about sleeping late, caring about leisure activities). Whether a speaker describes something as "you should care more about X" or "you should care less about Y" does communicate information--it depends partly on how central an example of caring he considers X and Y to be.

(It also depends on how broad X and Y are. If you want to tell someone "you should care less about the entire range of activities that includes everything except climate change", you would probably describe it as "you should care more about climate change". So it doesn't follow that any "care more" can be reasonably phrased as a "care less".)

Comment by jiro on Cash transfers are not necessarily wealth transfers · 2017-12-09T18:38:42.356Z · score: 3 (1 votes) · LW · GW

If you give poor people money to spend on positional goods, the market will eventually respond, but it doesn't respond instantly. They may actually be able to purchase the positional goods in the time it takes for the market to respond. Furthermore, if you give the money to only a relatively small number of poor people, the effect of your money may not be enough for the market to respond much.

Now, apparently what's actually happening is that some poor people are spending the money they get from cash transfers on school fees, but (I guess) most aren't. What then? Staying in Econ-101-land, what this indicates is that different people have different values, and the poor people who get the most utility from having educated kids will do that.

But you're interested in making the donations effective. If only a small portion of the recipients will spend them in utility-increasing ways, you have to discount the effectiveness accordingly.

Comment by jiro on Cash transfers are not necessarily wealth transfers · 2017-12-09T18:31:18.753Z · score: 1 (2 votes) · LW · GW

By poor third world country standards, all education is uber expensive.

Comment by jiro on Some suggestions (desperate please, even) · 2017-11-13T03:59:24.866Z · score: 2 (1 votes) · LW · GW

Firefox under Linux at home also doesn't show the conversations icon or do anything if I click "login". I know that cookies are not disabled on it. I have to use Chromium in order to get it to work.