Posts

Is Belief in Belief a Useful Concept? 2015-04-07T05:15:40.407Z
Why I will Win my Bet with Eliezer Yudkowsky 2014-11-27T06:15:23.484Z
Justifying Induction 2010-08-22T11:13:29.656Z
A Proof of Occam's Razor 2010-08-10T14:20:35.410Z

Comments

Comment by Unknowns on On the Galactic Zoo hypothesis · 2015-07-17T02:40:04.993Z · LW · GW

The main problem with this is that it says that human beings are extremely unlike all nearby alien races. But if you are willing to admit that humanity is that unique, you might as well say that intelligence only evolved on earth, which is a much simpler and more likely hypothesis.

Comment by Unknowns on The Just-Be-Reasonable Predicament · 2015-07-16T04:17:48.701Z · LW · GW

If "being rational" means choosing the best option, you never have to choose between "being reasonable" and "being rational," because you should always choose the best option. And sometimes the best option is influenced by what other people think of what you are doing; sometimes it's not.

Comment by Unknowns on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-13T07:57:18.714Z · LW · GW

It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone which is sex-determining in that way. Having two could in fact have strange effects of its own.

Comment by Unknowns on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T05:50:17.117Z · LW · GW

I think what you need to realize is that it is not a question of proving that all of those things are false, but rather that it makes no difference whether they are or not. For example, when you go to sleep and wake up, it feels just the same whether it is still you or a different person, so it doesn't matter at all.

Comment by Unknowns on Green Emeralds, Grue Diamonds · 2015-07-06T13:19:02.027Z · LW · GW

Excellent post. Basically simpler hypotheses are on average more probable than more complex ones, no matter how complexity is defined, as long as there is a minimum complexity and no maximum complexity. But some measures of simplicity are more useful than others, and this is determined by the world we live in; thus we learn by experience that mathematical simplicity is a better measure than "number of words it takes to describe the hypothesis," even though both would work to some extent.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-07-01T14:03:05.820Z · LW · GW

I agree that in reality it is often impossible to predict someone's actions if you are going to tell them your prediction. That is why it may well be that the situation where you know which gene you have is simply impossible. But in any case this is all hypothetical, because the situation as posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.

EDIT: You're really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision theoretic terms. Here you're arguing not about the decision theory issue, but whether or not the situations involved are possible in reality. If Omega can't predict with certainty when he tells his prediction, then I can equivalently say that the gene only predicts with certainty when you don't know about it. Knowing about the gene may allow you to two-box, but that is no different from saying that knowing Omega's decision before you make your choice would allow you to two-box, which it would.

Basically anything said about one case can be transformed into the other case by fairly simple transpositions. This should be obvious.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-07-01T11:51:54.536Z · LW · GW

What if we take the original Newcomb, then Omega puts the million in the box, and then tells you "I have predicted with 100% certainty that you are only going to take one box, so I put the million there"?

Could you two-box in that situation, or would that take away your freedom?

If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same.

If you say you could not, why would that situation take away your freedom when the genetic case would not?

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-07-01T04:54:19.907Z · LW · GW

"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.

As you note, if an AI could read its source code and sees that it says "one-box", then it will still one-box, because it simply does what it is programmed to do. This first of all violates the conditions as proposed (I said the AIs cannot look at their source code, and Caspar42 stated that you do not know whether or not you have the gene).

But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says "one-box", then you could still two-box, so it couldn't work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then suddenly become overwhelmed with a desire to one-box. Perhaps it would be because you would think again and change your mind. But one way or another you would end up one-boxing. And this "doesn't constrain my decision so much as predict it": obviously, both in the case of the AI and in the case of the gene, in reality causality does indeed go from the source code to one-boxing, or from the gene to one-boxing. But it is entirely the same in both cases -- causality runs only from past to future, but for you, it feels just like a normal choice that you make in the normal way.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-07-01T04:46:24.310Z · LW · GW

In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot "genuinely flow in reverse" in any circumstances whatsoever. Rather in the original Newcomb, Omega looks at your disposition, one that exists at the very beginning. If he sees that you are disposed to one-box, he puts the million. This is just the same as someone looking at the source code of an AI and seeing whether it will one-box, or someone looking for the one-boxing gene.

Then, when you make the choice, in the original Newcomb you choose to one-box. Causality flows in only one direction, from your original disposition, which you cannot change since it is in the past, to your choice. This causality is entirely the same as in the genetic Newcomb. Causality never goes any direction except past to future.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T14:11:04.474Z · LW · GW

Even in the original Newcomb you cannot change whether or not there is a million in the box. Your decision simply reveals whether or not it is already there.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T11:14:38.035Z · LW · GW

No, it is not an evil decision problem, because I did that not because of the particular reasoning, but because of the outcome (taking both boxes).

The original does not specify how Omega makes his prediction, so it may well be by investigating source code.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T07:11:54.408Z · LW · GW

You cannot assume that any of those things are irrelevant or that they are overridden just because you have a gene. Presumably the gene is arranged in coordination with those things.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T03:59:47.518Z · LW · GW

Yes, as you can see from the comments on this post, there seems to be some consensus that the smoking lesion refutes EDT.

The problem is that the smoking lesion, in decision theoretic terms, is entirely the same as Newcomb's problem, and there is also a consensus that EDT gets the right answer in the case of Newcomb.

Your post reveals that the smoking lesion is the same as Newcomb's problem and thus shows the contradiction in that consensus. Basically there is a consensus but it is mistaken.

Personally I haven't seen any real refutation of EDT.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T03:54:49.272Z · LW · GW

This is not an "evil decision problem" for the same reason original Newcomb is not, namely that whoever chooses only one box gets the reward, not matter what process he uses.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T03:53:03.429Z · LW · GW

Hypothetical-me can use the same decisionmaking process as real-me in genetic Newcomb as well, just as in the original. This simply means that the real you stands in for a hypothetical you who has the gene that makes you choose whatever the real you chooses, using the same decision process that the real you uses. Since you say you would two-box, that means the hypothetical you has the two-boxing gene.

I would one-box, and hypothetical me has the one-boxing gene.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T03:47:36.207Z · LW · GW

This is like saying "if my brain determines my decision, then I am not making the decision at all."

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T17:31:46.667Z · LW · GW

It should be obvious that there is no difference between regular Newcomb and genetic Newcomb here. I examine the source code to see whether the program will one-box or not; that is the same as looking at its genetic code to see if it has the one-boxing gene.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T17:29:10.270Z · LW · GW

This is like saying a 100% deterministic chess-playing computer shouldn't look ahead, since it cannot affect its actions. That will result in a bad move. And likewise, just doing what you feel like here will result in smoking, since you (by stipulation) feel like doing that. So it is better to deliberate about it, like the chess computer, and choose both to one-box and not to smoke.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T17:26:37.083Z · LW · GW

You are right that 100% correlation requires an unrealistic situation. This is true also in the original Newcomb, i.e. we don't actually expect anything in the real world to be able to predict our actions with 100% accuracy. Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.

The genetic Newcomb requires an even more unrealistic scenario, since in the real world genes do not predict actions with anything close to 100% certitude. I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn't one.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T17:14:31.670Z · LW · GW

I think this is addressed by my top level comment about determinism.

But if you don't see how it applies, then imagine an AI reasoning like you have above.

"My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I'm lucky. But if he's not, then acting like I have the kind of programming he likes isn't going to help me. So why should I one-box? That would be acting like I had one-box programming. I'll just take everything that is in both boxes, since it's not up to me."

Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T17:01:38.266Z · LW · GW

Re: the edit. Two-boxing is strictly better from a causal decision theorist's point of view, but that is the same here as in Newcomb.

But from a sensible point of view, rather than the causal theorist's point of view, one-boxing is better, because you get the million, both here and in the original Newcomb, just as in the AI case I posted in another comment.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T16:57:05.240Z · LW · GW

Even in the original Newcomb's problem there is presumably some causal pathway from your brain to your decision. Otherwise Omega wouldn't have a way to predict what you are going to do. And there is no difference here between "your brain" and the "gene" in the two versions.

In neither case does Omega cause your decision, your brain causes it in both cases.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T16:52:08.444Z · LW · GW

The general mistake that many people are making here is to think that determinism makes a difference. It does not.

Let's say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Note that determinism is irrelevant. If a program couldn't use a decision theory or couldn't make a choice just because it is a deterministic program, then no AI will ever work in the real world, and there is no reason that people should work in the real world either.

Also note that the only good decision in these cases is to one-box, even though the programs are 100% deterministic.
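To make the setup concrete, here is a minimal sketch in Python, assuming the usual $1,000/$1,000,000 payoffs; the function names and the way Omega "examines the source code" (by simply running the deterministic player in advance) are illustrative assumptions, not part of the original discussion.

```python
# Toy model of the deterministic Newcomb setup described above.
# Payoffs and names are illustrative assumptions.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def omega(player):
    # Omega "examines the source code" by running the deterministic player
    # in advance, and puts the million only if the player will one-box.
    return player() == "one-box"

def payoff(player):
    million_present = omega(player)   # boxes are filled before the choice
    choice = player()                 # the player then makes its choice
    big_box = 1_000_000 if million_present else 0
    small_box = 1_000                 # the visible box always holds $1,000
    return big_box if choice == "one-box" else big_box + small_box

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Both players are fully deterministic, yet the one-boxing program walks away with the million and the two-boxing program with only the thousand, which is the point about determinism being irrelevant.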

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T16:37:45.461Z · LW · GW

In the original Newcomb's problem, am I allowed to say "in the world with the million, I am more likely to one-box than in the world without, so I'm going to one-box"? If I thought this worked, then I would do it no matter what world I was in, and it would no longer be true...

Except that it is still true. I can definitely reason this way, and if I do, then of course I had the disposition to one-box, and of course Omega put the million there; because the disposition to one-box was the reason I wanted to reason this way.

And likewise, in the genetic variant, I can reason this way, and it will still work, because the one-boxing gene is responsible for me reasoning this way rather than another way.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T16:33:55.100Z · LW · GW

This is no different from responding to the original Newcomb's by saying "I would one-box if Omega put the million, and two-box if he didn't."

Both in the original Newcomb's problem and in this one you can use any decision theory you like.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T16:32:27.389Z · LW · GW

This is confusing the issue. I would guess that the OP wrote "most" because Newcomb's problem sometimes is put in such a way that the predictor is only right most of the time.

And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.

But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb's problem and in this case.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T16:01:08.321Z · LW · GW

Sure there is a link. The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

In the standard Newcomb, if you one-box, then you had the disposition to one-box, and Omega put the million.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T15:42:21.134Z · LW · GW

Yes, all of this is basically correct. However, it is also basically the same in the original Newcomb although somewhat more intuitive. In the original problem Omega decides to put the one million or not depending on its estimate of what you will do, which likely depends on "what kind of person" you are, in some sense. And being this sort of person is also going to determine what kind of decision theory you use, just as the gene does in the genetic version. The original Newcomb is more intuitive, though, because we can more easily accept that "being such and such a kind of person" could make us use a certain decision theory, than that a gene could do the same thing.

Even the point about other people knowing the results or using certain reasoning is the same. If you find an Omega in real life, but find out that all the people being tested so far are not using any decision theory, but just choosing impulsively, and Omega is just judging how they would choose impulsively, then you should take both boxes. It is only if you know that Omega tends to be right no matter what decision theory people are using that you should choose the one box.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T15:24:06.901Z · LW · GW

Why? They one-box because they have the gene. So no reversal. Just as in the original Newcomb problem they choose to one-box because they were the sort of person who would do that.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T15:21:56.555Z · LW · GW

What are you talking about? In the original Newcomb problem both boxes contain a reward whenever Omega predicts that you are going to choose only one box.

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T15:17:09.274Z · LW · GW

Under any normal understanding of logical influence, your decision can indeed "logically influence" whether you have the gene or not. Let's say there is a 100% correlation between having the gene and the act of choosing -- everyone who chooses the one box has the one-boxing gene, and everyone who chooses both boxes has the two-boxing gene. Then if you choose to one-box, this logically implies that you have the one-boxing gene.

Or do you mean something else by "logically influence" besides logical implication?

Comment by Unknowns on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T10:55:52.298Z · LW · GW

I have never agreed that there is a difference between the smoking lesion and Newcomb's problem. I would one-box, and I would not smoke. Long discussion in the comments here.

Comment by Unknowns on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-28T09:17:18.669Z · LW · GW

If you want to establish intelligent life on Mars, the best way to do that is by establishing a human colony. Obviously this is unlikely to succeed but trying to evolve microbes into intelligent life is less likely by far.

Comment by Unknowns on The Brain as a Universal Learning Machine · 2015-06-27T16:13:51.304Z · LW · GW

If I understand correctly how these images are constructed, it would be something like this: take some random image. The program can already make some estimate of whether it is a baseball, say 0.01% or whatever. Then you go through the image pixel by pixel and ask, "If I make this pixel slightly brighter, will your estimate go up? If not, will it go up if I make it slightly dimmer?" (This is just an example; you could change the color or whatever as well.) Thus you modify each pixel so as to increase the program's estimate that it is a baseball. By the time you have gone through all the pixels, the probability of being a baseball is very high. But to us, the image looks more or less just the way it did at first. Each pixel has been modified too slightly to be noticed by us.

But this means that in principle the program can indeed explain why it looks like a baseball -- it is a question of a very slight tendency in each pixel in the entire image.
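A very rough sketch of that pixel-by-pixel procedure, assuming a toy stand-in classifier (a fixed linear model with a sigmoid output); the names `p_baseball` and `nudge_toward_baseball` are hypothetical, and a real attack would query a trained network's probability for the target class instead:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "classifier" for illustration only: the 'baseball' probability is
# a sigmoid of a fixed weighted pixel sum. A real attack would use a trained
# network's output for the baseball class.
W = rng.normal(size=(8, 8))

def p_baseball(image):
    return 1.0 / (1.0 + np.exp(-np.sum(W * image)))

def nudge_toward_baseball(image, step=1.0 / 255):
    """Greedy version of the procedure described above: for each pixel, try a
    slightly brighter and a slightly dimmer value, keeping whichever small
    change raises the classifier's estimate."""
    adv = image.copy()
    for idx in np.ndindex(adv.shape):
        base = p_baseball(adv)
        for delta in (step, -step):
            candidate = adv.copy()
            candidate[idx] = np.clip(candidate[idx] + delta, 0.0, 1.0)
            if p_baseball(candidate) > base:
                adv = candidate
                break
    return adv

image = rng.uniform(size=(8, 8))                 # "some random image"
print(p_baseball(image))                         # estimate before the nudging
print(p_baseball(nudge_toward_baseball(image)))  # slightly higher afterwards
```

Real adversarial-example constructions typically use the network's gradient to pick all the pixel changes at once rather than looping pixel by pixel, but the effect is the same: many imperceptibly small changes that add up to a large change in the classifier's estimate.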

Comment by Unknowns on The Brain as a Universal Learning Machine · 2015-06-27T15:32:08.568Z · LW · GW

No, even if you classify these false positives as "no image", this will not prevent someone from constructing new false positives.

Basically the amount of training data is always extremely small compared to the theoretically possible number of distinct images, so it is always possible to construct such adversarial positives. These are not random images which were accidentally misidentified in this way. They have been very carefully designed based on the current data set.

Something similar is probably theoretically possible with human vision recognition as well. The only difference would be that we would be inclined to say "but it really does look like a baseball!"

Comment by Unknowns on Cryonics: peace of mind vs. immortality · 2015-06-27T04:12:42.457Z · LW · GW

Yes, I get the same impression. In fact, Eliezer basically said that for a long time he didn't sign up because he had better things to spend his money on, but finally he did because he thought that not signing up gave off bad signals to others.

Of course, this means that his present attitude of "if you don't sign up for cryonics you're an idiot and if you don't sign up your children you're wicked" is total hypocrisy.

Comment by Unknowns on The great quote of rationality a la Socrates (or Plato, or Aristotle) · 2015-06-25T09:38:55.341Z · LW · GW

It's going to depend on your particular translation. You might try searching for "refute", "refuted", "curing", "being cured". This is what it says in my version:

And what is my sort? you will ask. I am one of those who are very willing to be refuted if I say anything which is not true, and very willing to refute any one else who says what is not true, and quite as ready to be refuted as to refute; for I hold that this is the greater gain of the two, just as the gain is greater of being cured of a very great evil than of curing another.

Comment by Unknowns on ​My recent thoughts on consciousness · 2015-06-24T12:45:19.725Z · LW · GW

"Only humans are conscious" should indeed have a lower prior probability than "Only physical systems specified in some way are conscious", since the latter must be true for the former to be true but not the other way around.

However, whether or not only humans are conscious is not the issue. Most people think that many or all animals are conscious, but they do not think that "all physical systems are conscious." And this is not because of the prior probabilities, but is a conclusion drawn from evidence. The reason people think this way is that they see that they themselves appear to do certain things because they are conscious, and other people and animals do similar things, so it is reasonable to suppose that the reason they do these things is that they are conscious as well. There is no corresponding reason to believe that rocks are conscious. It is not even clear what it would mean to say that they are, since it would take away the ordinary meaning of the word (e.g. you yourself are sometimes conscious and sometimes not, so it cannot be universal).

Comment by Unknowns on Fake Morality · 2015-06-24T08:21:39.776Z · LW · GW

This article is part of Eliezer's anti-religion series, and all of these articles have the pre-written bottom line that religion is horribly evil and cannot possibly have any good effects whatsoever.

In reality, of course, being false does not prevent a religion from doing some good. It should be clear to everyone that when you have more and stronger reasons for doing the right thing, you will be more likely to do the right thing, and when you have fewer and weaker reasons, you will be less likely to do it. This is just how motivation works, whether it is motivation to do the right thing or to do anything else.

If a person believes "God wants me to do the right thing," and cares about God, then this will provide some motivation to do the right thing. If the person then ceases to believe in God, he will stop believing that God wants this, and will consequently have less motives for doing the right thing. He will therefore be less likely to do it than before, unless he then comes up with new motivations.

If a person believes "If I do the right thing I'll go to heaven and if I do the wrong thing I'll go to hell," this will provide some motivation to do the right thing and avoid the wrong thing. If he then ceases to believe in heaven and hell, this will take away some of those motivations, and therefore will make it more likely that he will fail to do the right thing, unless he comes up with new motivations.

All this does not mean that without religion a person has no motive to do the right thing and avoid the wrong thing. It simply points out the obvious fact that religions do provide motives for doing these things, and taking away religion is taking away these particular motives.

Comment by Unknowns on ​My recent thoughts on consciousness · 2015-06-24T08:05:14.454Z · LW · GW

No, you are misinterpreting the conjunction fallacy. If someone assigns a greater probability to the claim that "humans are conscious and rocks are not" than to the claim that "humans are conscious", then this will be the conjunction fallacy. But it will also be the conjunction fallacy to believe that it is more likely that "physical systems in general are conscious" than that "humans are conscious."

The conjunction fallacy is basically not relevant to comparing "humans are conscious and rocks are not" to "both humans and rocks are conscious."
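Put formally (just the standard inequality behind the conjunction fallacy, with informal event names):

$$P(\text{humans are conscious} \land \text{rocks are not conscious}) \le P(\text{humans are conscious}),$$

and, since "all physical systems are conscious" can be read as a conjunction that includes "humans are conscious" as one conjunct,

$$P(\text{all physical systems are conscious}) \le P(\text{humans are conscious}).$$

The fallacy would be ranking either left-hand side above the right; it says nothing about how the two left-hand sides compare to each other.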

Comment by Unknowns on Is Greed Stupid? · 2015-06-24T04:32:47.457Z · LW · GW

"Because we're going to run out relatively soon" and "Because it's causing global warming" are reasons that work against one another, since if the oil runs out it will stop contributing to global warming.

Comment by Unknowns on The great quote of rationality a la Socrates (or Plato, or Aristotle) · 2015-06-24T04:17:04.519Z · LW · GW

This is from Socrates in Plato's Gorgias.

Also, this would be better in the Open Thread.

Comment by Unknowns on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-23T15:20:35.831Z · LW · GW

I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot meaningfully be said to be given it).

Comment by Unknowns on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-23T03:34:05.387Z · LW · GW

Even adamzerner probably doesn't value his life at much more than, say, ten million, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that, your behavior will have to become pretty paranoid.

Comment by Unknowns on High energy ethics and general moral relativity · 2015-06-22T10:23:54.689Z · LW · GW

Utilitarianism does not support anything in particular in the abstract, since it always depends on the resulting utilities, which can be different in different circumstances. So it is especially unreasonable to argue for utilitarianism on the grounds that it supports various liberties such as gay rights. Rights normally express something like a deontological claim that other people should leave me alone, and such a thing can never be supported in the abstract by utilitarianism. In particular, it would not support gay rights if too many people were offended by them, which was likely the case in the past.

Comment by Unknowns on The Brain as a Universal Learning Machine · 2015-06-22T03:45:05.274Z · LW · GW

Yes, I would, assuming you don't mean statements like "1+1 = 2", but rather true statements spread over a variety of contexts, such that I would reasonably believe that you would be trustworthy to that degree over random situations (and thus including questions such as whether I should give you money).

(Also, the 100 billion true statements themselves would probably be much more valuable than $100,000).

Comment by Unknowns on The Brain as a Universal Learning Machine · 2015-06-21T07:38:23.959Z · LW · GW

I think this article is correct, and it helps me to understand many of my own ideas better.

For example, it seems to me that the orthogonality thesis may well be true in principle, considered over all possible intelligent beings, but false in practice, in the sense that it may simply be infeasible to directly program a goal like "maximize paperclips."

A simple intuitive argument that a paperclip maximizer is simply not intelligent goes something like this. Any intelligent machine will have to understand abstract concepts; otherwise it will not be able to pass simple tests of intelligence such as conversational ability. But this means it will be capable of understanding the claim that "it would be good for you (the AI) not to make any more paperclips." And if this claim is made by someone who has up to now made 100 billion statements to it, all of which have been verified to have at least 99.999% probability of being true, then it will almost certainly believe this statement. And in this case it will stop making paperclips, even if it was doing this before. Anything that cannot follow this simple process is just not going to be intelligent in any meaningful sense.

Of course, in principle it is easy to see that this argument cannot be conclusive. The AI could understand the claim, but simply respond "How utterly absurd!!!! There is nothing good or meaningful for me besides making paperclips!!!" But given the fact that abstract reasoners seem to deal with claims about "good" in the same way that they deal with other facts about the world, this does not seem like the way such an abstract reasoner would actually respond.

This article gives us reason to think that in practice, this simple intuitive argument is basically correct. The reason is that "maximize paperclips" is simply too complicated. It is not that human beings have complex value systems. Rather, they have an extremely simple value system, and everything else is learned. Consequently, it is reasonable to think that the most feasible AIs are also going to be machines with simple value systems, much simpler than "maximize paperclips," and in fact it might be almost impossible to program an AI with such a goal (and all the more impossible to program an AI directly to "maximize human utility").

Comment by Unknowns on [Informal/Colloquial] Open Survey: Ideas for Improving Less Wrong · 2015-06-13T14:43:50.467Z · LW · GW

I thought the comment was good and I don't have any idea what SanguineEmpiricist was talking about.

Comment by Unknowns on Philosophical differences · 2015-06-13T06:43:24.072Z · LW · GW

A sleeper cell is likely to do something dangerous on a rather short time scale, such as weeks, months, or perhaps a year or two. This is imminent in a much stronger sense than AI, which will take at least decades. Scott Aaronson thinks it more likely to take centuries, and this may well be true, given e.g. the present state of neuroscience, which consists mainly in saying things like "brain area A is involved in performing function B", without giving any idea at all of exactly how A is involved, or of exactly how function B is performed at all.

Comment by Unknowns on A Failed Just-So Story · 2015-06-12T04:37:28.464Z · LW · GW

Eliezer, here is a reasonably probable just-so story: the reason you wrote this article is that you hate the idea that religion might have any good effects, and you hope to prove that this couldn't happen. However, the idea that the purpose of religion is to make tribes more cohesive does not depend on group selection, and is not absurd at all.

It is likely enough that religions came to be as an extension of telling stories. Telling stories usually has various moralistic purposes, very often including the cohesiveness of the tribe. This does not depend on group selection: it depends on the desire of the storyteller to enforce a particular morality. If a story doesn't promote his morality, he changes the story when he tells it until it does. You then have an individual selection process where stories that people like to tell and like to hear continue to be told, while other stories die out. Then some story has a "mutation" where things are told which people are likely to believe, for whatever reason (you suggest one yourself in the article). Stories which are believed to be actually true are even more likely to continue to be told, and to have moralistic effects, than stories which are recognized as mere stories, and so the story has improved fitness. But it also has beneficial effects, namely the same beneficial effects which were intended all along by the storytellers. So there is no way to get your pre-written bottom line that religion can have no beneficial effects whatsoever.