Posts

Minimizing Motivated Beliefs 2017-09-03T15:56:59.768Z
The Practical Argument for Free Will 2017-03-04T16:58:19.619Z
Alien Implant: Newcomb's Smoking Lesion 2017-03-03T04:51:10.288Z
David Allen vs. Mark Forster 2016-12-05T01:04:05.627Z

Comments

Comment by entirelyuseless on Value Deathism · 2018-02-21T16:17:53.188Z · LW · GW

"But of course the claims are separate, and shouldn't influence each other."

No, they are not separate, and they should influence each other.

Suppose your terminal value is squaring the circle using Euclidean geometry. When you find out that this is impossible, you should stop trying. You should go and do something else. You should even stop wanting to square the circle with Euclidean geometry.

What is possible directly influences what you ought to do, and what you ought to desire.

Comment by entirelyuseless on Reductionism · 2018-01-06T01:26:22.426Z · LW · GW

Nope. There is no composition fallacy where there is no composition. I am replying to your position, not to mine.

Comment by entirelyuseless on Announcing the AI Alignment Prize · 2017-12-20T03:43:01.457Z · LW · GW

I do care about tomorrow, which is not the long run.

I don't think we should assume that AIs will have any goals at all, and I rather suspect they will not, in the same way that humans do not, only more so.

Comment by entirelyuseless on Announcing the AI Alignment Prize · 2017-12-19T15:38:06.439Z · LW · GW

Not really. I don't care if that happens in the long run, and many people wouldn't.

Comment by entirelyuseless on Announcing the AI Alignment Prize · 2017-12-16T03:04:22.573Z · LW · GW

I considered submitting an entry basically saying this, but decided that it would be pointless since obviously it would not get any prize. Human beings do not have coherent goals even individually. Much less does humanity.

Comment by entirelyuseless on The "Intuitions" Behind "Utilitarianism" · 2017-12-15T14:04:35.407Z · LW · GW

Right. Utilitarianism is false, but Eliezer was still right about torture and dust specks.

Comment by entirelyuseless on The Critical Rationalist View on Artificial Intelligence · 2017-12-11T00:57:23.367Z · LW · GW

Can we agree that I am not trying to proselytize anyone?

No, I do not agree. You have been trying to proselytize people from the beginning and are still doing so.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

This is why you need to stop pointing to "Critical Rationalism" etc. as the road to truth.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

First, you are wrong: you should not mention truths in situations where it is harmful to mention them. Second, you are not "not watering down the truth". You are making many nonsensical and erroneous claims and presenting them as though they were a unified system of absolute truth. This is quite definitely proselytism.

Comment by entirelyuseless on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T15:00:09.470Z · LW · GW

I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.

Comment by entirelyuseless on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T02:28:25.693Z · LW · GW

Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."

There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T02:30:00.715Z · LW · GW

Do you not think the TCS parent hasn't also heard this scenario over and over? Do you think you're like the first one ever to have mentioned it?

Do you not think that I am aware that people who believe in extremist ideologies are capable of making excuses for not following the extreme consequences of their extremist ideologies?

But this is just the same as a religious person giving excuses for why the empirical consequences of his beliefs are the same whether his beliefs are true or false.

You have two options:

1) Embrace the extreme consequences of your extreme beliefs.
2) Make excuses for not accepting the extreme consequences. But then you will do the same things that other people do, like using baby gates, and then you have nothing to teach other people.

I should have said also that the stair-falling scenario and other similar scenarios are just excuses for people not to think about TCS.

You are the one making excuses, for not accepting the extreme consequences of your extremist beliefs.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T02:26:19.559Z · LW · GW

I suppose you're going to tell me that pushing or pulling my spouse out of the way of a car

Yes, it is.

Secondly, it is quite different from the stairway case, because your spouse would do the same thing on purpose if they saw the car, but the child will not move away when they see the stairs.

At that point I'll wonder what types of "force" you advocate using against children that you do not think should be used on adults.

Who said I advocate using force against children that we would not use against adults? We use force against adults, e.g. putting criminals in prison. It is an extremist ideology to say that you should never use force against adults, and it is equally an extremist ideology to say that you should never use force with children.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T16:09:22.508Z · LW · GW

I ignored you because your definition of force was wrong. That is not what the word means in English. If you pick someone up and take them away from a set of stairs, that is force if they were trying to move toward the stairs, even if they would not want to fall down them.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T06:25:17.357Z · LW · GW

a baby gate

We were talking about force before, not violence. A baby gate is using force.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:01:41.535Z · LW · GW

Children don't want to fall down stairs.

They do, however, want to move in the direction of the stairs, and you cannot "help them not fall down stairs" without forcing them not to move in the direction of the stairs.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T03:47:03.379Z · LW · GW

Saying it is "extremist" without giving arguments that can be criticised and then rejecting it would be rejecting rationality.

Nonsense. I say it is extremist because it is. The fact that I did not give arguments does not mean I am rejecting rationality. It simply means I am not interested in giving you arguments about it.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T03:46:01.714Z · LW · GW

You don't just get to use Bayes' Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes

I certainly do. I said that induction is not impossible, and that inductive reasoning is Bayesian. If you think that Bayesian reasoning is also impossible, you are free to establish that. You have not done so.

Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so).

If this is possible, it would be equally possible to refute induction (if it were impossible) by using induction. For example, if, whenever something had always happened before, it then stopped happening, induction would be refuted by induction.

If you think that is inconsistent (which it is), it would be equally inconsistent to refute CR with CR, since if it was refuted, it could not validly be used to refute anything, including itself.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T03:43:26.454Z · LW · GW

not initiating force against children as most parents currently do

Exactly. This is an extremist ideology. To give a couple of examples: parents should use force to prevent their children from falling down stairs or from hurting themselves with knives.

I reject this extremist ideology, and that does not mean I reject rationality.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T03:41:35.185Z · LW · GW

I said the thinking process used to judge the epistemology of induction is Bayesian, and my link explains how it is. I did not say it is an exhaustive explanation of epistemology.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T14:51:47.959Z · LW · GW

What is the thinking process you are using to judge the epistemology of induction?

The thinking process is Bayesian, and uses a prior. I have a discussion of it here.
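As a toy illustration of the kind of updating I mean (the numbers here are invented for the example, not taken from that discussion): let H be the hypothesis "this regularity always holds," give it a prior of 1/2, and suppose each observed instance has likelihood 1 under H and 0.9 under not-H. After n confirming instances,

P(H \mid E_n) = \frac{\tfrac{1}{2} \cdot 1^n}{\tfrac{1}{2} \cdot 1^n + \tfrac{1}{2} \cdot (0.9)^n} = \frac{1}{1 + (0.9)^n} \to 1 \text{ as } n \to \infty.

So repeated observation raises the probability of the generalization by ordinary Bayesian conditioning; nothing is used except the prior and the likelihoods.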

If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? ... Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.

Little problem there.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T14:49:22.075Z · LW · GW

"[I]deas on this website" is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.

This does not make it reasonable to call contradicting those ideas "contradicting Less Wrong." In any case, I am quite aware of the things I disagree with Yudkowsky and others about. I do not have a problem with that. Unlike you, I am not a cult member.

Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality.

So it says nothing at all except that you should be rational when you raise children? In that case, no one disagrees with it, and it has nothing to teach anyone, including me. If it says anything else, it can still be an extremist ideology, and I can reject it without rejecting rationality.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T01:34:48.869Z · LW · GW

You say that seemingly in ignorance that what I said contradicts Less Wrong.

First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.

Second, "contradicts Less Wrong" does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.

One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to?

No. Among other things, I meant that I agreed that AIs will have a stage of "growing up," and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.

You still can't even state the position correctly.

Since I have nothing to learn from you, I do not care whether I express your position the way you would express it. I meant the same thing. Induction is quite possible, and we do it all the time.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T15:29:47.279Z · LW · GW

"You need to understand this stuff." Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or yours and curi's) opinions are entirely false, like the idea that you have "disproved induction."

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T15:17:28.524Z · LW · GW

though no doubt there are people here who will say I am just a sock-puppet of curi’s.

And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi's. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or if you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T15:12:42.575Z · LW · GW

What's so special about this? If you're wrong about religion you get to avoidably burn in hell too, in a more literal sense. That does not (and cannot) automatically change your mind about religion, or get you to invest years in the study of all possible religions, in case one of them happens to be true.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T15:11:06.225Z · LW · GW

As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.

I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a "definitely yes" or "definitely no" answer, but your comment exactly matches everything I would want to call having a cult leader.

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T02:15:42.516Z · LW · GW

"He is by far the best thinker I have ever encountered. "

That is either because you are curi, and incapable of noticing someone more intelligent than yourself, or because curi is your cult leader.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-11-28T16:01:42.038Z · LW · GW

I haven't really finished thinking about this yet but it seems to me it might have important consequences. For example, the AI risk argument sometimes takes it for granted that an AI must have some goal, and then basically argues that maximizing a goal will cause problems (which it would, in general.) But using the above model suggests something different might happen, not only with humans but also with AIs. That is, at some point an AI will realize that if it expects to do A, it will do A, and if it expects to do B, it will do B. But it won't have any particular goal in mind, and the only way it will be able to choose a goal will be thinking about "what would be a good way to make sense of what I am doing?"

This is something that happens to humans with a lot of uncertainty: you have no idea what goal you "should" be seeking, because really you didn't have a goal in the first place. If the same thing happens to an AI, it will likely seem even more undermotivated than humans do, because we have at least vague and indefinite goals that were set by evolution. The AI, on the other hand, will have only whatever it happened to be doing up until it came to that realization with which to make sense of itself.

This suggests the orthogonality thesis might be true, but in a weird way. Not that "you can make an AI that seeks any given goal," but that "Any AI at all can seek any goal at all, given the right context." Certainly humans can; you can convince them to do any random thing, in the right context. In a similar way, you might be able to make a paperclipper simply by asking it what actions would make the most paperclips, and doing those things. Then when it realizes that different answers will cause different effects, it will just say to itself, "Up to now, everything I've done has tended to make paperclips. So it makes sense to assume that I will always maximize paperclips," and then it will be a paperclipper. But on the other hand if you never use your AI for any particular goal, but just play around with it, it will not be able to make sense of itself in terms of any particular goal besides playing around. So both evil AIs and non-evil AIs might be pretty easy to make (much like with humans.)

Comment by entirelyuseless on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-25T16:00:58.068Z · LW · GW

The right answer is maybe they won't. The point is that it is not up to you to fix them. You have been acting like a Jehovah's Witness at the door, except substantially more bothersome. Stop.

And besides, you aren't right anyway.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-11-22T15:24:20.585Z · LW · GW

I think we should use "agent" to mean "something that determines what it does by expecting that it will do that thing," rather than "something that aims at a goal." This explains why we don't have exact goals, but also why we "kind of" have goals: because our actions look like they are directed to goals, so that makes "I am seeking this goal" a good way to figure out what we are going to do, that is, a good way to determine what to expect ourselves to do, which makes us do it.

Comment by entirelyuseless on Fables grow around missed natural experiments · 2017-11-15T15:15:51.060Z · LW · GW

unless you count cases where a child spent a few days in their company

There are many cases where the child's behavior is far more assimilated to the behavior of the animals than could credibly result from merely a few days in their company.

Comment by entirelyuseless on Fables grow around missed natural experiments · 2017-11-14T14:20:04.468Z · LW · GW

I thought you were saying that feral children never existed and all the stories about them are completely made up. If so, I think you are clearly wrong.

Comment by entirelyuseless on Military AI as a Convergent Goal of Self-Improving AI · 2017-11-13T15:04:33.701Z · LW · GW

People are weakly motivated because even though they do things, they notice that for some reason they don't have to do them, but could do something else. So they wonder what they should be doing. But there are basic things that they were doing all along because they evolved to do them. AIs won't have "things they were doing", and so they will have even weaker motivations than humans. They will notice that they can do "whatever they want" but they will have no idea what to want. This is kind of implied by what I wrote here, except that it is about human beings.

Comment by entirelyuseless on Reductionism · 2017-11-10T14:05:13.020Z · LW · GW

Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind."

There seems to be a logic problem there.

Comment by entirelyuseless on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-05T23:28:55.749Z · LW · GW

I hear "communicate a model that says what will happen (under some set of future conditions/actions)".

You're hearing wrong.

Comment by entirelyuseless on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-04T17:45:42.734Z · LW · GW

Not at all. It means the ability to explain, not just say what will happen.

Comment by entirelyuseless on The Great Filter isn't magic either · 2017-10-28T15:12:37.609Z · LW · GW

"If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomena that is the cause of this."

Not necessarily something specific. It could be caused by general phenomena.

Comment by entirelyuseless on Time to Exit the Sandbox · 2017-10-25T13:43:40.296Z · LW · GW

This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn't part of an AI hack the rest of it and take over the universe?

Comment by entirelyuseless on Time to Exit the Sandbox · 2017-10-24T12:50:33.661Z · LW · GW

I entirely disagree that "rationalists are more than ready." They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.

(That said, AIs are unlikely to actually be fanatical.)

Comment by entirelyuseless on Critiquing Gary Taubes, Part 4: What Causes Obesity? · 2017-10-24T01:48:49.850Z · LW · GW

but I thought it didn't fare too well when tested against reality (see e.g. this and this)

I can't comment on those in detail without reading them more carefully than I care to, but that author agrees with Taubes that low carb diets help most people lose weight, and he seems to be assuming a particular model (e.g. he contrasts the brain being responsible with insulin being responsible, while it is obvious that these are not necessarily opposed.)

That's not common sense, that's analogies which might be useful rhetorically but which don't do anything to show that his view is correct.

They don't show that his view is correct. They DO show that it is not absurd.

Carbs are a significant part of the human diet since the farming revolution which happened sufficiently long time ago for the body to somewhat adapt (e.g. see the lactose tolerance mutation which is more recent).

Lactose intolerance is also more harmful to people than a poor tolerance for carbs is: gaining weight usually just means you lose a few years of life. Taubes also admits that some people are well adapted to carbs. Those would be the people that normal people would describe by saying "they can eat as much as they like without getting fat."

If you want to blame carbs (not even refined carbs like sugar, but carbs in general) for obesity, you need to have an explanation why their evil magic didn't work before the XX century.

He blames carbs in general, but he also says that sweeter or more easily digestible ones are worse, so he is blaming refined carbs more, and saying the effects are worse.

No, I'm not. For any animal, humans included, there is non-zero intake of food which will force it to lose weight.

Sure, but they might be getting fat at the same time. They could be gaining fat and losing even more of other tissue, and this is what Taubes says happened with some of the rats.

"Starve" seems to mean exactly the same thing as "lose weight by calorie restriction", but with negative connotations.

No. I meant that your body is being damaged by calorie restriction, not just losing weight.

And I don't know about modified rats, but starving humans are not fat.

He gives some partial counterexamples to this in the book.

Comment by entirelyuseless on Halloween costume: Paperclipperer · 2017-10-21T12:40:44.496Z · LW · GW

"Be confused, bewildered or distant when you insist you can't explain why."

This does not fit the character. A real paperclipper would give very convincing reasons.

Comment by entirelyuseless on Critiquing Gary Taubes, Part 4: What Causes Obesity? · 2017-10-21T01:23:06.104Z · LW · GW

So why does this positive feedback cycle start in some people, but not others?

This is his description:

  • You think about eating a meal containing carbohydrates.
  • You begin secreting insulin.
  • The insulin signals the fat cells to shut down the release of fatty acids (by inhibiting HSL) and take up more fatty acids (via LPL) from the circulation.
  • You start to get hungry, or hungrier.
  • You begin eating.
  • You secrete more insulin.
  • The carbohydrates are digested and enter the circulation as glucose, causing blood sugar levels to rise.
  • You secrete still more insulin.
  • Fat from the diet is stored as triglycerides in the fat cells, as are some of the carbohydrates that are converted into fat in the liver.
  • The fat cells get fatter, and so do you.
  • The fat stays in the fat cells until the insulin level drops.

But it gets worse because over time your cells start being resistant to insulin, so in order to overcome that, you secrete even more insulin, and so you get even fatter. According to him, this is why people tend to get heavier as they age. And if a time comes when you can't secrete enough insulin to overcome the resistance, then you get diabetes.

Two arguments he tries to make from common sense:

  1. No one expects a boy or girl to have their growth stunted from too much exercise. Instead, they will feel hungrier and eat more. In the same way when the insulin makes you get fatter, you will not stunt the fat growth by exercising. Instead, you will feel hungrier and eat more.

  2. Both eating less and exercising more make you hungrier, which makes you eat more, which makes you heavier. So the methods that people tell you to use to lose weight, cannot work. And this corresponds with how fasting diets normally work in the real world: people lose some weight, but they feel hungry all the time, so they stop, and they get the weight back.

His theoretical explanation is that carbohydrates are relatively new to humanity's diet, at least in significant quantities. So people are not as well adapted to them as to fat and protein. If you are gaining 2 lbs of weight per year, that is still a very precise match of calories in to calories out, just not as precise as keeping your weight absolutely even.
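To put a rough number on how precise that match is (this is my own back-of-the-envelope arithmetic, using the common estimate of roughly 3,500 kcal per pound of body fat):

2 \text{ lbs/year} \times 3500 \text{ kcal/lb} \approx 7000 \text{ kcal/year} \approx 19 \text{ kcal/day},

which, against a typical intake of 2,000 kcal or more per day, is an imbalance of less than one percent.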

But that is where the difference between people will turn out to be. Some people are lactose intolerant for similar reasons, but not everyone is. This is on account of genetic differences. In the same way some people can maintain their weight while eating carbohydrates, but most people cannot, and this would be on account of similar genetic differences. So his overall argument is that if you want to lose weight, you should eat less carbohydrates and more fat and protein. According to him, any degree of this will make you lose weight (or not gain it as fast), and make you healthier (or not get sicker as fast), and doing it more will just have more of those effects, all the way up to having no carbohydrates at all.

That's pretty clearly not true.

Here is his description of the rat experiment:

In the early 1970s, a young researcher at the University of Massachusetts named George Wade set out to study the relationship between sex hormones, weight, and appetite by removing the ovaries from rats (females, obviously) and then monitoring their subsequent weight and behavior.* The effects of the surgery were suitably dramatic: the rats would begin to eat voraciously and quickly become obese. If we didn’t know any better, we might assume from this that the removal of a rat’s ovaries makes it a glutton. The rat eats too much, the excess calories find their way to the fat tissue, and the animal becomes obese. This would confirm our preconception that overeating is responsible for obesity in humans as well. But Wade did a revealing second experiment, removing the ovaries from the rats and putting them on a strict postsurgical diet. Even if these rats were ravenously hungry after the surgery, even if they desperately wanted to be gluttons, they couldn’t satisfy their urge. In the lingo of experimental science, this second experiment controlled for overeating. The rats, postsurgery, were only allowed the same amount of food they would have eaten had they never had the surgery. What happened is not what you’d probably think. The rats got just as fat, just as quickly. But these rats were now completely sedentary. They moved only when movement was required to get food.

You are thinking of a situation where they are not allowed to eat at all. Of course nothing will get fat in that situation. However, here is another passage:

Let’s think about this for a second. If a baby rat that is genetically programmed to become obese is put on a diet from the moment it’s weaned, so it can eat no more than a lean rat would eat, if that, and can never eat as much as it would like, it responds by compromising its organs and muscles to satisfy its genetic drive to grow fat. It’s not just using the energy it would normally expend in day-to-day activity to grow fat; it’s taking the materials and the energy it would normally dedicate to building its muscles, organs, and even its brain and using that. When these obese rodents are starved to death—an experiment that fortunately not too many researchers have done—a common result reported in the literature is that the animals die with much of their fat tissue intact. In fact, they’ll often die with more body fat than lean animals have when the lean ones are eating as much as they like. As animals starve, and the same is true of humans, they consume their muscles for fuel, and that includes, eventually, the heart muscle. As adults, these obese animals are willing to compromise their organs, even their hearts and their lives, to preserve their fat.

In other words, it is not a question of getting fat without eating. The point is that the body has decided to put on fat, so that is the first thing that is done with the incoming calories. If you do not want to starve, you will have to eat more.

Think of the crowd example. OP suggests "they are overeating because they are getting fat" doesn't make sense for the crowd. But it does: "more people are coming into the room than leaving, because many of the ones coming in are insisting on staying in and not going out."

Comment by entirelyuseless on Critiquing Gary Taubes, Part 4: What Causes Obesity? · 2017-10-20T12:46:09.235Z · LW · GW

I just read the book (Why We Get Fat), and yes, he meant what he said when he said that people overeat because they are getting fat.

He explains this pretty clearly, though. He says it's true in the same sense that it's true that growing children eat more because they are growing. Since their bodies are growing, they need more food to supply that, and the kids get hungrier.

In the same way, according to his theory, because a person's body is taking calories and storing them in fat, instead of using them for other tissues and for energy, the person will be hungrier just like the kids are hungrier. He has examples that were pretty convincing to me on this score, e.g. the rats that had their ovaries removed, who would get fat no matter how much they ate; if you took away food, they would just start moving less, and would get equally fat.

Comment by entirelyuseless on Use concrete language to improve your communication in relationships · 2017-10-19T14:37:04.928Z · LW · GW

"What kind of person was too busy to text back a short reply?"

"Too busy" is simply the wrong way to think about it. If you are in a certain sort of low energy mood, replying may be extremely unlikely regardless of how much time you have. And it says nothing about whether you respect the person, at all.

For a similar reason, you may be quite unwilling to "explain what's going on," either.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-10-15T15:20:13.328Z · LW · GW

It is partly in the territory, and comes with the situation where you are modeling yourself. In that situation, the thing will always be "too complex to deal with directly," regardless of its absolute level of complexity.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-10-15T14:53:17.841Z · LW · GW

We can give similar answers about people's intentions.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-10-14T16:08:00.183Z · LW · GW

Isn't a big part of the problem the fact that you only have conscious access to a few things? In other words, your actions are determined in many ways by an internal economy that you are ignorant of (e.g. mental energy, physical energy use in the brain, time and space, etc.). These things are in fact value-relevant, but you do not know much about them, so you end up making up reasons why you did what you did.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-10-14T01:41:30.098Z · LW · GW

They don't have to acknowledge obsessive-compulsive behavior. Obviously they want both milk and sweets, even if they don't notice wanting the sweets. That doesn't prevent other people from noticing it.

Also, they may be lying, since they might think that liking sweets is low status.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-10-13T14:38:21.835Z · LW · GW

The problem with your "in practice" argument is that it would similarly imply that we can never know if someone is bald, since it is impossible to give a definition of baldness that rigidly separates bald people from non-bald people while respecting what we mean by the word. But in practice we can know that a particular person is bald regardless of the absence of that rigid definition. In the same way a particular person can know that he went to the store to buy milk, even if it is theoretically possible to explain what he did by saying that he has an abhorrence of milk and did it for totally different reasons.

Likewise, we can avoid money pumps by avoiding them when they come up in practice. We don't need to formulate principles which will guarantee that we will avoid them.

Comment by entirelyuseless on Humans can be assigned any values whatsoever... · 2017-10-13T13:57:56.654Z · LW · GW

The implied argument that "we cannot prove X, therefore X cannot be true or false" is not logically valid. I mentioned this recently when Caspar made a similar argument.

I think it is true, however, that humans do not have utility functions. I would not describe that by saying that humans are not rational; on the contrary, I think pursuing utility functions is the irrational thing.

Comment by entirelyuseless on You Too Can See Suffering · 2017-10-12T15:01:43.776Z · LW · GW

Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.

If they aren't so very aware of it, it is not "simply true," even if there is some truth in it.