Open thread, Oct. 10 - Oct. 16, 2016

post by MrMind · 2016-10-10T07:00:15.327Z · LW · GW · Legacy · 157 comments

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

157 comments

Comments sorted by top scores.

comment by username2 · 2016-10-10T09:23:33.835Z · LW(p) · GW(p)

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist? I just read "Amy Schumer offers you a look into your soul" and I really liked it, but I don't have enough time to read all the posts on the blog.

Replies from: 9eB1, scarcegreengrass
comment by 9eB1 · 2016-10-10T17:01:32.249Z · LW(p) · GW(p)

I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with best posts for a new reader, and a thread on people's favorite things from TLP.

Replies from: username2
comment by username2 · 2016-10-12T20:18:51.067Z · LW(p) · GW(p)

Hey thanks for this. I had some time and I compiled this chronologically ordered list of links from those threads for personal use. EDIT: Now contains a few links posted in this thread: http://8ch.net/ratanon/res/2850.html

https://my.mixtape.moe/nwpmby.html

comment by scarcegreengrass · 2016-10-10T17:24:47.465Z · LW(p) · GW(p)

This blog is so wordy and cultural that I (unfamiliar with the context) find it actually challenging to figure out what the premise, thesis, or content of a post is. It reminds me of my experience discovering arcane 'neoreaction' blogs.

Replies from: ChristianKl
comment by ChristianKl · 2016-10-10T21:51:24.788Z · LW(p) · GW(p)

It's certainly not a blog that tries to pander to the reader.

comment by niceguyanon · 2016-10-11T17:32:28.045Z · LW(p) · GW(p)

https://www.quora.com/How-can-I-get-Wi-Fi-for-free-at-a-hotel/answer/Yishan-Wong

Want free wifi when staying at a hotel? Ask for it. Of course! Duh, it seems so obvious now that I think about it.

comment by turchin · 2016-10-10T11:13:53.557Z · LW(p) · GW(p)

If we knew that AI would be created by Google, and that it would happen in the next 5 years, what should we do?

Replies from: James_Miller, ChristianKl, ZankerH, Thomas, entirelyuseless, username2, woodchopper
comment by James_Miller · 2016-10-10T13:59:55.715Z · LW(p) · GW(p)

Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.

Replies from: turchin
comment by turchin · 2016-10-10T14:28:19.656Z · LW(p) · GW(p)

Good point, but my question was about what we can do to raise the chances that it will be a friendly AI.

Replies from: skeptical_lurker, James_Miller, Lumifer
comment by skeptical_lurker · 2016-10-10T18:26:46.741Z · LW(p) · GW(p)

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.

Replies from: Houshalter, turchin
comment by Houshalter · 2016-10-10T20:07:29.927Z · LW(p) · GW(p)

I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.

There are some methods that may work with NN-based approaches. For instance, my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short-term ones. Or even AIs that don't have goals at all and just make predictions, e.g. predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.

These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.

I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that - e.g. whether they would just tweak their own RAM to set reward = +Inf, and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.

Replies from: turchin
comment by turchin · 2016-10-11T09:42:39.546Z · LW(p) · GW(p)

I agree. FAI should somehow use a human upload or a human-like architecture for its value core. In that case values will be represented in it in complex and non-orthogonal ways, and at least one human-like creature will survive.

comment by turchin · 2016-10-11T09:35:41.845Z · LW(p) · GW(p)

Yes. I think that we need not only a workable solution, but an implementable one. If someone creates an 800-page PDF starting with a new set theory, a solution to the Löb's theorem problem, etc., and comes to Google with it saying "Hi, please, switch off all you have and implement this" - it will not work.

But in 2016 MIRI did add a line of research on machine learning.

comment by James_Miller · 2016-10-11T04:10:40.061Z · LW(p) · GW(p)

Get a job at Google or seek to influence the people developing the AI. If, say, you were a beautiful woman you could, probably successfully, start a relationship with one of Google's AI developers.

Replies from: turchin, username2
comment by turchin · 2016-10-11T06:21:53.737Z · LW(p) · GW(p)

And how would she use this relationship to make the AI safer?

Replies from: James_Miller
comment by James_Miller · 2016-10-11T14:33:02.546Z · LW(p) · GW(p)

She could read "The Basic AI Drives" to him at night.

Replies from: turchin
comment by turchin · 2016-10-11T15:14:53.293Z · LW(p) · GW(p)

In the hope that he will stop creating AI? But then in 6 years it will be Microsoft.

comment by username2 · 2016-10-11T19:24:07.139Z · LW(p) · GW(p)

I am confused as to whether I should upvote for "get a job at Google" or downvote for "prostitute yourself".

comment by Lumifer · 2016-10-10T14:48:06.429Z · LW(p) · GW(p)

Nothing, because we still don't know what a friendly AI is.

Replies from: skeptical_lurker, DanArmak, Houshalter
comment by skeptical_lurker · 2016-10-10T18:21:41.026Z · LW(p) · GW(p)

That doesn't mean that there is nothing to do - if you don't know what FAI is, then you try to work out what it is.

Replies from: Lumifer
comment by Lumifer · 2016-10-10T18:43:42.931Z · LW(p) · GW(p)

And how do you find out whether you're right or not?

comment by DanArmak · 2016-10-10T14:55:47.801Z · LW(p) · GW(p)

We do know it isn't an AI that kills us. Options b and c still qualify.

Replies from: Lumifer
comment by Lumifer · 2016-10-10T15:10:09.503Z · LW(p) · GW(p)

Options (b) and (c) are basically wishes and those are complex X-D

"Not kill us" is an easy criterion, we already have an AI like that, it plays Go well.

Replies from: DanArmak
comment by DanArmak · 2016-10-10T16:18:24.382Z · LW(p) · GW(p)

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Replies from: Lumifer
comment by Lumifer · 2016-10-10T16:43:09.772Z · LW(p) · GW(p)

If it's a tool AGI, I don't see how it would help with friendliness, and if it's an active self-developing AGI, I thought the canonical position of LW was that there could be only one? and it's too late to do anything about friendliness at this point?

Replies from: DanArmak
comment by DanArmak · 2016-10-10T21:32:01.072Z · LW(p) · GW(p)

I agree there would probably only be one successful AGI, so it's not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.

comment by Houshalter · 2016-10-10T20:15:41.072Z · LW(p) · GW(p)

Friendly AI is an AI which maximizes human values. We know what it is, we just don't know how to build one. Yet, anyway.

Replies from: Lumifer, woodchopper
comment by Lumifer · 2016-10-11T18:38:33.824Z · LW(p) · GW(p)

We don't know what an AI which maximizes human values is because we don't know what human values are at the necessary level of precision. Not to mention the assumption that the AI will be a maximizer and that values can be maximized.

Replies from: Houshalter
comment by Houshalter · 2016-10-12T07:34:44.647Z · LW(p) · GW(p)

Who says we need to hardcode human values though? Any reasonable solution will involve an AI that learns what human values are. Or some other solution to the control problem that produces AIs that don't want to harm or defy their creators.

Replies from: Lumifer
comment by Lumifer · 2016-10-12T16:35:05.189Z · LW(p) · GW(p)

But if you don't know what human values are, how can you be sure that the AI will learn them correctly?

So you make an AI and tell it: "Go forth and learn human values!" It goes and in a while comes back and says "Behold, I have learned them". How do you know this is true?

Replies from: Houshalter
comment by Houshalter · 2016-10-13T04:13:14.596Z · LW(p) · GW(p)

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly. I can't look at the weights and see if they are correct dog image recognizing weights and not something else. But I can trust the process of training and validation, that the AI has learned to recognize what dogs look like.

It's a similar principle with learning human values. Of course it's more complicated than just feeding it images of dogs, but the principle of letting AIs learn models from real world data is the important part.
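
As a loose illustration of what "trust the process of training and validation" can mean in practice, here is a minimal sketch of my own (synthetic data standing in for dog images, scikit-learn assumed available), not anything from the original comment:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for "dog vs. not-dog" images.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# We never inspect the learned weights to check that they are "correct
# dog-recognizing weights"...
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# ...we trust the model only to the extent that it performs well on data
# it has never seen before.
print("held-out accuracy:", accuracy_score(y_val, model.predict(X_val)))
```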

Replies from: Lumifer
comment by Lumifer · 2016-10-13T14:22:08.470Z · LW(p) · GW(p)

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly.

Of course you do. You test it. You show it a lot of images (that it hasn't seen before) of dogs and not-dogs and check how good it is at differentiating them.

How would that process work for an AI and human values?

the principle of letting AIs learn models from real world data

Right, human values: “A man's greatest pleasure is to defeat his enemies, to drive them before him, to take from them that which they possessed, to see those whom they cherished in tears, to ride their horses, and to hold their wives and daughters in his arms.”

Replies from: Houshalter
comment by Houshalter · 2016-10-14T06:23:52.468Z · LW(p) · GW(p)

Do you expect me to give you the complete solution to AI right here, right now? What are you even trying to say? You seem to be arguing that FAI is impossible. How can you possibly know that? Just because you can't immediately see a solution to the problem, doesn't mean a solution doesn't exist.

I think an AI will easily be able to learn human values from observations. It will be able to build a model of humans, and predict what we will do and say. It certainly won't base all its understanding on a stupid movie quote. The AI will know what you want.

Replies from: Lumifer
comment by Lumifer · 2016-10-14T14:26:41.686Z · LW(p) · GW(p)

What are you even trying to say?

I'm saying that if you can't recognize Friendliness (and I don't think you can), trying to build a FAI is pointless as you will not be able to answer "Is it Friendly?" even when looking at it.

I think an AI will easily be able to learn human values from observations.

So if you can't build a supervised model, you think going to unsupervised learning will solve your problems? The quote I gave you is part of human values -- humans do value triumph over their enemies. Evolution taught humans to eliminate competition, it taught them to be aggressive and greedy -- all human values. Why do you think your values will be preferred by the AI to values of, say, ISIS or third-world Maoist guerrillas? They're human, too.

Replies from: Houshalter
comment by Houshalter · 2016-10-15T01:44:37.794Z · LW(p) · GW(p)

Why do I need to recognize Friendliness to build an FAI? I only need to know that the process used to construct it results in a friendly AI. Trying to inspect the weights of a complex neural network (or whatever) is pointless as I stated earlier. We haven't the slightest idea how AlphaGo's net really works, but we can trust it to beat the best Go champions.

Evolution taught humans to eliminate competition, it taught them to be aggressive and greedy -- all human values.

Evolution also taught humans to be cooperative, empathetic, and kind.

Really, your objection seems to be the whole point of CEV. A CEV wouldn't just include the values of ISIS members, but also those of their victims. And it would be extrapolated, to be not just their current opinions on things, but what their opinions would be if they knew more. Their values if they had more time to think about and consider issues. With those two conditions, the negative parts of human values are entirely eliminated.

Replies from: Lumifer, entirelyuseless
comment by Lumifer · 2016-10-17T14:31:10.443Z · LW(p) · GW(p)

I only need to know that the process used to construct it results in a friendly AI.

You are still facing the same problem. Given that you can't recognize friendliness, how will you create or choose a process which will build a FAI? Would you be able to answer "Will it be friendly?" by looking at the process?

the negative parts of human values are entirely eliminated.

That doesn't make much sense. What do you mean by "negative" and from which point of view? If from the point of view of the AI, that's just a trivial tautology. If from the point of view of (at least some) humans, this seems to be not so.

In general, do you treat morals/values as subjective or objective? If objective, the whole "if they knew more" part is entirely unnecessary: you're discovering empirical reality, not consulting with people on what do they like. And subjectivism here, of course, makes the whole idea of CEV meaningless.

Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of "improve".

Replies from: Houshalter
comment by Houshalter · 2016-10-20T20:27:59.531Z · LW(p) · GW(p)

how will you create or choose a process which will build a FAI?

You are literally asking me to solve the FAI problem right here and now. I understand that FAI is a very hard problem and I don't expect to solve it instantly. Just because a problem is hard, doesn't mean it can't have a solution.

First of all let me adopt some terminology from Superintelligence. I think FAI requires solving two somewhat different problems. Value Learning and Value Loading.

You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want. I think that's the easy problem, and any intelligent AI will form a model of humans and understand what we want. Getting it to care about what we want seems like the hard problem to me.

But I do see some promising ideas to approach the problem. For instance have AIs that predict what choices a human would make in each situation. So you basically get an AI which is just a human, but sped up a lot. Or have an AI which presents arguments for and against each choice, so that humans can make more informed choices. Then it could predict what choice a human would make after hearing all the arguments, and do that.
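
To make the "predict what choices a human would make" idea slightly more concrete, here is a toy sketch of my own (the situations, features and labels are invented; any real system would be far more involved):

```python
from sklearn.ensemble import RandomForestClassifier

# Logged (situation, choice) pairs: each situation is a feature vector,
# each label is the option the human actually picked.
situations = [[0, 1, 3], [1, 0, 2], [0, 0, 5], [1, 1, 1], [0, 1, 4], [1, 0, 3]]
human_choices = ["ask", "act", "ask", "act", "ask", "act"]

imitator = RandomForestClassifier(random_state=0).fit(situations, human_choices)

def decide(new_situation):
    # The system never optimizes a goal of its own; it only predicts
    # what the human would have chosen in this situation.
    return imitator.predict([new_situation])[0]

print(decide([0, 1, 2]))
```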

More complicated ideas were mentioned in Superintelligence. I like the idea of "motivational scaffolding". Somehow train an AI that can learn how the world works and can generate an "interpretable model". Like e.g. being able to understand English sentences and translate their meanings into representations the AI can use. Then you can explicitly program a utility function into the AI using its learned model.
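
A very rough sketch of how I read the "motivational scaffolding" idea: a learned world model exposes interpretable features, and the utility function is then written by hand over those features. Everything below is a hypothetical toy of my own, not Bostrom's proposal in any detail:

```python
def learned_model(state, action):
    # Stand-in for a learned transition model; in the real proposal this
    # part would be trained from data, not hand-written.
    temperature, _ = state
    if action == "heat":
        temperature += 1.0
    elif action == "cool":
        temperature -= 1.0
    people_happy = 1.0 - abs(temperature - 21.0) / 10.0
    return (temperature, people_happy)

def utility(state):
    # Explicitly programmed over the model's interpretable features.
    _, people_happy = state
    return people_happy

def choose_action(state, actions=("heat", "cool", "wait")):
    # Pick the action whose predicted next state scores highest.
    return max(actions, key=lambda a: utility(learned_model(state, a)))

print(choose_action((18.0, 0.7)))  # -> "heat"
```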

That doesn't make much sense. What do you mean by "negative" and from which point of view?

From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.

Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of "improve".

Your stated example was ISIS. ISIS is so bad because they incorrectly believe that God is on their side and wants them to do the things they do. That the people that die will go to heaven, so loss of life isn't so bad. If they were more intelligent, informed, and rational... If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.

The second thing CEV does is average everyone's values together. So even if ISIS really does value killing people, their victims value not being killed even more. So a CEV of all of humanity would still value life, even if evil people's values are included. Even if everyone was a sociopath, their CEV would still be the best compromise possible, between everyone's values.

Replies from: Lumifer
comment by Lumifer · 2016-10-21T14:54:55.628Z · LW(p) · GW(p)

You are literally asking me to solve the FAI problem right here and now.

No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.

You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want.

Learning what humans want is pretty easy. However it's an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.

From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.

Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative. Since I don't expect to find myself in a privileged position, I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.

Later you say that CEV will average values. I don't have average values.

If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.

I see no evidence to believe this is true and lots of evidence to believe this is false.

You are essentially saying that religious people are idiots and if only you could sit them down and explain things to them, the scales would fall from their eyes and they will become atheists. This is a popular idea, but it fails real-life testing very very hard.

Replies from: Houshalter, hairyfigment
comment by Houshalter · 2016-10-25T06:11:58.814Z · LW(p) · GW(p)

No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.

And I don't agree with that. I've presented some ideas on how an FAI could be built, and how CEV would work. None of them require "recognizing" FAI. What would it even mean to "recognize" FAI, except to see that it values the kinds of things we value and makes the world better for us.

Learning what humans want is pretty easy. However it's an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.

I've written about one method to accomplish this, though there may be better methods.

Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative.

Humans are 99.999% identical. We have the same genetics, the same brain structures, and mostly the same environments. The only reason this isn't obvious, is because we spend almost all our time focusing on the differences between people, because that's what's useful in everyday life.

I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.

That may be the case, but that's still not a bad outcome. In the example I used, the values dropped from ISIS members were dropped for two reasons: they were based on false beliefs, or they hurt other people. If you have values based on false beliefs, you should want them to be eliminated. If you have values that hurt other people, then it's only fair that they be eliminated. Otherwise you risk being subject to the values of people who want to hurt you.

Later you say that CEV will average values. I don't have average values.

Well I think it's accurate, but it's somewhat nonspecific. Specifically, CEV will find the optimal compromise of values. The values that satisfy the most people the most amount. Or at least dissatisfy the fewest people the least. See the post I just linked for more details, on one example of how that could be implemented. That's not necessarily "average values".

In the worst case, people with totally incompatible values will just be allowed to go separate ways, or whatever the most satisfying compromise is. Muslims live on one side of the Dyson sphere, Christians on the other, and they never have to interact and can do their own thing.
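
As a toy picture of "dissatisfy the fewest people the least" (my own illustration; the candidate outcomes and the dissatisfaction numbers are made up):

```python
# Each candidate outcome gets a dissatisfaction score from each person
# (0 = fully satisfied). The "compromise" is the outcome minimizing the total.
dissatisfaction = {
    "outcome_A": [0.0, 0.9, 0.8],
    "outcome_B": [0.3, 0.2, 0.4],   # nobody's favorite, but nobody's nightmare
    "outcome_C": [0.8, 0.0, 0.9],
}

compromise = min(dissatisfaction, key=lambda o: sum(dissatisfaction[o]))
print(compromise)  # -> "outcome_B"
```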

You are essentially saying that religious people are idiots and if only you could sit them down and explain things to them, the scales would fall from their eyes and they will become atheists. This is a popular idea, but it fails real-life testing very very hard.

My exact words were "If they were more intelligent, informed, and rational... If they knew all the arguments for and against..." Real world problems of persuading people don't apply. Most people don't research all the arguments against their beliefs, and most people don't rationally and seriously consider the hypothesis that they are wrong.

For what it's worth, I was deconverted like this. Not overnight by any means. But over time I found that the arguments against my beliefs were correct and I updated my belief.

Changing world views is really really hard. There's no one piece of evidence or one argument to dispute. Religious people believe that there is tons of evidence of God. To them it just seems obviously true. From miracles, to recorded stories, to their own personal experiences, etc. It takes a lot of time to get at every single pillar of the belief and show its flaws. But it is possible. It's not like Muslims were born believing in Islam. Islam is not encoded in genetics. People deconvert from religions all the time, entire societies have even done it.

In any case, my proposal does not require literally doing this. It's just a thought experiment, to show that the ideal set of values is what you would choose if you had all the correct beliefs.

Replies from: Lumifer
comment by Lumifer · 2016-10-25T15:01:48.428Z · LW(p) · GW(p)

What would it even mean to "recognize" FAI

It means that when you look at an AI system, you can tell whether it's FAI or not.

If you can't tell, you may be able to build an AI system, but you still won't know whether it's FAI or not.

I've written about one method to accomplish this

I don't see what voting systems have to do with CEV. The "E" part means you don't trust what the real, current humans say, so making them vote on anything is pointless.

Humans are 99.999% identical.

That's a meaningless expression without a context. Notably, we don't have the same genes or the same brain structures. I don't know about you, but it is really obvious to me that humans are not identical.

...false beliefs ... it's only fair ...

How do you know what's false? You are a mere human, you might well be mistaken. How do you know what's fair? Is it an objective thing, something that exists in the territory?

The values that satisfy the most people the most amount.

Right, so the fat man gets thrown under the train... X-)

Muslims live on one side of the Dyson sphere, Christians on the other

Hey, I want to live on the inside. The outside is going to be pretty gloomy and cold :-/

Real world problems of persuading people don't apply.

LOL. You're just handwaving then. "And here, in the difficult part, insert magic and everything works great!"

Replies from: Houshalter
comment by Houshalter · 2016-10-25T20:42:59.601Z · LW(p) · GW(p)

It means that when you look at an AI system, you can tell whether it's FAI or not.

Look at it how? Look at its source code? I argued that we can write source code that will result in FAI, and you could recognize that. Look at the weights of its "brain"? Probably not, any more than we can look at human brains and recognize what they do. Look at its actions? Definitely, FAI is an AI that doesn't destroy the world etc.

I don't see what voting systems have to do with CEV. The "E" part means you don't trust what the real, current humans say, so making them vote on anything is pointless.

The voting doesn't have to actually happen. The AI can predict what we would vote for, if we had plenty of time to debate it. And you can get even more abstract than that and have the FAI just figure out the details of E itself.

The point is to solve the "coherent" part. That you can find a set of coherent values from a bunch of different agents or messy human brains. And to show that mathematicians have actually extensively studied a special case of this problem, voting systems.
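
For what it's worth, here is one of the classic aggregation rules from that literature, a Borda count, as a small sketch (the voters and options are invented for illustration; this is not a claim about how CEV itself would aggregate anything):

```python
from collections import defaultdict

# Each "voter" ranks the options from most to least preferred.
rankings = [
    ["peace", "trade", "war"],
    ["trade", "peace", "war"],
    ["war", "peace", "trade"],
]

# Borda count: an option gets (n-1) points for a first place, (n-2) for second, etc.
scores = defaultdict(int)
for ranking in rankings:
    n = len(ranking)
    for place, option in enumerate(ranking):
        scores[option] += n - 1 - place

print(max(scores, key=scores.get))  # -> "peace"
```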

That's a meaningless expression without a context. Notably, we don't have the same genes or the same brain structures. I don't know about you, but it is really obvious to me that humans are not identical.

Compared to other animals, compared to aliens, yes we are incredibly similar. We do have 99.99% identical DNA, our brains all have the same structure with minor variations.

How do you know what's false?

Did I claim that I did?

How do you know what's fair? Is it an objective thing, something that exists in the territory?

I gave a precise algorithm for doing that actually.

Right, so the fat man gets thrown under the train... X-)

Which is the best possible outcome, vs killing 5 other people. But I don't think these kinds of scenarios are realistic once we have incredibly powerful AI.

LOL. You're just handwaving then. "And here, in the difficult part, insert magic and everything works great!"

I'm not handwaving anything... There is no magic involved at all. The whole scenario of persuading people is counterfactual and doesn't need to actually be done. The point is to define more exactly what CEV is. It's the values you would want if you had the correct beliefs. You don't need to actually have the correct beliefs, to give your CEV.

Replies from: Lumifer
comment by Lumifer · 2016-10-26T14:28:40.432Z · LW(p) · GW(p)

I think we have, um, irreconcilable differences and are just spinning wheels here. I'm happy to agree to disagree.

comment by hairyfigment · 2016-10-22T23:12:46.927Z · LW(p) · GW(p)

We typically imagine CEV asking what people would do if they 'knew what the AI knew' - let's say the AI tries to estimate expected value of a given action, with utility defined by extrapolated versions of us who know the truth, and probabilities taken from the AI's own distribution. I am absolutely saying that theism fails under any credible epistemology, and any well-programmed FAI would expect 'more knowledgeable versions of us' to become atheists on general principles. Whether or not this means they would change "if they knew all the arguments for and against religion," depends on whether or not they can accept some extremely basic premise.
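
In symbols, using my own notation for what the comment describes (not a formula from the original):

\[
\mathrm{EV}(a) \;=\; \sum_{s} P_{\mathrm{AI}}(s \mid a)\, U_{\mathrm{extrapolated}}(s),
\]

where \(P_{\mathrm{AI}}\) is the AI's own probability distribution over outcomes of action \(a\), and \(U_{\mathrm{extrapolated}}\) is the utility assigned by the extrapolated, better-informed versions of us.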

(Note that nobody comes into the world with anything even vaguely resembling a prior that favors a major religion. We might start with a bias in favor of animism, but nearly everyone would verbally agree this anthropomorphism is false.)

It seems much less clear if CEV would make psychopathy irrelevant. But potential victims must object to their own suffering at least as much as real-world psychopaths want to hurt them. So the most obvious worst-case scenario, under implausibly cynical premises, looks more like Omelas than it does a Mongol invasion. (Here I'm completely ignoring the clause meant to address such scenarios, "had grown up farther together".)

Replies from: Lumifer
comment by Lumifer · 2016-10-24T15:15:05.481Z · LW(p) · GW(p)

We typically imagine CEV asking what people would do if they 'knew what the AI knew'

No, we don't, because this would be a stupid question. CEV doesn't ask people, CEV tells people what they want.

any well-programmed FAI would expect 'more knowledgeable versions of us' to become atheists on general principles.

I see little evidence to support this point of view. You might think that atheism is obvious, but a great deal of people, many of them smarter than you, disagree.

comment by entirelyuseless · 2016-10-15T20:39:19.087Z · LW(p) · GW(p)

This amounts to saying "because I'm right and once everyone gets to know reality better, they'll figure out I'm right."

In reality they will also figure out the places where you are wrong, and there will be many of them.

Replies from: Houshalter
comment by Houshalter · 2016-10-16T06:47:29.346Z · LW(p) · GW(p)

I'm not claiming that at all. I may be wrong about many things. It's irrelevant.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-16T20:21:22.623Z · LW(p) · GW(p)

It is not irrelevant. You said, "With those two conditions, the negative parts of human values are entirely eliminated." That certainly meant that things like ISIS opinions would be eliminated. I agree in that particular case, but there are many other things that you would consider negative which will not be eliminated. I can probably guess some of them, although I won't do that here.

Replies from: Houshalter
comment by Houshalter · 2016-10-20T20:29:43.866Z · LW(p) · GW(p)

See my other comment for more clarification on how CEV would eliminate negative values.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-21T04:42:26.546Z · LW(p) · GW(p)

I read that. You say there, "Your stated example was ISIS. ISIS is so bad because they incorrectly believe... If they knew all the arguments for and against religion, then their values would be more like ours." As I said, I agree with you in that case. But you are indeed saying, "it is because I am right and when they know better they will know I was right." And that will not always be true, even if it is true in that case.

Replies from: Houshalter
comment by Houshalter · 2016-10-21T04:47:37.413Z · LW(p) · GW(p)

I never claimed I am right about everything. I don't need to be right about everything. I would love to have an AI show me what I am wrong about and show me the perfect set of values.

And most importantly, I'm saying that this process would result in the optimal set of values for everyone. Do you disagree?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-21T12:10:22.100Z · LW(p) · GW(p)

Yes, I disagree. I think that "babyeater values are different from human values" differs only in degree from "my values are different from your values." I do not think there is a reasonable chance that I will turn out to be wrong about this, just like there is no reasonable chance that if we measure our heights with sufficient accuracy, we will turn out to have different heights. This is still another reason why we should speak of "babyeater morality" and "human morality," namely because if morality is inconsistent with variety, then morality does not exist.

That said, I already said that I would not be willing to wipe out non-human values from the cosmos, and likewise I have no interest in imposing my personal values on everything else. I think these are really the same thing, and in that sense wanting to impose a CEV on the universe is being a "racist" in relation to human beings vs other intelligent beings.

Replies from: Houshalter
comment by Houshalter · 2016-10-25T06:37:30.538Z · LW(p) · GW(p)

People may have different values (although I think deep down we are very similar, humans sharing all the same brains and not having that much diversity.) Regardless, CEV should find the best possible compromise between our different values. That's literally the whole point.

If there is a difference in our values, the AI will find the compromise that satisfies us the most (or dissatisfies us the least.) There is no alternative, besides not compromising at all and just taking the values of a single random person. From behind the veil of ignorance, the first is definitely preferable.

I don't think this will be so bad. Because I don't think our values diverge so much, or that decent compromises are impossible between most values. I imagine that in the worst case, the compromise will be that two groups with different values will have to go their separate ways. Live on opposite sides of the world, never interact, and do their own thing. That's not so bad, and a post-singularity future will have more than enough resources to support it.

That said, I already said that I would not be willing to wipe out non-human values from the cosmos

No one is suggesting we wipe out non-human values. But we have yet to meet any intelligent aliens with different values. Once we do so, we may very well just apply CEV to them and get the best compromise of our values again. Or we may keep our own values, but still allow them to live separately and do their own thing, because we value their existence.

This reminds me a lot of the post value is fragile. It's ok to want a future that has different beings in it, that are totally different than humans. That doesn't violate my values at all. But I don't want a future that has beings die or suffer involuntarily. I don't think it's "value racist" to want to stop beings that do value that.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-25T13:19:33.199Z · LW(p) · GW(p)

"Once we do so, we may very well just apply CEV to them and get the best compromise of our values again. Or we may keep our own values, but still allow them to live separately and do their own thing, because we value their existence."

The problem I have with what you are saying is that these are two different things. And if they are two different things in the case of the aliens, they are two different things in the case of the humans.

The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person's fundamental values. Eliezer agrees this is true in the case of the aliens, but he does not seem to notice that it would also be true in the case of the humans.

In any case, I choose in advance to keep my own values, not to participate in changing my fundamental values. But I am also not going to impose those on anyone else. If you define CEV to mean "the best possible way to keep your values completely intact and still not impose them on anyone else," then I would agree with it, but only because we will be stipulating the desired conclusion.

That does not necessarily mean "living separately". Even now I live with people who, in every noticeable way, have values that are fundamentally different from mine. That does not mean that we have to live separately.

In regard to the last point, you are saying that you don't want to eliminate all potential aliens, but you want to eliminate ones with values that you really dislike. I think that is basically racist.

There is some truth in it, however, insofar as in reality, for reasons I have been saying, beings that have fundamental desires for others to suffer and die are very unlikely indeed, and any such desires are likely to be radically qualified. To that degree you are somewhat right: desires like that are in fact evil. But because they are evil, they cannot exist.

Replies from: Houshalter, TheAncientGeek
comment by Houshalter · 2016-10-26T22:26:34.174Z · LW(p) · GW(p)

The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person's fundamental values.

The world we live in is "immoral" in that it's not optimized towards anyone's values. Taking a single person's values would be "immoral" to everyone else. CEV, finding the best possible compromise of values, would be the least immoral option, on average. Optimize the world in a way that dissatisfies the least people the least amount.

That does not necessarily mean "living separately".

Right. I said that's the realistic worst case, when no compromise is possible. I think most people have similar enough values that this would be rare.

you want to eliminate ones with values that you really dislike. I think that is basically racist.

I don't necessarily want to kill them, but I would definitely stop them from hurting other beings. Imagine you came upon a race of aliens that practiced a very cruel form of slavery. Say 90% of their population was slaves, and the slave-owning class regularly tortured and overworked them. Would you stop them, if you could? Is that racist? What about the values of the slaves?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-27T01:59:56.290Z · LW(p) · GW(p)

I think optimizing anything is always immoral, exactly because it means imposing things that you should not be imposing. It is also the behavior of a fanatic, not a normal human being; that is the whole reason for the belief that AIs would destroy the world, namely because of the belief that they would behave like fanatics instead of like intelligent beings.

In the case of the slave owning race, I am quite sure that slavery is not consistent with their fundamental values, even if they are practicing it for a certain time. I don't admit that values are arbitrary, and consequently you cannot assume (at least without first proving me wrong about this) that any arbitrary value could be a fundamental value for something.

Replies from: Houshalter
comment by Houshalter · 2016-10-27T03:48:08.533Z · LW(p) · GW(p)

Well now I see we disagree at a much more fundamental level.

There is nothing inherently sinister about "optimization". Humans are optimizers in a sense, manipulating the world to be more like how we want it to be. We build sophisticated technology and industries that are many steps removed from our various end goals. We dam rivers, and build roads, and convert deserts into sprawling cities. We convert the resources of the world into the things we want. That's just what humans do, that's probably what most intelligent beings do.

The definition of FAI, to me, is something that continues that process, but improves it. Takes over from us, and continues to run the world for human ends. Makes our technologies better and our industries more efficient, and solves our various conflicts. The best FAI is one that constructs a utopia for humans.

I don't know why you believe a slave owning race is impossible. Humans of course practiced slavery in many different cultures. It's very easy for even humans to not care about the suffering of other groups. And even if you do believe most humans could be convinced it's wrong (I'm not so sure), there are actual sociopaths that don't experience empathy at all.

Humans also have plenty of sinister values, and I can easily believe aliens could exist that are far worse. Evolution tended to evolve humans that cooperate and have empathy. But under different conditions, we could have evolved completely differently. There is no law of the universe that says beings have to have values like us.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-27T13:16:24.986Z · LW(p) · GW(p)

"Well now I see we disagree at a much more fundamental level." Yes. I've been saying that since the beginning of this conversation.

If humans are optimizers, they must be optimizing for something. Now suppose someone comes to you and says, "do you agree to turn on this CEV machine?", when you respond, are you optimizing for the thing or not? If you say yes, and you are optimizing the original thing, then the CEV cannot (as far as you know) be compromising the thing you were optimizing for. If you say yes and are not optimizing for it, then you are not an optimizer. So you must agree with me on at least one point: either 1) you are not an optimizer, or 2) you should not agree with CEV if it compromises your personal values in any way. I maintain both of those, but you must maintain at least one of them.

In earlier posts I have explained why it is not possible that you are really an optimizer (not during this particular discussion.) People here tend to neglect the fact that an intelligent thing has a body. So e.g. Eliezer believes that an AI is an algorithm, and nothing else. But in fact an AI has a body just as much as we do. And those bodies have various tendencies, and they do not collectively add up to optimizing for anything, except in an abstract sense in which everything is an optimizer, like a rock is an optimizer, and so on.

"We convert the resources of the world into the things we want." To some extent, but not infinitely, in a fanatical way. Again, that is the whole worry about AI -- that it might do that fanatically. We don't.

I understand you think that some creatures could have fundamental values that are perverse from your point of view. This is because you, like Eliezer, think that values are intrinsically arbitrary. I don't, and I have said so from the beginning. It might be true that slave-owning values could be fundamental in some extraterrestrial race, but if they were, slavery in that race would be very, very different from slavery in the human race, and there would be no reason to oppose it in that race. In fact, you could say that slavery exists in a fundamental way in the human race, and there is no reason to oppose it: parents can tell their kids to stay out of the road, and they have to obey them, whether they want to or not. Note that this is very, very different from the kind of slavery you are concerned about, and there is no reason to oppose the real kind.

Replies from: Houshalter
comment by Houshalter · 2016-11-05T05:55:16.724Z · LW(p) · GW(p)

I can still think the CEV machine is better than whatever the alternative is (for instance, no AI at all.) But yes, in theory, you should prefer to make AIs that have your own values and not bother with CEV.

Having a body is irrelevant. Bodies are just one way to manipulate the world to optimize your goals.

"We convert the resources of the world into the things we want." To some extent, but not infinitely, in a fanatical way. Again, that is the whole worry about AI -- that it might do that fanatically. We don't.

What do you mean by "fanatically"? This is a pretty vague word. Humans would sure seem fanatical to other animals. We've cut down entire continent sized forests, drained massive lakes, and built billions of complex structures.

The only reason we haven't "optimized" the Earth further, is because of physical and economic limits. If we could we probably would.

Whether you call that "optimization" or not, is mostly irrelevant. If superintelligent AIs acted similarly, humans would be screwed.

I'm deeply concerned that you are theoretically ok with slave owning aliens. If the slaves are ok with it, then perhaps it could be justified. But if they strongly object to it, and suffer from it, and don't get any benefit from it, then it's just obviously wrong.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-11-05T16:06:42.842Z · LW(p) · GW(p)

"Having a body is irrelevant. Bodies are just one way to manipulate the world to optimize your goals."

This is not true. Bodies are physical objects that follow the laws of physics, and the laws of physics are not "just one way to manipulate the world to optimize your goals," because the laws have nothing to do with your goals. For example, we often don't keep doing something because we are tired, not because we have a goal of not continuing. AIs will be quite capable of doing the same thing, as for example if thinking too hard about something begins to weaken its circuits.

What I mean by fanatically is trying to optimize for a single goal as though it were the only thing that mattered. We do not do that, nor does anything else with a body, nor is it even possible, for the above reason.

Yes you should be concerned about what I said about slaves and aliens, as it suggests that the CEV machine might result in things that you consider utterly wicked. I said that from the beginning, when you claimed that it would eliminate all negative results, obviously intending that to mean from your subjective point of view.

comment by TheAncientGeek · 2016-10-26T11:57:35.566Z · LW(p) · GW(p)

The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person's fundamental values.

If they find it immoral in the sense of crossing a line that should never be crossed, then they are not going to play. I don't think the morals=values theory can tell you where the bright lines are, and that is why I think rules and a few other things are involved in ethics.

There is some truth in it, however, insofar as in reality, for reasons I have been saying, beings that have fundamental desires for others to suffer and die are very unlikely indeed, and any such desires are likely to be radically qualified. To that degree you are somewhat right: desires like that are in fact evil. But because they are evil, they cannot exist

Consider a harder case: a society that is ruthless in crushing any society that offers any rivalry or opposition to it, but otherwise leaves people alone. Since that is a survival-promoting strategy, you can't argue that it would just be selected out. But it doesn't seem as ethical as more conciliatory approaches.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-26T13:52:21.321Z · LW(p) · GW(p)

"It doesn't seem as ethical as more conciliatory approaches." I agree. That is because it is not the best strategy. It may not be the worst possible strategy, but it is not the best. And since the people engaging in that strategy, their ability to think about it, over time, will lead them to adopt better strategies, namely more conciliatory approaches.

I don't say that the good is achieved by selection alone. It is also achieved by the use of reason, by things that use reason.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-11-09T12:57:58.369Z · LW(p) · GW(p)

That is because it is not the best strategy.

Are you sure? On the face of it, doing things like attending peace negotiations exposes you to risks (they take the opportunity to assassinate you, they renege on the agreement, etc.) that simply nuking them doesn't.

It is also achieved by the use of reason, by things that use reason.

If people who reason well don't get selected, where does the prevalence of good come from?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-11-11T02:47:55.367Z · LW(p) · GW(p)

Yes I am sure. Of course negotiating has risks, but it doesn't automatically make permanent enemies, and it is better not to have permanent enemies.

People who reason well do get selected. I am just saying once they are selected they can start thinking about what is good as well.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-11-11T10:02:10.230Z · LW(p) · GW(p)

If the alternative to negotiation is completely exterminating your enemies, you don't have to worry about permanent enemies!

Replies from: entirelyuseless
comment by entirelyuseless · 2016-11-11T14:44:09.898Z · LW(p) · GW(p)

You can try to permanently exterminate them and fail. Additionally, even if you succeed in one case, you will ensure that no one else will be willing to negotiate with you even when it would be beneficial for you because they are stronger. So overall you will be decreasing your options, which makes your situation worse.

comment by woodchopper · 2016-10-26T10:50:51.661Z · LW(p) · GW(p)

Each human differs in their values. So it is impossible to build the machine of which you speak.

Replies from: Houshalter
comment by Houshalter · 2016-10-26T22:14:45.508Z · LW(p) · GW(p)

But humans share a lot of values (e.g. wanting to live and not be turned into a Dyson sphere). And a collection of individuals may still have a set of values (see e.g. coherent extrapolated volition).

comment by ChristianKl · 2016-10-10T14:02:25.781Z · LW(p) · GW(p)

Get employed by Google.

comment by ZankerH · 2016-10-10T15:56:07.333Z · LW(p) · GW(p)

Despair and dedicate your remaining lifespan to maximal hedonism.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2016-10-10T17:55:04.416Z · LW(p) · GW(p)

Google does not strike me as incompetent, and they do have ethics oversight for AI. Worry, yes; despair, no.

comment by Thomas · 2016-10-10T13:56:24.944Z · LW(p) · GW(p)

First, this is not very unlikely.

Second, be faster than them.

comment by entirelyuseless · 2016-10-26T14:24:21.248Z · LW(p) · GW(p)

I would wait to win my $1,000 from Eliezer.

Replies from: turchin
comment by turchin · 2016-10-26T19:15:51.116Z · LW(p) · GW(p)

What kind of bet do you have with him?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-10-27T02:02:19.637Z · LW(p) · GW(p)

It is on the bets registry. I am Unknown with a new username.

comment by username2 · 2016-10-11T19:20:11.483Z · LW(p) · GW(p)

Rejoice because the end is near.

Maybe buy Google stock?

comment by woodchopper · 2016-10-26T10:48:58.304Z · LW(p) · GW(p)

Raid Google and shut them down immediately. Start a Manhattan project of AI safety research.

Replies from: Lumifer
comment by Lumifer · 2016-10-26T14:55:02.455Z · LW(p) · GW(p)

I find your faith in the government's benevolence... disturbing.

comment by roland · 2016-10-10T12:20:15.514Z · LW(p) · GW(p)

Is the following a rationality failure? When I make a stupid mistake that caused some harm I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Replies from: pcm, Tem42, torekp
comment by pcm · 2016-10-10T16:44:24.751Z · LW(p) · GW(p)

I suspect attempted telekinesis is relevant.

comment by Tem42 · 2016-10-13T23:23:22.534Z · LW(p) · GW(p)

If it is severe enough that you are posting here about it making you feel bad, it is worth trying to replace it with a mental habit that works equally well to prevent future errors but feels better.

It is good to gain control over your mental habits in general, and this sounds like a good place to start.

If those statements appear true to you, no other analysis of this behavior is likely necessary.

comment by torekp · 2016-10-13T00:36:16.166Z · LW(p) · GW(p)

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. It was healthy in the former.

comment by skeptical_lurker · 2016-10-10T18:14:36.152Z · LW(p) · GW(p)

We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.

I read that France and Greece lost 3% and 5% of their millionaires, respectively, last year (or possibly the year before), citing economic depression and rising racial/religious tension, with the most popular destination being Australia (as it has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but if it were sustained for several years it would quickly pile up. The feedback effects are obvious - the wealthier members of society find it easier to leave and perhaps have more of a motive to escape an economic collapse, which decreases tax revenue, which worsens the collapse, etc. On the flip side, Australia attracts these people and its economy grows more, making it even more attractive...
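
To put a rough number on how quickly that piles up, a back-of-the-envelope sketch (the 4% annual rate below is just a stand-in between the quoted 3% and 5% figures):

```python
# A sustained 4% annual outflow compounds: after a decade roughly a third are gone.
remaining = 0.96 ** 10
print(f"{remaining:.2f} of the original millionaire population remains")  # ~0.66
```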

Socially, the same effect as described in EY's essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener' which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society attracting elites would be more able to handle or avoid conflicts.

Now, this is not automatically a bad thing, or at least it might be bad locally for some people, but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.

Replies from: Lumifer, Daniel_Burfoot, ChristianKl, woodchopper
comment by Lumifer · 2016-10-10T18:42:04.598Z · LW(p) · GW(p)

Brain drain has been a concern of some for a long time.

Replies from: Houshalter, skeptical_lurker
comment by Houshalter · 2016-10-10T20:17:50.020Z · LW(p) · GW(p)

And also competitive tax rates have been a popular subject in politics for a long long time. "If we tax millionaires/businesses, what stops them from just leaving to another country/state/city?"

comment by skeptical_lurker · 2016-10-15T13:35:03.118Z · LW(p) · GW(p)

Indeed, but I was wondering whether modern social and technological changes will accelerate this.

comment by Daniel_Burfoot · 2016-10-11T19:34:51.122Z · LW(p) · GW(p)

In my view, segregating the world by values would actually be really good. People who have very different belief systems should not try or be forced to live in the same country.

Replies from: Houshalter, WalterL
comment by Houshalter · 2016-10-12T07:28:29.786Z · LW(p) · GW(p)

But the problem is it's not just by values. It's also by wealth and intelligence and education. If you have half of the world that is really poor, and anyone that is intelligent or wealthy automatically leaves, then they will probably stay poor forever.

comment by WalterL · 2016-10-12T14:46:05.364Z · LW(p) · GW(p)

Yes, those with my values will live here, in Gondor. Your folks can live other there, in Mordor. Our citizens will no longer come into contact and conflict with one another, and peace will reign forever.

What, these segregated regions THEMSELVES come into conflict? Absurd. What would you even call a conflict that was between large groups of people? That could never happen. Everyone who shares my value system knows that lots of people would die, and we all agree that nothing could be worth that.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-10-12T18:49:03.493Z · LW(p) · GW(p)

Downvoted for making a flippant, argument-based-on-fiction response to serious comment.

Replies from: WalterL
comment by WalterL · 2016-10-12T19:53:10.492Z · LW(p) · GW(p)

Here's a more serious response.

  1. Segregating the world, period, based on whatever, is impossible without a coercive power that the existing nations of earth would consider illegal. Before you could forcefully migrate a large percentage of the world's humans you'd have to win a war with whatever portion of the UN stood against you.
  2. If you could do it, no one would admit to having any values other than those which got to live in/own the nicest places/stuff/be with their family / not be with their competitors/whatever. The technology to determine everyone's values does not exist.
  3. If you somehow derived everyone's values and split them by these, you would probably be condemning large segments of the population to misery (Lots of people's values are built around living around people who don't share them.), and there would be widespread resentment. The invincible force you used to overcome objection 1 would be tested within a generation.
Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-10-13T02:27:02.240Z · LW(p) · GW(p)

Okay, I obviously don't mean that we should value-segregate people at the point of a gun. I mean that if people naturally want to migrate towards geopolitical communities that better fit their particular value system, this is probably a good thing.

Replies from: WalterL
comment by WalterL · 2016-10-13T03:52:56.596Z · LW(p) · GW(p)

Yeah, I agree that people being able to travel freely and choose where they live is good.

comment by ChristianKl · 2016-10-10T21:46:15.871Z · LW(p) · GW(p)

And incidentally, how many people can you fit in Australia? I know its very big, but also has a lot of desert.

You can fit many people in California despite it being desert.

Replies from: username2
comment by username2 · 2016-10-11T19:18:35.467Z · LW(p) · GW(p)

*Southern California

comment by woodchopper · 2016-10-26T10:54:41.672Z · LW(p) · GW(p)

In Australia we currently produce enough food for 60 million people. This is without any intensive farming techniques at all. This could be scaled up by a factor of ten if it was really necessary, but quality of life per capita would suffer.

I think smaller nations are as a general rule governed much better, so I don't see any positives in increasing our population beyond the current 24 million people.

comment by turchin · 2016-10-10T12:46:06.019Z · LW(p) · GW(p)

There are five times as many members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9800) as in the group "Existential risks" (1880). What should we conclude from this?

Replies from: ChristianKl, Gunnar_Zarncke
comment by ChristianKl · 2016-10-10T12:53:15.257Z · LW(p) · GW(p)

Nothing. I don't think facebook membership counts are a good measurement.

Replies from: DanArmak
comment by DanArmak · 2016-10-10T14:54:19.378Z · LW(p) · GW(p)

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Replies from: ChristianKl
comment by ChristianKl · 2016-10-10T21:02:41.014Z · LW(p) · GW(p)

The success of a Facebook group depends a lot on how it gets promoted and whether there are a few people who care about creating content for it.

Replies from: DanArmak
comment by DanArmak · 2016-10-10T21:32:39.038Z · LW(p) · GW(p)

Is the 'success' of a group its number of members, regardless of actual activity?

Replies from: ChristianKl
comment by ChristianKl · 2016-10-10T21:43:51.545Z · LW(p) · GW(p)

I don't think I would need to define it that way for the above comment to be coherent.

Replies from: DanArmak
comment by DanArmak · 2016-10-10T21:56:24.645Z · LW(p) · GW(p)

Of course not. Then you meant simply the success of the goals of the group's creators?

Replies from: ChristianKl
comment by ChristianKl · 2016-10-10T22:13:27.397Z · LW(p) · GW(p)

I think my sentence is true with both definitions of success.

comment by Gunnar_Zarncke · 2016-10-10T20:54:37.713Z · LW(p) · GW(p)

Link: http://www.vhemt.org/

It's very likely much bigger than 9800. It is also very balanced and laid back in its views and methods. I'd think that contributes.

Replies from: MrMind
comment by MrMind · 2016-10-11T07:23:41.453Z · LW(p) · GW(p)

I looked into some of the most obvious objections. Some have reasonable answers (why not just kill yourself?), while others rest on an assumption that seems crazy to me: that the original, pre-human state of the biosphere is somehow more valuable than the collective experience of the human race.
I don't just disagree with that; I think it's a logic error, since values exist only in the minds of those who can compute them, whatever they are.

Replies from: WalterL
comment by WalterL · 2016-10-12T14:51:36.102Z · LW(p) · GW(p)

grumble grumble...

Look, I'm not pro-"Kill All Humans", but I don't think that last step is correct.

Bob can prefer that the human race die off and the earth spin uninhabited forever. It makes him evil, but there's no "logic error" in that, any more than there is in Al's preference that humanity spread out throughout the stars. They both envision future states and take actions that they believe will cause those states.

Replies from: MrMind
comment by MrMind · 2016-10-17T08:04:23.015Z · LW(p) · GW(p)

I think it's a logical error from the point of view of my theory of computational meta-ethics, not from a general, absolute point of view.
Indeed, by the VNM theorem, any course of action which is self-consistent can be said to have a guiding value.
But if you see values as something that is calculated inside an agent, as I do, and that exists only in the minds of those who execute that computation, then bringing about a state of the world that terminates your own existence is a fallacy: whatever value you are maximizing, you cannot maximize it without anyone who can compute it.
Note that this formulation would allow substituting all humans with computronium devoted to calculating that value, so it is still vulnerable to UFAI, but at least it rejects prima facie a simple extinction of all sentient life.

Replies from: WalterL
comment by WalterL · 2016-10-17T13:45:09.557Z · LW(p) · GW(p)

Ok, but that sounds like a problem with your theory, not someone else's logic error.

Like, when you call something a "logic error", my first instinct is to check its logic. Then when you clarify that what you mean is that it didn't meet with your classification system's approval, I feel like you are baiting and switching. Maybe go with "sin", or "perversion", to make clear that your meaning is just "Mr. Mind doesn't like this".

comment by dhoe · 2016-10-10T12:27:50.715Z · LW(p) · GW(p)

My partner has requested that I learn to give a good massage. I don't enjoy massages myself, and the online resources I find seem to be mostly steeped in woo to some degree. Does anybody have some good non-woo resources for learning it?

Replies from: ChristianKl
comment by ChristianKl · 2016-10-10T13:05:28.862Z · LW(p) · GW(p)

The standard way to learn massage is by taking a course.

I would also recommend Betty Martin's 3-Minute Game as a secular, massage-like practice: https://www.youtube.com/watch?v=auokDp_EA80

comment by [deleted] · 2016-10-16T21:21:18.972Z · LW(p) · GW(p)

A recommendation from personal experience (n=1 or 2): translating (or proofreading) articles for a journal specializing in a field close (but not very close) to your own gives you a more-or-less regular opportunity to read reviews of literature which you wouldn't have thought to survey on your own.

I find it cool. One day I'm just browsing the net, looking at whatever I look at; the next day, bacteria developing on industrial waste come knocking. And the advantage of reading the text in my native tongue is that tiny decrease in the cognitive effort necessary to process the information (more than made up for by the effort of translation, but hey, practice).

comment by Crux · 2016-10-11T07:23:52.892Z · LW(p) · GW(p)

Many people who delve into the deep parts of analytical philosophy will end up feeling at times like they can't justify anything, that definite knowledge is impossible to ascertain, and so forth. It's a classic trend. Hume is famous for being a "skeptic", although almost everyone seems to misunderstand what that means within the context of his philosophical system.

See here for a post I wrote which I could have called The Final Antidote to Skepticism.

Replies from: TheAncientGeek, ChristianKl
comment by TheAncientGeek · 2016-10-23T10:05:28.631Z · LW(p) · GW(p)

Final, eh?

Your argument seems to summarise to "knowledge is possible because automatic knowledge is possible". That works if knowledge is just one thing, but the sceptic has the ready reply that they are concerned about particular levels and types of knowledge, for instance certain knowledge and knowledge of ontological fundamentals. LessWrong rationalism has basically conceded the point about certainty to the sceptic. And the appeal to automatic knowledge is essentially an appeal to know-how, and therefore no answer to scepticism about fundamental ontological knowledge.

It might be possible to argue that know-how subsumes all other forms of knowledge, but you haven't. If it is the case that "The goal of human action is to achieve states of affairs which are satisfying", then it is still possible that what I find satisfying is deep theoretical knowledge, not know-how.

Deep theoretical knowledge is hedgehoggy, deep knowledge about a few things, not foxy, shallow knowledge about many things. For obvious reasons, manual reasoning cannot provide exhaustive knowledge of every apparent entity, but that is not how philosophical scepticism is argued.

Replies from: Crux
comment by Crux · 2016-10-23T12:59:55.755Z · LW(p) · GW(p)

I described what it feels like from the inside to run into philosophical skepticism. It's simply where your ability to engage in manual reasoning hits its limit, but you press onward and overheat your brain. The final antidote to this issue is simply to realize exactly what happened.

The feeling of philosophical skepticism is a psychological side effect of a certain kind of intellectual adventure. I've been there many times in the past. The antidote is to realize that we as humans are designed such that we have a limit to how much manual reasoning we can do and how deep we can go in a given timeframe, where the limit descends upon us quickly enough that we must spend most of our day-to-day life thinking in an automatic way.

The ready reply you mentioned doesn't address my argument. I'm absolutely not suggesting that the person throw out their desire to produce knowledge and understanding through manual thinking. I'm simply explaining exactly what's going on so the person can re-frame the situation. Philosophical skepticism isn't a statement about the world; it's a mental feeling. For most people, encountering that feeling causes them to make grandiose claims about reality. My suggestion should bring them back down to Earth: "You've figured out a lot, but you're at your limit. Take a break."

Have you experienced this psychological effect? If not, then you may simply be repeating the words that people who have ended up with the feeling of philosophical skepticism have used, in which case it may be harder to challenge my arguments in an effective way, since I'm pushing aside the claims about reality they're making as a result of experiencing this side effect, and instead describing exactly what this side effect is.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-10-26T15:03:44.636Z · LW(p) · GW(p)

I described what it feels like from the inside to run into philosophical skepticism

That was the content. The title promised a final solution to philosophical scepticism. Scepticism is a set of problems about the possibility and limitations of knowledge. The title doesn't match the content.

Philosophical skepticism isn't a statement about the world; it's a mental feeling

It isn't either of those. Scepticism is a set of problems about the possibility and limitations of knowledge.

I'm pushing aside the claims about reality they're making as a result of experiencing this side effect

Pushing aside isn't solving; it's dissolving at best. You can't get to "everything is knowable" from "sometimes brains get overheated".

Replies from: Crux
comment by Crux · 2016-10-26T17:43:01.570Z · LW(p) · GW(p)

You're not putting in very much effort to have a deep discussion.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-10-31T21:25:39.710Z · LW(p) · GW(p)

Are you announcing a final solution or do you want a continuing conversation?

Replies from: Crux
comment by Crux · 2016-11-01T10:04:46.114Z · LW(p) · GW(p)

I'm open to a continuing conversation. Your post just gave me the impression that you weren't trying to read my writing in a careful manner. To be honest, the number of punctuation oddities and unusual phrasings in your post made me believe you simply didn't care about the discussion. This is a rather deep and technical topic, so it doesn't seem worth my time to interact with someone who isn't invested.

Worse yet, you didn't even respond to this question of mine:

Have you experienced this psychological effect?

This was a key question, because if your response is "no" or it turns out that you don't know what it means to experience philosophical skepticism in the tradition of e.g. David Hume in the conclusion of Book I of A Treatise of Human Nature, then we're going to have to delve into the nature of that psychological effect from a much more fundamental point of view.

Pushing aside isn't solving, it's dissolving at best.

Participating on Less Wrong suggests that you should know that dissolving in many cases is solving.

Replies from: TheAncientGeek, TheAncientGeek
comment by TheAncientGeek · 2016-11-02T13:02:47.603Z · LW(p) · GW(p)

I think you have made a fundamental error about what philosophical scepticism is in the first place, so I am not motivated to drill down any further than it takes to point that out. It's also pretty unpromising for a discussion to start off with a claim to have a final solution.

Worse yet, you didn't even respond to this question of mine:

Are you aware that you have failed to answer at least half the questions I posed to you?

Have you experienced this psychological effect?

I don't agree that "philosophical scepticism" refers to a feeling, and nothing else, in the first place. You need to take a step back.

comment by TheAncientGeek · 2016-11-03T16:32:15.550Z · LW(p) · GW(p)

Participating on Less Wrong suggests that you should know that dissolving in many cases is solving.

Participating in many other things has shown me that LessWrong is quite confused about solution and dissolution. Solving a problem implies it ever existed; dissolving a problem is generally showing it never existed, so the two are not compatible.

comment by ChristianKl · 2016-10-11T13:17:28.615Z · LW(p) · GW(p)

What makes you think that people can pattern-match sociopathy by looking at someone's face? Sociopathy usually doesn't lead to low charisma, or to people getting the sense that they shouldn't interact with the person.

Replies from: Crux
comment by Crux · 2016-10-11T16:11:25.217Z · LW(p) · GW(p)

In certain cases people can pattern-match sociopathy by looking at someone's face. I didn't mean to suggest the average person can do it on a consistent basis.

Replies from: niceguyanon
comment by niceguyanon · 2016-10-11T17:08:28.327Z · LW(p) · GW(p)

In certain cases people can pattern-match sociopathy by looking at someone's face.

Do you have any links, because this is interesting if true. Kinda like human lie detectors. But I am skeptical, because how would such a thing arise?

Why would sociopaths have distinguishing facial markers and what are they?

Replies from: waveman
comment by waveman · 2016-10-11T23:15:13.960Z · LW(p) · GW(p)

Book "Without Conscience" by Robert Hare who is a real psychologist has simple tips on recognizing them. Not purely by photographic appearance but it is not too hard. Example with eye contact they tend to stare too long.

comment by MrMind · 2016-10-11T13:06:33.257Z · LW(p) · GW(p)

Is there a good rebuttal to the claim that we should donate 100% of our income to charity? I mean, as explanations, tribality and near/far thinking are OK, but is there a good post-hoc justification?

Replies from: gjm, turchin, WalterL, ahbwramc, username2, siIver
comment by gjm · 2016-10-11T15:10:30.140Z · LW(p) · GW(p)

100%? Well, your future charitable donations will be markedly curtailed after you starve to death.

comment by turchin · 2016-10-11T14:03:47.710Z · LW(p) · GW(p)

Some possible arguments against donating everything to charity. Personally, I think it is normal to donate around 1 percent of income to charity.

  1. Some can't survive on less, or have other obligations that look like charity (child support).
  2. We would have less incentive to earn more.
  3. It would hurt our economy, as it is consumer-driven. We must buy iPhones.
  4. I do many useful things which are intended to help other people, but I need pleasures to recharge my commitment, so I spend money on myself.
  5. I pay taxes, and that is like charity.
  6. I know better how to spend money on my own needs.
  7. Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
  8. If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will be jobless and will need charity money to survive. So the more I give to charity, the more people need it.
  9. If you overdonate, you could flip-flop and start to hate the thing, especially if you find that your money was not spent effectively.
  10. Donating 100 percent will make you look crazy in the eyes of some, and their will to donate will diminish.
  11. If you spend more on yourself, you could ask for a higher salary and as a result earn more and donate more. Only a homeless and jobless person could donate 100 percent.
comment by WalterL · 2016-10-12T14:53:52.503Z · LW(p) · GW(p)

"Don't wanna", shading into "Make Me" if they press. Anyone trying to tell you what to do isn't your Real Dad! (Unless they are, in which case maybe try and figure out what's going on.)

comment by ahbwramc · 2016-10-12T02:42:06.076Z · LW(p) · GW(p)

I mean, Laffer Curve-type reasons if nothing else.

comment by username2 · 2016-10-11T19:29:31.196Z · LW(p) · GW(p)

A mother that followed that logic would push her own baby in front of a trolley to save five random strangers. Ask yourself if that is the moral framework you really want to follow.

Replies from: Lumifer
comment by Lumifer · 2016-10-11T19:47:10.657Z · LW(p) · GW(p)

Hey, look here, you totally should. All that emotional empathy just gets in the way.

Replies from: username2
comment by username2 · 2016-10-11T20:48:50.027Z · LW(p) · GW(p)

Read it already. Let's be clear: you think the mother should push her baby in front of a trolley to save five random strangers? If so, why? If not, why not? I don't consider this a loaded question -- it falls directly out of the utilitarian calculus and assumed values that lead to "donate 100% to charities."

[Let's assume the strangers are also same-age babies, so there's no weasel ways out ("baby has more life ahead of it", etc.)]

Replies from: Gurkenglas, SithLord13, Lumifer
comment by Gurkenglas · 2016-10-22T22:55:31.600Z · LW(p) · GW(p)

Devil's advocate: Humanity is in a Malthusian trap where those mothers who prefer their child to five strangers are more able to pass on their genes, so that's the sort of behavior that ends up universal. That mechanism of course produced all our preferences, but without treating it as sacred we are at least in a situation where mothers everywhere can have a debate on preserving our preferences versus saving more people, and Policy Debates Should Not Appear One-Sided.

Replies from: username2
comment by username2 · 2016-10-22T23:12:06.528Z · LW(p) · GW(p)

Ok, sure. But this does not answer the question of why we should change morals.

Replies from: Gurkenglas
comment by Gurkenglas · 2016-10-25T02:00:49.021Z · LW(p) · GW(p)

So that our morals become invariant under change of context, in this case, which person's mother you happen to be.

Replies from: username2
comment by username2 · 2016-10-25T02:41:59.821Z · LW(p) · GW(p)

... and why should that matter at all?

It seems we've now reduced to a value that is both abstract and arbitrary.

comment by SithLord13 · 2016-10-13T15:37:16.955Z · LW(p) · GW(p)

There are a lot of conflicting aspects to consider here outside of a vacuum. Discounting the unknown unknowns, which could factor heavily here since it's an emotionally biasing topic, you've got the fact that the baby is going to be raised by a presumably attentive mother, as opposed to the five who wound up in that situation once, showing at least some increased risk of falling victim to such a situation again. Then you have the psychological damage to the mother, which is going to be even greater because she had to do the act herself. Then you've got the fact that a child raised by a mother who is willing to do it has a greater chance of being raised in such a way as to have a net positive impact on society. Then you have the greater potential for preventing the situation in the future, caused by the increased visibility of the higher death toll. I'm certain there are more aspects I'm failing to note.

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

Replies from: username2
comment by username2 · 2016-10-13T23:42:29.675Z · LW(p) · GW(p)

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

Never, in my opinion. Put every other human being on the tracks (excluding other close family members to keep this from being a Sophie's choice "would you rather..." game). The mother should still act to protect her child. I'm not joking.

You can rationalize this post facto by weighing the value of a world where mothers are ready to sacrifice their kids, and indeed are encouraged to in order to save another life, against the world where mothers simply always protect their kids no matter what.

But I don't think this is necessary -- you don't need to validate it on utilitarian grounds. Rather, it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, to his own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not having an internal conflict between his own morals and supposedly rational utilitarian thinking, he wouldn't have asked the question...

Replies from: philh, MrMind
comment by philh · 2016-10-14T16:33:07.180Z · LW(p) · GW(p)

I think it's okay for one person to value some lives more than others, but not that much more. ("Okay" - not ideal in theory, maybe a good thing given other facts about reality, I wouldn't want to tear it down for multiple reasons.)

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

Replies from: username2
comment by username2 · 2016-10-14T21:58:31.896Z · LW(p) · GW(p)

We've now delved beyond the topic -- which is okay, I'm just pointing that out.

I think it's okay for one person to value some lives more than others, but not that much more.

I'm not quite sure what you mean by that. I'm a duster, not a torturer, which means that there are some actions I just won't do, no matter how many utilons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolley scenario. Is that what you mean?

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

If I can draw a political analogy, which may even be more than an analogy: moral decision making via utilitarian calculus with equal weights assumed for every (sentient, human) life is analogous to the central planning of communism: from each what they can provide, to each what they need. Maximize happiness. With perfectly rational decision making and everyone sharing common goals, this should work. But of course in reality we end up with, at best, inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people don't on the whole work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for the self-serving purposes of those on top.

Recognizing and embracing the fact that people have conflicting moral values (even if restricted only to the weights they place on others' happiness) is akin to the enlightened self-interest of capitalism. People are given self-agency to seek personal benefits for themselves and those they care about, and societal prosperity follows. Of course, in reality all non-libertarians know that there are a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is quite easy to show mathematically and historically that restricting yourself to multi-agent games with Pareto-optimal outcomes (capitalism with good incentives) prevents you from being able to craft all possible outcomes. Central planning got us to the Moon. Not-profit-maximizing thinking is getting SpaceX to Mars. It's more profitable to mitigate the symptoms of AIDS with daily antiviral drugs than to cure the disease outright. Etc. But nevertheless it is generally capitalist societies that experience the most prosperity, as measured by quality of life, technological innovation, material wealth, or happiness surveys.

To finally circle back to your question, I'm not saying that it is right or wrong that the mother cares for her child to the exclusion of literally everyone else. Or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I'm saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this, we can still live in a harmonious and generally good society even though our neighbors don't exactly share our values (I value my kids, they value theirs).

I've previously been exposed to the writings and artwork of peasants who lived through the harshest years of Chairman Mao's Great Leap Forward, and it is remarkable how similar their thoughts, concerns, fears, and introspections can be to those of people who struggle with LW-style "shut up and multiply" utilitarianism. For example, I spoke with someone at a CFAR workshop who has had real psychological issues for a decade over the internal conflict between the selfless "save the world" work he feels he SHOULD be doing, or doing more of, and the basic fulfillment of Maslow's hierarchy that leaves him feeling guilty and thinking he's a bad person.

My own opinion and advice? Work your way up Maslow's hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide the efforts of our society and create our environmentally-driven value functions in the first place.

Replies from: philh, SithLord13
comment by philh · 2016-10-18T12:07:27.312Z · LW(p) · GW(p)

I think I basically agree with the "embrace existing moral intuitions" bit.

Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that's not terrible; our values are mostly aligned; I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much; I expect optimisation power applied to their values has a good chance of harming my own.

comment by SithLord13 · 2016-10-15T02:50:53.098Z · LW(p) · GW(p)

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Can you expand on this a bit? (Full disclosure I'm still relatively new to Less Wrong, and still learning quite a bit that I think most people here have a firm grip on.) I would think they illuminate a great deal about our underlying moral values, if we assume they're honest answers and that people are actually bound by their morals (or are at least answering as though they are, which I believe to be implicit in the question).

For example, I'm also a duster, and that "would you rather" taught me a great deal about my morality. (Although, to be fair, what it taught me is certainly not what was intended: namely, that my moral system is not strictly multiplicative, but is logarithmic or exponential or some such function under which a non-zero harm that is sufficiently small can't be significantly increased simply by having it apply to an arbitrarily large number of people.)
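
A minimal sketch of what I mean, with made-up numbers (DUST, TORTURE, N, and the particular aggregation rules below are purely my own illustrative assumptions, not anything from the original dust-specks post):

    import math

    # Made-up per-person disutilities and headcount, purely for illustration.
    DUST = 1e-9      # one dust speck in one eye
    TORTURE = 0.9    # one person tortured
    N = 10 ** 30     # a stand-in for 3^^^3: any astronomically large count

    def linear(per_person, n):
        """'Shut up and multiply': total harm grows without bound in n."""
        return per_person * n

    def logarithmic(per_person, n):
        """Each extra victim adds less and less; the total grows only like log(n)."""
        return per_person * math.log1p(n)

    def saturating(per_person, n, n0=1e6):
        """Total harm approaches a ceiling of per_person * n0, however large n gets."""
        return per_person * n0 * (1 - math.exp(-n / n0))

    print(linear(DUST, N) > TORTURE)       # True: linear aggregation picks the dust specks
    print(logarithmic(DUST, N) > TORTURE)  # False: about 7e-8, it never catches up
    print(saturating(DUST, N) > TORTURE)   # False: capped near 1e-3

Under either of the last two rules, no number of dust specks ever outweighs the single torture, which is the sense in which my system stops being strictly multiplicative.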

Replies from: username2
comment by username2 · 2016-10-17T17:02:08.598Z · LW(p) · GW(p)

This is deserving of a much longer answer, which I have not had the time to write and probably won't any time soon, I'm sorry to say. But in short summary, human drives and morals are more behaviorist than utilitarian. The utility function approximation is just that, an approximation.

Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side effect of the materials used. Now, in some cases things are intended to fail early for safety reasons, e.g. to have the shovel break before your bones do. But even then this isn't some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is more a secondary consideration to prevent undesirable side effects in some failure modes.

Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose or utility function of the shovel? Pedantically, a little bit, if you accept the breaking point as a designed-in safety consideration. But it doesn't enlighten us about its hole-digging nature at all.

Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is that I'd rather have dust in no one's eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn't tell us much about our underlying desires, because there isn't some consistent mathematical utility function underlying our responses. At best it just reveals how we've been wired by genetics, upbringing, and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.

comment by MrMind · 2016-10-14T08:24:02.352Z · LW(p) · GW(p)

Ah, as it happens, I have none of those conflicts. I asked because I'm preparing an article on utilitarianism, and I hit upon the question I posted as a good proxy for the hard problems of adopting it as a moral theory.
But I can understand that someone who believes this might have a lot of internal struggles.

Full disclosure: I'm a Duster, not a Torturer. But I'm trying to steelman Torture.

Replies from: username2
comment by username2 · 2016-10-14T18:07:12.715Z · LW(p) · GW(p)

Ah, then I look forward to reading your article :)

comment by Lumifer · 2016-10-11T20:54:08.275Z · LW(p) · GW(p)

...and did you read my comments in the thread?

Replies from: username2
comment by username2 · 2016-10-11T21:15:52.625Z · LW(p) · GW(p)

Ah I did (at the time), but forgot it was you that made those comments. So I should direct my question to Jacobian, not you.

In any case I'm certainly not a "save the world" type of person, and find myself thoroughly confused by those who profess to be and enter into self-destructive behavior as a result.

comment by siIver · 2016-10-11T15:50:10.839Z · LW(p) · GW(p)

100% doesn't work because then you starve. If I re-formulate your question to "is there any rebuttal to why we don't donate way more to charity than we currently do" then the answer depends on your belief system. If you are a utilitarian, the answer is a definite no. You should spend way more on charity.

Replies from: username2, Good_Burning_Plastic
comment by username2 · 2016-10-11T19:28:09.086Z · LW(p) · GW(p)

Nonsense. I believe my life and the lives of people close to me are more important than someone starving in a place whose name I can't pronounce. I just don't assign the same weight to all people. That is perfectly consistent with utilitarianism.

Replies from: siIver
comment by siIver · 2016-10-11T19:40:04.030Z · LW(p) · GW(p)

Er... no. Utilitarianism prohibits that exact thing by design. That's one of its most important aspects.

Read the definition. This is unambiguous.

Replies from: username2
comment by username2 · 2016-10-11T20:45:03.813Z · LW(p) · GW(p)

"Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility." -Wikipedia

The very next sentence starts with "Utility is defined in various ways..." It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as "the greatest good for the greatest number", but the catch is in the word "good", which is left undefined. This is as opposed to, say, virtue ethics, which doesn't care per se about the consequences of actions.
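
A minimal sketch of the kind of unequally weighted utility function I mean (the names and weights here are hypothetical numbers I made up, not taken from Mill or anyone else):

    def weighted_utility(welfare, weights):
        """A utilitarian aggregate that is allowed to weight individuals unequally."""
        return sum(weights[person] * w for person, w in welfare.items())

    # Hypothetical weights: a mother who counts her own child far above strangers.
    weights = {"my_child": 1000.0, "stranger_1": 1.0, "stranger_2": 1.0,
               "stranger_3": 1.0, "stranger_4": 1.0, "stranger_5": 1.0}

    save_child = {"my_child": 1.0, "stranger_1": 0.0, "stranger_2": 0.0,
                  "stranger_3": 0.0, "stranger_4": 0.0, "stranger_5": 0.0}
    save_strangers = {"my_child": 0.0, "stranger_1": 1.0, "stranger_2": 1.0,
                      "stranger_3": 1.0, "stranger_4": 1.0, "stranger_5": 1.0}

    # "The best moral action is the one that maximizes utility" still applies;
    # the unequal weights do the rest of the work.
    print(weighted_utility(save_child, weights) > weighted_utility(save_strangers, weights))  # True

With those weights, maximizing utility still tells the mother to save her own child, which is all the consistency I'm claiming.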

comment by Good_Burning_Plastic · 2016-10-12T13:09:01.985Z · LW(p) · GW(p)

If I re-formulate your question to "is there any rebuttal to why we don't donate way more to charity than we currently do" then the answer depends on your belief system.

(And also on how much money you currently donate to charity.)

comment by morganism · 2016-10-16T18:40:23.994Z · LW(p) · GW(p)

Learning difficulties linked to winter conception

The article points out that the study was done in Scotland, and that the effect may be linked to vitamin D uptake.

http://questioning-answers.blogspot.com/2016/10/learning-difficulties-linked-with-winter-conception.html

the paper by Daniel Mackay and colleagues [1]

comment by Ilverin the Stupid and Offensive (Ilverin) · 2016-10-11T18:13:45.345Z · LW(p) · GW(p)

Is there any product like an adult pacifier that is socially acceptable to use?

I am struggling with the self-control not to interrupt people, and I am afraid for my job.

EDIT: In the meantime (or long-term, if it works) I'll use less caffeine (currently 400mg daily) to see if that helps.

Replies from: SithLord13, Lumifer, MrMind
comment by SithLord13 · 2016-10-11T18:50:06.157Z · LW(p) · GW(p)

Could chewing gum serve as a suitable replacement for you?

comment by Lumifer · 2016-10-11T18:58:47.974Z · LW(p) · GW(p)

It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but that's hard to pull off nowadays unless you're old or a full-blown hipster.

comment by MrMind · 2016-10-12T07:20:20.905Z · LW(p) · GW(p)

How about a lollipop? It's almost the same thing, and since Inspector Kojak it's become much more socially acceptable, even cool, if you pull it off well.
If you are a woman, though, you'll likely suffer some sexual objectification (what a surprise!).