Posts

Notes on the Safety in Artificial Intelligence conference 2016-07-01T00:36:57.309Z

Comments

Comment by UmamiSalami on What Are The Chances of Actually Achieving FAI? · 2017-08-01T01:42:42.048Z · LW · GW

1%? Shouldn't your basic uncertainty over models and paradigms be great enough to increase that substantially?

Comment by UmamiSalami on What Are The Chances of Actually Achieving FAI? · 2017-08-01T01:41:30.250Z · LW · GW

I think it's about a 0.75 probability, conditional upon smarter-than-human AI being developed. Guess I'm kind of an optimist. TL;DR: I don't think it will be very difficult to impart your intentions to a sufficiently advanced machine.

Comment by UmamiSalami on Effective altruism is self-recommending · 2017-04-28T13:31:34.774Z · LW · GW

I haven't seen any parts of GiveWell's analyses that involve looking for the right buzzwords. Of course, it's possible that certain buzzwords subconsciously manipulate people at GiveWell in certain ways, but the same can be said for any group, because every group has some sort of values.

Comment by UmamiSalami on Effective altruism is self-recommending · 2017-04-22T05:08:27.632Z · LW · GW

Why do you expect that to be true?

Because they generally emphasize these values and practices when others don't, and because they are part of a common tribe.

How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?

Somewhat weakly, but not extremely weakly. Obviously there is no single clear criterion; it's just about people's philosophical values and individual commitment. At most, I think that being a solid EA is about as important as having a couple of additional years of relevant experience or schooling.

I do think that if you had a research-focused organization where everyone was an EA, it would be better to hire outsiders at the margin, because of the problems associated with homogeneity. (This wouldn't be the case for community-focused organizations.) I guess it just depends on where they are right now, which I'm not too sure about. If you're only going to have one person doing the work, e.g. with an EA fund, then it's better for it to be done by an EA.

Comment by UmamiSalami on Effective altruism is self-recommending · 2017-04-22T04:53:24.754Z · LW · GW

I bet that most of the people who donated to GiveWell's top charities were, for all intents and purposes, assuming their effectiveness in the first place. From the donor end, there were assumptions being made either way (and there must be; it's impractical to do every kind of evaluation on one's own).

Comment by UmamiSalami on Effective altruism is self-recommending · 2017-04-22T04:40:19.526Z · LW · GW

I think EA is something very distinct in itself. I do think that, ceteris paribus, it would be better to have a fund run by an EA than a fund not run by an EA. Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other's points of view, and maintain civil and constructive dialogue than I do for other people. And secondly, EA simply has the right values. It's a good culture to spread, which involves more individual responsibility and more philosophical clarity. Right now it's embryonic enough that everything is tied closely together. I tentatively agree that that is not desirable. But ideally, growth of thoroughly EA institutions should lead to specialization and independence. This will lead to a much more interesting ecosystem than if the intellectual work is largely outsourced.

Comment by UmamiSalami on Effective altruism is self-recommending · 2017-04-22T04:25:52.579Z · LW · GW

It seems to me that GiveWell has already acknowledged perfectly well that VillageReach is not a top effective charity. It also seems to me that there are lots of reasons one might take GiveWell's recommendations seriously, and that getting "particularly horrified" about their decision not to research exactly how much impact their wrong choice didn't have is a rather poor way to conduct any sort of inquiry into the accuracy of organizations' decisions.

Comment by UmamiSalami on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-19T04:30:34.722Z · LW · GW

In fact, it seems to me that the less intelligent an organism is, the easier its behavior can be approximated with a model that has a utility function!

Only because those organisms have fewer behaviors in general. If you put humans in an environment where their options and sensory inputs were as simple as those experienced by apes and cats, they would probably look like equally simple utility maximizers.
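To make that concrete, here's a toy sketch (mine, purely illustrative, not anything from the thread): with a small enough behavioral repertoire, any acyclic pattern of pairwise choices can be rationalized by some utility function, so "looks like a utility maximizer" is cheap to satisfy.

```python
from itertools import permutations

# Hypothetical observed pairwise choices of a simple agent: (chosen, rejected).
observed = [("food", "toy"), ("toy", "sleep"), ("food", "sleep")]

def rationalizing_utility(choices):
    """Search for a utility assignment consistent with every observed choice."""
    options = sorted({o for pair in choices for o in pair})
    for order in permutations(options):
        utility = {opt: rank for rank, opt in enumerate(order)}
        if all(utility[a] > utility[b] for a, b in choices):
            return utility
    return None  # choices form a cycle: no utility function fits

print(rationalizing_utility(observed))
# {'sleep': 0, 'toy': 1, 'food': 2} -- three behaviors, trivially "maximized"
```

With only a handful of options, some utility function almost always fits; the fit says more about the small repertoire than about the organism.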

Comment by UmamiSalami on Evaluating Moral Theories · 2017-01-26T05:30:30.196Z · LW · GW

Kantian ethics: do not violate the categorical imperative. It's derived logically from the status of humans as rational autonomous moral agents. It leads to a society where people's rights and interests are respected.

Utilitarianism: maximize utility. It's derived logically from the goodness of pleasure and the badness of pain. It leads to a society where people suffer little and are very happy.

Virtue ethics: be a virtuous person. It's derived logically from the nature of the human being. It leads to a society where people act in accordance with moral ideals.

Etc.

Comment by UmamiSalami on If I must eat meat, I eat pork · 2017-01-09T23:58:58.944Z · LW · GW

pigs strike a balance between the lower suffering, higher ecological impact of beef and the higher suffering, lower ecological impact of chicken.

This was my thinking for coming to the same conclusion. But I am not confident in it. Just because something minimaxes between two criteria doesn't mean that it minimizes overall expected harm.

Comment by UmamiSalami on Is Global Reinforcement Learning (RL) a Fantasy? · 2016-11-02T01:08:36.881Z · LW · GW

All of the architectures assumed by people who promote these scenarios have a core set of fundamental weaknesses (spelled out in my 2014 AAAI Spring Symposium paper).

The idea of superintelligence at stake isn't "good at inferring what people want and then decides to do what people want," it's "competent at changing the environment". And if you program an explicit definition of 'happiness' into a machine, its definition of what it wants - human happiness - is not going to change no matter how competent it becomes. And there is no reason to expect that increases in competency lead to changes in values. Sure, it might be pretty easy to teach it the difference between actual human happiness and smiley faces, but it's a simplified example to demonstrate a broader point. You can rephrase it as "fulfill the intentions of programmers", but then you just kick things back a level with what you mean by "intentions", another concept which can be hacked, and so on.

Your argument for "swarm relaxation intelligence" is strange, as there is only one example of intelligence evolving to approximate the format you describe (not seven billion - human brains are conditionally dependent, obviously), and it's not even clear that human intelligence isn't equally well described as goal-directed agency which optimizes for a premodern environment. The arguments in Basic AI Drives and other places don't say anything about how AI will be engineered, so they don't say anything about whether it will be driven by logic, just about how it will behave; and all sorts of agents behave in generally logical ways without having explicit functions to do so. You can optimize without having any particular arrangement of machinery (humans do as well).

Anyway, in the future when making claims like this, it would be helpful to make it clear early on that you're not really responding to the arguments that AI safety research relies upon - you're responding to an alleged set of responses to the particular responses that you have given to AI safety research.

That is why I said what I said. We discussed it at the 2014 Symposium. If I recall correctly, Steve used that strategy (although to be fair I do not know how long he stuck it out). I know for sure that Daniel Dewey used the Resort-to-RL maneuver, because that was the last thing he was saying as I had to leave the meeting.

So you had two conversations. I suppose I'm just not convinced that there is an issue here: I think most people would probably reject the claims in your paper in the first place, rather than accepting them and trying a different route.

Comment by UmamiSalami on Is Global Reinforcement Learning (RL) a Fantasy? · 2016-11-01T17:43:25.683Z · LW · GW

I came here to write exactly what gjm said, and your response is only to repeat the assertion "Scenarios in which the AI Danger comes from an AGI that is assumed to be an RL system are so ubiquitous that it is almost impossible to find a scenario that does not, when push comes to shove, make that assumption."

What? What about all the scenarios in IEM or Superintelligence? Omohundro's paper on instrumental drives? I can't think of anything which even mentions RL, and I can't see how any of it relies upon such an assumption.

So you're alleging that deep down people are implicitly assuming RL even though they don't say it, but I don't see why they would need to do this for their claims to work nor have I seen any examples of it.

Comment by UmamiSalami on [deleted post] 2016-10-24T01:01:18.903Z

In Bostrom's dissertation he says it's not clear whether the number of observers or the number of observer-moments is the appropriate reference class for anthropic reasoning.

I don't see how you are jumping to the fourth disjunct though. Like, maybe they run lots of simulations which are very short? But surely they would run enough to outweigh humanity's real history whichever way you measure it. Assuming they have posthuman levels of computational power.

Comment by UmamiSalami on Reasonable Requirements of any Moral Theory · 2016-10-17T23:54:24.868Z · LW · GW

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".

When people talk about moral theories they refer to systems which describe the way that one ought to act or the type of person that one ought to be. Sure, some moral theories can be called "a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals," but I don't see how that changes anything about the definition of a moral theory.

Comment by UmamiSalami on Reasonable Requirements of any Moral Theory · 2016-10-17T23:40:27.674Z · LW · GW

To say that you may choose any one of two actions when it doesn't matter which one you choose since they have the same value, isn't to give "no guidance".

This proves my point. That's no different from how most moral theories respond to questions like "which shirt do I wear?" So this 'completeness criterion' has to be made so weak as to be uninteresting.

Comment by UmamiSalami on Reasonable Requirements of any Moral Theory · 2016-10-11T18:33:50.829Z · LW · GW

Among hedonistic utilitarians it's quite normal to demand both completeness

Utilitarianism provides no guidance on many decisions: any decision where the available actions produce the same utility.

Even if it is a complete theory, I don't think that completeness is demanded of the theory; rather it's merely a tenet of it. I can't think of any good a priori reasons to expect a theory to be complete in the first place.

Comment by UmamiSalami on Reasonable Requirements of any Moral Theory · 2016-10-11T17:51:54.720Z · LW · GW

The question needs to cover how one should act in all situations, simply because we want to answer the question. Otherwise we’re left without guidance and with uncertainty.

Well, firstly, we normally don't think of questions like which clothes to wear as moral questions. Secondly, we're not left without guidance when morality leaves these issues alone: we have pragmatic reasons, for instance. Thirdly, we will always have to deal with some uncertainty anyway, because of empirical uncertainty, so it must be acceptable.

There is one additional issue I would like to highlight, an issue which rarely is mentioned or discussed. Commonly, normative ethics only concerns itself with human actions. The subspecies homo sapiens sapiens has understandably had a special place in philosophical discussions, but the question is not inherently only about one subspecies in the universe. The completeness criterion covers all situations in which somebody should perform an action, even if this “somebody” isn’t a human being. Human successors, alien life in other solar systems, and other species on Earth shouldn’t be arbitrarily excluded.

I'd agree, but accounts of normativity which are mind- or society-dependent, such as constructivism, would have reason to make accounts of ethics for humanity different from accounts of ethics for nonhumans.

It seems like an impossible task for any moral theory based on virtue or deontology to ever be able to fulfil the criteria of completeness and consistency

I'm not sure I agree there. Usually these theories don't fulfil those criteria because the people who construct them disagree with some of the criteria, especially #1. But it doesn't seem difficult to make a complete and demanding form of virtue ethics or deontology.

Comment by UmamiSalami on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-27T05:23:29.368Z · LW · GW

See Omohundro's paper on convergent instrumental drives

Comment by UmamiSalami on Hedging · 2016-08-27T04:11:46.522Z · LW · GW

It seems like hedging is the sort of thing which tends to make the writer sound more educated and intelligent, if possibly more pretentious.

Comment by UmamiSalami on Zombies Redacted · 2016-07-27T15:28:59.574Z · LW · GW

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance.

It's not. Suppose that the ignorance went away: a complete physical explanation of each of our qualia - "the redness of red comes from these neurons in this part of the brain, the sound of birds flapping their wings is determined by the structure of electric signals in this region," and so on - would do nothing to remove our intuitions about consciousness. But a complete mechanistic explanation of how organ systems work would (and did) remove the intuitions behind vitalism.

I disagree. You've said that epiphenomenalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification.

Well... that's just what is implied by epiphenomenalism, so the justification for it is whatever reasons we have to believe epiphenomenalism in the first place. (Though most people who gravitate towards epiphenomenalism seem to do so out of the conviction that none of the alternatives work.)

Denying it yields my original argument of inconceivability via the p-zombie world.

As I've said already, your argument can't show that zombies are inconceivable. It only attempts to show that an epiphenomenalist world is probabilistically implausible. These are very different things.

Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious

Well, the purpose of rational inquiry is to determine which theories are true, not which theories have the fewest entities. Anyone who rejects solipsism is multiplying entities unnecessarily.

I previously asked for any example of knowledge that was not a permutation of properties previously observed.

I don't see why this should matter for the zombie argument or for epiphenomenalism. In the post where you originally asked this, you were confused about the contextual usage and meaning behind the term 'knowledge.'

Comment by UmamiSalami on Zombies Redacted · 2016-07-22T19:47:28.561Z · LW · GW

You should take a look at the last comment he made in reply to me, where he explicitly ascribed to me and then attacked (at length) a claim which I clearly stated that I didn't hold in the parent comment. It's amazing how difficult it is for the naive-eliminativist crowd to express cogent arguments or understand the positions which they attack, and a common pattern I've noticed across this forum as well as others.

Comment by UmamiSalami on Zombies Redacted · 2016-07-22T17:21:13.162Z · LW · GW

Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable.

But it is conceivable for thermodynamics to be caused by molecular motion. No part of that is (or ever was, really) inconceivable. It is inconceivable for the sense qualia of heat to be reducible to motion, but that's just another reason to believe that physicalism is wrong. The blog post you linked doesn't actually address the idea of inconceivability.

If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible.

No, it's because there is no possible physical explanation for consciousness (whereas there are possible kinetic explanations for heat, as well as possible sonic explanations for heat, possible magnetic explanations for heat, and so on; all these nonexistent explanations are conceivable in ways that a physical description of sense data is not).

By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness.

And I do not claim that my statement is evidence that I have qualia.

This exact statement could have been emitted by a p-zombie.

See above. No one is claiming that claims of qualia prove the existence of qualia. People are claiming that the experience of qualia proves the existence of qualia.

In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge.

We're not talking about whether a statement has "epistemic value to [you]" or not. We're talking about whether it's epistemically justified or not - whether it's true or not.

There exists a mysterious substance called "consciousness" that does not causally interact with anything in the physical universe.

Neither I nor Chalmers describe consciousness as a substance.

Since this substance does not causally interact with anything in the physical universe, and you are part of the physical universe, said substance does not causally interact with you.

Only if you mean "you" in the reductive physicalist sense, which I don't.

This means, among other things, that when you use your physical fingers to type on your physical keyboard the words, "we are conscious, and know this fact through direct experience of consciousness", the cause of that series of physical actions cannot be the mysterious substance called "consciousness", since (again) that substance is causally inactive. Instead, some other mysterious process in your physical brain is occurring and causing you to type those words, operating completely independently of this mysterious substance.

Of course, physicalists believe that the exact same "some other mysterious process in your physical brain" causes us to type; they just happen to make the further assertion that consciousness is identical to that other process.

Nevertheless, for some reason you appear to expect me to treat the words you type as evidence of this mysterious, causally inactive substance's existence.

As I have stated repeatedly, I don't, and if you'd taken the time to read Chalmers you'd have known this instead of writing an entirely impotent attack on his ideas. Or you could have even read what I wrote. I literally said in the parent comment,

The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it.

Honestly. How deliberately obtuse could you be to write an entire attack on an idea which I explicitly rejected in the comment to which you replied? Do not waste my time like this in the future.

Comment by UmamiSalami on Zombies Redacted · 2016-07-22T17:07:20.741Z · LW · GW

I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies.

Yes. I agree that it is conceivable.

Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experiences qualia. Also, I would say this even if we didn't live in such a universe.

Sure, and I claim that there is a teapot orbiting the sun. You're just being silly.

Comment by UmamiSalami on Zombies Redacted · 2016-07-21T03:28:31.701Z · LW · GW

Well, first off, I personally think the Zombie World is logically impossible, since I treat consciousness as an emergent phenomenon rather than a mysterious epiphenomenal substance; in other words, I reject the argument's premise: that the Zombie World's existence is "conceivable".

And yet it seems really quite easy to conceive of a p-zombie. Merely claiming that consciousness is emergent doesn't change our ability to imagine the presence or absence of the phenomenon.

That being said, if you do accept the Zombie World argument, then there's no reason to believe we live in a universe with any conscious beings.

But clearly we do have such a reason: that we are conscious, and know this fact through direct experience of consciousness.

The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it. The fact that this knowledge is causally inefficacious does not change its epistemic value.

Comment by UmamiSalami on Zombies Redacted · 2016-07-21T03:16:55.959Z · LW · GW

4 is not a correct summary, because consciousness being extra-physical doesn't imply epiphenomenalism; the argument is specifically against physicalism, so it leaves other forms of dualism and panpsychism on the table.

5 and onwards is not correct; Chalmers does not believe that. Consciousness being nonphysical does not imply a lack of knowledge of it, even if our experience of consciousness is not causally efficacious (though again I note that the p-zombie argument doesn't show that consciousness is not causally efficacious; Chalmers just happens to believe that for other reasons).

No part of the zombie argument really makes the claim that people or philosophers are conscious or not, so your analogous reasoning along 5-7 is not a reflection of the argument.

Comment by UmamiSalami on Zombies Redacted · 2016-07-16T17:04:16.749Z · LW · GW

Which seems to suggest that epiphenomenalism either begs the question,

Well, they do have arguments for their positions.

or multiplies entities unnecessarily by accepting unjustified intuitions.

It actually seems very intuitive to most people that subjective qualia are different from neurophysical responses. It is the key issue at stake with zombie and knowledge arguments and has made life extremely difficult for physicalists. I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures, and rather than epiphenomenalism multiplying entities unnecessarily, it sure seems to me like physicalism is conflating entities unnecessarily.

So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.

Nothing you said indicates that p-zombies are inconceivable or even impossible. What you and EY seem to be saying is that our discussion of consciousness is a posteriori evidence that our consciousness is not epiphenomenal.

Comment by UmamiSalami on Notes on the Safety in Artificial Intelligence conference · 2016-07-12T15:24:19.112Z · LW · GW

In what ways, and for what reasons, did people think that cybersecurity had failed?

Mostly that it's just so hard to keep things secure. Organizations have been trying for decades to ensure security but there are continuous failures and exploits. One person mentioned that one third of exploits take advantage of security systems themselves.

What techniques from cybersecurity were thought to be relevant?

Don't really remember any specifics, but I think formal methods were part of it.

Any idea what Mallah meant by “non-self-centered ontologies”? I am imagining things like CIRL (https://arxiv.org/abs/1606.03137)

I didn't know, to be honest.

Can you briefly define (any of) the following terms (or give your best guess at what was meant by them)?: meta-machine-learning, reflective analysis, knowledge-level redundancy

I remember that knowledge-level redundancy involves giving multiple representations of concepts and things to avoid misspecification/misrepresentation of human ideas. So you can define a concept or an object in multiple ways, and then check that a given object fits all those definitions before being certain about its identity.
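For concreteness, a minimal sketch of how that could work (my own reconstruction in Python; the predicates and names are invented for illustration, not anything actually presented at the conference):

```python
def is_cup_by_shape(obj):
    return obj.get("has_handle") and obj.get("concave")

def is_cup_by_function(obj):
    return obj.get("holds_liquid")

def is_cup_by_context(obj):
    return obj.get("found_in_kitchen")

# Redundant representations of the single concept "cup".
CUP_DEFINITIONS = [is_cup_by_shape, is_cup_by_function, is_cup_by_context]

def classify(obj):
    """Only be certain of the identification when every representation agrees."""
    votes = [bool(d(obj)) for d in CUP_DEFINITIONS]
    if all(votes):
        return "cup"
    if not any(votes):
        return "not a cup"
    return "uncertain"  # representations disagree: possible misspecification

print(classify({"has_handle": True, "concave": True,
                "holds_liquid": True, "found_in_kitchen": True}))  # -> cup
print(classify({"has_handle": True, "concave": True}))            # -> uncertain
```

The point of the redundancy is the "uncertain" branch: disagreement among the representations flags a concept that may have been misspecified, instead of silently trusting one definition.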

Comment by UmamiSalami on Zombies Redacted · 2016-07-07T20:18:23.363Z · LW · GW

Flavor is distinctly a phenomenal property and a type of qualia.

It is metaphysically impossible for distinctly physical properties to differ between two objects which are physically identical. We can't properly conceive of a cookie that is physically identical to an Oreo yet contains different chemicals, is more massive, or possesses locomotive powers. Somewhere in our mental model of such an item, there is a contradiction.

Comment by UmamiSalami on Zombies Redacted · 2016-07-07T15:21:14.011Z · LW · GW

Chalmers does believe that consciousness is a direct product of physical states. The dispute is about whether consciousness is identical to physical states.

Chalmers does not believe that p-zombies are possible in the sense that you could make one in the universe. He only believes it's possible that under a different set of psychophysical laws, they could exist.

Comment by UmamiSalami on Zombies Redacted · 2016-07-07T15:18:23.981Z · LW · GW

Yes, this is called qualia inversion and is another common argument against physicalism. There's a detailed discussion of it here: http://plato.stanford.edu/entries/qualia-inverted/

Comment by UmamiSalami on Zombies Redacted · 2016-07-06T20:39:33.783Z · LW · GW

Unlike the other points which I raised above, this one is semantic. When we talk about "knowledge," we are talking about neurophysical responses, or we are talking about subjective qualia, or we are implicitly combining the two. Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real-world example that would prove or disprove this theory, because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

Comment by UmamiSalami on Zombies Redacted · 2016-07-06T20:33:35.844Z · LW · GW

is why or if the p-zombie philosopher postulates that other persons have consciousness.

Because consciousness supervenes upon physical states, and other brains have similar physical states.

Comment by UmamiSalami on Zombies Redacted · 2016-07-06T20:29:09.546Z · LW · GW

This argument is not going to win over their heads and hearts. It's clearly written for a reductionist reader, who accepts concepts such as Occam's Razor and knowing-what-a-correct-theory-looks-like.

I would suggest that people who have already studied this issue in depth would have other reasons for rejecting the above blog post. However, you are right that philosophers in general don't use Occam's Razor as a common tool and they don't seem to make assumptions about what a correct theory "looks like."

If conceivability does not imply logical possibility, then even if you can imagine a Zombie world, it does not mean that the Zombie world is logically possible.

Chalmers does not claim that p-zombies are logically possible, he claims that they are metaphysically possible. Chalmers already believes that certain atomic configurations necessarily imply consciousness, by dint of psychophysical laws.

The claim that certain atomic configurations just are consciousness is what the physicalist claims, but that is what is contested by knowledge arguments: we can't really conceive of a way for consciousness to be identical with physical states.

Comment by UmamiSalami on Zombies Redacted · 2016-07-06T20:22:54.959Z · LW · GW

I don't believe that I experience qualia.

Wait, what?

Comment by UmamiSalami on Zombies Redacted · 2016-07-06T05:09:23.208Z · LW · GW

3 doesn't follow from 2, it follows from a contradiction between 1+2.

Well, first of all, 3 isn't a statement, it's saying "consider a world where..." and then asking a question about whether philosophers would talk about consciousness. So I'm not sure what you mean by suggesting that it follows or that it is true.

1 and 2 are not contradictory. On the contrary, they are basically saying the exact same thing.

1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows if consciousness can affect matter,

This is essentially what epiphenomenalists deny, and I'm inclined to say that everyone else should deny it too. Regardless of what the truth of the matter is, surely the mere concept of subjectivity does not rely upon epiphenomenalism being false.

we only have knowledge of subjectivity because we observe it first-hand.

This is confusing the issue; like I said: under the epiphenomenalist viewpoint, the cause of our discussions of consciousness (physical) is different from the justification for our belief in consciousness (subjective). Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

and epiphenomenalism can be discarded using Occam's razor.

There are many criteria by which theories are judged in philosophy, and parsimony is only one of them.

Except the zombie world wouldn't have feelings and consciousness, so your rebuttal doesn't apply.

Nothing in my rebuttal relies on the idea that zombies would have feelings and consciousness. My rebuttal points out that zombies would be motivated by the idea of feelings and consciousness, which is trivially true: humans are motivated by the idea of feelings and consciousness, and zombies behave in the same way that humans do, by definition.

That's an assertion, not an argument.

But it's quite obviously true, because we talk about rich inner lives as the grounding for almost all of our moral thought, and then act accordingly, and because empathy relies on being able to infer rich inner lives among other people. And as noted earlier, whatever behaviorally motivates humans also behaviorally motivates p-zombies.

Comment by UmamiSalami on Zombies Redacted · 2016-07-05T00:24:20.574Z · LW · GW

Indeed. The condensed argument against p-zombies:

I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported, and nothing that EY said supports 4. 5 does not follow from 3 or 4, though it's bundled up in the definition of a p-zombie and follows from 1 and 2 anyway. In any case, 6 does not follow from 5.

What EY is saying is that it's highly implausible for all of our ideas and talk of consciousness to have come to be if subjective consciousness does not play a causal role in our thinking.

Except such discussions would have no motivational impact.

Of course they would - our considerations of other people's feelings and consciousness change our behavior all the time. And if you knew every detail about the brain, you could give an atomic-level causal account as to why and how.

A "rich inner life" has no relation to any fact in a p-zombies' brain, and so in what way could this term influence their decision process?

The concept of a rich inner life influences decision processes.

Comment by UmamiSalami on Zombies Redacted · 2016-07-03T22:32:46.437Z · LW · GW

Well that's answered by what I said about psychophysical laws and the evolutionary origins of consciousness. What caused us to believe in consciousness is not (necessarily) the same issue as what reasons we have to believe it.

Comment by UmamiSalami on Zombies Redacted · 2016-07-03T08:08:09.381Z · LW · GW

This was longer than it needed to be, and in my opinion, somewhat mistaken.

The zombie argument is not an argument for epiphenomenalism, it's an argument against physicalism. It doesn't assume that interactionist dualism is false, regardless of the fact that Chalmers happens to be an epiphenomenalist.

Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Maybe because interactionism violates the laws of physics and is somewhat at odds with everything we (think we) know about cognition. There may be other arguments as well. It has mostly fallen out of favor. I don't know the specific reasons why Chalmers rejects it.

Once you see the collision between the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how you think about consciousness (in any way that affects your internal narrative that you could choose to say out loud), zombie-ism stops being intuitive. It starts requiring you to postulate strange things.

In the epiphenomenalist view, for whatever evolutionary reason, we developed to have discussions and beliefs in rich inner lives. Maybe those thoughts and discussions help us with being altruistic, or maybe they're a necessary part of our own activity. Maybe the illusion of interactionism is necessary for us to have complex cognition and decisionmaking.

Also in the epiphenomenalist view, psychophysical laws relate mental states to neurophysical aspects of our cognition. So for some reason there is a relation between acting/thinking of pain, and mental states which are painful. It's not arbitrary or coincidental because the mental reaction to pain (dislike/avoid) is a mirror of the physical reaction to pain (express dislike/do things to avoid it).

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

Chalmers isn't denying that the zombie Chalmers would write that stuff down. He's denying that its beliefs would be justified. Maybe there's a version of me in a parallel universe that doesn't know anything about philosophy but is forced to type certain combinations of letters at gunpoint - that doesn't mean that I don't have reasons to believe the same things about philosophy in this universe.

Comment by UmamiSalami on Notes on the Safety in Artificial Intelligence conference · 2016-07-01T16:38:58.279Z · LW · GW

In fairness, I didn't directly ask any of them about it, and it wasn't really discussed. There could have been some who had read the relevant work, and many who believed it to be reasonable, but just didn't happen to speak up during the presentations or in any of the conversations I was in.

Comment by UmamiSalami on In favour of total utilitarianism over average · 2015-12-28T05:38:44.374Z · LW · GW

There is no objective absolute morality that exists in a vacuum.

No, that's highly contentious, and even if it's true, it doesn't grant a license to promote any odd utility rule as ideal. The anti-realist also may have reason to prefer a simpler version of morality.

Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality, and we'll always have to rely heavily on our intuitive morality in the end. And this one isn't simple, and can't be made that simple.

There are much more relevant factors in building and choosing moral systems than those mathematical structures, whose relevance to moral epistemology is dubious in the first place.

That's the whole point of the CEV, finding a "better morality", that we would follow if we knew more, were more what we wished we were, but that remains rooted in intuitive morality.

It's not obvious that we would be more likely to believe anything in particular if we knew more and were more what we wished we were. CEV is a nice way of making different people's values and goals fit together, but it makes no sense to propose it as a method of actual moral epistemology.

Comment by UmamiSalami on In favour of total utilitarianism over average · 2015-12-23T00:48:57.827Z · LW · GW

Would you accept a lottery where there was 1 ticket to maintain your life as a satisfied cookie utility monster and hundreds of trillions of tickets to become a miserable enslaved cookie maker?

Or, after rational reflection and experiencing the alternate possibilities, would you rather prefer a guaranteed life of threshold satisfaction?
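To make the tension explicit, here's the expected-value arithmetic with placeholder numbers (entirely made up for illustration, not anything from the thread):

```python
# All utilities and ticket counts below are illustrative placeholders.
tickets_monster = 1
tickets_maker = 300_000_000_000_000        # "hundreds of trillions"

u_monster = 2e16      # the satisfied cookie utility monster
u_maker = -10.0       # a miserable enslaved cookie maker
u_threshold = 50.0    # the guaranteed life of threshold satisfaction

n = tickets_monster + tickets_maker
expected = (tickets_monster * u_monster + tickets_maker * u_maker) / n

print(f"expected utility per ticket: {expected:.1f}")   # ~56.7
print(f"guaranteed threshold life:   {u_threshold}")    # 50.0
# The lottery "wins" on expected utility whenever the monster's utility is
# large enough to dominate the sum, yet you would almost certainly end up
# enslaved - which is exactly the intuition the question is probing.
```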

Comment by UmamiSalami on In favour of total utilitarianism over average · 2015-12-23T00:45:48.283Z · LW · GW

The problem is that by doing that you are making your position that much more arbitrary and contrived. It would be better if we could find a moral theory that has a solid, parsimonious basis, and it would be surprising if the fabric of morality involved complicated formulas.

Comment by UmamiSalami on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-30T21:35:06.350Z · LW · GW

Thanks. I will give some of those articles a look when I have the chance. However, it isn't true that every activity is competitive in nature. Many projects are cooperative, in which case it's not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn't overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.

Comment by UmamiSalami on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T22:59:58.539Z · LW · GW

So for us to understand what you're even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.

Comment by UmamiSalami on How much does where you go to college affect earnings? · 2015-03-18T17:31:49.437Z · LW · GW

This is very old but I just wanted to say that I am basically considering changing my college choice due to finding out about this research. Thanks so much for putting this post up and spreading awareness.

Comment by UmamiSalami on Open thread, 21-27 April 2014 · 2014-04-25T02:18:43.917Z · LW · GW

Maybe I am unfamiliar with the specifics of simulated reality. But I don't understand how it is assumed (or even probable, given Occam's Razor) that if we are simulated then there are copies of us. What is implausible about the possibility that I'm in a simulation and I'm the only instance of me that exists?

Comment by UmamiSalami on Open thread, 21-27 April 2014 · 2014-04-24T02:00:08.853Z · LW · GW

Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.

The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."

Quite simple, really, but I found it extremely interesting.

http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf

Comment by UmamiSalami on Open Thread April 16 - April 22, 2014 · 2014-04-23T04:43:22.019Z · LW · GW

Hi, I've been intermittently lurking here since I started reading HPMOR. So now I joined and the first thing I wanted to bring up is this paper which I read about the possibility that we are living in a simulation. The abstract:

"This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."

Quite simple, really, but I found it extremely interesting.

http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf

Comment by UmamiSalami on Open Thread April 16 - April 22, 2014 · 2014-04-23T04:37:37.956Z · LW · GW

I find myself to have a much clearer and cooler head when it comes to philosophy and debate around the subject. Previously I had a really hard time squaring utilitarianism with the teachings of religion, and I ended up being a total heretic. Now I feel like everything makes sense in a simpler way.