Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-13T13:52:28.237Z · score: 1 (1 votes) · LW · GW

Bad law can be immoral, just as bad bridges can fall down. As I have already pointed out, the connection between morality and law is normative.

coincidence

What do you think guides the creation of law? Why is murder illegal?

As to who likes being in jail: many a person has deliberately committed a crime and handed themselves in to the police because they prefer being in jail to being homeless, or prefer the lifestyle of living and surviving in jail to engaging in the rat race, and so on.

You are trying to argue general rules from exceptional cases.

Comment by tag on How much can value learning be disentangled? · 2019-02-11T08:56:47.886Z · score: 4 (3 votes) · LW · GW

Yep. There are a number of intelligent agents, each with their own subset of true beliefs. Since agents have finite resources, they cannot learn everything, and so their subset of true beliefs must be either random or guided by some set of goals or values. So truth is entangled with value in that sense, even if not in the sense of wishful thinking.

Also, there is no evidence of any kind of One Algorithm To Rule Them All. It's in no way implied by the existence of objective reality, and everything that has been exhibited along those lines has turned out to be computationally intractable.

Comment by tag on Some Thoughts on Metaphilosophy · 2019-02-10T16:48:56.087Z · score: 2 (2 votes) · LW · GW

Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour or other difficulties for physicalism.

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion?" could only be answered "yes, of course... didn't you just say she knows everything?". The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis will not make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.

There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like, and what it feels like is a quale. Edge cases like that one are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don't suffer from this peculiarity: thoughts and memories are straightforwardly expressible in words, so long as they don't involve qualia.

So: is the response "well, she has never actually instantiated colour vision in her own brain" one that lays to rest the challenge posed by the Knowledge Argument, leaving physicalism undisturbed? The fact that these physicalists feel it would be in some way necessary to instantiate colour, but not other things, like photosynthesis or fusion, means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-09T11:36:06.780Z · score: 1 (1 votes) · LW · GW

Even if that were true (it isn’t, since laws do not map to morality

That is an extraordinary claim.

You need a moral justification to put someone in jail. Legal systems approximate morality, and inasmuch as they depart from it, they are flawed, like a bridge that doesn't stay up.

implements a utility function which values not being jailed (which, is exactly the subjective axiom

If everyone is subject to the same punishments, then they have to be ones that are negatively valued by everyone... who likes being in jail? So it is not subjective in an interesting way.

In any case, that is approaching the problem from the wrong end. Morality is not a matter of using decision theory to avoid being punished for breaking arbitrary, incomprehensible rules. It is the non-arbitrary basis of the rules.

Comment by tag on Philosophy as low-energy approximation · 2019-02-09T11:05:11.496Z · score: 1 (1 votes) · LW · GW

How likely is it that you would have solved the Hard Problem? Why do people think philosophy is easy, or full of obvious confusions?

Comment by tag on Philosophy as low-energy approximation · 2019-02-08T09:09:43.051Z · score: 1 (1 votes) · LW · GW

It's wrong to have absolute confidence in anything. You can't prove that you are not in a simulation, so you can't have absolute confidence that there is any real physics.

Of course, I didn't base anything on absolute confidence.

You can put forward a story where expressions of subjective experience are caused by atoms, and subjective experience itself isn't mentioned.

I can put forward a story where ouches are caused by pains, and atoms aren't explicitly mentioned.

Of course, you now want to say that the atoms are still there and playing a causal role, but have gone out of focus because I am using high-level descriptions. But then I could say that subjective states are identical to aggregates of atoms, and therefore have identical causal powers.

Multiple explanations are always possible, but aren't necessarily about rival ontologies.

Comment by tag on Philosophy as low-energy approximation · 2019-02-08T08:03:46.541Z · score: 3 (2 votes) · LW · GW

Subjective experience can't be demonstrated objectively. On the other hand, demanding objective evidence of subjectivity biases the discussion away from taking consciousness seriously.

I don't have a way out of the impasse. The debate amongst professional philosophers is logjammed, so this one is as well. (However, this demonstrates a meta-level truth: there is no neutral epistemology.)

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-08T07:54:10.391Z · score: 1 (1 votes) · LW · GW

Oh, I think you'll find that moral oughts are different. For one thing, you can be jailed for breaking them.

Comment by tag on Conclusion to the sequence on value learning · 2019-02-07T15:53:58.608Z · score: 1 (1 votes) · LW · GW

I'm not seeing the "can't control". Sure, agent AI is more powerful than tool AI -- and more powerful things need more control to make them do what you want.

Comment by tag on If Many-Worlds Had Come First · 2019-02-07T15:51:54.991Z · score: 1 (1 votes) · LW · GW

Yes, the real CI is rather minimal and non-committal. That, not idiocy, explains its widespread adoption. Objective Reduction is a different and later theory.

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-07T11:44:45.180Z · score: 1 (1 votes) · LW · GW

It's not surprising that "ought" statements concerning arbitrary preferences have a subjective component, but the topic is specifically about moral "oughts", and it is quite possible that being about morality constrains things in a way that brings in additional objectivity.

Comment by tag on Philosophy as low-energy approximation · 2019-02-07T08:59:42.529Z · score: 3 (2 votes) · LW · GW

Models can omit things that are there as well as include things that aren't there. That's the whole problem.

I'm always in the exact state that I am in, and those states include conscious experience. You can build, and have built, a model which is purely functional and in which Red only features as a functional role or behavioural disposition. But you don't get to say that your model is in exact two-way correspondence with my reality. You have to show that a model is exact, and that is very difficult; you can't just assert it.

it’s more philosophically productive to ask “what approximate model of the world has this thing as a basic object?”

Why can't I ask "what does this approximate model leave out"?

If physicist A builds a model that leaves out friction, say, physicist B can validly object to it. And that has nothing whatever to do with "essences" or ontological fundamentalness. No one thinks friction or cows' legs are fundamental. The rhetoric about essences is a red herring. (Or, if it is valid, surely you can use it to justify any model of any simplicity.) I think the spherical cow model is inaccurate because every cow I have ever seen is squarish with a leg at each corner. That's an observation, not a metaphysical claim.

I agree that I do see red. That is to say, the collection of atoms that is my body enters a state that plays the same role in the real world as “seeing red” plays in the folk-psychological model of me.

Seeing red is more than a role or disposition. That is what you have left out.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T14:49:27.248Z · score: 3 (2 votes) · LW · GW

I argue that this is a dangerous line of thought because it’s assuming that there exists some “what we really think” that we are uncovering. But what if we’re thinking using an approximation that doesn’t extend to all possible situations?

Then the thought experiment is a useful negative result telling us we need something more comprehensive.

[Even worse is when people ignore the fact that the concept is a human invention at all, and try to understand “the true nature of belief” (not just what we think about belief) by conceptual analysis

What's the problem? Even if all concepts are human-made, that doesn't mean we have perfect reflective access to them for free. Thought experiments can be seen as a way of informing the conscious mind what the unconscious mind is doing.

Well, one can ask that, but maybe it doesn’t have an answer.

Or maybe it does. Negative results are still information, so it is hard to see how we can solve problems better by avoiding thought experiments.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T14:41:20.022Z · score: 3 (3 votes) · LW · GW

Suppose that we show how certain physical processes play the role of qualia within an abstract model of human behavior. “This pattern of neural activities means we should think of this person as seeing the color red,” for instance. [..]This is close to what I parody as “Human physical bodies are only approximate agents, so how does this generate the real Platonic agent I know I am inside?”

But we know that we do see red. Red is not an invisible spook inside someone else.

When I think of myself as an abstract agent in the abstract state of “seeing red,” this is not proof that I am actually an abstract Platonic Agent in the abstract state of seeing red. The person in the parody has been misled by their model of themselves—they model themselves as a real Platonic agent, and so they believe that’s what they have to be.

Once we have described the behavior of the approximate agents that are humans, we don’t need to go on to describe the state of the actual agents hiding inside the humans.

We don't need to bring in agency at all. You are trying to hitch something you can be plausibly eliminativist about to something you can't.

Comment by tag on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-06T13:36:29.616Z · score: 1 (1 votes) · LW · GW

Now show that it's the wrong question in this case. We don't need another repetition of "maps all the way down", we need a proper explanation.

Comment by tag on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-06T13:34:50.793Z · score: 1 (1 votes) · LW · GW

Rationality tries to insist it can get above the map and outside the territory to use the map.

I have no idea what that means.

Comment by tag on Conclusion to the sequence on value learning · 2019-02-06T12:55:59.532Z · score: 1 (1 votes) · LW · GW

It doesn't seem likely to me. People don't procreate in order to fulfil the abstract definition you gave, they procreate to fulfil biological urges and cultural mores.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T11:12:44.865Z · score: 1 (1 votes) · LW · GW

If Kant claims you should never ever lie, all you need to refute him is one counterexample, and it’s okay if it’s a little extreme. But just because you can refute wrong things with high-energy thought experiments doesn’t mean they’re going to help you find the right thing.

I don't see why not. If virtue theory, deontology and consequentialism all, separately, go wrong under some circumstances, then you probably need an ethics that combines the strengths of all three.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T11:02:54.133Z · score: 4 (3 votes) · LW · GW

Take Putnam’s Twin Earth thought experiment, where we try to analyze the idea (essence?) of “belief” or “aboutness” by postulating an entire alternate Earth that periodically exchanges people with our own.

Maybe he is just trying to find a good model. If you want to accuse people of excessive literalism, you need some firm examples.

Show me a model that’s useful for understanding human behavior, and I’ll show you someone who’s taken it too literally.

Go on, then.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T10:47:11.138Z · score: 1 (1 votes) · LW · GW

Some philosophers think that they’re like particle physicists, elucidating the weird and ontologically basic stuff inside the everyday human.

That includes physicalists, who think the ontologically basic stuff is quarks and electrons.

I am not clear how you are defining HEphil: do you mean (1) that any quest for the ontologically basic is HEphil, or (2) that treating mental properties as physical is the only thing that is HEphil?

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T08:59:10.606Z · score: 3 (5 votes) · LW · GW

Taken to its logical conclusion, this is a direct rejection of most varieties of the “hard problem of consciousness.” The hard problem asks, how can you take the physical description of a human and explain its Real Sensations—our experiences that are supposed to have their own extra essences, or to be directly observed by an “us” that is an objective existence.

The Hard Problem is not a statement insisting that qualia are irreducible; it is the question of what they reduce to. If they don't reduce, there is no hard problem (and physicalism is false. You only face the HP given physicalism).

You imply that qualia are only approximate high-level descriptions. But we still don't have a predictive and reductive theory of qualia as high-level emergent phenomena, as we do with, for instance, heat. It lowers the bar, but not enough.

Comment by tag on Rationality: What's the point? · 2019-02-05T19:02:02.290Z · score: 1 (1 votes) · LW · GW

Normative beliefs only have objective truth conditions if moral realism is true. But the model of an agent trying to realise its normative beliefs is always valid, however subjective they are. Usefulness, in turn, can only be defined in terms of goals or values.

Comment by tag on Rationality: What's the point? · 2019-02-05T08:51:38.422Z · score: 1 (1 votes) · LW · GW

There is an important subset of beliefs which predict only if acted on, namely beliefs about how things should be.

Comment by tag on Rationality: What's the point? · 2019-02-04T20:35:36.678Z · score: 1 (1 votes) · LW · GW

Here's an argument against: you can mathematically prove an infinite number of truths, most of which are useless.
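
A trivial illustration of such a family of provable but useless truths:

$$2+2=4,\qquad 2+2+0=4,\qquad 2+2+0+0=4,\ \ldots$$

Each is a distinct theorem, and the family is infinite; knowing the later members adds nothing of use.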

Comment by tag on Conclusion to the sequence on value learning · 2019-02-04T17:16:48.600Z · score: 1 (1 votes) · LW · GW

Does anyone have an incentive to make a non-goal directed AI they can't control?

Comment by tag on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-01T12:16:31.384Z · score: 3 (2 votes) · LW · GW

"Maps are in the territory, too". Where else could they be?

Comment by tag on Deconfusing Logical Counterfactuals · 2019-01-29T13:21:55.983Z · score: 1 (1 votes) · LW · GW

That was a typo, although actually neither is a fact.

Comment by tag on Deconfusing Logical Counterfactuals · 2019-01-29T12:23:24.618Z · score: 0 (2 votes) · LW · GW

When “you” is defined down to the atom, you can only implement one decision.

Once again: physical determinism is not a fact.

Comment by tag on Why not tool AI? · 2019-01-24T12:09:39.135Z · score: 3 (2 votes) · LW · GW

Any advanced AI, while it could be Tool in the sense of not taking actions that impact the outside world, is likely to be Agent in the sense of optimizing within bounds internally.”

Out of the two implicit definitions of agent, "maximises UF" and "affects the outside world (without explicitly being told to)", the second is the only one that is relevant to AI safety, and the one that is used by the actual AI community. IOW, trying to bring in the first definition just causes confusion.

Comment by tag on The Relationship Between Hierarchy and Wealth · 2019-01-23T12:31:51.904Z · score: 3 (3 votes) · LW · GW

The second possibility is (tentative) good news for freedom. It says that hierarchy is inefficient.

Too much hierarchy is inefficient, as it combines wasted talent with the need for a cumbersome apparatus of repression. That is a fact that can coincide with too little hierarchy being inefficient. It is probably one of those Laffer curve-like things, where the sweet spot is at a hard-to-identify point in the middle. Liberal democracies are freer than socialist states or empires, but probably less free than the small anarchist societies David Graeber admires.

Comment by tag on What emotions would AIs need to feel? · 2019-01-09T13:05:16.474Z · score: 1 (1 votes) · LW · GW

In The Emotion Machine, Minsky argues that shame is absolutely critical to AI.

Comment by tag on Open and Welcome Thread December 2018 · 2019-01-04T12:29:51.831Z · score: 1 (1 votes) · LW · GW

P-zombies are indeed all about epiphenomenalism.

No, they are primarily about explanation.

. But it is the unsolved Hard Problem of Consciousness, as some would say, to prove that the person home in your body is you. We could have an extra consciousness-essence attached to these bodies, they say. You can’t prove we don’t!

It has virtually nothing to do with personal identity.

Dennett thinks peoples’ expectations are that “real qualia” are the things that live in the space of epiphenomenal essences and can’t possibly be the equivalent of a conjuring trick.

If they are a trick, no one has explained how it is pulled off.

Comment by tag on Open and Welcome Thread December 2018 · 2019-01-04T12:25:55.238Z · score: 1 (1 votes) · LW · GW

That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused.

Suppose you made a human-level AI. Suppose there was some doubt about whether it was genuinely conscious. Wouldn't that amount to the question of whether or not it was a zombie?

Separately, it seems highly possible that people vary in their internal experience, such that some people experience ‘qualia’ and other people don’t. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn’t, then the standard argument doesn’t go through for him.

Or it's terminological confusion.

Comment by tag on What is a reasonable outside view for the fate of social movements? · 2019-01-04T12:12:49.227Z · score: 5 (3 votes) · LW · GW

It seems to me that they failed for different reasons.

Comment by tag on Electrons don’t think (or suffer) · 2019-01-03T08:59:38.990Z · score: 2 (2 votes) · LW · GW

Moreover, they can vary with changes to the environment that aren't changes to the electron. They aren't proper or intrinsic to the electron, but intuitively one's qualia are intrinsic.

Comment by tag on Why do Contemplative Practitioners Make so Many Metaphysical Claims? · 2019-01-02T08:48:43.224Z · score: -6 (3 votes) · LW · GW

It's not just contemplative practitioners. "There are alternative realities" is a floridly metaphysical claim.

Comment by tag on On Rigorous Error Handling · 2018-12-26T14:18:09.939Z · score: 1 (1 votes) · LW · GW
Have you seen:

http://joeduffyblog.com/2016/02/07/the-error-model/

Or http://lambda-the-ultimate.org/ for that matter.

Comment by tag on State Machines and the Strange Case of Mutating API · 2018-12-26T13:45:22.227Z · score: 1 (1 votes) · LW · GW

And here’s an interesting observation: The API of the socket changes as you move from one state to another.

Anyway, this rant is addressed to programming language designers: What options do we have to support such mutating API at the moment. And can we do better?

It's called typestate; see the sketch below. It's been tried, and it tends to be cumbersome.
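
For what it's worth, here is a minimal sketch of how typestate can be encoded in Rust's ownership system (the socket types and method names are hypothetical, invented purely to illustrate the state-dependent API, not a real library):

```rust
// A minimal sketch of the typestate pattern, assuming Rust. The socket
// types and methods are hypothetical -- this is not a real networking API.

struct ClosedSocket;
struct ConnectedSocket;

impl ClosedSocket {
    fn new() -> Self {
        ClosedSocket
    }

    // connect() consumes the closed socket and returns a connected one,
    // which is how "the API changes as you move from one state to another".
    fn connect(self, _addr: &str) -> ConnectedSocket {
        ConnectedSocket
    }
}

impl ConnectedSocket {
    // send() only exists on the connected state, so calling it on a
    // closed socket is a compile-time error rather than a runtime one.
    fn send(&self, _data: &[u8]) {
        // ... write to the underlying descriptor ...
    }

    fn close(self) -> ClosedSocket {
        ClosedSocket
    }
}

fn main() {
    let s = ClosedSocket::new();
    // s.send(b"hi");  // would not compile: no send() in the closed state
    let s = s.connect("127.0.0.1:80");
    s.send(b"hi");
    let _s = s.close();
}
```

The cumbersomeness shows up quickly: every state needs its own type, any data shared across states has to be threaded through every transition, and the number of impl blocks grows with the number of states.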

Comment by tag on You can be wrong about what you like, and you often are · 2018-12-21T09:33:15.280Z · score: 1 (1 votes) · LW · GW

but in general, we have a ton of blind spots and biases that are harmful in a practical, real world sort of sense,

...but which don't get selected out, for some reason.

Comment by tag on Defining Freedom · 2018-12-20T12:42:41.640Z · score: 1 (1 votes) · LW · GW

If you actually incorporate every single constraint you have, you end up having one action.

You can't infer backwards to the idea that a decision was made deterministically (whether by physical or rational determinism) from the fact that only one decision actually gets made.

In this case, we can think of absolute freedom as a measure of flatness of f

If f is absolutely flat, it isn't determining your decisions.

Comment by tag on Equivalence of State Machines and Coroutines · 2018-12-20T12:30:34.414Z · score: 3 (2 votes) · LW · GW

Counterargument: coroutines are Turing-complete, FSMs are not, and equivalence is a two-way relation.

Moreover, the "co" isn't doing any work, because you can write FSMs in languages that don't have coprocessing.

Comment by tag on Values Weren't Complex, Once. · 2018-12-13T12:44:31.265Z · score: 1 (1 votes) · LW · GW

To start, I’m personally skeptical of the claim that preferences and moral values can be clearly distinguished, especially given the variety of value systems that people have preferred over time, or even today.

There is a clear distinction within each system: if you violate a moral value, you are a bad person; if you violate a non-moral value, you are something else -- irrational or foolish, maybe.

Also, you have not excluded levelling down -- nothing is a moral value -- as an option.

If you want a scale of moral value that is objective, universal and timeless, you are going to have problems. But it is a strange thing to want in the first place, because value is not an objective physical property. Different people value different things. Where those values or preferences can be satisfied individually, there is no moral concern. Where there are trade-offs, or potential for conflict, there is a need for -- in the sense that a group is better off with -- publicly agreed and known standards and rules. Societies with food scarcity have rules about who is allowed to eat what; societies with food abundance don't. Morality is an adaptation; it isn't, and should not be, the same everywhere.

The EA framing -- where morality is considered in terms of improvements made by individual, voluntary actions -- makes it quite hard to understand morality in general, because morality in general is about groups, obligations and prohibitions.

Comment by tag on What precisely do we mean by AI alignment? · 2018-12-12T12:21:37.986Z · score: -1 (2 votes) · LW · GW

I would see that as the definition of control as opposed to alignment.

Comment by tag on Logical Counterfactuals are low-res · 2018-12-03T08:47:16.079Z · score: -1 (2 votes) · LW · GW

So?

Comment by tag on Believing others' priors · 2018-11-29T09:46:03.314Z · score: 2 (2 votes) · LW · GW

You are assuming some relationship between agency and free will that has not been spelt out. Also, an entirely woo-free notion of agency is a ubiquitous topic on this site, as has been pointed out to you before.

Comment by tag on Summary: Surreal Decisions · 2018-11-28T08:45:49.977Z · score: 4 (3 votes) · LW · GW

There aren't, but not in a way that allows you to conclude that the universe is finite.

Comment by tag on Values Weren't Complex, Once. · 2018-11-26T12:17:19.394Z · score: 6 (2 votes) · LW · GW

It is not obvious that "value" is synonymous with "moral value", and it is not clear that divergence in values is a moral issue, or necessarily leads to conflict. Food preferences are the classic example of preferences that can be satisfied in a non-exclusionary way.

Comment by tag on Believing others' priors · 2018-11-26T12:14:50.210Z · score: 0 (3 votes) · LW · GW

You are telescoping a bunch of issues there. It is not at all clear that top-down causation is needed for libertarian free will, for instance. And MIRI says free will is illusory.

Comment by tag on Believing others' priors · 2018-11-23T13:06:48.626Z · score: 2 (2 votes) · LW · GW

Physics says there is no such thing, all your decisions are either predetermined, or random or chaotic

No, free will is not a topic covered in physics textbooks or lectures. You are appealing to an implicit definition of free will that libertarians don't accept.

Comment by tag on What is being? · 2018-11-16T15:58:40.585Z · score: 1 (1 votes) · LW · GW

Versions of the "nature of being" question are relevant to things like MWI and the mathematical universe hypothesis.