Comment by tag on If MWI is correct, should we expect to experience Quantum Torment? · 2019-04-19T16:08:20.272Z · score: 1 (1 votes) · LW · GW

One can’t have it both ways. If your consciousness can persist in some infinitesimal state, then one of the things it will have lost on its way to that state is the ability to feel suffering, or boredom, or a sensation of passing time or anything of the like.

Being in an infinitesimal-measure world is not the same as having infinitesimal, zombie-like consciousness. It would be very convenient if it were, but you can't pick assumptions to get to the conclusions you want.

Comment by tag on If MWI is correct, should we expect to experience Quantum Torment? · 2019-04-19T15:42:23.289Z · score: 1 (1 votes) · LW · GW

The more conventional perspective on QM is of a single nondeterministic world or of a single world in which events have subquantum causes. From this perspective “quantum torment”—lingering indefinitely in a near-death state—is logically possible but inconceivably improbable, something that wouldn’t happen even if you reran the history of the cosmos a googol times, because it involves the quantum dice (whether deterministic or nondeterministic) repeatedly coming up just the right way to prevent your body from finally giving up the ghost.

It's much easier to neglect low-probability events in a single universe (I should say a finite single universe), where they generally don't occur. If low-measure worlds occur, they may well seem fully real to the observers inside them.
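
A rough sketch of the contrast, with made-up numbers (the per-event survival probability and event count below are illustrative assumptions, not anything from the parent comments):

```python
import math

# Hypothetical numbers: p_per_event and n_events are assumptions for
# illustration, not claims about real physiology or physics.
p_per_event = 0.5   # survival probability per quantum "near-death" event
n_events = 1000     # events needed to linger indefinitely near death

log10_p = n_events * math.log10(p_per_event)
print(f"P(torment history) ~ 10^{log10_p:.0f}")  # ~ 10^-301

# Single finite universe: even a googol (10^100) reruns of cosmic history
# give an expected ~10^-201 occurrences, i.e. it just doesn't happen.
# MWI: the same history exists with this tiny but nonzero measure in
# every run, and its inhabitants find it as real as we find ours.
```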

Comment by tag on If MWI is correct, should we expect to experience Quantum Torment? · 2019-04-19T14:49:10.567Z · score: 4 (2 votes) · LW · GW

No. Don’t do a count on branches, aggregate the amplitude of the branches in question. We should expect to die

Objectively or subjectively? If the objective measure of the branches is very low, you could round that off to "expect to die"... from someone else's perspective. From your perspective, even if there is only one low-probability branch where you continue, you can be sure to subjectively experience it, since there is no "you" to experience anything in the high-probability branches.
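
To make the arithmetic concrete, here is a minimal sketch (the branch labels and amplitudes are invented purely for illustration):

```python
# Hypothetical branches as (outcome, amplitude) pairs; the numbers are
# made up purely to show the two statistics coming apart.
branches = [
    ("survive", 0.01),   # one low-amplitude survival branch
    ("die",     0.70),
    ("die",     0.71),   # high-amplitude death branches
]

total = sum(a ** 2 for _, a in branches)

# Born-rule measure: aggregate |amplitude|^2 rather than counting branches.
survive_measure = sum(a ** 2 for o, a in branches if o == "survive") / total
survive_count = sum(1 for o, _ in branches if o == "survive") / len(branches)

print(f"measure-weighted survival: {survive_measure:.4%}")  # ~0.01%: "expect to die"
print(f"branch-counted survival:   {survive_count:.0%}")    # ~33%: the wrong statistic
```

The objective figure is the measure-weighted one; the subjective question is whether the tiny surviving branch is the only one with a "you" in it.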

But really, "expect to live" is also too firm a conclusion.

There are no facts about MWI and QI, because they rely on questions about (1) probability, (2) consciousness, and (3) personal identity that we don't have good answers to.

Comment by tag on Why is multi worlds not a good explanation for abiogenesis · 2019-04-18T00:52:22.011Z · score: 1 (1 votes) · LW · GW

Many worlds says that our socks are always black.

Nope. What distinguishes worlds, if their contents are the same?

The Copenhagen interpretation says that us observing the socks causes them to be black.

Nope. That would be consciousness-causes-collapse, which is a different theory.

Comment by tag on Experimental Open Thread April 2019: Socratic method · 2019-04-08T19:46:13.349Z · score: 0 (2 votes) · LW · GW

If the brain can't do anything except make predictions, where making predictions is defined to exclude seeking metaphysical truth, then you have nothing to object to, since it would be literally impossible for anyone to do other than as you recommend.

Since people can engage in metaphysical truth seeking, it is either a sub-variety of prediction, or the theory that the brain is nothing but a prediction error minimisation machine is false.

Comment by tag on Experimental Open Thread April 2019: Socratic method · 2019-04-06T20:04:44.010Z · score: 1 (1 votes) · LW · GW

Does achieving goals rely on accurate predictions and nothing else?

Comment by tag on Experimental Open Thread April 2019: Socratic method · 2019-04-06T20:03:40.450Z · score: 1 (1 votes) · LW · GW

If one's goals require something beyond predictive accuracy, such as correspondence truth, why would you limit yourself to seeking predictive accuracy?

Comment by tag on Experimental Open Thread April 2019: Socratic method · 2019-04-01T12:36:29.804Z · score: 4 (3 votes) · LW · GW

Have you considered phrasing your claim differently, in view of your general lack of progress in persuading people?

Comment by tag on What societies have ever had legal or accepted blackmail? · 2019-03-19T17:11:04.849Z · score: 1 (1 votes) · LW · GW

They're not structured that much for elites. We don't have lèse-majesté laws, and we do have free speech and a free press.

Comment by tag on What societies have ever had legal or accepted blackmail? · 2019-03-17T22:12:12.862Z · score: 0 (2 votes) · LW · GW

Because there's so much of it?

Comment by tag on What societies have ever had legal or accepted blackmail? · 2019-03-17T13:21:08.168Z · score: 5 (4 votes) · LW · GW

That was worth asking. If there really are advantages to blackmail, a lot of $20 bills have been left lying on the ground.

Comment by tag on Privacy · 2019-03-16T15:45:15.117Z · score: 2 (2 votes) · LW · GW

s/tenancy/tendency

Comment by tag on Privacy · 2019-03-15T22:30:11.395Z · score: 1 (1 votes) · LW · GW

Some things are acceptable in small quantities but unacceptable in large ones. You don't want to incentivise those things.

Comment by tag on mAIry's room: AI reasoning to solve philosophical problems · 2019-03-09T12:17:26.321Z · score: 2 (2 votes) · LW · GW

So it won’t convince any philosophers if you talk about mAIry setting a preexisting Boolean.

Not all philosophers are qualiaphiles.

Comment by tag on Want to Know What Time Is? · 2019-03-09T12:04:17.614Z · score: 0 (2 votes) · LW · GW

It doesn't match a certain way of defining "information". But some people are quite happy with notions of information that aren't tied to a subject. You can reconcile the two by saying that objective information is potential subjective information.

Comment by tag on Want to Know What Time Is? · 2019-03-09T11:52:42.516Z · score: 2 (2 votes) · LW · GW

How does this theory distinguish time from space? If I look at ever wider and wider regions of space, I can obtain more and more information. Similarly with scale... looking at things in finer detail makes more information available.

Comment by tag on Implications of an infinite versus a finite universe · 2019-03-02T12:39:46.172Z · score: 1 (1 votes) · LW · GW

There are no intensive infinities in physics. That roughly means you cannot have an infinite amount of something in a finite volume of space. It doesn't stop you having an infinite amount of space.

Comment by tag on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-27T10:05:25.730Z · score: 1 (1 votes) · LW · GW

The issue is whether introspection would be retained.

Comment by tag on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-26T13:51:48.046Z · score: 1 (1 votes) · LW · GW

That depends on what you mean by observations and evidence. If we preserved the subjective, introspective access to consciousness, then consciousness would be preserved... logically. But we cannot do that practically. Practically, we can preserve externally observable functions and behaviour, but we can't be sure that doing so preserves consciousness.

Comment by tag on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-26T13:14:12.918Z · score: 1 (1 votes) · LW · GW

No, I would say I can’t be less conscious than I observe.

You didn't say that. As a premise, it begs the whole question. Or is it supposed to be a conclusion?

Sure, replacement by silicon could preserve my evaluations, and therefore my observations

A functional duplicate of yourself would give the same responses to questions, and so have the same beliefs, loosely speaking... but it is quite possible for some of those responses to have been rendered false. For instance, your duplicate would initially believe itself to be a real boy made of flesh and bone.

Comment by tag on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-26T12:55:17.041Z · score: 1 (1 votes) · LW · GW

None of them is clear, and the overall structure isn't clear.

Can you be less conscious than you observe?

Is "changing the parts of me" supposed to be some sort of replacement-by-silicon scenario?

Is an "evaluation" supposed to be a correct evaluation?

Does "has the same evaluations as me" mean "makes the same judgments about itself as I would about myself"?

3 ⇒ 5. Anything that makes the same observations as me is as conscious as me.

As per my first question, if an observation is some kind of infallible introspection of your own consciousness, that is true. OTOH, if it is some kind of external behaviour, not necessarily. Even if you reject p-zombies, a silicon replacement is not a p-zombie and could lack consciousness.

Comment by tag on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-26T10:09:06.705Z · score: 1 (1 votes) · LW · GW

I still don't see how the conclusion follows.

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-24T15:01:16.100Z · score: 1 (1 votes) · LW · GW

If morality is (are) seven billion utility functions, then a legal system will be a poor match for it (them).

But there are good reasons for thinking that can't be the case. For one thing, people can have preferences that are intuitively immoral. If a psychopath wants to murder, that does not make murder moral.

For another, it is hard to see what purpose morality serves when there is no interaction between people. Someone who is alone on a desert island has no need of rules against murder because there is no one to murder, and no need of rules against theft because there is no one to steal from, and so on.

If morality is a series of negotiations and trade offs about preferences, then the law can match it closely. We can answer the question "why is murder illegal" with "because murder is wrong".

Comment by tag on Some disjunctive reasons for urgency on AI risk · 2019-02-24T13:18:06.196Z · score: 2 (2 votes) · LW · GW

Making sure an AI has aligned values and strong controls against value drift is an extra constraint on the AI design process. This constraint appears likely to be very costly at both design and run time, so if the first human level AIs deployed aren’t value aligned, it seems very difficult for aligned AIs to catch up and become competitive

Making sure that an AI has good enough controllability is very much part of the design process, because a completely uncontrollable AI is no good to anyone.

Full value alignment is different and probably much more difficult. There is a hard and an easy control problem.

Comment by tag on Blackmail · 2019-02-24T12:02:10.888Z · score: 2 (2 votes) · LW · GW

I was mainly making the point that insider trading is an illegal activity compounded of legal activities.

Insider trading isn't just a market adjustment, it is also an unfair advantage.

Comment by tag on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-02-24T11:49:59.114Z · score: 1 (1 votes) · LW · GW

Do you mean outward, behavioural-style observation, or introspection?

Comment by tag on "Other people are wrong" vs "I am right" · 2019-02-24T11:27:40.798Z · score: -5 (3 votes) · LW · GW

I find many of the views you haven't updated from implausible.

Comment by tag on Blackmail · 2019-02-22T12:34:07.307Z · score: 2 (2 votes) · LW · GW

According to Trivers, 2/3 of communication is gossip. A world with strong norms against it would look very different to our own.

Comment by tag on Blackmail · 2019-02-22T12:28:02.711Z · score: 1 (1 votes) · LW · GW

For me, this generalizes—if all sub-actions are permitted, then the sum of them is permitted

There's got to be a ton of exceptions to that. For instance, insider trading.

Comment by tag on Blackmail · 2019-02-21T19:51:56.292Z · score: 1 (1 votes) · LW · GW

Elites may be targeted more, but they have more ability to pay off blackmailers. That's a relative evasion of punishment... the payment has disutility for them, but not as much as a prison sentence... and if what they did was illegal, then it never comes to light. Plus legalised blackmailers would not ignore poorer targets, since they have less ability to retaliate.

Blackmail is poorly optimised for exposing wrongdoing by the rich and powerful compared to investigative journalism.

Comment by tag on Blackmail · 2019-02-21T12:57:59.483Z · score: 4 (2 votes) · LW · GW

As a means of disclosing information about wrongdoing, blackmail has no advantage over whistle-blowing, and also no advantage over journalism. Journalists have to disclose information, whereas it doesn't get disclosed in successful blackmail; and journalists need a public-interest defense, whereas it's perfectly possible to blackmail someone over private behaviour.

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-13T13:52:28.237Z · score: 1 (1 votes) · LW · GW

Bad law can be immoral, just as bad bridges can fall down. As I have already pointed out, the connection between morality and law is normative.

coincidence

What do you think guides the creation of law? Why is murder illegal?

As to who likes being in jail? Many a person has purposely committed crimes and handed themselves into the police because they prefer being in jail to being homeless, or prefer the lifestyle of living and surviving in jail to having to engage in the rat race, and so on.

You are trying to argue general rules from exceptional cases.

Comment by tag on How much can value learning be disentangled? · 2019-02-11T08:56:47.886Z · score: 4 (3 votes) · LW · GW

Yep. There are a number of intelligent agents, each with their own subset of true beliefs. Since agents have finite resources, they cannot learn everything, and so their subset of true beliefs must be random or guided by some set of goals or values. So truth is entangled with value in that sense, if not in the sense of wishful thinking.

Also, there is no evidence of any kind of One Algorithm To Rule Them All. It's in no way implied by the existence of objective reality, and everything that has been exhibited along those lines has turned out to be computationally intractable.

Comment by tag on Some Thoughts on Metaphilosophy · 2019-02-10T16:48:56.087Z · score: 2 (2 votes) · LW · GW

Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.

That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion" could only be answered "yes, of course... didn't you just say she knows everything?" The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis will not make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.

There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like, and what it feels like is a quale. The edge cases, like that one, are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don't suffer from this peculiarity. Thoughts and memories are straightforwardly expressible in words, so long as they don't involve qualia.

So: does the response "well, she has never actually instantiated colour vision in her own brain" lay to rest the challenge posed by the Knowledge Argument, leaving physicalism undisturbed? The fact that these physicalists feel it would be in some way necessary to instantiate colour, but not other things, like photosynthesis or fusion, means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-09T11:36:06.780Z · score: 1 (1 votes) · LW · GW

Even if that were true (it isn’t, since laws do not map to morality

That is an extraordinary claim.

You need a moral justification to put someone in jail. Legal systems approximate morality, and inasmuch as they depart from it, they are flawed, like a bridge that doesn't stay up.

implements a utility function which values not being jailed (which, is exactly the subjective axiom

If everyone is subject to the same punishments, then they have to be ones that are negatively valued by everyone... who likes being in jail? So it is not subjective in an interesting way.

In any case, that is approaching the problem from the wrong end. Morality is not a matter of using decision theory to avoid being punished for breaking arbitrary, incomprehensible rules. It is the non-arbitrary basis of the rules.

Comment by tag on Philosophy as low-energy approximation · 2019-02-09T11:05:11.496Z · score: 1 (1 votes) · LW · GW

How likely is it that you would have solved the Hard Problem? Why do people think philosophy is easy, or full of obvious confusions?

Comment by tag on Philosophy as low-energy approximation · 2019-02-08T09:09:43.051Z · score: 1 (1 votes) · LW · GW

It's wrong to have absolute confidence in anything. You can't prove that you are not in a simulation, so you can't have absolute confidence that there is any real physics.

Of course, I didn't base anything on absolute confidence.

You can put forward a story where expressions of subjective experience are caused by atoms, and subjective experience itself isn't mentioned.

I can put forward a story where ouches are caused by pains, and atoms aren't explicitly mentioned.

Of course you now want to say that the atoms are still there and playing a causal role, but have gone out of focus because I am using high-level descriptions. But then I could say that subjective states are identical to aggregates of atoms, and therefore have identical causal powers.

Multiple explanations are always possible, but they aren't necessarily about rival ontologies.

Comment by tag on Philosophy as low-energy approximation · 2019-02-08T08:03:46.541Z · score: 3 (2 votes) · LW · GW

Subjective experience can't be demonstrated objectively. On the other hand, demanding objective evidence of subjectivity biases the discussion away from taking consciousness seriously.

I don't have a way out of the impasse. The debate amongst professional philosophers is logjammed, so this one is as well. (However, this demonstrates a meta-level truth: there is no neutral epistemology.)

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-08T07:54:10.391Z · score: 1 (1 votes) · LW · GW

Oh, I think you'll find that moral oughts are different. For one thing, you can be jailed for breaking them.

Comment by tag on Conclusion to the sequence on value learning · 2019-02-07T15:53:58.608Z · score: 1 (1 votes) · LW · GW

I'm not seeing the "can't control". Sure, agent AI is more powerful than tool AI -- and more powerful things need more control to make them do what you want.

Comment by tag on If Many-Worlds Had Come First · 2019-02-07T15:51:54.991Z · score: 1 (1 votes) · LW · GW

Yes, the real CI is rather minimal and non-committal. That, not idiocy, explains its widespread adoption. Objective Reduction is a different, later theory.

Comment by tag on Sam Harris and the Is–Ought Gap · 2019-02-07T11:44:45.180Z · score: 1 (1 votes) · LW · GW

It's not surprising that "ought" statements concerning arbitrary preferences have a subjective component, but the topic is specifically about moral "oughts", and it is quite possible that being about morality constrains things in a way that brings in additional objectivity.

Comment by tag on Philosophy as low-energy approximation · 2019-02-07T08:59:42.529Z · score: 3 (2 votes) · LW · GW

Models can omit things that are there as well as include things that aren't there. That's the whole problem.

I'm always in the exact state that I am in, and those states include conscious experience. You can and have built a model which is purely functional and in which Red only features as a functional role or behavioural disposition. But you don't get to say that your model is in exact two-way correspondence with my reality. You have to show that a model is exact, and that is very difficult; you can't just assert it.

it’s more philosophically productive to ask “what approximate model of the world has this thing as a basic object?”

Why can't I ask "what does this approximate model leave out"?

If physicist A builds a model that leaves out friction, say, physicist B can validly object to it. And that has nothing whatever to do with "essences" or ontological fundamentalness. No one thinks friction or cows' legs are fundamental. The rhetoric about essences is a red herring. (Or, if it is valid, surely you can use it to justify any model of any simplicity.) I think the spherical-cow model is inaccurate because every cow I have ever seen is squarish with a leg at each corner. That's an observation, not a metaphysical claim.

I agree that I do see red. That is to say, the collection of atoms that is my body enters a state that plays the same role in the real world as “seeing red” plays in the folk-psychological model of me.

Seeing red is more than a role or disposition. That is what you have left out.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T14:49:27.248Z · score: 3 (2 votes) · LW · GW

I argue that this is a dangerous line of thought because it’s assuming that there exists some “what we really think” that we are uncovering. But what if we’re thinking using an approximation that doesn’t extend to all possible situations?

Then the thought experiment is a useful negative result telling us we need something more comprehensive.

[Even worse is when people ignore the fact that the concept is a human invention at all, and try to understand “the true nature of belief” (not just what we think about belief) by conceptual analysis

What's the problem? Even if all concepts are human-made, that doesn't mean we have perfect reflective access to them for free. Thought experiments can be seen as a way of informing the conscious mind what the unconscious mind is doing.

Well, one can ask that, but maybe it doesn’t have an answer.

Or maybe it does. Negative results are still information, so it is hard to see how we can solve problems better by avoiding thought experiments.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T14:41:20.022Z · score: 3 (3 votes) · LW · GW

Suppose that we show how certain physical processes play the role of qualia within an abstract model of human behavior. “This pattern of neural activities means we should think of this person as seeing the color red,” for instance. [..]This is close to what I parody as “Human physical bodies are only approximate agents, so how does this generate the real Platonic agent I know I am inside?”

But we know that we do see red. Red is not an invisible spook inside someone else.

When I think of myself as an abstract agent in the abstract state of “seeing red,” this is not proof that I am actually an abstract Platonic Agent in the abstract state of seeing red. The person in the parody has been misled by their model of themselves—they model themselves as a real Platonic agent, and so they believe that’s what they have to be.

Once we have described the behavior of the approximate agents that are humans, we don’t need to go on to describe the state of the actual agents hiding inside the humans.

We don't need to bring in agency at all. You are trying to hitch something you can be plausibly eliminativist about to something you can't.

Comment by tag on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-06T13:36:29.616Z · score: 1 (1 votes) · LW · GW

Now show that it's the wrong question in this case. We don't need another repetition of "maps all the way down", we need a proper explanation.

Comment by tag on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-06T13:34:50.793Z · score: 1 (1 votes) · LW · GW

Rationality tries to insist it can get above the map and outside the territory to use the map.

I have no idea what that means.

Comment by tag on Conclusion to the sequence on value learning · 2019-02-06T12:55:59.532Z · score: 1 (1 votes) · LW · GW

It doesn't seem likely to me. People don't procreate in order to fulfil the abstract definition you gave, they procreate to fulfil biological urges and cultural mores.

Comment by tag on Philosophy as low-energy approximation · 2019-02-06T11:12:44.865Z · score: 1 (1 votes) · LW · GW

If Kant claims you should never ever lie, all you need to refute him is one counterexample, and it’s okay if it’s a little extreme. But just because you can refute wrong things with high-energy thought experiments doesn’t mean they’re going to help you find the right thing.

I don't see why not. If virtue theory, deontology and consequentialism all, separately, go wrong under some circumstances, then you probably need an ethics that combines the strengths of all three.