Slight variant: Humour is a form of teaching, in which interesting errors are pointed out.
It doesn't need to involve an outsider, and there's no particular class of error, other than that the participants should find the error important.
If the guy sitting behind you starts moaning and grunting: if it's a mistake (e.g. he's watching porn on his screen and has forgotten he's not alone), then it's funny, whereas if it's not a mistake, and there's something wrong with him, then it isn't.
Humour as teaching may explain why a joke isn't funny twice - you can only learn a thing once.
Evolutionarily, it may have started as some kind of warning that a person was making a dangerous mistake, and then become generalised.
Why would anyone choose the map rather than the territory as their foundation?
I couldn't agree more, which is why I was attempting to discourage people from doing so.
Why engage in science if you are not willing to accept the inferences that it makes about reality? Am I not going to believe in atoms because it doesn't match what I see with my eyes?
But the justification for any physical theory is precisely that it predicts what you see with your own eyes. Indeed, that's what a physical theory is - a means of predicting what you will experience. Atoms, as a feature of such a theory, seem quite useful and worth "believing" in.
Do you have any explanations of illusions?
Illusions are when your theory of what you should experience breaks down, and produces wrong answers.
when science makes steady progress it usually ends up with an explanation in materialistic terms.
But as I pointed out above, physics is not materialist, so your claim is untrue.
Evidence implies observation. Observation implies conscious experience. So your evidence for a world independent of conscious experience turns out to be ... conscious experience. I expect you can see why that isn't going to work.
The only proposed explanation of consciousness I've seen on Less Wrong is "maybe if we arrange stuff in the right way, consciousness will happen". Even if true, it's not enough of an explanation to enable argument about it.
Dennett
Dennett presents a resolutely functionalist description of experience, then tells us that nothing resembling qualia can be found within it, to the great surprise of no-one at all.
think that qualia are real things
To believe that the phenomenal world, the world you actually live in, is a fiction, while an invented "physical" world, for which no evidence exists, is the real world, is not merely wrong, it's an irrationality which makes a complete mockery of the goals of this website.
When alleged rationalists experience an "irk", because someone has reminded them that their theories describe a world utterly unlike the one that actually exists, we call this "cognitive dissonance". When they vote it down we call it "denial".
Since we can presumably generate the appropriate signals in the optic nerve from scratch if we choose, light and its wavelength have nothing whatsoever to do with color.
This site is full of people interested in implementing intelligence (and even themselves) on a new substrate... but they're not going to be interested in the relationship between physics and thought?
Indeed. (I thought it would be a bit of a spoiler to be more specific)
I found this interesting pdf of a discussion involving Jaynes (and Dennett), and it makes clear what he believed, which was that the change was mostly cultural, and that uncontacted tribes might be bicameral, but there were none left. (I'm not sure this is true - anyone reading this have an anthropologist handy?)
Also contains a very odd fact (?) about children.
EDIT: Oops, didn't notice it was on Jaynes' own website. So presumably quite a lot more stuff there.
How does Jaynes explain the lack of this kind of thinking among peoples who have culture and genes unchanged in the last 3000 years?
It wasn't intended to be a refutation. The technical claims of the papers may be correct, they just aren't, as the linked article claims, about consciousness.
Oh, an explanation of z-consciousness. Well done. We'll stack it with the others.
Consciousness. An open problem for 2700 years. Oh, hang on: 2701.
If you want to integrate the phenomenal into your ontology, is there any reason you've stopped short of phenomenalism?
EDIT: Not sarcasm - quite serious.
I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:
Consider an experiment in which we toss a coin, to choose whether a person is placed into a one-room hotel or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure. And so forth, repeatedly. The graph of this would be a tree in which the persons were edges and the hotels nodes. Each layer of the tree (each generation) would have equal numbers of 1-nodes and 2-nodes (on average, when numerous). So each layer would have 1.5 times as many outgoing edges as incoming, with 2/3 of the outgoing being from 2-nodes. If we pick a path away from the root, representing the person's future, in each layer we are going to have an even chance of arriving at a 1- or 2- node, so our future will contain equal numbers of 1- and 2- hotels. If we pick a path towards the root, representing the person's past, in each layer we have a 2/3 chance of arriving at a 2-node, meaning that our past contained twice as many 2-hotels as 1-hotels.
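For anyone who wants to check the forward/backward ratios numerically, here is a minimal simulation sketch. The generation count, the number of trials, and the choice of sampling a person uniformly from the final generation are my own assumptions about how to operationalise "picking a path"; none of them come from the original description.

```python
import random

def run_experiment(generations=15, rng=random):
    """Simulate the coin-toss/duplication process: each person is
    represented by the list of hotel sizes (1 or 2) they have passed
    through so far."""
    people = [[]]                          # one seed person, empty history
    for _ in range(generations):
        next_people = []
        for history in people:
            if rng.random() < 0.5:         # one-room hotel: no duplication
                next_people.append(history + [1])
            else:                          # two-room hotel: two copies emerge
                next_people.append(history + [2])
                next_people.append(history + [2])
        people = next_people
    return people

rng = random.Random(0)
ones = twos = 0
for _ in range(200):
    population = run_experiment(generations=15, rng=rng)
    # "Backward" view: pick a person uniformly from the final generation
    # and count the hotel sizes in their past.
    history = rng.choice(population)
    ones += history.count(1)
    twos += history.count(2)

print("1-hotels in sampled pasts:", ones)
print("2-hotels in sampled pasts:", twos)
# Expect roughly a 1:2 ratio.  The "forward" view needs no simulation:
# following a path away from the root just replays the coin tosses,
# so 1- and 2-hotels occur there equally often.
```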
Because if you agree that the correct way to measure the probability is as the occurrence ratio along the path, the degree of splitting is only significant to the extent that it affects the occurrence ratio, which in this case it doesn't. The coin toss chooses equiprobably which hotel comes next, then it's on to the next coin toss to equiprobably choose which hotel comes next, and so forth. So each path has on average equal numbers of each hotel, going forwards.
When we speak of a subjective probability in a person-multiplying experiment such as this, we (or at least, I) mean "The outcome ratio experienced by a person who was randomly chosen from the resulting population of the experiment, then was used as the seed for an identical experiment, then was randomly chosen from the resulting population, then was used as the seed.... and so forth, ad infinitum".
I'm not confident that we can speak of having probabilities in problems which can't in theory be cast in this form.
In other words, the probability is along a path. When you look at the problem this way, it throws some light on why there are two different arguable values for the probability. If you look back along the path, ("what ratio will our person have experienced") the answer in your experiment is 1000000:1. If you look forward along the path, ("what ratio will our person experience") the answer is 1:1 (in the flaming-tires case there's no path, so there's no probability).
I don't know what I meant either. I remember it making perfect sense at the time, but that was after 35 hours without sleep, so.....
The answer to the second part is no, I would expect a 50:50 chance in that case.
In case you were thinking of this as a counterexample, I also expect a 50:50 chance in all the cases there from B onwards. The claim that the probabilities are unchanged by the coin toss is wrong, since the coin toss changes the number of participants, and we already accepted that the number of participants was a factor in the probability when we assigned the 99% probability in the first place.
You're reading a little more into what I said than was actually there. I was just remarking on the change of dependence between the parts of the problem, without having thought through what the consequences would be.
Now that I have thought it through, I agree with the presumptuous philosopher in this case. However I don't agree with him about the size of the universe. The difference being that in the hotel case we want a subjective probability, whereas in the universe case we want an objective one. Subjectively, there's a very high probability of finding yourself in a big universe/hotel. But subjective probabilities are over subjective universes, and there are very very many subjective large universes for the one objective large universe, so a very high subjective probability of finding yourself in a large universe doesn't imply a large objective probability of being found in one.
The most obvious difference is that the original problem involved the smaller or the larger set of people whereas this one uses the smaller and the larger.
I fail to see why that is the general case.
If you have two people to start with, and one when you've finished, without any further stipulation about which people they are, then you have lost a person somewhere. To come to a different conclusion would require an additional rule, which is why it's the general case. That additional rule would have to specify that a duplicate doesn't count as a second person. But since that duplicate could subsequently go on to have a separate different life of its own, the grounds for denying it personhood seem quite weak.
For that matter, I fail to see why losing some (many, most) of my atoms and having them be quickly replaced by atoms doing the exact same job should be viewed as me dying at all.
It's not dying in the sense of there no longer being a you, but it is still dying in the sense of there being fewer of you.
To take the example of you being merged with someone, those atoms you lose, together with the ones you don't take from the other person, make enough atoms, doing the right jobs, to make a whole new person. In the symmetrical case, a second "you". That "you" could have gone on to live its own life, but now won't. Hence a "you" has died in the process.
In other words, merge is equivalent to "swap pieces then kill".
The above looks as though it will work just as well with bits, or the physical representation of bits, rather than atoms (for the symmetrical case).
If you mean that a quantitative merge on a digital computer is generally impossible, you may be right. But the example I gave suggests that merging is death in the general case, and is presumably so even for identical merges, which can be done on a computer.
When you wake up, you will almost certainly have won (a trillionth of the prize). The subsequent destruction of winners (sort of - see below) reduces your probability of being the surviving winner back to one in a billion.
Merging N people into 1 is the destruction of N-1 people - the process may be symmetrical but each of the N can only contribute 1/N of themself to the outcome.
The idea of being (N-1)/N th killed may seem a little odd at first, but less so if you compare it to the case where half of one person's brain is merged with half of a different person's (and the leftovers discarded).
EDIT: Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.
The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you". Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.
I suspect this is also why the doomsday argument fails. Since it's not possible to define a set of people who "might have had" either outcome, the argument can't be constructed in the first place.
As usual, apologies if this is already known, obvious or discredited.
Isn't the quantum part of Quantum Russian Roulette a red herring, in that the only part it plays is to make copies of the money? All the other parts of the thought-experiment work just as well in a single world where people-copiers exist.
To make the situations similar, suppose our life insurance company has been careless, and we get a payout for each copy that dies. Do you have someone press [COPY], then kill all but one of the copies before they wake?
Doesn't "harm", to a consequentialist, consist of every circumstance in which things could be better, but aren't? If a speck in the eye counts, then why not, for example, being insufficiently entertained?
If you accept consequentialism, isn't it morally right to torture someone to death so long as enough people find it funny?
Perhaps the problem here is that you're assuming that utility(probability, outcome) is the same as probability*utility(outcome). If you don't assume this, and calculate as if the utility of extra life decreased with the chance of getting it, the problem goes away, since no amount of life will drive the probability down below a certain point. This matches intuition better, for me at least.
EDIT: What's with the downvotes?
In circumstances where the law of large numbers doesn't apply, the utility of a probability of an outcome cannot be calculated from just the probability and the utility of the outcome. So I suggested how the extra rule required might look.
Are the downvotes because people think this is wrong, or irrelevant to the question, or so trivial I shouldn't have mentioned it?
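If it helps make the suggested extra rule concrete, here is a toy sketch of a Lifespan-Dilemma-style sequence of offers (each offer multiplies the promised lifespan but shaves the probability of getting anything), comparing the standard probability*utility(outcome) rule with one possible probability-dependent form. The specific form years**p, the 10**100 lifespan multiplier, and the 0.9999 probability factor are purely my own illustrative assumptions, not anything from the original comment.

```python
import math

# Each accepted offer multiplies the promised lifespan by 10**100 but
# multiplies the probability of getting anything at all by 0.9999.
# To avoid overflow we work with log(lifespan) throughout.

def standard_value(p, log_years):
    # log of p * years: the usual probability * utility(outcome) rule,
    # with utility linear in years.
    return math.log(p) + log_years

def probability_dependent_value(p, log_years):
    # log of years ** p: a toy form in which extra life is worth less
    # the less likely you are to get it.  As p -> 0, years ** p -> 1
    # no matter how large the lifespan is.
    return p * log_years

def offers_accepted(value, max_offers=100_000):
    p, log_years = 1.0, math.log(100.0)    # start: a certain 100-year lifespan
    for k in range(max_offers):
        new_p = p * 0.9999
        new_log_years = log_years + 100 * math.log(10)
        if value(new_p, new_log_years) <= value(p, log_years):
            return k, p                    # first offer that gets refused
        p, log_years = new_p, new_log_years
    return max_offers, p

print(offers_accepted(standard_value))               # accepts all 100000 offers
print(offers_accepted(probability_dependent_value))  # stops with p still ~0.37
```

Under the standard rule the product p*years keeps growing, so the probability can be driven arbitrarily low; under the toy probability-dependent rule the sequence of acceptances stops on its own, with the probability still well above zero.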
I think you may be confusing the microstate and macrostate here - the microstate may branch every-which-way, but the macrostate, i.e. the computer and its electronic state (or whatever it is the deciding system is), is very highly conserved across branching, and can be considered classically deterministic (the non-conserving paths appear as "thermodynamic" misbehaviour on the macro scale, and are hopefully rare). Since it is this macrostate which represents the decision process, impossible things don't become possible just because branching is occurring.
Whether you consider this as sabotage or not depends on what you think the goal of the site's authors was. It certainly wasn't to help find inconsistencies in people's thinking, given the obvious effort that went into constructing questions that had multiple conflicting interpretations.
there are plausible interpretations under which I would disagree and plausible interpretations under which I would agree.
Quite.
Also, almost every question is so broken as to make answering it completely futile. So much so that it's hard to believe it was an accident.
"Mostly agree" is a higher degree of agreement than "Agree"?
To "Somewhat agree" that everyone should have the vote and "Disagree" that children should have the vote is inconsistent?
Obviously this is the work of the Skrull "Scott Aaronson", whose thinking is not so clear.
My metaphor lobes appear to be on fire.
Without objective measures of utility, what could it even mean to speak of someone's utility judgements as being biased or wrong?
Warrigal gave a good recognition algorithm
Even though no bird, in the history of the world, has ever been recognised using it?
Can you give a concrete example of someone screwing up due to hyperbolic discounting in a case where there's an objective measure of utility to compare the person's estimates against?
But if agent X will (deterministically) choose action a_1, then when he asks what would happen “if” he takes alternative action a_2, he’s asking what would happen if something impossible happens.
-
would happen if something impossible happens.
But since this is the decision process that produces the "happens", both "happens" are the same "happens". In other words, it reduces to:
asking if something impossible happens.
Which is correct. Because the CSA deterministically chooses the best option, checking each action to see if it is best is synonymous with checking it to see if it is possible.
I've only just heard of PCT, so I don't know if this is familiar to everyone already, or whether it's what the PCT people had in mind all along and I'm just the last to find out, but it seems to me that PCT explains, if not the how, then at least the why of consciousness. If all actions arise from errors against a model, then the upper layers of human decision-making would consist of a simulated person living in a simulated world, which is indeed what we seem to be.
Oh yes!
Some of the ideas though - they're not the sort you would want spread.
I had assumed that microscopic reversibility and a large set of measurements were all that was required. Could you explain where my assumption is wrong?
A Work of Art, by James Blish. Enjoy...
The thing I love about lesswrong is that you're never more than one step away from an epistemological landmine, and even a simple ordinary question like "can we raise the dead" ends up as "is a person the same person just because you have no way of knowing that they aren't the same person?".
What's the current view on whether there's enough information available to reconstruct the dead?
(by which I mean the unfrozen dead)
It's a little worrying that the people trying to save us from the Robopocalypse don't have a website that can spot double-posting....
I can't think of a good explanation for anyone picking the $500
For a person who doesn't expect to get many more similar betting chances, the expectation value of the big win is unphysical.