Years ago, before coming up with even crazier ideas, Wei Dai invented a concept that I named UDASSA. One way to think of the idea is that the universe actually consists of an infinite number of Universal Turing Machines running all possible programs. Some of these programs "simulate" or even "create" virtual universes with conscious entities in them. We are those entities.
Generally, different programs can produce the same output; and even programs that produce different output can have identical subsets of their output that may include conscious entities. So we live in more than one program's output. There is no meaning to the question of which program is actually computing our observable universe. We are present in the outputs of all programs that can produce our experiences, including the Odin one.
Probability enters the picture if we consider that a UTM program of n bits is being run in 1/2^n of the UTMs (because 1/2^n of all infinite bit strings will start with that n-bit string). That means that most of our instances are present in the outputs of relatively short programs. The Odin program is much longer (we will assume) than one without him, so the overwhelming majority of our copies are in universes without Odin. Probabilistically, we can bet that it's overwhelmingly likely that Odin does not exist.
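To make the measure explicit (this is the standard universal-prior formulation, not spelled out above): the weight of an output x is

$$m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}$$

where U is the universal machine and |p| is the length of program p in bits. A program just 50 bits longer (an Odin program, say) is penalized by a factor of 2^-50, about 10^-15, which is the sense in which Odin is "overwhelmingly" unlikely.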
You make a lot of interesting points, but how do you apply them to the question at hand: what should you have for dinner, and why?
This is a fascinating topic, and I hope it attracts more commentary. As Bentarm says, it is important and relevant to each of us, yet the topic is fraught with uncertainty, and it is expensive to try to reduce the uncertainty.
I do not believe Taubes. No one book can outweigh the millions of pages of scientific research which have led to the current consensus in the field. Taubes is polemical, argumentative, biased, and one-sided in his presentation. He makes no pretense of offering an objective weighing of the evidence for and against various nutritional hypotheses. He is selling a point of view, plain and simple. No doubt he felt such a forceful approach was necessary given the enormous odds he faces in trying to gain a hearing for his ideas. But the fact remains that the reader must keep in mind that he is only hearing one side of the story.
Weighed against Taubes (and others who have advocated similar positions) we must consider the entire scientific establishment, thousands of researchers who dedicate their lives to the pursuit of knowledge. To believe Taubes, we must believe that these people are basing their entire professional careers on a foundation of falsehoods. Worse, from the lack of impact Taubes' book has had on consensus opinion, we have to imagine that researchers are willfully ignoring the truths that Taubes so convincingly reveals. Nutrition researchers are intentionally lying and covering up the truth in order to protect the false dogma of the field. (Note that this is exactly the same critique of researchers made by global warming skeptics.)
I can't believe that scientists are so dishonest, or that such a cover-up could be executed successfully. I can't imagine how any young, budding nutrition researcher could go to work in a post-Taubes world with a clean conscience, if the book is really as convincing as it claims to be.
My conclusion is that to someone intimately acquainted with the field, Taubes' book is not as persuasive as it appears to the layman.
Now, I will confess that I have some independent reasons to doubt Taubes. But I would prefer not to go into that because IMO the argument I have outlined above is sufficient. Never believe a polemical, one-sided book which has been rejected by the scientific establishment. I offer that as a valid heuristic which has proven correct in the overwhelming majority of cases.
Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.
Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.
I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates - perhaps related to the phenomenon of "violent agreement" we often see.
Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.
I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?
Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now, curiously, although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, each knows that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).
And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
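A small sketch (mine, in Python) of these announcement dynamics, assuming the standard protocol in which each announcement becomes common knowledge and eliminates the worlds inconsistent with it:

```python
from fractions import Fraction
from itertools import product

WORLDS = list(product(range(1, 7), repeat=2))  # (die1, die2), all equally likely
TRUE = (4, 5)                                  # hypothetical private rolls

def is_hit(w):
    return w[0] + w[1] == 9

def announce(player, own, ck):
    # the estimate a player announces: P(sum = 9) given their own die
    # and the current common-knowledge set of possible worlds
    poss = [w for w in ck if w[player] == own]
    return Fraction(sum(map(is_hit, poss)), len(poss))

ck = set(WORLDS)
for rnd in range(1, 4):
    a1 = announce(0, TRUE[0], ck)
    a2 = announce(1, TRUE[1], ck)
    print(f"round {rnd}: estimates {a1} and {a2}")
    # discard worlds in which either player would have announced differently
    ck = {w for w in ck
          if announce(0, w[0], ck) == a1 and announce(1, w[1], ck) == a2}
```

This prints 1/6 and 1/6 in round 1, then 1/4 and 1/4 thereafter, at which point the estimates are common knowledge and stop changing.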
Let me give an argument in favor of #4, doing what the others do, in the thermometer problem. As posed, the others seem to be behaving badly. I think in practice many people would in fact look at other thermometers too in making their guesses. So why aren't they doing it here? Two possibilities: they're stupid, or they have a good reason. An example of a good reason: some thermometers don't read properly from a side angle, so although you think you can see and read all of them, you might be wrong. (This could be solved by #3, writing down the average of the cards, but that doesn't work if everyone tries it, since everyone is waiting for everyone else to go first.)
Only if we add a stipulation to this problem, that you are usually right when everyone else is wrong, would it be a good idea to buck the crowd. And even then there is the danger that the others may have some private information that supports their seemingly illogical actions.
Actually if Omega literally materialized out of thin air before me, I would be amazed and consider him a very powerful and perhaps supernatural entity, so would probably pay him just to stay on his good side. Depending on how literally we take the "Omega appears" part of this thought experiment, it may not be as absurd as it seems.
Even if Omega just steps out of a taxi or whatever, some people in some circumstances would pay him. The Jim Carrey movie "Yes Man" is supposedly based on a true story of someone who decided to say yes to everything, and had very good results. Omega would only appear to such people.
When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly an SAI could figure out what your body must have been like just from your brain - not sure.
Now recently I have contracted a disease which will kill most of my motor neurons. So the body will be of less value and I may change to just the head.
The way motor neurons work is that there is an upper motor neuron (UMN) which descends from the motor cortex of the brain down into the spinal cord; there it synapses onto a lower motor neuron (LMN) which projects from the spinal cord to the muscle. Just two steps. In reality, though, the architecture is more complex: the LMNs receive inputs not only from UMNs but also from sensory neurons coming from the body, indirectly through interneurons located within the spinal cord. This forms a sort of loop which is responsible for simple reflexes, but also for stable standing, positioning, etc. Then there are other kinds of neurons that descend from the brain into the spinal cord, including from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job - very odd.
Like others, I see some ambiguity here. Let me assume that the substrate includes not just the neurons, but the glial and other support cells and structures; and that there needs to be blood or equivalent to supply fuel, energy and other stuff. Then the question is whether this brain as a physical entity can function as the substrate, by itself, for high level mental functions.
I would give this 95%.
That is low for me; a year ago I would probably have said 98 or 99%. But I have been learning more about the nervous system these past few months. The brain's workings seem sufficiently mysterious and counter-intuitive that I wonder if maybe there is something fundamental we are missing. And I don't mean consciousness at all, I just mean the brain's extraordinary speed and robustness.
Another sample problem domain is crossword puzzles:
Don't stop at the first good answer - You can't write in the first word that seems to fit; you need to see if it will let you build the other words.
Explore multiple approaches simultaneously - Same idea, you often can think of a few different possible words that could work in a particular area of the puzzle, and you need to keep them all in mind as you work to solve the other words.
Trust your intuitions, but don't waste too much time arguing for them - This one doesn't apply much because usually people don't fight over crossword puzzles.
Go meta - This is a big one, because usually crossword puzzles have a theme, often quite subtle, and if you look carefully you can see how your answers are building as part of a whole. This then gives you another direction to get ideas for possible answers, as things that would go with the theme, rather than just taking the clues literally.
Dissolve the question - Well, I don't know about this, but I suppose if you get frustrated enough you could throw the puzzle into the trash.
Sleep on it - This works well for this kind of puzzle, I find. Coming back to it in the morning you will often make more progress.
Be ready to recognize a good answer when you see it - Once you have enough crossing words in mind you can have good confidence that you are on the right track and go ahead and write those in, even if you don't have good ideas for some of the linked words. You need to recognize that when enough parts come together and your solution makes them fit, that is a strong clue that you are making progress, even if there are still unanswered aspects.
A perhaps similar example, sometimes I have solved geometry problems (on tests) by using analytical geometry. Transform the problem into algebra by letting point 1 be (x1,y1), point 2 be (x2,y2), etc, get equations for the lines between the points, calculate their points of intersection, and so on. Sometimes this gives the answer with just mechanical application of algebra, no real insight or pattern recognition needed.
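A toy instance of the method (my own example, not from a test): to show that the diagonals of a parallelogram bisect each other, place the vertices at (0,0), (a,0), (b,c), (a+b,c). The midpoints of the two diagonals are

$$\tfrac{1}{2}\big((0,0)+(a+b,c)\big) = \tfrac{1}{2}\big((a,0)+(b,c)\big) = \left(\tfrac{a+b}{2}, \tfrac{c}{2}\right)$$

so they coincide, by pure symbol-pushing with no geometric insight.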
I wouldn't be so quick to discard the idea of the AI persuading us that things are pretty nice the way they are. There are probably strong limits to the persuadability of human beings, so it wouldn't be a disaster. And there is a long tradition of advice regarding the (claimed) wisdom of learning to enjoy life as you find it.
I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically we could encourage people to think for themselves, so we can free-ride on the costs they experience.
Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?
Wouldn't we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn't stink?
Would we wonder why the part of the brain for hearing high pitches didn't sound like a high pitch? Why the part which feels a punch in the nose doesn't actually reach out and punch us in the nose when we lean close?
I can't help feeling that this line of questioning is bizarre and unproductive.
An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.
I don't think your question is well represented by the phrase "where is computation".
Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer's hardware.
For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Which is right? Or perhaps we can see it as a program that counts through all the even numbers by interpreting the register bits as being concatenated with a 0. There is a famous argument that we can in fact interpret this counting program as enumerating the states of any arbitrarily complex computation.
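A minimal sketch of the ambiguity (mine, in Python): the same physical process supports all three readings.

```python
# one process, three equally valid descriptions of "what is being computed"
reg = 0
for _ in range(8):
    reg += 1
    print(reg,        # reading 1: a counter being incremented
          reg & 1,    # reading 2: the lowest bit being repeatedly complemented
          reg << 1)   # reading 3: the even numbers, the register with a 0 appended
```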
Chalmers in the previous link aims to resolve the ambiguity by certain rules; basically some interpretations count and some don't. And maybe there is an unresolved ambiguity in the end. But in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain; the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.
In summary: although one can theoretically map any computation onto any physical system, for a system of the kind we believe the brain to be, with its simultaneous complexity and organizational unity, it seems likely that one could come up with a computational program that would capture the brain's behavior, claim to have qualia, and pose the same hard questions about where the color blue lies among the electronic circuits.
Thomas Nagel's classic essay What is it like to be a bat? raises the question of a bat's qualia:
Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.
I also wonder whether Deep Blue could be said to possess chess qualia of a type which are similarly inaccessible to us. When we play chess we are somewhat in the position of the man in Searle's Chinese Room who simulates a Chinese woman. We simulate Deep Blue when we play chess, and our lack of access to any chess qualia no more disproves their existence than the failure of Searle's man to understand Chinese.
Do you think it will ever be possible to say whether chess qualia exist, and what they are like? Will we ever understand what it is like to be a bat?
A bit OT, but it makes me wonder whether the scientific discoveries of the 21st century are likely to appear similarly insane to a scientist of today? Or would some be so bold as to claim that we have crossed a threshold of knowledge and/or immunity to science shock, and there are no surprises lurking out there bad enough to make us suspect insanity?
One question on your objections: how would you characterize the state of two human rationalist wannabes who have failed to reach agreement? Would you say that their disagreement is common knowledge, or instead are they uncertain if they have a disagreement?
ISTM that people usually find themselves rather certain that they are in disagreement and that this is common knowledge. Aumann's theorem seems to forbid this even if we assume that the calculations are intractable.
The rational way to characterize the situation, if in fact intractability is a practical objection, would be that each party says he is unsure of what his opinion should be, because the information is too complex for him to make a decision. If circumstances force him to adopt a belief to act on, maybe it is rational for the two to choose different actions, but they should admit that they do not really have good grounds to assume that their choice is better than the other person's. Hence they really are not certain that they are in disagreement, in accordance with the theorem. Again this is in striking contrast to actual human behavior even among wannabes.
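For reference, the formal statement (from Aumann's 1976 paper, as I understand it): if agents 1 and 2 share a common prior P, and at some state of the world their posteriors for an event E,

$$q_i = P(E \mid \mathcal{I}_i), \qquad i = 1, 2,$$

are common knowledge, then q_1 = q_2. Note that the theorem is silent on how hard it is to reach that state; it only forbids disagreement once the posteriors are genuinely common knowledge.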
Try a concrete example: Two dice are thrown, and each agent learns one die's value. In addition, each learns whether the other die is in the range 1-3 vs 4-6. Now what can we say about the sum of the dice?
Suppose player 1 sees a 2 and learns that player 2's die is in 1-3. Then he knows that player 2 knows that player 1's die is in 1-3. It is common knowledge that the sum is in 2-6.
You could graph it by drawing a 6x6 grid and circling the information partition of player 1 in one color, and player 2 in another color. You will find that the meet is a partition of 4 elements, each a 3x3 grid in one of the corners.
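A sketch (mine, in Python) that computes the meet directly, merging worlds that share a block in either player's information partition:

```python
from itertools import product
from collections import defaultdict

worlds = list(product(range(1, 7), repeat=2))

def info(player, w):
    # each player knows their own die and which half (1-3 or 4-6) the other is in
    return (w[player], w[1 - player] <= 3)

parent = {w: w for w in worlds}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

# the meet is the finest common coarsening: union-find over shared blocks
for player in (0, 1):
    blocks = defaultdict(list)
    for w in worlds:
        blocks[(player, info(player, w))].append(w)
    for block in blocks.values():
        for w in block[1:]:
            parent[find(w)] = find(block[0])

meet = defaultdict(list)
for w in worlds:
    meet[find(w)].append(w)
print(len(meet), sorted(len(b) for b in meet.values()))  # -> 4 [9, 9, 9, 9]
```

The four blocks of nine are exactly the four 3x3 corner quadrants of the grid.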
In general, anything which is common knowledge will limit the meet - that is, the element of the meet that the actual world is in will not extend to include world-states which contradict what is common knowledge. If 2 people disagree about global warming, it is probably common knowledge what the current CO2 level is and what the historical record of that level is. They agree on this data and each knows that the other agrees, etc.
The thrust of the theorem though is not what is common knowledge before, but what is common knowledge after. The claim is that it cannot be common knowledge that the two parties disagree.
How about Scott Aaronson:
http://www.scottaaronson.com/papers/agree-econ.pdf
He shows that you do not have to exchange very much information to come to agreement. Now maybe this does not address the question of the potential intractability of the deductions to reach agreement (the wannabe papers may do this) but I think it shows that it is not necessary to exchange all relevant information.
The bottom line for me is the flavor of the Aumann theorem: that there must be a reason why the other person is being so stubborn as not to be convinced by your own tenacity. I think this insight is the key to the whole conclusion and it is totally overlooked by most disagreers.
I agree about the issue of unresolved arguments. Was agreement reached and that's why the debate stopped? No way to tell.
Particularly the epic AI-foom debate between Robin and Eliezer on OB, over whether AI or brain simulations were more likely to dominate the next century, was never clearly resolved with updated probability estimates from the two participants. In fact probability estimates were rare in general. Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.
BTW sorry to see that linkrot continues to be a problem in the future.
Yes, I think that's a good explanation. One question it raises is ambiguity in thinking of QM via "many worlds". What constitutes a "world"? If we put a system into a coherent superposition, does that mean there are two worlds? Then if we transform it back into a pure state, has a world gone away? What about the fact that whether it is pure or in a superposition depends arbitrarily on the chosen basis? A pure-state vertically polarized photon is in a superposition of states using the diagonal basis. How many worlds are there, two or one? This interpretation can't be more than very metaphorical - it is "as though" there are two worlds in some sense.
Or do we only count a "world" when we have (some minimal degree of) decoherence leading to permanent separation? That way worlds never merge.
The explanation of QC in terms of MWI will vary depending on which interpretation we use. In the second one (worlds only on decoherence) the explanation is pretty much the same as in any other interpretation: we put a system into a coherent state, manipulate it into a pure state, and the measurement doesn't do anything as far as world splitting is concerned.
But in the first interpretation, we want to say that there are many different worlds, one for each possible value in the quantum registers. Then we change the amplitudes of these worlds, essentially making some of them go away so that there is only one left by the time we do the measurement. It's an odd way to think of worlds.
Here are the four papers relating to influence from the future and the LHC:
http://arxiv.org/find/physics/1/au:+Ninomiya_M/0/1/0/all/0/1
The basic idea is that these physicists have a theory that the Higgs particle would be highly unusual, such that its presence in a branch of the multiverse would greatly decrease the measure of that branch. Now I don't claim to understand their math, but it seems that this might produce a different result than the usual anthropic-type arguments regarding earth-destroying experiments.
The authors refer to an "influence from the future", and my reading is that the effect is that in a world where the future was very likely to produce a lot of Higgs particles, that would reduce the probability of that world existing (or being experienced, in the anthropic sense). Such an effect would not occur for an experiment which merely destroyed the world; such an experiment would not reduce the measure of the past. In a sense, Higgs particles destroy the past. (Keep in mind that this is a non-standard theory!)
Therefore I don't think their theory would predict our world, where it seems superficially quite likely that we will produce Higgs particles in the future. If the only thing that prevents it is unlikely events like the recent bird with a baguette that Eliezer is riffing on, let alone materializing tutued hamsters, then we are already on a branch of the multiverse whose future is full of Higgs. That should mean that our very branch is anthropically disfavored, and we should not be here.
Rather, we would expect to live in a world which never even seriously considers building an LHC. Either we would all be of a type which never developed technological civilization, or we would all be smart enough to deduce the danger of the Higgs before blundering forward and trying to build an LHC, etc.
The fact that we don't live in such a world would be an argument against the reverse-time effect, and in favor of the more conventional LHC world-destroying scenarios like black holes, strange matter, etc.
Wei, I understand the paper probably less well than you do, but I wanted to comment that p~, which you call r, is not what Robin calls a pre-prior. He uses the term pre-prior for what he calls q. p~ is simply a prior over an expanded state space created by taking into consideration all possible prior assignments. Now equation 2, the rationality condition, says that q must equal p~ (at least for some calculations), so maybe it all comes out to the same thing.
Equation 1 defines p~ in terms of the conventional prior p. Suppressing the index i since we have only one agent in this example, it says that p~(E|p) = p(E). The only relevant event E is A=heads, and p represents the prior assignment. So we have the two definitions for p~.
p~(A=heads | p=O) = O(A=heads)
p~(A=heads | p=P) = P(A=heads)
The first equals 0.6 and the second equals 0.4.
Then the rationality condition, equation 2, says
q(E | p) = p~(E | p)
and from this, your equations follow, with r substituted for q:
q(A=heads | p=O) = p~(A=heads | p=O) = O(A=heads)
q(A=heads | p=P) = p~(A=heads | p=P) = P(A=heads)
As you conclude, there is no way to satisfy these equations with the assumptions you have made on q, namely that the A event and the p-assigning events are independent, since the values of q in the two equations will be equal, but the RHS's are 0.6 and 0.4 respectively.
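The contradiction in miniature (a sketch; the 0.6 and 0.4 are the values from your example):

```python
from fractions import Fraction

O_heads = Fraction(3, 5)  # O(A=heads) = 0.6
P_heads = Fraction(2, 5)  # P(A=heads) = 0.4

# Independence of A from the prior-assignment events makes q(A=heads | p=O)
# and q(A=heads | p=P) the same number, q(A=heads); the rationality
# condition then demands that this one number equal both RHS values.
print(O_heads == P_heads)  # False - no such q exists
```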
I think you're right that the descriptive (as opposed to prescriptive) result in this case demonstrates that the programmer was irrational. Indeed it doesn't make sense to program his AI that way, not if he wants it to "track truth".
The voice banking software I'm using is from the Speech Research Lab at the University of Delaware. They say they are in the process of commercializing it; hopefully it will still be free to the disabled. Probably not looking for donations though.
Another interesting communications assistance project is Dasher. They have a Java applet demo as well as programs for PC and smart phones. It does predictive input designed to maximize effective bandwidth. A little confusing at first, but supposedly after some practice you can type fast with only minimal use of the controls. I say supposedly because I haven't used it much; it's not clear yet what I might be controlling it with. I should practice with it some more - it sounds likely to be part of an overall solution. Would be cool to control it with BCI, sit back and just think to type your messages.
Everybody with ALS talks about how terrible it is, all the things you can't do any more. But nobody seems to notice that there are all these things you get to do that you've never done before. I've never used a power wheelchair. I've never controlled a computer with my eyes. I've never had a voice synthesizer trained to mimic my natural voice. If I told people on the ALS forums that I was looking forward to some of this, they'd think I was crazy. Maybe people here will understand.
I want to thank everyone for their good wishes and, um, hugs :)
As it stands, my condition is quite good. In fact, at the time of my diagnosis two months ago, I was skeptical that it was correct. The ALS expert seemed rather smug that he had diagnosed me so early, saying that I was the least affected of any of his patients. Not only were my symptoms mild, I had had little or no progression in the three months since I had first noticed anything wrong.
However, since then there has been noticeable progression. My initial symptoms were in my speech, a slight slowing and breathlessness; shortly after, my hands felt odd and a bit shaky while writing. This was stable as I said for a few months. But in the last two months my voice has gotten much weaker and softer, and somewhat more slurred; and my hands, especially my right hand, have lost strength. My right hand is now weaker than the left, and both are weaker than my wife's hands. At this point I'd say that I'm about 90% functional.
It is annoying and worrisome that my initial symptoms are showing up in my voice and hands, the two most used and highest bandwidth sources of output available. Everyone's progression is different with this disease, so I don't know what to expect in terms of rate of progress or degree of disability at various points in the future. My whole plan revolves around retaining some degree of outgoing communication, but I had hoped to be able to wait until near the end of the progression to be forced to rely on the more exotic technologies. I seem to recall hearing about a guy with ALS who moused, and maybe even typed, with his feet, so I want to check into that.
That is a good idea about brain computer interfacing. I've only started looking into it a little. There are clinical trials going on with highly disabled ALS patients where they are testing it out. I have looked at some of the gaming headsets, but it seems that they largely pick up muscle movements in the scalp and face. Still it might be a good place to start.
Thanks again for the comments and advice.
It was actually extremely reassuring as the reality of the diagnosis sunk in. I was surprised, because I've always considered cryonics a long shot. But it turns out that in this kind of situation, it helps tremendously to have reasons for hope, and cryonics provides another avenue for a possibly favorable outcome. That is a good point that my circumstances may allow for a well controlled suspension which could improve my odds somewhat.
You're right though that with this diagnosis, life insurance is no longer an option. In retrospect I would be better off if I had purchased more life insurance for my family, as well as long term care insurance for myself. Of course, that doesn't change the considerations which made those seem to be unattractive gambles beforehand.
I am indeed signed up, having been an Alcor client for 20 years.
Ironically I chose full-body suspension as opposed to so-called neurosuspension (head only) on the theory that the spinal cord and peripheral nervous system might include information useful for reconstruction and recovery. Now it turns out that half of this data will be largely destroyed by the disease. Makes me wonder if I should convert to neuro.
Indeed even the popular (mis)conception of head-only revival wouldn't be that bad for me, not unlike the state I will have lived in for a while. In fact it would really be better in many ways if I could somehow lose my body once I become paralyzed, since it will be a potential source of pain signals and also a lot of work for caregivers to deal with. But I doubt that the technology is there yet.
"[the mind] could be a physical system that cannot be recreated by a computer"
Let me quote an argument in favor of this, despite the apparently near universal consensus here that it is wrong.
There is a school of thought that says, OK, let's suppose the mind is a computation, but it is an unsolved problem in philosophy how to determine whether a given physical system implements a given computation. In fact there is even an argument that a clock implements every computation, and it has yet to be conclusively refuted.
If the connection between physical systems and computation is intrinsically uncertain, then we can never say with certainty that two physical systems implement the same computation. In particular, we can never know that a given computer program implements the same computation as a given brain.
Therefore we cannot, in principle, recreate a mind on a computer; at least, not reliably. We can guess that it seems pretty close, but we can never know.
If LessWrongers have solved the problem of determining what counts as instantiating a computation, I'd like to hear more.
Two comments. First, your point about counterfactuals is very valid. Hofstadter wrote an essay about how we tend to automatically only consider certain counterfactuals, when an infinite variety are theoretically possible. There are many ways that the world might be changed so that Joe one-boxes. A crack in the earth might open and swallow one box, allowing Joe to take only the other. Someone might have offered Joe a billion dollars to take one box. Joe might aim to take two but suffer a neurological spasm which caused him to grasp only one box and then leave. And so on. Counterfactuals are a weak and uncertain tool.
My second point is with regard to determinism. What if the world in general, and Joe in particular, is nondeterministic? What if QM is true but the MWI is not, or some other form of nondeterminism prevails? Ideally, you should not base your analysis on the assumption of determinism.
We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligence and consciousness, independent or intrinsically linked?
Acknowledging the limited value of introspection, nevertheless I can remember times when I was close to experiencing "pure consciousness", with no conscious problem-solving activity at all. Perhaps I was entranced by a beautiful sunset, or a haunting musical performance. My whole being seemed to be pure experience, pure consciousness, with no particular need for intelligence, Bayesian optimization, goal satisfaction, or any of the other paraphernalia which we associate with intelligence. This suggests to me that it is at least plausible that consciousness does not require intelligence.
In the other direction, the idea of an intelligent problem solver devoid of consciousness is an element in many powerful fictional dystopias. Even Eliezer's paperclip maximizer partakes of this trope. It seems that we have little difficulty imagining intelligence without consciousness - without awareness, sentience, qualia, the ability to suffer.
If we provisionally assume that the two qualities are independent, it raises the question of how we might program consciousness (even if we only want to know how in order to avoid doing it accidentally). Is it possible that even relatively simple programs may be conscious, may be capable of feeling real pain and suffering, as well as pleasure and joy? Is there any kind of research program that could shed light on these questions?
Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.
I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can experience suffering and similar sensations.
Let us try to devote our attention to this question, and not the issue of what our personal policies are towards helping other people.
I thought maybe we were hearing about the LOTR story through something like the chronophone - the translation into English also translated the story into something analogous for us.
I remember reading once about an experiment that was said to make rats superstitious.
These rats were used in learning experiments. They would be put into a special cage and they'd have to do something to get a treat. Maybe they'd have to push a lever, or go to a certain spot. But they were pretty good at learning whatever they had to do. They were smart rats. They knew the score, they knew what the cage was for.
So they did a new experiment, where they put the rats into the training cage as usual. But instead of the treat depending on what they did, a treat always arrived exactly 30 seconds after they entered the cage. This continued for a while, and what happened was that each rat learned an individual behavior to bring the treat. One would go to a corner, another would turn in circles, another would stand up on its hind feet. And sure enough, the treat came. Their trick worked.
I imagine the society in Eliezer's story had something similar happen. Given the anthropic effect we are postulating, they don't actually have to do anything - a certain fraction of the worlds will get lucky and survive. But after it happens a few times, the survivors may well assume that what they were doing at the time their "luck" arrived was causative. In this case, they had a hero who seemed to get lucky. Maybe several heroes. And then somewhere they got the idea of summoning them from other worlds. After all, if they need a lucky hero to save them, they should get the luckiest heroes they can find. (I wonder what this hero had done to earn their selection?)
But there would be just as many other worlds, even instances of the exact same world, which have developed their own superstitions about what defeats the evil. They each carry out their rituals, and in each case, it works - for the survivors. We just choose to eavesdrop on a world which had a particularly interesting and amusing superstition.
Actually, why doesn't the Hero's world have a Counter-Force? Shouldn't every world have something like it? How many times has our world escaped from the brink of nuclear annihilation, for example?
Right, like the way the LHC keeps breaking before they can turn it on and have it destroy the universe. Sooner or later we'll figure out what's happening.
I agree with the logic of this analysis, but I have a problem with one of the implicit premises: that "we" should care about political issues at all, and that "we" make governmental decisions. I think this is wrong, and its wrongness explains the seemingly puzzling phenomenon of jumping from tree to forest.
There was no need for anyone beyond the jury to have an opinion on the Duke lacrosse case. We weren't making any decisions there. I certainly wasn't, anyway. So of course when people do express an interest, it is for entertainment and showing off only. They may think it is for other reasons, but it is essentially a form of social interaction, part of the status game that we are all playing. And this game is played better with big issues than with small ones.
Likewise with poor Cheerios. (It's funny - I wrote a semi-joking rant last night defending Cheerios, and as a result I now find myself quite favorably disposed to the little yellow box; an effect we have often discussed and warned against.) I don't need to have an opinion on what the FDA should be doing. They aren't asking me. Nobody's asking me. At best I can vote for a President who can appoint an FDA commissioner and perhaps set policy, but my influence on this process is infinitesimal.
So once again, if I do take an interest it will be as part of a social game, not because it's something I can do anything about.
This effect is the fundamental reason why ideology rules in politics. It's because our beliefs don't matter, so we adopt them just for fun and for a competitive edge. We don't seem to recognize this, perhaps because believing ideologies are important helps us win the game. But it explains why people are quick to see little things in the context of big issues.
A typical comment from an anti-Cheerios advocate. Is this what LW is coming to? Cheerios lovers unite!
Anyway it was probably not clear but I was a little tongue in cheek with my Cheerios rant. I think what I wrote is correct but mostly I was having fun pretending that there could be a big political battle over even the narrow issue of the Cheerios study and what it means.
I'm afraid I have to take issue with your Cheerios story in the linked comment. You say of the 4% cholesterol lowering claim, "This is false. It is based on a 'study' sponsored by General Mills where subjects took more than half their daily calories from Cheerios (apparently they ate nothing but Cheerios for two of their three daily meals)." You link to http://www.askdeb.com/blog/health/will-cheerios-really-help-lower-your-cholesterol/ but that says nothing about how much Cheerios subjects ate.
I found this article that describes the 1998 Cheerios research that is the foundation for the claim: http://findarticles.com/p/articles/mi_m0813/is_8_32/ai_n15691320/ . It says that participants ate 3 cups of Cheerios per day, while control subjects ate 3 cups of corn flakes. 1 cup of Cheerios is about 100 calories, so 3 cups would be 300, far less than "more than half their daily calories" for any reasonable adult. Further, this article goes on to report that LDL (bad cholesterol) in the Cheerios group fell from 160 to 153, which looks to me like 4%.
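Checking the arithmetic: (160 - 153) / 160 = 7/160 ≈ 4.4%, consistent with the advertised figure.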
Furthermore, my understanding is that the FDA's complaint is not with the accuracy of Cheerios' claim; it is that it is making such a claim at all, even if truthful. The FDA has a lot of rules about what kinds of health benefits products are allowed to make. It is not enough that a claim appears to be correct; the question is the depth and strength of the evidence behind it. For specific health claims, the FDA basically requires full blown clinical trials, as with drugs. According to the LA Times, http://latimesblogs.latimes.com/shopping_blog/2009/05/fda-warns-general-mills-over-cheerios-cholesterol-claims.html , "The FDA allows some health benefits of foods to be advertised but within strict limits. For instance, a company can say that a diet low in saturated fat and high in fiber-rich foods such as fruit, vegetables and whole grains may reduce the risk of heart disease."
There are reasonable questions to be raised about what policy we want to have for regulating health claims. But demonizing Cheerios and General Mills does not facilitate rational discussion of the issues. Saying that their claim is false, and exaggerating the amount of Cheerios which was eaten in their study, only serves to put Cheerios in an unjustifiably bad light.
And BTW I typically eat 600-1000 calories for breakfast, often cold cereal. Sometimes I eat Cheerios but usually I mix two or three different cereals. 300 calories of cold cereal is not difficult for me. The hard part is holding myself back to only eat that much.
Let me try restating the scenario more explicitly, see if I understand that part.
Omega comes to you and says, "There is an urn with a red or blue ball in it. I decided that if the ball were blue, I would come to you and ask you to give me 1000 utilons. Of course, you don't have to agree. I also decided that if the ball were red, I would come to you and give you 1000 utilons - but only if I predicted that if I asked you to give me the utilons in the blue-ball case, you would have agreed. If I predicted that you would not have agreed to pay in the blue-ball case, then I would not pay you in the red-ball case. Now, as it happens, I looked at the ball and found it blue. Will you pay me 1000 utilons?"
The difference from the usual case is that instead of a coin flip determining which question Omega asks, we have the ball in the urn. I am still confused about the significance of this change.
Is it that the coin flip is a random process, but that the ball may have gotten into the urn by some deterministic method?
Is it that the coin flip is done just before Omega asks the question, while the ball has been sitting in the urn, unchanged, for a long time?
Is it that we have partial information about the urn state, therefore the odds will not be 50-50, but potentially something else?
Is it the presence of a prediction market that gives us more information about what the state of the urn is?
Is it that our previous estimates, and those of the prediction market, have varied over time, rather than being relatively constant? (Are we supposed to give some credence to old views which have been superseded by newer information?)
Another difference is that in the original problem, the positive payoff was much larger than the negative one, while in this case, they are equal. Is that significant?
And once again, if this were not an Omega question, but just some random person offering a deal whose outcome depended on a coin flip vs a coin in an urn, why don't the same considerations arise?
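One way to see why the payoff symmetry matters (a sketch under my own assumptions: a 50/50 urn and a perfect predictor):

```python
import random

def play(pays_when_blue, reward=1000, cost=1000):
    # Omega's procedure: ask for `cost` on blue; on red, pay `reward` only
    # if it predicts the agent would have paid in the blue case
    if random.random() < 0.5:                   # blue ball
        return -cost if pays_when_blue else 0
    return reward if pays_when_blue else 0      # red ball

def expected(pays_when_blue, trials=100_000, **kw):
    return sum(play(pays_when_blue, **kw) for _ in range(trials)) / trials

print(expected(True), expected(False))          # ~0 vs 0: exactly break-even
print(expected(True, reward=10_000, cost=100))  # ~+4950: precommitting pays
```

With equal payoffs, the policy of paying is break-even in expectation, which may be part of why this version feels different from the original.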
Thanks for the answer, but I am afraid I am more confused than before. In the part of the post which begins, "So, new problem...", the coin is gone, and instead Omega will decide what to do based on whether an urn contains a red or blue marble, about which you have certain information. There is no coin. Can you restate your explanation in terms of the urn and marble?
I don't see where Omega the mugger plays a central role in this question. Aren't you just asking how one would guess whether a marble in an urn is red or blue, given the sources of information you describe in the last paragraph? (Your own long-term study, a suddenly-discovered predictions market.)
Isn't the answer the usual: do the best you can with all the information you have available?
Maybe a better heuristic is to consider whether your degree of assurance in your position is more or less than your average degree of assurance over all topics on which you might encounter disagreements. Hopefully there would be less of a bias on this question of whether you are more confident than usual. Then, if everyone adopted the policy of believing themselves if they are unusually confident, and believing the other person if they are less confident than usual, average accuracy would increase.
I'd agree that "in general, you should believe yourself" is a simpler rule than "in general, you should believe yourself, except when you come across someone else who has a different belief". And simplicity is a plus. There are good reasons to prefer simple rules.
The question is whether this simplicity outweighs the theoretical arguments that greater accuracy can be attained by using the more complex rule. Perhaps someone who sufficiently values simplicity can reasonably argue for adopting the first rule.
ETA: Maybe I am wrong about the first rule: it should be "in general, you should believe yourself, except when you come across evidence that you are wrong". And then the question is how strong the evidence of meeting someone who came up with a different view really is. But this brings us back to the symmetry argument: that is actually a lot stronger evidence than most people imagine.
I meant, do you have a sense of what percentage of top-level posts have comments which show the problem?
I'd like to see a more popular discussion of Aumann's disagreement theorem (and its follow-ons), and of what I believe is called Kripkean possible-world semantics, the alternative formulation of Bayesian reasoning used in Aumann's original proof. The proof is very short, just a couple of sentences, but explaining the possible-world formalism is a big job.
Tiiba, Wei's earlier post pointed to this article:
http://weidai.com/black-holes.txt
You might also need to know that computation can be done in principle almost without expending energy, and the colder you do the computation, the less energy is wasted. Hence being cold is a good thing, and black holes are very cold.
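The relevant bound is Landauer's principle: erasing one bit must dissipate at least

$$E = k_B T \ln 2 \approx 9.6 \times 10^{-24} \text{ J at } T = 1 \text{ K},$$

which falls linearly with temperature; and reversible computation, which erases nothing, can in principle evade even that.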