When Science Can't Help

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-15T07:24:25.000Z · LW · GW · Legacy · 90 comments

Once upon a time, a younger Eliezer had a stupid theory.  Let's say that Eliezer18's stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity.  This isn't the whole story, not even close, but it will do for a start.

And there came a point where I looked back, and realized:

  1. I had carefully followed everything I'd been told was Traditionally Rational, in the course of going astray.  For example, I'd been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.
  2. Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, "Oh, well, I guess my theory was wrong."

From Science's perspective, that is how things are supposed to work—happy fun for everyone.  You admitted your error!  Good for you!  Isn't that what Science is all about?

But what if I didn't want to waste ten years?

Well... Science didn't have much to say about that.  How could Science say which theory was right, in advance of the experimental test?  Science doesn't care where your theory comes from—it just says, "Go test it."

This is the great strength of Science, and also its great weakness.

Gray Area asked:

Eliezer, why are you concerned with untestable questions?

Because questions that can be easily and immediately tested are hard for Science to get wrong.

I mean, sure, when there's already definite unmistakable experimental evidence available, go with it.  Why on Earth wouldn't you?

But sometimes a question will have very large, very definite experimental consequences in your future—but you can't easily test it experimentally right now—and yet there is a strong rational argument.

Macroscopic quantum superpositions are readily testable:  It would just take nanotechnologic precision, very low temperatures, and a nice clear area of interstellar space.  Oh, sure, you can't do it right now, because it's too expensive or impossible for today's technology or something like that—but in theory, sure!  Why, maybe someday they'll run whole civilizations on macroscopically superposed quantum computers, way out in a well-swept volume of a Great Void.  (Asking what quantum non-realism says about the status of any observers inside these computers helps to reveal the underspecification of quantum non-realism.)

This doesn't seem immediately pragmatically relevant to your life, I'm guessing, but it establishes the pattern:  Not everything with future consequences is cheap to test now.

Evolutionary psychology is another example of a case where rationality has to take over from science.  While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally.  But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally.  Science would render a verdict of "not proven" on individual parts of a connected theoretical mesh that is experimentally productive as a whole.  We'd need a new kind of verdict for that, something like "indirectly supported".

Or what about cryonics?

Cryonics is an archetypal example of an extremely important issue (150,000 people die per day) that will have huge consequences in the foreseeable future, but doesn't offer definite unmistakable experimental evidence that we can get right now.

So do you say, "I don't believe in cryonics because it hasn't been experimentally proven, and you shouldn't believe in things that haven't been experimentally proven"?

Well, from a Bayesian perspective, that's incorrect.  Absence of evidence is evidence of absence only to the degree that we could reasonably expect the evidence to appear.  If someone is trumpeting that snake oil cures cancer, you can reasonably expect that, if the snake oil was actually curing cancer, some scientist would be performing a controlled study to verify it—that, at the least, doctors would be reporting case studies of amazing recoveries—and so the absence of this evidence is strong evidence of absence.  But "gaps in the fossil record" are not strong evidence against evolution; fossils form only rarely, and even if an intermediate species did in fact exist, you cannot expect with high probability that Nature will obligingly fossilize it and that the fossil will be discovered.
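
A minimal sketch of that asymmetry in Python (the principle is the post's; every number below is an invented illustration, and the model assumes for simplicity that a false hypothesis never produces the evidence):

```python
def posterior_given_no_evidence(prior, p_evidence_if_true):
    """P(hypothesis | evidence failed to appear), by Bayes' theorem.
    Simplifying assumption: a false hypothesis never yields the evidence."""
    p_remaining = prior * (1 - p_evidence_if_true)
    return p_remaining / (p_remaining + (1 - prior))

# Snake oil: if it really cured cancer, studies and case reports would
# almost certainly exist by now, so their absence is damning.
print(posterior_given_no_evidence(0.05, 0.99))  # ~0.0005

# Intermediate species: even if it existed, a discovered fossil was never
# likely, so a gap in the record barely moves the estimate.
print(posterior_given_no_evidence(0.90, 0.02))  # ~0.90
```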

Reviving a cryonically frozen mammal is just not something you'd expect to be able to do with modern technology, even if future nanotechnologies could in fact perform a successful revival.  That's how I see Bayes seeing it.

Oh, and as for the actual arguments for cryonics—I'm not going to go into those at the moment.  But if you followed the physics and anti-Zombie sequences, it should now seem a lot more plausible that whatever preserves the pattern of synapses, preserves as much of "you" as is preserved from one night's sleep to morning's waking.

Now, to be fair, someone who says, "I don't believe in cryonics because it hasn't been proven experimentally" is misapplying the rules of Science; this is not a case where science actually gives the wrong answer.  In the absence of a definite experimental test, the verdict of science here is "Not proven".  Anyone who interprets that as a rejection is taking an extra step outside of science, not a misstep within science.

John McCarthy's Wikiquotes page has him saying, "Your statements amount to saying that if AI is possible, it should be easy. Why is that?"  The Wikiquotes page doesn't say what McCarthy was responding to, but I could venture a guess.

The general mistake probably arises because there are cases where the absence of scientific proof is strong evidence—because an experiment would be readily performable, and so failure to perform it is itself suspicious.  (Though not as suspicious as I used to think—with all the strangely varied anecdotal evidence coming in from respected sources, why the hell isn't anyone testing Seth Roberts's theory of appetite suppression?)

Another confusion factor may be that if you test Pharmaceutical X on 1000 subjects and find that 56% of the control group and 57% of the experimental group recover, some people will call that a verdict of "Not proven".  I would call it an experimental verdict of "Pharmaceutical X doesn't work well, if at all".  Just because this verdict is theoretically retractable in the face of new evidence, doesn't make it ambiguous.
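
A back-of-the-envelope check of that verdict (a sketch assuming, which the text doesn't specify, that the 1000 subjects split evenly between the two groups):

```python
import math

n_control, n_treated = 500, 500      # assumed even split of the 1000 subjects
p_control, p_treated = 0.56, 0.57    # recovery rates from the hypothetical

diff = p_treated - p_control
se = math.sqrt(p_control * (1 - p_control) / n_control +
               p_treated * (1 - p_treated) / n_treated)
print(f"difference {diff:.2%}, standard error {se:.2%}, z = {diff / se:.2f}")
# z is about 0.3: the one-point difference sits deep inside the noise, and the
# same noise bound rules out any large effect -- "doesn't work well, if at all."
```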

In any case, right now you've got people dismissing cryonics out of hand as "not scientific", like it was some kind of pharmaceutical you could easily administer to 1000 patients and see what happened.  "Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital".  Maybe Martin Gardner warned them against believing in strange things without experimental evidence.  So they wait for the definite unmistakable verdict of Science, while their family and friends and 150,000 people per day are dying right now, and might or might not be savable—

—a calculated bet you could only make rationally.

The drive of Science is to obtain a mountain of evidence so huge that not even fallible human scientists can misread it.  But even that sometimes goes wrong, when people become confused about which theory predicts what, or bake extremely-hard-to-test components into an early version of their theory.  And sometimes you just can't get clear experimental evidence at all.

Either way, you have to try to do the thing that Science doesn't trust anyone to do—think rationally, and figure out the answer before you get clubbed over the head with it.

(Oh, and sometimes a disconfirming experimental result looks like:  "Your entire species has just been wiped out!  You are now scientifically required to relinquish your theory.  If you publicly recant, good for you!  Remember, it takes a strong mind to give up strongly held beliefs.  Feel free to try another hypothesis next time!")

90 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Joshua_Fox · 2008-05-15T08:10:06.000Z · LW(p) · GW(p)

Eliezer wrote "This isn't the whole story..., but it will do for a start", and in the referenced post: "This I will not describe, for it would be a long tale and complicated. I ... knew not the teachings of Tversky and Kahneman."

I've seen tantalizing hints of this "long tale," but I'd love to see the whole story, even in summary. If nothing else, it would be quite in place in a blog on Overcoming Bias.

comment by Mardonius2 · 2008-05-15T08:15:02.000Z · LW(p) · GW(p)

This reminds me of a post that I think you wrote quite a while ago, in which I think you stated that the belief that 'molecular nanotech is possible' (along with other similar beliefs) was not a scientific belief, but that it was a rational one. I wasn't entirely sure that your statement was valid then, but now, after several hundred posts of information dumping, your reasoning makes a great deal more sense.

I think some of the confusion of the 'choice between science and Bayes' occurs because science as a process does incorporate a number of Bayesian methods, but the theory of science has not yet managed to incorporate them.

comment by Psy-Kosh · 2008-05-15T08:58:49.000Z · LW(p) · GW(p)

Just a minor question about cryonics: To what extent does it preserve the synaptic weights? I.e., I'm kinda looking toward saving up to sign up, but I want to understand this bit first. It seems obviously likely that it preserves the information associated with the neural structure, but what of the information encoded in the weights?

How quickly do those decay after (regularly accepted) death? I.e., by the time the suspension process begins, are they still there? How much of that is lost in the suspension process?

comment by Xianhang_Zhang · 2008-05-15T09:22:52.000Z · LW(p) · GW(p)

A good way I've found to explain this to lay people is that Science is a very high quality way of finding out what is almost certainly wrong. If Science says something is wrong and here is why, then it most probably is correct (relative to other methods of finding truth, that is). Science is much worse at figuring out what is right, because its method of determining what is right is "Of all the possible hypotheses, we'll eliminate the wrong ones and choose the most probable of what exists". As a result, scientific knowledge is often overturned and revised, as it should be.

But what people outside of Science can't see is that almost never is a theory overturned for one which was previously considered wrong; it's usually the case that the new explanation is one that was never ruled out but considered less than probable. What this means is that, from outside of science, it's very hard to tell the difference between two very similar statements: "What you're saying is wrong because you don't have sufficient evidence to justify your claims" and "What you're saying is wrong because we've already discounted that hypothesis and here's why". Scientists can see that difference very clearly and behave in very different ways according to which argument you're making, but to the outsider, what it looks like is arrogance and close-mindedness when Scientists reject an explanation without even bothering to give it the dignity of argument.

A more succinct way of putting this is that "Science can never prove that God does not exist, but it has proved that your God does not exist".

Replies from: stcredzero
comment by stcredzero · 2012-12-07T21:41:30.896Z · LW(p) · GW(p)

Science is much worse at figuring out what is right, because its method of determining what is right is "Of all the possible hypotheses, we'll eliminate the wrong ones and choose the most probable of what exists".

Someone should write a Sherlock script, where someone uses Sherlock's principle: "when you have eliminated the impossible, whatever remains, however improbable, must be the truth," against him, so that he decisively takes the wrong action.

Replies from: Pudlovich
comment by Pudlovich · 2014-12-22T12:30:14.673Z · LW(p) · GW(p)

It was done by Doyle himself. In 1898 he published two short stories - "The Lost Special" and "The Man with the Watches" - where "an amateur reasoner of some celebrity" participates in solving a crime mystery and fails. It was written after Doyle killed off Sherlock, so he is probably parodying the character - he was quite tired of him at the time.

comment by Mother_Teresa · 2008-05-15T09:46:18.000Z · LW(p) · GW(p)

"You admitted your error! Good for you! Isn't that what Science is all about?"

If so, it is safe to conclude that George Bush and Dick Cheney are not scientists.

comment by RobinHanson · 2008-05-15T11:35:44.000Z · LW(p) · GW(p)

This mythic "Science" largely does not exist as actual social practice.

comment by Hopefully_Anonymous · 2008-05-15T12:31:14.000Z · LW(p) · GW(p)

Good point, Robin, in my opinion. Eliezer, lots of good ideas in this post, well-articulated, but I think there's sleight of hand in your distinguishing empirical science from Bayesian rationality. You're not being transparent that our confidence in Bayesian rationality stems from empirical verification. Beyond that, it's decision-making/resource allocation in the context of scarcity. We have a scarcity of time and human capital, and we need to decide how to allocate our efforts in the context of that scarcity. That doesn't take us away from empiricism, it just places hard limits on our ability to engage in it to inform our decision-making (for example, about cryonics, molecular nanotech, etc.). This false dichotomy between empirical science and Bayesian rationality doesn't help us, in my opinion, and your ideas in this post would be better served separated from that whole framework.

comment by Caledonian2 · 2008-05-15T12:31:31.000Z · LW(p) · GW(p)

And the ability of science to permit long-shot hypotheses is not a bug, it's a feature. If you want to fully explore a hypothesis space, you have to be willing to be wrong most of the time.

Even when Bayesian reasoning is applied correctly, it is obviously limited to the available data. When it determines how we seek more data, we become stuck in a feedback loop and trapped in local minimization ruts.

How much of Eliezer's behavior is because he's truly convinced himself he's found something better than the scientific method, and how much because his cherished beliefs are not supported by the scientific mainstream and so he must find a way to minimize its perceived importance?

comment by Frank_McGahon · 2008-05-15T13:28:40.000Z · LW(p) · GW(p)

The best argument against Cryonics as far as I'm concerned is economic: It's a self-negating prophecy. Once the technology exists to revive frozen people (I don't have any problem believing this will happen some day), there will be no market for cryonics - in this future, why bother signing up for cryonics when you can get revived at "death" or otherwise forestall it? - and therefore no income for cryonic companies. Who is going to maintain the freezers or revive you? Chances are everyone who cares about you is either dead or in a similar predicament.

comment by ME3 · 2008-05-15T15:19:00.000Z · LW(p) · GW(p)

If you accept that there is no "soul" and your entire consciousness exists only in the physical arrangement of your brain (I more or less believe this), then it would be the height of egotism to require someone to actively preserve your particular brain pattern for an unknown number of years until your body can be reactivated. Simply because better ones are sure to come along in the meantime.

I mean, think about your 70-year-old uncle with his outdated ways of thinking and generally eccentric behavior -- now think of a freezer full of 700-year-old uncles who want to be unfrozen as soon as the technology exists, just so they can continue making obnoxious forum posts about how they're smarter than all scientists on earth. Would you want to unfreeze them, except maybe as historical curiosities?

comment by bambi · 2008-05-15T15:26:37.000Z · LW(p) · GW(p)

Finally this sequence of posts is beginning to build to its hysterical climax. It might be difficult to convince us that doomsday probability calculations are more than swag-based-Bayesianism, but the effort will probably be entertaining. I know I love getting lost in trying to calculate "almost infinity" times "almost zero".

As a substantive point from this sequence, at least now scientists know that they should choose reasonable theories to test in preference to ridiculous ones; I'm sure that will be a very helpful insight.

comment by Vladimir_Nesov · 2008-05-15T15:28:21.000Z · LW(p) · GW(p)

Who is going to maintain the freezers or revive you?

How much does 4 GB cost now? It's free for anyone on Gmail. Would you believe that 4 GB of storage would be free in 2008 if you heard it suggested in 1955, when computers with 10 KB of memory cost $500,000? Likewise, if the revival procedure is sufficiently automated, it can become essentially free. It will probably take an AI to "manually" fix some of the damage though.

comment by Caledonian2 · 2008-05-15T15:29:57.000Z · LW(p) · GW(p)

The problem, ME, is that the people interested in cryonic preservation all think they're fantastic individuals that people in the future will be keenly interested in reviving.

No '70-year-old eccentric uncle' believes that they're not inherently special, or that they either are or will be obsolete by the time revivification technology exists.

comment by Cyan2 · 2008-05-15T15:49:48.000Z · LW(p) · GW(p)
When [Bayesian reasoning] determines how we seek more data, we become stuck in a feedback loop and trapped in local minimization ruts.

I believe this is incorrect. Bayesian reasoning says (roughly) collect the data that will help nail down your current most uncertain predictions. It's tricky to encode into Bayesian algorithms the model,

"An underspecified generalization of our current model which is constrained to give the same answers as our current model in presently available experiments but could give different answers in new experimental regimes."

But Bayesian reasoning says that this possibility is not ruled out by our current evidence or prior information, so we must continue to test our current models in new experimental regimes to optimize our posterior predictive precision.
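
One toy rendering of "nail down your most uncertain predictions" (an invented example, not from the comment): score each affordable yes/no experiment by the entropy of the current model's prediction for it, and run the least certain one.

```python
import math

def outcome_entropy(p):
    """Entropy in bits of a yes/no prediction made with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical predictions of the current model for three experiments:
predictions = {"A": 0.97, "B": 0.55, "C": 0.80}
most_informative = max(predictions, key=lambda k: outcome_entropy(predictions[k]))
print(most_informative)  # "B": the near-coin-flip prediction teaches us the most
```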

Switching topics... capital 'S' Science may be a useful literary foil, but count me among the group of people who are not convinced that it should be identified with the human activity of science.

comment by Nick_Tarleton · 2008-05-15T16:21:38.000Z · LW(p) · GW(p)

I love this post. Even though "Science" is an oversimplification of real science, the specific statements attacked aren't strawmen.

For cryonics patients to eventually be revived, the future just has to be very rich (like Vladimir says) and contain a few altruists. Sounds like a good bet. Calling trying not to die "the height of egotism" (because you ought to die to be replaced by "better... brain patterns"?) is ridiculous.

comment by poke · 2008-05-15T16:35:59.000Z · LW(p) · GW(p)

You have strange ideas about what science is. That's not surprising, since philosophy and popular science have strange ideas about what science is. Science does not involve plucking theories out of thin air and subjecting them to tests. Hypotheses themselves are born of experiments and the application of prior theory. The part of science you've chosen to eschew, the part where you obtain a formal education and spend many years integrating yourself into the professional community, happens to be the part where you learn to construct hypotheses. The fact that a hypothesis you borrowed from a popular science book written by a physicist playing outside his field of expertise was ridiculous is hardly surprising, but tells us nothing more than that you were severely underqualified to judge its merits.

Science is the application of prior science to new scientific problems; it requires specific skills and expertise (none of them involve or have any use for "logic" or "reason" or Bayesian probability theory; none of these things are taught, used or applied by scientists). A huge part of developing these skills, and of the scientific process itself, involves a long period of apprenticeship within the scientific community so that one can learn to develop reasonable hypotheses grounded in and motivated by existing science. One of the easiest and simplest ways to demarcate between science and pseudoscience is heritage: science is the offspring of science - chemistry is a child of physics and biology a child of chemistry - whereas pseudoscience exists merely as a simulacrum of science. Pseudoscience tries to appear science-like by copying the alleged methodology of science.

It's nice to see you openly admit that many-worlds, evolutionary psychology, nanotechnology and cryonics are all unscientific though.

comment by ME3 · 2008-05-15T16:48:16.000Z · LW(p) · GW(p)

Nick: Not any more ridiculous than throwing out an old computer or an old car or whatever else. If we dispense with the concept of a soul, then there is really no such thing as death, but just states of activity and inactivity for a particular brain. So if you accept that you are going to be inactive for probably decades, then what makes you think you're going to be worth reactivating?

comment by Cyan2 · 2008-05-15T17:39:53.000Z · LW(p) · GW(p)
...none of them involve or have any use for "logic" or "reason" or Bayesian probability theory; none of these things are taught, used or applied by scientists...

Logic and reason are not taught, used, or applied by scientists -- what!? I'm not sure what the scare-quotes around "logic" and "reason" are supposed to convey, but on its face, this statement is jaw-dropping.

As a working scientist, I can tell you I have fruitfully applied Bayesian probability theory, and that it has informed my entire approach to research. Don't duplicate Eliezer's approach and reduce science to a monolithic structure with sharply drawn boundaries.

I have a colleague who is not especially mathematically inclined. He likes to mess around in the data and try to get the most information possible out of it. Although it would surprise him to hear it, all of his scientific inferences can be understood as Bayesian reasoning. Bayesian probability theory is nothing more than an explicit formulation of one of the tasks that good working scientists are trained to do -- specifically, learning from data.

comment by LazyDave · 2008-05-15T17:49:07.000Z · LW(p) · GW(p)

While we are (sort of) on the topic of cryonics, who here is signed up for it? For those that are, what organization are you with, and are you going with the full-body plan, or just the brain? I'm considering Alcor's neuropreservation process.

comment by Caledonian2 · 2008-05-15T19:03:49.000Z · LW(p) · GW(p)

But science is so much MORE than that, Cyan. It has incorporated forms of reasoning that are far more subtle and powerful than Bayesian reasoning - which is why poke is mostly wrong, but not completely wrong.

The emphasis and reliance on Bayesian thought is a regression, not progress.

comment by Nick_Tarleton · 2008-05-15T19:09:33.000Z · LW(p) · GW(p)
It has incorporated forms of reasoning that are far more subtle and powerful than Bayesian reasoning

Name one that doesn't reduce to Bayes.

comment by Cyan2 · 2008-05-15T19:09:48.000Z · LW(p) · GW(p)
...forms of reasoning that are far more subtle and powerful than Bayesian reasoning...

I am always interested in expanding my repertoire. Please give examples with links if possible.

comment by Allan_Crossman · 2008-05-15T19:53:57.000Z · LW(p) · GW(p)

"Finally this sequence of posts is beginning to build to its hysterical climax. It might be difficult to convince us that doomsday probability calculations are more than swag-based-Bayesianism, but the effort will probably be entertaining."

Hmm. I've seen little to indicate that this is going to end up being a discussion of the Doomsday Argument. Still, it would be interesting to see Eliezer's own view. Everyone seems to have their own opinion as to why it's unsound (and I agree that it's unsound, for my own reasons...)

The last paragraph though is relevant to the view that nanotechnology or AI are potentially dangerous; a view we might want to accept without first creating the technologies 1000 times and seeing what percentage of the time life on Earth is wiped out. But I don't think this idea hinges on the DA.

comment by Frank_McGahon · 2008-05-15T20:02:10.000Z · LW(p) · GW(p)
For cryonics patients to eventually be revived, the future just has to be very rich (like Vladimir says) and contain a few altruists. Sounds like a good bet

Seeing as the theme of this blog is overcoming bias, one ought to be conscious of an overly hopeful bias. It may well be a deficiency of my own imagination, but I can't see the notion of reviving old geezers having much of an appeal for future altruists - but that doesn't even matter: It's likely that technology will be sufficiently advanced at some stage to postpone ageing and death, and it's probable that this will happen before the technology exists to revive "dead" people, cutting at a stroke the financial viability of cryonics companies and the income stream which keeps their freezers from defrosting. Should that happen, there just won't be any you for a future altruist to revive, should they even want to.

comment by Caledonian2 · 2008-05-15T20:06:19.000Z · LW(p) · GW(p)
Name one that doesn't reduce to Bayes.

Not all rectangles are squares, Mr. Tarleton.

If scientific reasoning is merely Bayesian, why does Eliezer tell us to abandon science in order to stick with Bayes? It seems to me that it is easy to represent strict standards of evidence within looser ones, but not vice versa. The frequency of 'Bayesian reasoners' mistaking data for evidence on this site should serve as example enough.

comment by Tom_McCabe2 · 2008-05-15T20:13:53.000Z · LW(p) · GW(p)

"If scientific reasoning is merely Bayesian,"

Scientific reasoning is an imperfect approximation of Bayesian reasoning. Using your geometric analogy, science is the process of sketching a circle, while Bayesian reasoning is a compass.

"It seems to me that it is easy to represent strict standards of evidence within looser ones, but not vice versa."

If you already understand the strict standard, it's usually easy to understand the looser standard, but not vice-versa. Physicists would have a much easier time writing literature papers than literary theorists would writing physics papers.

"The frequency of 'Bayesian reasoners' mistaking data for evidence on this site should serve as example enough."

Data, assuming it's not totally random, is always evidence for some theory. Of course, not all data is evidence for every theory.

comment by Cyan2 · 2008-05-15T20:16:51.000Z · LW(p) · GW(p)

On cryonics: it's easy to come up with poorly supported future scenarios -- either pro or con. We've heard from the cons, so here's a pro: at the point where it looks plausible to the general public that frozen dead people might be revived, pulling the plug on the freezers may appear to become morally equivalent to pulling the plug on patients with intact brains who are comatose but not medically dead. It may no longer be a purely financial question in the eye of the public, especially if some enterprising journalist decides to focus on the issue.

This sort of prognostication is a mug's game.

comment by Caledonian2 · 2008-05-15T20:36:30.000Z · LW(p) · GW(p)
Using your geometric analogy, science is the process of sketching a circle, while Bayesian reasoning is a compass.

To continue the metaphor: science is the system of reasoning we use to recognize that the compass will make approximate circles, figure out how to build compasses, and recommend that people use them when they want to draw a circle.

As a system, Bayesian reasoning is sufficiently broad and flexible that it can be used to represent more sophisticated forms of reasoning, in the same way that everyday language can represent formal logic. But as common speech is less powerful than the restricted language of logic, Bayesianism is less powerful than reasoning with tighter standards of evidence.

It is useful to look back into the primitive roots of thought. But we developed more complex tools for a reason.

comment by Nick_Tarleton · 2008-05-15T20:44:38.000Z · LW(p) · GW(p)
it's probable that this will happen before the technology exists to revive "dead" people, cutting at a stroke the financial viability of cryonics companies and the income stream which keeps their freezers from defrosting.

They have investments.

comment by Nick_Tarleton · 2008-05-15T20:46:11.000Z · LW(p) · GW(p)

I say again: name one concrete scientific process that does something Bayes can't.

comment by poke · 2008-05-15T20:51:02.000Z · LW(p) · GW(p)

Cyan, I should perhaps have noted that probabilistic techniques are used in data analysis and the statistical sciences, but I thought it was obvious I was talking about the foundations of the scientific method rather than specific applications of mathematical techniques. Scientists use algebra and calculus and complex numbers and all manner of things and none of them are therefore the foundation of the scientific method (or its "hidden structure" as Eliezer likes to say).

And, no, logic and reason are explicitly not taught, used or applied by scientists. I used scare quotes because the words as generally applied are meaningless anyway. Having been "reasonable" or "logical" means having successfully avoided a set of counterfactual conclusions somebody imagines you could have made. Scientists have no more use for such a perfectly worthless concept than do carpenters or cooks or window cleaners.

comment by poke · 2008-05-15T21:06:35.000Z · LW(p) · GW(p)

Nick, if I roll a spherical Bayes down an inclined Bayes, will it give me a good approximation of acceleration due to gravity near the Earth's surface?

comment by Frank_McGahon · 2008-05-15T21:43:21.000Z · LW(p) · GW(p)
We've heard from the cons, so here's a pro: at the point where it looks plausible to the general public that frozen dead people might be revived, pulling the plug on the freezers may appear to become morally equivalent to pulling the plug on patients with intact brains who are comatose but not medically dead. It may no longer be a purely financial question in the eye of the public, especially if some enterprising journalist decides to focus on the issue.

Talk about wishful thinking! Do people, other than family and friends, even care about pulling the plug on comatose patients now? Positing some new moral obligation towards frozen corpses arising "in the eyes of the public" is like assuming a can opener.

They have investments.

Fair enough, but crucially what they are not likely to have is a) a future stream of customers for whom it is worth maintaining their reputation and b) anyone in a position to defend the interests of their customers compelling them to honour the contract.

comment by Frank_McGahon · 2008-05-15T21:57:05.000Z · LW(p) · GW(p)

Also - I'm assuming that postponing ageing for living people is easier than reviving dead people and is likely to arrive sooner. Not that I'm in any way conversant with Bayesianism, but I'm figuring it supports this assumption - technology to extend lifespan is, as far as I can see, a necessary component of reviving dead people. So the probability of the former is necessarily higher than the probability of the latter. There is no scenario in which reviving dead people arrives first and only a tiny probability that both technologies arrive simultaneously. This means that by the time it is possible to thaw and revive there is very little to compel cryonics companies to remain in business to avail of the technology and very little to compel them to honour their contracts.

A lot of the talk about this is of the Pascal's wager variety - "But the potential payoff is so high it justifies almost any expense, a good bet" - this ignores a) (forgive me, Bayesianism) multiplying the expected payoff by the appropriate probability of that payoff, and b) the opportunity cost of the money it costs to sign up for cryonics - these are not trifling sums, particularly when considered over the purported timespan of the payoff.
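
Spelling out point (a) as arithmetic, with placeholder figures (mine, not Frank's; the point is the shape of the calculation, not the numbers):

```python
# All figures below are invented placeholders.
p_all_steps_succeed = 0.001   # assumed chance every required scenario falls into place
payoff = 1_000_000            # assumed subjective value of revival, arbitrary units
cost = 10_000                 # assumed sign-up plus opportunity cost, same units

expected_net = p_all_steps_succeed * payoff - cost
print(expected_net)  # -9000.0: negative here; other assumed numbers flip the sign
```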

comment by Cyan2 · 2008-05-15T22:32:27.000Z · LW(p) · GW(p)

poke, I take "logic" and "reason" to mean making inferences by syllogism. I really have no idea what your usage of the terms denote, so I can't speak to it. I guess we were talking past each other. But I'm not so sure it's wise to draw a sharp distinction between "the foundations of the scientific method" and what at least some scientists spend a good deal of time actually doing, i.e., specific applications of mathematical techniques.

Frank McGahon, you're missing my point. Hint: reread my first sentence and my last sentence.

comment by Hopefully_Anonymous · 2008-05-15T22:41:59.000Z · LW(p) · GW(p)

The critics of cryonics here for the most part seem to miss the attraction of cryonics, at least for people like me. I don't think the future will find me a fantastic individual worthy of reviving. I don't think there will be a market or even a will to revive me in the future. I don't even have much expectation that current cryonics is sufficient to save me from information theoretic death. I just think it's a better strategy than alternatives such as burial or cremation in maximizing my persistence odds. Maybe not a much better strategy. But still, it seems to me to be a better strategy. And my reasons for wanting to maximize my persistence odds are selfish and solipsistic. That's it.

comment by Frank_McGahon · 2008-05-15T23:02:53.000Z · LW(p) · GW(p)

Cyan, oh, I get your point, I just think it's wrong to frame it as "on the one hand, on the other hand", as if the pro and con scenarios are equally likely and it's a toss-up between the two. The reason to point out that the very technology which is necessary for Cryonics to succeed is likely to make it obsolete, and consequently unlikely to fulfil its promise, is to illustrate a fatal flaw in the concept, not merely to paint one pessimistic scenario. There are plenty of alternative pessimistic scenarios, but none of them (individually, that is) is a knockdown argument against it.

HA, to me that sounds like "I know I haven't got much of a chance of winning the lottery but if I'm not in I can't win and somebody's got to win it, right?". Sure, who wouldn't want to maximise one's persistence odds? But at what price? I'm fairly confident that you are not willing to assign your entire lifetime income minus a subsistence allowance towards Cryonics, so there's obviously some price at which you decide it's not worth it. My point is that it's likely you have miscalculated that price, given the probability of all the scenarios necessary to fall into place for the payoff. If it's not worth wasting a couple of bucks on a lottery ticket for a lifechanging payoff, it's certainly not worth wasting a much larger amount for an even more improbable payoff.

comment by Bob_Unwin7 · 2008-05-15T23:07:12.000Z · LW(p) · GW(p)

"I say again: name one concrete scientific process that does something Bayes can't."

Bayes does not explain the development of new concepts and conceptual schemes, and yet this is one important thing that the best scientists are able to do. I'm thinking of scientific revolutions in physics and biology especially, but there are many other examples (e.g. theoretical computer science, statistics, game theory, information theory, and--going back further--the notion of a mathematical proof). AFAIK, we don't have a good understanding of conceptual development. We don't know how scientists do it (how 'conceptual revolutions' originate) and we don't know how children do it. That is, we don't know how children develop the concept of a precise natural number, or of a non-factive intentional state like a belief, or of moral vs. conventional rules.

Even excluding conceptual development (which Bayes says basically nothing about), there is still much in the logic of science that isn't explained by Bayes. You presumably aren't aware of this work because you haven't looked in any systematic way at the literature on this topic. For a starting point with a good set of references, see this paper by Glymour and Kelly, both of whom have studied Bayesian methods in technical depth in their work.

http://www.hss.cmu.edu/philosophy/kelly/papers/Ch4-Glymour%20&%20Kelly-final.pdf

comment by Cyan2 · 2008-05-16T01:22:09.000Z · LW(p) · GW(p)

Bob Unwin, thanks for the link. That argument is definitely worth some careful consideration.

comment by The_Uncredible_Hallq · 2008-05-16T01:35:02.000Z · LW(p) · GW(p)

But if you followed the physics and anti-Zombie sequences, it should now seem a lot more plausible that whatever preserves the pattern of synapses, preserves as much of "you" as is preserved from one night's sleep to morning's waking.

Part of the problem here, though, is we don't even have proof-of-concept. We know the freezing process damages the brain, or else we'd already be able to revive people, no problem. Being complicated, the brain tends to get damaged in complicated ways. In spite of our best efforts to provide effective treatments for stroke victims, we don't even understand all the causes of neuronal death in strokes. Almost certainly, freezing and thawing a brain damages it in ways we don't understand yet--maybe subtle ways, throwing off biochemical processes in ways that are hard to fix. Is there any reason to think we will ever learn to deal with all of the relevant problems?

comment by Caledonian2 · 2008-05-16T01:49:02.000Z · LW(p) · GW(p)

'Ever' is a terribly long time, Hallq. All sorts of things might be possible with enough time; I think the real question is whether we'll see such a development in our lifetimes.

This isn't a matter of taking on faith where an ambulance is going. We know hospitals exist, and that all sorts of injuries can be treated; we know that there are ambulance services which take people to hospitals.

We do not know that cryonic techniques are capable of preserving a person, most especially after their death, or that they are likely to be maintained for a long duration even if they function in theory. Neither do we know that revivification techniques will be developed within even the hypothetical storage times of cryonics, nor do we have strong arguments that people in the future would bother attempting such revivification.

Getting into an ambulance requires only the assumption that known, reliable principles will hold valid in our particular case. Trying cryonics requires a leap of faith straight into the unknown for a benefit with an inestimable likelihood.

It might be an interesting gamble for someone with resources to burn, but there are no grounds for arguing it is a rational strategy - there are simply too many unknowns for that.

comment by Hopefully_Anonymous · 2008-05-16T07:13:24.000Z · LW(p) · GW(p)

Frank, you might want to read my blog to get a better sense of where I stand on this. I aspire to the position of contradicting your central presumption about me: that I would use my full resources in this life towards maximizing my persistence odds. The truth is, I think any other position is absurd, or a triumph of genes/species over me as a subjective conscious entity. The analogy I'm working with is not buying a lottery ticket for a chance at a big lifechanging payoff; it's more disaster movie survivalism. Current hedonism in the context of future nonexistence is just absurd to me, sort of like how it is for the people seeking to escape impending death rather than party hardy in disaster flicks.

Bob Unwin, fantastic post. Please start blogging.

comment by Frank_McGahon · 2008-05-16T08:47:51.000Z · LW(p) · GW(p)

HA, I'll certainly go and read your blog but just to comment on your point:

I aspire to the position of contradicting your central presumption about me: that I would use my full resources in this life towards maximizing my persistence odds. The truth is, I think any other position is absurd, or a triumph of genes/species over me as a subjective conscious entity.

Your characterisation of the case against denial in this life with the promise of an eternal after-life (now that sounds familiar...) as if it were about the interests of the genes/species set against the individual is incorrect. There are perfectly selfish reasons not to devote significant amounts of money to an extremely improbable payoff.

The analogy I'm working with is not buying a lottery ticket for a chance at a big lifechanging payoff; it's more disaster movie survivalism. Current hedonism in the context of future nonexistence is just absurd to me, sort of like how it is for the people seeking to escape impending death rather than party hardy in disaster flicks.

Your analogy is leading you astray - in a given disaster flick there's a cast of, what, less than 50? Less than 10 if you get to characters you actually care about. How many of the people who try get to survive disaster movies? I'll pick a figure out of the air - two out of ten? 20% survival odds aren't bad. There's typically a window of, what, 90 minutes or less? For a 20% chance of eluding doom, it might indeed be worth forestalling 90 minutes of "party hard" aka "living".

Try a different disaster movie scenario - let's say there's an earth-bound asteroid. It's going to wipe out life on earth in 48 hours. There is no realistic probability of averting it. Do you try and build a spaceship, do you try and build your own missile to divert it off course, or do you try and enjoy the time you have left? Assuming the probability of success (yours, the authority's or anyone else's) hasn't changed, is your answer any different if the asteroid is due in a month's time, a year's time or a decade's time?

The only reason to persist is to live and there's no reason to postpone living in the hope that maybe, just maybe, someone will end up defrosting and reviving you after death.

comment by Frank_McGahon · 2008-05-16T08:49:40.000Z · LW(p) · GW(p)

Sorry - I seem to have missed the end blockquote tag after "party hard in disaster flicks"...

comment by Vladimir_Nesov2 · 2008-05-16T08:54:28.000Z · LW(p) · GW(p)

HA: "Trying cryonics requires a leap of faith straight into the unknown for a benefit with an unestimable likelihood."

It's what probability is for, isn't it? If you don't know and don't have good prior hints, you just choose a prior at random, merely making sure that mutually exclusive outcomes sum up to 1, and then adjust with what little evidence you've got. In reality, you usually do have some prior predispositions though. You don't raise your hands in awe and exclaim that this probability is too shaky to be estimated and even thought about, because in doing so you make decisions, actions, which given your goals implicitly assume a certain assignment of probability.

In other words, if you decide not to take a bet, you implicitly assign a low probability to the outcome. It conflicts with you saying that "there are too many unknowns to make an estimation". You just made an estimation. If you don't back it up, it's as good as any other.

I assign high probability to success of cryonics (about 50%), given benevolent singularity (which is a different issue entirely, and it's not necessarily a high-probability outcome, so it can shift resulting absolute probability significantly). In other words, if information-theoretic death doesn't occur during cryopreservation (and I don't presently have noticeable reasons to believe that it does), singularity-grade AI should provide enough technological buck to revive patients "for free". Of course for the decision it's absolute probability that matters, but I have my own reasons to believe that benevolent singularity will likely be technically possible (relative to other outcomes), and I assign about 10% to it.
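
Multiplying the comment's two figures out (the factors are the commenter's; the arithmetic and the independence reading are mine):

```python
p_singularity = 0.10         # commenter's estimate of a benevolent singularity
p_success_given_it = 0.50    # commenter's estimate of cryonics working, given that
print(p_singularity * p_success_given_it)  # 0.05 absolute, if that is the whole chain
```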

comment by Hopefully_Anonymous · 2008-05-16T09:47:04.000Z · LW(p) · GW(p)

I posted a response on my blog to avoid flooding/jacking the thread.

comment by Ben_Jones · 2008-05-16T10:01:42.000Z · LW(p) · GW(p)

do you try and build your own missile to divert it off course, or do you try and enjoy the time you have left?

How about if you're told that if your missile succeeds, you can live forever in a computer? Cryonics isn't a life extension strategy, it's an immortality gambit. Otherwise no-one would bother. By the time we can defrost brains, we'll be able to scan and emulate them.

This thread has been long since jacked. Bob Unwin, great link. Science and Bayes - both great, both useful. Can't we all just get along?

comment by AnttiK · 2008-05-16T10:27:02.000Z · LW(p) · GW(p)

Late in the game and perhaps missing the point, but in order to try and understand for myself...

Your objection was that you:

(1) followed the 'method' or 'ideal' (2) as well as possible and (3) ended up with a hypothesis that was factually incorrect, (4) risked 'wasting' a very long time researching something that ended up being wrong, and (5) that the 'method' or 'ideal' does not help one to avoid this properly, (6) all of which combined make the method/ideal problematic, as it is statistically likely to also result in a high number of 'wasted years researching something useless' (or some variation of that)

--

Now, there are many ways to look at this argument.

In reference to (1) and (2): The ideal can only be approximated, but never achieved. We do as well as we can and improve (hopefully) through every trial and error.

ref (3): How did you find out this was 'wrong'? Are you sure? Can you prove it? If so, the question boils down to: how can one lower the likelihood of working on something 'wrong' for too long? A common suggestion is to share ideas even when they are being worked on: open them up for testing and attack by anyone, because a million eyeballs will do it better than two (assuming the two are within the million). A second suggestion is to work with multiple simultaneous hypotheses, all of which, according to the ideal, support current data, have predictive power, are falsifiable (via experiments) and are divergent enough as to be considered separate.

(4) How can we know the length of time if we have not 'wasted' it? How can we know the 'waste' if we have not walked that path of knowledge? I would propose that anybody who diligently, openly and humbly follows the ideal to the best of her skills will arrive at a lot of 'non-wasted' knowledge, models, thinking, publications, prestige, colleagues, positions, etc - EVEN if the whole hypothesis in the end is falsified. Just look at theoretical physics and the amount of silent evidence in the graveyard of falsified hypotheses, many of which were done by intellectually towering giants and branched off into new research areas in maths, statistics, meta-theory and philosophy. I'd love to attain a wasted failure like that myself :)

(5) This is the biggest argument. In theory I agree, in practice not quite so. Of course, the ideal method does not guarantee lack of such 'failure' (which it is not, imho, as argued above), but skillful implementation of this method can lower such a likelihood; it requires, imho, humility, openness and a constant fight against bias, something which we can never be free of, but can temporarily be aware of.

(6) Too big to tackle in a post, at least for me :)

Good blog!

comment by Frank_McGahon · 2008-05-16T11:44:01.000Z · LW(p) · GW(p)
Cryonics isn't a life extension strategy, it's an immortality gambit.

You have this the wrong way around. Cryonics can only succeed in the event of practical immortality being achieved, which requires that radical life extension is achieved. It's a necessary (but importantly not sufficient) condition of revival that the technology exist to radically extend lifespans, in which case there will be nobody signing up for cryonics and no market to develop the technology to defrost and revive.

comment by Erik3 · 2008-05-16T12:35:02.000Z · LW(p) · GW(p)

"in which case there will be nobody signing up for cryonics and no market to develop the technology to defrost and revive." ...except for all that has already died while being signed up. Their wills and possibly foundations they set up would provide large enough a market if only a sufficient amount of rich people do sign up.

Another point: if we just achieved the defrosting techniques and had a bunch of cryonically suspended individuals from say the 19th century, we would be ankle deep in the saliva from all psychologists, historians etc. that would spend all their funding to get a chance to talk to such an individual.

Disclaimer: I am not signed up for cryonics. It's just that I think that some of the dismissals offered here are not well thought through.

comment by Frank_McGahon · 2008-05-16T13:13:00.000Z · LW(p) · GW(p)
...except for all those who have already died while signed up.

It's not like they're going to be in a position to lobby for it. And there's a world of difference between a paying customer or potential customer and a will or foundation. The wishes of the dead are frequently flouted when convenient today - look at what happened to Nabokov's manuscript.

In any case, in the event that radical life extension is already here, there's just no need to solve the problem of defrosting frozen brains for paying customers so I'd expect that to be, at least, put on the back burner. Re defrosting today's individuals in the future for academic interest - given the amount of documentation of contemporary life, particularly when compared to the 19th century, I doubt that today's future frozen brains would hold much interest for future academics. And remember, we'd be talking about a time when radical life extension would be possible so there's bound to be plenty of methuselahs around to talk to - why bother going to the effort of figuring out how to defrost?

comment by Erik3 · 2008-05-17T11:30:00.000Z · LW(p) · GW(p)

I think it is quite possible for rich individuals to create structures surviving themselves such that it would be very hard to distinguish them from institutes/foundations/whatever that have a living person at the bottom. I'm not very familiar with the subject, but I would guess that there exist accounts on the Cayman Islands and similar states whose owners have died but whose ownership is too well hidden for anyone to find out.

Concerning the academic interest, I think that coming generations will, like us, find the preceding documentation terribly lacking. "Written records and movies? Why didn't they upload and archive their brains?" People living in the early days of writing would probably consider themselves to be saving all that anyone in later times could wish to know, and compared to a society without writing, they would be at least partly justified in that belief. We're in the middle of the fourth information revolution (the preceding being the invention of writing, book binding and the printing press*) and we shouldn't underestimate in what new ways and to what greater extent information will be stored and used in the future.

*You probably could add language and perhaps some other things I haven't thought of; it depends on what timescales you're interested in.

("In any case, in the event that radical life extension is already here, there's just no need to solve the problem of defrosting frozen brains for paying customers so I'd expect that to be, at least, put on the back burner." I guess you just where careless with the "In any case" here; in exactly the case we where discussing, the assertion is false.)

comment by Patrick_(orthonormal) · 2008-05-18T11:57:00.000Z · LW(p) · GW(p)

Sorry to be late to the party— but has nobody yet mentioned the effect that MWI has on assessing cryonics from a personal standpoint; i.e. that your subjective probability of being revived should very nearly be your probability estimate that revival will happen in some universe? If 9/10 of future worlds destroy all cryogenic chambers, and 9/10 of the ones left don't bother to revive you, then it doesn't matter to you: you'll still wake up and find yourself in the hundredth world. Such factors only matter if you think your revival would be a significant benefit to the rest of humanity (rather unlikely, in my estimation).

(Yes, there are quirks to be discussed in this idea. I've thought about some of them already, but I might have missed others. Anyhow, it's getting early.)
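
The "hundredth world" arithmetic made explicit (a sketch of the conditioning step the comment relies on, using its own 9/10 figures):

```python
p_chambers_survive = 0.1         # 1/10 of futures keep the cryogenic chambers
p_revived_given_survive = 0.1    # 1/10 of those futures bother to revive you
p_revival = p_chambers_survive * p_revived_given_survive
print(p_revival)                 # 0.01: one branch in a hundred

# The comment's move: condition on branches containing any experience of
# yours at all. On that view those are exactly the revival branches, so the
# subjective probability works out to p_revival / p_revival = 1.0.
```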

comment by Daublin · 2008-05-18T14:18:00.000Z · LW(p) · GW(p)

You describe only a part of science. In addition to testing hypotheses, scientists spend a lot of time developing hypotheses. In fact, they probably spend more time developing hypotheses than testing them. They tinker around in a lab trying various things, and they search for better mental models so that their thinking becomes more effective. A major class of breakthrough for a scientist is to make a realization of the form, "A looks an awful lot like B".

If you don't want to waste ten years, spend some time on hypothesis development. That's not outside of science, but a core part of it.

comment by Frank_McGahon · 2008-05-19T09:47:00.000Z · LW(p) · GW(p)

Erik, in the event that RLE is already here, there will be no future stream of "paying customers", as they will surely avail themselves of RLE - that's what I meant. Therefore this market, at least, won't be driving innovation in the "how to revive a frozen brain" problem.

Fair point about how our own recording of information might look to future generations; however, the many Methuselahs issue remains. It may be that my imagination is lacking, or it may be that cryonics advocates are biased to overweight any indicators that cryonics might succeed, but I can't see that the purported desire of future (social science) academics to defrost and revive a few cryonic customers (a handful is all you'd need) would be sufficient pressure to encourage (actual science) academics to solve a tricky (says my limited imagination) problem which would have no significant widespread benefit. I also stand by the notion that wills and foundations are different from (capricious) individuals.

Patrick, I think this is where people don't really adjust their intuitions properly about MW. It's not just that there's a branching-off world for every major world event, or even every major personal event - it's an infinite (or as near as makes no difference) branching taking place all the time. You still have to live in your world. Your suggestion makes the same sense as saying that someone with an incurable illness ought to be hopeful when he falls asleep because he might wake up in a world where a cure for that illness has been discovered overnight.

comment by Patrick_(orthonormal) · 2008-05-19T23:39:00.000Z · LW(p) · GW(p)

Bad analogy, actually. If I have an incurable terminal illness today and fall asleep, I'll still have an incurable terminal illness in most of the worlds in which I wake up— so I should assign a very low subjective probability to finding myself cured tomorrow. (Or, more precisely, the vast majority of the configurations that contain someone with all my memories up to that point will be ones in which I'm waking up the next day with the illness.)

I'm not quite sure how it might play out subjectively at the very end of life sans cryonics; this is where the idea of quantum suicide gets weird, with one-in-way-more-than-a-million chances subjectively coming to pass. However, if I'm signed up for cryonics, and if there's a significant chance I'll be revived someday, that probability by far overwhelms those weird possibilities for continued consciousness: in the vast majority of worlds where someone has my memories up to that point, that someone will be a revived post-cryonic me. Thus I should subjectively assign a high probability to being revived.

Or so I think.

comment by Frank_McGahon · 2008-05-20T09:23:00.000Z · LW(p) · GW(p)

Ok, then say you are definitely going to die of that illness tonight - that is, you won't wake up in the morning. It's preposterous to suggest that any consolation would be provided by the notion that in some parallel universe a cure is invented and implemented overnight and "you" will wake up cured in that universe.

comment by Nick_Tarleton · 2008-05-20T12:22:00.000Z · LW(p) · GW(p)
It's preposterous to suggest that any consolation would be provided by the notion that in some parallel universe a cure is invented and implemented overnight and "you" will wake up cured in that universe.

But I'm already in that universe. "I" am the set of all processes having my experience.

comment by Patrick_(orthonormal) · 2008-05-21T09:01:00.000Z · LW(p) · GW(p)

Frank, I think you have an idea that many-worlds means a bunch of parallel universes, each with a single past and future, like parallel train tracks. That is most emphatically not what the interpretation means. Rather*, all of the universes with my current state in their history are actual futures that the current me will experience (weighted by the Born probabilities).

If there's an event which I might or might not witness (but which won't interfere with my existence), then that's really saying that there are versions of me that witness it and versions of me that don't. But when it comes to death, the only versions of me that notice anything are the ones that notice they're still alive. So I really should anticipate waking up alive— but my family should anticipate me being dead the next day, because most of their future versions live in worlds where I've passed on.
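For concreteness, the weighting here is just the standard Born rule (textbook quantum mechanics, nothing specific to this thread): if the current state decomposes into branches $|i\rangle$ with amplitudes $c_i$, then

$$|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad p_i = |c_i|^2, \qquad \sum_i p_i = 1,$$

and $p_i$ is the weight a many-worlder assigns to experiencing branch $i$.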

The conclusion above is contentious even among those who believe the many-worlds interpretation; however, the rejection of the 'parallel tracks' analogy is not contentious in the least. If (as you indicate) you think that you have one future and that the version of you who will be miraculously cured overnight isn't the same you, then you have misunderstood the many-worlds interpretation.

*This is an oversimplification and falsification, of course, but it's a damn sight closer than the other image.

comment by Mayson_Lancaster · 2008-05-30T22:01:00.000Z · LW(p) · GW(p)

Why will they unfreeze us?

Well, if they're pretty rich, which is likely given that they're technologically advanced enough to do so, they may well still have history and sociology departments at universities, with ambitious grad students and professors with grant money to spend. Welcome to being an adjunct lecturer (or would that be lab rat?) for History 201 (An Introduction to the Crazy Years).

comment by stcredzero · 2012-12-07T21:38:29.731Z · LW(p) · GW(p)

"Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital".

There was a time when expecting mothers did the rational thing by not going to the maternity ward. http://www.ehso.com/ehshome/washing_hands.htm#History

Resources to be devoted to cryonics and a future lifespan could also be devoted to the lifespan you are fairly sure you have right now. The situation would be more like getting into an ambulance, when there have been no known successful arrivals of ambulance trips and many known failures.

Replies from: DaFranker
comment by DaFranker · 2012-12-07T21:59:51.016Z · LW(p) · GW(p)

Ahem. Am I reading this right?

There's a 20-year-old human with three days left to live. They have a choice: either they spend a million dollars having fun during those three days, or they invest that million dollars in research to find a cure for their unique illness and put themselves on life support in the meantime. There is only a 10% chance that a cure will be found within <10 years (after which life support fails), but if it is found, they gain all of their remaining life expectancy, which is probably more than 50 years.

You're telling us that everyone should party with the million dollars for three days, and then die.
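A minimal expected-value sketch of the hypothetical, assuming utility is simply linear in life-years (and ignoring whatever the party itself is worth); all numbers are the made-up ones above:

```python
# Expected life-years under the made-up numbers above.
# Assumes utility is linear in life-years; the party's fun is ignored.

p_cure = 0.10          # chance a cure is found before life support fails
years_if_cured = 50    # remaining life expectancy if cured

party = 3 / 365                      # ~0.008 expected life-years
research = p_cure * years_if_cured   # 5.0 expected life-years

print(f"party:    {party:.3f} expected life-years")
print(f"research: {research:.1f} expected life-years")
```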

Replies from: Kindly, stcredzero
comment by Kindly · 2012-12-07T23:40:44.901Z · LW(p) · GW(p)

Except for different values of 20, three, a million, 10%, <10, and 50.

Replies from: DaFranker
comment by DaFranker · 2012-12-10T14:39:51.194Z · LW(p) · GW(p)

Yes, though with my current value-estimates that's as close as I can get to the same relative expected utility without doing some heavy number-crunching that isn't warranted considering both the situation and the accuracy of my estimates.

comment by stcredzero · 2012-12-09T23:58:55.939Z · LW(p) · GW(p)

You're telling us that everyone should party with the million dollars for three days, and then die.

[Citation Needed] Ahem.

No, I'm not saying that. I'm presenting the other position in a light that makes it understandable. Your analogy is incomplete. What if they could also donate that million dollars to other research that could increase the life expectancy of 1000 people by 1 year, with 90% certainty?
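Extending the same back-of-the-envelope sketch with that option, taking the made-up numbers at face value and assuming life-years simply aggregate across people:

```python
# Adding the donation option to the earlier sketch.
p_success = 0.90
people_helped = 1000
years_each = 1

donate = p_success * people_helped * years_each  # 900 expected life-years
print(f"donate:   {donate:.0f} expected life-years, spread over 1000 people")
```

By raw aggregate life-years the donation dominates, which is the opportunity-cost point; whether life-years should aggregate across people like that is, of course, the contentious part.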

Replies from: DaFranker
comment by DaFranker · 2012-12-10T14:59:49.056Z · LW(p) · GW(p)

Ah, yes, of course. I hadn't included any opportunity costs in the calculation, and (perhaps deliberately, though if so I can't remember why) framed the problem as a two-option dilemma when in real life it's obvious to most that this is a false dilemma.

As I stated in response to another comment, these were rough same-ballpark-expected-utility numbers. My response was attempting to make a closer-to-real-world referent available as a contrast to the ambulance situation, and to illustrate the other numbers of the equation as proportionally as possible (to the resulting EU; the individual numbers aren't nearly in the right orders of magnitude for real cryo).

I'm not claiming that I have an actual solution to the problem or know which is the right thing to do out of all the many options (there are more than the three we've said here, I'm rather confident we agree on that), even for my own utility function, partially because of the black box problem but also because of a lack of information and credence in my current estimates of the various numbers.

comment by alex_zag_al · 2015-11-14T11:13:31.333Z · LW(p) · GW(p)

After a few years in grad school, I think the principles of science are different from what you've picked up from your own sources.

In particular, this stands out to me as incorrect:

(1) I had carefully followed everything I'd been told was Traditionally Rational, in the course of going astray. For example, I'd been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.

My training in writing grant applications contradicts this depiction of science. A grant has an introduction that reviews the facts of the field. It is followed by your hypothesis, and the mark of a promising grant is that the hypothesis looks obvious given your depiction of the facts. In fact, it is best if your introduction causes the reader to think of the hypothesis themselves, and anticipate its introduction.

This key feature of a good hypothesis is totally separate from its falsifiability (important later in the application). And remember, the hypothesis has to appear obvious in the eyes of a senior in the field, since that's who judges your proposal. Can you say this for your stupid theory?

(2) Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, "Oh, well, I guess my theory was wrong."

Given the above, the social practice of science would not have funded you to work for ten years on this theory. And this reflects the social practice's implementation of the ideals of Science. The ideals say your hypothesis, while testable, is stupid.

I think you have a misconception about how science handles stupid testable ideas. However, I can't think of a way that this undermines this sequence, which is about how science handles rational untestable ideas.

EDIT: it seems poke said all this years ago.

comment by 0player · 2016-11-01T16:18:13.850Z · LW(p) · GW(p)

Concerning cryonics, you seem to be operating under the assumption that future scientists would actually want to revive anyone, which is not exactly rational. Yeah, conquering Death and all that, but humans aren't that well-trainable, so would you really expect that they'd find us able to adapt to a world which has advanced beyond belief?

comment by Guillaume Charrier (guillaume-charrier) · 2023-02-28T22:45:34.094Z · LW(p) · GW(p)

My working theory since ~1st grade is that math is consistent and therefore worth learning. But of course, Goedel says I can't prove it. I derive some Bayesian comfort, though, as I see more and more mathematical propositions added to the pile of propositions proven true, and as they obligingly keep on not contradicting each other.

Replies from: guillaume-charrier
comment by Guillaume Charrier (guillaume-charrier) · 2023-02-28T22:50:00.581Z · LW(p) · GW(p)

Full disclosure: I also didn't really have a say in the matter; my dad said I had to learn it anyhow. So. I wonder if that's because he was a Bayesian.

comment by Portia (Making_Philosophy_Better) · 2023-05-18T21:03:12.438Z · LW(p) · GW(p)

This is... really not how scientific practice works, though.

This is how some older philosophers of science thought science ought to work. Namely Karl Popper. He had some points, for sure, but notably he was not a scientist himself, so he was speculating about a practice he was not a part of - and having described to scientists the laws that ought to govern how they act, he had to discover that they in fact did not act that way, nor did they agree that they would get better results if they did. Philosophy of science really took off as an entire discipline here, and a lot of it pointed out huge aspects that Popper had overlooked, or outright contradictions between his ideas and actual scientific practice - in part because his clean idea of falsification does not translate well to the testing of complex theories.

Instead of speculating about how science might work, and then saying it is bad, let's look at how it actually does work, to see if your criticism applies. Say you applied for a grant to develop this theory of yours. Or submitted a talk on it at a scientific conference. Or drafted it as a project plan for an academic position. This is usually when the scientific community determines whether something should go further.

They'd ask you all sorts of things. Where did your idea come from? Is your theory consistent with existing theories? If not, does it plausibly promise to resolve their issues while doing a comparably good job, and do you have an explanation for why the evidence so far pointed so strongly to the existing theories? What observations do you have that suggest your theory is likely? Is it internally consistent? Does it explain a lot, while being relatively simple with relatively few extra assumptions? What mechanism are you proposing, how does it look in detail, and is it plausible? Can you show us mathematical equations, or code? If your theory were correct, what would follow - are there useful applications, new things we could do or understand that we previously could not, new areas? If we gave you money to pursue this for 3 years, what tangible results do you think you are likely to produce, and what is your step-by-step plan to get there? What do you want to calculate, what do you want to define, what experiments do you intend to run, and what will all this produce? If the answers to these seem plausible and promising, the next step would be getting some experts in quantum physics, neuroscience, and philosophy of mind to read your work and ask you some very critical questions - can you answer those questions? Do they think the resulting theory is promising? There is no one simple set of rules or criteria, but the process is not random, either. And it gives a relatively decent assessment of whether a theory is plausible enough to invest in.

I've mentioned a survey among researchers of consciousness on LessWrong before: https://academic.oup.com/nc/article/2022/1/niac011/6663928 Note that, interestingly, the researchers are asked to evaluate whether the various theories of consciousness the survey goes through are, in their opinion, promising. None of them can be falsified yet, but that does not mean they are all given the same amount of resources. They all clearly understand the question, and give clear answers. And quantum theories come at the very end of the list. Regular science was absolutely equipped to answer this very question, prior to any falsification.

Replies from: TAG, Mitchell_Porter
comment by TAG · 2023-05-18T23:34:31.794Z · LW(p) · GW(p)

Popper was never a working scientist, but "In 1928, Popper earned a doctorate in psychology, under the supervision of Karl Bühler—with Moritz Schlick being the second chair of the thesis committee" (Schlick was a famous Vienna Circle figure).

Replies from: Making_Philosophy_Better
comment by Portia (Making_Philosophy_Better) · 2023-05-20T15:45:07.130Z · LW(p) · GW(p)

I am not saying Popper was scientifically illiterate at all. I find falsification a beautiful ideal, and have admiration for him.

But I am saying that you get a very different philosophy of science if you base your writings not on abstract reflections about how a perfect science ought to work, but on doing experiments yourself - Popper's thesis was "On the Problem of Method in the Psychology of Thinking" - and, more importantly, on observing researchers doing actual, effective research, and how it is determined which theories make it and which don't.

And I am saying that the messiness of real science makes pure falsification naive and even counterproductive - it rules out some things too late (which should have been given up as non-promising), and others too early (when their core idea was brilliant, but the initial way of phrasing it was still faulty, or needed additional constraints; theories, when first developed, aren't yet finished). Look at paradigmatic revolutions in science, where they actually came from, and what impact falsifying experiments actually had: many theories we now recognise as clearly superior to the ones they supplanted were falsified in their initial imperfect formulation, or due to false external implicit assumptions, or due to faulty measuring instruments; and yet the researchers did not give up on them, and turned out to be right not to. They did the very things Popper was so worried about - make a theory, make a prediction, do an experiment, see the prediction fail - and kept the theory anyway, adapting it to the result. At which point this becomes perfecting a promising theory into a coherent beauty that explains all prior observations and now also makes precise novel predictions that come true, and at which point it becomes patching up utter nonsense with endless random additions that make no sense except to account for the bonkers results, is not a trivial question to answer, but it is an important one.

Take the classic switch to placing the sun at the center of the solar system, rather than the earth. Absolutely the correct move. It also initially led to absolute nonsense in the predictions, because the old, false theory had been patched up so many times to match observations that it could predict quite a bit, while the new theory, being wrong about a huge number of other factors in how planets move, was totally off. If you put the sun at the center, but assume the planets run in perfect circles around it, and have not got the faintest idea how gravity works, a planet's actual location will be very different from the one you predicted - but the thing that is wrong here is not your idea that the sun ought to be at the center; it is the idea that a planet circles the sun in a perfect circle. In practice, figuring out which of your assumptions led to the mess is not that easy, but it really has to be done in the long run.

Imre Lakatos did a decent attempt of tracing this, also integrating Thomas Kuhn's excellent ideas on paradigm shifts in science.

comment by Mitchell_Porter · 2023-05-19T03:57:02.663Z · LW(p) · GW(p)

Regular science was absolutely equipped to answer this very question, prior to any falsification.

Almost half of respondents to the poll (46%) are neutral or positive towards quantum theories of consciousness. That's not a decisive verdict in either direction. 

Replies from: Making_Philosophy_Better
comment by Portia (Making_Philosophy_Better) · 2023-05-20T15:20:29.298Z · LW(p) · GW(p)

De facto, it is - and honestly, the way you are grouping the numbers misrepresents the result. Of the ten theories or theory clusters evaluated, the entire group of quantum theories fares worst by a significant margin, to a degree that makes it clear that there won't be significant funding or attention going here. You make it appear less bad by lumping the minuscule number of people who said this theory definitely holds promise (about 1%) and the people who thought it probably holds promise (about 15%) together with the much larger number of people who selected "neutral on whether this theory is promising", while ignoring that this theory got by far the highest number of people saying "definitely no promise". Look at the visual representation, in the context of the other theories.
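To see how much the grouping matters, here is a toy regrouping with made-up response counts in roughly the proportions just described (the exact numbers are in the linked paper):

```python
# How grouping changes the headline: made-up distribution, in percent,
# roughly matching the proportions described above.
responses = {"definitely": 1, "probably": 15, "neutral": 30,
             "probably not": 24, "definitely not": 30}

neutral_or_positive = sum(responses[k] for k in ("definitely", "probably", "neutral"))
actively_positive = responses["definitely"] + responses["probably"]

print(f"'neutral or positive': {neutral_or_positive}%")  # ~46%, sounds open
print(f"actively positive:     {actively_positive}%")    # ~16%, looks marginal
```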

And why do a significant number of people say "neutral"? I took this to mean "I'm not familiar enough with it to give a qualified opinion" - which implies that it did not make it into their journals, conferences, university curricula, paper reading lists, etc. enough for them to seriously engage with it, despite its having been around for decades. That is itself an indication of the take the general scientific community has on this - it just isn't getting picked up, because over and over, people judge it not worth investing in.

Compare how the theories higher up in the ranking have significantly lower numbers of neutral responses - even those researchers who in the end concluded that this is not the right direction saw these theories (global workspace, predictive processing, IIT) as worth properly engaging with, based on how the rest of the community framed them. E.g. I think global workspace misses the phenomenon I am most interested in (sentience/p-consciousness), but I do recognise that it has useful things to say about access consciousness which are promising to spell out further. I do think IIT is wrong - but honestly, making a theory mathematically precise enough to be judged wrong, rather than just vague/unclear, already constitutes progress we can learn from.

But I share the assessment of my fellow researchers here - no quantum theory ever struck me as promising enough for me to even sit down for a couple of workdays to work my way through it. (I wondered whether this was because I subconsciously judged quantum phenomena to be too hard, so I once showed one to my girlfriend, a postdoc who works in quantum physics in academia for a living... and whose assessment was, you guessed it, "This is meaningless, where are the equations? ... Oh dear God, what is up with this notation? What, this does not follow! What is that supposed to even mean? ... I am sorry, do you really need me to look at this? This nonsense does not seem worth my time.")

If a conference offered a quantum theories talk and another on something else, I'd almost certainly go to the other one - and if the other one was also lame/unpromising, I'd be more likely to retreat to my hotel room to meditate or work out so as to be energised for a later talk, take my reMarkable and dig into my endless reading list of promising papers, or grab a coffee and hit up a colleague in the mingle areas about an earlier awesome talk and a potential collaboration. There is so much promising stuff to keep up with, so much to learn, to practice, to write out, to teach, to support, to fund, and so little money and time, that people just cannot afford to engage with things that do not seem promising.

If over half the scientific community judges something not worth pursuing (so they have decided at minimum not to engage with it or actively support it), of which half are so strongly opposed that they will typically actively block funding allocations, speaking slots, or publications in this direction as a waste of resources and a diversion, and the majority of the remainder are not interested enough to even have an opinion, while the number of genuine supporters is minuscule... this is not the sign of a theory that is going anywhere. A paradigm-shifting theory might have significant opposition, but it also has significant, proud, and vibrant supporters, and most people have an opinion on it. This is really clearly not the case here. Instead, it occupies the horrible middle ground between ridiculed and ignored, which amounts to death in science. Frankly, I was surprised it even made it onto the survey, and I wondered if they just put it on there to make clear what the research community thought on the issue - I doubt they will still have it on the next one.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-05-21T00:17:22.754Z · LW(p) · GW(p)

So what do you make of there being a major consciousness conference just a few days from now, with Anil Seth and David Chalmers as keynote speakers, in which at least 2 out of 9 plenary sessions have a quantum component? 

Replies from: Making_Philosophy_Better
comment by Portia (Making_Philosophy_Better) · 2023-05-21T15:29:10.446Z · LW(p) · GW(p)

Of the nine plenary sessions, I see one explicitly on quantum theories. Held by the anesthesiologist Stuart Hameroff himself, who I assume was invited by... the organiser and center director, Stuart Hameroff.

Let me quote literal Wikipedia on this conference here: "The conference and its main organizers were the subject of a long feature in June 2018, first in the Chronicle of Higher Education, and re-published in The Guardian. Tom Bartlett concluded that the conference was "more or less the Stuart [Hameroff] Show. He decides who will and who will not present. [...] Some consciousness researchers believe that the whole shindig has gone off the rails, that it’s seriously damaging the field of consciousness studies, and that it should be shut down."

For context, the Stuart Hameroff mentioned here is well-known for being a quantum proponent, has been pushing for this since the 80's, and has been very, very broadly criticised on this for a long time, without that going much of anywhere.

I assume Chalmers agreed to go because when this conference first started, Chalmers was a founding part of it, and it was really good back then - but you'd have to ask him.

I'd be pleased to be wrong - maybe they have come up with totally novel evidence, we will understand a whole lot more about consciousness via quantum effects, and we will feel bad for having dismissed him. But I am not planning on being there to check personally; I have too much other stuff to do that I am overwhelmed with, and I really try to avoid flying when I can help it. Unsure how many others that is true of - the Wikipedia article has the interesting note "Each conference attracts hundreds[citation needed] of attendees." If the stuff said there is genuinely new and plausible enough to warrant re-evaluation, I expect it will make it round the grapevine. Which was the point I was making.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-05-22T22:52:09.363Z · LW(p) · GW(p)

I have actually worked with Stuart Hameroff! So I should stop being coy: I pay close attention to quantum mind theories, I have specific reasons to take them seriously, and I know enough to independently evaluate the physics component of a new theory when it shows up. This is one of those situations where it would take something much more concrete than an opinion poll to affect my views. 

But if I were a complete outsider, trying to judge the plausibility of such a hypothesis, solely on the basis of the sociological evidence you've provided... I hope I'd still only be mildly negative about it? In the poll, only 50% of the researchers expressly disapprove. A little investigation reveals that there are two conferences, TSC and ASSC; that ASSC allows a broader range of topics than TSC; and that quantum mind theories are absent from ASSC, but have a haven at TSC because the main organizer favors them. ASSC can say quantum mind is being artificially kept alive by an influential figure, TSC can say he's saving it from the prejudice of professional groupthink. 

(By the way, the other TSC plenary that I counted as partly quantum is "EM & Resonance Theories", because it proposes to ground consciousness in a fundamental physical field.)

Replies from: adele-lopez-1, Making_Philosophy_Better
comment by Adele Lopez (adele-lopez-1) · 2023-05-23T01:33:16.418Z · LW(p) · GW(p)

What specific reasons do you have to take them seriously?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-05-23T05:26:44.163Z · LW(p) · GW(p)

The main reason is the fuzzy physical ontology of standard computational states, and how that makes them unsuitable as the mereological base for consciousness. When we ascribe a computational state to something like a transistor, we're not talking about a crisply objective property. The physical criterion for standard computational ontology is functional: if the device performs a certain role reliably enough, then we say it's in a 0 state, or a 1 state, or whatever. But physically, there are always possible edge states, in which the performance of the computational role is less and less reliable. It's a kind of sorites problem. 
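A toy illustration of the point, with made-up (but typical of TTL logic) voltage thresholds spelled out:

```python
# A "bit" is defined by conventional voltage thresholds, not by any
# crisp physical property. Thresholds of 0.8 V / 2.0 V are typical of
# TTL logic; the exact values here are illustrative.

def read_bit(voltage: float) -> str:
    if voltage <= 0.8:
        return "0"
    if voltage >= 2.0:
        return "1"
    return "undefined"  # physically real state, computationally ambiguous

for v in (0.2, 0.8, 1.4, 2.0, 3.3):
    print(f"{v:.1f} V -> {read_bit(v)}")
```

The "undefined" band is physically perfectly real; it is only the computational description that has nothing to say about it.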

For engineering, the vagueness of edge states doesn't matter, so long as you prevent them from occurring. Ontology is different. If something has an observer-independent existence, then for all possible states, either it's there or it's not. Consciousness must satisfy this criterion, standard computational states cannot, therefore consciousness cannot be founded on standard computational states. 

For me, this provides a huge incentive to look for quantum effects in the brain being functionally relevant to cognition and consciousness - because the quantum world introduces different kinds of ontological possibilities. Basically, one might look for reservoirs of entanglement that are coupled to the classical computational processes which form the whole of present-day cognitive neuroscience. Candidates would include various collective modes of photons, electrons, and phonons, in cytoplasmic water or polymeric structures like microtubules. I feel like the biggest challenge is to get entanglement on a scale larger than the individual cell; I should look at Michael Levin's stuff from that perspective some time.

Just showing that entanglement matters at some stage of cognition doesn't solve my vagueness problem, but it does lead to new mereological possibilities, that appear to be badly needed. 

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2023-05-23T06:00:59.188Z · LW(p) · GW(p)

If something has an observer-independent existence, then for all possible states, either it's there or it's not.

Should I infer that you don't believe in many worlds?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-05-23T11:32:30.201Z · LW(p) · GW(p)

Many worlds is an ontological possibility. I don't regard it as favored ahead of one-world ontologies. I'm not aware of a fully satisfactory, rigorous, realist ontology, even just for relativistic QFT. 

Is there a clash between many worlds and what you quoted? 

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2023-05-25T03:59:35.442Z · LW(p) · GW(p)

I was thinking that "either it's there or it's not" as applied to a conscious state would imply you don't think consciousness can be in an entangled state, or something along those lines.

But reading it again, it seems like you are saying consciousness is discontinuous? As in, there are no partially-conscious states? Is that right?

I'm also unaware of a fully satisfactory ontology for relativistic QFT, sadly.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-05-26T11:17:04.446Z · LW(p) · GW(p)

Gradations of consciousness, and the possibility of a continuum between consciousness and non-consciousness, are subtle topics; especially when considered in conjunction with concepts whose physical grounding is vague. 

Some of the kinds of vagueness that show up: 

Many-worlders who are vague about how many worlds there are. This can lead to vagueness about how many minds there are too. 

Sorites-style vagueness about the boundary in physical state space between different computational states, and about exactly which microphysical entities count as part of the relevant physical state. 

(An example of a microphysically vague state being used to define boundaries is the adaptation of "Markov blanket" by fans of Friston and the free energy principle.)

I think a properly critical discussion of vagueness and continuity, in the context of the mind-brain relationship, would need to figure out which kinds of vagueness can be tolerated and which cannot; and would also caution against hiding bad vagueness behind good vagueness. 

Here I mean that sometimes, if one objects to basing mental ontology on microphysically vague concepts of Everett branch or computational state, one is told that this is OK because there's vagueness in the mental realm too - e.g. vagueness of a color concept, or vagueness of the boundary between being conscious and being unconscious. 

Alternatively, one also hears mystical ideas like "all minds are One" being justified on the grounds that the physical world is supposedly a continuum without objective boundaries. 

Sometimes, one ends up having to appeal to very basic facts about the experienced world, like, my experience always has a particular form. I am always having a specific experience, in a way that is unaffected by the referential vagueness of the words or concepts I might use to describe it. Or: I am not having your experience, and you are not having mine, the implication being that there is some kind of objective difference or boundary between us. 

To me, those are the considerations that can ultimately decide whether a particular proposed psychophysical vagueness is true, possible, or impossible. 

comment by Portia (Making_Philosophy_Better) · 2023-05-28T18:16:08.655Z · LW(p) · GW(p)

"I pay close attention to quantum mind theories, I have specific reasons to take them seriously"

Now I am curious. What specific reasons?

Say I had an hour of focus to look into this one of these days. Can you recommend a paper or something similar I could read in that hour that should leave me convinced enough to warrant digging into this more deeply? Like, an overview of the central pieces of evidence and arguments for quantum effects being crucial to consciousness, with links so one can review the logic and data in detail if sceptical; a hint of what profound implications this would have for ethics, theory, and empirical methods; and brief rebuttals to common critiques, with links to more comprehensive ones where not immediately convincing? Something with math to make it precise? It doesn't have to (and can't) cover everything, of course, but enough that after an hour, I'd have reason to suspect that they are onto something that cannot easily be otherwise explained, that their interpretation is plausible, and that if they are right, this really matters - so that I would be intrigued enough to invest more time, and know where to continue looking?

If there is genuine evidence for (or at least a really good, plausible argument for) quantum effects playing a crucial role in consciousness, I would really want and need to know. It would matter for issues I am interested in, like the resolution necessary in scanning, and the functionality necessary in the resulting process, for uploading to be successful, and independently for evaluating sentience in non-human agents. It intuitively sounds like crucial quantum effects would massively complicate progress on these issues, so I would want good reason to assume that this complication is actually necessary. But if we cannot make proper progress without it, no matter how annoying it will be to compute, and how unpopular it is, I would want to know.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-05-29T08:18:25.372Z · LW(p) · GW(p)

Originally I was going to answer your question with another question - what kind of relation do you think exists between fundamental physical properties of the brain and (let's say) phenomenal properties? I'm not asking for biological details, but rather for a philosophical position, about reduction or emergence or whatever. Since you apparently work in consciousness studies, you possibly have quite precise opinions on philosophy of mind; and then I could explain myself in response to those. 

But I already discussed my views with @Adele Lopez [LW · GW] in the other thread, so I may as well state them here. My main motivation is ontological - I think there is a problem in principle with any attempt to identify (let's say) phenomenal properties with physical properties of the brain that are not microphysically exact.

If a physical property is vague, that means there are microphysically exact states where there is no objective fact about whether or not the vague physical property holds - they're on the fuzzy edge of belonging or not belonging to that classification. 

But if the properties constitutive of consciousness are identified with vague physical properties of the brain, that means that there are specific physical states of the brain, where there is no objective fact about e.g. whether or not there is a consciousness present. And I regard that as a reductio ad absurdum, of whatever premise brought you to that conclusion. 

Possibly this argument exists in the literature, but I don't have a reference.

If you do think it's untenable to reduce consciousness to computational states which are themselves vague coarse-grainings of exact physical states, then you have an incentive to consider quantum mind theories. But certainly the empirical evidence isn't there yet. The most advanced quantum phenomenon conventionally believed to be at work in biology is quantum coherence in chlorophyll, and even there, there isn't quite consensus about its nature or role.

Empirically, I think the verdict on quantum biology is still "not proven" - not proved, and not disproved. The debate is mostly theoretical, e.g. about whether decoherence can be avoided. The problem is that quantum effects are potentially very subtle (the literature on quantum coherence in chlorophyll again illustrates this). It's not like the statistics of observable behaviors of neurons tells us all the biophysical mechanisms that contribute to those behaviors. For that we need intimate biophysical knowledge of the cell that doesn't quite exist. 

Replies from: Making_Philosophy_Better
comment by Portia (Making_Philosophy_Better) · 2023-06-01T23:29:22.307Z · LW(p) · GW(p)

Mh. I am not sure I follow. Can I give an analogy, and you tell me whether it holds or not?

I work on consciousness. As such, I am aware that individual human minds are very, very complicated and confusing things. 

But in the past, I have also worked on human crowd dynamics. Every single human in a human crowd is one of these very complicated human things with their complicated conscious minds. Every one is an individual. Every single one has a distinct experience affecting their behaviour. They turn up at the crowd that day with different amounts of knowledge, and intentions, and strength, and all sorts of complicating factors. Like, hey, maybe they have themselves studied crowd dynamics, and wish to use this knowledge to keep safe.

But if I zoom out, and look at the crowd as a whole, and want to figure out e.g. whether there will be a stampede... I do not actually need to know any of that. A dense human crowd starts acting very much like a liquid. Tell me how dense it is, tell me how narrow the corridors are through which it will be channeled... and we can say whether people will likely get trampled, or even certainly get trampled. Not which one will be trampled, but whether there will be a trampling. I can say: if we implement a barrier here, the people will spill around there; if we close a door here, people will pile up there; if we let more people enter here, the force will get intolerable over there. Basically, I can easily model the macro effects of the whole system while entirely ignoring the micro effects. Because they even out. Because the individual randomness of the humans does not change the movement of the crowd as a whole. And if a grad student said, "but shouldn't we be interviewing all the individual people about their intentions for how they want to move today?", I would say absolutely hard no; that is neither necessary nor helpful, but a huge time sink.

Similarly, I know that atoms are not, at all, simply little billiard balls that just vibrate more and push further away from each other if you make them warmer, like we are shown in primary school. There are a lot of chemical and physical effects for which that is very important to know. But if I just want to model whether heating the contents of my pressure pot to a certain temperature will make it explode? Doesn't matter at all. I can assume, for simplicity's sake, that atoms are little billiard balls, and be perfectly fine. If I added more information, my prediction would not get better; I might actually end up with so much confusion I can't predict anything at all, because I never finish the math. I also know that Newton's ideas were tragically limited compared to Einstein's, and if I were to build a space rocket, I would certainly want proper physics accounting for relativity. But if I am just playing billiards, with everyone involved on earth, and the balls moving insanely slowly compared to the speed of light? I'll be calculating trajectories with Newton, and not feeling the slightest bit guilty. You get the idea.
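To make the pressure-pot example concrete: the billiard-ball picture is just the ideal gas law, and for this purpose it is plenty. A minimal sketch, with a made-up pot, treating the contents as a fixed amount of ideal gas (in a real pot of water, steam would dominate):

```python
# Ideal-gas ("billiard ball") model of a sealed pot: P*V = n*R*T.
# At fixed volume and amount of gas, pressure scales linearly with
# absolute temperature. The burst pressure is a made-up number.

P0 = 101_325.0       # Pa, sealed at atmospheric pressure
T0 = 293.15          # K  (20 C, room temperature)
P_burst = 500_000.0  # Pa, made-up failure pressure of the pot

def pressure_at(temp_kelvin: float) -> float:
    """Isochoric heating: P/T is constant for fixed V and n."""
    return P0 * temp_kelvin / T0

for celsius in (100, 400, 1000, 1200):
    p = pressure_at(celsius + 273.15)
    status = "explodes" if p > P_burst else "holds"
    print(f"{celsius:5d} C -> {p/1000:7.0f} kPa ({status})")
```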

I see consciousness as an emergent phenomenon, but in a very straightforward sense of the word, the way that, say, crowd violence is an emergent phenomenon. Not magical or beyond physics. And I suspect there comes a degree of resolution in the underlying substrate where it ceases to matter for the macroscopic effect, where figuring it out is just detail that will cause extra work and confuse everyone; we already have a horrible issue in biology with people getting so buried beneath details that we get completely stuck. I don't think it matters exactly how many neurotransmitter molecules are poured into the gap, for example, but just whether the neuron fires or not as a result. So I suspect that that degree of resolution is reached far before we get down to the quantum level, with whole groups of technically very different things being grouped as effectively the same for our purposes. So every macroscopic state would have a defined designation as conscious or not, but beneath that, a lot of very different stuff would be grouped together. But there would be no undefined states, per se. The conscious system would be the one where, to take a common example, information has looped around and back to the same neuron, regardless of how exactly it did.

But I say all this while not having a good understanding of quantum physics at all, so I am really sorry if I got you wrong.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2023-06-03T08:08:23.001Z · LW(p) · GW(p)

every macroscopic state would have a defined designation as conscious or not [...] there would be no undefined states, per se

But the actual states of things are microscopic. And from a microscopic perspective, macroscopic states are vague. They have edge cases, they have sorites problems. 

For crowds, or clouds, this doesn't matter. That these are vague concepts does not create a philosophical crisis, because we have no reason to believe that there is an "essence of crowd" or "essence of cloud", that is either present or not present, in every possible state of affairs. 

Consciousness is different - it is definitely, actually there. As such, its relationship to the microphysical reality cannot be vague or conventional in nature. The relationship has to be exact. 

The conscious system would be the one where, to take a common example, information has looped around and back to the same neuron, regardless of how exactly it did.

So by my criteria, the question is whether you can define informational states, and circulation of information, in such a way that from a microphysical perspective, there is never any ambiguity about whether they occurred. For all possible microphysical states, you should be able to say whether or not a given "informational state" is present. I'm not saying that every microphysical detail must contribute to consciousness; but if consciousness is to be identified with informational states, informational states have to have a fully objective existence.