Do Scientists Already Know This Stuff?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-17T02:25:59.000Z · LW · GW · Legacy · 57 comments
poke alleges:
"Being able to create relevant hypotheses is an important skill and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science but that doesn't mean it's not included in the actual social institution of science that produces actual real science here in the real world; it's your description and not science that is faulty."
I know I've been calling my younger self "stupid" but that is a figure of speech; "unskillfully wielding high intelligence" would be more precise. Eliezer18 was not in the habit of making obvious mistakes—it's just that his "obvious" wasn't my "obvious".
No, I did not go through the traditional apprenticeship. But when I look back and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than I was.
Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences. Just like Eliezer18.
"Consciousness is caused by quantum gravity" has testable implications: It implies that you should be able to look at neurons and discover a coherent quantum superposition (whose collapse?) contributes to information-processing, and that you won't ever be able to reproduce a neuron's input-output behavior using a computable microanatomical simulation...
...but even after you say "Consciousness is caused by quantum gravity", you don't anticipate anything about how your brain thinks "I think therefore I am!" or the mysterious redness of red, that you did not anticipate before, even though you feel like you know a cause of it. This is a tremendous danger sign, I now realize, but it's not the danger sign that I was warned against, and I doubt that Penrose was ever told of it by his thesis advisor. For that matter, I doubt that Niels Bohr was ever warned against it when it came time to formulate the Copenhagen Interpretation.
As far as I can tell, the reason Eliezer18 and Sir Roger Penrose and Niels Bohr were not warned, is that no standard warning exists.
I did not generalize the concept of "mysterious answers to mysterious questions", in that many words, until I was writing a Bayesian analysis of what distinguishes technical, nontechnical and semitechnical scientific explanations. Now, the final output of that analysis can be phrased nontechnically in terms of four danger signs:
- First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
- Second, the hypothesis has no moving parts—the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.
- Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
- Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.
In principle, all this could have been said in the immediate aftermath of vitalism. Just like elementary probability theory could have been invented by Archimedes, or the ancient Greeks could have theorized natural selection. But in fact no one ever warned me against any of these four dangers, in those terms—the closest being the warning that hypotheses should have testable consequences. And I didn't conceptualize the warning signs explicitly until I was trying to think of the whole affair in terms of probability distributions—some degree of overkill was required.
I simply have no reason to believe that these warnings are passed down in scientific apprenticeships—certainly not to a majority of scientists. Among other things, it is advice for handling situations of confusion and despair, scientific chaos. When would the average scientist or average mentor have an opportunity to use that kind of technique?
We just got through discussing the single-world fiasco in physics. Clearly, no one told them about the formal definition of Occam's Razor, in whispered apprenticeship or otherwise.
There is a known effect where great scientists have multiple great students. This may well be due to the mentors passing on skills that they can't describe. But I don't think that counts as part of standard science. And if the great mentors haven't been able to put their guidance into words and publish it generally, that's not a good sign for how well these things are understood.
Reasoning in the absence of definite evidence without going instantaneously completely wrong is really really hard. When you're learning in school, you can miss one point, and then be taught fifty other points that happen to be correct. When you're reasoning out new knowledge in the absence of crushingly overwhelming guidance, you can miss one point and wake up in Outer Mongolia fifty steps later.
I am pretty sure that scientists who switch off their brains and relax with some comfortable nonsense as soon as they leave their own specialties, do not realize that minds are engines and that there is a causal story behind every trustworthy belief. Nor, I suspect, were they ever told that there is an exact rational probability given a state of evidence, which has no room for whims; even if you can't calculate the answer, and even if you don't hear any authoritative command for what to believe.
I doubt that scientists who are asked to pontificate on the future by the media, who sketch amazingly detailed pictures of Life in 2050, were ever taught about the conjunction fallacy. Or how the representativeness heuristic can make more detailed stories seem more plausible, even as each extra detail drags down the probability. The notion of every added detail needing its own support—of not being able to make up big detailed stories that sound just like the detailed stories you were taught in science or history class—is absolutely vital to precise thinking in the absence of definite evidence. But how would a notion like that get into the standard scientific apprenticeship? The cognitive bias was uncovered only a few decades ago, and not popularized until very recently.
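To make the arithmetic behind this concrete, here is a minimal sketch in Python, with made-up probabilities for an invented "Life in 2050" story; the point is only that each conjoined detail can multiply the total probability downward, never upward:

```python
# Made-up probabilities for each detail of a hypothetical "Life in 2050" story.
# P(A and B) <= P(A) always, so every added detail can only shrink the total.
details = {
    "cheap fusion power": 0.7,
    "ubiquitous self-driving cars": 0.8,
    "Mars colony": 0.5,
    "human-level household robots": 0.6,
}

p = 1.0
for name, prob in details.items():
    p *= prob  # assumes independence; dependence changes the numbers, not the direction
    print(f"... and {name}: P(story so far) = {p:.3f}")

# The fully detailed story ends up less probable than any single detail in it,
# even though each extra detail makes the story *sound* more representative.
```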
Then there are affective death spirals around notions like "emergence" or "complexity", which are sufficiently vaguely defined that you can say lots of nice things about them. There are whole academic subfields built around the kind of mistakes that Eliezer18 used to make! (Though I never fell for the "emergence" thing.)
I sometimes say that the goal of science is to amass such an enormous mountain of evidence that not even scientists can ignore it: and that this is the distinguishing feature of a scientist; a non-scientist will ignore it anyway.
If there can exist some amount of evidence so crushing that you finally despair, stop making excuses and just give up—drop the old theory and never mention it again—then this is all it takes to let the ratchet of Science turn forward over time, and raise up a technological civilization. Contrast to religion.
Books by Carl Sagan and Martin Gardner and the other veins of Traditional Rationality are meant to accomplish this difference: to transform someone from a non-scientist into a potential scientist, and guard them from experimentally disproven madness.
What further training does a professional scientist get? Some frequentist stats classes on how to calculate statistical significance. Training in standard techniques that will let them churn out papers within a solidly established paradigm.
If Science demanded more than this from the average scientist, I don't think it would be possible for Science to get done. We have problems enough from people who sneak in without the drop-dead-basic qualifications.
Nick Tarleton summarized the resulting problem very well—better than I did, in fact: If you come up with a bizarre-seeming hypothesis not yet ruled out by the evidence, and try to test it experimentally, Science doesn't call you a bad person. Science doesn't trust its elders to decide which hypotheses "aren't worth testing". But this is a carefully lax social standard, and if you try to translate it into a standard of individual epistemic rationality, it lets you believe far too much. Dropping back into the analogy with pragmatic-distrust-based-libertarianism, it's the difference between "Cigarettes shouldn't be illegal" and "Go smoke a Marlboro".
Do you remember ever being warned against that mistake, in so many words? Then why wouldn't people make exactly that error? How many people will spontaneously go an extra mile and be even stricter with themselves? Some, but not many.
Many scientists will believe all manner of ridiculous things outside the laboratory, so long as they can convince themselves it hasn't been definitely disproven, or so long as they manage not to ask. Is there some standard lecture that grad students get, so that people who see this folly can ask, "Were they absent from class that day?" No, as far as I can tell.
Maybe if you're super lucky and get a famous mentor, they'll tell you rare personal secrets like "Ask yourself which are the important problems in your field, and then work on one of those, instead of falling into something easy and trivial" or "Be more careful than the journal editors demand; look for new ways to guard your expectations from influencing the experiment, even if it's not standard."
But I really don't think there's a huge secret standard scientific tradition of precision-grade rational reasoning on sparse evidence. Half of all the scientists out there still believe they believe in God! The more difficult skills are not standard!
57 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Caledonian2 · 2008-05-17T03:22:34.000Z · LW(p) · GW(p)
Penrose's quantum consciousness is not a hypothesis. It is his conclusion - he holds it as a belief. And he does so without sufficient data.
The essence of skepticism is to neither accept nor reject an assertion without sufficient evidence to justify doing so. Penrose has not speculated, he has concluded, and that is why he fails - because he has left scientific skepticism behind to embrace a belief system.
There is a profound difference between saying that a thing is possible, that it is likely, and that it is true. Your error is in saying that which is not likely ought not to be regarded as possible; Penrose's error is in forgetting that being unable to rule something out as impossible does not permit us to say that it is true.
comment by ME3 · 2008-05-17T03:55:46.000Z · LW(p) · GW(p)
First, I think this can be said for any field: the textbooks don't tell you what you really need to know, because what you really need to know is a state of mind that you can only arrive at on your own.
And there are many scientists who do in fact spend time puzzling over how to distinguish good hypotheses from bad. Some don't, and they spend their days predicting what the future will be like in 2050. But they need not concern us, because they are just examples of people who are bad at what they do.
There is this famous essay: http://www.quackwatch.com/01QuackeryRelatedTopics/signs.html
And also this one: http://wwwcdf.pd.infn.it/~loreti/science.html
comment by Hopefully_Anonymous · 2008-05-17T05:27:24.000Z · LW(p) · GW(p)
Eliezer, overall a very good post. As usual, I'm somewhat maddened by lower quality thought/writing mixed in with your very best thought/writing.
In particular, your claim that competent scientists make detailed predictions about 2050 because they're unaware of the conjunction fallacy or representativeness heuristics fits a long-term trend among OB bloggers (and affiliated bloggers) that annoys me: you pretend that any performed belief is an actual belief. Whether it is or not is an empirical question. But in a way you're rather ruthlessly siding with the predicting scientists themselves, against those of us who would rather look at phenomena like that more critically, by taking them at their word that their expressed belief is their actual belief.
I like that you're dropping the science vs. bayescraft line here, and focusing more on weaknesses in the search for knowledge and understanding as performed by science, and how it can be improved by insights from bayesian probability/reasoning.
comment by mitchell_porter2 · 2008-05-17T05:59:47.000Z · LW(p) · GW(p)
The point is incidental to this essay, but Penrose's idea is not a "mysterious answer to a mysterious question". The question is: How could the human brain do more than a universal Turing machine can? The answer is: By there being an objective wavefunction collapse process which is noncomputable in its dynamics and relevant to cognition. Penrose is not even trying to solve the problem of consciousness, though he flags it as an important issue; his theory is an exercise in the physics of hypercomputation. He is motivated by an interpretation of Gödel's results which most people do not share, but then all you can say is that it is a complicated answer to an irrelevant question.
comment by Shane_Legg · 2008-05-17T10:03:57.000Z · LW(p) · GW(p)
I still think you are stretching reality to fit your cause.
I met Penrose at a conference a few years back and discussed with him some of my results on the relationship between Gödel incompleteness and artificial intelligence. He gave a public talk on all this quantum consciousness stuff, and while the public and media were lapping it up, the scientists weren't. Indeed, from his presentation I got the impression that even he doesn't really believe this anymore, and so I asked one of his long-time physicist friends. He, somewhat delicately, said that serious physicists believe this story and even Roger doesn't really buy into it now; it's time he moved on. I think the problem here is not so much with the method of science as it is with the fact that science is a social process made up of people whose egos sometimes get the better of them.
As for scientists believing in God: I don't know what the situation is like in the US, but at least around here, among the PhD students and post docs I know, they are overwhelmingly atheist.
comment by Shane_Legg · 2008-05-17T10:06:14.000Z · LW(p) · GW(p)
Oops, should have been "serious scientists don't believe this story..."
comment by Recovering_irrationalist · 2008-05-17T11:40:24.000Z · LW(p) · GW(p)
I agree with these last few posts, think the points highly valuable, but fear they'll be grossly misrepresented to paint your entire book as Written In Green Ink. It may be worth placing extra Go stones in advance...
comment by RobinHanson · 2008-05-17T12:25:08.000Z · LW(p) · GW(p)
Eliezer, you have identified and articulated many important insights that most scholars could benefit from. You should continue to do so, and the world will be better for it.
The problem comes when you seem to imply that you are the first to identify or articulate them, or that the reason they are not more widely known is a particular failing in the nature of "science." To a good first approximation, there simply is no such thing as "science"; there are just many different intellectual traditions with differing mixtures of insights passed down and distorted incentives inducing disinterest and even hostility to certain important insights.
Fight the good fight, but don't presume the enemy is so singular.
comment by Roland2 · 2008-05-17T15:03:02.000Z · LW(p) · GW(p)
Eliezer,
in practice do you really calculate the numbers, e.g.: "I calculated that hypothesis A has a probability of 73.2345% of being true whereas hypothesis B has only a probability of 54.897%, therefore I'll make an experiment to test A first."
Or do you rather apply the general rules you uncovered, like the conjunction fallacy, and other stuff like:
Second, the hypothesis has no moving parts - the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.
Peace, Roland
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-17T15:54:50.000Z · LW(p) · GW(p)
Roland, I actively avoid giving numbers I can't calculate and try to find qualitative lines of reasoning instead, on the theory that making up random numbers will produce random decisions.
The numbers still exist, of course, but making up other numbers won't help me. So, yes, I try to apply qualitative rules instead.
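(For concreteness, a minimal sketch of the calculation that exists in principle - the odds form of Bayes's theorem, with entirely made-up numbers:)

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Odds form of Bayes's theorem: O(H|E) = O(H) * P(E|H) / P(E|~H)
    return prior_odds * likelihood_ratio

prior = 1 / 4   # hypothetical prior odds of 1:4 for hypothesis A over B
lr = 10.0       # hypothetical evidence, 10x as likely under A as under B
post = posterior_odds(prior, lr)
print(f"posterior odds {post:.2f}:1, probability {post / (1 + post):.3f}")
# posterior odds 2.50:1, probability 0.714
```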
Robin, it seems to me that there is a definite and substantial difference between what I would conceive to be the appropriate training of a rationalist, and any training I have ever heard anyone else suggest. Jeffreyssai doesn't exist. Physicists don't (reliably) understand Occam's Razor. Nobody except cognitive psychologists had heard of cognitive biases before 2000. I would certainly like to hear that I am anticipated, but I find it hard to believe I have been.
comment by [deleted] · 2011-06-20T22:37:45.568Z · LW(p) · GW(p)
Perhaps some inspiration: you've convinced at least this grad student to harp on all of these principles when I yearly teach an introduction to probability for engineers, applied physicists, and applied mathematicians. I'm sure many before you have noted these deficiencies... I have routine conversations with my adviser about the sad state of machine learning literature in engineering. The fad is to just slap together a few existing algorithms, eke out a 3% improvement in efficiency or something, and publish 6 papers out of that. The pressure to publish for tenure is maddening. The job market for PhDs is disappointing... half of the deficiencies you mention exist because people get tired of working very hard to be underpaid as post-docs and sweep things under the rug to find some kind of geodesic path to life-success/money/not-spending-12-hours-per-day-in-the-lab. These aren't excuses, mind you, but the realities of being a "scientist" leave open a lot of room for this. And that one guy (right now, me) sitting in the corner of the lab with my colleagues and constantly wanting to talk about bayescraft is like the annoying guy who always takes the stairs instead of the elevator and sits with perfect posture. Colleagues just want to ignore me, churn out their rudimentary permutations of existing methods, and go home at night to do hobby X.
I worked in a government research lab for a few years before grad school, and this is all even worse in that sort of environment. One can become cynical about our race rather quickly when the supposed experts ask you to model a radar gain pattern with functions that aren't even integrable, for example, and don't know what you mean by integrable when the area under their curves blows up to infinity and ruins their numerical simulations. And these people graduated with honors from a PhD program at Berkeley or Stanford or Harvard or MIT and have been doing science for 20+ years.
But like you've said elsewhere: this is our Earth. It won't be any better until we change it. One semester of probability theory at a time.
comment by michael_vassar3 · 2008-05-17T16:27:33.000Z · LW(p) · GW(p)
Robin: Science isn't monolithic, but there is some referent to 'science' as a vague high-level description of a set of institutions for developing, disseminating, and implementing new ideas, with cultural rules aimed at compensating for some elements of human foolishness. Something was new under the sun in the 17th century. Of course, many different things claim, perhaps by taking place in universities or by calling themselves "X Science", to be the heir of that something, and not all of them may truly be heirs to it. Possibly none of them truly are, and modernity gets by on inertia: having crossed some civilizational developmental barrier, we may no longer need the same systems that we initially used to cross it just to continue to progress. Population and wealth may be sufficient.
Eliezer: I think that applying weakly justified numbers can actually be quite helpful in establishing boundaries for possibilities under consideration, etc. It's quite frequently done in various sorts of management and planning, and it can still help to compensate for conjunction fallacies and the like more effectively than strictly qualitative thinking can. In investing, this is sometimes called paying attention to fundamentals.
I think that the scientific lineages phenomenon requires more than a sentence or two of attention. Half of Nobel Prizes go to the doctoral students of other Nobel Laureates, and three quarters go to people from universities that are top 5 in the relevant field. Different countries specialize in very different sciences and sub-fields, sometimes to the point of absurdity. "Discovering" by Robert Root-Bernstein has some very enlightening diagrams of just how rich scientific lineages are. Obviously some mix of favoritism, selection of better students and superior instruction is taking place here, and the relative mix isn't easy to determine, but the phenomenon may be responsible for the large majority of scientific progress while only involving a few thousand people at a time, in which case the details are very worthy of attention, as it is often possible to simply hire more than a few thousand people to do projects much smaller than "science in general".
comment by Unknown · 2008-05-17T17:05:20.000Z · LW(p) · GW(p)
Insofar as the numbers signify a subjective degree of belief, one must be able to give one's best estimate of the numbers, even if they are not the result of a calculation. Eliezer may say "making up random numbers will produce random decisions," but nonetheless, in the case of anything uncertain, there must be certain wagers he would accept and certain wagers he would reject. So implicitly he must accept some numbers. In fact, it probably would be a good idea for him to attempt to assign more precise probabilities to his opinions, because this would help him to overcome the overconfidence bias to which he is usually subject.
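(A minimal sketch, with hypothetical stakes, of how accepted and rejected wagers pin down bounds on such an implicit probability:)

```python
def implied_probability(stake, payout):
    # Break-even probability for risking `stake` to win `payout`:
    # p * payout - (1 - p) * stake = 0  =>  p = stake / (stake + payout)
    return stake / (stake + payout)

# Hypothetical wagers on some uncertain proposition X:
lower = implied_probability(10, 90)  # accepting "risk $10 to win $90" implies p(X) >= 0.10
upper = implied_probability(50, 50)  # rejecting "risk $50 to win $50" implies p(X) <= 0.50
print(f"{lower:.2f} <= p(X) <= {upper:.2f}")
```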
comment by George_Weinberg2 · 2008-05-17T17:31:47.000Z · LW(p) · GW(p)
Well, I remember wondering as a graduate student how one was supposed to go about deciding what problems to work on, and not coming up with a good answer. A fellow student suggested that your project is worth working on if you can get it funded, but I think he was kidding. Or maybe not.
Most experimentalists really aren't in the business of supporting or refuting hypotheses as such. It's more a matter of making a measurement, and yes they will be comparing their results to theoretical predictions, but ideally experimentalists should be disinterested in the result; that is, they care about making as accurate a measurement as possible but don't have any a priori preference for one value over another.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-17T18:45:08.000Z · LW(p) · GW(p)
Unknown: Eliezer may say "making up random numbers will produce random decisions," but nonetheless, in the case of anything uncertain, there must be certain wagers he would accept and certain wagers he would reject. So implicitly he must accept some numbers.
Correct, and I do sometimes ask myself "What are my revealed betting odds for a Singularity by 2015 / after 2050?" and such, but I don't treat the result as a usable number - just a kind of self-insight.
Vassar: I think that the scientific lineages phenomenon requires more than a sentence or two of attention. Half of Nobel Prizes go to the doctoral students of other Nobel Laureates
This is insanity. Does no one know what they're teaching?
comment by Caledonian2 · 2008-05-17T19:01:11.000Z · LW(p) · GW(p)
The question is: How could the human brain do more than a universal Turing machine can? The answer is: that it can't, and that Penrose wants to believe a certain thing, so he started out with his conclusion and worked backwards. The problem with working backwards is that you tend not to notice logical forks.
Human brains can't do more than universal Turing machines.
comment by poke · 2008-05-17T19:40:11.000Z · LW(p) · GW(p)
Eliezer, I'll try to address this new essay and your response to my other comment in one comment, I hope it won't get too muddled.
What scientists learn from their mentors is a set of domain-specific non-general practical skills, and they're the right skills for the reasons I gave in my comment on your other essay: the skills you need to produce new science happen to be the skills settled science supplies. I think you'd agree that scientists, in their period of apprenticeship (or even formal education), learn a set of domain-specific skills. Whether they learn a set of additional general skills which may or may not have the hidden structure of Bayesianism is where we disagree. If I argue that such additional general skills are unnecessary then I believe that would undermine the applicability of Bayesianism to science. My argument is simply that we can account for the fact that the institution of science tends to be the one to produce new science merely from contingent facts about it (science has all the scientists); we don't need to postulate general rules or norms to explain its success.
It's my further assertion that getting the right hypothesis is a product of institutional inertia. I'm not sure this is as contentious as you imply. It's true that scientists don't learn any of the general skills of reasoning you list, but they do go through a period of tutoring where they are given explicit advice on what research to pursue and then serve as part of a research team under a senior researcher. Only after many years would they be allowed to freely set their own research agenda, and by this time, if their hypotheses hadn't become highly constrained by their period of apprenticeship, they would be considered very bad scientists indeed. I don't think someone like Roger Penrose makes a good counterexample. Penrose published a work of popular science outside his field of expertise and was not taken seriously by professional scientists in the relevant fields. I believe his speculations also harmed his position in physics.
All of the constraints on hypothesis choice are, again, domain-specific non-general practical skills, and that, I contend, is all we ever need. It's science itself, the actual dirty physical details of experimentation and theoretical manipulation, that suggests its own extension, and practicing scientists are steeped in it and can pass their (practical, domain-specific) insights on to aspiring scientists. The whole process of science is a little like pulling a loose thread of material and having the whole thing unravel. A bunch of people working on mechanical problems in the 16th and 17th centuries stumbled on that thread, and their intellectual descendants have been the ones to keep tugging at it, because each bit of thread lets you pull out more thread and so on. Theologians don't have the thread, we do, and that's the difference between us and theologians. We don't need to also be more rational or better Bayesians. I believe that scientists and theologians both use their full range of psychological faculties all the time, unconstrained, in solving problems, and the only difference between them is the kind of problems they're trying to solve. The kind of problems they tackle are a matter of institutional heritage. This doesn't mean I think theological problems are worthwhile; I just don't think there are normative differences in reasoning or the application of cognitive faculties between the two fields, nor do I believe there need to be, to explain the success of one over the other.
comment by Caledonian2 · 2008-05-17T20:18:36.000Z · LW(p) · GW(p)
I believe that scientists and theologians both use their full range of psychological faculties all the time, unconstrained, in solving problems and the only difference between them is the kind of problems they're trying to solve.

The fact that scientists aren't using their full range of psychological faculties is precisely why they're different from theologians. Natural humans are creatures without logic. Reason requires an additional self-restraint that distinguishes between logic and illogic. If you want the full range, you'll have to deal with a thousand different fallacies that are part of the human birthright.
comment by Bob_Unwin8 · 2008-05-17T22:34:00.000Z · LW(p) · GW(p)
Robin said to Eliezer: "The problem comes when you seem to imply that you are the first to identify or articulate them"
Eliezer responds by saying: "Nobody except cognitive psychologists had heard of cognitive biases before 2000."
But this comment is mistaken. In five minutes of research, I came across three books by philosophers which discuss cognitive biases multiple times. One book is from 1986 (!) and the other two from 1993. With more time, I'm sure I could find many more books and papers by philosophers written before 2000. In economics, there is Richard Thaler's work, which began in the 1980s. This was often published in journals read by economists. I'd be surprised if there weren't a fair number of other economists who were aware of and interested in this work pre-2000. (Also: Kahneman and Tversky's original Prospect Theory paper was published in Econometrica, a premier Econ journal, in something like 1979. It would be very surprising if at least a clutch of economists didn't get interested in this work as a result of reading that paper.)
So it seems you underestimate the extent to which people interested in rationality and the philosophy of science are aware of the sort of cognitive-biases work that you have written about on this blog. This suggests you should be more cautious about claims to originality.
Philosophy books discussing heuristics and biases:
The Nature of Rationality By Robert Nozick 1993
Epistemology and Cognition By Alvin I. Goldman 1986
The Fragmentation of Reason By Steven Stich, 1993
comment by Tom_McCabe2 · 2008-05-17T22:39:01.000Z · LW(p) · GW(p)
"This is insanity. Does no one know what they're teaching?"
I doubt any systematic study has been done on the difference in curricula between MIT and Generic State U., even though it would be much easier, and MIT has 78 affiliated Nobel laureates while State U. probably has zero. You can argue from first principles (http://www.paulgraham.com/colleges.html) or experimental data (http://www.csis.gvsu.edu/~mcguire/worth_college_leagues.html) that elite colleges are selecting Nobel Prize winners rather than creating them, but I don't know how accurate this is. If we could make MIT and Caltech replicas pop up all over the country, it would be well worth the time and effort.
comment by Brian_Jaress2 · 2008-05-17T22:42:30.000Z · LW(p) · GW(p)
Maybe I'm doing it wrong, but when I score your many-worlds interpretation it fails your own four-part test.
- Anticipation vs curiosity: We already had the equations, so there's no new anticipation. At first it doesn't seem like a "curiosity stopper" because it leaves everyone curious about the Born probability thing, but that's because it doesn't say anything about that. On the parts where it does say something, it seems like a curiosity stopper.
After your posts on using complex numbers and mirrors, I was wondering, "Why complex numbers? Why do you add them when you add them and multiply them when you multiply them?" That's the question your interpretation answers, and the answer is, "There's stuff called amplitude that flows around in exactly that way."
- Blankly solid substance: That sounds like your amplitude. The equations are a specific, complex mechanism, but they're not part of your explanation. They're what you want to explain. Your explanation is just that a substance exists that exactly matches the form of the equations.
- Cherishing ignorance: (This one is about how supporters behave, and I've really only heard from you. My score here might be totally invalid if other supporters of the same thing support it differently.) You definitely don't do what I would call cherishing ignorance, but I think you do both of the things which you list as examples of it.
This recent series of posts is all about how your interpretation defeats ordinary science.
The "mundane phenomena" one is a little ambiguous. If the point of the rule is whether the theory is claimed as a special exception, then you haven't made that claim. In other words, you haven't said, "Things usually happen that way, but in this case they happen this way." But I think at least part of that rule has to do with pride in how shocking and different the explanation is -- a case of, "I've had a revolutionary insight that violates everything you think you know." You've certainly shown that attitude.
- Still a mystery: Well, there's the Born probabilities that it doesn't say anything about. Then there's the way that the values are assigned and combined to get the final amplitude, in other words the way the amplitude "flows around." Amplitude has its own peculiar way of flowing that was already in the equations and isn't explained by calling it amplitude.
So the score is:
Check
Check
Maybe, with a frowny face even if it's technically OK.
Check
Maybe I missed something in your past posts. (I skimmed over a lot of attacks on other interpretations that I don't know much about.) Or maybe I misunderstood the four tests. Three of them seemed like pretty much the same thing.
I'm not sure I even agree with the test, but it captured part of what I don't like about your interpretation. It actually kind of reminds me of that "phlogiston" thing you always bring up as a bad example, in the sense that you started with a black box behavioral description and explained it with a substance defined in terms of the known behavior.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-18T01:33:21.000Z · LW(p) · GW(p)
Yes, Bob, I am aware that economists were aware of Kahneman. "Nobody except cognitive psychologists" was poorly phrased - okay, wrong - but I was trying to convey the notion of something breaking out of a specialty interest, which cognitive biases only did in the 21st century, so far as I know.
comment by Z._M._Davis · 2008-05-18T01:58:32.000Z · LW(p) · GW(p)
Eliezer, I have to second Hopefully, Recovering, et al.: good points (as almost always), but the Science versus Bayescraft rhetoric is a disaster. Lone autodidacts railing against the failings of Mainstream Science are almost always crackpots--that you're probably right doesn't mean you can expect people to ignore that likelihood ratio when deciding whether or not to pay attention to you. "Meaning does not excuse impact!"
Concerning the qualitative vs. quantitative Bayescraft issue: taking qualitative lessons like Conservation of Expected Evidence from probability theory is clearly fruitful, but I wonder if we shouldn't be a little worried about Solomonoff induction. Take the example of Maxwell's equations being a simpler computer program than anger. Even though we have reason to suppose that it's possible in principle to make a computer program simulating anger-in-general--anger runs on brains; brains run on physics; physics is computable (isn't it?)--I wonder if it shouldn't make us a bit nervous that we really have no idea how to even begin writing such a program (modulo that "No One Knows What Science," &c.). The obvious response would be to say that all we need is "just" a computer program that duplicates whatever angry human brains do, but I don't think that counts as a solution if we don't know exactly how to reduce anger-in-general to math. A convincing knockdown of dualism doesn't make the Hard Problem any less confusing.
Maybe all this is properly answered by repeating that the math is out there, whether or not we actually know how to do the calculation. After all, given that there is a program for anger, it would obviously be longer than the one for electromagnetism. Still, I worry about putting too much trust in a formalism that is not just computationally intractable, but that we don't really know how to use, for if anyone really knew in concrete detail how to reduce thought to computation in any but the most trivial of cases, she'd essentially have solved the AGI problem, right?
Or take Pascal's Mugging. If I recall correctly from the discussion at the February meetup, the current best solution to the problem is that given a universe big enough to contain 3^^^^3 minds, the prior probability of any one causal node exerting so much influence is low enough to overcome the vast disutility of the mugger's threat. Eliezer noted that this would imply that you're not allowed to believe the mugger even if she takes you out of the Matrix and shows you the hardware. This seems much like ruling out the mugger's claim a priori--which I guess is the result we "want," but it seems far too convenient.
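(For readers unfamiliar with the notation: 3^^^^3 is Knuth's up-arrow notation. A minimal sketch of the recursion, of which only the tiniest cases are actually computable:)

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow notation a ↑^n b: n=1 is exponentiation,
    # and each additional arrow iterates the previous operation.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^^3 = up_arrow(3, 4, 3): vastly too large to compute or store.
```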
Of course, it is possible that I simply don't know enough math to see that everything I just said is actually nonsense. Sorry for the long comment.
comment by Zack_M_Davis · 2017-03-16T19:50:18.698Z · LW(p) · GW(p)
but the Science versus Bayescraft rhetoric is a disaster.
What's wrong with you? It's true that people who don't already have a reason to pay attention to Eliezer could point to this and say, "Ha! An anti-science crank! We should scorn him and laugh!", and it's true that being on the record saying things that look bad can be instrumentally detrimental towards achieving one's other goals.
But all human progress depends on someone having the guts to just do things that make sense or say things that are true in clear language even if it looks bad if your head is stuffed with the memetic detritus of the equilibrium of the crap that everyone else is already doing and saying. Eliezer doesn't need your marketing advice.
But you probably won't understand what I'm talking about for another eight years, ten months.
comment by gjm · 2017-03-20T15:12:04.407Z · LW(p) · GW(p)
But you probably won't understand what I'm talking about for another eight years, ten months.
What do you expect to happen in January 2026, and why? (And why then?)
Also, are you the same person[1] as the "Z. M. Davis" you are replying to?
[1] Adopting the usual rather broad notion of "same person".
comment by RobinHanson · 2008-05-18T02:11:24.000Z · LW(p) · GW(p)
Poke: Only after many years would [scientists] be allowed to freely set their own research agenda.
Most of my grad students insist on making up their own agendas, and aren't interested in mine.
comment by Caledonian2 · 2008-05-18T02:42:50.000Z · LW(p) · GW(p)
Quite a few of the problems Eliezer lists strike me less as problems with the nature of science, and more as failures of people to apply the scientific method to things.
Unfortunately, no amount of changing the method can force people to use it.
comment by AnnaSalamon · 2008-05-18T03:52:54.000Z · LW(p) · GW(p)
My impression:
(1) Yes, there are all kinds of good points in Eliezer's posts that I was not taught in my science coursework or internships and that others are also not taught. Eliezer's last few posts caused me to raise my (low) estimate of the probability that Eliezer and others can pull off the breakthroughs needed for FAI.
(2) No, the "science" that I and many others were taught in research apprenticeships is not exhausted by the "hypothesis, experiment, conclusion" scientific method that Eliezer has been discussing. It includes plenty of details about what do and do not constitute legitimate research questions and fruitful conjectures within a particular subfield. These details are supplied mainly by example and do not transfer well between different scientific subfields, unlike the general techniques Eliezer is after.
That is: "legitimate science", as it is often taught, involves sticking to a narrow set of known mechanisms and to hypotheses that sound like previously successful hypotheses. Legitimate science includes "stuff similar to these established examples and nothing else". It also recommends that an individual only propose hypotheses in subfields where he has been thoroughly steeped in both the formal results and the culture/traditions. This is a good enough notion of science to: (i) prevent many hypotheses along the lines of Eliezer_18's, (ii) to label Penrose's theories of consciousness unscientific, and (iii) to label detailed predictions about 2050 unscientific. (Which, indeed, is how many scientists I know regard both Penrose and futurists.) Unfortunately, this sort of scientific education does not show people how to do revolutionary science, nor does it allow scientists to distinguish between detailed stories about 2050 and simpler statements like "AI stands a good chance of eventually destroying the world by one means or another". (The latter is branded "unscientific" in the same way the detailed sci-fi stories are branded "unscientific": both are not made from the toolkit of known examples and mechanisms.)
(3) Like Z. M. Davis and others, I fear rhetorical disaster. Z. M. points out that railing against Mainstream Science is a frequent indicator of crackpottery. I'd like to generalize the principle: people get offended and impute lousy motives when someone talks overmuch about how he alone possesses unique knowledge and powers. Talking about how Bayescraft is completely different from everything else anyone has ever thought/taught, or even sounding like you're doing so, suggests ego and risks causing offense. Especially if your competitors are at all caricatured or misrepresented.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-18T04:05:12.000Z · LW(p) · GW(p)
I figure that anyone who wants to paint me as a lunatic already has more than enough source material to misquote. Let them paint and be damned! Here I am attempting to create researchers, or at least tip them over and start them rolling down the right hill.
From all the books that created me, I was never once warned that Science is not strict enough.
I will not fail to pass on that warning.
comment by Goplat · 2008-05-18T04:45:58.000Z · LW(p) · GW(p)
Z. M. Davis: All this talk of "simpler computer program" seems pretty meaningless to me. A regex matcher in C is long and complex, but in PHP all you have to do is use the built-in preg_match function. (Does the language the universe was written in have a built-in copenhagen_interpretation function?)
One might claim that PHP is a more complicated language than C, but how is that measured? The only way to see how complicated a language is is by a complete description of it - an implementation. And the complexity of an implementation depends on the kind of CPU it must run on, and the complexity of a CPU architecture depends on the laws of physics it must exist in. Self reference: this is the stuff paradoxes are made of.
comment by gwern · 2009-06-20T19:44:42.570Z · LW(p) · GW(p)
The only way to see how complicated a language is is by a complete description of it - an implementation. And the complexity of an implementation depends on the kind of CPU it must run on, and the complexity of a CPU architecture depends on the laws of physics it must exist in.
Fortunately, C and PHP target the same computers.
comment by Dojan · 2011-12-16T03:59:28.460Z · LW(p) · GW(p)
I'd interpret "shortest computer program" more like " the shortest string of ones and zeroes that gets the job done on an idealistic Turing machine" or some such. High-level programming languages are for the convenience of programmers, not computers. Thus, to use the built-in preg_match function of PHP, you'd first of all need to represent PHP's built-in implementation of that, and also the rest of PHP, plus some environment. If you did that I think it would turn out to be longer than if you did the same in C.
This is only to be used as a way of guiding your thoughts in the right direction, a rule of thumb, rather than an actual experiment to determine between hypothesis. Among other problems, how do you know when you have found the shortest possible way of expressing something in ones and zeroes?
comment by Nornagest · 2011-12-16T04:34:54.363Z · LW(p) · GW(p)
Among other problems, how do you know when you have found the shortest possible way of expressing something in ones and zeroes?
You don't. That's uncomputable in the general case, and in most nontrivial special cases as well. You can, however, put upper bounds on it.
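(A minimal sketch of the upper-bound point: any general-purpose compressor that losslessly reproduces a string gives an upper bound on its Kolmogorov complexity, up to the constant size of the decompressor:)

```python
import os
import zlib

def k_upper_bound(s: bytes) -> int:
    # The compressed length is an upper bound on Kolmogorov complexity,
    # up to the constant size of the fixed zlib decompressor.
    return len(zlib.compress(s, 9))

patterned = b"01" * 500      # highly regular 1000-byte string
random_ = os.urandom(1000)   # incompressible with overwhelming probability

print(k_upper_bound(patterned))  # small: zlib finds the repeating pattern
print(k_upper_bound(random_))    # roughly 1000 or more: no short description found
```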
comment by Tom_McCabe2 · 2008-05-18T04:59:48.000Z · LW(p) · GW(p)
"I figure that anyone who wants to paint me as a lunatic already has more than enough source material to misquote. Let them paint and be damned!"
The problem isn't individual nutcases wanting to paint you as a lunatic; their cause would be better served by SitS or other Singularity-related material. It's that people who haven't heard your ideas before- the largest audience numerically, if you publish this in book form- might classify you as a lunatic and then ignore the rest of your work. Einstein, when writing about SR, did not go on about how the classical physicists were making a bunch of stupid mistakes and how his methods were superior to anything Newton ever developed. You have, of course, made far more extreme statements elsewhere (by mainstream standards), but the overall proportion of such material should scale polynomially with the number of readers who reject you as a crackpot.
comment by Richard_Hollerith2 · 2008-05-18T05:27:09.000Z · LW(p) · GW(p)
Vassar: I think that the scientific lineages phenomenon requires more than a sentence or two of attention. Half of Nobel Prizes go to the doctoral students of other Nobel Laureates
Eliezer: This is insanity. Does no one know what they're teaching?
The possibility that knowledge is more easily transmitted face-to-face than through books is no cause for despair. It might however be cause to increase the likelihood that you will contact the author to request a face-to-face meeting when you come across a good piece of writing on an important subject. I'd meet with you at no cost to you with the sole goal of helping you understand something I know, and I expect that there are many like me.
comment by AnnaSalamon · 2008-05-18T05:27:51.000Z · LW(p) · GW(p)
Eliezer,
I'm not suggesting you fail to pass on the warning. I'm suggesting you make sure the warning is placed in an accurate, non-caricatured picture of the scientific traditions you are criticizing.
For example, you talk about Science "not being strict enough". The notion of "scientific" that I described in my comments (which is one of several competing, half-articulated modes of science in which students are sometimes trained) is in some ways too strict; it correctly throws out Penrose and detailed 2050 predictions, and it unfortunately also throws out useful, simple hypotheses like "Most conceivable AIs would, if created, destroy the world."
More accurate, less singular models of science would make your points easier to digest in some ways (because more accurate). More accurate models of science would also make your points less crackpot-sounding, but you may be right that I should ignore that aspect.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-18T05:39:07.000Z · LW(p) · GW(p)
@Z.M.Davis:
Not everything on the blog goes in the popular book. This sort of thing would go into a small e-book that is read only by advanced seekers of the way.
@Anna Salamon:
"Strictness" is here meant in the sense of how exactly your reasoning is guided, both in being forced to permit, and forced to reject, propositions.
Rather than "strictness" in the sense that, in machine learning, would be called "specificity": Rejecting more examples in a +/- classification problem.
Assigning lower probabilities to things Science deems "not proven", is not necessarily stricter reasoning - it just makes you sound like a stern elder.
comment by mitchell_porter2 · 2008-05-18T06:38:12.000Z · LW(p) · GW(p)
Eliezer: From all the books that created me, I was never once warned that Science is not strict enough.
I am trying to figure out exactly what your better methodology is. Is it
(1) Science + Occam's razor, with the razor used to choose between experimentally indistinguishable theories?
(2) Bayes's Law, with Science somehow merely being an application of the law?
(3) Science, Bayes, and an assortment of introspective methods meant to prevent wasting one's time on a-priori extravagant hypotheses?
I do not think anyone will argue with the advice that if a theory contains entities which are predictively irrelevant, you should try doing without them. Whether "Science" is merely an instance of "Bayes" will be a little more contentious; to employ probability theory requires structure - a space of possibilities, a prior on that space - which may not be available. The utility of the psychological tips is even more open to question, though it's surely useful to at least know about this perspective.
Some of the examples you use I have to disagree with. I do not think many worlds can be shown to be the clear favorite among quantum interpretations, either by the simple argument that it's orthodoxy minus collapse and therefore simpler than orthodoxy, or by some more complicated argument that also tries to incorporate qualitative principles like adherence to the spirit of relativity. You are also getting Penrose wrong, as I wrote above. People adopt quantum mind theories for a variety of reasons. For example, I do it because I do not believe in the reducibility of consciousness to a collective or swarm phenomenon, and some of the quantum ontologies permit options that don't exist in classical atomism. But Penrose did it because it gave him a means of physically implementing neural hypercomputation, which in turn he deemed to be necessary because of the incompleteness theorems. He was not trying to explain qualia, so the fact that his hypothesis introduces no insight on that front is irrelevant.
The most profound criticism I can make of science as it is presently conducted is that it assumes a type of ontology which is necessarily wrong; and this really only applies to sciences which touch on something ontologically fundamental. The ontology assumed might be called objectified mathematical materialism; it is necessarily wrong because conscious experience manifestly contains properties which cannot be obtained by any combination of the entities which that ontology says are all that exists; but this is irrelevant to, say, a biologist, unless their work really does touch upon consciousness. A biologist can utilize the everyday subjective ontology, and the quantitative world-image of the natural sciences founded upon physics, and not have them clash in an impossible way.
Your younger self sensed, correctly, that something more is needed. If he made an error, I would say it was in supposing that more of the same could make a difference: that extra mathematical physics can solve the hard problem. Even if it's there, and causally relevant, it's just more physics. What's needed is new ontology. Realist fundamental physics is ontology, so a change there does mean new ontology, but if it's just mathematics, it's not enough. We have to remember that subjectively speaking, the mathematical image of the world was created by deliberately excluding from consideration certain aspects of experience as "secondary", and that the hard problem of consciousness arises from this unfinished business. I've given my prescription in comments elsewhere: transcendental idealism, transcendental phenomenology, and a quantum monadology in which the qualities revealed in appearance are taken to be the ontological content behind the mathematical formalism used to describe the physical correlates of consciousness.
Even though they are based on the impoverished ontology of mathematical physics, according to which quantity and causality are everything, I do think some of your qualitative methodological principles are still relevant to these deeper investigations. But they would have to be applied in a frame of mind which no longer tries to ground everything in mathematics as we know it, and remains open to aspects of being which fall radically outside anything we know how to formalize at present.
comment by Tim_Tyler · 2008-05-18T11:49:57.000Z · LW(p) · GW(p)
Re: "Well, there's the Born probabilities that [the MWI] doesn't say anything about."
These have been derived from the MWI and decision theory - and it was all over the tech news in 2007:
"Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success"
comment by Caledonian2 · 2008-05-18T13:56:23.000Z · LW(p) · GW(p)
it is necessarily wrong because conscious experience manifestly contains properties which cannot be obtained by any combination of the entities which that ontology says are all that exists
Saying that something is impossible is a strong statement that requires equally strong support. What evidence do you offer us that mathematical descriptions cannot produce the properties of which you speak?
comment by Nick_Tarleton · 2008-05-18T19:22:13.000Z · LW(p) · GW(p)
This sort of thing would go into a small e-book that is read only by advanced seekers of the way.
But you know those aren't the only people who will, in fact, read it. I love these latest posts, but share the concern over rhetoric, although I'm not sure what to do about it - what you're saying really needs to be said and I don't know what an equally effective, less crazy way might be. But the problem, like Tom says, is not people trying to paint you as a lunatic, but people evaluating you for the first time who recognize "railing against Science" as a strong crackpot marker. Meaning does not excuse impact!
Brian Jaress, the point of an interpretation of QM is not to explain why the equations are the way they are. (Even if that's not a Wrong Question, as I suspect it is - at some point, there's no more underlying mechanism to ask after.)
comment by mitchell_porter2 · 2008-05-19T01:11:31.000Z · LW(p) · GW(p)
Caledonian: What evidence do you offer us that mathematical descriptions cannot produce the properties of which you speak?
First of all, let's be clear regarding what we have to work with. Things are complicated a little by the variety of specific theories and formalisms used in physics, but let's take multi-particle quantum mechanics in the configuration basis as illustrative. The configurations are all of the form 'a particle of species a1 at location x1, a particle of species a2 at location x2, and so forth'. The quantum states are associations of complex numbers with such configurations. There is the basic dynamical fact that a quantum state ψ evolves into another state ψ + dψ according to the Schrödinger equation, and (if you're not taking the many-worlds path) Born's postulate that the probability of there actually being particles a1, a2,... at locations x1, x2,... is |ψ|^2.
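In symbols - a standard statement of those two postulates (nonrelativistic, position basis), nothing beyond what was just said in words:

$$i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi, \qquad P(a_1 \text{ at } x_1,\; a_2 \text{ at } x_2,\; \dots) = |\psi(x_1, x_2, \dots)|^2$$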
Then there are various entities and facts that can be obtained from these through abstraction, deduction, and comparison, e.g. 'the number of particles in configuration c', 'the average number of expected particles in quantum state ψ, as calculated via the Born probabilities', or 'the Hilbert-space inner product of states ψ1 and ψ2'. We could, if necessary, specify a formal combinatorial grammar generating all and only those entities and facts implied by the theory-defining postulates in my first paragraph. It would amount to saying: the entities and relationships directly postulated by the theory exist, and so do those which can be logically or mathematically inferred from them. But speaking informally, all we have to work with are featureless spatial configurations of point particles, superpositions thereof, dynamics of superpositions, and empirical probabilities derived from superpositions.
And what sort of entity or property are we trying to extract from the theory, if we are trying to derive consciousness from physics? It's tiresome to resort repeatedly to the same example, but nonetheless, let's consider color: the variety of hues and shades which we lump together into the natural language categories of red, blue, and so forth. (I put it that way because I do not want to turn this into a discussion of whether those natural language categories are "natural kinds". Focus instead on the numerous instances of color which populate visual experience and which unquestionably exist, regardless of how they get categorized.) On one side we have "quantity and causality", as I put it above - and I'll even throw in spatial geometry and dispositional behavior; on the other side, the colors. How might we go about making the latter out of the former?
There are some things we can do. We can quantify certain things about subjective color; and we can describe certain physical realities which are somehow correlated with color. Thus 450-nm wavelength light "is" a type of blue light. But I submit that it makes no sense to say that when you see a particular shade of blue, you are "seeing a length"; or that blue itself "is a length". That might do as a poetic description of the physics behind the perception, but as an ontological statement, it simply substitutes the correlated geometric property for the sensory property we are trying to explain.
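To make the correlational point concrete, here is a minimal sketch (the band boundaries are rough conventions assumed for illustration, and the function name is hypothetical). It maps a length to a word - which is exactly the substitution at issue, since neither the number nor the label is the experienced hue:

```python
# A minimal, hypothetical sketch: mapping a wavelength (a length, in nm)
# to a conventional color word. Band boundaries are rough conventions
# assumed for illustration, not facts about blueness itself.
def color_name(wavelength_nm: float) -> str:
    """Return a conventional color label for a light wavelength."""
    bands = [
        (380.0, 450.0, "violet"),
        (450.0, 495.0, "blue"),
        (495.0, 570.0, "green"),
        (570.0, 590.0, "yellow"),
        (590.0, 620.0, "orange"),
        (620.0, 750.0, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible range"

print(color_name(450.0))  # "blue" - but the label is not the experienced hue
```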
Another approach is the cognitive one: things are blue because your nervous system classified them that way. But although the correlated purely-physical property is a lot more complicated here, it's the same story. Put informally, to use this as an explanation of blueness is to say that our perceptions turn blue because we call them blue or think they are blue.
I think Dennett would understand my point, but as usual he bites the bullet and denies that color is there. He calls it "figment" - figmentary pigment - because according to physics, there is nothing actually blue, inside or outside one's head. But blueness is there, therefore that ontology is wrong.
"Emergence" is a popular dodge: colors and other subjective properties, though not being identical with any elementary physical property, somehow "emerge" when a brain enters the picture. Apart from being vague, that's just dualism: if the emergent properties are not identical with one of the purely physical properties in that combinatorial grammar I mentioned, then it is different from all of them, no matter how correlated it is.
As I said, my answer is to turn it around, and to say that the existence of blueness (etc) is axiomatic, and so it must be one of the things that a true and complete theory of reality would be about. It is as if one were to look at electromagnetism and say, my God, those things we thought were lengths, they're actually colors! - rather than vice versa. But it's also my thesis that when you look at doing this in detail, some of the obvious candidates for this ontological inversion, such as "computational states of neurons", present too many specific difficulties to work (in that case, because a computational state of a meso-scale system like a neuron is a vague property, microphysically speaking). Thus I find myself pursuing quantum ontological exotica.
comment by Caledonian2 · 2008-05-19T01:46:18.000Z · LW(p) · GW(p)
I think Dennett would understand my point, but as usual he bites the bullet and denies that color is there. He calls it "figment" - figmentary pigment - because according to physics, there is nothing actually blue, inside or outside one's head. But blueness is there, therefore that ontology is wrong.
No it isn't, therefore that 'ontology' is correct. Or so anyone who chooses to do so can argue. If you don't have any better rejoinder than "oh yes it does", then it seems the argument for your position is quite weak.
I think your basic problem is that you really don't seem to have a clear understanding of what you mean when you say a thing is true - thus you have need of terms like ontology.
As I see it, we need only a mathematical description of a set that binds together the various neurological associations we have with a particular input state, and that is the description of 'blue'. There is, quite literally, nothing else to explain.
comment by mitchell_porter2 · 2008-05-19T02:01:38.000Z · LW(p) · GW(p)
Are you color-blind, Caledonian? Do you ever use color words? Do you think they refer to nothing more than "neurological associations"? Or is it that they do refer to something of which you are directly aware, but which you have a way of talking around?
When I look out the window right now, I see a blue patch of sky. Am I seeing neurological associations? Am I seeing a mathematical description of neurological associations?
You are free to deny that 'blueness is there', but if that is your only counterargument, I have to think my original argument must have been quite strong.
comment by Caledonian2 · 2008-05-19T02:27:55.000Z · LW(p) · GW(p)
When I look out the window right now, I see a blue patch of sky. Am I seeing neurological associations?
No, the neurological associations are the act of seeing. Different receptors in your eyes are excited by various frequencies of light, and the strength and pattern of their activation are associated in various ways by your central nervous system. There is a mathematical description of all of the steps in that process, including its representation in your memory.
Color isn't "out there", which is why very different frequency combinations can be perceived as the same color.
You seem to be ascribing something ineffable to your sensation of color, and then proclaiming that it can't be comprehended; I fail to see how any study of the mathematics of quantum mechanics could convince you it's responsible for your supposed sensations. Good luck with that, I guess.
comment by mitchell_porter2 · 2008-05-19T03:19:50.000Z · LW(p) · GW(p)
Color isn't out there; but how can it be "in here", if the brain also just consists of particles in space? And color is either somewhere, or it's nowhere. Dennett takes the "nowhere" option, as part of his general denial of a "Cartesian theater", a place where appearances happen.
Except for those who think mental states can supervene directly on processes extending far outside the physical body, I think most scientifically minded people suppose that the world of appearance is somehow identical with something inside the brain: that (in one sense) what you see is in your visual cortex, even if (in another sense) what you see is far away. (Though they may prefer to say that it's the seeing that is in the cortex, rather than what is seen.) As I have just argued, this does not resolve the problem of locating perceived color (etc) in the physical world, it merely localizes the problem. We still await the identification of some physical thing or property in the brain which can plausibly be identified with an actual instance of color. And I think that's hopeless so long as you restrict yourself to states built up from fuzzy mesoscopic properties like membrane polarizations. The ghost of a homogeneous shade of color has to somehow hover over something which in actual fact consists of large numbers of ions on either side of a big macromolecule.
So I look for the true Cartesian theater to be found at a level where physically, even a 'particle' is just an approximation, such as in a decomposition of a global quantum state into what formally just appear to be algebraic structures lacking even a spatial interpretation. Quantum theory actually permits such an abstract perspective, if you step away from the use of a particular basis, such as configuration. I think that here, and only here, out of all the physics we know and half-know, is there something removed enough from spatializing presuppositions that it might be identifiable directly with a state of consciousness. This has the empirical consequence that there had better be a distributed quantum condensate (or other locus of entanglement) somewhere in the brain, causally situated so as to function as a Cartesian theater and locus of consciousness. All I'm doing is displacing the hard problem onto the properties and structures of that hypothesized quantum object, but it had to be done because the problem appears to be unsolvable out in the world of disentangled 'individual particles'.
comment by Elliot_Temple · 2008-05-20T17:21:37.000Z · LW(p) · GW(p)
Sir Roger Penrose - a world-class physicist - still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions - only told him his hypotheses needed to be falsifiable and have empirical consequences. Just like Eliezer18.
There's nothing wrong with proposing the hypothesis. The problem is believing and supporting it while it's pending. That it hasn't been refuted yet is no reason to take that side of the issue. (Arguably it has been refuted, because there are known criticisms of it which no one has answered, but never mind that.)
Similarly, discarding other open/pending hypotheses because, what, he likes this one? That's obviously unreasonable.
comment by HalFinney · 2008-05-20T18:32:00.000Z · LW(p) · GW(p)
Mitchell and Eliezer are both smart people, yet their intuitions and reasoning have led them to very different conclusions about quantum reality. While both interpretations are, I think, testable in principle, with Mitchell's much closer to being practically realizable, neither can be fully tested at this time. The scientific conclusion is probably to say that it doesn't really matter, come back when you have a prediction. Yet I think both Eliezer and Mitchell are unsatisfied with this agnosticism and both want to see tighter bounds on our beliefs about what may be true. Science gives us a way forward on scientific disputes; yet the disagreement between Eliezer and Mitchell seems to be much harder to resolve.
Philosophers have argued for centuries on similar issues and made virtually no progress. Does this suggest that there is no effective means to settle disputes that go beyond science? Maybe in the end, science is the best we can do.
comment by handoflixue · 2011-05-24T22:13:58.946Z · LW(p) · GW(p)
"but even after you say "Consciousness is caused by quantum gravity", you don't anticipate anything"
It seems to me that if you have a testable hypothesis, then you are anticipating something. If I believe in quantum gravity, that's just a belief; if I theorize that running Test X will give result Y, then there's an actual anticipation. Assuming it's a sane test and a sane hypothesis, I'm just not understanding how you could possibly fail to change your anticipations.
I've had this question on a few articles, and thus far haven't come any closer to enlightenment. It seems to me that the basic failing of "vitalism" or "phlogiston" is that they're too general: they don't actually make predictions or change anticipations, and thus you can't test them in the first place. If they made testable predictions, they'd have to change anticipations (unless you just wanted to ignore the evidence).
"Second, the hypothesis has no moving parts - the secret sauce is not a specific complex mechanism, but a blankly solid substance or force."
It seems to me that the difference between "phlogiston" and "gravity" isn't the presence or absence of a complex mechanism; gravity, as it was first understood, just happened to make actual, testable predictions (objects near the Earth's surface accelerate downward at 9.8 m/s^2), and from there it has been refined into something more complex.
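Concretely, the early theory already licensed anticipations of this sort, no "why" required - a drop from 20 m, say:

$$d = \tfrac{1}{2} g t^2 \;\Rightarrow\; t = \sqrt{2d/g} = \sqrt{2 \cdot 20 / 9.8} \approx 2.0 \text{ s}$$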
But it seems to me that it's entirely useful to have a model that makes useful, accurate, "anticipate-able" predictions, even if you have no clue why the model works.
In fact, it seems to me that quite a lot of the failings of science have been when we try to explain "why" instead of "what", especially when people start embracing the "why" as True Dogma.
I suppose mainly, I'm not clear whether I'm missing something big, or if I've already gotten it and the thing to understand here is simply that a lot of other people haven't gotten it.
comment by SeanMCoincon · 2014-11-14T18:00:43.095Z · LW(p) · GW(p)
"No, I did not go through the traditional apprenticeship. But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than myself."
It seems like a viable means of propagating education about such mistakes - or the mistakes of aspiring rationalists in general - would be to set up (relatively) straightforward scientific experiments that purposefully make a given mistake and then allow students to perform the experiment unsuccessfully. The postmortem for each class/lab would review what went wrong, what wrong looked like, why things went wrong, and so forth. Sort of a "no, seriously, learn from the past" symposium.
Do any of you know of any such existing educational structures in the Bay Area?
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-18T18:59:42.181Z · LW(p) · GW(p)
Perhaps it depends on the particular tradition. In Russia, the scientific method is usually described along the lines of "most precise, and among those, the simplest" - in other words, "if you can distinguish two theories by experimental evidence, please do so; unless/until that is possible, use Occam's Razor". The fact that real-world scientists fail to apply Occam's Razor now and then does not destroy the fact that ideal Science includes it - but that fact is the main reason for this ordering, and not the reverse one.
comment by TAG · 2021-09-30T18:29:00.711Z · LW(p) · GW(p)
Clearly, no one told them about the formal definition of Occam’s Razor, in whispered apprenticeship or otherwise.
"The"...?
Why use Occam's razor at all? If we were only interested in empirical adequacy - the ability to make accurate predictions - simplicity would only buy the ability to make predictions with fewer calculations. But SI, according to Yudkowsky (but not Solomonoff), doesn't just make predictions; it tells you true facts about the world.
If you are using a simplicity criterion to decide between theories that are already known to be predictive, as in Solomonoff induction, then simplicity doesn't buy you any extra predictiveness, so the extra factor it buys you is presumably truth.
There are multiple simplicity criteria, but not multiple truths. So you need the right simplicity criterion. If you have a conceptually valid simplicity criterion, and you formalise it, then that's as good as it gets: you've ticked all the boxes. If you formalise a simplicity criterion that has no known relationship to truth, then you haven't achieved anything. So it is not enough to say that Solomonoff induction is "the" formal standard of simplicity. There are any number of ways of conceptualising simplicity, and you need the right one.
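For reference, the criterion under discussion: Solomonoff's prior weights each program p by its length ℓ(p) in bits on a universal prefix machine U, summing over programs whose output begins with the observed data x:

$$M(x) = \sum_{p \,:\, U(p) \text{ outputs } x\ldots} 2^{-\ell(p)}$$

Which instruction set U uses is precisely the kind of arbitrariness at issue here.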
Consider this exchange, from "A semi-technical introduction to Solomonoff Induction":
"ASHLEY: Uh, but you didn’t actually use the notion of computational simplicity to get that conclusion; you just required that the supply of probability mass is finite and the supply of potential complications is infinite. Any way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears.
BLAINE: Well, maybe. But it so happens that Yudkowsky did invent or reinvent that argument after pondering Solomonoff induction, and if it predates him (or Solomonoff) then Yudkowsky doesn’t know the source. Concrete inspiration for simplified arguments is also a credit to a theory, especially if the simplified argument didn’t exist before that.
ASHLEY: Fair enough."
I think Ashley deserves an answer to the objection "[a]ny way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears", not a claim about who invented what first!
Or you could write a theory in English and count the number of letters... that's formal. But what has it to do with truth and reality? And what, equally, does a count of machine-code instructions have to do with truth or probability?
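Formally, Ashley's point goes through: assuming only finitely many theories at each complication count n, any discrete counting scheme - letters, machine instructions, wheels and gears - yields a proper prior, e.g.

$$P(\text{count } n) \propto 2^{-n}, \qquad \sum_{n=1}^{\infty} 2^{-n} = 1,$$

so normalizability alone cannot single out the right criterion.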
There is one interpretation of Occam's razor - the epistemic interpretation - that has the required properties. If you consider a theory as a conjunction of propositions, each having a probability less than one, then, all else being equal, a higher count of propositions will be less probable. We already know that propositions are truth-apt - that they are capable of expressing something about the world - and it is reasonable to treat them probabilistically.
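The arithmetic behind that: a conjunction can never be more probable than its least probable conjunct, and under independence the probabilities multiply, shrinking with each added proposition:

$$P(p_1 \wedge \dots \wedge p_n) \le \min_i P(p_i); \qquad \text{if independent, } P = \prod_{i=1}^{n} P(p_i), \text{ e.g. } 0.9^5 \approx 0.59.$$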
So that is the right simplicity criterion... except that it has nothing to do with SI!