The Reasonable Effectiveness of Mathematics or: AI vs sandwiches

post by Vanessa Kosoy (vanessa-kosoy) · 2020-02-14T18:46:39.280Z · LW · GW · 8 comments

Contents

  Background
  The miracle of math
  Math versus the brain
  Math versus natural language
    Precision
    Objectivity
  Why is math effective?
    Legibility to others
    Legibility to oneself
    Measuring complexity
    Quantitative answers
    Leveraging computers
  When is math effective?
    Serial depth
    Anthropocentrism
    Sample complexity [EDIT 2020-02-15]
  What's next?

TLDR: I try to find the root causes of why math is useful.

Epistemic status: Marginally confident in veracity, not at all confident in novelty.

Background

I recently had a discussion [LW(p) · GW(p)] with Rohin that started from Paul Christiano's concept of "intent alignment" but branched off into a different question, namely: do we need mathematical theory to solve AI risk, or is it sufficient to do experiments and informal reasoning? I argued that experiments are important but insufficient because

...ultimately, you need theoretical knowledge to know what can be safely inferred from these experiments. Without theory you cannot extrapolate.

At some point Rohin asked me

I'm curious what you think doesn't require building a mathematical theory?

I replied

I'm not sure about the scope of your question? I made a sandwich this morning without building mathematical theory :)

to which Rohin said

Presumably the ingredients were in a slightly different configuration than you had ever seen them before, but you were still able to "extrapolate" to figure out how to make a sandwich anyway. Why didn't you need theory for that extrapolation?

Obviously this is a silly example, but I don't currently see any qualitative difference between sandwich-making-extrapolation, and the sort of extrapolation we do when we make qualitative arguments about AI risk. Why trust the former but not the latter? One answer is that the latter is more complex, but you seem to be arguing something else.

So, in this essay I will try to explain my view of the role of mathematical theory and the qualitative difference between sandwiches and AI pertaining to this role.

The miracle of math

It probably brooks no argument that mathematics played a central role in the tremendous progress of science and technology during the last few centuries, and that it is used extensively in virtually all fields of modern engineering. The successes of mathematics have been so impressive that they prompted the Nobel-winning physicist Eugene Wigner to write eir famous essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which ey call it no less than a "miracle" and write:

...the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious... there is no rational explanation for it.

I see two main reasons why it's important to find that elusive rational explanation. First, knowing why mathematics is useful will help us figure out exactly when it is useful, and in particular what its role should be in AI alignment. Second, the effectiveness of mathematics is in itself an observation about the properties of human reasoning, and as such it might hint at insights both about intelligence in the abstract and about human intelligence in particular, both of which are important to understand for AI alignment.

Risking hubris, I will now take a stab at dispelling Wigner's mystery. After all, mystery exists in the map, not in the territory [LW · GW].

Math versus the brain

First, let's look at how we actually use mathematics to solve real-world problems. To solve a real-world problem using any method, input from the real world is needed. You can't know what the world looks like without actually looking at it. But then, once you have looked at it, you want to somehow extrapolate your observations and infer new facts and predictions. Making such an extrapolation requires building models, deciding how probable these models are, and finally applying the models to the question of interest. Mathematics then enters as a language in which such models can be described and as a toolbox for extracting predictions from these models.

Second, although the track record of mathematics is evident, it is even more evident that humans don't require mathematics to think. In fact, the human brain is perfectly capable of accomplishing all of the steps above on its own: constructing models, evaluating models and extracting predictions from models. To some extent, natural language plays the same role in human thinking at large as mathematics plays in its applications.

However, there is an important difference between mathematical reasoning and "informal" reasoning: the latter virtually always involves a component that is not conscious and that cannot be easily verbalized. So, although thinking always involves models, a lot of the time these models are fully or partially hidden, encoded somewhere in the neural networks of the brain. This hidden, unconscious part is often called "intuition".

Now, using math doesn't replace our cognition, it augments it. Even when we use math we actually use all three types of thinking at once: the unconscious intuition, the conscious informal verbal reasoning and (also conscious) mathematical reasoning. Indeed, reasoning using only math would be more or less equivalent to creating AGI (mathematical computations can be delegated to a computer, and anything a computer does can be regarded as a mathematical computation). The question, then, is: what does this last layer do that the first two don't do as well on their own, and in which cases is it needed?

Math versus natural language

I already said that reasoning using mathematics is somewhat similar to reasoning using natural language. But, there are two main differences between mathematics and natural language that are relevant to the former's effectiveness:

Precision

Mathematics is precise. Mathematical definitions are crisp and mathematical statements have unambiguous meaning within the ontology of mathematics. On the other hand, natural language concepts are usually fuzzy, their semantics defined by subjective unconscious knowledge that varies from speaker to speaker. Somewhere in the brain is a neural circuit representing the concept, but this neural circuit is not itself part of language. Moreover, natural language statements have meaning that often depends on context and background assumptions.

Objectivity

Mathematics evolved in order to answer objective questions about the world at large. (And, bootstrapping from that, in order to answer questions about mathematics itself.) Mathematics happened because we looked for models and tools that generalize as much as possible and that don't depend on social context[1]. Moreover, the evolution of mathematics was a conscious process, one in which we fully applied our collective reasoning faculties to make mathematics better.

On the other hand, natural language evolved to some extent to answer objective questions about the world, but also in order to play complex social games. Natural language is heavily biased towards a human-centric view of the world, and to some extent towards the conditions in which humans existed historically. Natural language evolved in a process which was largely unconscious and not even quite human (in the same sense that the biological evolution of humans is not in itself human).

Why is math effective?

These two differences lead to five reasons why augmenting reasoning by mathematics is sometimes effective:

Legibility to others

The precise nature of mathematics makes mathematical reasoning legible to other people. Other people can evaluate your math and build on it, without any risk of misunderstanding your definitions and without having to deal with difficult-to-convey intuitions[2]. Since human civilization is a collective multi-generational effort, the improved ability to collaborate can serve to significantly enhance and accelerate the generation of knowledge.

Legibility to oneself

The precise nature of mathematics makes mathematical reasoning legible to yourself. This might seem nonsensical at first: shouldn't you perfectly understand your own reasoning anyway? But, our reasoning is not transparent to us.

Sometimes we believe things for reasons that we are not aware of, and these reasons might be poorly aligned with truth-seeking: hence, cognitive bias. Of course, all such biases should have evolutionary reasons. But, these reasons probably have to do with specifics of the ancestral environment, and the game theory of conforming to the tribe.

Moreover, when your reasoning is transparent, you can make full use of your cognitive faculties to improve the reasoning process. This is something I already mentioned when I spoke about the objectivity of mathematics. A transparent phenomenon can be analysed the same way as any phenomenon in the external world. On the other hand, an opaque phenomenon, some of which is hidden inside your own brain, can only be analysed to the extent your brain is specifically designed to analyse it (which is often limited).

Measuring complexity

I have mentioned the need to evaluate the probability of different models. This evaluation is done by comparing to observations, but it also requires a prior. The human brain has such a prior implicitly, but this prior is in some sense biased towards the ancestral environment. This is why humans kept coming up with anthropomorphic explanations of natural phenomena for so long, an error that took millennia to correct (and is still not fully corrected).

Now, what is the "correct" prior? Arguably it is Occam's razor: simpler hypotheses are more likely. But, how do we decide what is "simple"? Solomonoff induction is a formalization of Occam's razor, but Solomonoff induction depends on the choice of a universal Turing machine. More broadly and less formally, description complexity depends on the language you use to write the description. My claim is: objectivity of mathematics means it is the correct language of choice.

Now, this claim is not entirely precise. There is not really a single formal mathematical language, there are different such languages, and if we want to literally measure description length then it depends on the precise encoding too. Moreover, nobody really measures the length of mathematical descriptions when evaluating scientific hypotheses (although maybe they should). However, the use of mathematical language still naturally leads to a better model evaluation process than what we have without it.
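
To make the point concrete, here is a toy sketch in Python (both mini-languages and the hypothesis are invented for this illustration, not anything standard) of how the same hypothesis gets very different description lengths in two different languages:

```python
# Toy illustration: the "description length" of the same hypothesis depends on
# the language it is written in. Language A has a primitive meaning "the even
# numbers"; language B only has more basic building blocks. Both mini-languages
# are invented for this example.

# Hypothesis: "the observed sequence is the even numbers in order."
description_in_A = ["evens"]
description_in_B = ["map", "(", "lambda", "n", ":", "2", "*", "n", ")", "naturals"]

print(len(description_in_A))  # 1 token  -> looks simple in language A
print(len(description_in_B))  # 10 tokens -> looks complex in language B

# Solomonoff induction has the same issue: the complexity it assigns depends on
# the universal Turing machine chosen, just as the token counts above depend on
# the chosen mini-language. Different choices shift complexities by at most a
# constant, but that constant can matter in practice.
```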

We should also consider the counterargument that the prior is subjective by definition. So, shouldn't the "brain's prior", whatever it is, be the correct prior by definition? I think that, strictly speaking, the answer is "yes". But, over the lifetime of civilization, our accumulated experience led us to update this prior, and single out the complexity measure suggested by math. This is exactly the objectivity of mathematics I mentioned before.

Quantitative answers

Another advantage of math is that it allows producing precise quantitative answers in a way informal reasoning usually doesn't. Even someone who has fairly good intuition about the mechanics of physical bodies cannot guess their trajectories or stability with the same precision a mathematical model can. I am not sure exactly why this is the case, but it seems to be the result of some noise inherent to the human brain, or to the translation between different modules in the brain. However, this advantage is only significant when your mathematical model is very accurate.
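
For a concrete, if trivial, illustration of this kind of precision, here is a minimal sketch (the launch speed and angle are made-up numbers): a two-line mechanical model pins down a trajectory far more precisely than intuition can.

```python
import math

# Idealized projectile (no air resistance): range = v^2 * sin(2*theta) / g.
v = 12.0                    # launch speed in m/s (made-up value)
theta = math.radians(35.0)  # launch angle (made-up value)
g = 9.81                    # gravitational acceleration, m/s^2

rng = v ** 2 * math.sin(2 * theta) / g
print(f"predicted range: {rng:.3f} m")  # about 13.79 m

# Intuition might guess "ten to fifteen meters"; the model pins the answer down
# to centimeters -- provided the model (here: no drag) is accurate enough.
```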

Specifically in the case of AI alignment, I am not sure how important this advantage is. I expect us to mostly only come up with models that depend on parameters for which we have rough order-of-magnitude estimates at best. But, maybe when the theory is fully revealed, there will be some use cases for quantitative precision.

Leveraging computers

In the information age, math gained another advantage due to the possibility of computer simulations. Such simulations allow us to leverage the computing power of machines, which can surpass the brain along some parameters (such as serial speed). On the other hand, you cannot offload some of your brain's neural networks to a computer. (Yet. Growth mindset!)
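
As a sketch of the kind of leverage meant here (the pendulum and its parameters are arbitrary choices for illustration): even a crude numerical simulation involves millions of serial arithmetic steps, trivial for a computer and hopeless to perform unaided.

```python
import math

# Crude Euler integration of a damped pendulum: theta'' = -sin(theta) - 0.1 * theta'.
# A million small serial steps is trivial for a computer and hopeless to do by hand.
theta, omega = 1.0, 0.0  # initial angle (rad) and angular velocity (arbitrary values)
dt = 1e-5                # time step in seconds

for _ in range(1_000_000):  # 10 simulated seconds
    alpha = -math.sin(theta) - 0.1 * omega
    omega += alpha * dt
    theta += omega * dt

print(f"angle after 10 simulated seconds: {theta:.4f} rad")
```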

When is math effective?

Let us now return to the question posed by Rohin: what is the difference between making sandwiches and solving AI risk? Why does the former require no mathematical theory [citation needed] whereas the latter does require it (according to me)? I see three relevant differences:

Serial depth

Making sandwiches is a task relatively similar to tasks we had to deal with in the ancestral environment, and in particular there is not a lot of serial depth to the know-how of making sandwiches. If we pluck a person from virtually any culture in any period of history, then it won't be difficult to explain to em how to make a sandwich. On the other hand, in the case of AI risk, just understanding the question requires a lot of background knowledge that was built over generations and requires years of study to properly grasp.

For tasks of this type, the "natural" human prior is perfectly suitable, there is not much need for collaboration (except sometimes the type of collaboration which comes naturally), and there is no need for strong optimization of the reasoning process. We are already wired to solve them.

Anthropocentrism

Making a good sandwich requires a lot of human-centric knowledge: it has to do with how and what humans like to eat. To give another example, consider artistic sculpting. This is also a field of knowledge that took generations to build and requires years to learn. And, some math may come in useful there, for example geometric calculations, not to mention the math that was needed to make the physical tools and materials that modern sculptors may use. But, since a large component of the task is catering to human aesthetic tastes, math cannot compete with innate human abilities that are designed to be human-centric.

On the other hand, studying AI risk involves questions about what kind of intelligent agents can exist in general, and what properties these agents have. Such questions are "objective" rather than human-centric in nature, and are better addressed by the "math-simplicity" prior. There might also be human-centric aspects when we speak of aligning AIs to humans. But, even there, promising approaches should not rely on many detailed properties of humans; otherwise we would get a solution that is very complex and fragile.

Sample complexity [EDIT 2020-02-15]

When we're learning to make a sandwich, we can make many attempts to perfect the technique, bounded only by the cost of time and ingredients. (Although most people don't experiment that much with sandwiches, civilization as a whole experiments with food a lot.) As a more important example, consider deep learning. Deep learning is far from the ancestral environment, and is not especially human-centric. Nevertheless, it had impressive successes despite making only relatively modest use of mathematical theory (modulo pre-existing tools), thanks to much trial and error (a process that is much cheaper for software than for hardware engineering). On the other hand, with AI risk we want to limit trial and error, since the entire problem is that errors might be too costly.

Since the role of math is enhancing our ability to extrapolate observations, it in particular improves our sample complexity. That is, math allows us to reach useful conclusions based on less empirical data. In particular, I said before that one advantage of math is that it effectively starts from a better prior. Now, if you start from a worse prior, you will still converge to the right answer (unless the prior is dogmatic), but it will take you longer.
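
Here is a minimal sketch of that last point, assuming a standard Beta-Bernoulli setup (the specific priors and numbers are made up for illustration): two agents estimate a coin's bias, one starting from a roughly uniform prior and one from a confidently wrong prior; both converge, but the second takes far longer.

```python
import random

random.seed(0)
true_bias = 0.7
flips = [1 if random.random() < true_bias else 0 for _ in range(1000)]

# Beta(a, b) priors: "reasonable" is uniform; "bad" confidently expects a bias near 0.02.
priors = {"reasonable": (1.0, 1.0), "bad": (1.0, 50.0)}

for name, (a, b) in priors.items():
    heads = 0
    for n, flip in enumerate(flips, start=1):
        heads += flip
        estimate = (a + heads) / (a + b + n)  # posterior mean of the bias
        if abs(estimate - true_bias) < 0.05:
            print(f"{name} prior: within 0.05 of the true bias after {n} flips")
            break
    else:
        print(f"{name} prior: still off after {len(flips)} flips")

# Both priors eventually converge (neither is dogmatic), but the "bad" prior
# needs several hundred flips to recover from its confident initial mistake.
```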

What's next?

I want to clarify that the theory I presented here is not supposed to be the final word on this question. Among other epistemic sins I surely made here, I presented five reasons and said nothing about their relative importance (although these reasons are not fully independent so it's not necessarily likely that one of them has overwhelming importance compared to the rest). Moreover, I should eat my own dog food and construct a mathematical theory that makes these arguments rigorous. In particular, I think that the separation into conscious and unconscious reasoning and its consequences can be modeled using Turing RL [AF(p) · GW(p)]. But, elaborating this further is work for another time.


  1. This is perhaps somewhat circular: mathematics is effective because we looked for something effective. But, I hope to at least elucidate a few gears inside this effectiveness. ↩︎

  2. Of course, there are many difficult-to-convey intuitions about how to do math: how to find proofs, and how to even decide which mathematical lines of inquiry are promising. But, the bare-bones product of this process is fully transparent. ↩︎

8 comments


comment by Vanessa Kosoy (vanessa-kosoy) · 2022-01-14T20:06:36.803Z · LW(p) · GW(p)

In this post I speculated on the reasons for why mathematics is so useful so often, and I still stand behind it. The context, though, is the ongoing debate in the AI alignment community between the proponents of heuristic approaches and empirical research[1] ("prosaic alignment") and the proponents of building foundational theory and mathematical analysis (as exemplified in MIRI's "agent foundations" and my own "learning-theoretic" research agendas).

Previous volleys in this debate include Ngo's "realism about rationality [LW · GW]" (on the anti-theory side), the pro-theory replies (including my [LW(p) · GW(p)] own [LW(p) · GW(p)]) and Yudkowsky's "the rocket alignment problem [LW · GW]" (on the pro-theory side).

Unfortunately, it doesn't seem like any of the key participants budged much on their position, AFAICT. If progress on this is possible, then it probably requires both sides working harder to make their cruxes explicit.


  1. To be clear, I'm in favor of empirical research, I just think that we need theory to guide it and interpret the results. ↩︎

comment by SB · 2020-09-04T09:46:02.880Z · LW(p) · GW(p)

This is the only such essay/post I've found and I really enjoyed reading it. I might go away and think a lot more about it, so thank you for writing it. If I come up with a rejoinder, I'll let you know.

There are a few aspects which are presented as dichotomous with respect to natural language and mathematics.

there is an important difference between mathematical reasoning and "informal" reasoning: the latter virtually always involves a component that is not conscious and that cannot be easily verbalized. So, although thinking always involves models, a lot of the time these models are fully or partially hidden, encoded somewhere in the neural networks of the brain. This hidden, unconscious part is often called "intuition".

It seems to be suggested that 'mathematical reasoning' often doesn't involve intuition (because the claim says that the difference is that "informal" reasoning virtually always does). OK. But understood in any normal, colloquial way, surely mathematical reasoning often does involve intuition? I might even say that it almost always involves intuition. I guess we might be able to get round this by saying that when one "does mathematics" (whatever that means), any part of your reasoning that is not attributable to either unconscious intuition or conscious, informal, verbal reasoning is called "conscious mathematical reasoning", and that's what we're really talking about, i.e. we define it so as not to include any purely intuitive thoughts.

Regarding 'precision' as a difference between mathematics and natural language, the author says:

mathematical statements have unambiguous meaning within the ontology of mathematics....
natural language statements have meaning that often depends on context and background assumptions.

I don't really understand what the phrase "the ontology of mathematics" is 'doing' in this sentence, but if I skip over this phrase, I find it hard to agree with the point being made (perhaps there is more going on with how 'ontology' is used here than I can tell), because I think that most people who have done research in pure mathematics will agree that mathematical statements often (or even always) have meaning that depends on context and background assumptions.

Regarding 'objectivity' as a difference between mathematics and natural language, the author says:

Natural language is heavily biased towards a human-centric view of the world, and to some extent towards the conditions in which human existed historically. Natural language evolved in a process which was largely unconscious and not even quite human (in the same sense that biological evolution of humans is not in itself human).

I find it very hard to disagree with: "Mathematics is biased towards a human-centric view of the world". In some sense, doesn't plain anthropomorphic bias just suggest: how can it not be so? Moreover, mathematics is clearly rooted in the conditions in which we existed historically; I think the onus is again on the author to say why this would not be the case: it's an area of human thought and inquiry and so is prima facie biased towards our condition. Here, I find it hard to really work out if this is a point I could ever believe (personally). Is there something 'more objective' about '1 + 2 = 3' compared with 'Seattle is in Washington'? I guess a fair number of people would say 'yes, the first statement is more objective'. But why? I don't really know.

It's interesting that, say, collaborators in mathematics are able to have long conversations in which they work out complex mathematical arguments without having to write things down using mathematical notation. Sure, they would use specialised terminology, but things like this really blur the distinction: it is reasoning using natural language, verbally, but it is clearly conscious and mathematical. I'm not sure there is a good distinction here.

At first reading, one source of confusion I can imagine is conflation of 'mathematics' (whatever that may be) with something closer to 'the symbolic manipulation of the notation of basic, well-understood, applicable mathematics'. There is some kind of conscious process, which feels extremely 'objective' and unambiguous, when you go from '2x = 4' to 'x = 2', or something (it doesn't have to be this basic, e.g. computing a partial derivative of an explicit function feels like this, or solving a linear system). But in the big scheme of things, this is very basic mathematics for which the notation is very refined and which humans understand extremely well. I think it is a mistake to think that 'mathematical reasoning' more generally feels like this. I keep coming back to it, but anyone who has done research in pure mathematics will understand how fuzzy and intuitive it can feel. I've been a mathematician for many years, and something which is hard to convey to students is how you have to reason when you don't know the answer, or even whether there is an answer that is findable by you (or by humans at all). Most people who study mathematics will never encounter and deal with this level of uncertainty in their mathematical reasoning, but as a researcher it is what you spend most of your energy dealing with, and the way you think and communicate changes and develops to match what you are dealing with. For example:

Other people can evaluate your math and build on it, without any risk of misunderstanding your definitions and without having to deal with difficult-to-convey intuitions

I cannot help but say that this is very unlikely to be the viewpoint of someone who has spent time doing research in pure mathematics.

Quantitative answers - and using computers to get those quantitative answers from data and big calculations - is one of the undeniable advantages of mathematics, I agree. But as you hint, these alone don't suggest that it is at all necessary for AI alignment/safety problems. As I said, I'm a mathematician, so I would actually like it if it were somehow necessary to develop deeper/more advanced mathematical theory in order to help with the problem, but at the moment that seems uncertain to my mind.

comment by Vanessa Kosoy (vanessa-kosoy) · 2020-09-07T16:22:32.299Z · LW(p) · GW(p)

I feel like this is a somewhat uncharitable reading. I am also a mathematician and I am perfectly aware that we use intuition and informal reasoning to do mathematics. However, it is no doubt one of the defining properties of mathematics that agreeing on the validity of a proof is much easier than agreeing on the validity of an informal argument, not to mention intuition which cannot be put into words. In fact it is so easy that we have fully automatic proof checkers. Of course most mathematical proofs haven't been translated into a form that an automatic proof checker can accept, but there's no doubt that it can be done, and in principle doing so requires no new ideas but only lots of drudgery (modulo the fact that some published proofs will be found to have holes in the process).
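
As a tiny illustration of the kind of machine-checkable artifact in question (a sketch in Lean 4; the theorem is a trivial stand-in):

```lean
-- A proof a machine can check mechanically: commutativity of addition on Nat.
-- Lean's kernel verifies it with no human judgement about "validity" required.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```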

As to whether mathematics is anthropocentric: it probably is, but it is very likely much less anthropocentric than natural language. Indeed, arguably the reason mathematics gained prominence is its ability to explain much of the non-anthropocentric aspects of nature. Much of the motivation for introducing mathematical concepts came from physics and engineering, and therefore those concepts were inherently selected for their efficiency in constructing objective models of the world.

comment by adamShimi · 2020-02-16T15:03:01.148Z · LW(p) · GW(p)

Nice post. Being convinced myself of the importance of mathematics both for understanding the world in general and for the specific problems of AI safety, I found it interesting to see what arguments you marshaled for and against this position.

About the unreasonable effectiveness of mathematics, I'd like to throw in the "follow-up" statement: the unreasonable ineffectiveness of mathematics beyond physics (for example in biology). The counterargument, at least for biology, is that Wigner was talking a lot about differential equations, which seem somewhat ineffective in biology; but theoretical computer science, which one can see as the mathematical study of computation, and thus somewhat a branch of mathematics, might be better suited to biology.

A general comment about your perspective is that you seem to equate mathematics with formal specification and proofs. That's not necessarily an issue, but most modern mathematicians tend not to be exact formalists, so I thought it important to point out.

For the rest of my comments:

  • Rather than precise, I would say that mathematics is formal. The difference lies in the fact that a precise statement captures almost exactly an idea, whereas a formalization provides an objective description of... something. Given that the main difficulty in applying mathematics and in writing specifications for formal methods is this ontological identification between the formalization and the object in the world, I feel that it's a bit too easy to say that maths captures ideas precisely.
  • Similarly, the fact that the definitions themselves are unambiguous (if they are formal) does not mean that their interpretation, meaning and use are. I agree that a formal definition is far less ambiguous than a natural language one, but that does not mean that it is completely unambiguous. Many disagreements I had in research were about the interpretation of the formalisms themselves.
  • Although I agree with the idea of mathematics capturing some concept of simplicity, I would specify that it is simplicity when everything is made explicit. That's rather obvious for rationalists [? · GW]. Formal definitions tend to be full of subtleties and hard to manage, but the fully explicit versions of the "simpler" informal models would actually be more complex than that.
  • Nitpick about the "quantitative": what of abstract algebra, and all the subfields that are not explicitly quantitative? Are they useful only insofar as they serve the more quantitative parts of maths, or am I taking this argument too far and you just meant that one use of maths is in the quantitative parts?
  • The talk about Serial Depth makes me think about deconfusion. I feel it is indeed rather easy to make someone not confused about making a sandwich, while this has yet to be done for AI Safety.
  • The Anthropocentrism argument feels right to me, but I think it doesn't apply if one is trying to build prosaic aligned AGI [AF · GW]. Then the "most important" thing is to solve rather anthropocentric models of decision and values, instead of abstracting them away. But I might be wrong on that one.
comment by shminux · 2020-02-15T05:38:51.117Z · LW(p) · GW(p)
making sandwiches is a task relatively similar to tasks we had to deal with in the ancestral environment, and in particular there is not a lot of serial depth to the know-how of making sandwiches. If we pluck a person from virtually any culture in any period of history, then it won't be difficult to explain to em how to make a sandwich. On the other hand, in the case of AI risk, just understanding the question requires a lot of background knowledge that was built over generations and requires years of study to properly grasp.

If I understand your argument correctly, it implies that dealing with agents that evolve from simpler than you are to smarter than you are within a few lifetimes ("foom") is not a task that was ever present, or at least not successfully accomplished by your evolutionary ancestors, and hence not incorporated into the intuitive part of the brain. Unlike, say, the task of throwing a rock with the aim of hitting something, which has been internalized and eventually resulted in the NBA, with all the required nonlinear differential equations solved by the brain in real time accurately enough, for some of us more so than for others.

Similarly, approximate basic counting is something humans and other animals have done for millions of years, while, say, accurate long division was never evolutionarily important and so requires engaging the conscious parts of the brain just to understand the question ("why do we need all these extra digits and what do they mean?"), even though it is technically much much simpler than calculating the way one's hand must move in order to throw a ball on just the right trajectory.

If this is your argument, then I agree with it (and made a similar one here before numerous times).

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2020-02-15T01:44:20.722Z · LW(p) · GW(p)
But, over the lifetime of civilization, our accumulated experience led us to update this prior, and single out the complexity measure suggested by math.

I may be picking nits, here, but what exactly does it mean to "update a prior"?

And as a mathematical consideration, is it in general possible to switch your probabilities from one (limit computable) universal prior to another with a finite amount of evidence?

comment by Gurkenglas · 2020-02-15T10:47:24.685Z · LW(p) · GW(p)

Two priors could indeed start out diverging such that you cannot reach one from the other with finite evidence. Strange loops help here:

One of the hypotheses the brain's prior admits is that the universe runs on math. This hypothesis predicts what you'd get by having used a mathematical prior from day one. Natural philosophy (and, nowadays, peer pressure) will get most of us enough evidence to favor it, and then physicists' experiments single out description length as the correct prior.

But the ways in which the brain's prior diverges are still there, just suppressed by updating; and given evidence of magic we could update away again if math is bad enough at explaining it.

comment by Pattern · 2020-02-14T20:49:35.023Z · LW(p) · GW(p)
On the other hand, you cannot offload some of your brain's neural networks to a computer (yet, growth mindset).

But you can run neural networks on a computer, and get them to do things for you. (I don't think this has taken off yet in the same way using the internet has.)

But, since a large component of the task is catering to human aesthetic tastes, math cannot compete with innate human abilities that are designed to be human-centric.

I'm skeptical of this. If we have found "math" to be so useful in the domains where it has been applied, why should it be supposed that it won't be useful in the domains where it hasn't been applied? Especially when its role is augmentation:

Now, using math doesn't replace our cognition, it augments it. Even when we use math we actually use all three types of thinking at once: the unconscious intuition, the conscious informal verbal reasoning and (also conscious) mathematical reasoning.

Determining what is safe to eat is not held to be a mystery.

Why should what is delicious to eat be any different? Why is this domain beyond the reach of science?