you end up observing the outputs of models, as suggested in the original post.
I agree with pragmatist (the OP) that this is a problem for the correspondence theory of truth.
What is it that makes their accumulated knowledge worthy of being relied upon?
Usefulness? Just don't say "experimental evidence". Don't oversimplify epistemic justification. There are many aspects: how well the knowledge fits with existing models and with observations, what its predictive power is, what its instrumental value is (does it help to achieve one's goals), etc. For example, we don't have any experimental evidence that smoking causes cancer in humans, but we nevertheless believe that it does. The power of the Bayesian approach is in the mechanism for fusing together all these different forms of evidence and arriving at a single posterior probability.
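As a toy sketch of what I mean by fusing evidence (made-up numbers, naive conditional-independence assumption):

```python
# Toy Bayesian fusion of several forms of evidence for a hypothesis H
# (e.g. "smoking causes cancer"). The likelihoods are invented for
# illustration; the point is only the mechanism, not the numbers.

prior = 0.5  # P(H) before seeing any evidence

# For each kind of evidence: (P(evidence | H), P(evidence | not H))
evidence = {
    "fits existing models":  (0.9, 0.3),
    "fits observations":     (0.8, 0.4),
    "good predictive power": (0.7, 0.2),
}

posterior_odds = prior / (1 - prior)
for name, (p_given_h, p_given_not_h) in evidence.items():
    # Naive Bayes assumption: the evidence sources are conditionally independent
    posterior_odds *= p_given_h / p_given_not_h

posterior = posterior_odds / (1 + posterior_odds)
print(f"P(H | all evidence) = {posterior:.3f}")
```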
How do you evaluate whether any given model is useful or not?
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of a common standard.
Solomonoff induction provides a universal standard for "perfect" inductive inference, that is, learning from observations. It is not entirely parameter-free, so it's "a standard", not "the standard". I doubt whether there is the standard, for the same reasons I doubt that Platonic Truth exists.
All you've got left are your internal thoughts and feelings
Umm, no, this is a false dichotomy. There is a large area in between "relying on one's intuition" and "relying on an objective external world". For example, how about "relying on the accumulated knowledge of others"?
See also my comment in the other thread.
I've got a feeling that the implicit LessWrong'ish rationalist theory of truth is, in fact, some kind of epistemic (Bayesian) pragmatism, i.e. "true is that which is knowable using probability theory". May also throw in "...for a perfect computational agent".
My speculation is that LW's declared sympathy towards the correspondence theory of truth stems from political / social reasons. We don't want to be confused with the uncritically thinking masses - the apologists of homoeopathy or astrology justifying their views with "yeah, I don't know how it works either, but it's useful!"; the middle-school teachers who are ready to treat scientific results as epistemological equals of their favourite theories coming from folk psychology, religious dogmas, or "common sense knowledge", because, you know, "they are all true in some sense". Pragmatic theories of truth are dangerous if they fall into the wrong hands.
What do you mean by "direct access to the world"?
Are you familiar with Kant? http://en.wikipedia.org/wiki/Noumenon
This description fits philosophy much better than science.
Sounds like a form of abduction, or, more precisely, failure to consider alternative hypotheses.
As for your options, have you considered the possibility that 99% of people have never formulated a coherent philosophical view on the theory of truth?
I'd love to hear a more qualified academic philosopher discuss this, but I'll try. It's not that the other theories are intuitively appealing, it's that the correspondence theory of truth has a number of problems, such as the problem of induction.
Let's say that one day we create a complete simulation of a universe whose physics almost completely matches ours, except for some minor details - for example, some specific types of elementary particles, such as neutrinos, are never allowed to appear. Suppose that there are scientists in the simulation, and they work out the Standard Model of their physics. The model presupposes the existence of neutrinos, but their measurement devices are never going to interact with a neutrino. Is the statement "neutrinos exist" true or false from their point of view? I'd say that the answer is "it does not matter".

To turn the example around, can we be sure that aether does not exist? Under Bayesianism, every instance of scientists failing to observe aether increases our confidence that it does not exist. However, we might be living in a simulation where the simulators have restricted all observations that could reveal the existence of aether. So it cannot be completely excluded that aether exists but is unobservable. The correspondence theory is thus forced to admit that "aether exists" has an unknown truth value. In contrast, a pragmatic theory of truth can simply say that anything that cannot, in principle, be observed by any means also does not exist, and be fine with that.
Ultimately, the correspondence theory presupposes a deep Platonism, as it relies on the Platonic notion of Truth being "somewhere out there". It renders science vulnerable to the problem of induction (which is not a real problem as far as the real world is concerned) - it allows anyone to dismiss the scientific method off-handedly by saying "yeah, but science cannot really arrive at the Truth - David Hume already proved so!"
We somehow have to deal with the possibility that everything we believe might turn out to be wrong (e.g. we are living in a simulation, and the real world has completely different laws of physics). Accepting the correspondence theory means accepting that we are not capable of reaching truth, and that we are not even capable of knowing whether we're moving in the direction of truth! (Our observations might give misleading results.) A kind of philosophical paralysis, which is solved by the pragmatic theory of truth.
There's also the problem that categories do not really exist in some strictly delineated sense, at least not in natural languages. For example, consider sentences of the form "X is a horse". According to the correspondence theory, a sentence from this set is true iff X is a horse. That seems to imply that X must be a mammal of the genus Equus etc. - something with flesh and bones. However, one can point to a picture of a horse and say "this is a horse" without normally being considered a liar. Wittgenstein's concept of family resemblance comes to the rescue, but I suspect it does not play nicely with the correspondence theory.
Finally, there's a problem with truth in formal systems. Some problems in some formal systems are known to be unsolvable; what is the truth value of statements that expand to such a problem? For example, consider the formula G (from Goedel's incompleteness theorem) expressed in Peano Arithmetic. Intuitively, G is true. Formally, it is possible to prove that assuming G is true does not lead to inconsistencies. To do that, we can provide a model of Peano Arithmetic using the standard interpretation; the standard natural numbers are an example of such a model. However, it is also possible to construct nonstandard models of Peano Arithmetic extended with the negation of G as an axiom, so assuming that the negation of G is true also does not lead to contradictions. So we're back at the starting point - is G true? Goedel thought so, but he was a mathematical Platonist, and his views on this matter are largely discredited by now. Most do not believe that G has a truth value in some absolute sense.
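For concreteness, the standard shape of G given by the diagonal lemma (in my notation; Prov_PA is the arithmetized provability predicate of PA) is:

```latex
% G "says of itself" that it is not provable in PA:
\mathrm{PA} \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner G \urcorner)
```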
This, together with Tarski's undefinability theorem, suggests that it might not make sense to talk about a unified mathematical Truth. Of course, formal systems are not the same as the real world, but the difficulty of formalizing truth in the former increases my suspicion of formalizations / axiomatic explanations of truth in the latter.
I meant that as a comment to this:
the information less useful than what you'd get by just asking a few questions.
It's easy to lie when answering questions about your personality on e.g. a dating site. It's harder, more expensive, and sometimes impossible to lie via signaling, such as via appearance. So, even though information obtained by asking questions is likely to be much richer than information obtained from appearances, it is also less likely to be truthful.
...assuming the replies are truthful.
I think universalism is an obvious Schelling point. It's not just moral philosophers who find it appealing; ordinary people do too (at least when thinking about it in an abstract sense). Consider Rawls' "veil of ignorance".
Mountaineering or a similar extreme activity is one option.
Are there any moral implications of accepting the Many Worlds interpretation, and if so what could they be?
For example, if the divergent copies of people (including myself) in other branches of the Multiverse should be given non-negligible moral status, then that's one more argument against the Epicurean principle that "as long as we exist, death is not here". My many-worlds self can die partially - that is, just in some of the worlds. So I should try to reduce the number of worlds in which I'm dead. On the other hand, does it really change anything compared to "I should reduce the probability that I'm dead in this world"?
Is there some reason to think that physiognomy really works? Reverse causation probably explains most of the apparent effect - e.g. tall people are more likely to be seen as leaders by others, so they are more likely to become leaders. Nevertheless, is there something beyond that?
Funny, I thought escaping into their own private world was not something exclusive to nerds. In fact most people do that. Schoolgirls escape into fantasies about romance. Boys into fantasies about porn. Gamers into virtual reality. Athletes into fantasies about becoming famous in sport. Mathletes - about being famous and successful scientists. Goths - musicians or artists. And so on.
True, not everyone likes to escape into sci-fi or fantasy, but that's because different minds are attracted to different kinds of things. D&D is a relatively harmless fantasy. I'm not that familiar with it, so I'm not even sure whether it can be used to diagnose "nerds", but that's not the point. Correlation is not causation.
Regarding "jerks", we apparently have disagreement on definitions, so this is an issue not worth pursuing. My point is that your self-styled definition of a "nerd" is bit ridiculous, as in fact you're talking about three different groups of people that just happen to be overlapping.
the solution will involve fixing things that made one a "tempting" bullying target
So a nerd, according to the OP, is someone who:
- lacks empathy and interest in other people
- lacks self confidence
- has unconventional interests, ideas, and appearance
But even if we take for granted that this is a correct description of a nerd, these are very different issues and require very different solutions.
The last problem is simple to fix at the level of society and ought to be fixed there. Hate against specific social groups should not be acceptable, no matter how intuitive it feels and how deep its biological basis is. The current situation, in which hate towards different races, nationalities, genders etc. is not politically acceptable but hate towards nerds is, does not make sense. Overall, people have learned and mostly agree that tolerance is a good idea - but it is still applied very selectively. For example, if parents and schools are capable of keeping race-hate out of the classrooms (more or less), then they should be capable of censoring nerd-hate to an equal extent.
The problem of nerds being jerks and suffering as a consequence is not a real social problem at all. For society as a whole, it is simply negative feedback given to someone for being a jerk - a regulatory mechanism. Here I agree with the OP that self-improvement is the action to take.
The problem of self-confidence is a real problem, as in this case it easily leads to a negative feedback loop. If the usual ways of increasing self-confidence do not help, I see no moral arguments against e.g. using personality-altering confidence-boosting drugs, if such were easily available and had no side effects.
So we have (1) a social solution, (2) a "not a real problem" solution, and (3) a "wait for scientific and technological progress to fix broken biology/psychology" solution.
If UGC is true, then one should doubt recursive self-improvement will happen in general
This is interesting, can you expand on this? I feel there clearly are some arguments in complexity theory against AI as an existential risk, and that these arguments would deserve more attention.
To sidetrack a bit, as I've argued in a comment, if it turns out that many important problems are practically unsolvable on realistic timescales, any superintelligence would be unlikely to gain a strategic advantage. The support for this idea is much more concrete than the speculations in the OP of this thread; for example, there are many problems in economics that we don't know how to solve efficiently despite investing a lot of effort.
Why do you think that the fundamental attribution error is a good place to start someone's introduction to rational thinking? There seems to be a clear case of the Valley of Bad Rationality here. The fundamental attribution error is a powerful psychological tool. It allows us to take personal responsibility for our successes while blaming the environment for our failures. Now assume that this tool is taken away from a person, leaving all his/her other beliefs intact. How exactly would this improve his/her life?
I also don't get why thinking that "the rude driver probably has his reasons too, so I should excuse him" is a psychologically good strategy, even assuming it is morally right.
About map vs. reality: not sure why it has to be put together with the FAE, as it's a much more general topic. And your explanation is not the first one I've seen on this topic that leaves a big "so what?" question hanging in the air. At face value, it seems to say that "people often confuse the idea of an apple with an apple in their hand". Now clearly that is not the case for anyone, except perhaps the most die-hard Platonists. And even if it were, why should the aspiring rationalist care?
I think negative examples would be really strong here. Teaching about the perils of magical thinking and wishful thinking would be a good start. Only after giving a few compelling concrete examples does it make sense to generalize and speak at a more abstract level. It also seems that many aspiring rationalists / high-IQ persons are especially vulnerable to the trap of building elaborate mental models of something, and then failing to empirically test them by experiencing the raw reality.
I don't think that overfitting is a good metaphor for your problem. Overfitting involves building a model that is more complicated than an optimal model would be. What exactly is the model here, and why do you think that learning just a subset of the course's material leads to building a more complicated model?
Instead, your example looks like a case of sampling bias. Think of the material of the whole course as the whole distribution, and of the exam topics as a subset of that distribution. "Training" your brain with samples from just that subset is going to produce a learning outcome that is not likely to work well for the whole distribution.
Memorizing disconnected bits of knowledge without understanding the material - that would be a case of overfitting.
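A minimal sketch of the sampling-bias point, with made-up data. The "model" is deliberately kept simple, so the failure comes purely from training on a narrow slice of the distribution, not from overfitting:

```python
import numpy as np

# "Whole course" = the full range of x; "exam topics" = a narrow slice of it.
rng = np.random.default_rng(0)
x_all = np.linspace(-3, 3, 200)
y_all = np.sin(x_all) + 0.1 * rng.standard_normal(x_all.size)  # the real "material"

# "Train" only on the narrow slice that past exams covered
mask = (x_all > 0) & (x_all < 1)
coeffs = np.polyfit(x_all[mask], y_all[mask], deg=1)  # simple model, biased sample

def rmse(x, y):
    return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("error on the narrow slice:", rmse(x_all[mask], y_all[mask]))  # looks fine
print("error on the whole course:", rmse(x_all, y_all))              # much worse
```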
There is a semi-official EA position on immigration
Could you describe what this position is? (or give a link) Thanks!
We usually don't worry about personality changes because they're typically quite limited. Completely replacing brain biochemistry would be a change on a completely different scale.
And people occasionally do worry about these changes even now, especially if they're permanent, and if/when they occur in others. Some divorces happen because a person's partner "no longer sees the same man/woman he/she fell in love with".
Taxing the upper middle class is generally a good idea; they are the ones most capable of and willing to pay taxes. Many European countries apply progressive tax rates. Calling it a millionaire tax is a misnomer, of course, but otherwise I would support it (I'm from Latvia, FYI).
Michael O. Church is certainly an interesting writer, but you should take into account that he basically is a programmer with no academic qualifications. Most of his posts appear to be wild generalizations of experiences personally familiar to him. (Not exclusively his own experiences, of course.) I suspect that he suffers heavily from the narrative fallacy.
A good writeup, but you downplay the role of individual attention. No textbook is going to have all the answers to questions someone might formulate after reading the material. Textbooks also won't help students who get stuck doing exercises: in books, it's either nothing or everything (the complete solution).
The current system does not do a lot of personalized teaching because the average university has a tightly limited amount of resources per student. The very rich universities (such as Oxford) can afford to personalize teaching to a much larger extent, via tutors.
What are some good examples of rationality as "systematized winning"? E.g. a personal example of someone who has practiced rationality systematically for a long time, where there are good reasons to think that doing so has substantially improved their life.
It's easy to name a lot of famous examples where irrationality has caused harm. I'm looking for the opposite. Ideally, some stories that could interest intelligent but practically minded people who have no previous exposure to the LW memeplex.
The answer to the specific question about technetium is "it's complicated, and we may not know yet", according to physics Stack Exchange.
For the general question "why are some elements/isotopes more or less stable" - generally, an isotope is more stable if it has a balanced number of protons and neutrons.
I know what SI is. I'm not even pushing the point that SI is not always the best thing to do - I'm not sure whether it is, as it's certainly not free of assumptions (such as the choice of the programming language / Turing machine), but let's not go into that discussion.
The point I'm making is different. Imagine a world / universe where nobody has any idea what SI is. Would you be prepared to speak to them - all their scientists, empiricists, and thinkers - and say "all your knowledge is purely accidental; you unfortunately have absolutely no methods for determining what the truth is, no reliable methods to sort out unlikely hypotheses from likely ones - while we, incidentally, do have the method, and it's called Solomonoff induction"? Because it looks like what iarwain1 is saying implies exactly that. I'm sceptical of this claim.
Let me clarify my question. Why do you and iarwain1 think there are absolutely no other methods that can be used to arrive at the truth, even if they are sub-optimal ones?
Why can't there be other criteria to prefer some theories over other theories, besides simplicity?
Perhaps you can comment on the opinion that "simpler models are always more likely" is false: http://www2.denizyuret.com/ref/domingos/www.cs.washington.edu/homes/pedrod/papers/dmkd99.pdf
Perhaps one could say that an agent in the sense that matters for this discussion is something with a personal identity, a notion of self (in a very loose sense).
Intuitively, it seems that tool AIs are safer because they are much more transparent. When I run a modern general purpose constraint-solver tool, I'm pretty sure that no AI agent will emerge during the search process. When I pause the tool somewhere in the middle of the search and examine its state, I can predict exactly what the next steps are going to be - even though I can hardly predict the ultimate result of the search!
In contrast, the actions of an agent are influenced by its long-term state (its "personality"), so its algorithm is not straightforward to predict.
I feel that the only search processes capable of internally generating agents (the thing Bostrom is worried about) are the insufficiently transparent ones (e.g. ones using neural nets).
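A toy illustration of the kind of transparency I mean - not any particular tool, just a minimal backtracking search whose entire state is an explicit partial assignment, so pausing it tells you exactly what it will try next, even though the final result is hard to predict:

```python
# Minimal backtracking solver for "pick one value per variable so that all
# chosen values differ" - a stand-in for a general constraint solver.

def solve(domains, assignment=None, trace=False):
    assignment = assignment or {}
    if trace:
        print("current partial assignment:", assignment)  # the full search state
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:                # deterministic order: next step is predictable
        if value not in assignment.values():  # the only constraint: all-different
            result = solve(domains, {**assignment, var: value}, trace)
            if result is not None:
                return result
    return None                               # dead end: backtrack

print(solve({"x": [1, 2], "y": [1, 2, 3], "z": [1, 3]}, trace=True))
```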
See this discussion in The Best Textbooks on Every Subject
I agree that the first few chapters of Jaynes are illuminating; I haven't tried to read further. Bayesian Data Analysis by Gelman feels much more practical, at least for what I personally need (a reference book for statistical techniques).
The general prerequisites are actually spelled out in the introduction of Jaynes's Probability Theory. Emphasis mine.
The following material is addressed to readers who are already familiar with applied mathematics at the advanced undergraduate level or preferably higher; and with some field, such as physics, chemistry, biology, geology, medicine, economics, sociology, engineering, operations research, etc., where inference is needed. A previous acquaintance with probability and statistics is not necessary; indeed, a certain amount of innocence in this area may be desirable, because there will be less to unlearn.
Scott Aaronson has formulated it in a similar way (quoted from here):
whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.
…A good replacement question Q′ should satisfy two properties: (a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q, [and] (b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.
In human society and at the highest scale, we solve the principal-agent problem by separation of powers - the legislative, executive, and judiciary powers of the state are typically divided into independent branches. This naturally leads to a categorization of AI capabilities:
AI with legislative power (the power to make new rules)
AI with high-level executive power (the power to make decisions)
AI with low-level executive power (the power to carry out orders)
AI with a rule-enforcing power
AI with a power to create new knowledge / make suggestions for decisions
What Bostrom & co. show is that the seemingly innocent powers to create new knowledge and to carry out low-level, well-specified tasks are in fact very unsafe (the Riemann hypothesis solver, the paperclip maximizer).
What Bostrom implicitly assumes is that the higher levels of power do not bring any important new dangers, and might, in fact, be better for humanity (the example of an all-powerful sovereign that decides and enforces moral laws in a way that makes them similar to physical laws). I feel that this point requires more analysis. In general, each new capability brings more ways to be unfriendly.
I don't know about the power needed to simulate the neurons, but my guess is that most of the resources are spent not on the calculations, but on interprocess communication. Running 302 processes on a Raspberry Pi and keeping hundreds of UDP sockets open probably takes a lot of its resources.
The technical solution is neither innovative nor fast. The benefits are in its distributed nature (every neuron could be simulated on a different computer) and in its simplicity of implementation, at least while 100% faithfulness to the underlying mathematical model is not required. I have no idea how the author plans to avoid unintended data loss in the not-unusual case when some UDP packets are dropped. Retransmission (TCP) is not really an option either, as the system has to run in real time.
http://lesswrong.com/r/discussion/lw/lah/link_simulating_c_elegans/
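For illustration, a rough sketch of the kind of per-neuron UDP messaging I have in mind (the ports and message format are invented here, not the author's actual protocol) - it shows both why the communication part is cheap and why a dropped datagram silently loses a spike:

```python
import json
import socket

# Hypothetical sketch of one "neuron" process notifying its downstream
# neighbours over UDP.

DOWNSTREAM = [("127.0.0.1", 9001), ("127.0.0.1", 9002)]  # peers of this neuron

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def fire(neuron_id, weight):
    msg = json.dumps({"from": neuron_id, "weight": weight}).encode()
    for addr in DOWNSTREAM:
        # sendto() returns as soon as the datagram is handed to the OS;
        # if the packet is later dropped, nobody ever finds out - that's
        # the data-loss problem mentioned above.
        sock.sendto(msg, addr)

fire("AVAL", 0.7)
```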
It would be interesting to see more examples of modern-day, non-superintelligent, domain-specific analogues of genies, sovereigns and oracles, and to look at their risks and failure modes. Admittedly, this is only inductive evidence that does not take into account the qualitative leap between them and superintelligence, but it may be better than nothing. Here are some quick ideas (do you agree with the classification?):
Oracles - pocket calculators (Bostrom's example); Google search engine; decision support systems.
Genies - industrial robots; GPS driving assistants.
Sovereigns - automated trading systems; self-driving cars.
The failures of automated trading systems are well known and have cost hundreds of millions of dollars. On the other hand, the failures of human bankers who used ill-suited mathematical models for financial risk estimation are also well known (the recent global crisis), and may have cost hundreds of billions of dollars.
It's OK, as long as the talking is done in a sufficiently rigorous manner. By analogy, a lot of discoveries in theoretical physics were made long before they could be experimentally supported. Theoretical CS also has a good track record here; for example, the first notable quantum algorithms were discovered long before the first notable quantum computers were built. Furthermore, the theory of computability mostly talks about the uncomputable (computations that cannot be realized and devices that cannot be built in this universe), so it has next to no practical applications. It just so happened that many of the ideas and methods developed for computability theory also turned out to be useful for its younger sister - the theory of algorithmic complexity, which has enormous practical importance.
In short, I feel that the quality of an academic inquiry is more dependent on its methods and results than on its topic.
To be honest, I initially had trouble understanding your use of "oversight" and had to look up the word in a dictionary. Talking about the different levels of executive power given to AI agents would make more sense to me.
I agree. For example, this page says that "in order to provide a convincing case for epigenetic inheritance, an epigenetic change must be observed in the 4th generation."
So I wonder why they only tested three generations. Since F1 females are already born with the reproductive cells from which the F2 will grow, an F0 organism exposes both of these future generations to itself and its environment. That some information exchange takes place there is not that surprising, but the effect may be completely lost in the F3 generation.
I've always thought of the MU hypothesis as a derivative of Plato's theory of forms, expressed in a modern way.
This is actually one of the standard counterarguments against the need for friendly AI, at least against the notion that it should be an agent / be capable of acting as an agent.
I'll try to quickly summarize the counter-counterarguments Nick Bostrom gives in Superintelligence. (In the book, AI that is not an agent at all is called tool AI. AI that is an agent but cannot act as one (has no executive power in the real world) is called oracle AI.)
Some arguments have already been mentioned:
- Tool AI or friendly AI without executive power cannot stop the world from building UFAI. Its ability to prevent this and other existential risks is greatly diminished. It especially cannot guard us against the "unknown unknowns" (an oracle is not going to give answers to questions we are not asking).
- The decisions of an oracle or tool AI might look good, but actually be bad for us in ways we cannot recognize.
There is also the possibility of what Bostrom calls mind crime. If a tool or oracle AI is not inherently friendly, it might simulate sentient minds in order to answer the questions we ask, and then kill or possibly even torture these minds. The probability that these simulations have moral rights is low, but there can be trillions of them, so even a low probability cannot be ignored.
Finally, it might be that the best strategy for a tool AI to produce answers is to internally develop an agent-type AI that is capable of self-improvement. If the default outcome of creating a self-improving AI is doom, then the tool AI scenario might in fact be less safe.
Making your mental contents look innocuous while maintaining their semantic content sounds potentially very hard
Even humans are capable of producing content (e.g. program code) where the real meaning is obfuscated. For some entertainment, look at this Python script on Stack Exchange Programming Puzzles and try to guess what it really does. (The answer is here.)
I couldn't even have a slice of pizza or an ice cream cone
Slippery slope, plain and simple. http://xkcd.com/1332/
Reducing mean consumption does not imply never eating ice cream.
You should have started by describing your interpretation of what the word "flourish" means. I don't think it's a standard one (any links to prove the opposite?). For now this thread is going nowhere because of disagreements on definitions.
Two objections to these calculations. First, they do not take into account the inherent inefficiency of meat production (farm animals convert only a few percent of the energy in their food into consumable products), or its contribution to global carbon emissions and pollution. Second, they do not take into account the animals displaced and harmed by the indirect effects of meat production: it requires larger areas of farmland than vegetarian or seafood-based diets would.
chickens flourish
Not many vegetarians would agree. Is a farm chicken's life worth living? Does the large number of farm chickens really have a net positive effect on animal wellbeing?
Animals that aren't useful
What about the recreational value of wild animals?
The practically relevant philosophical question is not "can science understand consciousness?", but "what can we infer from observing the correlates of consciousness, or from observing their absence?". This is the question that, for example, anesthesiologists have to deal with on a daily basis.
When formulated this way, the problem is really not that different from other scientific problems where causality must be detected. Detecting causal relations is famously hard - but it's not impossible. (We're reasonably certain, for example, that smoking causes cancer.)
In light of this, maybe the Bradford Hill criteria can be applied. For example, if we're presented with the problem of a non-conscious AI agent that wants to convince us it is conscious, then it's likely we can reject its claims by applying the consistency criterion. We could secretly create other instances of the same AI agent, put them in modified environments (e.g. an environment where the motivation to lie about being conscious is removed), and then observe whether the claims of these instances are consistent.
Similarly, if the internal structure of the agent is too simple to allow consciousness (e.g. the agent is a Chinese room with table-lookup based "intelligence", or a bipartite graph with a high Phi value), we can reject the claim on the plausibility criterion. (Note that the mechanism for intelligence is not a priori required to be the biological one, or its emulation. For an analogy, we don't reject people's claims of having qualia just because we know that they don't have the ordinary biological mechanisms for them. Persons who claim to experience phantom pain in their amputated limbs are as likely to be taken seriously by medical professionals as persons who experience "traditional", corporeal pain.)
Your article feels too abstract to really engage the reader. I would start with a surprise element (ok, you do this to some extent); have at least one practical anecdote; include concrete and practical conclusions (what life lessons follow from what the reader has learned?).
Worse, I feel that your article might in fact lead to several misconceptions about dual process theory. (At least some of the stuff does not match with my own beliefs. Should I update?)
First, you make a link between System 1 and emotions. But System 1 is still a cognitive system. It's heavily informed by emotions, but does not correspond to emotions. Also, there has never been doubt that human thinking tends to be seriously emotionally biased. The surprising contribution of Kahneman was exactly to show that many (most?) judgment errors come from cognitive biases.
Also, your article makes it sound as though System 1 is the one responsible for producing bias ("the autopilot system is prone to make errors"), while in fact System 2 is equally susceptible to biased thinking. (Biases are just heuristics, after all - there is nothing inherently irrational about them.)
Second, Kahneman himself stresses that the dual system theory is merely a useful fiction. So I would be wary of including the neuroscientific stuff; those conclusions are an order of magnitude less solid than the dual system theory itself. Anyway, why is the localization of the systems in the brain, or the evolutionary recentness of the prefrontal cortex, even relevant to this article? Don't try to prop up the credibility of your account by including random scientific facts. You'll lose in the long term.
(Caveat - I'm not a LW regular, so most of my knowledge about dual process theory comes directly from Kahneman's book.)
Example of a mathematical fact: the formula for calculating the correlation coefficient. Example of a statistical intuition: knowing when to conclude that a close-to-zero correlation implies independence. (To see the problem, look at this picture for some datasets in which the variables are uncorrelated but not independent.)
Be careful here. Statistical intuition does not come naturally to humans - Kahneman and others have written extensively about this. Learning some mathematical facts (relatively simple to do) without learning the correct statistical intuitions (hard to do) may well have negative utility. Unjustified self-confidence is an obvious outcome.
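To make the uncorrelated-but-not-independent point above concrete (made-up data):

```python
import numpy as np

# Uncorrelated does not mean independent: y is a deterministic function of x,
# yet their (linear) correlation coefficient is essentially zero.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)
y = x ** 2                      # fully determined by x

r = np.corrcoef(x, y)[0, 1]
print(f"correlation coefficient: {r:.4f}")  # ~0.00, yet knowing x tells you y exactly
```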