Logic vs intuition <=> algorithm vs ML

post by pchvykov · 2025-01-04T09:06:51.822Z · LW · GW · 0 comments

Contents

  Coding vs training
  Living with complex systems
  Trade-offs 
  Failure Modes 

(cross-posted from my blog: https://www.pchvykov.com/blog)

I’ve often heard of the dichotomy of making decisions with “mind” vs “heart” – which I understood to be roughly equivalent to conscious vs subconscious, or system 2 (thinking slow) vs system 1 (thinking fast). This also seems similar to the difference between the left and right brain hemispheres, which some understand as the difference between focused, systematic attention and diffuse, big-picture attention – attending to the figure vs to the background / context (or “spotlight consciousness” vs “floodlight consciousness”). While this dichotomy is quite abstract and complex, to fix my terms for the rest of this essay I will (imprecisely) refer to it as logic vs intuition. These two ways of attending to and living in the world seem crucially complementary and mutually necessary for balanced and wise action. In a recent (really nice) podcast about the Metacrisis, the hosts suggest that modern Western society’s overreliance on the logical approach to understanding and controlling the world may be the underlying cause of the various global crises we now face.

While I somehow find these narratives very compelling, I have always had trouble with the “intuition” part feeling somehow woo-woo, esoteric, and very imprecise. I found it hard to argue for the virtues of the intuitive approach over cold rationality – to others, and even to myself – without appealing to some vague personal experience or simply to faith. Here I want to develop a hypothesis that might clarify much of this: that logic is to intuition as a hard-coded algorithm is to machine learning (ML). In a hard-coded algorithm we explicitly specify the flow of logic and the data manipulation steps, and thus explicitly control how the output is generated. In contrast, ML systems are self-taught: they tune millions of internal parameters so as to mimic the behavior exemplified by a large set of training examples. The former is therefore precise, reproducible, and readily explainable, and thus can be fine-tuned, debugged, and developed – while the latter is approximate, stochastic, and black-box, with unpredictable failure modes, and thus hard to improve in any way short of re-training a whole new system. And yet ML can solve problems we couldn’t dream of solving with pure hard logic – by accounting for thousands of features at once, it effectively has higher integration and processing power.

Coding vs training

Consider the classic example of classifying cat images. Hard-coded attempts were made for decades, trying to extract specific tell-tale features like ears and whiskers – but such features may not be visible from the angle the picture is taken, or a set of ears and whiskers may be spuriously detected on another animal or even a plant. Ingenious attempts were made, such as using group theory to mod out 3D rotations and translations of the object – and hypotheses were developed about how our brain might have similar built-in capabilities. But using logic alone, it was hard to imagine how our brain solves this problem so quickly and precisely.
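To make this concrete, here is a caricature of what the hard-coded approach looks like in code – a minimal Python sketch in which every feature, threshold, and helper function is an invented illustration, not a real detector:

```python
# A caricature of the hard-coded approach: decide "cat" by checking a few
# hand-picked features. All thresholds and heuristics below are made up
# purely for illustration.
import numpy as np

def count_whisker_like_rows(gray: np.ndarray) -> int:
    """Count rows with many thin, high-contrast horizontal edges
    (a crude stand-in for 'whiskers')."""
    horiz_edges = np.abs(np.diff(gray.astype(float), axis=1)) > 40
    edges_per_row = horiz_edges.sum(axis=1)
    return int((edges_per_row > gray.shape[1] * 0.3).sum())

def has_ear_like_corners(gray: np.ndarray) -> bool:
    """Look for strong intensity corners in the top third of the image
    (a crude stand-in for 'pointed ears')."""
    top = gray[: gray.shape[0] // 3].astype(float)
    corner_energy = np.abs(np.diff(top, axis=0))[:, :-1] * np.abs(np.diff(top, axis=1))[:-1, :]
    return bool(corner_energy.size) and corner_energy.max() > 1000

def is_cat(gray: np.ndarray) -> bool:
    # The rules are explicit and debuggable -- and brittle: a cat seen from
    # behind, or a fox facing the camera, breaks them immediately.
    return has_ear_like_corners(gray) and count_whisker_like_rows(gray) >= 2
```

The rules are explicit and easy to inspect, which is exactly their appeal – and exactly why they fail on the first cat photographed from an unexpected angle.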

And then ML came along, and especially Neural Networks (NNs) – and solved the task at near-human level basically “straight out of the box,” without implementing fancy techniques like group theory. And just as a human would be hard-pressed to explain how they identified a cat as a cat, so these NNs were black-box jumbles of parameters implementing some impossibly complex nonlinear function from pixel values to classification label. While they did not explicitly encode the symmetry group of 3D space, they could nonetheless recognize cats at all angles and sizes. This meant that such symmetry properties were indeed learned and encoded somewhere deep among the NN’s parameters – just as humans can understand and appreciate spatial reasoning without explicitly resorting to it every time they need to identify a cat.
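For contrast, here is a minimal sketch of the learned alternative, assuming PyTorch is available; the toy network, the random stand-in data, and the layer sizes are illustrative choices, not a description of any real system:

```python
# A minimal learned classifier: nobody tells the network what an ear or a
# whisker is; it tunes its parameters to mimic the labels in the training set.
import torch
import torch.nn as nn

class TinyCatNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCatNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in training data: random tensors in place of a real labeled dataset.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for _ in range(10):                 # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                 # gradients quietly adjust thousands of opaque parameters
    optimizer.step()
```

Nothing in this code names a single cat feature – whatever notion of “ears” or “whiskers” the model ends up with is distributed across its parameters, with no line of code to point at.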

And so, since humans mostly learn by being exposed to and mimicking many training scenarios and examples, subconsciously integrating tons of data in inexplicable ways, it would not be surprising if much of how we get by in the world is more akin to ML than to formal logic. This way of being seems well aligned with our colloquial notion of “intuition” – where we have some sense of what a thing is or how something must be done, but cannot really explain why. And just as it is now becoming evident that no practically useful AI is possible without ML techniques, so we can posit that no prosperous individual, or society for that matter, can function without relying on intuition.
 

Living with complex systems

Now, while most people will probably concede that logic and intuition are both necessary for proper functioning, the important and interesting question is how to identify their appropriate domains, so that we may be astute in using the right tool for the right job. This question seems closely aligned with the question of “what is a complex system?” Indeed, our mainstream science, which operates entirely in the domain of “hard-coded” logic, deals miraculously well with the prediction and control of simple systems – but tends to break down quickly outside of controlled lab settings. Complex systems science attempts to remedy this and to formalize the failure modes of simple-systems thinking, which has been predominant in the West since Newton. It is precisely in this complex regime that ML, or intuitive reasoning, may be preferable.

Think of complex real-world concepts like “justice.” Much like classifying cat images, it is near-impossible to encode justice in a set of premeditated rules that would apply across the universe of possible scenarios (the perennial stumbling block of deontological ethics). Yet “you know it when you see it” – our intuition, especially when honed by practice and experience in this area, seems to be a powerful guide to what is “just.” This is why we employ human judges (trained on the vast database of past legal cases) and cannot rely solely on the written law. However, pure human intuition without written law is also problematic and runs into irresolvable disagreements about what is “just,” especially when vested interests are involved. So we end up with a system built around an “ML decision engine” (the judge) that is aided by complex hard-coded features (the written laws) as inputs. This kind of hybrid solution seems to be pretty common wherever humans deal with complex systems.
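As a rough sketch of this hybrid pattern (not of any real legal system), imagine hand-coded features feeding a learned decision function; every feature name, case, and label below is an invented placeholder, and scikit-learn is assumed for the learned part:

```python
# Hybrid pattern: hard-coded features (the "written law") feed a learned
# decision function (the "judge"). All names and data are toy placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode_case(case: dict) -> np.ndarray:
    """Explicit, hand-coded features -- analogous to statutes."""
    return np.array([
        case["damages_usd"] / 1e6,
        float(case["prior_offenses"]),
        float(case["contract_in_writing"]),
    ])

# The final decision function is learned from past outcomes -- analogous to a
# judge's trained intuition -- rather than written out as explicit rules.
past_cases = [
    {"damages_usd": 50_000, "prior_offenses": 0, "contract_in_writing": 1},
    {"damages_usd": 900_000, "prior_offenses": 3, "contract_in_writing": 0},
]
past_rulings = [0, 1]  # 0 = for defendant, 1 = for plaintiff (toy labels)

X = np.stack([encode_case(c) for c in past_cases])
judge = LogisticRegression().fit(X, past_rulings)

new_case = {"damages_usd": 120_000, "prior_offenses": 1, "contract_in_writing": 1}
print(judge.predict(encode_case(new_case).reshape(1, -1)))  # learned verdict from hand-coded inputs
```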

But the question remains: why not just use pure intuition in such complex scenarios? After all, we don’t need hard-coded logic to walk through complex terrain or to recognize a vast variety of objects. Could the outcome actually be more attuned and nuanced, albeit less explainable, if we relied entirely on our intuition? Rather than coming up with better algorithms for “feature engineering,” perhaps more practice of mindfulness and of wise action would fine-tune our intuition better than any reliance on logic? This approach was to some extent practiced in some indigenous cultures, where learning was done by observation and mimicry, and explaining things or asking questions was seen as counter-productive. It is still reflected in the common method of teaching wisdom in spiritual traditions today – where it is often held that “he who speaks does not know, and he who knows does not speak” [Tao Te Ching].

Trade-offs 

In our ML analogy, this seems to equate to the trade-off between investing more in better feature engineering vs in better training data and more training time. While feature engineering can improve performance, it tends to reduce generalizability, making the system more task-specific. For example, image classification could not have surpassed human performance with more training alone – the right NN architecture, such as convolutional layers, was crucial. However, for more general intelligence like Large Language Models (LLMs), it turned out that the flexible, feature-agnostic transformer architecture worked best, with the key breakthrough coming from scaling up training data and model size. Similarly for humans: books and specific instructions work great for learning a specific skill, like math or ping-pong – but for developing general life wisdom, relying on logic and rules seems inherently limited.
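A small sketch can make this trade-off tangible: a convolutional layer bakes translation structure into the architecture (feature engineering built into the design), while a fully connected layer is feature-agnostic and must learn any such structure from data. The layer sizes below are illustrative only, and PyTorch is assumed:

```python
# Two ways to map a 3x32x32 image to 16 feature maps.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # shared local filters: translation structure built in
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)       # no built-in structure: every pixel pair gets its own weight

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

print(n_params(conv))    # 448 parameters: a strong prior, cheap, but image-specific
print(n_params(dense))   # ~50 million parameters: fully general, but needs far more data

x = torch.randn(1, 3, 32, 32)
shifted = torch.roll(x, shifts=2, dims=-1)
out_a, out_b = conv(x), conv(shifted)
# The conv output shifts along with the input (equivariance for free, up to
# edge effects from padding); the dense layer offers no such guarantee and
# would have to learn that structure from examples.
```

The engineered architecture buys efficiency at the cost of generality; the flexible one buys generality at the cost of data and compute – the same trade-off the paragraph above describes for convolutions vs transformers.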

One particularly important point where intuition alone seems to fail is scaling. Just as LLMs are not great at math, so human intuition does not work very precisely when trying to make decisions across scales of organization (like running a nation’s economy). This is where hard logic and quantitative tools like money seem irreplaceable. Given the limited expressive power of such hard-coded tools, we must then be very careful how we use them – e.g., money tries to encode all of the complexity of human values in a single number, and thus necessarily leads to loss of accuracy and oversimplification. If we apply such tools too broadly, we end up with scenarios we never intended – like child slavery being an economically profitable enterprise, maximizing personal wealth becoming an overarching life purpose, or a price tag being put on human life. This is why, in the justice system, we have some hard-coded feature engineering, but the final decision remains with intuitive agents.
 

Failure Modes 

To use logic and intuition appropriately, we must thus clearly understand their respective failure modes. The core pitfall of over-committing to logic, rules, or algorithms is missing the complexity of the world and starting to model it as a “simple system.” This tends to lead to the misplaced belief that there are simple chains of cause and effect governing complex phenomena like our emotions, relationships, economies, and values. Being complex systems, these aspects of the world tend to look more like a “causal salad,” which it is often unproductive to try to disentangle. Nonetheless, due to “physics envy,” we try hard to come up with simple models of these systems, and then to coerce reality to agree with our models. This leads us to, e.g., organize our firms around simple tree-structure hierarchies, divide our beliefs along clear generic lines (e.g., left-wing vs right-wing), quantify all human values with a single number (money), and reduce our ethics to sets of rules. It’s as if we built a hard-coded cat-image classifier that didn’t recognize Sphynx cats due to their lack of fur: if all vets and registration systems used it, it would become sufficiently inconvenient to own a Sphynx cat that these would basically disappear – making our world adapt to our simple model of it. In general, such “simple systems design” gives us a better handle on prediction and control, but inhibits natural growth and evolution, and sacrifices resilience, adaptability, and often fun and joy.

In contrast, the core failure mode of intuition is its lack of explainability, and therefore the difficulty of collaboration, debugging, and directed improvement. This is, of course, the challenge of Explainable AI (XAI), but also, interestingly, of all self-inquiry and mindfulness practices. Our intuitive behavior works well most of the time, but when it fails, we typically have no idea why or what we should change. This is where a plethora of practices, from talk therapy to debate to mindfulness to Nonviolent Communication, invite us to slow down and try to identify the specific “neural cascades” that led us to our predicament (I dive into this in another post). When our decision system is not explicitly rule-based, we must carry out a sort of “internal detective work” based on indirect clues to find approximate explanations for our own behavior. This also illustrates why XAI is so challenging. Such explanations, we hope, can then allow us to adjust our behavior, or to explain it to a collaborator or partner and improve our collaboration. We must, however, remember that such post-hoc explanations of intuitive behavior are approximate at best and should not be mistaken for actual logic-driven action.

This discussion also clearly relates to AI alignment. My personal sense is that the challenge of aligning with AI is deeply related to the challenge of aligning with ourselves – with our personal inner conflicts, as well as our outer conflicts and wars. For me, understanding more clearly the roles my logic and intuition play in my decision-making can help resolve some inner conflicts – and perhaps it can do the same for the AIs we build. In both cases, I believe this can allow us to create a more balanced and wise life – whether our own, or that of AI – with awareness of the strengths and weaknesses of the tools available to us, leveraging both logic and intuition to their full potential, without demonising either.
