A short conceptual explainer of Immanuel Kant's Critique of Pure Reason

post by jessicata (jessica.liu.taylor) · 2022-06-03T01:06:32.394Z · LW · GW · 12 comments

This is a link post for https://unstableontology.com/2022/06/03/a-short-conceptual-explainer-of-immanuel-kants-critique-of-pure-reason/

Contents

  Introduction
  Metaphysics
  The Copernican revolution in philosophy
  Phenomena and noumena
  A priori and a posteriori
  Analytic and synthetic
  The synthetic a priori
  Transcendental
  The transcendental aesthetic
  Space
  Time
  Relational knowledge
  Sensibility and understanding
  Manifold of intuition
  Synthesis
  Transcendental Categories
  Schema
  Consciousness of the self
  Dialectic
  The Antinomies of Pure Reason
  Ens Realissimum
  Conclusion

Introduction

While writing another document, I noticed I kept referring to Kantian concepts. Since most people haven't read Kant, that would lead to interpretation problems by default. I'm not satisfied with any summary out there for the purpose of explaining Kantian concepts as I understand them. This isn't a summary of the work as a whole, since I'm focusing on the parts that I actually understood and continue to find useful.

I will refer to computer science and statistical concepts, such as Bayesianism, Solomonoff induction, and AI algorithms. Different explainers are, of course, appropriate to different audiences.

Last year I had planned on writing a longer explainer (perhaps chapter-by-chapter); however, that became exhausting due to the length of the text. So I'll instead focus on what has stuck after a year, the concepts I keep wanting to refer to. This is mostly concepts from the first third of the work.

This document is structured similarly to a glossary, explaining concepts and how they fit together.

Kant himself notes that the Critique of Pure Reason is written in a dry and scholastic style, with few concrete examples, and therefore "could never be made suitable for popular use". Perhaps this explainer will help.

Metaphysics

We are compelled to reason about questions we cannot answer, like whether the universe is finite or infinite, or whether god(s) exist. There is an "arena of endless contests" between different unprovable assumptions, called Metaphysics.

Metaphysics, once the "queen of all the sciences", has become unfashionable due to lack of substantial progress.

Metaphysics may be categorized as dogmatic (asserting metaphysical doctrines without first examining the limits of reason), skeptical (despairing of metaphysical knowledge altogether), or critical (Kant's approach: first investigating what reason can and cannot know).

Kant is trying to be comprehensive, so that "there cannot be a single metaphysical problem that has not been solved here, or at least to the solution of which the key has not been provided."  A bold claim.  But this project doesn't require extending knowledge past the limits of possible experience, just taking an "inventory of all we possess through pure reason, ordered systematically".

The Copernican revolution in philosophy

Kant compares himself to Copernicus; the Critique of Pure Reason is commonly referred to as a Copernican revolution in philosophy.  Instead of conforming our intuition to objects, we note that objects as we experience them must conform to our intuition (e.g. objects appear in the intuition of space).  This is sort of a reverse Copernican revolution; Copernicus zooms out even further from "the world (Earth)" to "the sun", while Kant zooms in from "the world" to "our perspective(s)".

Phenomena and noumena

Phenomena are things as they appear to us, noumena are things as they are in themselves (or "things in themselves"); rational cognition can only know things about phenomena, not noumena.  "Noumenon" is essentially a limiting negative concept, constituting any remaining reality other than what could potentially appear to us.

Kant writes: "this conception [of the noumenon] is necessary to restrain sensuous intuition within the bounds of phenomena, and thus to limit the objective validity of sensuous cognition; for things in themselves, which lie beyond its province, are called noumena for the very purpose of indicating that this cognition does not extend its application to all that the understanding thinks. But, after all, the possibility of such noumena is quite incomprehensible, and beyond the sphere of phenomena, all is for us a mere void... The conception of a noumenon is therefore merely a limitative conception and therefore only of negative use. But it is not an arbitrary or fictitious notion, but is connected with the limitation of sensibility, without, however, being capable of presenting us with any positive datum beyond this sphere."

It is a "problematical" concept; "the class of noumena have no determinate object corresponding to them, and cannot therefore possess objective validity"; it is more like a directional arrow in the space of ontology than like any particular thing within any ontology. Science progresses in part by repeatedly pulling the rug on the old ontology, "revealing" a more foundational layer underneath (a Kuhnian "paradigm shift"), which may be called more "noumenal" than the previous layer, but which is actually still phenomenal, in that it is cognizable through the scientific theory and corresponds to observations; "noumena", after the paradigm shift, is a placeholder concept that any future paradigm shifts can fill in with their new "foundational" layer.

Use of the word "noumenon" signals a kind of humility, of disbelieving that we have access to "the real truth", while being skeptical that anyone else does either.

In Bayesianism, roughly, the "noumenon" is specified by the hypothesis, while the "phenomena" are the observations.  Assume for now the Bayesian observation is a deterministic function of the hypothesis; then, multiple noumena may correspond to a single phenomenon.  Bayesianism allows for gaining information about the noumenon from the phenomenon.  However, all we learn is that the noumenon is some hypothesis which corresponds to the phenomenon; in the posterior distribution, the hypotheses compatible with the observations maintain the same probabilities relative to each other that they did in the prior distribution.

(In cases where the observation is not a deterministic function of the hypothesis, as in the standard Bayes' Rule, consider replacing "hypothesis" above with the "(hypothesis, observation)" ordered pair.)
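
As a minimal sketch of this point (the hypothesis names, observations, and probabilities below are invented for illustration): conditioning on a phenomenon eliminates incompatible hypotheses but preserves the relative probabilities of the compatible ones.

```python
# Toy illustration (invented hypotheses/probabilities): hypotheses play the role of
# "noumena", observations the role of "phenomena". Each hypothesis deterministically
# produces one observation, so several hypotheses can map to the same observation.

prior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}                        # a priori P(H)
observation_of = {"h1": "obs_A", "h2": "obs_A", "h3": "obs_B"}   # deterministic phenomenon

def posterior(observed):
    """P(H | O = observed): drop incompatible hypotheses, renormalize the rest."""
    unnorm = {h: p for h, p in prior.items() if observation_of[h] == observed}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(posterior("obs_A"))  # approximately {'h1': 0.625, 'h2': 0.375}
# h1 and h2 keep the same 5:3 ratio they had in the prior: we learn only that the
# noumenon is *some* hypothesis compatible with the phenomenon.
```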

In Solomonoff Induction, there is only a limited amount we can learn about the "noumenon" (stochastic Turing machine generating our observations + its bits of stochasticity), since there exist equivalent Turing machines.

A priori and a posteriori

A priori refers to the epistemic state possessed before taking in observations. In Bayesianism this is the P(X) operator unconditioned on any observations.

A posteriori refers to the epistemic state possessed after taking in observations. In Bayesianism this is P(X | O) where O refers to past observation(s) made by an agent, which may be numbered to indicate time steps, as in a POMDP.

Kant and Hume agree that we can't learn universal laws from experience (Hume's problem of induction). Hume concludes that this means we don't know about universal laws; Kant instead argues that our knowledge of universal laws must involve a priori judgments, e.g. geometric or arithmetic judgments. (One man's modus ponens is another's modus tollens...)

Analytic and synthetic

Analytic propositions are ones that can be verified as true by expanding out definitions and doing basic formal operations. A common example is "All bachelors are unmarried", which can be verified by replacing "bachelor" with "unmarried man".

Synthetic propositions can't be verified as true this way, e.g. "All bachelors are alone". They can be true regardless. Causal judgments are synthetic; we can't get the Principle of Sufficient Reason analytically.

Contemporary STEM people are likely to think something like this at this point: "Doesn't that just mean analytic propositions are mathematical, and synthetic propositions are empirical/scientific?"

An immediate problem with this account: Kant doesn't think geometric propositions are analytic.  Consider the proposition "A square is equal to itself when turned 90 degrees on its center".  It's not apparent how to verify the proposition as true by properly defining "square" and so on, and doing basic logical/textual transformations.  Instead we verify it by relating it to possible experience, imagining a rotating square in the visual field.

From a geometrical proof that a square is equal to itself when turned 90 degrees on its center, a prediction about possible experience can be derived, namely, that turning a square piece of wood 90 degrees by its center results in a wood square having the same shape and location in the visual field as it did previously.  Mathematics needs to correspond to possible experience to have application to the perceptible outside world.
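
As a small concrete check (a toy calculation of mine, not Kant's), the prediction can be written in coordinates: rotating the corners of an origin-centered square by 90 degrees permutes them among themselves, so the predicted appearance is unchanged.

```python
# Toy check: rotating a square 90 degrees about its center maps its corner set onto itself.

corners = {(1, 1), (1, -1), (-1, 1), (-1, -1)}   # a square centered at the origin

def rotate_90(point):
    """Rotate (x, y) by 90 degrees counterclockwise about the origin."""
    x, y = point
    return (-y, x)

assert {rotate_90(p) for p in corners} == corners  # same corners: same shape, same location
```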

Kant doesn't even think arithmetic propositions are analytic. To get 2+2=4 from "analytic" operations, we could try defining 2=1+1 and 4=1+1+1+1, then observing 2+2=(1+1)+(1+1)=1+1+1+1=4; however, this requires using the associativity of addition. Perhaps there is an alternative way to prove this "analytically", but neither Kant nor I know of one. Instead we can verify addition by, synthetically, corresponding numbers to our fingers, which "automatically" get commutative/associative properties.
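
Spelling out the step where the extra structure sneaks in (my parenthesization, for illustration):

```latex
\begin{align*}
2 + 2 &= (1+1) + (1+1)   && \text{by the definition } 2 := 1+1 \\
      &= ((1+1) + 1) + 1 && \text{associativity: } a+(b+c) = (a+b)+c \\
      &= 1+1+1+1         && \text{dropping now-redundant parentheses} \\
      &= 4               && \text{by the definition } 4 := 1+1+1+1
\end{align*}
```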

The synthetic a priori

Besides the issue that math relates to possible experience, another problem with "analytic = mathematical" is that, as Kant argues, some propositions are both synthetic and a priori, and "the real problem of pure reason" is how we can know such propositions.

Here's an argument for this. Suppose we first observe O and then conclude that P is true. If we're reasoning validly, P is true a posteriori (relative to O). But this whole thought experiment pre-supposes that there is a time-structure in which we first see O and then we make a judgment about P. This time-structure is to some degree present even before seeing O, such that O can be evidence about P.

Imagine trying to argue that it's raining outside to a person who doesn't believe in time (including their own memory). Since you're both inside and there are no windows, they can't see that it's raining. You try to argue that you were previously both outside and saw that it was raining. But they think there's no such thing as "the past" so this is not convincing.

To make the argument to them successfully, their mind has to already implement certain dynamics even before receiving observations.

A baby is born with cognitive machinery already in place; it can't "learn" all of that machinery from data, since the process of learning itself requires this machinery to be present in order to structure observations and relate them to future ones. (In some sense there was no cognition prior to abiogenesis, though; there is a difference between the time ordering of science and of cognitive development.)

In Bayesianism, to learn P from O, it must be the case a priori that P is correlated with O. This correlational structure could be expressed as a Bayesian network. This network would encode an a priori assumption about how P and O are correlated.
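
A tiny illustration (numbers made up): with an a priori independent joint distribution over P and O, conditioning on O leaves the probability of P untouched, so nothing is learned; an a priori correlated joint is what makes learning possible.

```python
# Toy illustration (made-up numbers): learning P from O requires a priori correlation.

def p_given_o(joint):
    """P(P=true | O=true) from a joint distribution {(p, o): probability}."""
    p_o = sum(pr for (p, o), pr in joint.items() if o)
    return sum(pr for (p, o), pr in joint.items() if p and o) / p_o

correlated  = {(True, True): 0.4, (True, False): 0.1, (False, True): 0.1, (False, False): 0.4}
independent = {(True, True): 0.25, (True, False): 0.25, (False, True): 0.25, (False, False): 0.25}

print(p_given_o(correlated))   # 0.8 -- observing O shifts belief in P (prior was 0.5)
print(p_given_o(independent))  # 0.5 -- observing O teaches nothing about P
```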

Solomonoff induction doesn't encode a fixed network structure between its observations; instead it uses a mixture model over all stochastic Turing machines. However, all these machines have something in common: they're all stochastic Turing machines producing an output stream of bits. Solomonoff induction assumes a priori that "my observations are generated by a stochastic Turing machine"; it doesn't learn this from data.
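
Here is a crude finite stand-in for that mixture (my own toy: a handful of hand-written bit-generators in place of all stochastic Turing machines, weighted by description length). What every hypothesis shares a priori is its form: a program emitting a stream of bits.

```python
# Crude, finite stand-in for Solomonoff induction (illustration only): a mixture over
# hand-written bit-generating "programs", weighted by 2^(-description length).

programs = {
    "all_zeros": (1, lambda i: 0),       # (description length, i-th output bit)
    "all_ones":  (1, lambda i: 1),
    "alternate": (2, lambda i: i % 2),
}

def predict_next(bits_seen):
    """Mixture prediction of P(next bit = 1) given the observed prefix."""
    weights = {}
    for name, (length, gen) in programs.items():
        consistent = all(gen(i) == b for i, b in enumerate(bits_seen))
        weights[name] = 2.0 ** -length if consistent else 0.0
    total = sum(weights.values())
    n = len(bits_seen)
    return sum(w * programs[name][1](n) for name, w in weights.items()) / total

print(predict_next([0, 1, 0]))  # 1.0 -- only "alternate" survives, and it predicts 1
```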

One could try pointing to problems with this argument, e.g. perhaps "there is time" isn't a valid proposition, and time is a non-propositional structure in which propositions exist. But now that I just wrote that, it seems like I'm asserting a proposition about time to be true, in a contradictory fashion. The English language is more reflective than the language of Bayesian networks, allowing statements about the structure of propositions to themselves be propositions, as if the fact of the Bayesian network being arranged a particular way were itself represented by an assignment of a truth value to a node in that same Bayesian network.

Transcendental

Philosophers today call Kant's philosophy "transcendental idealism". Kant himself uses the word "transcendental" to refer to cognitions about how cognitions are possible a priori.

This is in part an archaeological process. We see, right now, that we live in a phenomenal world that is approximately Euclidean. Was our phenomenal world always Euclidean, or was it non-Euclidean at some point and then switched over to Euclidean, or is time not enough of a real thing for this to cover all the cases? This sort of speculation about what the a priori empty mind is, from our a posteriori sense of the world, is transcendental.

One angle on the transcendental is, what else has to be true for the immediate (immanent) experience you are having right now to be what it is? If you are seeing a chair, that implies that chairs exist (at least as phenomena); if you see the same chair twice, that means that phenomenal objects can re-occur at different times; and so on.

The transcendental aesthetic

Aesthetic means sense. The transcendental aesthetic therefore refers to the a priori cognitive structures necessary for us to have sensation.

Mainly (Kant argues) these are space and time. I often call these "subjective space", "subjective time", "subjective spacetime", to emphasize that they are phenomenal and agent-centered.

Space

Most of our observations appear in space, e.g. visual input, emotional "felt senses" having a location in the body. To some extent we "learn" how to see the world spatially, however some spatial structures are hardwired (e.g. the visual cortex). Image processing AIs operate on spatial images stored as multidimensional arrays; arbitrarily rearranging the array would make some algorithms (such as convolutional neural networks) operate worse, indicating that the pre-formatting of data into a spatial array before it is fed into the algorithm is functional.
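
As a rough numerical illustration (not a full CNN experiment): a random pixel permutation preserves every value in the array but destroys the spatial locality that convolution-style models rely on.

```python
import numpy as np

# Rough illustration: shuffling pixels keeps the same values but destroys the local
# structure (nearby pixels being similar) that convolutional models exploit.

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 32)
image = np.outer(x, x)                       # a smooth gradient: neighbors are similar

def local_roughness(img):
    """Mean absolute difference between horizontally adjacent pixels."""
    return np.abs(np.diff(img, axis=1)).mean()

shuffled = rng.permutation(image.ravel()).reshape(image.shape)

print(local_roughness(image))     # small: the spatial arrangement carries structure
print(local_roughness(shuffled))  # much larger: same values, structure destroyed
```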

If space weren't a priori then we couldn't become fully confident of geometrical laws such as "a square turned 90 degrees about its center is the same shape", we'd have to learn these laws from experience, running into Hume's problem of induction.

There is only one space, since when attempting to imagine two spaces, one is putting them side by side; there must be some outermost container.

Space is infinite, unbounded.  This doesn't imply that the infinity is all represented, just that the concept allows for indefinite extension.  Finite space can be derived by adding a bound to infinite space; this is similar to Spinoza's approach to finitude in the Ethics.

Space isn't a property of things in themselves, it's a property of phenomena, things as they relate to our intuition.  When formalizing mathematical space, points are assigned coordinates relative to the (0, 0) origin.  We always intuit objects relative to some origin, which may be near the eyes or head.  At the same time, space is necessary for objectivity; without space, there is no idea of external objects.

Our intuitions about space can only get universal geometric propositions if these propositions describe objects as they must necessarily appear in our intuition, not if they are describing arbitrary objects even as they may not appear in our intuition.  As a motivating intuition, consider that non-Euclidean geometry is mathematically consistent; if objective space were non-Euclidean, then our Euclidean intuitions would not yield universally valid geometric laws about objective space. (As it turns out, contemporary physics theories propose that space is non-Euclidean.)

Time

We also see observations extending over time. Over short time scales there is a sense of continuity in time; over long time scales we have more discrete "memories" that refer to previous moments, making those past moments relevant to the present. The structuring of our experiences over time is necessary for learning, otherwise there wouldn't be a "past" to learn from. AIs are, similarly, fed data in a pre-coded (not learned) temporal structure, e.g. POMDP observations in a reinforcement learning context.
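
A bare-bones sketch (all names invented) of what "pre-coded temporal structure" means here: the observations handed to a learning agent arrive already indexed by time step; the loop supplies the ordering rather than the agent inferring it.

```python
import random

# Sketch (invented example): the time-indexing of observations is part of the interface
# given to a learning agent, not something the agent learns from the data itself.

class CoinFlipEnv:
    """A trivial POMDP-style environment whose observation at each step is a coin flip."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def step(self):
        return self.rng.choice([0, 1])       # next observation

env = CoinFlipEnv()
history = [(t, env.step()) for t in range(5)]  # observations arrive already time-stamped
print(history)
```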

The time in which succession takes place is, importantly, different from objective clock time, though (typically) these do not disagree about ordering, only pacing.  For example, there is usually only a small amount of time remembered during sleep, relative to the objective clock time that passes during sleep.  (The theory of relativity further problematizes "objective clock time", so that different clocks may disagree about how much time has passed.)

We may, analogously, consider the case of a Solomonoff inductor that is periodically stopped and restarted as a computer process; while the inductor may measure subjective time by number of observation bits, this does not correspond to objective clock time, since a large amount of clock time may pass between when the inductor is stopped and restarted.

Kant writes, "Different times are only parts of one and the same time".  Perhaps he is, here, too quick to dismiss non-linear forms of time; perhaps our future will branch into multiple non-interacting timelines, and perhaps this has happened in the past.  One especially plausible nonlinear timelike structure is a directed acyclic graph. Still, DAGs have an order; despite time being nonlinear, it still advances from moment to moment.  It is also possible to arbitrarily order a DAG through a topological sort, so the main relevant difference is that DAGs may drop this unnecessary ordering information.

Time is by default boundless but can be bounded, like space.

"Time is nothing other than the form of the inner sense, i.e.  of the intuition of our self and our inner sense", in contrast to space, which is the form of the external sense. To give some intuition for this, suppose I have memory of some sequence of parties I have experienced; perhaps the first consists of myself, Bob, and Sally, the second consists of myself, Sally, and David, and the third consists of myself, Bob, and David.  What is common between all the parties I remember is that I have been at all of them; this is true for no one other than myself.  So, my memory is of situations involving myself; "me" is what is in common between all situations occurring in my subjective timeline.

Since time is the form of the inner sense, it applies to all representations, not only ones concerning outer objects, since all representations are in the mind.

Time is, like space, a condition for objects to appear to us, not a property of things in themselves.

Relational knowledge

Kant clarifies the way in which we fail to cognize objects in themselves, with the example of a triangle: "if the object (that is, the triangle) were something in itself, without relation to you the subject; how could you affirm that that which lies necessarily in your subjective conditions in order to construct a triangle, must also necessarily belong to the triangle in itself?"

Relational knowledge allows us to know objects as they relate to us, but not as they don't relate to us.  Geometry applies to objects that have locations in spacetime; for objects to appear in subjective spacetime, they must have coordinates relative to the (0, 0) origin, that is, the self; therefore, geometry applies to objects that have locations relative to the self; without a location relative to the self, the object would not appear in subjective spacetime.

It may seem silly to say that this "merely relational" knowledge fails to understand the object in itself; what properties are there to understand other than relational properties?  A triangle "in itself" independent of space (which relates the different parts of the triangle to each other) is a rather empty concept.

What is given up on, here, is an absolute reference frame, a "view from nowhere", from which objects could be conceived of in a way that is independent of all subjects; instead, we attain a view from somewhere, namely, from subjective spacetime.

Einstein's theory of special relativity also drops the absolute reference frame; however, it specifies connections and translations between subjective reference frames in a way that Kant's theory doesn't.

Sensibility and understanding

The sensibility is the faculty of passively receiving impressions, which are approximately "raw sense-data". The understanding is the faculty of spontaneously conceptualizing an object by means of these impressions.

To recognize an object (such as an apple), the mind must do something; with no mental motion, the light pattern of the apple would hit the retina, but no object would be represented accordingly. In general, the understanding synthesizes raw data into a coherent picture.

Manifold of intuition

Without concepts, sense data would be a disorganized flux, like a video of white noise; Kant terms this flux a "manifold of intuition".  When I think of this, I think of a bunch of sheets of space tied together by a (curved) timeline, with pixel-like content in the space. Judgments, which are propositions about the content of our understanding (e.g. "there is a cat in front of me"), depend on the "unity among our representations"; what is needed is a "higher [representation], which comprehends this and other representations under itself".  To judge that there is a cat in front of me, I must have parsed the manifold into concepts such as "cat" which relate to each other in a logically coherent universe; I cannot make a judgment from un-unified raw pixel-data. AI object recognition is an attempt to artificially replicate this faculty.

Synthesis

Synthesis is the process of "putting different representations together with each other and comprehending their manifoldness in one cognition".  This is an action of the spontaneity of thought, processing the manifold of intuition into a combined representation.

This relates to the phenomenal binding problem: how do we get a sense of a "unified" world from disconnected sensory data?

In Solomonoff Induction, the manifold of intuition would be the raw observations, and the manifold is synthesized by the fact that there is a universal Turing machine producing all the observations with a hidden state.  This is the case a priori, not only after seeing particular observations.  Similarly with other Bayesian models such as dynamic Bayesian networks; the network structure is prior to the particular observations.

Transcendental Categories

"There is only one experience, in which all perceptions are represented as in thoroughgoing and lawlike connection, just as there is only one space and time..."

Different experiences are connected in a lawlike way, e.g. through causality and through re-occurring objects; otherwise, it would be unclear how to even interpret memories as referring to the same world. The transcendental categories (which are types of judgment) are ways in which different representations may be connected with each other.

Kant gives 12 transcendental categories, meant to be exhaustive. These include: causation/dependence, existence/nonexistence, necessity/contingency, unity, plurality. I don't understand all of these, and Kant doesn't go into enough detail for me to understand all of them. Roughly, these are different ways experiences can connect with each other, e.g. a change in an experienced object can cause a change in another, and two instances of seeing an object can be "unified" in the sense of being recognized as seeing the same object.

Schema

A schema (plural schemata) relates the manifold of intuition (roughly, sense data) to transcendental categories or other concepts. As a simple example, consider how the concept of a cat relates to cat-related sense data. The cat has a given color, location, size, orientation, etc., which relate to a visual coordinate system. A cat object-recognizer may recognize not only that a cat exists, but also the location and possibly even the orientation of the cat.

Without schemata, we couldn't see a cat (or any other object); we'd see visual data that doesn't relate to the "cat" concept, and separately have a "cat" concept. In some sense the cat is imagined/hallucinated based on the data, not directly perceived: "schema is, in itself, always a mere product of the imagination". In Solomonoff induction, we could think of a schema as some intermediate data and processing that comes between the concept of a "cat" (perhaps represented as a generative model) and the observational sense data, translating the first to the second by filling in details such as color and location.
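
A loose sketch of this picture (an invented example, not anything from Kant or from a real recognizer): the schema is the bridge that takes the bare concept plus filled-in particulars (location, size, brightness) and produces pixel-level data that the concept alone underdetermines.

```python
import numpy as np

# Invented example: a "schema" as a rendering step between a concept and sense data.

def render_cat(location, size, brightness, canvas_shape=(16, 16)):
    """Crude stand-in for a generative model: draw a bright square 'cat' on a canvas."""
    canvas = np.zeros(canvas_shape)
    row, col = location
    canvas[row:row + size, col:col + size] = brightness   # the schema fills in spatial detail
    return canvas

# The concept "cat" alone fixes none of these particulars; the schema supplies them.
sense_data = render_cat(location=(3, 5), size=4, brightness=0.9)
print(sense_data.sum())   # ~14.4: sixteen "cat" pixels at brightness 0.9
```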

This applies to more abstract concepts/categories such as "cause" as well. When X causes Y, there is often a spacetime location at which that cause happens, e.g. the moment when one billiard ball hits another.

Kant writes: "Now it is quite clear that there must be some third thing, which on the one side is homogeneous with the category, and with the phenomenon on the other, and so makes the application of the former to the latter possible. This mediating representation must be pure (without any empirical content), and yet must on the one side be intellectual, on the other sensuous. Such a representation is the transcendental schema."

Schemata are transcendental because they are necessary for some phenomenal impressions, e.g. the impression that a cat is at some location. They are necessary for unifying the manifold of intuition (otherwise, there wouldn't be an explanation of correlation between different individual pixel-like pieces of sense data).

Consciousness of the self

Kant discusses consciousness of the self: "The consciousness of oneself in accordance with the determinations of our state in internal perception is merely empirical, forever variable; it can provide no standing or abiding self in this stream of inner appearances, and is customarily called inner sense or empirical apperception. That which should necessarily be represented as numerically identical cannot be thought of as such through empirical data. There must be a condition that precedes all experience and makes the latter itself possible, which should make such a transcendental presupposition valid."

The idea of a lack of a fixed or permanent self in the stream of internal phenomena will be familiar to people who have explored Buddhism.  What you see isn't you; there are phenomena that are representations of the engine of representation, but these phenomena aren't identical with the engine of representation in which they are represented.

The self is, rather, something taken as "numerically identical with itself", a condition that precedes experience. Imagine a sequence of animation frames in a Cartesian coordinate system. In what sense are they part of "the same sequence"? Without knowing more about the sequence, all we can say is that the frames are all part of the same sequence (and have an ordering within it); this sameness of sequence across frames is "numerical identity", similar to the identity of an object (such as a table) with itself when perceived at different times.

Dialectic

Kant writes: "We termed dialectic in general a logic of appearance." Dialectic is a play of appearances, claiming to offer knowledge, but instead offering only temporary illusions; different sophists argue us into different conclusions repeatedly, perhaps in a cyclical fashion.

Dialectic is an error it is possible to fall into when reasoning is not connected with possible experience. Kant writes about dialectic in part to show how not to reason. One gets the impression that Kant would have thought Hegel and his followers were wasting their time by focusing so much on dialectic.

The Antinomies of Pure Reason

As an example of dialectic, Kant argues that time and space are finite and that they are infinite; that everything is made of simple parts and that nothing is simple; that causality doesn't determine everything (requiring spontaneity as an addition) and that it does; that there is an absolutely necessary being and that there isn't. Each of these pairs of contradictory arguments is an antinomy.

Philosophers have argued about these sorts of questions for millennia without much resolution; it's possible to find somewhat convincing arguments on both sides, as Kant demonstrates.

Ens Realissimum

Kant writes: "This conception of a sum-total of reality is the conception of a thing in itself, regarded as completely determined; and the conception of an ens realissimum is the conception of an individual being, inasmuch as it is determined by that predicate of all possible contradictory predicates, which indicates and belongs to being."

Say some objects are cold and some are hot. Well, they still have something in common: they're all objects. There is a distinction being made (hot/cold), and there is something in common apart from that distinction. We could imagine a single undifferentiated object that is neither hot nor cold, but which can be modified by making it hot/cold to produce specific objects.

This is similar to Spinoza's singular infinite substance/God, of which all other (possibly finite) beings are modifications, perhaps made by adding attributes.

The ens realissimum has a similar feel to the Tegmark IV multiverse, which contains all mathematically possible universes in a single being, or a generative grammar of a Turing complete language. It is a common undifferentiated basis for specific beings to be conceptualized within.

Kant considers deriving the existence of a supreme being (God) from the ens realissimum, but the concept is too empty to yield properties attributed to God, such as benevolence, being the intelligent creator of the universe, or providing an afterlife. He goes on to critique all supposed rational proofs of the existence of God, but then says that he posits God and an afterlife because such posits are necessary to believe that the incentive of pleasure-seeking is aligned with acting morally. (Wishful thinking much?)

Conclusion

What is Kant getting at, in all this? I think he is trying to get readers to attend to their experience, the spacetimelike container of this experience, and the way their world-model is constructed out of their experience. For example, the idea that time is the form of the inner sense is apparent from noting that all accessible memories include me, but it's possible to "forget" about this subjective timeline and instead conceptualize time as observer-independent. The idea that the manifold of intuition must be actively synthesized into a representation containing objects (which is in line with cognitive science) challenges the idea that the world is "given", that "we" are simply inhabitants of a stable world. The idea of the "noumenon" as a negative, limiting concept points us at our experience (and what our experience could be) as an alternative to interminably angsting about whether what we experience is "really real" or about metaphysical concepts like God, which makes it easier to get on with positivist math, science, economics, and scholarship without worrying too much about its foundations.

The sense I get reading Kant is: "You live in a world of phenomena synthesized by your mind from some external data, and that's fine, in a sense it's all you could ever hope for. You have plenty of phenomena and generalities about them to explore, you can even inquire into the foundations of what makes them possible and how your mind generates them (I've already done a lot of that for you), but there's no deep Outside demanding your attention, now go live!"

When I take this seriously I worry about getting lost in my head, and sometimes I do get lost in my head, and the Outside does impinge on my cozy mental playground (demanding my attention, and loosening my mental assumptions structuring the phenomenal world), but things calm after a while and I experience the phenomenal world as orderly once again.

12 comments


comment by Richard_Kennaway · 2022-06-03T08:47:05.983Z · LW(p) · GW(p)

Kant did not live to see the discovery of non-Euclidean geometry, or the axiomatisation of logic, geometry, and all of mathematics. Nowadays we would say that the symmetries of the mathematical, geometric square are a theorem derivable from axioms, and that the symmetries of a wooden block are a physical property of a system that empirically satisfies such axioms. Neither had thermodynamics been developed in his time, nor atomic theory, nor neuroscience, nor indeed most of what we now know. Newtonian mechanics and heliocentric astronomy were the only major pieces of knowledge available to him about the hidden structures of the world.

How does his thinking stack up against these developments?

Replies from: jessica.liu.taylor, TAG, Mitchell_Porter
comment by jessicata (jessica.liu.taylor) · 2022-06-03T15:13:06.261Z · LW(p) · GW(p)

the axiomatisation of logic, geometry, and all of mathematics

Euclid's Elements predated Kant.

Nowadays we would say that the symmetries of the mathematical, geometric square are a theorem derivable from axioms, and that the symmetries of a wooden block are a physical property of a system that empirically satisfies such axioms.

I think the main problem with this is that it requires the wooden block rotation to be an empirical fact. It seems like with enough sense of space, it wouldn't require empirically observing rotating blocks to predict that a square rotating in space 90 degrees about its center remains the same. This is derivable in Euclidean geometry.

The physical prediction about the rotating block depends on assumptions like "it's possible to rotate the block, it doesn't get stuck" and "the block doesn't jump around as you rotate it", which could be empirically falsified and which could be added as assumptions to the statement.

I think whether or not this example is valid, the main point here is that it is possible to get some predictions of possible experience from mathematical reasoning (otherwise math would be useless for engineering), and so logic has to be somehow linked to possible experience for logic to yield these predictions; this might be doable with a type system, but not raw first-order logic.

Newtonian mechanics and heliocentric astronomy were the only major pieces of knowledge available to him about the hidden structures of the world.

Seems like an overstatement.

How does his thinking stack up against these developments?

Non-Euclidean geometry seems like a case where he correctly predicted his ignorance. His explanation of "analytic" would have been better with a more formal treatment of analytic truths (e.g. first-order logic), but I think his points are valid if "analytic" refers to what can be proven in raw first-order logic. Thermodynamics and atomic theory (and quantum mechanics) seem relevant in providing discrete foundations to the apparent continuity; I think Kant's arguments that space and time are fully continuous are incorrect (this was the main part of the book where I straight-up disagreed), and could have been more easily recognized as incorrect given thermodynamics and atomic theory. Neuroscience similarly seems relevant in limiting the information processing the mind can do to a finite number of discrete operations.

As I mentioned the theory of special relativity is an important development on subjective spacetime, Kant successfully avoids assuming problematic objective clock time, but doesn't discuss much linkage between subjective times.

Cognitive science would provide more detail on the mental operations Kant posits, e.g. synthesis, of which there are a lot. I am not sure what if any would be considered "false" by most contemporary thinkers, but they'd have specifics to add for sure.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2022-06-03T16:33:39.871Z · LW(p) · GW(p)

I think the main problem with this is that it requires the wooden block rotation to be an empirical fact. It seems like with enough sense of space, it wouldn't require empirically observing rotating blocks to predict that a square rotating in space 90 degrees about its center remains the same. This is derivable in Euclidean geometry.

The "sense of space" is empirical data. It is not derivable from Euclidean geometry, but results from the empirical fact that the space we live in is Euclidean (on human scales).

Even if it's embedded in our nervous system at birth (I do not know if it is), it's still empirical data. If space were not like that, we would not have (evolutionarily) developed to perceive it like that. Did Kant have the concept of knowledge that we are born with, which nonetheless is contingent on how the world happens to be?

Seems like an overstatement.

I'll grant Coulomb's law for electromagnetism, but geology and chemistry were mainly catalogues of observations, and philosophical aspects of steam engines had to wait for 19th century thermodynamics. Geology was producing the idea of a discoverable timeline for the Earth's changes, and chemistry was groping towards the idea of elements, but that is small potatoes compared with their development in the 19th century for chemistry and the 20th for geology. The 19th century in geology was mainly filling in the timeline in more and more detail across more of the Earth, and some wrong estimates for its age.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2022-06-03T17:56:10.485Z · LW(p) · GW(p)

Did Kant have the concept of knowledge that we are born with, which nonetheless is contingent on how the world happens to be?

I am not sure how much he has this. He grants that his philosophy applies to humans, not to other possible minds (e.g. God); it's a contingent fact that, unlike God, we don't produce things just by conceiving of them. And since he thinks spacetime is non-analytic, he grants that in a sense it "could" be otherwise, it's just that that "could" counterfactual structure must branch before we get empirical observations. But he doesn't much discuss the historical origins of why we have a human-like world representation, he just notes that we must have had one to some extent before we got sense-data. A possible materialist criticism of this model is that we can learn about these pre-empirical world structures through empirical psychology research on other minds, not just by making backwards ("transcendental") inferences from the world representation that we presently have and can access introspectively.

comment by TAG · 2022-06-03T18:17:25.572Z · LW(p) · GW(p)

Kantian philosophy can be taken as the exact claims made by Kant or as a broad approach, where basic categories are regarded as being imposed on reality rather than discovered in reality. And the broad approach still has mileage, because things like scientism, reductionism and Bayesianism don't resolve the issue.

comment by Mitchell_Porter · 2022-06-03T12:07:38.560Z · LW(p) · GW(p)

Almost fifty years ago, in his book Imaginary Magnitude, Stanislaw Lem included a chapter on philosophical works generated by AI. One title mentioned is "Antikant". I was inspired to try this out with GPT-J, specifying only the title, the abstract, and the headings of the desired essay. Here is the result, created in less than a minute: 

https://pastebin.com/5rXiAp3G 

The resulting "essay" is not philosophically meaningful - whereas Kant's work definitely is. But I do wonder how long it will be before machine-generated philosophy is just as coherent as human-generated philosophy. 

comment by avturchin · 2022-06-03T08:39:35.215Z · LW(p) · GW(p)

It is interesting how Kant came close in his aporia to the "discovery" of the Big Bang, in which space is both finite and infinite. E.g. the observable universe is finite but eternal inflation is infinite.

comment by dadadarren · 2022-06-06T18:16:44.386Z · LW(p) · GW(p)

Sometimes I wonder what would Kant think of the interpretive challenges of quantum mechanics. I get the feeling that people consider quantum puzzling precisely because we regard physical objects as noumenal reality rather than phenomenal conceptions. So that we like to think about the world from a Copernicus type of "birds-eye view" rather than from "our perspectives", more aligned with Kant's view. 

comment by TekhneMakre · 2022-06-03T05:45:36.856Z · LW(p) · GW(p)

Thanks.

>At the same time, space is necessary for objectivity; without space, there is no idea of external objects.

Why is space necessary? "External" seems like a good description of the relationship of objective stuff to minds, but that relationship doesn't seem like it couldn't be well-described in non-spatial terms. E.g. "reality is that which, when you stop believing in it, doesn't stop affecting you". (Though I had to modify the "go away".)

>Time is nothing other than the form of the inner sense

I doubt or don't understand this. I agree that time is the form of the inner sense, but it's also the form of other things. E.g. if there's a crater on the moon, and inside that crater is another smaller crater, that's manifesting the form of time, no?

>Relational knowledge allows us to know objects as they relate to us, but not as they don't relate to us.

Is relational knowledge supposed to characterize all knowledge? If so this seems very imprecise or wrong because by using induction we can know what's likely about objects as they haven't yet related to us. I think when people talk about reality, objectivity, things in themselves, etc., under the hood they're using intuitive beliefs that this sort of induction works / is useful, and I think they're generally correct.

Replies from: jessica.liu.taylor, TAG
comment by jessicata (jessica.liu.taylor) · 2022-06-03T14:57:02.009Z · LW(p) · GW(p)

Why is space necessary? “External” seems like a good description of the relationship of objective stuff to minds, but that relationship doesn’t seem like it couldn’t be well-described in non-spatial terms. E.g. “reality is that which, when you stop believing in it, doesn’t stop affecting you”. (Though I had to modify the “go away”.)

Such a sense of reality might be external in the sense of "unpredictable" but not in the sense of "apparently outside me".

I doubt or don’t understand this. I agree that time is the form of the inner sense, but it’s also the form of other things. E.g. if there’s a crater on the moon, and inside that crater is another smaller crater, that’s manifesting the form of time, no?

This is subjective time; to some extent other forms of time are derived from subjective time. I'm not really sure what you mean in the crater example, I guess you're saying that we infer from the two craters that one came after the other. When we do this we're sort of imagining seeing the moon, seeing the first crater form, then seeing the second crater form.

Is relational knowledge supposed to characterize all knowledge? If so this seems very imprecise or wrong because by using induction we can know what’s likely about objects as they haven’t yet related to us.

Suppose I use induction to show that there's a spot 1m forward from me, then another spot 1m forward from that, and so on. All these distances are relational even though they go to infinity.

Alternatively I could do mathematical induction to show that all naturals are odd or even. These "relate to me" in that I can imagine them in intuition (up to a point), but they aren't unique to me, since they could be imagined equivalently in another context.

comment by TAG · 2022-06-05T19:03:54.605Z · LW(p) · GW(p)

One of the many confusing things about Kant is that he uses words in idiosyncratic ways. His "necessary" is usually "any possible experience". At least that keeps the usual equation between "necessary" and "all possible". Anyway, space is necessary for any possible experience involving a subject and an object, because spatial separation is what distinguishes a subject and an object.

comment by Czynski (JacobKopczynski) · 2022-07-18T03:23:42.099Z · LW(p) · GW(p)

If space weren’t a priori then we couldn’t become fully confident of geometrical laws such as “a square turned 90 degrees about its center is the same shape”, we’d have to learn these laws from experience, running into Hume’s problem of induction.

This is false. Hume's problem of induction can be avoided by the very simple expedient of not requiring "fully confident" to be perfect, probability-1 confidence. Learning laws from experience is entirely sufficient for 99.99% confidence, and probably still good up to ten or even twenty nines.

This is a logical fallacy, which has been demonstrated as such in a very precise mirror - the Chomskyan view of language syntax, which has been experimentally disproven. To summarize the linguistic debate: Noam Chomsky created the field of syntax and maintained, on the same grounds of the impossibility of induction, that we must have an a priori internal syntax model we are born with, within which children's language learning is assigning values to a finite set of free parameters, such as "Subject-Verb-Object" sentence structure (SVO) vs. SOV/VSO/VOS/OVS/OSV. He declared that the program of syntax was to discover and understand the set of free parameters, and the underlying framework they were modifying. This was a compelling argument which produced beautiful theories, but it was built on faulty assumptions: perfectly precise language learning is impossible, but it is also unnecessary. (Additionally, some languages, notably Pirahã, violate the core assumptions the accumulated theory had concluded were universals embedded in the language submodule/framework.)

The theory which superseded it (and is now ‘advancing one funeral at a time’) is an approximate theory: it is impossible to learn any syntax precisely from finite evidence, but arbitrarily good approximation is possible. Every English speaker has an ‘idiolect’, the hyper-specific dialect that is how they, and they only, speak and understand English, and this differs slightly, both in vocabulary and syntax, from everyone else’s. No two humans speak the same language, nor have they ever, but this is fine because the languages we do speak are close enough to be mutually intelligible. (And of course now GPT-3 has its own idiolect, though its concept of vocabulary is severely lacking in terms of the meanings of words.)

The analogy is hopefully clear: we have no need for an innate assumption of space. My concept of space and yours are not going to match, but they will be close enough that we can communicate intelligibly and reason from the same premises to the same conclusions. It is of course possible that we have some built-in assumptions, but it is not necessary and we should consider it as an Ockham violation unless we find that there are notions of space we cannot learn even when they manifestly are better at describing our reality. Experimentally, I would say we have very strong evidence that space is not innate: watching babies learn how to interpret their sensorium, they need to learn that distance, angle, and shape exist, and that they are properties shared between sight and touch.

I expect that the same can be done for time, the self, and probably other aspects mentioned here. We can learn things approximately, without any a priori assumptions beyond the basic assumption that induction is valid, i.e. that things that appear true in our memories are more likely to appear true in our ongoing experience than things that appear false in our memories. (I have attempted, and I think succeeded, to make that definition time-free.) For establishing that this applies to time, I would first go about it by examining how babies learn object permanence, which seems like an example of minds which do not yet have an assumption of time. Similarly for the self and the mirror test and video/memory test.