Science and Sanity looks pretty interesting. In the book summary it says he stressed that strict logical identity doesn't hold in reality. Can you say more about how he builds up a logical system without using the law of identity? How does equational reasoning work for example?
Great question.
My joke answer is: probably Hegel but I don't know for sure because he's too difficult for me to understand.
My serious answer is Graham Priest, a philosopher and logician who has written extensively on paradoxes, non-classical logics, metaphysics, and theories of intentionality. His books are extremely technically demanding, but he is an excellent writer. To the extent that I've managed to understand what he is saying, it has improved my thinking a lot. He is one of those thinkers who manages to be simultaneously extremely big picture and super rigorous in the details and argumentation.
Ever since I first studied formal logic in my first year of undergrad, I always felt it had promise for clarifying our thinking. Unfortunately, in the next decade of my academic education in philosophy I was disappointed on the logic front. Logic seemed either irrelevant to the questions I was concerned with or, when it was used, it seemed to flatten and oversimplify the important nuances. Discovering Priest's books (a decade after I'd left academic philosophy) fulfilled my youthful dreams of logic as a tool for supercharging philosophy. Priest uses the formalisms of logic like an artist to paint wondrous and sometimes alien philosophical landscapes.
Books by Priest in suggested reading order:
An Introduction to Non-Classical Logic: From If to Is. Cambridge University Press, Second Edition, 2008.
- It's a great reference. You don't need to read every page, but it is very helpful to turn to when trying to make sense of the rest of Priest's work.
Beyond the Limits of Thought. Oxford University Press, Second (extended) Edition, 2002.
- Presents the surprisingly central role of paradoxes throughout the history of philosophy.
- The unifying theme is Priest's thesis that we humans really are able to think about the absolute limits of our own thought in spite of the fact that such thinking inevitably results in paradoxes.
One: Being an Investigation into the Unity of Reality and of its Parts, including the Singular Object which is Nothingness. Oxford University Press, 2014.
- A study in the metaphysics of parts and wholes.
- A deeply counterintuitive but surprisingly powerful account based on contradictory entities.
Towards Non-Being, 2nd (extended) edition. Oxford University Press, 2016.
- An analysis of intentional mental states based on a metaphysics of non-existent entities.
Various fisheries have become so depleted as to no longer be commercially viable. One of the obvious examples is the Canadian Maritime fisheries. Despite advance warning that overfishing was leading to a collapse in cod populations, they were fished to the point of commercial non-viability, resulting in a regional economic collapse that has depressed standards of living in the Maritime provinces to this day.
according to the story that your brain is telling, there is some phenomenology to it. But there isn't.
Doesn't this assume that we know what sort of thing phenomenological consciousness (qualia) is supposed to be so that we can assert that the story the brain is telling us about qualia somehow fails to measure up to this independent standard of qualia-reality?
The trouble I have with this is that there is no such independent standard for what phenomenal blueness has to be in order to count as genuinely phenomenal. The only standard we have for identifying something as an instance of the kind qualia is to point to something occurring in our experience. Given this, it remains difficult to understand how the story the brain tells about qualia could fail to be the truth, and nothing but the truth, about qualia (given the physicalist assumption that all our experience can be exhaustively explained through the brain's activity).
I see blue, and pointing to the experience of this seeing is the only way of indicating what I mean when I say "there is a blue qualia". So to echo J_Thomas_Moros, any story the brain is telling that constitutes my experience of blueness would simply be the qualia itself (not an illusion of one).
For an in-depth argument that could be taken to support this point, I highly recommend Humankind: A Hopeful History by Rutger Bregman.
it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.
I find this claim interesting. I’m not entirely sure what you intend by the word “downstream” but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to justify mathematical and logical knowledge. If by "downstream" you have some other meaning in mind, please clarify. However, I will point out that you can't simply mean causally downstream, i.e., the claim that intuition is caused by brain stuff, because a merely causal link does not relate neuroscience to epistemology (I am happy to expand on this point if necessary, but I'll leave it for now).
So given my reading of what you wrote, the obvious question to ask is, do we have to know neuroscience to do mathematics rationally? This would be news to Bayes, who lived in the 18th century when there wasn’t much neuroscience to speak of. Your view implies that Bayes (or Euclid for that matter) was epistemically unjustified in his mathematical reasoning because he didn’t understand the neural algorithms underlying his mathematical inferences.
If this is what you are claiming, I think it’s problematic on a number of levels. First, it faces a steep initial plausibility problem in that it implies mathematics as a field was unjustified for most of its thousands of years of history, until some research in empirical science validated it. That is of course possible, but I think most rationalists would balk at seriously claiming that Euclid didn't know anything about geometry because of his ignorance of cognitive algorithms.
But a second, deeper problem affects the claim even if one leaves off historical considerations and only looks at the present state of knowledge. Even today, when we do know a fair amount about the brain and cognitive mechanisms, the idea that math and logic are epistemically grounded in this knowledge is viciously circular. Any sophisticated empirical science relies on the validity of mathematical inference to establish its theories. You can’t use neuroscience to validate statistics when the validity of neuroscientific empirical methods themselves depends on the epistemic bona fides of statistics. With logic the case is even more obvious. An empirical science will rely on the validity of deductive inference in formulating its arguments (read any paper in any scientific journal). So there is no chance that the rules of logic will be ultimately justified through empirical research. Note this isn't the same as saying we can't know anything without assuming the prior validity of math and logic. We might have lots of basic kinds of knowledge about tables and chairs and such, but we can't have sophisticated knowledge of the sort gained through rigorous scientific research, as this relies essentially on complex reasoning for its own justification.
An important caveat to this is that of course we can have fruitful empirical research into our cognitive biases. For example, the famous Wason selection task showed that humans in general are not very reliable at applying the logical rule of modus tollens in an abstract context. However, crucially, in order to reach this finding, Wason (and other researchers) had to assume that they themselves knew the right answer on the task, i.e., the cognitive science researchers assumed the a priori validity of the deductive inference rule based on their knowledge of formal logic. The same is true for Kahneman and Tversky’s studies of bias in the areas of statistics and probability.
In summary, I am wholeheartedly in favour of using empirical research to inform our epistemology (in the way that the cognitive biases literature does). But there is a big difference between this and the claim that epistemology doesn't need anything in addition to empirical science. This is simply not true. Mathematics is the clearest example of why this argument fails, but once one has accepted its failure in the case of mathematics, one can start to see how it might fail in other less obvious ways.
This is a fascinating article about how the concept of originality differs in some Eastern cultures https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just-as-good-as-an-original
An interesting contribution to this is the book by Hofstadter and Sander.
They explain thinking in terms of analogy, which as they use the term encompasses metaphor. This book is a mature, cognitive-sciencey articulation of many of the fun and loose ideas that Hofstadter first explored in G.E.B.
I'm curious how many people here think of rationalism as synonymous with something like Quinean Naturalism (or just naturalism/physicalism in general). It strikes me that naturalism/physicalism is a specific view one might come to hold on the basis of a rationalist approach to inquiry, but it should not be mistaken for rationalism itself. In particular, when it comes to investigating foundational issues in epistemology/ontology, a rationalist should not simply take it as a dogma that naturalism answers all those questions. Quine's Epistemology Naturalized is an instructive text because it actually attempts to produce a rational argument for approaching foundational philosophical issues naturalistically. This is something I haven't seen much on LW; it usually seems like this is taken as an assumed axiom with no argument.
The value of attempting to make the arguments for naturalized epistemology explicit is that they can then be critiqued and evaluated. As it happens, when one reads Quine's work on this and thinks carefully about it, it becomes pretty evident that it is problematic for various reasons as many mainstream philosophers have attempted to make clear (e.g., the literature around the myth of the given).
I'd like to see more of that kind of foundational debate here, but maybe that's just because I've already been corrupted by the diseased discipline of philosophy ; )
You might be interested to look at David Corfield's book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comonads. This allows us to understand modality in terms of "thinking in a context", where the context (possible worlds) can be given a rigorous meaning categorically and type theoretically (using slice categories).
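As a toy illustration (my own Haskell sketch, not Corfield's categorical construction; the names World, accessible, possibly, and necessarily are made up for the example), the Kripke-style reading of the modal operators can be phrased as quantification over the list of accessible worlds, where lists are Haskell's usual nondeterminism monad:

```haskell
-- Toy Kripke-style sketch: modal operators as quantification over the
-- list of accessible worlds (lists being Haskell's nondeterminism monad).
-- All names here are illustrative, not from Corfield's book.

type World = String

-- A hard-coded accessibility relation, purely for the example.
accessible :: World -> [World]
accessible "w0" = ["w1", "w2"]
accessible _    = []

-- "Possibly p" holds at w if p holds at some accessible world.
possibly :: (World -> Bool) -> World -> Bool
possibly p w = any p (accessible w)

-- "Necessarily p" holds at w if p holds at every accessible world.
necessarily :: (World -> Bool) -> World -> Bool
necessarily p w = all p (accessible w)

main :: IO ()
main = print (possibly (== "w2") "w0", necessarily (== "w2") "w0")
-- prints (True,False)
```

This only captures the quantificational flavour; Corfield's claim is about the full monad/comonad structure (unit, multiplication, and their laws), which is what makes "thinking in a context" precise.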
I really enjoyed this post. It was fun to read and really drove home the point about starting with examples. I also thought it was helpful that it didn't just say, "teach by example". I feel that this simplistic idea is all too common and often leads to bad teaching where example after example is given with no clear definitions or high level explanations. However, this article emphasized how one needs to build on the example to connect it with abstract ideas. This creates a bridge between what we already understand and what we are learning.
As I was thinking about this to write this review, I was trying to think of examples where it makes more sense to explain the abstract thing first and then give examples. I had great difficulty coming up with any examples where abstract first makes sense. The few possible examples I could think of came from pure math, and even there I wonder if it wouldn't still help to start with examples.
The most abstract subject I've ever studied is category theory. Recently I was learning about adjoint functors, and here indeed the abstract definition makes sense entirely independent of any examples. However, having learned the definition, one can't really do anything with adjoint functors until one has seen it in some examples. So this might be a case where the abstraction-example-abstraction order of explanation makes sense. On the other hand, once I learned about the free-forgetful adjunction, I thought that would have been a good example to start with to build intuition before introducing the abstract definition. I realized that my favorite teachers of the subject still use a lot of examples. Like Bartosz Milewski, who comes at category theory from the perspective of a programmer.
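For what it's worth, here is a minimal Haskell sketch of that free-forgetful example (standard material; the names extend and restrict are my own): the free monoid on a type is the list type, and the adjunction is the two-way correspondence between plain functions on generators and monoid homomorphisms on lists.

```haskell
import Data.Monoid (Sum (..))

-- The free monoid on a type a is [a] (lists under concatenation).
-- The free -| forgetful adjunction says monoid homomorphisms [a] -> m
-- correspond exactly to plain functions a -> m.

-- Extend a function on generators to a homomorphism on lists (this is foldMap).
extend :: Monoid m => (a -> m) -> ([a] -> m)
extend f = mconcat . map f

-- Restrict a homomorphism back to generators by precomposing with the
-- unit of the adjunction, the singleton map.
restrict :: ([a] -> m) -> (a -> m)
restrict h = h . (: [])

main :: IO ()
main = print (getSum (extend Sum [1, 2, 3 :: Int]))
-- prints 6
```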
Learning to program is also a good example where in advance one might think it would make sense to learn a bunch of abstractions first. However, in practice, one learns to code by example and then, after having mastered some examples, learns the principles behind them.
Excellent article, thank you. I particularly enjoyed your images and diagrams. To me concept diagrams are another superpower for explaining things. Have you written anything about that?
Personally, I thought "mind-hanger" was ok. I got an image of a coat-hanger for the mind. You could even include that image explicitly in your concept mapping pictures.
Some other ideas that stick with the coat-hanger variant would be "idea-hanger", "concept-hanger".
Another term you might consider is scaffolding. This also has a strong concrete image of construction scaffolding, but the metaphor lends itself to the idea of building on top of the skeletal example, just as we start a building project with a scaffold and then build the real building around it. Often the scaffold is removed at the end, which can also happen with abstraction, where we can throw away the pedagogic examples once we've mastered the bigger idea. We don't really build anything on top of a coat hanger (nor a ship's anchor).
I think I can fruitfully engage in truth evaluation of grue things without agreeing or supposing that grue is fitting.
As indicated in the post, fittingness is dependent on the domain D under study. If we take grue to be a term in the study of colour, it is profoundly ill-fitting. I think it is a fair assessment that no researcher who studies colour would find it fruitful or salient to evaluate the truth of propositions involving grue. The picture changes however if we let D be philosophy of science. In that case, grue is fitting, precisely because it illuminates an important paradox in our theories of induction. Here the truth evaluation of statements formulated using grue is fruitful, but that's not a problem because grue is fitting.
A true counter example to my claim would require that a concept C is ill-fitting for a given domain D and yet it is fruitful (for the purpose of rational inquiry into D) to evaluate the truth of statements which are formulated with C.
Regarding the quantum mechanics example, I would need more details to fully understand your claim. My hunch is that the mathematical concepts used to formulate QM could be fitting for the domain of physics even if we don't have a good meta-interpretation of them. If you think this isn't the case, please elaborate on why not.
The term I introduced is "fittingness" not fitness. Fittingness is meant to evoke both fit, as in whether a pair of shoes fit my feet, and also fitting, as in "that is a fitting word choice for this sentence". It is possible that there is another term which would be a better label for the underlying concept. If you have suggestions for alternatives I would love to hear them.
I think it's important that the word is specific, not general. As you point out, we could use a general term qualified with a lengthy phrase like: "success with respect to concept formation in the context of rational inquiry," but that clunker is difficult to sprinkle throughout an argument. The advantage of a single term to encapsulate an important idea should be obvious. Nobody suggests we should replace the term truth with the phrase, "success with respect to belief in the context of rational inquiry." Moreover, the metaphorical associations of fit and fitting give a clue about what this kind of success actually involves. It involves concepts fitting the structures found in reality, without implying the unsustainable idea that we can know what the natural structures are in advance of inquiry. We can size a shoe without knowing our foot size in advance, just by trying on lots of different shoes until one fits.
I admit that the concept I call fittingness is not often used at present. Indeed I believe in present discourse fittingness is often muddled either with truth or instrumental usefulness. This precise muddle leads to difficulties in understanding how Kuhnian paradigm shifts (or pre-paradigmatic science) can be understood as legitimate expressions of rational inquiry. I didn't do more than hint at such problems in the post, maybe I'll write another post about this.
The point of my post is to dispel these muddles and make it easier to appeal to fittingness on the regular. I want it to be a part of our ready-to-hand conceptual repertoire as rationalists, in the same way that we have easy access to terms like truth, probability, evidence, etc. I make a case for why this would be of benefit in the section titled, "Why is this Distinction Important?" If you don't find that section convincing please let me know what you see as the specific shortcomings and I will try to address them.
I am a fan of johnswentworth's gears sequence. It would be fruitful to have this distilled.
It would be good to have some polling of which gears are tight and slack in the present state of the world for various big projects of interest to LessWrong members, e.g., for AGI research, for ethics, for various sciences, for societal progress, etc.
Thanks, you are correct. I have updated the post to reflect this.
Fittingness is not the same as telos/purpose/concern. It is a success concept for a specific telos/purpose/concern, namely that belonging to rational inquiry. In other words, it indicates that one has formed a concept or ontology which is successful for the purpose of rational inquiry. Of course, there might be other purposes governing concept formation which would have their own success concepts. For example, if one's purpose is to craft deceptive propaganda then the relevant success concept might be slipperiness or something.
I think a big open question is how to think about rationality across paradigms or incompatible ontological schemas. In focusing only on belief evaluation, we miss that there is generally a tacit framework of background understanding which is required for the beliefs to be understood and evaluated.
What happens when people have vastly different background understandings? How does rationality operate in such contexts?
The author does a good job articulating his views on why Buddhist concentration and insight practices can lead to psychological benefits. As somebody who has spent years practicing these techniques and engaging with various types of (Western) discourse about them, I find the author's psychological claims plausible only up to a point. He does not offer a compelling mechanism for why introspective awareness of sankharas should lead to diminishing them. He also offers no account of why, if insight does dissolve psychological patterns, it would preferentially dissolve negative patterns while leaving positive patterns unchanged. In my own opinion this has a lot more to do with the set and setting of the meditation practice, i.e., the expectation that practice will have salutary effects.
I am not convinced that this is a faithful "translation" of the Buddha's teachings. He leaves out any talk of achieving liberation from rebirth, which is the overarching goal of Buddhist practice in the original texts. He does not discuss the phenomenon of cessation/nirvana and whether it is necessary (according to the Buddha it is). He also does not address the fact that the Buddha was not aiming to teach a path of psychological health and wellbeing in our modern sense. Far from it: the idea that one could be happy and satisfied (in an ordinary psychological sense) was certainly recognized by the Buddha and his followers, but this was not seen as the goal of practice. In my view, the biggest misrepresentation of Buddhist ideology in its appropriation by the West was its construal as a secular wellness path rather than an extreme doctrine that denies any value in ordinary happiness.
A few points. First, I've heard several AI researchers say that GPT-3 is already close to the limit of all high-quality human-generated text data. While the amount of text on the internet will continue to grow, it might not grow fast enough for major continued improvement. Thus additional media might be necessary for training input.
Second, deafblind people still have multiple senses that allow them to build 3D sensory-motor models of reality (touch, smell, taste, proprioception, vestibular sense, sound vibrations). Correlations among these senses give rise to an understanding of causality. Moreover, human brains might have evolved innate structures for things like causality, agency, objecthood, etc., which don't have to be learned.
Third, as DALL-E illustrates, intelligence is not just about acquiring knowledge; it is also about expressing that learning in a medium. It is hard to see how an AI trained only on text could paint a picture or sing a song.
What happens when OpenAI simply expands this method of token prediction to train with every kind of correlated multi-media on the internet? Audio, video, text, images, semantic web ontologies, and scientific data. If they also increase the buffer size and token complexity, how close does this get us to AGI?
Ah, this post brings back so many memories of studying philosophy of science in grad school. Great job summarizing Structure.
One book that I found very helpful in understanding Kuhn's views in relation to philosophical questions like the objectivity vs mind-dependence of reality is Dynamics of Reason by Michael Friedman. Here Friedman relates Kuhn's ideas both to Kant's notion of categories of the understanding and to Rudolf Carnap's ontological pragmatism.
The upshot of Friedman's book is the idea of the constitutive a priori, which roughly is the notion of a conceptual background understanding that makes certain empirical beliefs intelligible. Unlike Kant's categories, Friedman's constitutive a priori (which is supposed to capture both Kuhn's notion of a paradigm and Carnap's notion of a "language") can change over time. This sounds like it might also have strong resonance with what Scott calls predictive coding. It would be interesting to explore those connections. However, there is still a bit of mystery surrounding what happens when we shift, individually or collectively, from one constellation of constitutive a priori to a different one (parallel to the question of how scientific communities can shift paradigms in a rational way). Friedman advances the idea of "discursive rationality" to account for how we can make this shift rationally. Basically, to shift between constitutive a prioris, we have to step out of empiricist modes of rationality and adopt a more hermeneutic/philosophical style. Again, this certainly has echoes in some things Kuhn says about paradigm shifts.
So in the end I don't think Friedman really solves the problem, but his book does make it much clearer what the nature of the problem really is. It helps by relating Kuhn's specific concepts, like paradigm shift, to the broader history of philosophy from Kant through to the logical positivists. It is pretty striking that the same type of problem emerged for positivists like Carnap as for avowedly anti-positivists like Kuhn. To me this suggests that it isn't a superficial issue limited to a specific thinker or school, but rather points to something quite deep.
Thanks, I just watched Victor's Seeing Spaces talk and it is really cool.
Wow, there is a lot to dig into here. Thanks for this.
The trouble is that these antibodies are not logical. On the contrary; these antibodies are often highly illogical. They are the blind spots that let us live with a dangerous meme without being impelled to action by it.
That is a brilliant point. I also loved your description of the Buddhist monk taking questions from a Western audience. The image of incompatible knowledge blocks is a great one that actually makes a lot of sense of how various ideologically conditioned people are able to functionally operate.
The example that comes up for me is animal suffering. I believe that torturing animals in factory farms will one day be regarded as a moral evil on par with war, slavery, etc. While I refrain from meat, I have a blindspot for eggs and milk, and a still bigger blind spot for others eating meat. If I didn't have the latter blindspot, I wouldn't be able to function in society. I don't go around consciously thinking of factory carnivores as moral monsters. Maybe if I were more rationally driven I would think this way, and that might be a very bad thing.
Maybe the true judo move is to learn how to include the practical rationality of when to compartmentalize in the rational calculus. Of course, we might not have such fine grained control over these unconscious aspects of our cognition.
There is a wonderful scene in the new Pixar film Soul where they show a "lost soul" who turns out to be a hedge fund trader who just keeps saying, "gotta make the trade". Your description of your high income clients reminded me of that.
I can't say anything on this subject that Derek Parfit didn't say better in Reasons and Persons. To my mind, this book is the starting point for all such discussions. Without awareness of it, we are just reinventing the wheel over and over again.
I find it hard to believe your prediction that this breakthrough will be insignificant given what I've read in other reputable sources. I give a pretty high initial credence to the scientific claims of publications like Nature which had this to say in their article on Alphafold2:
The ability to accurately predict protein structures from their amino-acid sequence would be a huge boon to life sciences and medicine. It would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery.
Agreed. OpenAI did a study on trends in algorithmic efficiency. They found a 44x improvement in training efficiency on ImageNet over 7 years.
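As a back-of-envelope check (my own arithmetic, not a figure quoted from the study), a 44x gain over 7 years corresponds to a doubling time of roughly 15-16 months, which matches the roughly 16-month doubling the study itself reports, if I recall correctly:

$$
44^{1/7} \approx 1.7 \text{ per year}, \qquad t_{\text{double}} = \frac{7\ln 2}{\ln 44} \approx 1.3\ \text{years} \approx 15\text{--}16\ \text{months}.
$$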
I find reading this post and the ensuing discussion quite interesting because I studied academic philosophy (both analytic and continental) for about 12 years at university. Then I changed course, moved into programming and math, and developed a strong interest in AI safety.
I find this debate a bit strange. Academic philosophy has its problems, but it's also a massive treasure trove of interesting ideas and rigorous arguments. I can understand the feeling of not wanting to get bogged down in the endless minutiae of academic philosophizing in order to be able to say anything interesting about AI. On the other hand, I don't quite agree that we should just re-invent the wheel completely and then look to the literature to find the "philosophical nearest neighbor". Imagine suggesting we do that with math. "Who cares about what all these mathematicians have written, just invent your own mathematical concepts from scratch and then look to find the nearest neighbor in the mathematical literature." You could do that, but you'd be wasting a huge amount of time and energy re-discovering things that are already well understood in the appropriate field of study. I routinely find myself reading pseudo-philosophical debates among science/engineering types and thinking to myself, I wish they had read philosopher X on that topic so that their thinking would be clearer.
It seems that here on LW many people have a definition of "rationalist" that amounts to endorsing a specific set of philosophical positions or meta-theories (e.g., naturalism, Bayesianism, logical empiricism, reductionism, etc). In contrast, I think that the study of philosophy shows another way of understanding what it is to be a rational inquirer. It involves a sensitivity to reason and argument, a willingness to question one's cherished assumptions, a willingness to be generous with one's intellectual interlocutors. In other words, being rational means following a set of tacit norms for inquiry and dialogue rather than holding a specific set of beliefs or theories.
In this sense, reason does not involve a commitment to any specific meta-theory. Plato's theory of the forms, however implausible it seems to us today, is just as much an expression of rationalism in the philosophical sense. It was a good-faith effort to try to make sense of reality according to the best arguments and evidence of his day. For me, the greatest value of studying philosophy is that it teaches rational inquiry as a way of life. It shows us that all these different weird theories can be compatible with a shared commitment to reason as the path to truth.
Unfortunately, this shared commitment does break down in some places in the 19th and 20th centuries. With certain continental "philosophers" like Nietzsche, Derrida, and Foucault, the writing undermines the commitment to rational inquiry itself and ends up being a lot of posturing and rhetoric. However, even on the continental side there are some philosophers who are committed to rational inquiry (my favourite being Merleau-Ponty, who pioneered ideas of grounded intelligence that inspired certain approaches in RL research today).
I think it's also worth noting that Nick Bostrom, who helped found the field of AI safety, is a straight-up Oxford-trained analytic philosopher. During my Master's program, I attended a talk he gave on utilitarianism at Oxford back in 2007, before he was well known for AI-related stuff.
Another philosopher who I think should get more attention in the AI-alignment discussion is Harry Frankfurt. He wrote brilliantly on the value-alignment problem for humans (i.e., how do we ourselves align conflicting desires, values, interests, etc.).
Ah, thanks for clarifying. So the key issue is really the adjusted for inflation/deflation part. You are saying even if previously expensive goods become very cheap due to automation, they will still be valued in "real dollars" the same for the productivity calculation.
Does this mean that a lot rides on how economists determine comparable baskets of goods at different times and also on how far back they look for a historical reference frame?
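For concreteness, the textbook version of the calculation I have in mind (a simplification; statistical agencies actually use chained indices) values current quantities at base-period prices:

$$
\text{real output}_t = \sum_i p_{i,0}\, q_{i,t}, \qquad
\text{labour productivity}_t = \frac{\sum_i p_{i,0}\, q_{i,t}}{\text{hours worked}_t},
$$

where $p_{i,0}$ are base-period prices and $q_{i,t}$ are current quantities. On that formula, the choice of the base period and of which goods count as "the same good" over time is doing a lot of work, which is why the basket question seems important.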
Thanks for your comment Phil. That's helpful, I hadn't considered the question of where labour shifts after less of it is needed to produce an existing good.
I understand you as saying that as productivity increases in a field and market demand becomes saturated, the workers move elsewhere. This shift of labour to new sectors could (and historically did) lead to more overall productivity, but I think this trend may not continue with the current waves of automation. It seems possible that the areas of the economy workers now move to are those less affected by the productivity-enhancing effects of technology. I think this is what actually happened with the economic shift from manufacturing to service industries. Manufacturing can benefit a lot from automation technology, whereas service jobs (especially in fields where the human element of the service is what makes the service valued) are not as capable of becoming more productive. e.g., A massage therapist is not going to get much more productive no matter how much technology we have. So what I imagine is that as automation makes it take less and less labour to produce physical and digital stuff, most of the jobs that remain will be in human-centered fields which are inherently harder to make more productive through technology.
Thus, it still seems possible that automation could cause worker productivity to go down (which is the opposite of what Krugman was assuming). This is counter-intuitive because clearly there is a common sense way in which the automated economy is much much more productive. More and more goods become plentiful and virtually free. But these cheap plentiful goods do not have much market value, despite their value to us as human beings, so they don't contribute to labour productivity as measured by economists.
Digital knowledge management tools envisioned in the 1950s and 60s, such as Douglas Engelbart's hyperdocument system, have not been fully implemented (to my knowledge) and certainly not widely embraced. The World Wide Web failed to implement key features from Engelbart's proposal, such as the ability to directly address arbitrary sub-documents, or the ability to live-embed a sub-document inside another document.
Similarly, both Engelbart and Ted Nelson emphasized the importance of hyperlinks being bidirectional so that the link is browsable from both the source and the target document. In other words, you could look at any webpage and immediately see all the pages that link to that page. However, Tim Berners-Lee chose to make web hyperlinks one-directional from source to target, and we are still stuck with that limitation today. Google's PageRank algorithm gets around this by massively crawling the web and then tracing the back-links through the network, but back-links could have been built into the web as a basic feature available to everybody.
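To illustrate how little extra machinery backlinks require in principle, here is a toy Haskell sketch (page names and function names are made up for the example): given every page's outgoing links, the backlink index is just the inverted relation.

```haskell
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)

type Page = String

-- A tiny forward-link graph; page names are illustrative only.
forwardLinks :: Map Page [Page]
forwardLinks = Map.fromList
  [ ("a.html", ["b.html", "c.html"])
  , ("b.html", ["c.html"])
  , ("c.html", [])
  ]

-- Invert "source -> targets" into "target -> sources": the backlink index.
backLinks :: Map Page [Page] -> Map Page [Page]
backLinks m = Map.fromListWith (++)
  [ (target, [source]) | (source, targets) <- Map.toList m, target <- targets ]

main :: IO ()
main = print (Map.lookup "c.html" (backLinks forwardLinks))
-- prints Just ["b.html","a.html"]
```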
https://www.dougengelbart.org/content/view/156/88/
I second this book recommendation. I just finished reading it and it is well written and well argued. Bregman explicitly contrasts Hobbes' pessimistic view of human nature with Rousseau's positive view. According to the most recent evidence Rousseau was correct.
His evolutionary argument is that social learning was the overwhelming fitness-enhancing ability that drove human evolution. As a result, we evolved for friendliness and cooperation as a byproduct of selection for social learning.
I don't know enough math to understand your response. However, from the bits I can understand, it seems to leave open the epistemic issue of needing an account of demonstrative knowledge that is not dependent on Bayesian probability.
Interesting. This might be somewhat off topic, but I'm curious how such a Bayesian analysis of mathematical knowledge would explain the fact that a randomly selected real number is provably non-computable with probability 1, yet this is not equivalent to a proof that all real numbers are non-computable. The real numbers 1, 1.4, the square root of 2, pi, etc. are all computable, even though the probability of such numbers occurring in an empirical sample of the domain is zero.
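For reference, the standard way to state the fact I'm gesturing at: the computable reals are countable (each one is specified by some finite program), and any countable set has Lebesgue measure zero, so for $x$ drawn uniformly from $[0,1]$,

$$
\Pr[x \text{ is computable}] \;\le\; \sum_{n=1}^{\infty} \mu(\{x_n\}) \;=\; 0,
$$

even though particular computable reals like $1$, $1.4$, $\sqrt{2}$, and $\pi$ obviously exist. The gap between "probability 1" and "provable for all" is exactly what I'd want a purely probabilistic epistemology to account for.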
I was excited by the initial direction of the article, but somewhat disappointed with how it unfolded.
In terms of Leibniz's hope for a universal formal language, we may be closer to that. The new book Modal Homotopy Type Theory (2020, by David Corfield) argues that much of the disappointment with formal languages among philosophers and linguists stems from the fact that through the 20th century most attempts to formalize natural language did so with first-order predicate logic or other logics that lacked dependent types. Yet dependent types are natural in both mathematical discourse and ordinary language.
Martin-Löf developed the theory of dependent types in the 1970s, and now Homotopy Type Theory has been developed on top of that to serve as a computation-friendly foundation for mathematics. Corfield argues that such type theories offer new hope for the possibility of formalizing the semantic structure of natural language.
Of course, this hasn't been accomplished yet, but it's exciting to think that Leibniz's dream may be realized in our century.
I disagree with the idea that one doesn't have intuitions about generalization if one hasn't studied mathematics. One thing that I find so interesting about CT is that it is so general it applies as much to everyday common sense concepts as it does to mathematical ones. David Spivak's ontology logs are a great illustration of this.
I do agree that there isn't a really good beginners book that covers category theory in a general way. But there are some amazing YouTube lectures. I got started on CT with this series, Category Theory for Beginners. The videos are quite long, but the lecturer does an amazing job explaining all the difficult concepts with lots of great visual diagrams. What is great about this series is that despite the "beginners" in the title he actually covers many more advanced topics such as adjunction, Yoneda's lemma, and topos theory in a way that doesn't presuppose prior mathematical knowledge.
In terms of books, Conceptual Mathematics really helped me with the basics of sets and functions, although it doesn't get into the more abstract stuff very much. Finally, Category Theory for Programmers is quite accessible if you have any background in computer programming.
It seems odd to equate rationality with probabilistic reasoning. Philosophers have always distinguished between demonstrative (i.e., mathematical) reasoning and probabilistic (i.e., empirical) reasoning. To say that rationality is constituted only by the latter form of reasoning is very odd, especially considering that it is only through demonstrative knowledge that we can even formulate such things as Bayesian mathematics.
Category theory is a meta-theory of demonstrative knowledge. It helps us understand how concepts relate to each other in a rigorous way. This helps with the theory side of science rather than the observation side of science (although applied category theorists are working to build unified formalisms for experiments-as-events and theories).
I think it is accurate to say that, outside of computer science, applied category theory is a very young field (maybe 10-20 years old). It is not surprising that there haven't been major breakthroughs yet. Historically fruitful applications of discoveries in pure math often take decades or even centuries to develop. The wave equation was discovered in the 1750s in a pure math context, but it wasn't until the 1860s that Maxwell used it to develop a theory of electromagnetism. Of course, this is not in itself an argument that CT will produce applied breakthroughs. However, we can draw a kind of meta-historical generalization that mathematical theories which are central/profound to pure mathematicians often turn out to be useful in describing the world (Ian Stewart sketches this argument in his Concepts of Modern Mathematics pp 6-7).
CT is one of the key ideas in 20th century algebra/topology/logic which has allowed huge innovation in modern mathematics. What I find interesting in particular about CT is how it allows problems to be translated between universes of discourse. I think a lot of its promise in science may be in a similar vein. Imagine if scientists across different scientific disciplines had a way to use the theoretical insights of other disciplines to attack their problems. We already see this when say economists borrow equations from physics, but CT could enable a more systematic sharing of theoretical apparatus across scientific domains.
I am not a mathematician but I've been studying category theory for about a year now. From what I've learned so far, it seems that its main benefit within pure mathematics is that it gives a way of translating between different domains of mathematical discourse. On the face of it, even if you've provided a common set-theoretic foundation for all areas of math, it isn't obvious how higher-level constructions in, say, geometry can be translated into the language of algebra or topology, or vice versa. So category theory was invented to facilitate this process of sharing mathematical insights across mathematical sub-disciplines. (I think specifically the context in which it originated was algebraic topology, which as the name implies uses techniques from abstract algebra to study topology.)
Later, computer scientists realized that category theory was useful for thinking about the structure of programs (e.g., data types and functions). For example, the concept of a monad in functional programming, which allows the simulation of side effects in a pure functional programming language, comes directly from category theory. Bartosz Milewski is the person to look to if you are interested in learning about this aspect of things.
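A small illustration of that point in Haskell (standard library code, nothing specific to Milewski's presentation): the Maybe monad threads the "side effect" of possible failure through otherwise pure functions.

```haskell
-- The Maybe monad: possible failure threaded through pure code,
-- simulating the "side effect" of an exception without leaving pure functions.

safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

-- do-notation desugars to the monad's bind (>>=);
-- a Nothing anywhere short-circuits the rest of the computation.
scaledAverage :: Double -> Double -> Double -> Maybe Double
scaledAverage total count scale = do
  mean <- safeDiv total count
  safeDiv mean scale

main :: IO ()
main = print (scaledAverage 10 2 0, scaledAverage 10 2 1)
-- prints (Nothing,Just 5.0)
```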
Even more recently (the last 10 years or so) people have started applying category theory to science more generally. Two books by David Spivak explore this here and here. I think much of this work in applied category theory is too recent to expect to see much in the way of big practical discoveries or breakthroughs. It remains to be seen if it will produce major innovations, but I think it is very promising. The hope is that category theory will provide scientists a way to model more of the structure of both their research domain and the research process itself in a unified formalism. It also shows promise for modelling natural language concepts and argumentation, which could lead to better methods of computer knowledge representation.
On a more philosophical level, some have argued that category theory provides support for structuralism in the philosophy of mathematics. This view argues that mathematical entities are essentially structures, which is to say patterns of relationship. In category theory, what an object is is entirely determined by the pattern of relationships (morphisms) with other objects, within a given context (category). This contrasts with set theory, where sets are described in terms of their internal structure of elements and subsets. In practice, this means that set theory starts from the bottom (the empty set) and builds up to the whole mathematical universe, while category theory starts from the top (the category of categories) and then defines everything else in terms of universal properties.
Essentially, category theory validates the intuition that the number 5 isn't some specific object floating out in Platonic heaven, nor is it just a made-up meaningless symbol. It is a structure that is defined by its properties, and those properties are all determined by its relations to everything else. Without actually studying category theory it is difficult to see how this idea could be cashed out in a rigorous, non-hand-wavy way.
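One concrete, if modest, way to see "determined by relationships" without the full machinery (my own Haskell sketch, using the category of types and functions): the pair type is pinned down, up to isomorphism, by its universal property alone.

```haskell
-- In the category of Haskell types and functions, the pair type (a, b) is
-- characterised (up to isomorphism) by its universal property alone:
-- for any type c with maps into a and into b, there is exactly one map
-- into (a, b) compatible with the projections fst and snd.

pairing :: (c -> a) -> (c -> b) -> (c -> (a, b))
pairing f g x = (f x, g x)

-- Compatibility: fst . pairing f g == f  and  snd . pairing f g == g.
-- Any other type satisfying the same property is isomorphic to (a, b);
-- that is the sense in which the relationships pin the object down.

main :: IO ()
main = print (pairing (+ 1) show (41 :: Int))
-- prints (42,"41")
```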
David Spivak offers an account of Categories as database schemas with path equivalencies that is similar to the account you've given here in his book Category Theory for the Sciences. He still presents the traditional definitions, giving examples mainly from the category of sets and functions. I also didn't find his presentation of database schema definition especially easy to understand, but it is very useful when you realize that a functor is a systematic migration of data between schemas.
Thanks for your comment. My replies are below.
"so Gisin's musings... are guaranteed to be not a step in any progress of the understanding of physics."
What is your epistemic justification for asserting such a guarantee of failure? Of course, any new speculative idea in theoretical physics is far from likely to be adopted as part of the core theory, but you are making a much stronger claim by saying that it will not even be "a step in any progress of the understanding of physics". Even ideas that are eventually rejected as false, are often useful for developing understanding. Gisin's papers ask physicists to consider their unexamined assumptions about the nature of math itself, which seems at least like a fruitful path of inquiry, even if it won't necessarily lead to any major breakthroughs.
"mathematical proofs are as much observations as anything else. Just because they happen in one's head or with a pencil on paper, they are still observations."
This reminds me of John Locke's view that mathematical truths come from observation of internal states. That is an interesting perspective, but I'm not sure it can hold up to scrutiny. The biggest issue with it seems to be that in order to evaluate the evidence provided by empirical observations we must have a rational framework which includes logic and math. If logic and math themselves were simply observational, then we have no framework for evaluating the evidence provided by those observations. Perhaps you can give an alternative account of how we evaluate evidence without pre-supposing a rational framework.
"The difficulty of calculating a far-away digit in the decimal expansion of pi has nothing to do with pi itself: you can perfectly well define it as the ratio of circumference to diameter, or as a limit of some series"
I agree with this statement. I think though it misses the point I was elaborating about Brouwer's concept of choice sequences. The issue isn't that we can't define a sequence that is equivalent to the infinite expansion of pi; it is rather that for any real quantity we can never be certain that it will continue to obey the lawlike expansion into the future. So the issue isn't the "difficulty of calculating a far-away digit"; the issue is that no matter how many digits we observe following the lawlike pattern, the future digits may still deviate from that pattern. No matter how many digits of pi a real number contains, the next digit might suddenly be something other than pi's (in which case we would say retrospectively that the real number was never equal to pi in the first place). This is actually what we observe if we, say, measure the ratio of a jar lid's circumference to its diameter. The first few digits will match pi, but then as we go to smaller scales it will deviate.
"...the idea that Einstein's equations are somehow unique in terms of being timeless is utterly false"
I made no claim that they are unique in this regard.
I agree that the term mindfulness can be vague and that it is a recent construction of Western culture. However, that doesn't mean it lacks any content or that we can't make accurate generalizations about it.
To be precise, when I say "mindfulness meditation" I have in mind a family of meditation techniques adapted from Theravada and Zen Buddhism for secular Western audiences, originally by Jon Kabat-Zinn. These techniques attempt to train the mind to adopt a focused, non-judgemental, observational stance. Such a stance is very useful for many purposes, but taken to an extreme it can result in depersonalization/derealization and other mental health problems.
For research to support this claim I recommend checking out Willoughby Britton's research. Here are two PDF journal articles on this topic: one, and another one.
I agree about mindfulness meditation. It is presented as a one-size-fits-all solution, but actually mindfulness meditation is just a knob that emphasizes certain neural pathways at the expense of others. In general, as you say, I've found that mindfulness de-emphasizes agential and narrative modes of understanding. Tulpa work, spirit summoning, shamanism, etc. all move the brain in the opposite direction, strongly activating the narrative/agential/relational faculties. I experienced a traumatic dissociative state after too much vipassana meditation on retreat, and I found that working with imaginal entities really helped bring my system back into balance.
I have often thought that the greatest problem with the tulpa discourse is the tendency there to insist on the tulpa's sharp boundaries and literal agenthood. I find it's much more helpful to think of such things in terms of a broader class of imaginal entities which are semi-agential and which often have fuzzy boundaries. The concept of a "spirit" in Western magick is a lot more flexible and in many ways more helpful. Of course, this can be taken in an overly literal or implausibly supernaturalistic direction, but if we guard against such interpretations, the idea of spirits as agentized meanings is very helpful.
How is this practically useful? For me it comes down to leveraging the huge part of the brain which works in terms of agency and narrative. Learning how to work with imaginal entities opens up a vast amount of general processing power that would otherwise be domain specific.
Of course, all the warnings people have given about mental health and possible psychosis and dissociation are genuinely worrisome. So embarking on these sorts of practices should be undertaken with quite a lot of care.
Thanks to the comments and discussion, I was motivated to do more research into my own question. What I've found is that there have been some attempts to use semantic technologies for personal knowledge management (PKM).
I have not found evidence one way or the other as to whether these tools have been helpful for knowledge discovery, but they seem promising.
The main tool that would be accessible to the average user is Semantic MediaWiki, an extension to the popular MediaWiki software (which powers Wikipedia) that adds KR functionality based on semantic web technologies.
Here is an article about how to set this up for PKM.
PDF Journal article Semantic Wikis for Personal Knowledge Management
- This article does a good job outlining a general theory of how to build a semantic knowledge application for PKM. The arguments are not tied to a specific software implementation.
PDF Journal article Learning with Semantic Wikis
- I haven't read this article yet, but from the abstract it sounds generally useful.
Interesting, can you give some examples to illustrate how causal/Bayes nets are used to aid reasoning / discovery?
I see merit in the idea that semantic networks may focus too much on the structure of language, and not enough on the structure of the underlying domain being modelled. As active thinkers, we are looking to build an understanding of the domain, not an understanding of how we talk about that domain.
Attention to language use, such as avoiding ambiguity, can sometimes be useful, especially in more abstract argumentation, but more important is being able to track all of the relationships among the domain-specific entities and to organize lines of evidence.