Against Modal Logics

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-27T22:13:46.000Z · LW · GW · Legacy · 62 comments

Continuation of Grasping Slippery Things
Followup to Possibility and Could-ness, Three Fallacies of Teleology

When I try to hit a reduction problem, what usually happens is that I "bounce" - that's what I call it.  There's an almost tangible feel to the failure, once you abstract and generalize and recognize it.  Looking back, it seems that I managed to say most of what I had in mind for today's post, in "Grasping Slippery Things".  The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f."  Where realizable contains the full mystery of "possible" - but you've made it into a basic symbol, and added some other symbols: the illusion of formality.

There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray - so far astray that I simply can't make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.

The proliferation of modal logics in philosophy is a good illustration of one major reason:  Modern philosophy doesn't enforce reductionism, or even strive for it.

Most philosophers, as one would expect from Sturgeon's Law, are not very good.  Which means that they're not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms.  Reductionism is, in modern times, an unusual talent.  Insights on the order of Pearl et al.'s reduction of causality or Julian Barbour's reduction of time are rare.

So what these philosophers do instead, is "bounce" off the problem into a new modal logic:  A logic with symbols that embody the mysterious, opaque, unopened black box.  A logic with primitives like "possible" or "necessary", to mark the places where the philosopher's brain makes an internal function call to cognitive algorithms as yet unknown.

And then they publish it and say, "Look at how precisely I have defined my language!"
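To make the complaint concrete, here is a minimal sketch (added here, not from the original post) of Kripke-style possible-worlds semantics written out as code. The formalism is perfectly precise, yet every substantive question is deferred to the accessibility relation, which the evaluator simply takes as given - the precisely-labeled black box with the mystery still inside:

```python
# A toy Kripke-style evaluator for the modal operators "possibly" and
# "necessarily". Note where the philosophical work hides: `accessible`
# is handed in as an unexplained primitive, exactly the kind of opaque
# internal function call the post describes.

def possibly(world, accessible, holds):
    """Diamond-P is true at `world` iff P holds at some accessible world."""
    return any(holds(w) for w in accessible[world])

def necessarily(world, accessible, holds):
    """Box-P is true at `world` iff P holds at every accessible world."""
    return all(holds(w) for w in accessible[world])

# Toy model: three worlds, an arbitrary accessibility relation,
# and a proposition P that holds in worlds 1 and 2.
accessible = {0: [1, 2], 1: [2], 2: []}
P = lambda w: w in (1, 2)

print(possibly(0, accessible, P))     # True: P holds at accessible world 1
print(necessarily(0, accessible, P))  # True: P holds at both 1 and 2
print(possibly(2, accessible, P))     # False: world 2 has no accessible worlds
```

Nothing in the evaluator says which worlds ought to count as accessible, or what a "world" is as a cognitive representation - the symbols are precise; the primitives remain unopened.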

In the Wittgensteinian era, philosophy has been about language - about trying to give precise meaning to terms.

The kind of work that I try to do is not about language.  It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

That's what I think post-Wittgensteinian philosophy should be about - cognitive science.

But this kind of reductionism is hard work.  Ideally, you're looking for insights on the order of Julian Barbour's Machianism, to reduce time to non-time; insights on the order of Judea Pearl's conditional independence, to give a mathematical structure to causality that isn't just finding a new way to say "because"; insights on the order of Bayesianism, to show that there is a unique structure to uncertainty expressed quantitatively.

Just to make it clear that I'm not claiming a magical and unique ability, I would name Gary Drescher's Good and Real as an example of a philosophical work that is commensurate with the kind of thinking I have to try to do.  Gary Drescher is an AI researcher turned philosopher, which may explain why he understands the art of asking, not What does this term mean?, but What cognitive algorithm, as seen from the inside, would generate this apparent mystery?

(I paused while reading the first chapter of G&R.  It was immediately apparent that Drescher was thinking along lines so close to myself, that I wanted to write up my own independent component before looking at his - I didn't want his way of phrasing things to take over my writing.  Now that I'm done with zombies and metaethics, G&R is next up on my reading list.)

Consider the popular philosophical notion of "possible worlds".  Have you ever seen a possible world?  Is an electron either "possible" or "necessary"?  Clearly, if you are talking about "possibility" and "necessity", you are talking about things that are not commensurate with electrons - which means that you're still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.

I have to make an AI out of electrons, in this one actual world.  I can't make the AI out of possibility-stuff, because I can't order a possible transistor.  If the AI ever thinks about possibility, it's not going to be because the AI noticed a possible world in its closet.  It's going to be because the non-ontologically-fundamental construct of "possibility" turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things.  Which is to say that algorithms which make use of a "possibility" label, applied at certain points, will turn out to capture an exploitable regularity of the one real world.  This is the kind of knowledge that Judea Pearl writes about.  This is the kind of knowledge that AI researchers need.  It is not the kind of knowledge that modern philosophy holds itself to the standard of having generated, before a philosopher gets credit for having written a paper.

Philosophers keep telling me that I should look at philosophy.  I have, every now and then.  But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.  The work that has been done - the products of these decades of modern debate - is, by and large, just not commensurate with the kind of analysis AI needs.  I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time - not that professional philosophers would be likely to regard me as an authority on whose life has been a waste of time.  But if there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it.

And:  Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong.  Philosophy doesn't resolve things, it compiles positions and arguments.  And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says:  Too slow!  It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct.  But philosophy, which hasn't come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn't seem very likely to build complex correct structures of conclusions.

Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence.  Parfit comes to mind; and I haven't read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there's Gary Drescher.  If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading.  But I don't know who, besides a few heroes, would be able to compile such a repository - who else would see a modal logic as an obvious bounce-off-the-mystery.

62 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by poke · 2008-08-27T23:44:50.000Z · LW(p) · GW(p)

It's true that contemporary philosophy is still very much obsessed with language despite attempts by practitioners to move on. Observation is talked about in terms of observation sentences. Science is taken to be a set of statements. Realism is taken to be the doctrine that there are objects to which our statements refer. Reductionism is the ability to translate a sentence in one field into a sentence in another. The philosophy of mind concerns itself with finding a way to reconcile the lack of sentence-like structures in our brain with a perverse desire for sentence-like structures. But cognitive science is itself a development of this odd way of thinking about the world; sentences become algorithms and everything carries on the same. I don't think you're really too far removed from this tradition.

Replies from: whowhowho
comment by whowhowho · 2013-01-31T19:00:03.545Z · LW(p) · GW(p)

Talking in terms of sentences is not reifying them; cognitive science still uses sentences, which are not insulated from interpretational problems.

comment by retired_urologist · 2008-08-28T00:00:00.000Z · LW(p) · GW(p)

@ EY: I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time

Well, your buddy Robin Hanson has proved mathematically that my life has been a waste of time in his Doctors kill series of posts. I accept the numbers. Screw the philosophers; now it's their turn. It's all chemical neurotransmitters. Next: the lawyers.

comment by Tyrrell_McAllister2 · 2008-08-28T00:01:47.000Z · LW(p) · GW(p)

You write that "Philosophy doesn't resolve things, it compiles positions and arguments". I think that philosophy should be granted as providing something somewhat more positive than this: It provides common vocabularies for arguments. This is no mean feat, as I think you would grant, but it is far short of resolving arguments which is what you need.

As you've observed, modal logics amount to arranging a bunch of black boxes in very precisely stipulated configurations, while giving no indication as to the actual contents of the black boxes. However, if you mean to accuse the philosophers of seeing no need to fill the black boxes, then I think you go too far. Rather, it is just an anthropological fact that the philosophers cannot agree on how to fill the black boxes, or even on what constitutes filling a box. The result is that they are unable to generate a consensus at the level of precision that you need. Nonetheless, they at least generate a consensus vocabulary for discussing various candidate refinements down to some level, even if none of them reach as deep a level as you need.

I don't mean to contradict your assertion that (even) analytic philosophy doesn't provide what you need. I mean rather to emphasize what the problem is: It isn't exactly that people fail to see the need for reductionistic explanations. Rather the problem is that no one seems capable of convincing anyone else that his or her candidate reduction should be accepted to the exclusion of all others. It may be that the only way for someone to win this kind of argument is to build an actual functioning AI. In fact, I'm inclined to think that this is the case. If so, then, in my irrelevant judgement, you are working with just about the right amount of disregard for whatever consensus results might exist with the analytic philosophical tradition.

comment by michael_webster2 · 2008-08-28T00:07:04.000Z · LW(p) · GW(p)

Alright, I am going to bite on this.

E writes: "The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn't enforce reductionism, or even strive for it."

The usual justification for skepticism about reductionism as a methodology had to do with the status of the bridge laws: those analytic devices which reduced A to B, whether A was a set of sentences, observations, etc. Like climbing the ladders in the Tractatus, they seemed to have no purpose, once used.

They weren't part of the reductive language, yet they were necessary for the reductive project.

Carnap was probably the last philosopher to try for a systematic reduction, and his attempts foundered on well-known problems, circa 1940.

E writes: "Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world? Is an electron either "possible" or "necessary"?"

Kripke's essay on possible worlds makes it clear that there is nothing mysterious about possible worlds, they are simply states of information. Nothing hard.

E writes: " If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading."

Professional philosophers are not scientists, but rather keep alive unfashionable arguments that scientists and technicians wrongly believe have been "solved", as opposed to ignored.

You are not suited for philosophical abstraction because you primarily want to build something. Get on with it, then, and stop talking about foundations - which may not exist. Just do it.

comment by RobinHanson · 2008-08-28T00:18:28.000Z · LW(p) · GW(p)

Well of course one standard response to such complaints is: "If you think you can do better, show us." Not just better in a one-off way, but a better tradition that could continue itself. If you think you have done better and are being unfairly ignored, well then that is a different conversation.

comment by J. · 2008-08-28T01:03:04.000Z · LW(p) · GW(p)

I read this blog for Hanson's posts, but unfortunately you are one of his co-bloggers. I wouldn't be surprised if you delete this or fail to post it, but whatever. Anyways, I occasionally read something you write, and I am struck by how dismissive you are of contemporary philosophy, usually treating it as a strawman or cartoon.

Can you please put your money where your mouth is and publish a philosophical paper in a good journal (such as Philosophical Review, Nous, Philosophy and Phenomenological Research, Journal of Philosophy, Ethics, Mind, or Phil Studies?) Lots of philosophers would love your approach. (I think you will fail to publish anything, not because the discipline is biased against you, but because you are at best a seventh rate thinker self-deceived into thinking he's a second rate thinker. I'm not saying that to be abusive, but, really, to be frank.)

Once you do this, I will begin taking you seriously. Until then, I consider you a very smart crank.

P.S., since you frequently write on topics other than your specialty (the singularity), such as moral realism, reductionism, etc., please make your publication one of these topics.

comment by Kenny · 2008-08-28T01:08:00.000Z · LW(p) · GW(p)

Are your feelings only confined to philosophy, modern or otherwise? I feel the same sense of 'modal logic' everywhere – art, politics, even technology – conversations, arguments, and discussions seem endlessly disconnected, related languages speaking past one another.

I think Tyrrell nails it – philosophy mainly provides common vocabularies. And I must agree with him – it is no mean feat.

I highly recommend the various works of Daniel Dennett – having read him before reading you, I feel prepared for exactly your favored type of argument – dissolving confusion by rejecting false dichotomies and rigorously separating layers.

The universe is endlessly amazing, and I feel blessed to be so curious. I think it's miraculous that philosophers are as good as they are!

comment by Randy_Ridenour · 2008-08-28T02:30:36.000Z · LW(p) · GW(p)

I confess that I'm confused. Why does the "proliferation" of modal logics imply that philosophers do not strive for reductionism? Why think that having several modal logics is a bad thing? These logics were developed originally as purely formal syntactic systems with different sets of axioms. In a sense, decrying the proliferation of modal logics is akin to decrying the proliferation of non-Euclidean geometries. There were modal logics long before philosophers ever spoke of possible worlds, which, unless you're one of the few convinced by David Lewis, philosophers take simply to be a useful heuristic when speaking of possibility and necessity. How can one talk about a purely causal model with some notion of necessity? That would be a purely causal model without any notion of causality. It strikes me that even the AI theorist would like to discuss causation, consistency of models, logical implication, maybe even moral obligation. These are all modal notions, but unfortunately, they're not logically equivalent. We shouldn't fall into a trap of being reductionists purely for the sake of the reduction.

comment by Chris_Hibbert · 2008-08-28T02:45:11.000Z · LW(p) · GW(p)

I agree on Pearl's accomplishment.

I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.

I'll have to look into Drescher.

comment by Nick_Tarleton · 2008-08-28T02:46:40.000Z · LW(p) · GW(p)
Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong. Philosophy doesn't resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says: Too slow!

Still, I hope your Friendliness structure can cope with the case where zombies are possible. Well, I guess that one wouldn't make any difference - so I should say I hope you're also trying to minimize the number of philosophical problems you have to be right about.

comment by Nominull3 · 2008-08-28T03:16:05.000Z · LW(p) · GW(p)

If you think you're so much better than philosophers, why don't you program an AI to write publishable philosophy papers, hmm?

comment by Micah_Glasser · 2008-08-28T03:26:32.000Z · LW(p) · GW(p)

I think you're an ass, Eliezer. But you are fun. I'm an ass too. Most philosophers are. You should consider that the analytic tradition is mostly an evolutionarily extinct avenue of philosophy. I do think that you are basically right, however. Cognitive science and AI are certainly among the most important aspects of metaphysics. But ultimately atomically modeling the entire cosmological evolution is the real goal of metaphysics. Everything else is just nerds entertaining themselves.

comment by gappy · 2008-08-28T03:53:31.000Z · LW(p) · GW(p)

Sometimes I enjoy these postings, sometimes I am puzzled. They often are so self-referential (links are mostly to older postings of the same author) and ranting that I wonder whether I am being had. I don't doubt anyone's good intentions. I am just documenting my belief that Eliezer's state is binary: either the next Wittgenstein or a world-class delusional crank.

comment by TGGP4 · 2008-08-28T03:54:34.000Z · LW(p) · GW(p)

I've made similar dismissals of philosophy's fruits at this blog and elsewhere. That was supposed to make me a nihilist, philistine psychopath. As I recall, Eliezer did not agree with my analogy to theology and astrology.

comment by Hopefully_Anonymous · 2008-08-28T03:59:07.000Z · LW(p) · GW(p)

What do you think of the philosophy faculty of MIT and Caltech? I ask because I suspect the faculty there selects for philosophers who would be most useful to hard scientists and engineers (and for hard science and engineering students).

http://www.mit.edu/~philos/faculty.html

http://www.hss.caltech.edu/humanities/faculty

comment by dreeves · 2008-08-28T04:01:56.000Z · LW(p) · GW(p)
  1. I'm curious to hear Nick Bostrom's response to this.

  2. Something like modal logic is needed to automate solutions to things like this: Blue-eyed Monks. Though you might be right about the proliferation of modal logics.

  3. You made some similar points here: Where Philosophy Meets Science. And Robin Hanson followed up here: On Philosophers.

  4. Both times it was pointed out that Paul Graham has some similar complaints about philosophy here: How to Do Philosophy

comment by Tim_Tyler · 2008-08-28T07:44:26.000Z · LW(p) · GW(p)

Daniel Dennett is smart and usually right - but I find his writing style pretty yawn-inducing. I'm not very impressed by his detour into religion, either. Rather like Dawkins, it seems like he's been dragged down into the gutter by the creationists.

comment by Zubon · 2008-08-28T12:44:44.000Z · LW(p) · GW(p)

Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong. Philosophy doesn't resolve things, it compiles positions and arguments.

This would be why I never finished that philosophy degree. Academic philosophy does not seem particularly interested in solving the world's problems. Tyrrell McAllister has a good point on the value of providing a way of discussing things, but if there is not even in principle a way of deciding what would constitute filling the black box, the discipline will keep juggling the boxes.

There must be some merit in games of language and logic, but they remain that: games. Sudoku and World of Warcraft are similarly structured games, and you could argue seriously about whether an issue of Games Magazine improves the world more or less than any scholarly journal J. mentioned.

That said, starting with Sturgeon's Law, we already knew the majority was waste paper. What is your probability that the good 10% is not worth the search cost to find it?

As a meta-Overcoming Bias comment, I think this post is necessary for Eliezer. When he discusses philosophical issues, there are a half-dozen of us who cite a hundred-year history of work on, for example, meta-ethics. I must interpret this post as a case for rational ignorance, "I am not going to read all that because it is obviously waste paper," as opposed to "I am familiar with that but I have rejected it" (or the latter with very small values of "familiar"). So this is one of those one-link responses.

We can meditate on whether it resolves the issue rather than giving a feeling of resolution. With respect to philosophy, I often find surprisingly little progress since Hume (on questions of interest to me). When an OB post arrives at a standard argument, maybe via a different door, I expect it to be able to engage standard critiques. "All standard critiques are meaningless black box juggling exercises until proven otherwise" is perhaps a viable heuristic, but it feels convenient.

This also feels a bit like the "outside view" Eliezer criticizes Robin for using to make predictions.

comment by Robin_Z · 2008-08-28T13:29:14.000Z · LW(p) · GW(p)

You're right that he should be able to engage standard critiques, Zubon, but if my (negligible) experience with the philosophy of free will is any indication, many "standard critiques" are merely exercises in wooly thinking. It's reasonable for him to step back and say, "I don't have time to deal with this sort of thing."

comment by Vladimir_Nesov · 2008-08-28T14:02:48.000Z · LW(p) · GW(p)

Zubon: This also feels a bit like the "outside view" Eliezer criticizes Robin for using to make predictions.

The problem is not in using an outside view, but in using an outside view that doesn't really apply to what it's being applied to - in trying to infer properties from surface similarities that don't indicate that the objects have similar causal structure. If you are studying a single object, statistics at an arbitrarily surface level provides valid ground for predictions, so long as this single object doesn't change its causal structure while under study.

comment by Caledonian2 · 2008-08-28T17:00:50.000Z · LW(p) · GW(p)

Well of course one standard response to such complaints is: "If you think you can do better, show us." Not just better in a one-off way, but a better tradition that could continue itself.
Can do? It's already been done, long ago - we call it 'science'.

Do not confuse technicians and stylists with those that apply the scientific method. Among those that do, some of the greatest of them made greater 'philosophical' progress while working and writing on matters only tangentially related to their nominal fields than countless generations of so-called philosophers who supposedly dedicated themselves to the issues.

Even an amateur scientist can quickly develop working resolutions to questions that philosophy has held up as eternal.

By this point, even an extraordinarily-unobservant thinker should have realized that philosophy isn't about finding the answer to questions - it's about posturing as profound while mouthing questions, then talking with others to mutually demonstrate the intellectual importance of the topic and thus those that discuss it. It's a form of status-masturbation.

comment by J. · 2008-08-28T19:00:41.000Z · LW(p) · GW(p)

Caledonian, aside from the continental school, could you please give some examples of people trying to posture to be profound? In philosophy graduate programs today, you are explicitly told not to posture.

Also, could you give an example of a philosophical problem that science has solved. E.g., What makes right actions right? What makes a society just? What makes mathematical claims true?

Your polemic is embarrassing. Your post was a form of masturbation.

comment by asdf · 2008-08-28T19:32:16.000Z · LW(p) · GW(p)

J.: Zeno's Paradox was solved by mathematicians (honorable members of the scientific community even if you think mathematics is not part of science).

comment by Michael_Drake · 2008-08-29T00:25:48.000Z · LW(p) · GW(p)

"it feels like I'm telling philosophers that their life's work has been a waste of time."

If my immediate interest is to trigger a subject's saliva reflex, it would be a much better use of my time to vividly describe to the subject the sensations of biting into a lemon than it would to inquire after the algorithms that give rise to lemony sensations.

I am reductionist, but I can't quite imagine an intellectual life that abstracted away all conscious interest in phenomenological structure in favor of monomaniacal attention to the base structure. Then again, there's no accounting for taste. (Or is there?)

comment by Anna2 · 2008-08-29T02:53:40.000Z · LW(p) · GW(p)

Funny... Have you ever fallen in love?

Albeit (I'll bite)

How do you determine all these metaphorical examples without having experimental proof to back them up? Just because you read does not give you the right to determine how people feel...

Just an opinion Anna

comment by Anna2 · 2008-08-29T03:40:39.000Z · LW(p) · GW(p)

And I thought you were intelligent... well I guess not... maybe mathematical... but you keep doing the same thing... but you will lose... sorry... I thought you learned... goodbye

comment by Brian_Scurfield · 2008-08-29T04:31:49.000Z · LW(p) · GW(p)

There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray - so far astray that I simply can't make use of their years and years of dedicated work.

Yes, much modern philosophy has gone astray. But some hasn't. I would cite, for example, the thinking of critical rationalists such as Karl Popper, William Warren Bartley, David Deutsch, and David Miller.

Moreover I maintain that critical rationalism ought to be of use to you. First, it contains cogent criticism of inductivism and crypto-inductivism and one who understands these criticisms should see why Bayescraft is sterile. This knowledge is not only useful, it can't be ignored. Second, critical rationalism, and not Bayescraft, is our best current theory of knowledge and how we come to know things. Best theories are useful not only in themselves but also for the problems they contain.

Replies from: sark
comment by sark · 2011-02-02T22:13:00.023Z · LW(p) · GW(p)

What did you think of that part of EY's bayes intro where he reduces Falsificationism to a special case of Bayesianism?

comment by Elliot_Temple · 2008-08-29T06:23:20.000Z · LW(p) · GW(p)

There is a tradition of philosophy with value.

Many famous and modern philosophers are distractions from this. The same was true in the past. Each generation, most philosophers did not carry on the important, mainstream (in hindsight) tradition.

If you can't tell which is which, to me that suggests you could learn something by studying philosophy. Once you do understand what's what, then you can read exclusively good philosophy. For example, you'd know to ignore Wittgenstein, as the future will do. But the worthlessness of some philosophers does not stop people like William Godwin or Xenophanes from having valuable things to say (and the more recent philosophers who are carrying on their tradition).

comment by Tim_Tyler · 2008-08-29T07:29:31.000Z · LW(p) · GW(p)

Re: It contains cogent criticism of inductivism and crypto-inductivism and one who understands these criticisms should see why Bayescraft is sterile.

Uh, surely that's not the correct moral. It's like arguing that physics is sterile because of solipsism.

comment by Brian_Scurfield2 · 2008-08-29T09:21:13.000Z · LW(p) · GW(p)

Tim, you wrote here that:

A perfectly rational agent who denies the validity of induction would be totally unimpressed by Bayesian arguments.

Have you changed your mind? Do you now deny that Bayescraft relies on induction?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-29T13:11:07.000Z · LW(p) · GW(p)
Kripke's essay on possible worlds makes it clear that there is nothing mysterious about possible worlds, they are simply states of information. Nothing hard.

Good for Kripke, then. I've often found that the major people in a field really do deserve their reputations, and I haven't asserted that good philosophy is impossible, just that the field has failed to systematize it enough to make it worthwhile reading.

However, you do not solve an AI problem by calling something a "state of information". Given that there's only one real world, how are these "possible worlds" formulated as cognitive representations? I can't write an AI until I know this.

But can you give me an immediate and agreed-upon answer to the question, "Is there a possible world where zombies exist?"  Considering the questions that follow from that will make you realize how little of the structure of the "possible worlds" concept follows just from saying, "it is a state of information".

Did Kripke mark his work as unfinished for failing to answer such questions? Or did he actually try to answer them? Now that would earn serious respect from me, and I might go out and start looking through Kripke's stuff.

Robin: Well of course one standard response to such complaints is: "If you think you can do better, show us." Not just better in a one-off way, but a better tradition that could continue itself. If you think you have done better and are being unfairly ignored, well then that is a different conversation.

Robin, my response here is mainly to philosophers who say, "We did all this work on metaethics, why are you ignoring us?" and my answer is: "The work you did is incommensurable with even the kind of philosophy that an AI researcher needs, which is cognitive philosophy and the reduction of mentalistic thinking to the non-mental; go read Gary Drescher for an example of the kind of mental labor I'm talking about. Some of you may have done such work, but that's no help to me if I have to wade through all of philosophy to find it. Even your compilations of arguments are little help to me in actually solving AI problems, though when I need to explain something I will often check the Stanford Encyclopedia of Philosophy to see what the standard arguments are. And I finally observe that if you, as a philosopher, have not gone out and studied cognitive science and AI, then you really have no right to complain about people 'ignoring relevant research', and more importantly, you have no idea what I'm looking for." This is my response to the philosophers who feel slighted by my travels through what they feel should be their territory, without much acknowledgment.

However, with all that said - if I was trying to build a tradition that would continue itself, these posts on Overcoming Bias would form a large part of how I did it, though I would be much more interested in making them sound more impressive (which includes formalizing/declarifying their contents and publishing them in journals) and I would assign a higher priority to e.g. writing up my timeless decision theory.

comment by Caledonian2 · 2008-08-29T16:16:14.000Z · LW(p) · GW(p)
Also, could you give an example of a philosophical problem that science has solved?

Considering that science developed out of a style of philosophical thought called 'natural philosophy', every question science has addressed has been a philosophical one.

The real problem is that when actual progress is made on a 'philosophical' question, we associate it with the branch of science that made the progress. Turing and Gödel were mathematicians, Schrödinger was a physicist (and one of his most impressive insights was at the intersection of biology and information theory), Fermi a physicist, etc.

The only things that remain in the category of philosophy are those that are utterly useless and fail to expand our understanding of any aspect of the world. It's a simple selection effect - the gold is sifted out while the dross remains.

Turing alone resolved more questions that were traditionally considered to be within the bounds of 'philosophy' as you refer to it than anyone I can think of offhand.

comment by Tim_Tyler · 2008-08-29T19:50:38.000Z · LW(p) · GW(p)

Re: Bayesianism and induction.

Bayesianism is a formalisation of induction. The philosophical problems with the foundations of inductive reasoning are equally problems with the foundations of Bayesianism. These problems are essentially unchanged since Hume's era:

Rather than unproductive radical skepticism about everything, Hume said that he was actually advocating a practical skepticism based on common sense, wherein the inevitability of induction is accepted. Someone who insists on reason for certainty might, for instance, starve to death, as they would not infer the benefits of food based on previous observations of nutrition. - http://en.wikipedia.org/wiki/Problem_of_induction
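Tim's claim that Bayesianism formalises induction can be made concrete. Here is a minimal sketch (mine, not from the comment) using Laplace's rule of succession, which is the textbook Bayesian treatment of Hume's sunrise problem: a uniform prior over the unknown frequency plus observed trials yields a definite, but never certain, prediction.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule of succession: with a uniform prior over the unknown
    # success probability, s successes in n trials give
    # P(next trial succeeds) = (s + 1) / (n + 2).
    return Fraction(successes + 1, trials + 2)

# Hume's example, Bayesianized: after observing 100 sunrises in 100 days,
# the probability assigned to tomorrow's sunrise is 101/102 -- high, but
# never 1, which is exactly the formal residue of the problem of induction.
print(rule_of_succession(100, 100))  # 101/102
```

The number the formula outputs depends entirely on the uniform prior fed in, which is the sense in which the foundational problem survives the formalisation.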

comment by JB3 · 2008-08-29T19:53:08.000Z · LW(p) · GW(p)

J said: I read this blog for Hanson's posts, but unfortunately you are one of his co-bloggers

AND

but because you are at best a seventh rate thinker self-deceived into thinking he's a second rate thinker.

don't you think that Robin must think EY is at least a second rate thinker, or else he wouldn't let himself be associated with such a lowly seventh rate thinker...

I completely understand if you don't think EY is a worthwhile guy to read, no prob there... but then why read Hanson also? If they are colleagues and co-bloggers, there must be something about EY that Robin thinks is first rate, no?

Replies from: metaphysicist
comment by metaphysicist · 2012-08-19T18:26:04.544Z · LW(p) · GW(p)

then why read Hanson also? If they are colleagues and co-bloggers, there must be something about EY that Robin thinks is first rate, no?

Not necessarily. Hanson might be a good thinker who is also a personal opportunist who'll do anything to enhance his status, where co-publishing with Yudkowsky helped put Hanson's blog on the map. Hanson could have "admired" Yudkowsky for his fan-club building capacities rather than for the high quality of his thinking.

comment by Brian_Scurfield2 · 2008-08-29T22:47:52.000Z · LW(p) · GW(p)

Tim,

Re: Bayesianism and induction.

Given your concession that Bayesianism is a formalisation of induction, I don't understand your original criticism that my saying inductivism renders Bayesianism sterile is like saying solipsism renders physics sterile.

Here's a definition from David Deutsch's "The Fabric of Reality":

Crypto-Inductivist: Someone who believes that the invalidity of inductive reasoning raises a serious philosophical problem, namely the problem of how to justify relying on scientific theories.

Crypto-inductivists have an "induction shaped" gap in their scheme of things.

Critical rationalism really did solve the problem of induction: It has no "induction shaped" gap.

I'm guessing from your Hume quote that you think it did so by resorting to radical skepticism, but if you think this you are mistaken.

comment by Aleandro · 2008-08-30T00:06:00.000Z · LW(p) · GW(p)

Eliezer, I recommend you read Dennett's "Artificial Intelligence as Psychology and as Philosophy", in his collection of essays Brainstorms. It may be a bit dated, but it makes a very nice case for a division of territory between AI, Psy and Phi, and how each of them can help the others.

Replies from: lukeprog
comment by lukeprog · 2011-03-21T21:39:19.975Z · LW(p) · GW(p)

You can download that chapter here.

comment by Michael_Rooney · 2008-08-30T01:24:32.000Z · LW(p) · GW(p)

Eliezer, I don't think your comments would slight sensible philosophers, since many professional philosophers themselves make comparable or more biting criticisms about the discipline (Rorty, Dennett, Unger, now the experimental philosophy movement, et al., going back to the positivists, and, if you like, the Pyrrhonists and atomists). I'm afraid not only have philosophers already written extensively on meta-ethics, but they've also generated an extensive literature on anti-philosophy. They've been there, done that -- too! I think Tyrell McAllister is quite right to say that since philosophy largely consists of folks who can't agree on the most workable models, your functional interests will tend to be frustrated by philosophy. Like your estimable hero Dick Feynman (who, according to Len Mlodinow, averred that "philosophy is bullshit"), it'd be better for you simply to get on with your tasks at hand, and not expect much help from philosophy -- to find the worthwhile stuff you'd have to become one. Maybe you can do that after the FAI builds you an immortal corporeal form.

comment by Tim_Tyler · 2008-08-30T10:48:52.000Z · LW(p) · GW(p)

Re: Critical rationalism

I do not rate Popper's contributions in this area very highly - e.g. see here.

Science without induction is a complete joke. Popper didn't eliminate induction, he just swept it under a consensual rug.

comment by Brian_Scurfield · 2008-09-01T05:08:04.000Z · LW(p) · GW(p)

Tim,

Re: Critical rationalism

Critical rationalism is similar to evolutionary adaptation (though there are some important differences). Do you think evolution depends on induction, or would you admit that there are knowledge-generation processes that do not require induction in any way, shape, or form?

comment by Tim_Tyler · 2008-09-01T07:10:28.000Z · LW(p) · GW(p)

Our knowledge of evolutionary theory depends on induction. Without induction, you can't establish the uniformity of nature. You have no grounds for believing that what happened yesterday is any guide to what may happen tomorrow. Without induction, science is totally screwed. Popper's epistemology was not science without induction:

Wesley C. Salmon critiques Popper's falsifiability by arguing that in using corroborated theories, induction is being used. Salmon stated, "Modus tollens without corroboration is empty; modus tollens with corroboration is induction."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-01T15:54:55.000Z · LW(p) · GW(p)

What on Earth is evolution, if not the keeping of DNA sequences that worked last time? It's less efficient than human induction and stupider, because it works only with DNA strings and is incapable of noticing simpler and more fundamental generalizations like physics equations. But of course it's a crude form of inductive optimization. What else would it be? There are no knowledge-generating processes without some equivalent of an inductive prior or an assumption of regularity. The maths establishing this often go under the name of No-Free-Lunch theorems.

Replies from: mlionson
comment by mlionson · 2010-02-24T04:27:47.598Z · LW(p) · GW(p)

Evolution does not increase a species' implicit knowledge of the niche by replicating genes. Mutation (evolution's conjectures) creates potential new knowledge of the niche. Selection decreases the "false" implicit conjectures of mutations and previous genetic models of the niche.

So induction does not increase the implicit knowledge of gene sequences.
Trial (mutation) and error (falsification) of implicit theories does. This is the process that the critical rationalist says happens but more efficiently with humans.

comment by Brian_Scurfield · 2008-09-02T00:29:15.000Z · LW(p) · GW(p)

"What on Earth is evolution, if not the keeping of DNA sequences that worked last time?

It's also replication and variation.

It's less efficient than human induction and stupider, because it works only with DNA strings and is incapable of noticing simpler and more fundamental generalizations like physics equations. But of course it's a crude form of inductive optimization. What else would it be?

That seems like an argument from "failure of imagination". Quite simply, evolution is trial and error.

There are no knowledge-generating processes without some equivalent of an inductive prior or an assumption of regularity.

This is just question begging, as I think you are aware. How did we come by the knowledge of induction? Did we induce it? Impossible! So, therefore, there must be at least one way to knowledge that doesn't involve induction.

This stuff is all old hat. Philosophers of the 20th century like Popper and Bartley realized that the whole induction quagmire is caused by people looking for justified sources of knowledge. They concluded that justificationism is a mistake and replaced it with critical rationalism. Now there are bad scholars who claim that critical rationalism sneaks induction in through the back door. But that is just bad scholarship.

It's a shame to still be wasting time on induction in the 21st century. Rather than rehashing old problems, shouldn't we be building on what the best of 20th century philosophy gave us?

The maths establishing this often go under the name of No-Free-Lunch theorems.

Were the assumptions of these theorems inductively justified?

comment by PhilGoetz · 2011-04-02T15:05:41.126Z · LW(p) · GW(p)

If a modal logic can hide a mystery inside a black box, and everything outside the black box behaves consistently, that would be an incredibly useful achievement. You would have isolated the mystery.

comment by PhilosophyFTW · 2011-07-07T05:15:05.974Z · LW(p) · GW(p)

This post demonstrates a deep misunderstanding of modal logics, and of the notions of possibility and necessity. One would expect that misunderstanding, given that Eli can't really get himself to read philosophy. For example:

"I have to make an AI out of electrons, in this one actual world. I can't make the AI out of possibility-stuff, because I can't order a possible transistor."

What? What kind of nonsense is this? No contemporary philosopher would ever say that you can make something out of "possibility stuff", whatever the hell that is supposed to be.

Or this:

"It's going to be because the non-ontologically-fundamental construct of "possibility" turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things."

Eli, everything that is actual is trivially possible, according to every single contemporary analytic philosopher. I have no idea what you mean by "fundamentally possible", but I doubt you mean anything useful by it. If x exists, then it's possible that x exists. If x is an actual object, then x is a possible object. If you want, you can treat those claims as axioms. What's your beef with them? Surely you don't think, absurdly, that if x actually exists then it's not possible that x exists?

One also has to wonder what your beef with meaning is. I mean, surely you mean something and mean to communicate something when you string lots of letters together. Is there nothing you mean by "reductionism"? If you don't mean anything by using that linguistic term, then nobody should pay attention to you.

Replies from: nshepperd
comment by nshepperd · 2011-07-07T07:31:37.973Z · LW(p) · GW(p)

Eli, everything that is actual is trivially possible, according to every single contemporary analytic philosopher. I have no idea what you mean by "fundamentally possible", but I doubt you mean anything useful by it. If x exists, then it's possible that x exists. If x is an actual object, then x is a possible object. If you want, you can treat those claims as axioms. What's your beef with them? Surely you don't think, absurdly, that if x actually exists then it's not possible that x exists?

Allow me to attempt to translate (BTW, that a claim is so absurd is evidence it is not being made. Just sayin'.):

EY is not saying that some actual things are not possible. He is saying that things that are not actual, yet "possible", are exactly the same, as far as the universe is concerned, as things that are not actual and not "possible". Specifically, they are all nonexistent. Hence possibility is not fundamental in any ontological sense.

The general gist of the whole post is complaining that for all their precise logic, the people who invented modal logic have still not understood possibility and necessity. They formalized the intuitions about how possibility and necessity work, but didn't solve what they actually are (which is: labels applied by a decision-making algorithm).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-10-22T13:20:08.352Z · LW(p) · GW(p)

He is saying that things that are not actual, yet "possible", are exactly the same, as far as the universe is concerned, as things that are not actual and not "possible". Specifically, they are all nonexistent. Hence possibility is not fundamental in any ontological sense.

But the laws of the universe demarcate possible things from impossible things: so can you dismiss the reality of possibilities without dismissing the reality of laws?

comment by Ronny Fernandez (ronny-fernandez) · 2011-09-17T17:29:46.807Z · LW(p) · GW(p)

Modal logic doesn't tell you if some sentence is possible or necessary; it tells you what sentences must have what modal values, given some other sentences with prespecified modal values. Likewise, Kolmogorov's axioms don't tell you that the probability of a die landing on any given face is 1/6, or that it can't land on two faces at once; they only tell you that, given those assumptions, the probability of the die landing on an even value is 1/2.

Kolmogorov and Bayes seem to me to be guilty of the same sort of bouncing, but I think Bayes and Kolmogorov are clearly useful tools for the study of rationality. Modal logic does not define possibility, and it certainly does not reduce the notion of modality to anything, but it does constrain the assigning of modal values to fields of sentences. Any philosopher that argued otherwise is prolly a noob.
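The die example above can be made concrete. A minimal sketch (mine, not from the comment) of the point that Kolmogorov's axioms only propagate prespecified values, rather than supplying them:

```python
from fractions import Fraction

# Prespecified inputs -- the axioms themselves do not supply these:
# six mutually exclusive faces, each assigned probability 1/6.
faces = {f: Fraction(1, 6) for f in range(1, 7)}

# What the axioms *do* give: finite additivity over disjoint outcomes,
# so P(even) is forced to be the sum of the even-face probabilities.
p_even = sum(p for f, p in faces.items() if f % 2 == 0)
print(p_even)  # 1/2

# And normalization: the assigned values must sum to 1.
print(sum(faces.values()))  # 1
```

Change the input assignment (a loaded die, say) and the axioms will happily propagate those numbers instead; nothing in the formalism tells you which assignment describes the world.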

But, in general I agree with you. I am a philosopher, or at least that's my major, and I agree that it is only by extraordinary competence that philosophers ever produce useful reductions; that's something I hope to change by going into the field. And btw, I plan on using your work all the time to help me make that happen. So would it bother you, or seem strange, if I called you a philosopher, Eliezer? Cause I honestly say that you're one of my favorite philosophers, if not my favorite, often enough, and I would find it funny if my favorite philosopher didn't even consider himself a philosopher at all, and wasn't all that intimate with the literature. It's a fact I'd like to know for personal amusement.

comment by Ronny Fernandez (ronny-fernandez) · 2011-09-18T19:01:58.034Z · LW(p) · GW(p)

Philosophers are scientists, they're just really bad scientists for the most part. This is due to the fact that they draw their power from the couple thousand years of moderately interesting mistakes that we call "the history of western philosophy". What makes philosophers different from any other group of scientists, is simply the targets of inquiry they specialize (or try to specialize) in. The same thing that makes a biologist different from a physicist. Some philosophers have done well, but they had to invent too much of the art for themselves; not enough of their power came from the cumulative learning of their predecessors being passed verbally. Often the scriptures have done more to lead new students astray, than to lead them to victory. This sort of staggeringly slow progress, taking thousands of years, and rarely ever leading to professional consensus, can be starkly contrasted with the rapid progress of the rather young science of biology.

We are all Bayesian here, right? Let's cut to the chase. Either philosophers will find predictive hypothesis spaces that make empirically testable predictions and manage to update their belief values for those hypotheses with Bayesian evidence, or the field of philosophy is, and always was, as doomed as the field of astrology. Some philosophers do of course do this sometimes, since some philosophers are sometimes right.

The problem philosophy faces is that it hasn't been able to reliably teach its students how to do the bayes dance in philosophy, the way biology has been able to teach its students to do the bayes dance in biology. What I suggest we philosophers do is take a good long look at top-notch biology (or physics, or psychology, or mathematics, or computer science, or astronomy, or geology, or economics, or any other science progressing faster than wax melts) training and philosophy training, and figure out what's going on in the biology training community that isn't going on in the philosophy training community. Then we try to bridge the gap.

Philosophy is hard, but so is supersymmetry, and for much the same reasons. If the bayes dance can handle the rest of science, I get the feeling it shouldn't get stumped here. There are solvable problems of philosophy, they are just really hard, and really hard scientific problems require really good science to get solved; not moderate science, or good enough science — really good science. It is no wonder that philosophy has steadily progressed at the pace of a snail for the last 2000 years; its students have been given Plato in the absence of Bayes.

comment by Tuukka_Virtaperko · 2012-01-16T18:16:00.197Z · LW(p) · GW(p)

I wrote a bunch of comments to this work while discussing with Risto_Saarelma. But I thought I should rather post them here. I came here to discuss certain theories that are on the border between philosophy and something which could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very "old" stuff. The formation of that tradition of discussion began in 1974. So that's my background.

The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

What would I answer to the question of whether my work is about language? I'd say it's both about language and algorithms, but it's not some Chomsky-style stuff. It does account for the symbol grounding problem in a way that is not typically expected of language theory. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent manner. So how are people going to reduce something undefined to purely causal models? Well, that doesn't sound very possible, so I'd say the goals of RP are relevant.

But this kind of reductionism is hard work.

I would imagine mainstream philosophy to be hard work, too. This work, unfortunately, would, to a great extent, consist of making correct references to highly illegible works.

Modern philosophy doesn't enforce reductionism, or even strive for it.

Well... I wouldn't say RP enforces reductionism or that it doesn't enforce reductionism. It kinda ruins RP if you develop a metatheory where theories are classified either as reductionist or nonreductionist. You can do that - it's not a logical contradiction - but the point of RP is to be such a theory that, even though we could construct such metatheoretic approaches to it, we don't want to do so, because it's not only useless, but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but are trying to devise some very grand philosophy, though I'm not sure what that could be used for. My intention is that things like "reductionism" are placed within RP instead of placing RP into a box labeled "reductionism".

RP is supposed to define things recursively. That is not, to my knowledge, impossible. So I'm not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I'm not sure what Eliezer means by "reductive". It seems like yet another philosophical concept. I'd better check if it's defined somewhere on LW...

And then they publish it and say, "Look at how precisely I have defined my language!"

I'm not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to find out things I would not have otherwise noticed. That's why I want to understand the formal definitions myself despite sometimes having other people practically do them for me.

Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world?

I think that's pretty cogent criticism. I've found the same kind of things troublesome.

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.

I understand how Eliezer feels. I guess I don't even tell people they need to look at philosophy for its own sake. How should I know what someone else wants to do for its own sake? But it's not so simple with RP, because it could actually work for something. The good philosophy is simply hard to find, and if I hadn't studied the MOQ, I might very well now be laughing at Langan's CTMU with many others, because I wouldn't understand what that thing is he is a bit awkwardly trying to express.

I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?

  • Ten pages?
  • Hundred pages?
  • Thousand pages?
  • Does it contain no formulae or few formulae?
  • Does it contain a lot of formulae?

I've read academic publications to the point that I don't believe there is any work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don't believe many scholars think there really can be such a thing. They are interested in "refining" the debate somehow. They don't treat it as some matter that needs to be solved because it actually means something.

This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-01-17T08:12:58.273Z · LW(p) · GW(p)

I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?

Ten pages? Hundred pages? Thousand pages? Does it contain no formulae or few formulae? Does it contain a lot of formulae?

I'll go with 61 pages and quite a few formulae.

comment by buybuydandavis · 2012-08-16T09:04:33.666Z · LW(p) · GW(p)

Jaynes quoted a colleague: “Philosophers are free to do whatever they please, because they don’t have to do anything right.”

Philosophers lack the feedback loop from reality that an engineer trying to build a mind has. Most of the heated philosophical squawking about minds will be rendered irrelevant once we start building them.

One of the reasons Dennett usually makes sense is he tries to know the science involved.

Just the other day I was watching Dennett: http://www.youtube.com/watch?v=2hBQCBpyu74&feature=g-hist

At around 6:00, he's saying how he sees the job of philosophers as matching up the manifest image of the world with the scientific image of the world. I think that kind of philosophy will always be needed.

comment by whowhowho · 2013-02-01T00:03:36.224Z · LW(p) · GW(p)

Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et. al.'s reduction of causality or Julian Barbour's reduction of time are rare.

Ye-e-e-s. But it is not at all clear whether Barbour's reduction works. (See Fay Dowker's criticisms in the appendices, for instance.) It's not a reduction in the sense that "heat is molecular motion" is a universally accepted, successful reduction.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-24T11:09:50.215Z · LW(p) · GW(p)

Asking "is this reductive" and nothing else is not a good way to do philosophy.

comment by whowhowho · 2013-02-01T00:13:45.888Z · LW(p) · GW(p)

The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f." Where realizable contains the full mystery of "possible" - but you've made it into a basic symbol, and added some other symbols: the illusion of formality.

Can you keep on "reducing" -- unpacking the meanings of terms -- without hitting bedrock? Is there anyone who doesn't know what "can" and "could" mean? Can you not co-define a set of words in terms of each other, coherentistically, without prejudice as to what is fundamental?

comment by non-expert · 2013-02-04T17:36:20.983Z · LW(p) · GW(p)

What is the basis for the position that knowledge of the world must come from analytical/probabilistic models? I'm not questioning the "correctness" of your view, only wondering about your basis for it. It seems awfully convenient that a type of model that yields conclusions is in fact the correct one -- put another way, why is the availability of a clear methodology that gives you answers indicative of its universal applicability in attaining knowledge?

Traditional philosophy, as you correctly point out, has failed to bridge its theory to practice -- but perhaps that is the flaw of the users and not the theory. Rationalists generally believe the use of probabilities is sound methodology, and that the problems regarding decision-making are a flaw of the practitioners. Though I appreciate you likely disagree, perhaps we have the same problem with philosophy. Though there are no clear answers, the models of thought philosophers provide could effectively apply in practical situations; it's just that no philosopher has been able to get there.

comment by Polytopos · 2021-02-08T20:51:30.199Z · LW(p) · GW(p)

You might be interested to look at David Corfield's book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comonads. This allows us to understand modality in terms of "thinking in a context", where the context (possible worlds) can be given a rigorous meaning categorically and type theoretically (using slice categories).
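For readers who know functional programming, a loose analogue of this monadic reading (my own toy sketch, not Corfield's categorical construction) is the familiar list/nondeterminism monad: "possibly x" is x wrapped in a container of candidate values, and bind composes possibilities through a computation.

```python
# A toy "possibility" monad over lists: a modal context is modeled as the
# set of candidate values, and bind threads computations through it.

def unit(x):
    # The actual is trivially possible: x is among its own candidates.
    return [x]

def bind(possibilities, f):
    # Apply a possibility-producing function to every candidate,
    # then flatten: the possible outcomes of possible inputs.
    return [y for x in possibilities for y in f(x)]

# "Possibly 2 or 3, then possibly doubled or squared":
print(bind([2, 3], lambda x: [2 * x, x * x]))  # [4, 4, 6, 9]
```

This only gestures at the idea of modality as "computation in a context"; the book's point is that the categorical machinery makes that reading precise, which a list comprehension obviously does not.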

comment by snerx · 2023-04-29T19:28:18.605Z · LW(p) · GW(p)

The kind of work that I try to do is not about language.  It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

And as we all know, language has nothing to do with cognitive science.