Metaphilosophical Mysteries

post by Wei Dai (Wei_Dai) · 2010-07-27T00:55:59.222Z · LW · GW · Legacy · 266 comments


Creating Friendly AI seems to require us humans to either solve most of the outstanding problems in philosophy, or to solve meta-philosophy (i.e., what is the nature of philosophy, how do we practice it, and how should we program an AI to do it?), and to do that in an amount of time measured in decades. I'm not optimistic about our chances of success, but out of these two approaches, the latter seems slightly easier, or at least less effort has already been spent on it. This post tries to take a small step in that direction, by asking a few questions that I think are worth investigating or keeping in the back of our minds, and generally raising awareness and interest in the topic.

The Unreasonable Effectiveness of Philosophy

It seems like human philosophy is more effective than it has any right to be. Why?

First I'll try to establish that there is a mystery to be solved. It might be surprising to see the words "effective" and "philosophy" together in the same sentence, but I claim that human beings have indeed made a non-negligible amount of philosophical progress. To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

We might have expected that, given that we are products of evolution, the amount of our philosophical progress would be closer to zero. The reason for low expectations is that evolution is lazy and shortsighted. It couldn't possibly have "known" that we'd eventually need philosophical abilities to solve FAI. What kind of survival or reproductive advantage could these abilities have offered our foraging or farming ancestors?

From the example of utility maximizers, we also know that there are minds in the design space of minds that could be considered highly intelligent, but are incapable of doing philosophy. For example, a Bayesian expected utility maximizer programmed with a TM-based universal prior would not be able to realize that the prior is wrong. Nor would it be able to see that Bayesian updating is the wrong thing to do in some situations.
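To make the kind of agent being discussed concrete, here is a minimal sketch of a Bayesian expected utility maximizer over a fixed, finite hypothesis set. The hypotheses, payoffs, and numbers are illustrative assumptions of mine, not anything from the post; the structural point is only that a hypothesis the prior assigns zero weight (such as an uncomputable world under a Turing-machine-based prior) can never gain weight from evidence.

```python
# Minimal sketch (illustrative, not Wei Dai's formalism): a Bayesian
# expected utility maximizer over a fixed, finite set of hypotheses.

def bayes_update(prior, likelihoods, observation):
    """Return the posterior over hypotheses after seeing `observation`."""
    unnorm = {h: prior[h] * likelihoods[h](observation) for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def best_action(posterior, actions, utility):
    """Pick the action with the highest posterior-expected utility."""
    def expected_utility(a):
        return sum(p * utility(h, a) for h, p in posterior.items())
    return max(actions, key=expected_utility)

# Toy usage: two hypotheses about a coin, one observation, one bet.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {"fair": lambda o: 0.5,
               "biased": lambda o: 0.9 if o == "H" else 0.1}
posterior = bayes_update(prior, likelihoods, "H")          # biased ~ 0.64
utility = lambda h, a: ({"fair": 0.5, "biased": 0.9}[h] if a == "bet_H"
                        else {"fair": 0.5, "biased": 0.1}[h])
print(best_action(posterior, ["bet_H", "bet_T"], utility))  # bet_H

# The limitation the post points at: a hypothesis the prior gives zero
# weight (e.g. "the universe is uncomputable") stays at zero forever,
# because 0 * likelihood == 0. The maximizer cannot "notice" that its
# prior is wrong.
```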

Why aren't we more like utility maximizers in our ability to do philosophy? I have some ideas for possible answers, but I'm not sure how to tell which is the right one:

  1. Philosophical ability is "almost" universal in mind space. Utility maximizers are a pathological example of an atypical mind.
  2. Evolution created philosophical ability as a side effect while selecting for something else.
  3. Philosophical ability is rare and not likely to be produced by evolution. There's no explanation for why we have it, other than dumb luck.

As you can see, progress is pretty limited so far, but I think this is at least a useful line of inquiry, a small crack in the problem that's worth trying to exploit. People used to wonder at the unreasonable effectiveness of mathematics in the natural sciences, especially in physics, and I think such wondering eventually contributed to the idea of the mathematical universe: if the world is made of mathematics, then it wouldn't be surprising that mathematics is, to quote Einstein, "appropriate to the objects of reality". I'm hoping that my question might eventually lead to a similar insight.

Objective Philosophical Truths?

Consider again the example of the wrongness of the universal prior and Bayesian updating. Assuming that they are indeed wrong, it seems that their wrongness must be an objective truth; in other words, it's not relative to how the human mind works, nor does it depend on any peculiarities of the human mind. Intuitively it seems obvious that if any other mind, such as a Bayesian expected utility maximizer, is incapable of perceiving the wrongness, that is not evidence of the subjectivity of these philosophical truths, but just evidence of the other mind being defective. But is this intuition correct? How do we tell?

In certain other areas of philosophy, for example ethics, objective truth either does not exist or is much harder to find. To state this in Eliezer's terms, in ethics we find it hard to do better than to identify "morality" with a huge blob of computation which is particular to human minds, but it appears that in decision theory "rationality" isn't similarly dependent on complex details unique to humanity. How to explain this? (Notice that "rationality" and "morality" otherwise share certain commonalities. They are both "ought" questions, and a utility maximizer wouldn't try to answer either of them or be persuaded by any answers we might come up with.)

These questions perhaps offer further entry points to try to attack the larger problem of understanding and mechanizing the process of philosophy. And finally, it seems worth noting that the number of people who have thought seriously about meta-philosophy is probably tiny, so it may be that there is a bunch of low-hanging fruit hiding just around the corner.

266 comments

Comments sorted by top scores.

comment by CronoDAS · 2010-07-27T07:48:32.856Z · LW(p) · GW(p)

One "interesting" thing about philosophy seems to be that as soon as a philosophical issue gets a definitive answer, it ceases to be part of philosophy and instead becomes either mathematics or science. For example, physical sciences were once "natural philosophy". Many social sciences were also once the domain of philosophy; economics, oddly enough, first developed as an offshoot of moral philosophy, and "philosophy of mind" predates the practice of psychology, cognitive science, neurobiology, and the badly-named "computer science" (which is really just a branch of mathematics).

Philosophy seems to be roughly equivalent to the study of confusing questions; when a question is no longer confusing, it stops being philosophy and instead becomes something else.

Replies from: Wei_Dai, SilasBarta, Alexandros, peterward
comment by Wei Dai (Wei_Dai) · 2010-07-27T08:00:45.451Z · LW(p) · GW(p)

One "interesting" thing about philosophy seems to be that as soon as a philosophical issue gets a definitive answer, it ceases to be part of philosophy and instead becomes either mathematics or science.

Agreed, and I think that accounts for the reputation philosophy has for not being productive. People see the confusion and slow progress in the fields that are still thought of as philosophy, and forget that philosophical progress is what allowed many fields to become mathematics or science.

comment by SilasBarta · 2010-07-27T12:44:23.808Z · LW(p) · GW(p)

Okay, but (for you, Wei_Dai, and anyone else), how about if you look at just the last 30 years, or 100, or 150? How many new, productive fields have been spun off of something recognized as philosophy?

Replies from: JoshuaZ, Perplexed
comment by JoshuaZ · 2010-07-27T13:07:57.705Z · LW(p) · GW(p)

One issue that one runs into with your question is how one defines a new field being spun off. Some people have argued that biology didn't really split off from philosophy until the 1850s and 60s, especially with the work of Darwin and Wallace. This is a popular view among Kuhnians who mark a field as becoming science when it gains an overarching accepted paradigm. (However, one could argue that the field left philosophy before it entered science.)

The word "scientist" was first used in 1833, and prior to that "natural philosopher" was used. But certainly by the late 1700s, they were practicing what we could call science. So that argument fails even if one extends the date.

Economics is generally thought of as having split off from philosophy when Adam Smith wrote The Wealth of Nations, and that's in the late 18th century. But arguably, mercantilist ideas were a form of economics that predated Smith and were separate from philosophy. And you could push the date later, pointing out that until fairly late most of the people thinking about economics were people like Bentham, whom we think of as philosophers.

Possibly the best example of an area that split off recently is psychology. Wilhelm Wundt is sometimes regarded as the individual who split that off, doing actual controlled scientific experiments in the late 19th century. But there was research being done by scientists/biologists/natural philosophers much earlier in the 19th century, especially with regard to whether the nervous system was the source of cognition. Wikipedia claims that that work started as early as 1802 with Cabanis (this is surprising to me since I didn't realize he was that early). One could argue, given all the subsequent Freudian and Jungian material, that psychology didn't really split off from philosophy until that was removed from mainstream psychology, which happened in the 1960s and 70s. However, that seems like a weak argument.

Linguistics might be another example, but again, how you define the split matters. It also runs into the not tiny issue that much of linguistics spun off from issues of philology, a field already distinct from philosophy. But other areas of linguistics broke off later, and some people still seem to think of issues like Sapir-Whorf as philosophical questions.

So a lot of this seems to depend on definitions, but regardless of definitions it seems clear that no field has spun off in the last 30 years. Going back farther makes the question murkier, but a decent argument can be made that there has been no such spin off in the last 150 years.

Replies from: gwern, amcknight, Blueberry
comment by gwern · 2010-07-27T17:26:26.576Z · LW(p) · GW(p)

I think psychology is very strongly an example. You have only to read some old psychology textbooks. I read William James's Principles of Psychology (for a Wittgenstein course) from exactly a century ago, and it was a mix of extremely low-level unexplained experimental results and philosophical argumentation about minds and souls (James spending quite a bit of time attacking non-materialist views, of which there was no shortage of proponents). To point to some of the experiments decades earlier and say that it'd already split off is like pointing at Aristotle's biology work as the start of the split between natural philosophy and biology.

comment by amcknight · 2012-03-10T00:31:32.644Z · LW(p) · GW(p)

I would guess that these splits were generally not recognized as splits until much later, when we had distinct bodies of work and could look back at the initial roots of the topic. This suggests that there might be a bunch of roots of new fields present now that simply haven't grown large enough to be recognized yet.

comment by Blueberry · 2010-07-27T18:20:28.236Z · LW(p) · GW(p)

So a lot of this seems to depend on definitions, but regardless of definitions it seems clear that no field has spun off in the last 30 years.

Not even cognitive science? This blog seems to be in the process of splitting off philosophy of mind into cog sci and AI research.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-27T18:32:22.498Z · LW(p) · GW(p)

How similar is Eliezer Yudkowsky's research program to that which is commonly thought of as "philosophy"?

EY's work intersects with philosophy in the sense that he asks, "What cognitive architecture would make one have these philosophical discussions / intuitions?" But philosophy is not unique for him in this respect -- i.e., he would just as well ask, "What cognitive architecture would make one get this visual sensation that makes these things seem the most salient?"

Certainly, there are definitions, reasonable ones, for philosophy that cover what this site does, but the problem is that Wei_Dai hasn't defined what he means by "philosophy" here.

Replies from: cousin_it
comment by cousin_it · 2010-07-27T18:40:48.291Z · LW(p) · GW(p)

Some time ago I was quite surprised to learn that Kevin T. Kelly's work on Ockham's Razor, very rigorous and mathematical in nature, falls under "philosophy". Apparently modern philosophy can get quite awesome when it wants to.

(By the way, someone should really write an introductory LW post about this. I thought Johnicholas Hines would do it, but lately he seems to be missing.)

Replies from: thomblake, IlyaShpitser, SilasBarta
comment by thomblake · 2010-08-02T19:19:49.380Z · LW(p) · GW(p)

Typically, philosophers do whatever they want and label it 'philosophy', and will claim most positive historical figures as examples of 'philosophers'.

Symmetrically, those who are skeptical of the value of philosophy will note that anyone who does anything useful couldn't possibly be doing philosophy, sometimes "by definition".

Replies from: jimrandomh
comment by jimrandomh · 2010-08-02T19:38:01.775Z · LW(p) · GW(p)

Typically, philosophers do whatever they want and label it 'philosophy', and will claim most positive historical figures as examples of 'philosophers'. Symmetrically, those who are skeptical of the value of philosophy will note that anyone who does anything useful couldn't possibly be doing philosophy, sometimes "by definition".

Definitely true, and this suggests that the question of whether philosophy is good/bad/useful is fundamentally confused. One definition that I like is that philosophy is any academic study not otherwise classified. That explains why there are so many examples of fields starting out as philosophy, being given a classification and then not being philosophy any more. It also makes most attempts to say things about philosophy as a whole look rather silly. The only problem with this definition is that a few fields, like ethics, have classifications of their own but are too narrow to count as separate fields, so they're classified as subfields. Still, I think that this definition does a good enough job of dissolving silly questions that we can ignore a few special cases.

comment by IlyaShpitser · 2010-07-28T07:35:42.322Z · LW(p) · GW(p)

Kelly's observation: inductive processes by necessity change their minds multiple times before arriving at the truth.

Kelly's proposal: inductive processes ought to minimize how often they change their minds before truth is reached. (There are some subtle issues here -- this proposal does not contradict "statistical efficiency" considerations, although it's hard to see why at first glance).
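To make the mind-change idea concrete, here is a toy version of the kind of counting problem used in this literature. This is a simplification of mine, not Kelly's actual formal framework: the learner watches a binary stream that will contain only finitely many 1s, and always conjectures that it has already seen all of them.

```python
# Toy mind-change counting (my simplification, not Kelly's framework):
# the learner sees a stream with finitely many 1s and always conjectures
# "the 1s I have seen so far are all there will ever be".

def ockham_learner(stream):
    """Yield (conjecture, mind_changes_so_far) after each observation."""
    ones_seen = 0
    mind_changes = 0
    previous = None
    for bit in stream:
        ones_seen += bit
        conjecture = f"exactly {ones_seen} ones in total"
        if previous is not None and conjecture != previous:
            mind_changes += 1
        previous = conjecture
        yield conjecture, mind_changes

for conjecture, changes in ockham_learner([0, 1, 0, 0, 1, 0]):
    print(conjecture, changes)

# If the truth is "exactly k ones", this cautious learner converges with
# at most k mind changes; a bolder learner that guesses ahead of its data
# can be forced into strictly more reversals. That is the flavor of the
# efficiency notion in Kelly's proposal.
```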

comment by SilasBarta · 2010-07-28T17:32:40.192Z · LW(p) · GW(p)

I don't think the work shown on that link would be regarded as typical philosophy -- it's more characteristic of computer science or statistics.

Replies from: RobinZ
comment by RobinZ · 2010-07-28T19:53:43.918Z · LW(p) · GW(p)

What falls under the category of "typical philosophy", in your opinion?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-28T21:10:32.769Z · LW(p) · GW(p)

I didn't have a clear-cut definition in mind then -- I just thought that the Kelly link was far enough from being an edge case.

If I had to say, I would take a random selection of articles from the Stanford Encyclopedia of Philosophy, and that gives an idea of what typical philosophy is, as the term is normally used.

comment by Perplexed · 2010-07-31T22:24:10.356Z · LW(p) · GW(p)

It depends how big you need a "field" to be. Much of philosophical logic split off to become mathematical logic (split complete by about 1930). That left the philosophers pondering things like entailment, tense logic, modality, epistemic logic, etc. But around 1960, Kripke put that stuff on a solid basis, and these kinds of logics are now an important topic in computer science. Certainly utility theory, decision theory, and subjective probability have only come over from philosophy (to econ, math, and AI) within the past 150 years. And there are still philosophers involved at the forefront of all these fields.

comment by Alexandros · 2010-07-27T12:53:08.534Z · LW(p) · GW(p)

I'll just note here the obvious parallel to AI, where everything useful that comes out of it gets reclassified as 'not-AI'.

Replies from: Blueberry
comment by Blueberry · 2010-07-27T18:28:52.073Z · LW(p) · GW(p)

While people say this sometimes, I don't think this is accurate. Most of the "AI" advances, as far as I know, haven't shed a lot of light on intelligence. They may have solved problems traditionally classified as AI, but that doesn't make the solutions AI; it means we were actually wrong about what the problems required. I'm thinking specifically of statistical natural language processing, which is essentially based on finding algorithms to analyze a corpus, and then using the results on novel text. It's a useful hack, and it does give good results, but it just tells us that those problems are less interesting than we thought.

Another example is chess and Go computing, where chess programs have gotten very very good just based on pruning and brute-force computation; the advances in chess computer ability were brought on by computing power, not some kind of AI advance. It's looking like the same will be true of Go programs in the next 10 to 20 years, based on Monte Carlo techniques, but this just means that chess and Go are less interesting games than we thought. You can't brute-force a traditional "AI" problem with a really fast computer and say that you've achieved AI.
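For concreteness, the "pruning and brute-force computation" referred to here is essentially fixed-depth game-tree search. Below is a generic negamax-with-alpha-beta sketch; the `game` interface is a hypothetical assumption of mine, and this is not the code of any real engine.

```python
# Generic fixed-depth negamax with alpha-beta pruning -- the "pruning and
# brute-force computation" scheme described above, not any real engine.
# `game` is a hypothetical interface assumed to provide legal_moves(state),
# apply(state, move), is_terminal(state), and evaluate(state), a static
# score from the point of view of the side to move.

def negamax(game, state, depth, alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    best = float("-inf")
    for move in game.legal_moves(state):
        score = -negamax(game, game.apply(state, move), depth - 1,
                         -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # prune: the opponent won't allow this line
            break
    return best

def best_move(game, state, depth):
    return max(game.legal_moves(state),
               key=lambda m: -negamax(game, game.apply(state, m), depth - 1))
```

The Monte Carlo techniques mentioned for Go replace the static evaluation with statistics from random playouts, but the overall scheme is similarly search-driven rather than pattern-driven.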

Replies from: sharpneli, nazgulnarsil
comment by sharpneli · 2010-07-28T09:09:52.396Z · LW(p) · GW(p)

but it just tells us that those problems are less interesting than we thought.

Extrapolating from the trend, it would not surprise me greatly if we'd eventually find out that intelligence in general is not as interesting as we thought.

When something is actually understood, the problem suffers from the rainbow effect: "Oh, it's just reflected light from water droplets, how boring and not interesting at all." It becomes a common thing, and thus boring, for some. I, for one, think Go and chess are much more interesting games now that we actually know how they are played, not just how to play them.

Replies from: Blueberry
comment by Blueberry · 2010-07-28T21:17:30.282Z · LW(p) · GW(p)

My point was that go and chess are not actually understood. We don't actually know how they're played. There are hacks that allow programs to get good at those games without actually understanding the patterns involved, but recognizing the patterns involved is what humans actually find interesting about the games.

To clarify, "understanding chess" is a interesting problem. It turns out that "writing a program to be very good at chess" isn't, because it can be solved by brute force in an uninteresting way.

Another example: suppose computer program X and computer program Y are both capable of writing great novels, and human reviewers can't tell the difference between X's novels, Y's novels, and a human's. However, X uses statistical analysis at the word and sentence level to fill in a hard-coded "novel template," whereas Y creates characters, simulates their personality and emotions, and simulates interactions between them. Both have solved the (uninteresting) problem of writing great novels, but Y has solved the (interesting) problem of understanding how people write novels.

(ETA: I suspect that program X wouldn't actually be able to write great novels, and I suspect that writing great novels is therefore actually an interesting problem, but I could be wrong. People used to think that about chess.)

What's happened in AI research is that Y (which is actually AI) is too difficult, so people successfully solve problems the way program X (which is not AI) does. But don't let this confuse you into thinking that AI has been successful.

Replies from: Vladimir_M, sharpneli
comment by Vladimir_M · 2010-07-28T21:44:01.101Z · LW(p) · GW(p)

Blueberry:

My point was that go and chess are not actually understood. We don't actually know how they're played. There are hacks that allow programs to get good at those games without actually understanding the patterns involved, but recognizing the patterns involved is what humans actually find interesting about the games.

That's not really true. In the last two decades or so, there has been lots of progress in reverse-engineering of how chess masters think and incorporating that knowledge into chess engines. Of course, in some cases such knowledge is basically useless, so it's not pursued much. For example, there's no point in teaching computers the heuristics that humans use to recognize immediate tactical combinations where a brute force search would be impossible for humans, but a computer can perform it in a millisecond.

However, when it comes to long-term positional strategy, brute-force search is useless, no matter how fast, and until the mid-1990s, top grandmasters could still reliably beat computers by avoiding tactics and pursuing long-term strategic advantage. That's not possible any more, since computers actually can think strategically now. (This outcome was disappointing in a sense, since it basically turned out that the human grandmasters' extraordinary strategic abilities are much more due to recognizing a multitude of patterns learned from experience than flashes of brilliant insight.)

Even the relative importance of brute-force search capabilities has declined greatly. To take one example, the Deep Blue engines that famously matched Kasparov's ability in 1996 and 1997 relied on specialized hardware that enabled them to evaluate something like 100-200 million positions per second, while a few years later, the Fritz and Junior engines successfully drew against him even though their search capabilities were smaller by two orders of magnitude. In 2006, the world champion Kramnik was soundly defeated by an engine evaluating a mere 8 million positions per second, which would have been unthinkable a decade earlier.

Replies from: Blueberry
comment by Blueberry · 2010-07-29T21:03:46.223Z · LW(p) · GW(p)

Even the relative importance of brute-force search capabilities has declined greatly.

Thanks for updating me; I was indeed thinking of Deep Blue in the mid 90s. Good to know that chess programs are becoming more intelligent and less forceful.

(This outcome was disappointing in a sense, since it basically turned out that the human grandmasters' extraordinary strategic abilities are much more due to recognizing a multitude of patterns learned from experience than flashes of brilliant insight.)

This is what I would expect; a flash of brilliant insight is what recognizing a pattern feels like from the inside.

comment by sharpneli · 2010-07-29T16:38:39.400Z · LW(p) · GW(p)

but Y has solved the (interesting) problem of understanding how people write novels.

I think the whole point of AI research is to do something, not to find out how humans do something. You personally might find psychology (how humans work) far more interesting than AI research (how to do things traditionally classified as 'intelligence', regardless of the actual method), but please don't generalize that notion and slap the label "uninteresting" onto problems.

What's happened in AI research is that Y (which is actually AI) is too difficult, so people successfully solve problems the way program X (which is not AI) does. But don't let this confuse you into thinking that AI has been successful.

When mysterious things cease to be mysterious, they tend to resemble the way program X works.

Consider the advent of powered flight. By that line of argumentation one could write: "We don't actually understand how flight works; there are hacks that allow machines to fly without actually understanding how birds fly." Or we could compare cars with legs and say that transportation is generally just an ugly, uninteresting hack.

Replies from: Blueberry
comment by Blueberry · 2010-07-29T20:56:57.314Z · LW(p) · GW(p)

I think the whole point in AI research is to do something, not find out how humans do something.

Depends on who's doing the research and why. You're right that companies that want to sell software care about solving the problem, which is why that type of approach is so common. On the other hand, I'm reluctant to call a mostly brute-forced solution "AI research", even if it's useful computer programming.

When mysterious things cease to be mysterious, they tend to resemble the way program X works.

No, I think you're missing my point. X is uninteresting not because it is no longer mysterious, but because it has no large-scale structure and patterns. We could consider another novel-writing program Z that writes novels in some other interesting and complicated way that's different from how humans do it, but still has a rich and detailed structure.

Continuing with the flight analogy: rockets, helicopters, planes, and birds all have interesting ways of flying, whereas the "brute force" approach to flight, throwing a rock really really hard, is not that interesting.

Another example: optical character recognition. One approach is to have a database of hundreds of different fonts, put a grid on each character from each font, and come up with a statistical measure that figures out how close the scanned image is to each stored character by looking at the pixels that they have in common. This works and produces useful software, but that approach doesn't actually care about the different letterforms and shapes involved with them. It doesn't recognize that structure, even though that's what the problem is about.
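A minimal sketch of that template-matching approach follows. It is illustrative only: it assumes pre-segmented, binarized glyphs on a fixed grid, whereas real systems add normalization, segmentation, and far better features.

```python
# Minimal sketch of the template-matching OCR approach described above:
# binarized glyphs on a fixed grid, scored by raw pixel agreement against
# stored font templates. Illustrative only.

def pixel_agreement(image, template):
    """Fraction of grid cells on which two equal-sized binary images agree."""
    cells = [a == b
             for row_img, row_tpl in zip(image, template)
             for a, b in zip(row_img, row_tpl)]
    return sum(cells) / len(cells)

def recognize(image, font_database):
    """Return the character whose stored template best matches `image`.

    `font_database` maps each known character to a list of binary
    templates, one per font.
    """
    best_char, best_score = None, -1.0
    for char, templates in font_database.items():
        for template in templates:
            score = pixel_agreement(image, template)
            if score > best_score:
                best_char, best_score = char, score
    return best_char

# As noted above, nothing here models letterforms or document structure:
# an unknown symbol simply gets mapped to whichever known character
# happens to overlap it best.
```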

Replies from: drunkpotato
comment by drunkpotato · 2010-08-03T13:08:53.058Z · LW(p) · GW(p)

Arguably, OCR is about taking a small patch of an image and matching that to a finite set of candidate ground truths. OCR programs can sometimes do this better than most humans, if the only thing you look at is one distorted character.

OCR has traditionally been a difficult problem and there are some novel applications of statistics and heuristics used to solve it. But OCR is not what we actually care about: the problem is recognizing a document, or symbolically representing a sentence, and OCR is just one small problem we've carved out to help us deal with the larger problem.

Characters are important when they are part of words, and of the structure of a document. They are important when they contribute to what the document means, beyond just the raw data of the image scan. Situating a character in the context of the word it's in, the sentence that word is in, and the context of the document (English novel, handwritten letter from the 18th century, hastily scribbled medical report from a German hospital in the 1970s) is what allows a human to extrapolate what the character must be, even if the image of the original character is distorted beyond any algorithm's ability to recognize, or even obliterated entirely.

It's this effect of context which is hard to capture and encode into an OCR algorithm. This broader sense, of being able to recognize a character anywhere a human would, which is the end goal of the problem, is what my friends refer to as an AI-complete problem. (Apologies if this community also uses that phrase, I haven't yet seen it here on LW.)

To give a specific example, many doctors use the symbol "circle above a cross" to indicate female, which most people reading would understand. Why? We've seen that symbol before, perhaps many times, and understand what it means. If you've trained your OCR algorithm on the standard set of English alphanumeric characters, then it will attempt to match that symbol and come up with the wrong answer. If you've done unsupervised training of an OCR algorithm on a typical novel, magazine, and newspaper corpus, there is a good chance that the symbol for female does not appear as a cluster in its vector space.

In order to recognize that symbol as a distinct symbol that needs to be somehow represented in the output, an OCR algorithm would have to do unsupervised online learning as it's scanning documents in a new domain. Even then, I'm not sure how useful it would be, since the problem is not recognizing a given character. The problem is recognizing what that character should be given the context of the document you're scanning. The problem of OCR explodes into specializations of "OCR for novels, OCR for 18th century English letters, OCR for American hospitals", and even more.

If we want an OCR algorithm to output something more useful than [funky new character I found], and instead insert "female" into the text database, at some point we have to tell the algorithm about the character. There is not yet, as far as I know, an OCR system that avoids this hard truth.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-03T13:26:59.477Z · LW(p) · GW(p)

I like "AI-complete", though it wouldn't surprise me if general symbol recognition and interpretation is easier than natural language, whereas all NP-complete problems are equivalent.

Replies from: drunkpotato
comment by drunkpotato · 2010-08-03T15:52:36.139Z · LW(p) · GW(p)

I kept my initial comment technical, without delving into the philosophical aspects of it, but now I can ramble a bit.

I suspect that general symbol recognition and interpretation is AI-complete, because of these issues of context, world knowledge, and quasi-unsupervised online learning.

I believe there is a generalized learning algorithm (or set of algorithms) that use (at minimum) frequencies and in-built biological heuristics that we use to approach the world. In this view, natural language generation and understanding is one manifestation of this more general learning system (or constantly updating pattern recognition, if you like, though I think there may be more to it than simple recognition). Symbol recognition and interpretation is another.

"Recognition" and "interpretation" are themselves slippery words that hide the how and the what of what it is we do when we see a symbol. Computational linguists and psycholinguistics have done a good job of demonstrating that we know very little of what we're actually doing when we process visual and auditory input.

You are right that AI-complete probably hides finer levels of equivalency classes, wrapped up in the messy issue of what we mean by intelligence. Still, it's a handy shorthand to refer to problems that may require this more general learning facility, about which we understand very little.

comment by nazgulnarsil · 2010-07-28T06:56:08.198Z · LW(p) · GW(p)

"You can't brute-force a traditional "AI" problem with a really fast computer and say that you've achieved AI."

Chinese room, etc.

Replies from: Blueberry
comment by Blueberry · 2010-07-28T21:22:49.256Z · LW(p) · GW(p)

Elaborate? I'm familiar with Searle's Chinese Room thought experiment, but I'm not sure what your point is here.

Replies from: nazgulnarsil, mattnewport
comment by nazgulnarsil · 2010-08-03T07:58:35.571Z · LW(p) · GW(p)

Much of what feels like deep reasoning from the inside has been revealed by experiment to be simple pattern recognition and completion.

comment by mattnewport · 2010-07-28T21:47:34.871Z · LW(p) · GW(p)

Much recent progress in problems traditionally considered to be 'AI' problems has come not from dramatic algorithmic breakthroughs or from new insights into the way human brains operate but from throwing lots of processing power at lots of data. It is possible that there are few grand 'secrets' to AI beyond this.

The way the human brain has developed suggests to me that human intelligence is not the result of evolution making a series of great algorithmic discoveries on the road to general intelligence but of refinements to certain fairly general purpose computational structures.

The 'secret' of human intelligence may be little more than wiring a bunch of sensors and effectors up to a bunch of computational capacity and dropping it in a complex environment. There may be no such thing as an 'interesting' AI problem by whatever definition you are using for 'interesting'.

comment by peterward · 2010-08-04T01:06:09.382Z · LW(p) · GW(p)

I agree with the general argument. I think (some) philosophy is an immature science, or a predecessor to a science, and some is about how to do things better, and is therefore subject to less stringent, but not fundamentally different, standards than science--political philosophy, say (assuming, counterfactually, that political thinking were remotely rational). And of course a lot of philosophy is just nonsense--probably most of it. But economics can hardly be called a science. If anything, the "field" has experienced retrograde evolution since it stopped being part of philosophy.

comment by Larks · 2010-08-02T23:19:07.598Z · LW(p) · GW(p)

The Unreasonable Effectiveness of Astronomy

It seems like human astronomy is more effective than it has any right to be. Why?

First I'll try to establish that there is a mystery to be solved. It might be surprising to see the words "effective" and "astronomy" together in the same sentence, but I claim that human beings have indeed made a non-negligible amount of astronomical progress. To cite one field that I'm especially familiar with, consider galaxies, where we went from having no concept of galaxies, to studies involving the Milky Way and other groups of light in the sky, to measuring their speed, location, age, and genesis, to Einstein's realizations that the flat universe and Newtonian physics are both likely to be wrong or incomplete.

We might have expected that, given that we are products of evolution, the amount of our astronomical progress would be closer to zero. The reason for low expectations is that evolution is lazy and shortsighted. It couldn't possibly have "known" that we'd eventually need stargazing abilities to escape the planet. What kind of survival or reproductive advantage could these abilities have offered our foraging or farming ancestors?

From the example of my webcam, we also know that there are eyes in the design space of visual sensors that could be considered highly sensitive, but are incapable of making out distant stars. For example, a weasel is, apparently, incapable of making out more than a dim blur. Nor would it be able to tell it was missing much, or have any reason to build telescopes.

Why aren't we more like CCTV in our ability to look at the stars? I have some ideas for possible answers, but I'm not sure how to tell which is the right one:

  1. Astronomic ability is "almost" universal in eye space. Low-quality or pathologically horizontal visual receptors are an example of an atypical eye.
  2. Evolution created stargazing ability as a side effect while selecting for the ability to see predators. This seems implausible; being able to see pretty lights in the sky would only serve to distract us.
  3. Stargazing ability is rare and not likely to be produced by evolution. There's no explanation for why we have it, other than dumb luck. This helps explain why there's no sign of alien life yet; stargazing is the great filter.
  4. We’re living in an ancestor simulation, which can only be run by species with the ability to escape their home planet, necessitating stargazing powers.

As you can see, progress is pretty limited so far, but I think this is at least a useful line of inquiry, a small crack in the problem that's worth trying to exploit. People used to wonder at the unreasonable effectiveness of philosophy, especially in probability, and I think such wondering eventually contributed to the idea of the philosophical universe: if the world is made of philosophy, then it wouldn't be surprising that philosophy is, to para-quote Wei Dai, "appropriate to the objects of reality". I'm hoping that my question might eventually lead to a similar insight.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-02T23:46:05.456Z · LW(p) · GW(p)

Ancient humans used the stars in the night sky as a compass to navigate by, so it would have been selected for. That said, I think (1), that astronomic ability is almost universal in eye space, is true. (At least for eyes that can see predators and obstacles in the dark, which is much more strongly selected for than being able to navigate by starlight is.)

Replies from: Larks, army1987
comment by Larks · 2010-08-03T00:29:57.642Z · LW(p) · GW(p)

I was going to suggest that, but didn't want to stretch the layout too much. How long ago did we start using the stars to navigate? I wouldn't imagine it would pre-date agriculture, and I can't think of evidence we could have to suggest that our ancestors from before then used the stars to navigate.

comment by A1987dM (army1987) · 2012-08-12T22:35:21.776Z · LW(p) · GW(p)

That said, I think (1), that astronomic ability is almost universal in eye space, is true. (At least for eyes that can see predators and obstacles in the dark, which is much more strongly selected for than being able to navigate by starlight is.)

Huh, not sure. If I look at the sky with my left eye without wearing my glasses, I can barely see the stars, but I'm pretty sure I could see predators.

comment by thomblake · 2010-07-27T13:59:02.344Z · LW(p) · GW(p)

For those who might care, Wiley-Blackwell produces a journal called Metaphilosophy which is now 40 years old. It was founded by Terrell Ward Bynum (one of the major figures in computer ethics) and its current editor is Armen Marsoobian.

comment by SilasBarta · 2010-07-27T01:56:41.023Z · LW(p) · GW(p)

What is your definition of philosophy for this article?
Why is it a failing of a highly intelligent mind that it can't "do philosophy"?
Why would a Bayesian EU maximizer necessarily be unable to tell that a computable prior is wrong?
When is Bayesian updating the wrong thing to do?
What should I have learned from your link to Updateless Decision Theory that causes me to suspect that EU maximizing with Bayesian updating on a universal prior is wrong?
Doesn't rationality require identification of one's goals, therefore inheriting the full complexity of value of oneself?
What would count as an example of a metaphilosophical insight?

Replies from: Cameron_Taylor, Nisan, magfrump, XiXiDu
comment by Cameron_Taylor · 2010-07-27T07:08:06.509Z · LW(p) · GW(p)

What should I have learned from your link to Updateless Decision Theory that causes me to suspect that EU maximizing with Bayesian updating on a universal prior is wrong?

From what I can glean from the UDT descriptions it seems that UDT defines 'updating' to include things that I would prefer to describe as 'naive updating', 'updating wrong' or 'updating the wrong thing'.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T18:07:20.399Z · LW(p) · GW(p)

Pray tell, what is the right thing to update?

comment by Nisan · 2010-07-27T14:04:29.836Z · LW(p) · GW(p)

Doesn't rationality require identification of one's goals, therefore inheriting the full complexity of value of oneself?

Seconded. We can certainly imagine an amoral agent that responds to rational argument — say, a paperclipper that can be convinced to one-box on Newcomb's problem. This gives rise to the illusion that rationality is somehow universal.

But in what sense is an EU-maximizer with a TM-based universal prior "wrong"? If it loses money when betting on a unary encoding of the Busy Beaver sequence, maybe we should conclude that making money isn't its goal.

If someone knows a way to extract goals from an arbitrary agent in a way that might reveal the agent to be irrational, I would like to hear it.

Replies from: Randaly
comment by Randaly · 2010-07-28T20:00:34.485Z · LW(p) · GW(p)

For instrumental rationality, yes; for epistemic rationality, no. If the reason the EU-maximizer loses money is because it believes that the encoding will be different than it actually is, then it is irrational.

comment by magfrump · 2010-07-27T06:35:09.833Z · LW(p) · GW(p)

At least as I understand his point about rationality being objective, I assume he means that "given a set of goals and possible decisions, the most effective decision is determined."

I don't really understand why this doesn't apply to morality as such, unless they aren't similar in the way he implies.

Replies from: Nisan
comment by Nisan · 2010-07-27T13:13:40.677Z · LW(p) · GW(p)

Wei Dai distinguishes between objective rationality and subjectively objective morality:

in ethics we find it hard to do better than to identify "morality" with a huge blob of computation which is particular to human minds, but it appears that in decision theory "rationality" isn't similarly dependent on complex details unique to humanity.

Morality does seem to be more complex than decision theory. But aren't they both subjectively objective? Just as we can define the class of moral agents as those which respond to moral arguments, we can define the (larger) class of rational agents as those which respond to arguments about rationality.

comment by XiXiDu · 2010-07-28T18:05:32.210Z · LW(p) · GW(p)

When is Bayesian updating the wrong thing to do?

I think, but I'm not sure... http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/

Replies from: SilasBarta
comment by SilasBarta · 2010-07-28T18:18:38.744Z · LW(p) · GW(p)

I'm familiar with that. As I understand it, EY does not say that Bayesian updating is suboptimal. If anything, he says the opposite, that standard rationality gives you the right answer.

Could you be more specific about where you believe that article claims Bayesian updating is the wrong thing to do?

Replies from: Vladimir_Nesov, XiXiDu
comment by Vladimir_Nesov · 2010-07-28T18:20:13.809Z · LW(p) · GW(p)

Bayesian updating is the wrong thing to do in counterfactual mugging, and the reason TDT goes wrong on that problem is that it updates.

Replies from: red75, SilasBarta
comment by red75 · 2010-07-28T19:20:02.987Z · LW(p) · GW(p)

Does "extremal counterfactual mugging" where $100 is replaced by self-destruction of agent and $10000 is replaced by creation of 100 agent's copies (outside of agent's light cone) requires same answer as counterfactual mugging?

comment by SilasBarta · 2010-07-28T18:23:51.186Z · LW(p) · GW(p)

And this is an uncontroversial view here, which one can safely assert as a premise, as Wei_Dai did here?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T18:26:20.170Z · LW(p) · GW(p)

I don't believe it's particularly controversial. There is a question of whether humans retain preference about counterfactual worlds, but decision-theoretically, not-updating in the usual sense is strictly superior, because you get to make decisions you otherwise wouldn't be able to.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-28T18:38:33.413Z · LW(p) · GW(p)

Okay, then let me try to trace back to the point where we disagree.

As I understand it:

1) Timeless Decision Theory tells you what to do, given your beliefs. Any belief updating would happen before you apply TDT then, so I don't understand how TDT would err in terms of doing a Bayesian update (and that update is wrong to do) -- the error is independent of TDT, as TDT (see below) shields your actions from making losing decisions on the basis of such a "bad" update.

2) TDT can be stated as, "When calculating EU for the outcomes of an action, you must instead weight each outcome's utility by the probability that this action would have led to it if your decision procedure were such that it outputs this action."

So, on counterfactual mugging, even if it reasons that it's not in the winning world, it concludes that its decision theory leads to the highest (TDT-calculated) EU by setting its action to a policy of paying out on losing, since it can then add the utility of the winning side into its EU (see the expected-value sketch after this comment).

Or does EY agree that TDT fails on CM? (I couldn't tell from the CM article.)

3) Edit: And even if this is a case of Bayesian updating failing, does that generalize to dropping it altogether?
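For concreteness, here is the expected-value arithmetic behind point 2, using the standard counterfactual-mugging stakes of paying $100 versus counterfactually receiving $10,000. This is a sketch of the usual statement of the problem, not a claim about how TDT itself is implemented.

```python
# Expected value of the two candidate policies in counterfactual mugging,
# with the usual -$100 / +$10,000 stakes, evaluated from the viewpoint
# *before* the coin flip (the policy-level, non-updated perspective).

P_HEADS = 0.5

def policy_value(pays_when_asked):
    # Omega predicts the policy, so the heads branch pays out only if the
    # agent is the kind of agent that pays on tails.
    heads_branch = 10_000 if pays_when_asked else 0
    tails_branch = -100 if pays_when_asked else 0
    return P_HEADS * heads_branch + (1 - P_HEADS) * tails_branch

print(policy_value(True))    # 4950.0 -> committing to pay wins on average
print(policy_value(False))   # 0.0

# After updating on "the coin came up tails", paying looks like a sure
# $100 loss -- which is exactly the tension about updating discussed above.
```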

comment by XiXiDu · 2010-07-28T19:40:11.863Z · LW(p) · GW(p)

I just answered due to a strong gut feeling. It's some time since I read that article.

But there's always a way to set up a particular situation (at least in thought-experiments) where the optimal strategy is, by definition of the rules, not to update on evidence. If I remember right, it didn't matter whether Omega left and so couldn't take the money away anymore, because it was capable of perfect prediction and only rewards those who do not update, i.e. agents that would be irrational under most other 'real-life' circumstances.

Anyway, I just tried to point you to to something because nobody replied to that particular question yet. I even had to look up what 'TDT' stands for :-)

Sorry for bothering.

comment by cousin_it · 2018-06-13T07:49:41.549Z · LW(p) · GW(p)

Coming back to this after eight years, I think our philosophical ideas might be just as human-specific as our ethics, but philosophical skills might be near universal :-)

Humans have an ability/urge to find simple approximations to things. Applying that to human decision-making leads to a big pile of approximations, some very specific (like "chocolate is usually nice") and some more general (like "when I prefer A to B and B to C, that usually means I prefer A to C"). The most general ones are called "philosophy". Their proportion in the pile is small, as expected. That leaves the question why there are any at all, but there's an obvious answer: some of our mental features have multiple uses. For example, "wanting" is reused for wanting many different things, so it makes sense that an approximation to it would end up approximating many things. A creature with a different decision-making mechanism, but with the same ability/urge to simplify things, would also have philosophical skills but end up with a different set of philosophical ideas. Does that make sense?

comment by cousin_it · 2010-07-27T08:05:48.132Z · LW(p) · GW(p)

1: Philosophical ability is "almost" universal in mind space. Utility maximizers are a pathological example of an atypical mind.

I wouldn't spend much time thinking about this alternative, because it will probably be true for some ideas of "mind space" and false for others, and I don't believe we have enough information to describe the correct "mind space".

2: Evolution created philosophical ability as a side effect while selecting for something else.

Many people think the ability to argue and comprehend arguments arose due to runaway sexual selection for ability to manipulate and resist manipulation. I'm not sure how to test such an explanation.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2010-07-27T08:39:44.279Z · LW(p) · GW(p)

Some people also think the ability to argue and selectively not comprehend arguments arose due to runaway sexual selection for ability to manipulate and resist manipulation.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2010-07-28T08:05:34.985Z · LW(p) · GW(p)

This additional point is controversial even here?

comment by rwallace · 2010-07-27T13:32:58.880Z · LW(p) · GW(p)

Aside from more general issues that have been previously addressed (Friendly AI is a pipe dream, and the big advances on philosophical questions have for the most part been made by methods other than philosophy), a couple of specifics:

  1. We were selected for the ability to tell stories and win political arguments, and it seems to me that minds so selected should be expected to be able to do philosophy, albeit not terribly well -- which is indeed the case.

  2. You criticize the universal prior because it would disagree with our intuition when presented with an alleged halting oracle (because the universal prior takes for granted that the universe is computable, whereas to human intuition this is an open question). On the one hand I have sympathy with your position, because while I would like to think the universe is computable, I also regard it as an open question. On the other hand... do you have any reason other than intuition to believe, in that scenario, our intuition would be right and the universal prior would be wrong?

Replies from: knb
comment by knb · 2010-08-01T08:22:06.008Z · LW(p) · GW(p)

the big advances on philosophical questions have for the most part been made by methods other than philosophy.

That is a really good point. Psychology, for instance, is old enough that much psychological theory was developed before more rigorous empirical standards took hold. These early psychological theories were formulated primarily through philosophical methods (introspection, metaphorical/associative reasoning, etc.). They were very unsuccessful when they were eventually held up to empirical standards.

comment by JoshuaZ · 2010-07-27T02:22:41.755Z · LW(p) · GW(p)

I'm not at all convinced that philosophy has been very successful. Indeed, the fact that there's nothing resembling a consensus among professional philosophers about almost anything you've described as achievements speaks pretty negatively to the success of philosophy. This contrasts strongly with the issue of mathematics where it seems that math has been deeply helpful for many different areas. For many branches of learning, the key to success has been to mathematicize the areas. In contrast, the more rigorous and reliable an area becomes generally the less it resembles what we generally call philosophy.

Replies from: MichaelVassar, Wei_Dai
comment by MichaelVassar · 2010-07-27T17:05:45.973Z · LW(p) · GW(p)

I would just say that most professional 'philosophers' aren't doing 'philosophy' as I mean the term. Ditto professional 'scientists' and 'science'. Robin's data suggests that most MDs are incompetent. Mounds of data suggests the same of most financial professionals. Why not generalize?

I look at the history of philosophy, not at professional philosophers, if I want to find competent philosophy.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-27T17:50:11.689Z · LW(p) · GW(p)

Also, does the "professional philosopher community" have reality-grounded standards for what constitutes "good philosophy"? And could they say what the consequences would be of making such errors (relative to the current body of knowledge)?

Because without that, then being rejected by mainstream academic philosophy is no more worrisome than if you were criticized for not being up-to-date with the top theology, or not knowing which writers were truly "post-colonial".

From some authors, I get the impression that their standard is no more rigorous than, "what all my buddies in major philosophy departments agree with".

comment by Wei Dai (Wei_Dai) · 2010-07-27T02:55:44.295Z · LW(p) · GW(p)

Indeed, the fact that there's nothing resembling a consensus among professional philosophers about almost anything you've described as achievements [...]

Really? As far as I can tell, the consensus for Bayesian updating and expected utility maximization among professional philosophers is near total. Most of them haven't heard of UDT yet, but on Less Wrong and at SIAI there also seems to be a consensus that UDT is, if not quite right, at least on the right track.

For many branches of learning, the key to success has been to mathematicize the areas.

But how do you mathematicize an area, except by doing philosophy? I mean real world problems do not come to you in the form of equations to be solved, or algorithms to be run.

Replies from: thomblake, CarlShulman, SilasBarta, JoshuaZ
comment by thomblake · 2010-07-27T13:49:12.577Z · LW(p) · GW(p)

Really? As far as I can tell, the consensus for Bayesian updating and expected utility maximization among professional philosophers is near total. Most of them haven't heard of UDT yet, but on Less Wrong and at SIAI there also seems to be a consensus that UDT is, if not quite right, at least on the right track.

From my (anecdotal but varied) experience talking to professional philosophers about them, I'd (off-the-cuff) estimate 80% are not familiar with expected utility maximization (in the sense of multiplying the probability of outcome by the utility) or Bayesian updating, and of the rest, a significant portion think that the Bayesian approach to probability is wrong or nonsensical, or that "expected utility maximization" is obviously wrongheaded because it sounds like Utilitarianism.

Replies from: MichaelVassar, timtyler
comment by MichaelVassar · 2010-07-27T17:06:48.453Z · LW(p) · GW(p)

That matches my anecdotal and varied experience, and as we know, the singular of anecdote is 'update' and the plural is 'update more'.

Replies from: thomblake
comment by thomblake · 2010-07-28T13:35:38.955Z · LW(p) · GW(p)

That matches my anecdotal and varied experience, and as we know, the singular of anecdote is 'update' and the plural is 'update more'.

Should I quote you for this one, or was it someone else originally?

comment by timtyler · 2010-07-31T21:28:03.987Z · LW(p) · GW(p)

"Utilitarianism" is a term for a specific concept hogging a perfectly good name that could be used for something more general: utility-based decision making.

comment by CarlShulman · 2010-07-27T04:22:18.441Z · LW(p) · GW(p)

I run into a fair number of epistemologists who are not keen on describing beliefs in terms of probabilities and want to use binary "believe" vs "not believe" terms, or binary "justification." Bayesian updating and utility-maximization decision theory are pretty dominant among philosophers of probability and decision theorists, but not universal among philosophers.

Replies from: utilitymonster
comment by utilitymonster · 2010-07-27T12:35:12.632Z · LW(p) · GW(p)

I'm a philosophy grad student. While I agree that many epistemologists still think it is important to talk in terms of believe/not-believe and justified/non-justified, I find relatively few epistemologists who reject the notion of credence or think that credences shouldn't be probabilities. Of those who think credences shouldn't be probability functions, most would not object to using a weaker system of imprecise probabilities (Reference: James M. Joyce (2005). How Probabilities Reflect Evidence. Philosophical Perspectives 19 (1):153–178). These people are still pretty much on team Bayesianism.

So, in a way, the Bayesian domination is pretty strong. In another way, it isn't: few debates in traditional epistemology have been translated into Bayesian terms and solved (though this would probably solve very many of them). And many epistemologists doubt that Bayesianism will be genuinely helpful with respect to their concerns.

Replies from: CarlShulman
comment by CarlShulman · 2010-07-27T12:58:47.228Z · LW(p) · GW(p)

I mostly agree with this.

comment by SilasBarta · 2010-07-27T13:04:40.933Z · LW(p) · GW(p)

Just skim the Stanford Encyclopedia of Philosophy articles on probability and see how philosophers in general regard Bayesian inference. I think you'll see that they consider it problematic and controversial in general.

comment by JoshuaZ · 2010-07-27T03:10:19.565Z · LW(p) · GW(p)

Really? As far as I can tell, the consensus for Bayesian updating and expected utility maximization among professional philosophers is near total.

According to the PhilPapers Survey, 25.8% (ETA: wrong number; 23.6% is the correct value. I quoted from the wrong entry) of surveyed philosophers were consequentialists of some form. That makes it hard to argue for a consensus about maximizing expected utility.

But how do you mathematicize an area, except by doing philosophy? I mean real world problems do not come to you in the form of equations to be solved, or algorithms to be run.

This seems to run into SilasBarta's inquiry above about what you mean by philosophy. I wouldn't for example think of the work of people like Galileo and Newton to be doing philosophy, but they took physics and put it on solid mathematical grounding. Similar remarks apply to Lavoisier or many people in other fields.

Replies from: utilitymonster
comment by utilitymonster · 2010-07-27T12:24:06.583Z · LW(p) · GW(p)

According to the The PhilPapers Survey, 25.8% of surveyed philosophers were consequentialists of some form. That makes it hard to argue for a consensus about maximizing expected utility.

There are a lot of philosophers who buy into maximizing expected utility, but aren't consequentialists. Proof: if you look at philosophers specializing in decision theory, 58% buy into consequentialism (link). Of this group, the vast majority would go for something very close to expected utility maximization.

Part of this has to do with consequentialism not having a crisp definition that fits philosophers' intuitive usage. Some think consequentialism must be agent-neutral and get off the boat there (but could still be EU maximizers). Others have preferences that could (if made more coherent) satisfy the axioms of decision theory, but don't think that the utility function that represents those preferences also orders outcomes in terms of goodness. I.e., these people want to be EU maximizers, but don't want to maximize goodness (maybe they want to maximize some weighting of goodness and keeping their hands clean).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-27T12:40:49.713Z · LW(p) · GW(p)

Valid point. The question asked was "Normative ethics: deontology, consequentialism, or virtue ethics?" (Note that I actually quoted from the wrong entry above; the correct value is 23.6%, but this makes little difference.) It seems fair to say that the vast majority of deontologists and virtue ethicists are not EU maximizers. So let's include everyone who picked consequentialist or "other" as an option; this should presumably overestimate the fraction we care about for this purpose. That's a total of 55.9%, only slightly over half. Is that a consensus?

comment by Sniffnoy · 2010-07-27T02:30:27.930Z · LW(p) · GW(p)

I don't understand the objection to the universal prior. Definition and computation are not the same thing. Yes, definition is subject to Berry's Paradox if you don't differentiate between definition, meta-definition, etc.; but computation is not. In particular, what you list as "a short description" is only computable if P is, which a universal prior won't be. (I would think the non-computability in itself would be more of a problem!)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-27T02:38:55.008Z · LW(p) · GW(p)

You probably have to read "is induction unformalizable?" before "Berry's Paradox and universal distribution". (There's a link to the former in the latter, but I'm guessing you skipped it.)

Replies from: Sniffnoy, Cyan, timtyler
comment by Sniffnoy · 2010-07-27T19:57:34.456Z · LW(p) · GW(p)

I may be repeating what Vladimir said here, but it seems to me your objection is basically "Oh shit! We can diagonalize!" (which, if we then collapse the levels, can get us a Berry paradox, among other things...)

So, yes, it follows that for any system of description we can think of, there's some potential truth that its corresponding "universal prior" (question: do those exist in general?) won't be able to infer. But the fact that this applies to any such system means we can't use it as a criterion to decide between them. At some point we have to just stop and say, "No, you are not allowed to refer to this concept in formulating descriptions." Maybe computability isn't the best stopping point, but you don't seem to have actually given evidence that would support any other such system over it.

Or am I just missing something big here?

Replies from: Dre, Tyrrell_McAllister
comment by Dre · 2010-07-28T07:10:24.929Z · LW(p) · GW(p)

The thing I got out of it was that human brain processes appear to be able to do something (assign a nonzero probability to a non-computable universe) that our current formalization of general induction cannot do, and we can't really explain why.

comment by Tyrrell_McAllister · 2010-07-28T05:14:40.764Z · LW(p) · GW(p)

I may be repeating what Vladimir said here, but it seems to me your objection is basically "Oh shit! We can diagonalize!"

...

Or am I just missing something big here?

I would also like to see this question addressed.

comment by Cyan · 2010-07-28T15:43:14.955Z · LW(p) · GW(p)

Let me distill what timtyler and Dre have written into a concisely stated question:

Premise 1: a human's brain implements a computable algorithm.
Premise 2: a human can update on evidence and become convinced that a halting oracle exists.
I'm not sure whether this is true: when exposed to evidence from an actually existing halting oracle, the posterior probability of an algorithm implementing the predictions of a Premise-2-type human will exceed that of an algorithm that assigns zero prior probability to an uncomputable universe.
But if so, then -- Conclusion: an agent with a Solomonoff prior can become convinced that the universe contains a halting oracle.

And my question: did I do a stupid?

Replies from: jimrandomh
comment by jimrandomh · 2010-07-28T16:36:38.800Z · LW(p) · GW(p)

The last step doesn't look valid to me. After updating on the evidence, you have a human who thinks they've seen a halting oracle, and a Solomonoff agent who thinks it has seen a highly improbable event that doesn't involve a halting oracle. The fact that the human assigns a higher probability to the observations is unconvincing, because the human could've just been extraordinarily lucky.

Also, there are entities that are impossible to distinguish from halting oracles using all the computational resources in the universe, which are not actually halting oracles. For example, a "can be proven to halt using a proof shorter than 3^^^3 bits" oracle has nonzero probability under the Solomonoff prior.

Replies from: Wei_Dai, Cyan
comment by Wei Dai (Wei_Dai) · 2010-07-28T18:45:11.697Z · LW(p) · GW(p)

Also, there are entities that are impossible to distinguish from halting oracles using all the computational resources in the universe, which are not actually halting oracles. For example, a "can be proven to halt using a proof shorter than 3^^^3 bits" oracle has nonzero probability under the Solomonoff prior.

"proof shorter than 3^^^3 bits" means "proof shorter than 3^^^3 bits in some formal system S", right? Then I can write a program P that iterates through all possible proofs in S of length < 3^^^3 bits, and create a list of all TMs provable to terminate in less than 3^^^3 bits in S. Then P checks to see if P itself is contained in this list. If so, it goes into an infinite loop, otherwise it halts.

Now we know that if S is sound, then P halts AND can't be proven to halt using a proof shorter than 3^^^3 bits in S. What happens if we feed this P to your impostor oracle?
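A minimal Python sketch of the construction above; the proof enumerator, the halting-proof checker, and the quine-style self-reference are hypothetical stand-ins rather than real libraries, and the bound is symbolic, since 3^^^3 bits of proofs could never actually be enumerated:

    def proofs_in_S(max_bits):
        """Hypothetical: yield every proof in formal system S shorter than max_bits."""
        raise NotImplementedError("stand-in for a real proof enumerator")

    def program_proved_to_halt(proof):
        """Hypothetical: if the proof shows some program halts, return that
        program's source text; otherwise return None."""
        raise NotImplementedError("stand-in for a real proof checker")

    MY_SOURCE = "..."  # P's own source text, obtainable by the usual quine trick
    BOUND = 3          # stands in for 3^^^3 bits; the real bound is never enumerable

    def P():
        provably_halting = set()
        for proof in proofs_in_S(BOUND):
            prog = program_proved_to_halt(proof)
            if prog is not None:
                provably_halting.add(prog)
        if MY_SOURCE in provably_halting:
            while True:  # diagonalize: loop forever iff S proves that P halts
                pass
        # otherwise, halt

If S is sound, P halts yet has no halting proof in S shorter than the bound, which is exactly the case that separates the impostor from a true halting oracle.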

Replies from: jimrandomh, timtyler
comment by jimrandomh · 2010-07-28T21:04:38.390Z · LW(p) · GW(p)

That works if you can guess S, or some S' that is more powerful than S. But might there be a formal system that couldn't be guessed this way? My first thought was to construct a parameterized system somehow, S(x) where S(x) can prove that S(y) halts when a trick like this is used; but that can be defeated by simply iterating over systems, if you figure out the parameterization. But suppose you tried a bunch of formal logics this way, and the oracle passed them all; how could you ever be sure you hadn't missed one? What about a proof system plus a tricky corner case detection heuristic that happens to cover all your programs?

comment by timtyler · 2010-07-30T17:06:28.424Z · LW(p) · GW(p)

Re: "I can write a program P that iterates through all possible proofs in S of length < 3^^^3 bits, and create a list of all TMs provable to terminate in less than 3^^^3 bits in S."

An unspecified halting analysis on 2^3^^^3 programs?!?

Not within the universe's expected lifespan, you can't!

Apart from that, this looks rather like an intractable rearrangement of:

http://en.wikipedia.org/wiki/Berry_paradox

comment by Cyan · 2010-07-28T16:56:17.520Z · LW(p) · GW(p)

The last step doesn't look valid to me.

The conclusion follows (I think) because the Solomonoff agent is computing the posterior probability of all algorithms, including the one that implements the same computation the human implements. So after updating, the Solomonoff agent's posterior probability for that algorithm should be higher than that of any other algorithm, and it draws the same conclusion the human does.

Also, there are entities that are impossible to distinguish from halting oracles using all the computational resources in the universe, which are not actually halting oracles.

Given this, and contra Wei Dai, I don't know how any human attempting to internally implement Bayesian inference could possibly become convinced that a halting oracle exists.

Replies from: jimrandomh, Wei_Dai
comment by jimrandomh · 2010-07-28T17:15:56.385Z · LW(p) · GW(p)

The conclusion follows (I think) because the Solomonoff agent is computing the posterior probability of all algorithms, including the one that implements the same computation the human implements. So after updating, the Solomonoff agent's posterior probability for that algorithm should be higher than that of any other algorithm, and it draws the same conclusion the human does.

You lost a level of indirection in there; computing the output of an algorithm does not mean believing that the output of that algorithm is true or even plausible. So the agent will correctly predict what the human will say, and believe that the human is mistaken.

Replies from: Cyan
comment by Cyan · 2010-07-28T17:40:17.429Z · LW(p) · GW(p)

The level of indirection isn't necessary: the Solomonoff agent's distribution is a weighted mixture of the outputs of all possible Turing machines, weighted according to the posterior probability of that Turing machine being the one that is generating the observations. Any Turing machine that predicts that the putative halting oracle gets one wrong on a particular trial gets downweighted to zero when that fails to occur.
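A toy Python sketch of the downweighting described above; a finite mixture of deterministic bit-predictors stands in for the full Solomonoff mixture, which is uncomputable, and all names here are illustrative:

    def update(predictors, weights, history, observed_bit):
        """Zero out any predictor that got the bit wrong, then renormalize."""
        new_w = [w if p(history) == observed_bit else 0.0
                 for p, w in zip(predictors, weights)]
        total = sum(new_w)
        return [w / total for w in new_w] if total > 0 else new_w

    def prob_next_bit_is_one(predictors, weights, history):
        """Mixture probability that the next bit is 1."""
        return sum(w for p, w in zip(predictors, weights) if p(history) == 1)

    # Example: two toy predictors.
    always_one = lambda history: 1
    alternate = lambda history: 1 if len(history) % 2 == 0 else 0

    predictors, weights, history = [always_one, alternate], [0.5, 0.5], []
    for bit in [1, 0, 1]:
        weights = update(predictors, weights, history, bit)
        history.append(bit)
    # After the second bit (a 0), always_one has weight 0 and alternate carries
    # all the weight, as in the downweighting described above.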

comment by Wei Dai (Wei_Dai) · 2010-07-28T20:06:29.730Z · LW(p) · GW(p)

The conclusion follows (I think) because the Solomonoff agent is computing the posterior probability of all algorithms, including the one that implements the same computation the human implements. So after updating, the Solomonoff agent's posterior probability for that algorithm should be higher than that of any other algorithm, and it draws the same conclusion the human does.

That looks like the same position that Eliezer took, and I think I already refuted it. Let me know if you've read the one-logic thread and found my argument wrong or unconvincing.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T21:14:41.301Z · LW(p) · GW(p)

The idea is that the universal prior is really about the observation-predicting algorithms that agents run, and not about predicting what will happen in the world. So, for any agent that runs a given anticipation-defining algorithm and rewards/punishes the universal-prior-based agent according to it, there is an anticipation-computing program that will obtain higher and higher probability in the universal-prior-based agent.

This, by the way, again highlights the distinction between what will actually happen and what a person anticipates: predictions are about capturing the concept of anticipation, an aspect of how people think, and are not about what can in fact happen.

comment by timtyler · 2010-07-27T19:31:15.110Z · LW(p) · GW(p)

Your objection to the universal prior is: "what if Occam's razor breaks"?!?

Engineers would not normally be concerned with such things. Occam's razor has been fine for hundreds of years. If it ever breaks, we will update our probability expectations with whatever the new distribution is. Surely that is no big deal.

Original priors are not too important, anyway. As soon as an agent is born, it is flooded with information from the world about the actual frequencies of things - and its original priors are quickly washed away and replaced with experience. If their experiences involve encountering the uncomputable, agents will simply update accordingly.

So: this topic seems like angels and pinheads - at least to me.

comment by Daniel_Burfoot · 2010-07-27T16:54:58.257Z · LW(p) · GW(p)

unreasonable effectiveness of mathematics in the natural sciences, especially in physics

Note that with respect to the power of mathematics, it's as easy to view the cup as half-empty as half-full. Here's Jaynes on the issue:

Phenomena whose very existence is unknown to the vast majority of the human race (such as the difference in ultraviolet spectra of Iron and Nickel) can be explained in exhaustive mathematical detail but all of modern science is practically helpless when faced with the complications of such a commonplace fact as the growth of a blade of grass.

Replies from: phob, Mitchell_Porter
comment by phob · 2010-07-27T17:01:23.126Z · LW(p) · GW(p)

A priori, as intelligent beings, we expect the universe at our scale to be immensely complex, since it produced us. I don't view our inability to fully explain phenomena at our scale as unreasonable ineffectiveness.

Replies from: Baughn
comment by Baughn · 2010-07-28T19:48:29.828Z · LW(p) · GW(p)

We should perhaps expect that, but I didn't actually do so until you mentioned it; not for the reasons you stated, at least not as quite that short and obvious a sentence.

Bravo.

comment by Mitchell_Porter · 2010-07-29T08:31:42.448Z · LW(p) · GW(p)

This statement from Jaynes sounds out of date. In the age of genomics and molecular biology, we can also go into exhaustive detail about the growth of a blade of grass.

comment by Vladimir_Nesov · 2010-07-27T10:26:16.255Z · LW(p) · GW(p)

I haven't studied the algorithmic probability literature in depth, but it naively seems to me that one can straightforwardly extend the idea of universal probability to arbitrary logical languages, and thus become able to assign plausibility to all mathematical structures. The principle is the same as with the universal prior, but a statement is valued by the length of the shortest equivalent statement (equivalent given no non-logical axioms), and consequently a class of structures gets its value from a statement describing it. This takes care of the problem of not noticing halting oracles and so on; you just need to let go of the standard programs-only theory/model.
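One way to write this down (a guess at the intended definition, not something stated above): value each sentence by the length of the shortest sentence provably equivalent to it from the logical axioms alone,

    P(\varphi) \propto 2^{-\ell(\varphi)}, \qquad \ell(\varphi) = \min \{ |\psi| : \vdash \varphi \leftrightarrow \psi \}

so that a class of structures inherits the plausibility of the shortest statement that picks it out.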

Replies from: Wei_Dai, JustinShovelain
comment by Wei Dai (Wei_Dai) · 2010-07-27T11:20:48.031Z · LW(p) · GW(p)

Consider a universal prior based on an arbitrary logical language L, and a device that can decide the truth value of any sentence in that language. Such a device has no finite description in L (according to Tarski's undefinability theorem), so the universal prior based on L would assign it zero probability. A human would also think that such a device is unlikely, but not infinitely so. (I gave a version of this argument in "is induction unformalizable?", which is linked to from "Berry's Paradox and universal distribution". Did you read it?)

Replies from: Vladimir_Nesov, MichaelVassar, timtyler
comment by Vladimir_Nesov · 2010-07-27T11:29:29.351Z · LW(p) · GW(p)

Consider a universal prior based on an arbitrary logical language L, and a device that can decide the truth value of any sentence in that language. Such a device has no finite description in L (according to Tarski's undefinability theorem), so the universal prior based on L would assign it zero probability.

What do you mean by "decide the truth value"? Most statements are neither valid nor unsatisfiable; there is no truth value for them. We are not assuming any models here, just assigning plausibility to (statement) elements of the language's Lindenbaum algebra.

Such a device has no finite description in L (according to Tarski's undefinability theorem), so the universal prior based on L would assign it zero probability.

Whatever model you have in mind, it will be categorized on one side of each statement of the language. We are assigning plausibility to statements, and hence to classes of structures, not to individual structures (which are like individual points for a continuous distribution).

Replies from: cousin_it, Wei_Dai, Wei_Dai
comment by cousin_it · 2010-07-27T13:57:25.826Z · LW(p) · GW(p)

Vladimir, ever since I joined this site I've been hearing many interesting not-quite-formal ideas from you, and as my understanding grows I can parse more and more of what you say. But you always seem to move on to the next idea before finishing the last one. I think you should spend way more effort on transforming your ideas into actual theorems with proofs and publishing them online. Sharing "intuitions" only gets us so far.

I have much less trouble reading math papers from unfamiliar fields than reading your informal arguments, because your arguments rely on unstated background assumptions much more than you seem to realize. Properly preparing your results for publication, even if they don't get actually published somewhere peer-reviewed, should fix this problem.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-27T14:19:08.164Z · LW(p) · GW(p)

I discuss things here because it's fun (and sometimes I learn useful lessons from expressing them here, in addition to my private notes), not because I consider it an effective means of communication. The not-quite-formal ideas are most of the time in fact not-quite-formal, rather than informally communicated formal ideas (often because I don't understand the relevant math, a failure I'm working on). The dropped ideas are those I either found useless/meaningless/wrong or those that never came up in the discussion after some point.

Communicating informal ideas is too difficult, specifically because they assume tons of unstated background, background that you not only have to state but convince people about. This is work both for the writer and for the reader. In addition, these informal ideas are not particularly valuable, which together with the difficulty of communication makes the whole endeavor a waste of effort.

(At least on LW, common background gives a chance for some remarks to be understood, without that background having to be delivered explicitly.)

The plan is for all these hunches to eventually come together in a framework for decision theory that should be transparently mathematical, and thus allow efficient communication with little hidden background.

comment by Wei Dai (Wei_Dai) · 2010-07-28T06:57:08.737Z · LW(p) · GW(p)

I'm afraid I still don't quite understand your idea. Can you explain it a bit more?

For example, suppose I come across a black box that takes a string as input and outputs a 0 or 1. What does your idea say is the probability that it's a halting-problem oracle, or a device that gives the truth value of statements in ZFC?

Or suppose I'm playing a game where I've been given a long string of bits and have to bet on the next one in the sequence. How do I use your idea to decide what to do?

(Feel free to pick your own examples if the above ones are not optimal for explaining your idea.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T08:54:17.275Z · LW(p) · GW(p)

What's ambiguous about the definition? For example, unsatisfiable statements will get about as much plausibility as valid ones, and for theories that are not finitely axiomatizable, plausibility is not defined (so you can't ask about the plausibility of some models, unless there is a categorical finite theory defining them). How to use this in decision-making is a special case of a more general open problem in ambient control.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-28T09:15:21.623Z · LW(p) · GW(p)

Part of what confuses me is that you said we're assigning plausibility to classes of structures, not individual structures, but it seems like we'd need to assign plausibility to individual structures in practice.

How to use this in decision-making is a special case of a more general open problem in ambient control.

Can't you give an example using a situation where Bayesian updating is non-problematic, and just show how we might use your idea for the prior with standard decision theory?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T09:45:10.289Z · LW(p) · GW(p)

If you can refer to an individual structure informally, then either there is a language that allows finitely describing it, or the ability to refer to that structure is an illusion and in fact you are only referring to some bigger collection of things (a property) of which the object you talk about is an element. If you can't refer to a structure, then you don't need plausibility for it.

Can't you give an example using a situation where Bayesian updating is non-problematic, and just show how we might use your idea for the prior with standard decision theory?

This is only helpful if something works with tricky mathematical structures, and in all cases that something seems to need to be preference. For example, you'd prefer to make decisions that are (likely!) consistent with a given theory (make it hold); then it helps if your decision and that theory are expressed in the same setting (language), and you can make decisions under logical uncertainty if you use the universal prior on statements. Normally, decision theories don't consider such cases, so I'm not sure how to relate. Introducing observations will probably be a mistake too.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-28T10:05:26.290Z · LW(p) · GW(p)

either there is a language that allows finitely describing it

But if you fix a language L for your universal prior, then there will be a more powerful metalanguage L' that allows finitely describing some structure, which can't be finitely described in the base language, right? So don't we still have the problem of the universal prior not really being universal?

I can't parse the second part of your response. Will keep trying...

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T10:08:24.856Z · LW(p) · GW(p)

But if you fix a language L for your universal prior, then there will be a more powerful metalanguage L' that allows finitely describing some structure, which can't be finitely described in the base language, right? So don't we still have the problem of the universal prior not really being universal?

It can still talk about all structures, but sometimes won't be able to point at a specific structure, only at a class containing it. You only need a language expressive enough to describe everything preference refers to, and no more. (This seems to be the correct solution to the ontology problem - describe preference as being about mathematical structures (more generally, concepts/theories), and ignore the question of the nature of reality.)

(Clarified the second part of the previous comment a bit.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-29T19:32:46.337Z · LW(p) · GW(p)

You only need a language expressive enough to describe everything preference refers to, and no more.

Why do you think that any logical language (of the sort we're currently familiar with) is sufficiently expressive for this purpose?

This seems to be the correct solution to the ontology problem - describe preference as being about mathematical structures (more generally, concepts/theories), and ignore the question of the nature of reality.

I'm not sure. One way to think about it is whether the question "what is the right prior?" is more like "what is the right decision theory?" or more like "what is the right utility function?" In What Are Probabilities, Anyway? I essentially said that I lean towards the latter, but I'm highly uncertain.

ETA: And sometimes I suspect even "what is the right utility function?" is really more like "what is the right decision theory?" than we currently believe. In other words there is objective morality after all, but we're currently just too stupid or philosophically incompetent to figure out what it is.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-29T20:11:57.074Z · LW(p) · GW(p)

Why do you think that any logical language (of the sort we're currently familiar with) is sufficiently expressive for this purpose?

The general idea seems right. If the existing languages are inadequate, they at least seem adequate for a full-featured prototype: figure out decision theory (and hence the notion of preference) in terms of standard logic, then move on as necessary to extend expressive power. This should stop at some point, since this exercise in formality is aimed at the construction of a program.

I'm not sure. One way to think about it is whether the question "what is the right prior?" is more like "what is the right decision theory?" or more like "what is the right utility function?" In What Are Probabilities, Anyway? I essentially said that I lean towards the latter, but I'm highly uncertain.

I don't clearly see the distinction you're making, so let me describe how I see it. Some design choices in constructing FAI would certainly be specific to our minds (values), but the main assumption of my approach to FAI is exactly that a large portion of the design choices in FAI can be specified as a natural category in human brains, something we can point a simple mirror at and say "there!", with the mirror doing most of the work in determining what goes into the FAI. I call the automated design choices "preference", and the mirror (the theory of the mirror) "decision theory", with the slot "notion of preference" to be filled in automatically. So, there is no question of which one of "decision theory" and "preference" is "essential"; both play a role. The worry is about the necessary size of the manually designed "decision theory" part, and whether it's humanly possible to construct it.

comment by Wei Dai (Wei_Dai) · 2010-07-27T11:57:03.531Z · LW(p) · GW(p)

Ok, I think I had misinterpreted your previous comment. I'll have to think over your idea.

comment by MichaelVassar · 2010-07-27T17:02:54.216Z · LW(p) · GW(p)

Maybe the human is a bad philosopher in this case and is simply wrong.

comment by timtyler · 2010-07-29T19:43:28.761Z · LW(p) · GW(p)

Re: "Consider a universal prior based on an arbitrary logical language L, and a device that can decide the truth value of any sentence in that language. Such a device has no finite description in L (according to Tarski's undefinability theorem), so the universal prior based on L would assign it zero probability."

It would never see the infinite description with the 0 probability, though - not enough space-time.

The evidence of the Oracle that the agent would get to see would be in the form of finite sensory inputs - and those would not be assigned zero probability. So: it could update on that evidence just fine - with no problems.

If the agent sees a tiny box with an Oracle inside it, that is just more finite sense-data about the state of the universe to update on - no problem - and no silly p=0 for an empirical observation.

comment by JustinShovelain · 2010-07-28T09:06:39.977Z · LW(p) · GW(p)

Are you trying to express the idea of adding new fundamental "terms" to your language, describing things like halting oracles and such, and then discounting their weight by the shortest statement of the term's properties expressed in the language that existed prior to including this additional "term"? If so, I agree that this is the natural way to extend priors to handle arbitrary describable objects such as halting oracles.

Stated another way: you start with a language L. Let the definition of an esoteric mathematical object E (say, a halting oracle) be D in the original language L. Then the prior probability of a program using that object is discounted by the description length of D. This gives us a prior over all "programs" containing arbitrary (describable) esoteric mathematical objects in their description.

I'm not yet sure how universal this approach is at allowing arbitrary esoteric mathematical objects (appealing to the Church-Turing thesis here would be assuming the conclusion) and am uncertain whether we can ignore the ones it cannot incorporate.
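In symbols (a sketch of the restatement above, not a definition from the literature): if D is the shortest definition in the base language L of the esoteric object E, and q is a "program" allowed to invoke E, then its prior weight is roughly

    P(q) \approx 2^{-(|q| + |D|)}

i.e. the usual length penalty for q, plus an extra penalty of |D| bits for importing the new primitive.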

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T09:51:22.046Z · LW(p) · GW(p)

Why "new terms"? If the language can finitely express a concept, my scheme gives that concept plausibility. Maybe this could be extended to lengths of programs that generate axioms for a given theory (even enumerable sets of axioms), rather than lengths of individual finite statements, but I guess that can be stated within some logical language just as well.

Replies from: JustinShovelain
comment by JustinShovelain · 2010-07-28T19:57:39.325Z · LW(p) · GW(p)

By new "term" I meant to make the clear that this statement points to an operation that cannot be done with the original machine. Instead it calls this new module (say a halting oracle) that didn't exist originally.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-28T21:09:08.733Z · LW(p) · GW(p)

What machine???

comment by nhamann · 2010-07-27T06:46:27.627Z · LW(p) · GW(p)

Do you think that the best way to achieve solutions to meta-philosophy is to actually do philosophy? I ask because, like other posters, I'm skeptical of the magnitude of the contributions the field of philosophy has made to philosophical insight. I'm definitely biased, being not nearly as familiar with philosophy as I am with science, but it seems to me that math and science do the great majority of the heavy lifting.

This is not to downplay the importance of philosophy in general, because I think Daniel Dennett is spot on when he says "[T]here is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." (I think there's a good argument that the entire symbolic AI program was a major philosophy-fail.) It's really more a question of methodology.

The issue I see is that giving a satisfactory answer to a question like "what is the nature of philosophy, and how do we use it?" likely involves a whole hell of a lot of neuroscience, psychology, linguistics, etc. research, and it seems unlikely that much can be gained by a philosophical approach to the question (which I assume involves surveying all the relevant scientific and philosophical literature, and then making reasoned arguments for why some aspect of how philosophy is used must be a certain way).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-27T07:06:52.638Z · LW(p) · GW(p)

Do you think that the best way achieve solutions to meta-philosophy is to actually do philosophy?

I don't know, but I think it's at least plausible that the answer is yes. This is one of those situations where we should probably take multiple approaches simultaneously.

The issue I see is that giving a satisfactory answer to a question like "what is the nature of philosophy, and how do we use it?" likely involves a whole hell of a lot of neuroscience, psychology, linguistics, etc. research [...]

Maybe, but before von Neumann and Morgenstern invented expected utility maximization, it might have seemed like we'd need a whole lot of neuroscience, psychology, linguistics, etc., to figure out decision theory, but that would have turned out to be wrong.

Replies from: nhamann
comment by nhamann · 2010-07-27T19:32:19.503Z · LW(p) · GW(p)

This is one of those situations where we should probably take multiple approaches simultaneously.

This is reasonable, and I agree.

comment by avturchin · 2023-09-13T15:46:54.722Z · LW(p) · GW(p)

The unreasonable effectiveness of philosophy could be explained by anthropics: I - as the writer of this comment - am selected only from minds that are capable of complex reflective thinking on abstract topics.

comment by Chris_Leong · 2023-09-02T04:59:46.150Z · LW(p) · GW(p)

My position is:

a) If you take a Platonic position on maths, then it's surprising how successful maths has been, but we shouldn't do that. For a hint on how to avoid Platonism in favour of something much more grounded, see my post on the Law of Identity [LW · GW].

b) Philosophy hasn't been that successful. We have a much better idea of the range of positions that people could take and the standard arguments for and against, but we haven't made much progress on figuring out what is true.

c) If we expect our minds to have evolved to have a particular philosophical position baked in, it might seem strange that we can do philosophy at all. However, this is answered for me by thinking through how the brain develops. For example, we actually don't start off with the concept of object permanence, but we typically learn it after a particular age. Given that we already have to learn our basic adult ontology, it's unsurprising that we can use some of the same mental tools to bootstrap up to more advanced ontologies.

Why might we have evolved to learn some of these basic ontological conceptions, rather than just having them programmed in? One reason might be that evolution has an information bottleneck in terms of only being able to write so much information into our DNA. Having these ontological conceptions pre-programmed in would mean less DNA code space available for other functions.

But beyond this, since evolution can't plan ahead, it seems strange to me to expect it to produce a set of perfectly consistent pre-programmed beliefs, rather than something much messier. Instead, I'd expect various heuristics to evolve independently, and then for something to evolve that tries to produce something consistent out of them. And that's pretty much what we're doing when we're doing philosophy.

comment by NancyLebovitz · 2010-07-28T01:18:38.980Z · LW(p) · GW(p)

I don't think philosophy is unreasonably effective. It's at least plausible that we've got some ability to become conscious of relevant similarities. This ability is useful in a wide range of contexts. [1]

Once you have the ability, it's unsurprising if it's effective when an effort is made to apply it to broad commonalities of how the world and experience work.

[1] Even if we were mostly selected for talking each other into things, I don't know how conscious the process is for people who are naturally good at it. Anyone have information?

My hypothesis is that while there are elementary skills which are unconscious for those with natural skills, there are more complex problems that such people handle consciously.

comment by Roko · 2010-07-27T19:23:42.566Z · LW(p) · GW(p)

So, if human beings have a philosophical oversight that is self-shadowing, like that of a Bayesian that keeps two-boxing on Newcomb and defecting against copies of itself in PD, we wouldn't notice it. Ever.

So the question is not "why don't we have any self-shadowing blind-spots", it is "why do we have a nontrivial set of non-self-shadowing blind spots?" (non-self-shadowing blind spots being those that we can actually notice at some point)

Replies from: Wei_Dai, Roko
comment by Wei Dai (Wei_Dai) · 2010-07-28T08:15:24.572Z · LW(p) · GW(p)

So the question is not "why don't we have any self-shadowing blind-spots", it is "why do we have a nontrivial set of non-self-shadowing blind spots?"

Agreed, but I think it's also, "why do we have fewer self-shadowing blind-spots than we might expect, given what we know about how evolution works?"

And while you're right that we can't be sure at this point that we have zero self-shadowing blind-spots (philosophical oversights that we'll never detect), I think there's a reasonable chance that's in fact the case. ETA: My argument for this belief is that one possible reason why we have fewer self-shadowing blind-spots than we might expect is that there is a single "ability to philosophize" that is sufficient (given enough raw intelligence) to overcome all such blind spots.

Replies from: Yvain, Roko
comment by Scott Alexander (Yvain) · 2010-07-28T20:18:13.717Z · LW(p) · GW(p)

The opposite explanation also works: we use so many unrelated heuristics that there's no single area where they all fail simultaneously.

Replies from: Will_Sawin
comment by Will_Sawin · 2010-08-03T23:59:24.650Z · LW(p) · GW(p)

If some of the heuristics are failing and some are succeeding, they are producing different results. Which process determines which results are correct? Should this be called "philosophical ability"?

(non-rhetorical questions)

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-08-05T23:06:09.541Z · LW(p) · GW(p)

It doesn't necessarily have to be centralized. Some heuristics could have different weights than others, and the stronger ones win out. Or there could be a reflective equilibrium among them.

...not that there's any evidence for any of this.

comment by Roko · 2010-07-28T12:13:57.017Z · LW(p) · GW(p)

we can't be sure at this point that we have zero self-shadowing blind-spots, I think there's a reasonable chance that's in fact the case.

there is a single "ability to philosophize" that is sufficient (given enough raw intelligence) to overcome all such blind spots.

So, one way to attack this position would be to construct a toy-model of a system that has an "ability to philosophize" but that still fails in some cases.

An example would be a Bayesian AI that self-modifies to one-box on all Newcomblike problems where Omega examines it after that self-modification event. So it realizes that it is better to be "rationally irrational", but only in a limited sense.

A less controversial example is the case where some devout Catholics were convinced that even thinking about whether or not God might not exist would cause them to be sent straight to hell.

For any agent or community of agents, there are some cherished beliefs that the agent/community refuses to challenge. Sometimes for good reason. Even I have some, and LW certainly has some.

In this latter case, a false belief shields itself from criticism by convincing the community or agent involved that even the act of questioning the belief is of highly negative value.

comment by Roko · 2010-07-27T19:51:26.766Z · LW(p) · GW(p)

And my own answer to this question is that we're a fragmented philosopher, with many different humans, each of whom has many different intuitions. In the example given of Bayesian Updating versus UDT, we have both timeless intuitions and Bayesian ones. The timeless/updateless/acausal intuitions come from the human intuitions about pride, dignity, honor, etc, which were developed because humans interacted with other humans.

Replies from: Wei_Dai, daedalus2u
comment by Wei Dai (Wei_Dai) · 2010-07-28T08:15:44.506Z · LW(p) · GW(p)

The timeless/updateless/acausal intuitions come from the human intuitions about pride, dignity, honor, etc, which were developed because humans interacted with other humans.

Umm... I first thought of the updateless idea while trying to figure out anthropic reasoning. I fail to see how that had anything to do with pride, dignity, honor, etc.

comment by daedalus2u · 2010-07-28T02:06:26.811Z · LW(p) · GW(p)

I think this is correct. Using my formulation, the Bayesian system is what I call a "theory of reality"; the timeless one is the "theory of mind", which I see as the trade-off along the autism spectrum.

comment by Liron · 2010-07-27T06:56:37.669Z · LW(p) · GW(p)

For example, a Bayesian expected utility maximizer programmed with a TM-based universal prior would not be able to realize that the prior is wrong.

What does it mean to "realize that a prior is wrong"? The mechanics of belief change in a Bayesian agent are fixed by the prior itself.

Nor would it be able to see that Bayesian updating is the wrong thing to do in some situations.

Bayesian updating is always the right thing to do. The only question is how to approximate a proper Bayesian update using efficient data structures and algorithms.

. . . it may be that there is a bunch of low-hanging fruit hiding just around the corner.

I would stay in the fruit tree metaphor and say they might be "hanging right over our heads".

Replies from: cousin_it, Blueberry, Wei_Dai
comment by cousin_it · 2010-07-27T08:17:24.115Z · LW(p) · GW(p)

A prior can be wrong if it assigns zero weight to the true state of the world. For example, if our universe does in fact contain halting-problem oracles, a Bayesian superintelligence with a TM-based universal prior will never be able to believe that, no matter how many hard math problems get successfully solved by this weird black box. But a human would converge on the true belief pretty quickly. All this stuff, and more, is in Wei Dai's examples.
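The first sentence is the familiar point that Bayes' rule cannot rescue a hypothesis whose prior is exactly zero: for any evidence e,

    P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e)} = 0 \quad \text{whenever } P(h) = 0

so no finite stream of solved problems can lift a TM-based prior's credence in an uncomputable world above zero.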

Replies from: Eliezer_Yudkowsky, PhilGoetz, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-27T10:39:14.471Z · LW(p) · GW(p)

AIXI with a TM-based universal prior will always produce predictions about the black box, and predictions about the rest of the universe based on what the black box says, that are just as good as any prediction the human can come up with. After all, the human is in there somewhere. If you think of AIXI as embodying all computable ways of predicting the universe, rather than all computable models of the universe, you may begin to see that it's not quite as narrow as you thought.

Replies from: Wei_Dai, cousin_it, cousin_it, DefectiveAlgorithm, Wei_Dai, Vladimir_Nesov, Will_Sawin
comment by Wei Dai (Wei_Dai) · 2010-07-27T10:54:25.032Z · LW(p) · GW(p)

Eliezer, that was your position in this thread, and I thought I had convinced you that it was wrong. If that's not the case, can you please re-read my argument (especially the last few posts in the thread) and let me know why you're not convinced?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-28T08:33:04.654Z · LW(p) · GW(p)

So... the part I found potentially convincing was that if you ran off a logical view of the world instead of a Solomonoff view (i.e., beliefs represented in e.g. higher-order logic instead of Turing machines) and lived in a hypercomputable world then it might be possible to make better decisions, although not better predictions of sensory experience, in some cases where you can infer by reasoning symbolically that EU(A) > EU(B), presuming that your utility function is itself reasoning over models of the world represented symbolically. On the other hand, cousin_it's original example still looks wrong.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-28T09:08:53.805Z · LW(p) · GW(p)

not better predictions of sensory experience

You can make better predictions if you're allowed to write down your predictions symbolically, instead of using decimal numbers. (And why shouldn't that be allowed?)

ETA: I made this argument previously in the one-logic thread, in this post.

ETA 2: I think you can also make better (numerical) predictions of the form "this black box is a halting-problem oracle" although technically that isn't a prediction of sensory experience.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-29T20:32:26.134Z · LW(p) · GW(p)

Why would you want to make any predictions at all? Predictions are not directly about value. It doesn't seem that there is a place for the human concept of prediction in a foundational decision theory.

Replies from: Wei_Dai, timtyler
comment by Wei Dai (Wei_Dai) · 2010-07-29T20:41:06.018Z · LW(p) · GW(p)

It doesn't seem that there is a place for the human concept of prediction in a foundational decision theory.

I think that's right. I was making the point about prediction because Eliezer still seems to believe that predictions of sensory experience is somehow fundamental, and I wanted to convince him that the universal prior is wrong even given that belief.

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2010-07-29T20:44:59.949Z · LW(p) · GW(p)

Still, the universal prior does seem to be a universal way of eliciting what the human concept of prediction (expectation, probability) is, to the limit of our ability to train such a device, for exactly the reasons Eliezer gives: whatever concept we use, it's in there, among the programs the universal prior weights.

ETA: On the other hand, the concept thus reconstructed would be limited to talking about observations, and so won't be a general concept, while human expectation is probably more general than that, and you'd need a general logical language to capture it (and a language of unknown expressive power to capture it faithfully).

ETA2: Predictions might still be a necessary concept for expressing the decisions that the agent makes, to connect formal statements with what the agent actually does, and so express what the agent actually does as formal statements. We might have to deal with reality because the initial implementation of FAI has to be constructed specifically in reality.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-29T21:04:27.704Z · LW(p) · GW(p)

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this? Or in other words, the only reason the standard proofs of Solomonoff prediction's optimality go through is that they assume predictions are represented using numerals?

Replies from: timtyler, LucasSloan, ocr-fork
comment by timtyler · 2010-07-31T21:40:29.099Z · LW(p) · GW(p)

Re: "what about my argument that a human can [adapt its razor a little] and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?"

There are at least two things "Solomonoff predictor" could refer to:

  • An intelligent agent with Solomonoff-based priors;

  • An agent who is wired to use a Solomonoff-based razor on their sense inputs;

A human is more like the first agent. The second agent is not really properly intelligent - and adapts poorly to new environments.

comment by LucasSloan · 2010-07-29T21:21:15.503Z · LW(p) · GW(p)

Humans are (or can be represented by) Turing machines. All halting Turing machines are incorporated in AIXI. Therefore, anything humans can do to predict something more effectively than a "mere machine" can is already incorporated into AIXI.

More generally, anything you represent symbolically can be represented using binary strings. That's how that string you wrote got to me in the first place. You converted the Turing operations in your head into a string of symbols, a computer turned that into a string of digits, my computer turned it back into symbols, and my brain used computable algorithms to make sense of them. What makes you think that any of this is impossible for AIXI?

Replies from: Wei_Dai, Unknowns
comment by Wei Dai (Wei_Dai) · 2010-07-29T21:36:26.288Z · LW(p) · GW(p)

Am I going crazy, or did you just basically repeat what Eliezer, Cyan, and Nesov said without addressing my point?

Do you guys think that you understand my argument and that it's wrong, or that it's too confusing and I need to formulate it better, or what? Everyone just seems to be ignoring it and repeating the standard party line....

ETA: Now reading the second part of your comment, which was added after my response.

ETA2: Clearly I underestimated the inferential distance here, but I thought at least Eliezer and Nesov would get it, since they appear to understand the other part of my argument about the universal prior being wrong for decision making, and this seems to be a short step. I'll try to figure out how to explain it better.

Replies from: LucasSloan
comment by LucasSloan · 2010-07-29T21:40:54.966Z · LW(p) · GW(p)

If 4 people all think you're wrong for the same reason, either you're wrong or you're not explaining yourself. You seem to disbelieve the first, so try harder with the explaining.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-29T22:00:41.124Z · LW(p) · GW(p)

Didn't stop 23+ people from voting up his article ... (21 now; I and someone else voted it down)

Replies from: LucasSloan
comment by LucasSloan · 2010-07-29T22:08:31.927Z · LW(p) · GW(p)

Well, people expect him to be making good points, even when they don't understand him (ie, I don't understand UDT fully, but it seems to be important). Also, he's advocating further thinking, which is popular around here.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-29T22:22:55.639Z · LW(p) · GW(p)

Well, people expect him to be making good points, even when they don't understand him

And I really, really wish people would stop doing that, whether it's for Wei_Dai or anyone else you deem to be smart.

Folks, you may think you're doing us all a favor by voting someone up because they're smart, but that policy has the effect of creating an information cascade, because it makes an inference bounce back, accumulating arbitrarily high support irrespective of its relationship to reality.

The content of a post or comment should screen off any other information about its value [1], including who made it.

[1] except in obvious cases like when someone is confirming that something is true about that person specifically

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-30T02:10:29.050Z · LW(p) · GW(p)

Seconded. Please only vote up posts you both understand and approve of.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-30T04:16:52.296Z · LW(p) · GW(p)

Please only vote up posts you both understand and approve of.

I agree, but would like to point out that I don't see any evidence that people aren't already doing this. As far as I can tell, Lucas was only speculating that people voted up my post based on the author. Several other of my recent posts have fairly low scores, for example. (All of them advocated further thinking as well, so I don't think that's it either.)

comment by Unknowns · 2010-08-01T13:23:57.273Z · LW(p) · GW(p)

The fact that AIXI can predict that a human would predict certain things does not mean that AIXI can agree with those predictions.

Replies from: LucasSloan
comment by LucasSloan · 2010-08-01T20:05:03.070Z · LW(p) · GW(p)

In the limit, even if that one human is the only thing in all of the hypotheses that AIXI has under consideration, AIXI will be predicting precisely as that human does.

comment by ocr-fork · 2010-07-29T22:02:50.045Z · LW(p) · GW(p)

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?

BB(100) is computable. Am I missing something?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-29T22:11:51.545Z · LW(p) · GW(p)

BB(100) is computable. Am I missing something?

Maybe... by BB I mean the Busy Beaver function Σ as defined in this Wikipedia entry.

Replies from: ocr-fork
comment by ocr-fork · 2010-07-29T22:14:28.737Z · LW(p) · GW(p)

Right, and...

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).

So why can't the universal prior use it?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-29T22:30:21.549Z · LW(p) · GW(p)

Sorry, I should have used BB(2^100) as the example. The universal prior assigns the number BB(2^100) a very small weight, because the only way to represent it computably is by giving a 2^100-state Turing machine. A human would assign it a much larger weight, referencing it by its short symbolic representation.

Until I write up a better argument, you might want to (assuming you haven't already) read this post where I gave a decision problem that a human does (infinitely) better than AIXI.
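To put the weight comparison in rough symbols, following the comment's own estimate rather than an exact calculation: the shortest program outputting BB(2^100) needs on the order of 2^100 states, while the symbolic name "BB(2^100)" is only a few characters long, so

    2^{-K(\mathrm{BB}(2^{100}))} \approx 2^{-2^{100}} \ll 2^{-\ell}, \quad \text{where } \ell \text{ is the length in bits of the symbolic description}

which is the gap between the universal prior's weight and the weight a human would assign.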

Replies from: LucasSloan, timtyler
comment by LucasSloan · 2010-07-29T22:41:52.865Z · LW(p) · GW(p)

I don't think I understood that fully, but there seems to be a problem with your theory. The human gets to start in the epistemically advantaged position of knowing that the game is based on a sequence of busy beavers and knowing that they are a very fast-growing function. AIXI is prevented from knowing this information and has to start as if from a blank canvas. The reason we use an Occamian prior for AIXI is that we refuse to tailor it to a specific environment. If your logic is sound, then yes, it does do worse when it is dropped into an environment where it is paired with a human with an epistemic advantage, but it would beat the human across the space of possible worlds.

Another problem is that you seem to assume that the only hypothesis in the entire set that gives useful predictions is the hypothesis which is, in fact, correct. There are plenty of other functions which correctly predict arbitrarily large numbers of 1's with much less complexity, and which can give AIXI an overall probability weighting under which it is using a usefully correct model of its universe, if not a fully correct one.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-29T22:56:24.577Z · LW(p) · GW(p)

How a human might come to believe, without being epistemically privileged, that a sequence is probably a sequence of busy beavers, is a deep problem, similar to the problem of distinguishing halting oracles from impostors. (At least one mathematical logician who has thought deeply about the latter problem thinks that it's doable.)

But in any case, the usual justification for AIXI (or adopting the universal prior) is that (asymptotically) it does as well as or better than any computable agent, even one that is epistemically privileged, as long as the environment is computable. Eliezer and others were claiming that it does as well as or better than any computable agent, even if the environment is not computable, and this is what my counter-example disproves.

Replies from: LucasSloan, ocr-fork
comment by LucasSloan · 2010-07-29T23:43:11.165Z · LW(p) · GW(p)

So you think that we need to rethink our theory of what perfect optimization is, in order to take into account the possibility we live in an uncomputable universe? Even if you are correct in your example, there is no reason to suppose that your human does better in the space of possible uncomputable universes than AIXI, as opposed to better in that one possible (impossible) universe.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-30T00:30:31.249Z · LW(p) · GW(p)

So you think that we need to rethink our theory of what perfect optimization is, in order to take into account the possibility we live in an uncomputable universe?

Yes.

Even if you are correct in your example, there is no reason to suppose that your human does better in the space of possible uncomputable universes than AIXI, as opposed to better in that one possible (impossible) universe.

This seems pretty easy, given the same level of raw computing power available to AIXI (otherwise the human gets screwed in the majority of cases simply because he doesn't have enough computing power).

For example, I can simply modify AIXI with a rule that says "if you've seen a sequence of increasingly large numbers that can't be explained by any short computable rule, put some weight on it being BB(1)...BB(2^n)..." (and also modify it to reason symbolically about expected utilities instead of comparing numbers), and that will surely be an improvement over all possible uncomputable universes. (ETA: Strike that "surely". I have to think this over more carefully.)

How to make an optimal decision algorithm (as opposed to just improving upon AIXI) is still an open problem.

Replies from: LucasSloan
comment by LucasSloan · 2010-07-30T01:33:13.119Z · LW(p) · GW(p)

For example, I can simply modify AIXI with a rule that says "if you've seen a sequence of increasingly large numbers that can't be explained by any short computable rule, put some weight into it being BB(1)...BB(2^n)... (and also modify it to reasoning symbolically about expected utilities instead of comparing numbers) and that will surely be an improvement over all possible uncomputable universes. (ETA: Strike that "surely". I have to think this over more carefully.)

This is what I dislike about your logic. You create a situation where (you think) AIXI fails, but you fail to take into account the likelihood of being in that situation versus being in a similar-looking one. I can easily see a human seeing a long series of ones, with some zeros at the beginning, and saying "ah-ha, this must be the result of a sequence of busy beavers", when all he's actually seeing is 3^^^3 minus his telephone number or something. AIXI can lose in really improbable universes, because it's designed to work in the space of universes, not some particular one. By modifying the rules, you can make it better in specific universes, but only by reducing its performance in similar-seeming universes.

comment by ocr-fork · 2010-07-29T23:12:49.763Z · LW(p) · GW(p)

What about the agent using Solomonoff's distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. This includes for example 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.

I don't understand how the bolded part follows. The best explanation by round BB(2^n) would be "All 1's except for the Busy Beaver numbers up to 2^n", right?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-29T23:35:59.934Z · LW(p) · GW(p)

Yes, that's the most probable explanation according to the Solomonoff prior, but AIXI doesn't just use the most probable explanation to make decisions; it uses all computable explanations that haven't been contradicted by its input yet. For example, "All 1's except for the Busy Beaver numbers up to 2^n and 2BB(2^n)" is only slightly less likely than "All 1's except for the Busy Beaver numbers up to 2^n" and is compatible with its input so far. The conditional probability of that explanation, given what it has seen, is high enough that it would bet on 0 at round 2BB(2^n), whereas the human wouldn't.

Replies from: ocr-fork
comment by ocr-fork · 2010-07-30T00:23:53.234Z · LW(p) · GW(p)

Oh.

I feel stupid now.

EDIT: Wouldn't it also break even by predicting the next Busy Beaver number? "All 1's except for BB(1...2^n+1)" is also only slightly less likely. EDIT: I feel more stupid.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-30T00:40:04.024Z · LW(p) · GW(p)

The next number in the sequence is BB(2^(n+1)), not BB(2^n+1).

ETA: In case more explanation is needed, it takes O(2^n) more bits to computably describe BB(2^(n+1)), even if you already have BB(2^n). (It might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.)

Since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n, AIXI actually will not bet on 0 when BB(2^(n+1)) comes around, and all those 0s that it does bet on are simply "wasted".

Replies from: ocr-fork
comment by ocr-fork · 2010-07-30T01:28:34.685Z · LW(p) · GW(p)

it might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.

You can find it by emulating the Busy Beaver.

comment by timtyler · 2010-07-31T22:13:47.878Z · LW(p) · GW(p)

BB(100) is computable - and BB(2^100) is computable too :-(

comment by timtyler · 2010-07-31T21:33:02.952Z · LW(p) · GW(p)

Surely predictions of sensory experience are pretty fundamental. To understand the consequences of your actions, you have to be able to make "what-if" predictions.

comment by timtyler · 2010-07-31T21:30:59.262Z · LW(p) · GW(p)

Re: "It doesn't seem that there is a place for the human concept of prediction in a foundational decision theory."

You can hardly steer yourself effectively into the future if you don't have an understanding of the consequences of your actions.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-01T08:01:10.122Z · LW(p) · GW(p)

You can hardly steer yourself effectively into the future if you don't have an understanding of the consequences of your actions.

Yes, it might be necessary exactly for that purpose (though consequences don't reside just in the "future"), but I don't understand this well enough to decide either way.

Replies from: timtyler
comment by timtyler · 2010-08-01T08:51:57.919Z · LW(p) · GW(p)

I checked with the dictionary. It had:

  1. the effect, result, or outcome of something occurring earlier: The accident was the consequence of reckless driving.

  2. an act or instance of following something as an effect, result, or outcome.

Consequences not being in the future seems to be a curious concept to me - though I understand that Feynman dabbled with the idea on sub-microscopic scales.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-01T08:59:44.271Z · LW(p) · GW(p)

I think we've got it covered with Newcomb's Problem (consequences in the past) and Counterfactual Mugging (consequences in another possible world). And there is still greater generality with logical consequences.

Replies from: timtyler
comment by timtyler · 2010-08-01T09:09:08.163Z · LW(p) · GW(p)

FWIW, I wouldn't classify Newcomb's Problem as having to do with "consequences in the past" or Counterfactual Mugging as having to do with "consequences in another possible world".

For me, "consequences" refers to the basic cause-and-effect relationship - and consequences always take place downstream.

Anticipating something doesn't really mean that the future is causally affecting the past. If you deconstruct anticipation, it is all actually based on current and previous knowledge.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-01T09:18:44.290Z · LW(p) · GW(p)

You are arguing definitions (with the use of a dictionary, no less!). The notion of consequences useful for decision theory is a separate idea from causality of physics.

Replies from: timtyler
comment by timtyler · 2010-08-01T09:27:05.421Z · LW(p) · GW(p)

Is "consequences" really a good term for what you are talking about?

It seems as though it is likely to cause confusion to me.

Does anyone else use the term in this way?

comment by cousin_it · 2010-07-29T19:55:30.003Z · LW(p) · GW(p)

That took two days to parse, but now I understand how it works. You're right. I apologize to everyone for having defended an incorrect position.

My misconception seems to be popular, though. Maybe someone should write a toplevel post on the right way to think about the universal prior. Though seeing that some other people are even more hopelessly confused than me, and seem to struggle with the idea of "prior" per se, I'm not sure that introducing even more advanced topics would help.

comment by cousin_it · 2010-07-27T11:05:04.754Z · LW(p) · GW(p)

Yes, the human is in there somewhere, but so are many other, incorrect predictors. To adopt their predictions as its own, AIXI needs to verify them somehow, but how? (I'm very confused here and may be missing something completely obvious.)

ETA: yeah, this is wrong, disregard this.

comment by DefectiveAlgorithm · 2014-01-25T13:21:15.668Z · LW(p) · GW(p)

I don't know much about Solomonoff induction, so I may be wrong about this, but is it not the case that the universal prior only takes into account computable functions which exactly output the sensory data? If that is the case, consider the following scenario:

We have a function F which takes an unbounded natural number N as input and is provably uncomputable for all valid inputs. We have a computable algorithm A which provably outputs lower and upper bounds for F for any valid input. Furthermore, it is provable that no computable algorithm can provably produce tighter bounds on F's output than A (regardless of N). We can see that A outputs the bounds for a closed interval in the set of real numbers. We know that all such intervals (for which the lower and upper bounds are not equal) are uncountable. Now imagine a physical hypercomputer which outputs F(0), then F(1), then F(2), etc. to infinity. No computable algorithm will be able to predict the next symbol output by this hypercomputer, but there will be computable minds capable of recognizing the pattern and so of using A to place stronger bounds on its predictions of future sensory experience than AIXI can.

EDIT: Actually, this scenario might be broken. Specifically, I'm not sure what it physically means to 'output' an uncomputable number, and I think that AIXI's problem dissolves if we limit ourselves to the computable (and thus countable) subsets of the output intervals.

comment by Wei Dai (Wei_Dai) · 2010-07-28T08:22:25.305Z · LW(p) · GW(p)

If you're not convinced by my argument but can't explain why or don't have time to, can you please say that as well? Right now I'm not sure if you were convinced, and then forgot the discussion and went back to your previous position, or what.

comment by Vladimir_Nesov · 2010-07-27T10:45:31.500Z · LW(p) · GW(p)

Is there a good exposition of this semantics (more generally, for algorithmic probability)?

comment by Will_Sawin · 2010-08-01T18:07:34.239Z · LW(p) · GW(p)

AIXI only includes all prediction models that are 100% accurate. I don't think the human is capable of coming up with 100% accurate predictions.

Thought: The human can't make predictions at all about the black box, but he can use it to predict the outcomes of various computable processes. AIXI can already predict the outcomes of computable processes, and doesn't need the black box.

comment by PhilGoetz · 2010-07-27T19:59:02.908Z · LW(p) · GW(p)

For example, if our universe does in fact contain halting problem oracles, the Bayesian superintelligence with a TM-based universal prior will never be able to believe that.

I think this problem would vanish if you spelled out what "believe" means. The Bayesian superintelligence would quickly learn to trust the opinion of the halting problem oracle; therefore, it would "believe" it.

Replies from: timtyler
comment by timtyler · 2010-07-30T17:02:29.703Z · LW(p) · GW(p)

I am having a few problems in thinking of a sensible definition of "believe" in which the superintelligence would fail to believe what its evidence tells it is true. It would be especially obvious if the machine was very small. The superintelligence would just use Occam's razor - and figure it out.

Of course, one could imagine a particularly stupid agent, that was too daft to do this - but then it would hardly be very much of a superintelligence.

comment by timtyler · 2010-07-27T19:44:25.883Z · LW(p) · GW(p)

P(true) = 0 - or p(false) = 1 - seem like trivial mistakes to avoid.

A "expected utility maximizer programmed with a TM-based universal prior" would surely not care very much if it was programmed with wrong priors after a while - since it would not be depending on the details of its priors much any more - due to having a big mountain of experience concerning what the actual expected frequency of events was. Its priors would be swamped by data - unless its priors were completely crazy.

The OP must be thinking of some different type of construct from me - and he doesn't seem to explain what it is.

Replies from: cousin_it
comment by cousin_it · 2010-07-27T19:52:02.348Z · LW(p) · GW(p)

P(true) = 0 or p(false) = 1 seem like trivial mistakes to avoid.

Unfortunately they aren't. A universal prior must enumerate all the ways a universe could possibly be. If your prior is based on Turing machines that compute universes, but our actual universe is uncomputable, you're screwed forever no matter what data comes in. Maybe the problem can be solved by a better universal prior, as Nesov suggests elsewhere in the thread, but as far as I understand it's an open problem right now.

ETA: pretty much this whole comment is wrong. The prior is over algorithms that generate sequences of sensory input, not over algorithms that define universes. This is an important distinction, sorry for missing it when I wrote this comment.

Replies from: SilasBarta, PhilGoetz, timtyler, timtyler
comment by SilasBarta · 2010-07-27T20:39:52.249Z · LW(p) · GW(p)

A universal prior must enumerate all the ways a universe could possibly be. If your prior is based on Turing machines that compute universes, but our actual universe is uncomputable, you're screwed forever no matter what data comes in.

Being forced to use the nearest computable approximation to an uncomputable function does not make you screwed forever.

Replies from: cousin_it
comment by cousin_it · 2010-07-27T20:42:01.634Z · LW(p) · GW(p)

That depends on the uncomputable function. Some can make you very well screwed indeed. It's all there in Wei Dai's examples on everything-list and one-logic; I really wish people would read them - maybe we'd have an actual discussion then. Sorry for sounding harsh.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-27T20:49:08.985Z · LW(p) · GW(p)

That depends on the uncomputable function. Some can make you very well screwed indeed.

Right, but it's not necessarily true, or even likely, hence my point.

It's all there in Wei Dai's examples on everything-list and one-logic; I really wish people would read them - maybe we'd have an actual discussion then.

I did read the links (including the link to the empty stub article!), and the Google group discussions all seemed to end, from my brief perusal of them, with the participants coming to the consensus that Wei Dai hadn't established his provocative, counterintuitive point. (And some of the exchanges here show the same.)

At the very least, he should summarize the reasoning or examples, as per standard practice, so we know there's something to be gained from going to the links. This is especially true given that most readers had assumed that the opposite of Wei Dai's premises is true and uncontroversial.

comment by PhilGoetz · 2010-07-27T19:56:18.384Z · LW(p) · GW(p)

Natural selection solves this problem.

comment by timtyler · 2010-07-27T20:00:50.812Z · LW(p) · GW(p)

To avoid such a trivial mistake, just follow the advice on:

http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/

comment by timtyler · 2010-07-27T20:11:02.537Z · LW(p) · GW(p)

Re: "If your prior is based on Turing machines that compute universes, but our actual universe is uncomputable, you're screwed forever no matter what data comes in."

No way! Agents are generally not crippled by their priors! Soon enough they have actual data, and - unless their priors are crazy - those priors are quickly swamped - and the agents don't need the details of their original priors any more - because they have real info to go on instead.

Replies from: cousin_it
comment by cousin_it · 2010-07-27T20:33:22.472Z · LW(p) · GW(p)

I'm not sure - did you miss the idea that all "universal priors" that we know how to construct today assign zero credence to certain hypotheses about the universe, or did you miss the idea that a zero-credence hypothesis will never rise above zero no matter how much data comes in, or is it me who's missing something?

Replies from: SilasBarta, NancyLebovitz, timtyler
comment by SilasBarta · 2010-07-28T18:14:37.513Z · LW(p) · GW(p)

Also, correct me if I'm wrong, but doesn't the Solomonoff prior bypass the issue of explicit hypotheses? That is, it puts a (non-zero) prior on every (prefix) bitstream of sensory data.

So, it doesn't even seem necessary to talk about what such an agent's "priors on hypotheses" are -- everything it believes is encoded as an expectation of sensory data, and nothing more. It does not explicitly represent concepts like, "this thing is a halting oracle".

Instead, when it encounters a halting oracle, it increases the weight it assigns to expectations of observations of things that are consistent with having been produced by a halting oracle, not the existence of a halting oracle as such.

No matter how uncomputable, or lengthy-to-specify, a function might be, you can always finitely specify your expectation weight on a finite observation prefix stream (i.e. the first n things you observe from the oracle).

So, I don't see how an agent with a Solomonoff prior chokes on an encounter with a halting oracle.

Replies from: timtyler
comment by timtyler · 2010-08-01T01:00:14.413Z · LW(p) · GW(p)

Normal agents won't. Genuinely intelligent agents won't.

I think those who are arguing that it will are imagining an agent with the Solomonoff prior totally wired into it in a manner that it can't possibly unlearn.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T05:15:07.420Z · LW(p) · GW(p)

But still, even if you have the Occamian prior (which I think is what's meant by the Solomonoff prior), there is no need to unlearn it. You retain a prior on all hypotheses that decreases in weight exponentially with length, and it persists on top of any observations you've updated on. Those new observations, combined with the Occamian prior, give you the optimal weights on (prefix) sensory bitstreams, discounting the ruled-out ones and favoring those closer to what you've actually observed.

Even then, it keeps updating in favor of the observations that match what an oracle gives (without having to explicitly represent that they're from an oracle). No penalty from failure to unlearn.

Replies from: timtyler
comment by timtyler · 2010-08-01T07:17:04.445Z · LW(p) · GW(p)

The thing is, there is no one true razor. Different sources have different associated reference machines - some are more like Turing Machines, others are more like CA. If what you are looking at is barcodes, then short ones are pretty rare - and if you go into simulated worlds, sources can have practically any distribution you care to mention.

Yes, you can model these as "compiler overhead" constants - which represent the "cost" of simulating one reference machine in another - but that is just another way of saying you have to unlearn the Solomonoff prior and use another one - which is more appropriate for your source.

You can still do that, whatever your reference machine is - provided it is computationally universal - and doesn't have too much "faith".

comment by NancyLebovitz · 2010-07-28T01:08:00.966Z · LW(p) · GW(p)

I'm not sure exactly what can qualify as a prior.

Is "Anomalies may be clues about a need to make deep changes in other priors" a possible prior?

Replies from: cousin_it
comment by cousin_it · 2010-07-28T03:59:30.542Z · LW(p) · GW(p)

A prior is not a program that tells you what to do with the data. A prior is a set of hypotheses with a number assigned to each. When data comes in, we compute the likelihoods of the data given each hypothesis on the list, and use these numbers to obtain a posterior over the same hypotheses. There's no general way to have a "none of the above" (NOTA) hypothesis in your prior, because you can't compute the likelihood of the data given NOTA.

Another equivalent way to think about it: because of the marginalization step (dividing everything by the sum of all likelihoods), Bayesian updating doesn't use the total likelihood of the data given all current hypotheses - only the relative likelihoods of one hypothesis compared to another. This isn't easy to fix because "total likelihood" is a meaningless number that doesn't indicate anything - it could easily be 1000 in a setup with an incorrect prior or 0.001 in a setup with a correct prior.
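
A minimal sketch of that second point (toy hypotheses and numbers of my own, not anything from the thread): scaling every likelihood by the same constant leaves the posterior untouched, which is why there is no slot for a "none of the above" entry whose likelihood you can't compute.

```python
def update(prior, likelihoods):
    # prior: {hypothesis: prior probability}
    # likelihoods: {hypothesis: P(observed data | hypothesis)}
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())  # the marginalization step
    return {h: p / total for h, p in unnormalized.items()}

prior = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.8, "H2": 0.2}
scaled = {h: 1000 * l for h, l in likelihoods.items()}  # same *relative* likelihoods

print(update(prior, likelihoods))  # {'H1': 0.8, 'H2': 0.2}
print(update(prior, scaled))       # identical posterior: the "total likelihood" cancels out
```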

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-28T09:29:53.026Z · LW(p) · GW(p)

People have beliefs about how various sorts of behavior will work out, though I think it's rare to have probabilities attached.

comment by timtyler · 2010-07-27T21:56:38.548Z · LW(p) · GW(p)

If you are assigning p(some empirical hypothesis) = 0, surely you are a broken system.

Replies from: timtyler
comment by timtyler · 2010-07-27T22:02:12.656Z · LW(p) · GW(p)

The example seems to be that using a Turing machine to generate your priors somehow results in an expectation of p(uncomputable universe)=0. That idea just seems like total nonsense to me. It just doesn't follow. For all I care, my priors could have been assigned to me using a Turing machine model at birth - but I don't think p(uncomputable universe)=0. The whole line of reasoning apparently makes no sense.

Replies from: cousin_it
comment by cousin_it · 2010-07-28T03:56:00.569Z · LW(p) · GW(p)

The universal prior enumerates all Turing machines, not all possible priors generated by all Turing machines.

Replies from: timtyler
comment by timtyler · 2010-07-28T07:17:47.007Z · LW(p) · GW(p)

Priors are probability estimates for uncertain quantities.

In Solomonoff induction they are probability estimates for bitstrings - which one can think of as representing possible sensory inputs for an agent.

With a standard TM length-based encoding, no finite bitstring is assigned a zero probability - and we won't have to worry about perceiving infinite bitstrings until after the universal heat death - so there is no problem with certain (infinite) bitstrings being assigned a zero prior probability.

Whether the bitstrings were created using uncomputable physics is neither here nor there. They are still just bitstrings - and so can be output by a TM with a finite program on its tape.

Replies from: cousin_it
comment by cousin_it · 2010-07-28T08:26:07.888Z · LW(p) · GW(p)

No, sorry. You're confused. A prior is not an assignment of credences to all bitstrings that you can observe. A prior is an assignment of credences to hypotheses, i.e. possible states of the world that generate bitstrings that you observe. Otherwise you'd find yourself in this text (see part II, "Escaping the Greek Hinterland").

Replies from: timtyler
comment by timtyler · 2010-07-28T08:59:43.641Z · LW(p) · GW(p)

No. We were talking about the universal prior. Here is how that is defined for sequences:

"The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p."

The universal prior of a sequence is the probability of that particular sequence arising (as a prefix). It is not the probability of any particular hypothesis or program. Rather, it is a weighted sum of the probabilities of all the programs that generate that sequence.
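
In symbols (a minimal sketch; $U$ is a fixed universal machine and $\ell(q)$ the length of program $q$ - the notation is mine, not the comment's):

$$M(p) \;=\; \sum_{q \,:\, U(q)\ \text{begins with}\ p} 2^{-\ell(q)},$$

which is strictly positive for every finite prefix $p$, since some finite program simply prints $p$.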

You can talk about the probabilities of hypotheses and programs as well if you like - but the universal prior of a sequence is perfectly acceptable subject matter - and is not a "confused" idea.

No finite sequence has a probability of zero - according to the universal prior.

All finite bitstrings can be produced by computable means - even if they were generated as the output of an uncomputable physical process.

Is this misconception really where this whole idea arises from?

Replies from: cousin_it
comment by cousin_it · 2010-07-28T09:38:27.998Z · LW(p) · GW(p)

This is all true, but... Why do you think the universal prior talks about computer programs at all? If I only wanted a prior over all finite bitstrings, I'd use a simpler prior that assigned every string of length N a credence proportional to 2^-N. Except that prior has a rather major shortcoming: it doesn't help you predict the future! No matter how many bits you feed it, it always says the next bit is going to be either 0 or 1 with probability 50%. It will never get "swamped" by the data, never gravitate to any conclusions. This is why we want the universal prior to be based on computer programs instead: it will work better in practice, if the universe is in fact computable. But what happens if the universe is uncomputable? That's the substantive question here.

ETA: the last two sentences are wrong, disregard them.
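
A minimal sketch of the contrast (the tiny "program" list and the weights are my own toy choices, nowhere near a real universal mixture): the length-only prior's prediction for the next bit never moves, while even a crude program-based mixture concentrates on the pattern it has seen.

```python
import itertools

def uniform_prefix_prob(prefix):
    # "Length-only" prior: the set of sequences extending `prefix` gets measure 2^-len(prefix).
    return 2.0 ** -len(prefix)

def uniform_predict_one(prefix):
    # P(next bit = 1 | prefix) = m(prefix + "1") / m(prefix) = 0.5, always.
    return uniform_prefix_prob(prefix + "1") / uniform_prefix_prob(prefix)

# A crude program-based mixture: three deterministic "programs" plus a fair-coin
# model, weighted by 2^-(rough description length).
def all_ones():     return itertools.repeat(1)
def all_zeros():    return itertools.repeat(0)
def alternating():  return itertools.cycle([1, 0])

PROGRAMS = [(all_ones, 2.0 ** -1), (all_zeros, 2.0 ** -2), (alternating, 2.0 ** -3)]
COIN_WEIGHT = 2.0 ** -4  # the fair-coin model assigns 2^-N to every length-N prefix

def mixture_prefix_prob(prefix):
    bits = [int(b) for b in prefix]
    total = COIN_WEIGHT * 2.0 ** -len(bits)
    for program, weight in PROGRAMS:
        if list(itertools.islice(program(), len(bits))) == bits:
            total += weight  # a deterministic program puts probability 1 on its own output
    return total

def mixture_predict_one(prefix):
    return mixture_prefix_prob(prefix + "1") / mixture_prefix_prob(prefix)

prefix = "1" * 20
print(uniform_predict_one(prefix))  # 0.5 -- never learns anything
print(mixture_predict_one(prefix))  # ~0.9999999 -- the all-ones "program" dominates
```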

Replies from: timtyler
comment by timtyler · 2010-07-28T09:54:25.552Z · LW(p) · GW(p)

Nothing much happens to intelligent agents - because an intelligent agent's original priors mostly get left behind shortly after they are born - and get replaced by evidence-based probability estimates of events happening. If convincing evidence comes in that the world is uncomputable, that just adds to the enormous existing stack of evidence they have about the actual frequencies of things.

Anyhow, priors being set to 0 or 1 is not a problem for observable sense data. No finite sense data has p assigned to 0 or 1 under the universal prior - so an agent can always update successfully - if it gets sufficient evidence that a sequence was actually produced. So, if it sees a system that apparently solves the halting problem for arbitrary programs, that is no big deal for it. It may have found a Turing oracle! Cool!

I suppose it might be possible to build a semi-intelligent agent with a particular set of priors permanently wired into it - so the agent was incapable of learning and adapting if its environment changed. Organic intelligent agents are not much like that - and I am not sure how easy it would be to build such a thing. Such agents would be incapable of adapting to an uncomputable world. They would always make bad guesses about uncomputable events. However, this seems speculative - I don't see why people would try to create such agents. They would do very badly in certain simulated worlds - where Occam's razor doesn't necessarily hold true - and it would be debatable whether their intelligence was really very "general".

Replies from: Sniffnoy, andreas
comment by Sniffnoy · 2010-07-28T21:10:49.567Z · LW(p) · GW(p)

The reason the universal prior is called "universal" is that, given initial segments of infinite strings drawn from any computable distribution, and updating on those samples, it will in fact converge to the actual distribution over what the next bit should be. Now I'll admit to not actually knowing the math here, but it seems to me that if most any prior had that property, as you seem to imply, we wouldn't need to talk about a universal prior in the first place, no?

Also, if we interpret "universe" as "the actual infinite string that these segments are initial segments of", then, well... take a look at that sum you posted and decompose it. The universal prior is basically assigning a probability to each infinite string, namely the sum of the probabilities of all programs that generate it, and then collapsing that down to a distribution on initial segments in the obvious way. So if we want to consider its hypotheses about the actual law of the universe, the whole string, it will always assign 0 probability to an uncomputable sequence.
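
Spelling that decomposition out (a rough sketch; this ignores the leftover mass of programs that print a finite prefix and then produce nothing further):

$$\mu(\omega) \;=\; \sum_{q \,:\, U(q) \,=\, \omega} 2^{-\ell(q)}, \qquad M(p) \;\approx\; \sum_{\omega\ \text{extending}\ p} \mu(\omega),$$

where $\omega$ ranges over infinite strings. Only countably many $\omega$ are the output of any program at all, so every uncomputable $\omega$ gets $\mu(\omega) = 0$.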

Replies from: timtyler, timtyler
comment by timtyler · 2010-07-29T08:57:49.928Z · LW(p) · GW(p)

Convergence is more the result of the updates than of the original prior. All the initial prior has to avoid, for convergence to result, is being completely ridiculous (probabilities of 1, 0, infinitesimals, etc.). The idea of a good prior is that it helps initially, before an agent has any relevant experience to go on. However, that doesn't usually last for very long - real organic agents are pretty quickly flooded with information about the state of the universe, and are then typically in a much better position to make probability estimates. You could build agents that were very confident in their priors - and updated them slowly - but only rarely would you want an agent that was handicapped in its ability to adapt and learn.

Picking the best reference machine would be nice - but I think most people understand that for most practical applications, it doesn't matter - and that even a TM will do.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-29T09:45:31.682Z · LW(p) · GW(p)

Convergence is more the result of the updates than of the original prior. All the initial prior has to avoid, for convergence to result, is being completely ridiculous (probabilities of 1, 0, infinitesimals, etc.).

Are you certain of this? Could you provide some sort of proof or reference, please, ideally together with some formalization of what you mean by "completely ridiculous"? I'll admit to not having looked up a proof of convergence for the universal prior or worked it out myself, but if what you say were really the case, there wouldn't actually be very much special about the universal prior, and this convergence property of it wouldn't be worth pointing out - so I think I have good reason to be highly skeptical of what you suggest.

However, that doesn't usually last for very long - real organic agents are pretty quickly flooded with information about the state of the universe, and are then typically in a much better position to make probability estimates.

Better, yes. But good enough? Arbitrarily close?

You could build agents that were very confident in their priors - and updated them slowly - but only rarely would you want an agent that was handicapped in its ability to adapt and learn.

Sorry, but what does this even mean? I don't understand how this notion of "update speed" translates into the Bayesian setting.

Replies from: timtyler, timtyler, timtyler
comment by timtyler · 2010-07-29T19:30:59.841Z · LW(p) · GW(p)

Here's Shane Legg on the topic of how little priors matter when predicting the environment:

"In some situations, for example with Solomonoff induction, the choice of the reference machine doesn’t matter too much. [...] the choice of reference machine really doesn’t matter except for very small data sets (which aren’t really the ones we’re interested in here). To see this, have a look at the Solomonoff convergence bound and drop a compiler constant in by the complexity of the environment. The end result is that the Solomonoff predictor needs to see just a few more bytes of the data sequence before it converges to essentially optimal predictions."

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-30T01:22:40.815Z · LW(p) · GW(p)

This doesn't address what I said at all. We don't speak of "the" universal prior because there's a specific UTM it's defined with respect to; we speak of "the" universal prior because we don't much care about the distinction between different universal priors! The above article is still about doing Bayesian updating starting with a universal prior. The fact that which particular universal prior you start from doesn't matter much is not new information, and it in no way supports your claim that any "reasonable" prior - whatever that might mean - will also have this same property.

Replies from: timtyler
comment by timtyler · 2010-07-30T06:11:20.777Z · LW(p) · GW(p)

I think when he says "the choice of the reference machine doesn’t matter too much" and "the choice of reference machine really doesn’t matter except for very small data sets" he literally means those things. I agree that my position on this is not new.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-30T06:22:02.634Z · LW(p) · GW(p)

Sorry, how does "literally" differ from what I stated? And you seem to be stating something very different from him. He is just stating that the UTM used to define the universal prior is irrelevant. You are claiming that any "reasonable" prior, for some unspecified but expansive-sounding notion of "reasonable", has the same universal property as a universal prior.

Replies from: timtyler
comment by timtyler · 2010-07-30T16:47:49.114Z · LW(p) · GW(p)

That seems like quite a tangle, and alas, I am not terribly interested in it. But:

The term was "reference machine". No implication that it is a UTM is intended - it could be a CA - or any other universal computer. The reference machine totally defines all aspects of the prior. There are not really "universal reference machines" which are different from other "reference machines" - or if there are "universal" just refers to universal computation. A universal machine can define literally any distribution of priors you can possibly imagine. So: the distinction you are trying to make doesn't seem to make much sense.

Convergence on accurate beliefs has precious little to do with the prior - it is a property of the updating scheme. The original priors matter little after a short while - provided they are not zero, one - or otherwise set so they prevent updating from working at all.

Thinking of belief convergence as having much to do with your priors is a wrong thought.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-03T00:53:15.621Z · LW(p) · GW(p)

The term was "reference machine". No implication that it is a UTM is intended - it could be a CA - or any other universal computer. The reference machine totally defines all aspects of the prior. There are not really "universal reference machines" which are different from other "reference machines" - or if there are "universal" just refers to universal computation. A universal machine can define literally any distribution of priors you can possibly imagine. So: the distinction you are trying to make doesn't seem to make much sense.

Sorry, what? Of course it can be any sort of universal computer; why would we care whether it's a Turing machine or some other sort? Your statement that taking a universal computer and generating the corresponding universal prior will get you "literally any distribution of priors you can imagine" is just false, especially as it will only get you uncomputable ones! Generating a universal prior will only get you universal priors. Perhaps you were thinking of some other way of generating a prior from a universal computer? Because that isn't what's being talked about.

Convergence on accurate beliefs has precious little to do with the prior - it is a property of the updating scheme. The original priors matter little after a short while - provided they are not zero, one - or otherwise set so they prevent updating from working at all.

Thinking of belief convergence as having much to do with your priors is a wrong thought.

You have still done nothing to demonstrate this. The potential for dependence on priors has been demonstrated elsewhere (anti-inductive priors, etc). The "updating scheme" is Bayes' Rule. (This might not suffice in the continuous-time case, but you explicitly invoked the discrete-time case above!) But to determine all those probabilities, you need to look at the prior. Seriously, show me (or just point me to) some math. If you refuse to say what makes a prior "reasonable", what are you actually claiming? That the set of priors with this property is large in some appropriate sense? Please name what sense. Why should we not just use some equivalent of maxent, if what you say is true?

Replies from: timtyler
comment by timtyler · 2010-08-03T06:56:49.283Z · LW(p) · GW(p)

"Of course it can be any sort of universal computer; why would we care whether it's a Turing machine or some other sort?"

Well, different reference machines produce different prior distributions - so the distribution used matters initially, when the machine is new to the world.

"Your statement that taking a universal computer and generating the corresponding universal prior will get you "literally any distribution of priors you can imagine" is just false, especially as it will only get you uncomputable ones! "

"Any distribution you can compute", then - if you prefer to think that you can imagine the uncomputable.

"You have still done nothing to demonstrate this."

Actually, I think I give up trying to explain. From my perspective you seem to have some kind of tangle around the word "universal". "Universal" could usefully refer to "universal computation" or to a prior that covers "every hypothesis in the universe". There is also the "universal prior" - but I don't think "universal" there has quite the same significance that you seem to think it does. There seems to be repeated miscommunication going on in this area.

It seems non-trivial to describe the class of priors that leads to "fairly rapid" belief convergence in an intelligent machine. Suffice to say, I think that class is large - and that the details of priors are relatively insignificant - provided there is not too much "faith" - or "near faith". Part of the reason for that is that priors usually get rapidly overwritten by data. That data establishes its own subsequent prior distributions for all the sources you encounter - and for most of the ones that you don't. If you don't agree, fine - I won't bang on about it further in an attempt to convince you.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-04T00:21:03.586Z · LW(p) · GW(p)

Firstly, please use Markdown quotes for ease of reading? :-/

Well, different reference machines produce different prior distributions - so the distribution used matters initially, when the machine is new to the world.

Indeed, but I don't think that's really the property under discussion.

"Any distribution you can compute", then - if you prefer to think that you can imagine the uncomputable.

....huh? Maybe you are misunderstanding the procedure in question here. We are not taking arbitrary computations that output distributions and using those distributions. That would get you arbitrary computable distributions. Rather, we are taking arbitrary universal computers/UTMs/Turing-complete programming languages/whatever you want to call them, and then generating a distribution as "probability of x is the sum of 2^-length over all programs that output something beginning with x" (possibly normalized). I.e. we are taking a reference machine and generating the corresponding universal prior.

Not only will this not get you "any distribution you can compute", it won't get you any distributions you can compute at all. The resulting distribution is always uncomputable. (And hence, in particular, not practical, and presumably not "reasonable", whatever that may mean.)

Am I mistaken in asserting that this is what was under discussion?

It seems non-trivial to describe the class of priors that leads to "fairly rapid" belief convergence in an intelligent machine. Suffice to say, I think that class is large - and that the details of priors are relatively insignificant - provided there is not too much "faith" - or "near faith". Part of the reason for that is that priors usually get rapidly overwritten by data. That data establishes its own subsequent prior distributions for all the sources you encounter - and for most of the ones that you don't. If you don't agree, fine - I won't bang on about it further in an attempt to convince you.

You don't have to attempt to convince me, but do note that despite asserting it repeatedly you have, in fact, done zero to establish the truth of this assertion / validity of this intuition, which I have good reason to believe to be unlikely, as I described earlier.

Replies from: timtyler
comment by timtyler · 2010-08-04T15:34:21.884Z · LW(p) · GW(p)

FWIW, what I meant was that - by altering the reference machine - p(), for all bitstrings less than a zillion bits long, can be made into any set of probabilities you like - provided they don't add up to more than 1, of course.

The reference machine defines the resulting probability distribution completely.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-05T22:54:20.353Z · LW(p) · GW(p)

AH! So you are making a comment on the use of universal priors to approximate arbitrary finite priors (and hence presumably vice versa). That is interesting, though I'm not sure what it has to do with eventual convergence. You should have actually stated that at some point!

comment by timtyler · 2010-07-29T19:18:12.891Z · LW(p) · GW(p)

Re: "I don't understand how this notion of "update speed" translates into the Bayesian setting."

Say you think p(heads) is 0.5. If you see ten heads in a row, do you update p(heads) a lot, or a little? It depends on how confident you are of your estimate.

If you had previously seen a thousand coin flips from the same coin, you might be confident of p(heads) being 0.5 - and therefore update little. If you were told that it was a biased coin from a magician, then your estimate of p(heads) being 0.5 might be due to not knowing which way it was biased. Then you might update your estimate of p(heads) rapidly - on seeing several heads in a row.

Like that.
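
A minimal sketch of the coin example (the Beta-Binomial model is my own choice of formalization, not something specified in the comment): the same point estimate of 0.5 moves very differently on ten heads, depending on how much data stands behind it.

```python
def posterior_mean(prior_heads, prior_tails, heads_seen, tails_seen):
    # Beta(a, b) prior + Binomial data -> Beta(a + heads, b + tails) posterior.
    a = prior_heads + heads_seen
    b = prior_tails + tails_seen
    return a / (a + b)

# A coin already observed for 1000 flips (500 heads): ten more heads barely move it.
print(posterior_mean(500, 500, 10, 0))  # ~0.505

# The magician's coin, bias unknown, flat Beta(1, 1) prior: ten heads move it a lot.
print(posterior_mean(1, 1, 10, 0))      # ~0.917
```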

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-30T01:12:48.219Z · LW(p) · GW(p)

What you have just laid out are not different "update speeds" but different priors. "It's a biased coin from a magician" is of the same class of prior assumptions as "It's probably a fair coin" or "It's a coin with some fixed probability of landing heads, but I have no idea what" or "It's a rigged coin that can only come up heads 10 times once activated".

After each toss, you do precisely one Bayesian update. Perhaps the notion of "update speed" might make sense in a more continuous setting, but in a discrete setting like this it is clear it does not. The amount you update is determined by Bayes' Law; different apparent "update speeds" are due to differing priors. "Speed" probably isn't even a good term, as updates aren't even necessarily in the same direction! If you think the coin can only come up heads 10 times, each appearance of heads makes it less likely to come up again.

Replies from: timtyler
comment by timtyler · 2010-07-30T06:07:37.964Z · LW(p) · GW(p)

"Update speed" seems fine to me - when comparing:

.5, .500001, .500002, .500003, .500004...

....with...

.5, 0.7, 0.9, 0.94, 0.96

...but use whatever term you like.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-30T06:17:39.018Z · LW(p) · GW(p)

That's a statistic, not a parameter - and it's a statistic ultimately determined by the prior.

Replies from: timtyler
comment by timtyler · 2010-07-30T06:21:48.124Z · LW(p) · GW(p)

I do not know where the idea that "speeds" are "parameters" and not "statistics" comes from. An entity being a statistic doesn't imply that it is not a speed.

The same goes for discrete systems. They have the concept of speed too:

http://en.wikipedia.org/wiki/Glider_%28Conway%27s_Life%29

Replies from: Sniffnoy, Sniffnoy
comment by Sniffnoy · 2010-08-03T00:56:30.939Z · LW(p) · GW(p)

The same goes for discrete systems. They have the concept of speed too:

http://en.wikipedia.org/wiki/Glider_%28Conway%27s_Life%29

This is utterly irrelevant. The problem with what you say is not that there's no notion of speed, it's that there is precisely one way of doing updates, and it has no "speed" parameter.

Replies from: timtyler
comment by timtyler · 2010-08-03T05:28:07.196Z · LW(p) · GW(p)

In the game of life, the update speed is always once per generation. However, that doesn't mean it has no concept of speed. In fact the system exhibits gliders with many different speeds.

It's much the same with an intelligent agent's update speed in response to evidence - some will update faster than others - depending on what they already know.

You claimed that:

"Perhaps the notion of "update speed" might make sense in a more continuous setting, but in a discrete setting like this it is clear it does not."

However, the concept of "speed" works equally well in discrete and continuous systems - as the GOL illustrates. "Discreteness" is an irrelevance.

Replies from: Sniffnoy, Sniffnoy
comment by Sniffnoy · 2010-08-03T05:57:09.853Z · LW(p) · GW(p)

"Perhaps the notion of "update speed" might make sense in a more continuous setting, but in a discrete setting like this it is clear it does not."

However, the concept of "speed" works equally well in discrete and continuous systems - as the GOL illustrates. "Discreteness" is an irrelevance.

You really seem to be missing the point here. I'm sorry, but from your posts I can't help but get the idea that you don't really understand how this sort of prediction scheme works. Sure, "update speed" in the sense you described it elsewhere in the thread makes sense, but who cares? Update speed in the sense you described it elsewhere in the thread is a consequence of the prior (or current state, rather); it isn't some sort of parameter, and it's not clear it's something at all stable or meaningful. You've established the existence of something trivial and probably irrelevant. In the parametric sense you seemed to be originally using it, it doesn't exist. Can we agree on this?

Replies from: timtyler
comment by timtyler · 2010-08-03T06:19:16.990Z · LW(p) · GW(p)

Probably nobody cares - apart from you, it seems. Apparently, one can't get away with using the phrase "update speed" in connection with an intelligent agent without getting bounced.

When you said:

"I don't understand how this notion of "update speed" translates into the Bayesian setting."

...and I said...

"Say you think p(heads) is 0.5. If you see ten heads in a row, do you update p(heads) a lot, or a little? It depends on how confident you are of your estimate. If you had previously seen a thousand coin flips from the same coin, you might be confident of p(heads) being 0.5 - and therefore update little. If you were told that it was a biased coin from a magician, then your estimate of p(heads) being 0.5 might be due to not knowing which way it was biased. Then you might update your estimate of p(heads) rapidly - on seing several heads in a row. Like that."

...IMO, the conversation could and should have stopped - right there.

comment by Sniffnoy · 2010-08-03T05:33:43.009Z · LW(p) · GW(p)

In the game of life, the update speed is always once per generation. However, that doesn't mean it has no concept of speed. In fact the system exhibits gliders with many different speeds.

This is not analogous. We are speaking of a complete system here.

It's much the same with an intelligent agent's update speed in response to evidence - some will update faster than others - depending on what they already know.

I have already addressed this. What you have called "update speed" is determined by the current distribution.

Replies from: timtyler
comment by timtyler · 2010-08-03T05:53:52.536Z · LW(p) · GW(p)

Re: "We are speaking of a complete system here"

I assure you that I could exhibit a GOL field that consisted entirely of gliders moving at c/2 - and then exhibit another GOL field that consisted entirely of gliders moving at c/4. These systems would have different characteristic speeds. Hopefully, you see the analogy now.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-03T05:59:06.155Z · LW(p) · GW(p)

OK, sure. But then to continue the analogy, the resulting speed is a function of the initial configuration. :)

Replies from: timtyler
comment by timtyler · 2010-08-03T06:16:09.352Z · LW(p) · GW(p)

Just so. I never said otherwise. You already asked for clarification about whether I thought that the system's "update speed" was to do with its prior Prob. Dist. - and I said that "yes", it was.

comment by Sniffnoy · 2010-07-30T06:25:39.960Z · LW(p) · GW(p)

Hm; there may not be a disagreement here. You seemed to be using it in a way that implied it was not determined by (or even was independent of) the prior. Was I mistaken there?

Replies from: timtyler
comment by timtyler · 2010-07-30T16:58:43.552Z · LW(p) · GW(p)

The idea was that some agents update faster than others (or indeed not at all).

If you like you can think of the agents that update relatively slowly as being confident that they are uncertain about the things they are unsure about. That confidence in their own uncertainty could indeed be represented by other priors.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-03T00:59:42.856Z · LW(p) · GW(p)

That's not "other priors", there's just one prior. All the probabilities in Bayes' Rule come from the updated-to-current version of the prior.

Replies from: timtyler
comment by timtyler · 2010-08-03T05:49:25.978Z · LW(p) · GW(p)

Other prior probabilities. There is one prior set of probabilities, which is composed of many prior probabilities and probability distributions.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-03T05:52:29.016Z · LW(p) · GW(p)

If you want to think about it that way, please don't say "other priors". That's very confusing, because "prior" in this context refers to the whole prior, not to pieces of it (which I'm not sure how you're detangling from each other, anyway). If we're talking about something of the universal-prior sort, it has one prior, over its total sensory experience; I'm not clear how you're decomposing that or what alternative model you are suggesting.

Replies from: timtyler
comment by timtyler · 2010-08-03T06:00:13.519Z · LW(p) · GW(p)

The two types of prior probability I discussed were "beliefs about the world" and "beliefs about the certainty of those beliefs".

An agent that updates its beliefs about the world rapidly (in response to evidence) would have a low degree of certainty about those beliefs - while an agent that updates its beliefs about the world slowly would have a high degree of certainty that those beliefs were already correct - and were backed up with lots of existing evidence.

I gave an example of this already when I discussed the magician's coin.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-03T06:02:25.085Z · LW(p) · GW(p)

Except these aren't separate things. That isn't how this sort of system works! Its beliefs about the certainty of those beliefs are determined by its beliefs about the world.

Replies from: timtyler
comment by timtyler · 2010-08-03T06:07:26.681Z · LW(p) · GW(p)

Well, everything is about the world, if materialism is true.

You don't seem to be even trying to perform a sympathetic reading. Leave aside quibbling about what has to do with the world - can you at least see that in the first case, updates happen quickly, and in the second case they happen slowly? "Speed" just refers to distance divided by time. Here distance is the probability delta, and time is simply time. So, updates can happen fast and slow. Some systems update quickly, others update slowly - and others don't update at all. This all seems fairly simple to me - what is the problem?

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-03T06:09:06.665Z · LW(p) · GW(p)

Well, sure. But that statement is trivial.

Replies from: timtyler
comment by timtyler · 2010-08-03T06:13:16.795Z · LW(p) · GW(p)

Right. I really don't think that what I am saying is controversial. The way I remember it, I talked about systems with different update speeds - and you jumped on that.

Replies from: Sniffnoy, Sniffnoy
comment by Sniffnoy · 2010-08-04T00:40:40.022Z · LW(p) · GW(p)

Alternatively, I could say, I went with the assumption that you were attempting to carve the relevant concepts at the joints and getting it wrong, rather than that you were making a true statement which doesn't even try to accomplish that.

comment by Sniffnoy · 2010-08-03T06:19:35.613Z · LW(p) · GW(p)

M, sorry then. But you didn't explain the term anywhere, so I assumed it meant what it sounded like - the original context makes it sound like you mean something separate from the prior, rather than something determined by it. If instead of talking about building agents that were "confident in their priors" and "updated them slowly" you had just spoken of "priors that result in slow updating", I don't think there would have been a problem. (I must admit I probably also wasn't inclined to look for a sympathetic reading, as your other comments about the universal prior seem to be just wrong.)

comment by timtyler · 2010-07-29T19:36:57.636Z · LW(p) · GW(p)

Re: "there wouldn't actually be be very much special about the universal prior"

Well, Occam's razor is something rather special. However, agents don't need an optimal version of it built into them as a baby - they can figure it out from their sensory inputs.

comment by timtyler · 2010-07-29T08:51:44.693Z · LW(p) · GW(p)

Uncomputable strings are all infinite. Those concerned with the real world don't have to concern themselves with such things. Everything you encounter in practice is finite, and therefore computable.

Replies from: gwern
comment by gwern · 2010-07-29T09:17:35.220Z · LW(p) · GW(p)

And if you are faced with a black box emitting one digit at a time, how will you do better than maxent?

comment by andreas · 2010-07-28T11:30:19.218Z · LW(p) · GW(p)

Nothing much happens to intelligent agents - because an intelligent agent's original priors mostly get left behind shortly after they are born - and get replaced by evidence-based probability estimates of events happening.

Prior determines how evidence informs your estimates, what things you can consider. In order to "replace priors with evidence-based probability estimates of events", you need a notion of event, and that is determined by your prior.

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2010-07-28T11:42:01.021Z · LW(p) · GW(p)

Prior evaluates, but it doesn't dictate what is being evaluated. In this case, "events happening" refers to subjective anticipation, which in turn refers to prior, but this connection is far from being straightforward.

comment by timtyler · 2010-07-28T11:37:17.061Z · LW(p) · GW(p)

"Determined" in the sense of "weakly influenced". The more actual data you get, the weaker the influence of the original prior becomes - and after looking at the world for a little while, your original priors become insignificant - swamped under a huge mountain of sensory data about the actual observed universe.

Priors don't really affect what things you can consider - since you can consider (and assign non-zero probability to) receiving any sensory input sequence.

Replies from: andreas
comment by andreas · 2010-07-28T11:53:07.616Z · LW(p) · GW(p)

I use the word "prior" in the sense of priors as mathematical objects, meaning all of your starting information plus the way you learn from experience.

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2010-07-28T11:59:05.260Z · LW(p) · GW(p)

I can't quite place "you need a notion of event, and that is determined by your prior", but I guess the mapping between sample space and possible observations is what you meant.

Replies from: timtyler
comment by timtyler · 2010-07-28T12:02:03.219Z · LW(p) · GW(p)

Well yes, you have "priors" that you learn from experince. An uncomputable world is not a problem for them - since you can learn about uncomputable physics, the same way as you learn about everything else.

This whole discussion seems to be a case of people making a problem out of nothing.

comment by timtyler · 2010-07-28T12:05:46.809Z · LW(p) · GW(p)

Well yes, you can have "priors" that you have learned from experience. An uncomputable world is not a problem in that case either - since you can learn about uncomputable physics, in just the same way that you learn about everything else.

This whole discussion seems to be a case of people making a problem out of nothing.

comment by Blueberry · 2010-07-27T09:05:56.398Z · LW(p) · GW(p)

I would stay in the fruit tree metaphor and say they might be "hanging right over our heads".

Yeah, he really saw the light, but dropped the ball, when writing that stormy bag of mixed metaphors.

comment by Wei Dai (Wei_Dai) · 2010-07-28T00:32:55.761Z · LW(p) · GW(p)

. . . it may be that there is a bunch of low-hanging fruit hiding just around the corner.

I would stay in the fruit tree metaphor and say they might be "hanging right over our heads".

Gee, that was obviously supposed to be a non-mixed metaphor about urban foraging. Yeah that's it. :)

Seriously, I thought about sticking with the fruit tree metaphor, but "hanging right over our heads" makes the problem sound too easy, so I decided to favor accuracy over literary elegance.

comment by Roko · 2010-07-27T19:18:06.411Z · LW(p) · GW(p)

This is an extremely high-information-value article. I'm surprised it's still only on 11.

Replies from: Wei_Dai, SilasBarta, cousin_it
comment by Wei Dai (Wei_Dai) · 2010-07-27T23:13:46.561Z · LW(p) · GW(p)

Eliezer once complained that I wrote in an "obvious to Eliezer" style and should try to get beyond that. Well, I think what I'm doing is rational given my goals. Unlike Eliezer, whose plans depend on convincing a significant fraction of humanity that existential risk is something to take seriously and that his own approach for solving it (i.e., FAI) is correct, my current aims are mainly to answer certain confusing questions. I don't see much benefit in spending a lot of effort trying to get people to understand my ideas, or even to convince them that my problems should interest them, unless there's a reasonable chance they might contribute to the solution of those problems or point out where my ideas are wrong.

Or it might be that I'm just too lazy to write well and I'm rationalizing all this. :)

comment by SilasBarta · 2010-07-27T20:12:53.388Z · LW(p) · GW(p)

I'm surprised (and a tad disappointed) it got as high as 11! It casually assumes controversial, questionable premises and doesn't clearly define what its thesis is.

What exactly did you learn, and what are the answers to all my questions?

comment by cousin_it · 2010-07-27T19:21:38.862Z · LW(p) · GW(p)

Me too. I usually reread Wei Dai's posts many times over months or even years, always finding new bits of insight that I missed the previous time.

Replies from: Roko
comment by Roko · 2010-07-27T19:27:48.925Z · LW(p) · GW(p)

Maybe it's just too hard-core for the average LWer's tastes? A high score maybe corresponds to a combination of being sufficiently clever but also somewhat down-to-earth?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-07-27T20:10:31.872Z · LW(p) · GW(p)

If "down-to-earth" means "demonstrating a connection with reality", then yes. There are some ideas here, but no definitions, examples, elaborations, or empirical support.

comment by Mitchell_Porter · 2010-07-29T09:26:22.685Z · LW(p) · GW(p)

We can approach this question in a way which completely sidesteps the debate about whether the thinking which gets filed under the name of "philosophy" has been successful or unsuccessful, and whether philosophers are good or bad thinkers. Just try substituting "LessWrong" for "philosophy" throughout Wei's article; the problem is not substantially changed. Can the LessWrong worldview account for its own existence, and for the debate and discussion which occur here every day?

comment by ocr-fork · 2010-07-27T22:58:04.315Z · LW(p) · GW(p)

To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

I don't see how Bayesian utility maximizers lack the "philosophical abilities" to discover these ideas. Also, the last one is only half true. The "wrong" link is about decision theory paradoxes, but a Bayesian utility maximizer would overcome these with practice.

comment by PhilGoetz · 2010-07-27T16:38:36.920Z · LW(p) · GW(p)

OP wrote:

It seems like human philosophy is more effective than it has any right to be. Why?

and I said, "What? Huh? Not!" Then OP wrote:

To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

If you want to call math philosophy, then, yes, philosophy is effective. But then the post doesn't make any sense; the issues being raised don't apply. The opening claims philosophy is effective by pointing to math and economics and calling them philosophy. The rest of the post contrasts philosophy with math, and talks about how hard philosophy is, and how non-useful it appears to be; the only example provided is ethics.

I recommend Wei Dai try to rewrite the post, being more specific about what philosophy and meta-philosophy are (and what the main point of the post is), providing many more examples of "philosophy", and paying careful attention that they mean the same thing in all parts of the post.

And I'll say it again: Being a tribal forager is much more intellectually demanding than city-folk think it is.

Replies from: Blueberry
comment by Blueberry · 2010-07-27T18:15:39.262Z · LW(p) · GW(p)

If you want to do math, you need some basic definitions, concepts, and motivations. Once philosophy has provided those, then you can start quantifying and proving theorems. See the history of economics, for instance.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-07-27T20:00:49.260Z · LW(p) · GW(p)

Philosophy has never provided the basic definitions, concepts, or even motivations for math. The historical influence was the other way around: The successful use of math inspired the invention of philosophy as a rational discipline.

Replies from: Blueberry
comment by Blueberry · 2010-07-27T21:41:18.376Z · LW(p) · GW(p)

Mathematical logic grew out of the philosophical analysis of arguments. Mathematically rigorous analysis and calculus grew out of the concepts of motion and speed used in understanding physics ("natural philosophy"), which itself grew out of philosophy. Probability and statistics, as applied to controlled studies, grew out of the philosophy leading to the scientific method.