Posts

Comments

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-11-11T19:18:44.513Z · LW · GW

I apologize. I no longer feel a need to behave in the way I did.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-11-06T02:33:23.043Z · LW · GW

This whole conversation seems a little awkward now.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-02-22T09:33:03.282Z · LW · GW

That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".

In the Cartesian coordinate system I devised object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, "things") by adding them to each other as vectors. This system can be used to examine individual object-level entities within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.

Every entity in my system is an ordered pair of the form ${}^{x}_{y}p = ({}^{x}_{y}\&p,\ {}^{x}_{y}{*}p)$. Here x and y are propositional variables whose truth values can be -1 (false) or 1 (true). x denotes whether the entity is tangible and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an "intension"). *p is the sensory part of the entity, i.e. what sensory input is considered to be the referent of the entity's conceptual part. A philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively.

The following formula tells how b and c (written below as n and m) are used to calculate a, which appears as the subscript of the entity on the left-hand side. The entity, with its value, is then converted to a vector, and that vector conversion is what allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.

$${}^{x}_{y}p_{\frac{\min(m,n)}{\max(m,n)}(m+n)} = \left({}^{x}_{y}\&p_{n},\ {}^{x}_{y}{*}p_{m}\right)$$
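For anyone who prefers code to notation, here is a minimal sketch of just the value calculation, in Python. The Entity class, the field names and the example numbers are mine and purely illustrative; the projection to an actual vector is left out, since I only describe its role here, not its exact rule.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    x: int      # +1 if the entity is tangible, -1 if not
    y: int      # +1 if it is placed within a rational epistemology, -1 if not
    b: float    # value of the intension &p (written n in the formula)
    c: float    # value of the extension *p (written m in the formula)

    def value(self) -> float:
        """a = min(m, n) / max(m, n) * (m + n), as in the formula above."""
        m, n = self.c, self.b
        return min(m, n) / max(m, n) * (m + n)

# Illustrative numbers only:
print(Entity(x=1, y=1, b=2.0, c=3.0).value())  # 2/3 * (2 + 3) = 3.33...
```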

If someone says that it's just a hypothesis that this model works, I agree! But I'm eager to test it. However, this would require some teamwork.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-02-15T09:33:58.741Z · LW · GW

In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of ad hominem-type fallacy, and that's why I reacted with an ad hominem attack about legos and stuff. First of all, ad hominem is a fallacy, so it does nothing to undermine my case. It does undermine the notion that you are being rational.

Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is only concerned with rules regarding what you'd call "maps" but not rules regarding what you'd call "territory". That's a weird problem, though.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-30T10:05:30.112Z · LW · GW

Sorry for being cruel. It didn't occur to me that LessWrong is "an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." I thought this was a community for people who "apply the discovery of biases and, hence, their thinking is not broken".

I didn't notice "Less Wrong users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models". I thought LessWrong users actually do that instead of aiming to do that.

I didn't understand this is a low self-esteem support group for people who want to live up to preconceived notions of morality. I probably don't have anything to do here. Goodbye.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-29T22:01:00.642Z · LW · GW

Why do you give me all these downvotes? Just asking.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-29T21:43:46.513Z · LW · GW

I suppose you got the joke above (Right? You did, right?) but you were simply too down not to fall for it. What's the use of acquiring all that theoretical information, if it doesn't make you happy? Spending your days hanging around on some LessWrong with polyamorous dom king Eliezer Yudkowsky as your idol. You wish you could be like him, right? You wish the cool guys would be on the losing side, like he is?

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-29T21:09:54.475Z · LW · GW

I just gave my girlfriend an orgasm. Come on, give me another -1.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-29T19:06:00.051Z · LW · GW

I really don't understand why you don't want a mathematical model of moral decision making, even for discussion. "Moral" is not a philosophical concept here. It is just the thing that makes some decisions better than others. I didn't have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is to give me -1. Can you recommend me a transhumanist community?

How do you expect an AI to be rational, if you yourselves don't want to be metarational? Do you want some "pocket calculator" AI?

Too bad you don't like philosophical concepts. I thought you knew computer science is oozing over into philosophy, which has all but died on its feet as far as academia is concerned.

One thing's for sure: you don't know jack about karma. The AI could actually differentiate karma, in the proper sense of the word, from "reputation". You keep playing with your lego blocks until you grow up.

It would have been really neat to do this on LessWrong. It would have made for a good story. It would have also been practical. Academia isn't interested in this - there is no academic discipline for studying AI theory at this level of abstraction. I don't even have any AI expertise, and I didn't intend to develop a mathematical model for AI in the first place. That's just what I got when I worked on this for long enough.

I don't like stereotypical LessWrongians - I think they are boring and narrow-minded. I think we could have had something to do together despite the fact that our personalities don't make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You are not likely to get a better deal for getting famous by doing so little work. But some deals of course seem too good to be true. So call me the "snake oil man" and go play with your legos.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-29T17:10:53.890Z · LW · GW

My work is a type theory for AI for conceptualizing the input it receives via its artificial senses. If it weren't, I would have never come here.

The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.

The actual decision making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
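To make the loop concrete, here is a rough sketch of what I have in mind, in Python. Everything in it - the environment interface, the evaluate() function standing in for the vector model, the lookup-table "heuristic" - is hypothetical and only meant to show the shape of the bootstrapping, not an actual implementation.

```python
import random

def bootstrap_policy(environment, evaluate, n_trials=1000):
    """Make random decisions, score them with the moral-evaluation model,
    and keep the best action seen for each situation as a crude heuristic."""
    heuristic = {}  # situation -> (best action so far, its score)
    for _ in range(n_trials):
        situation = environment.observe()           # assumed interface
        action = random.choice(environment.possible_actions(situation))
        score = evaluate(situation, action)         # direction/magnitude of the projected vector
        best = heuristic.get(situation)
        if best is None or score > best[1]:
            heuristic[situation] = (action, score)
        environment.act(action)
    return heuristic
```

A real version would of course replace the lookup table with something self-modifying, but the filtering step is the part the mathematical model is responsible for.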

If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a similar way as a baby learns things. If you're not interested in that, I don't know what you're interested in.

I didn't come here to talk about some philosophy. I know you're not interested in that. I've done the math, but not the algorithm, because I'm not much of a coder. If you don't want to code a program that implements my mathematical model, that's no reason to give me -54 karma.

Comment by Tuukka_Virtaperko on Meta-rationality · 2013-01-08T12:48:22.627Z · LW · GW

Would you find a space rocket to resemble either a balloon or an arrow, but not both?

I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.

LessWrong is like a sieve that only collects stuff that looks like what I need, but on a closer look isn't. You won't come until the table is already set. Fine.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-11T08:58:21.690Z · LW · GW

I think the most you can hope for is a model of rationality and irrationality that can model mysticists or religious people as well as rationalists. I don't think you can expect everyone to grok that model. That model may not be expressible in a mysticist's model of reality.

Agree. The Pirahã could not use my model because abstract concepts are banned in their culture. I read in New Scientist that white men tried to teach them numbers so that they wouldn't be cheated in trade so much, but upon getting some insight into what a number is, they refused to think that way. The analytic Metaphysics of Quality (my theory) would say that the Pirahã do not use transcendental language. They somehow know what it is and avoid it despite not having a name for it in their language. That language has only a few words.

The point is not to have everyone grok this model, but to use this model to explain reality. The differences between the concepts of "abstract" and "concrete" have been difficult for philosophers to sort out, but in this case the Pirahã behavior seems to be adequately explicable by using the concepts of "natural quality" and "transcendental quality" in the analytic Metaphysics of Quality.

Irrationality is just less instrumentally rational - less likely to win. You seem to have split rational and irrational into two categories, and I think this is just a methodological mistake. To understand and compare the two, you need to put both on the same scale, and then show how they have different measures on that scale.

Do you mean by "irrationality" something like a biased way of thinking whose existence can be objectively determined? I don't mean that by irrationality. I mean things whose existence has no rational justification, such as stream of consciousness. Things like dreams. If you are in a dream, and open your (working) wrist watch, and find out it contains coins instead of clockwork, and behave as if that were normal, there is no rational justification for you doing so - at least none that you know of while seeing the dream.

Also, now that I look at more of your responses, it seems that you have your own highly developed theory, with your own highly developed language, and you're speaking that language to us. We don't speak your language. If you're going to try to talk to people in a new language, you need to start simple, like "this is a ball", so that we have some meaningful context from which to understand "I hit the ball."

You're perfectly right. I'd like to go for the dialogue option, but obviously, if it's too exhausting for you because my point of view is too remote, nobody will participate. That's all I'm offering right now, though - dialogue. Maybe something else later, maybe not. I've had some fun already despite losing a lot of "karma".

The problem with simple examples is that, for example, I'd have to start a discussion on what is "useful". It seems to me the question is almost the same as "What is Quality?" The Metaphysics of Quality insists that Quality is undefinable. Although I've noticed some on LW have liked Pirsig's book Zen and the Art of Motorcycle Maintenance, it seems this would already cause a debate in its own right. I'd prefer not to get stuck on that debate and risk missing the chance of saying what I actually wanted to say.

If that discussion, however, is necessary, then I'd like to point out that irrational behavior - that is, a somewhat uncritical habit of doing the first thing that pops into my mind - has been very useful for me. It has improved my efficiency in doing things I could rationally justify, despite not actually performing the justification except rarely. If I am behaving that way - without keeping any justifications in my mind - I would say I am operating in the subjective or mystical continuum. When I do produce the justification, I do it in the objective or normative continuum by having either one of those emerge from the earlier subjective or mystical continuum via strong emergence. But I am not being rational before I have done this, in spite of ending up with results that later appear rationally good.

EDIT: Moved this post here upon finding out that I can reply to this comment. This 10 minute lag is pretty inconvenient.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-11T08:38:14.851Z · LW · GW

If neither side accepts the other side's language as meaningful, why do you believe they would accept the new language?

Somehow related: http://xkcd.com/927/

That's a very good point. Gonna give you +1 on that. The language, or type system, I am offering has the merit that no such type system has been devised before. I stick to this unless proven wrong.

Academic philosophy has its good sides. Rescher's "vagrant predicates" are an impressive and pretty recent invention. I also like confirmation holism. But as far as I know, nobody has tried to do an ontology with the following features:

  • Is analytically defined
  • Explains both strong and weak emergence
  • Precision of conceptual differentiation can be expanded arbitrarily (in this case by splitting continua into a greater number of levels)
  • Includes its own incompleteness as a non-well-formed set (Dynamic Quality)
  • Uses an assumption of symmetry to figure out the contents and structure of irrational ontological categories which are inherently unable to account for their structure, with no apparent problems

Once you grasp the scope of this theory I don't think you'll find a simpler theory that includes all of that meaningfully - but please do tell me if you do. I still think my theory is relatively simple when compared to quantum mechanics, except that it has a broad scope.

In any case, the point is that on a closer look it appears that my theory has no viable competition; hence, it is the first standard and not the 15th. No other ontology attempts to cover this broad a scope in a formal model.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-11T08:17:07.042Z · LW · GW

I can't reply to some of the comments, because they are below the threshold. Replies to downvoted comments are apparently "discouraged" but not banned, and I'm not on LW for any other reason than this, so let's give it a shot. I don't suppose I am simply required to not reply to a critical post about my own work.

First of all, thanks for the replies, and I no longer feel bad for the roughly -35 "karma" points I received. I could have tried to write some sort of a general introduction for you, but I've attempted to write them earlier, and I've found dialogue to be a better way. The book I wrote is a general introduction, but it's 140 pages long. Furthermore, my publisher wouldn't want me to give it away for free, and the style isn't very fitting for LessWrong. I'd perhaps have to write another book and publish it for free as a series of LessWrong articles.

Mitchell_Porter said:

Tuukka's system looks like a case study in how a handful of potentially valid insights can be buried under a structure made of wordplay (multiple uses of "irrational"); networks of concepts in which formal structures are artificially repeated but the actual relations between concepts are fatally vague (his big flowchart); and a severe misuse of mathematical objects and propositions in an attempt to be rigorous.

The contents of the normative and objective continua are relatively easily processed by an average LW user. The objective continuum consists of dialectic (classical quality) about sensory input. Sensory input is categorized as it is in Maslow's hierarchy of needs. I know there is some criticism of Maslow's theory, but can we accept it as a starting point? "Lower needs" includes homeostasis, eating, sex, excretion and such. "Higher needs" includes reputation, respect, intimacy and such. "Deliberation" includes Maslow's "self-actualization", that is, problem solving, creativity, learning and such. Sense-data is not included in Maslow's theory, but it could be assumed that humans have a need to have sensory experiences, and that this need is so easy to satisfy that it did not occur to Maslow to include it as the lowest need of his hierarchy.

The normative continuum is similarly split into a dialectic portion and a "sensory" portion. That is to say, a central thesis of the work is that there are some kind of mathematical intuitions that are not language, but that are used to operate in the domain of pure math and logic. In order to demonstrate that "mathematical intuitions" really do exist, let us consider the case of a synesthetic savant, who is able to evaluate numbers according to how they "feel", and use this feeling to determine whether the number is a prime. The "feeling" is sense-data, but the correlation between the feeling and primality is some other kind of non-lingual intuition.

If synesthetic primality checks exist, it follows that mathematical ability is not entirely based on language. Synesthetic primality checks do exist for some people, and not for others. However, I believe we all experience mathematical intuitions - for most, the experiences are just not as clear as they are for synesthetic savants. If the existence of mathematical intuition is denied, synesthetic primality checks are claimed impossible due to mere metaphysical skepticism in spite of lots of evidence that they do exist and produce strikingly accurate results.
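For contrast, here is the explicit, language-like way of checking primality - an ordinary trial-division routine in Python, nothing specific to my theory. The point is that the savant apparently reaches the same verdicts without running anything like these steps.

```python
def is_prime(n: int) -> bool:
    """Explicit, rule-following primality check: trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print([k for k in range(2, 30) if is_prime(k)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```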

Does this make sense? If so, I can continue.

Mitchell_Porter also said:

Occasionally you get someone who constructs their system in the awareness that it's a product of their own mind and not just an objective depiction of the facts as they were found

I'm aware of that. Objectivity is just one continuum in the theory.

Having written his sequel to Pirsig he now needs to outgrow that act as soon as possible, and acquire some genuine expertise in an intersubjectively recognized domain, so that he has people to talk with and not just talk at.

I'm not exactly in trouble. I have a publisher and I have people to talk with. I can talk with a mathematician I know and on LilaSquad. But given that Pirsig's legacy appears to be continental philosophy, nobody on LilaSquad can help me improve the formal approach even though some are interested in it. I can talk about everything else with them. Likewise, the mathematician is only interested in the formal structure of the theory and perhaps slightly in the normative continuum, but not in anything else. I wouldn't say I have something to prove or that I need something in particular. I'm mostly just interested to find out how you will react to this.

What I was picking up on in Tuukka's statement was that the irrationals are uncountable whereas the rationals are countable. So the rationals have the cardinality of a set of discrete combinatorial structures, like possible sentences in a language, whereas the irrationals have the cardinality of a true continuum, like a set of possible experiences, if you imagined qualia to be genuinely real-valued properties and e.g. the visual field to be a manifold in the topological sense. It would be a way of saying "descriptions are countable in number, experiences are uncountable".

Something to that effect. This is another reason why I like talking with people. They express things I've thought about with a different wording. I could never make progress just stuck in my head.

I'd say the irrational continua do not have fixed notions of truth and falsehood. If something is "true" now, there is no guarantee it will persist as a rule in the future. There are no proof methods or methods of justification. In a sense, the notions of truth and falsehood are so distorted in the irrational continua that they hardly qualify as truth or falsehood - even if the Bible, operating in the subjective continuum, would proclaim that it's "the truth" that Jesus is the Christ.

Mitchell asked:

Incidentally, would I be correct in guessing that Robert Pirsig never replied to you?

As far as I know, the letter was never delivered to Pirsig. The insiders of MoQ-Discuss said their mailing list is strictly for discussing Pirsig's thoughts, not any derivative work. The only active member of Lila Squad who I presume to have Pirsig's e-mail address said Pirsig doesn't understand the Metaphysics of Quality himself anymore. It seemed pointless to press the issue that the letter be delivered to him. When the book is out, I can send it to him via his publisher and hope he'll receive it. The letter wasn't even very good - the book is better.

I thought Pirsig might want to help me with development of the theory, but it turned out I didn't require his help. Now I only hope he'll enjoy reading the book.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T14:14:46.236Z · LW · GW

The foundations of rationality, as LW knows them, are not defined with logical rigour. Are you adamant this is not a problem?

http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ says:

We are not here to argue the meaning of a word, not even if that word is "rationality". The point of attaching sequences of letters to particular concepts is to let two people communicate - to help transport thoughts from one mind to another. You cannot change reality, or prove the thought, by manipulating which meanings go with which words.

I don't think it's very helpful to oppose a logical definition for a certain language that would allow you to do this. As it is, you currently have no logical definition. You have this:

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

That is not a language with a formalized type system. If you oppose a formalized type system, even if it were to advance your purely practical goal, why? Wikipedia says:

A type system associates a type with each computed value. By examining the flow of these values, a type system attempts to ensure or prove that no type errors can occur. The particular type system in question determines exactly what constitutes a type error, but in general the aim is to prevent operations expecting a certain kind of value from being used with values for which that operation does not make sense (logic errors); memory errors will also be prevented.

What in a type system is undesirable to you? The "snake oil that cures lung cancer" - I'm pretty sure you've heard about that one - is a value whose type is irrational. If you may use natural language to declare that value irrational, why do you oppose using a type system for doing the same thing?
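To show what I mean by a type catching this, here is a toy sketch in Python. The Rational and Irrational classes are of course just illustrative labels for the continua, not a real implementation of my type system:

```python
from dataclasses import dataclass

@dataclass
class Rational:
    claim: str

@dataclass
class Irrational:
    claim: str

def update_beliefs(evidence: Rational) -> None:
    """An operation that expects a rationally-typed value."""
    print("updating on:", evidence.claim)

snake_oil = Irrational("snake oil cures lung cancer")
update_beliefs(Rational("the trial showed no effect"))
# update_beliefs(snake_oil)  # a static checker such as mypy flags this as a type error
```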

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T13:49:43.879Z · LW · GW

The page says:

But this doesn't answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?

I do not assume that every belief must be justified, except possibly within rationality.

Do the arguments against the meaningfulness of coincidence state that coincidences do not exist?

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T13:35:37.906Z · LW · GW

If you respond to that letter, I will not engage in conversation, because the letter is a badly written outdated progress report of my work. The work is now done, it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T13:25:03.641Z · LW · GW

Yudkowsky says:

So if you understand what concept we are generally getting at with this word "rationality", and with the sub-terms "epistemic rationality" and "instrumental rationality", we have communicated: we have accomplished everything there is to accomplish by talking about how to define "rationality". What's left to discuss is not what meaning to attach to the syllables "ra-tio-na-li-ty"; what's left to discuss is what is a good way to think.

With that said, you should be aware that many of us will regard as controversial - at the very least - any construal of "rationality" that makes it non-normative:

For example, if you say, "The rational belief is X, but the true belief is Y" then you are probably using the word "rational" in a way that means something other than what most of us have in mind. (E.g. some of us expect "rationality" to be consistent under reflection - "rationally" looking at the evidence, and "rationally" considering how your mind processes the evidence, shouldn't lead to two different conclusions.) Similarly, if you find yourself saying "The rational thing to do is X, but the right thing to do is Y" then you are almost certainly using one of the words "rational" or "right" in a way that a huge chunk of readers won't agree with.

A normative belief in rationality is, as far as I can tell, not possible for someone who does not have a clear conception of what rationality is. I am trying to present tools for forming such a conception. The theory I am presenting is, most accurately, a rationally constructed language, not a prescriptive theory on whether it is moral to be rational. The merit of this language is that it should allow you to converse about rationality with mysticists or religious people so that you both understand what you are talking about. It seems to me the ID vs. evolution debate remains unresolved among the general public (in the USA) because neither side has managed to speak the same language as the other side. My language is not formally defined in the sense of being a formal language, but it has formally defined ontological types.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T13:14:55.240Z · LW · GW

Well I do. The following Venn diagram describes the basic concepts of the theory. As far as we are being rational, classical quality means references and romantic quality means referents. The referents are sense-data, and the references are language. You may ignore the rest of the graph for now.

The following directed graph expresses an overview of the categories the metatheory is about. Note how some of the categories are rational, and others are irrational. The different categories are created by using two binary variables. One of them denotes whether the category is internalistic or externalistic, and another one whether it is rational or irrational. The arrows denote set membership. I like to think of it as "strong emergence", but formally it suffices to say it is set membership. In the theory, these categories are called continua.

Instead of using the graph we could define these relationships with formal logic. Let us denote a continuum by a symbol indexed with two binary variables k and l, so that k denotes external metaspace and l denotes rationality.

Each continuum can be split into an arbitrary number of levels. The four continua also form reciprocal continuum pairs, which means that the referents of each continuum are the same as the referents of some other continuum, but this continuum orders the references to those referents differently. Ordering of references is modeled as subsethood in the following directed acyclic graph:

Note that in the graph I have split each continuum into four levels. This is arbitrary. The following formula defines m levels.

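In place of the formula, here is a rough sketch in Python of what splitting a continuum into m levels amounts to, assuming (as in the graph) that each level is a subset of the next. The element names are placeholders borrowed from the objective continuum's levels:

```python
def split_into_levels(references, m):
    """Split an ordered list of references into m cumulative levels,
    so that level 1 is a subset of level 2, and so on up to level m."""
    step = max(1, len(references) // m)
    return [set(references[: step * (i + 1)]) for i in range(m)]

levels = split_into_levels(["sense-data", "lower needs", "higher needs", "deliberation"], 4)
for i, level in enumerate(levels, 1):
    print(i, sorted(level))
```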

That is the structure of the theory. Now, as for theorems, what kind of theorems would you like? I've already arrived at the conclusion that knowledge by description consists of members of the rational continua, and knowledge by acquaintance (a.k.a. gnosis) consists of members of the irrational continua. But that is mainstream philosophy. Maybe you would be more interested in a formal model of "maps" and "territories", as these concepts are used frequently by you. Yudkowsky says:

Of course, it is a severe error to say that a phenomenon is precise or vague, a case of what Jaynes calls the Mind Projection Fallacy (Jaynes 1996). Precision or vagueness is a property of maps, not territories. Rather we should ask if the price in the supermarket stays constant or shifts about. A hypothesis of the "vague" sort is a good description of a price that shifts about. A precise map will suit a constant territory.

In the LW lingo, continua are "maps" and romantic quality is the "territory". Maps that form reciprocal pairs are maps of the same territory, but the projection is different - compare it to polar coordinates as opposed to rectangular coordinates. Two maps that do not form reciprocal pairs are about different territories. The different territories could be called natural and transcendental. Insofar as we are being rational, the former is the domain of empirical science, the latter the domain of pure maths.
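The coordinate analogy can be made concrete. The same point - the same "territory" - is described by two different coordinate systems, and you can translate between them without loss; this is plain Python, nothing theory-specific:

```python
import math

def to_polar(x, y):
    """Rectangular -> polar: same point, different map."""
    return math.hypot(x, y), math.atan2(y, x)

def to_rectangular(r, theta):
    """Polar -> rectangular: the inverse projection."""
    return r * math.cos(theta), r * math.sin(theta)

point = (3.0, 4.0)
print(to_polar(*point))                   # (5.0, 0.927...)
print(to_rectangular(*to_polar(*point)))  # back to (3.0, 4.0), up to rounding
```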

The merit of this theory is that irrational things, which are called subjective or mystical, are defined in relation to rational things. The ontology of irrational things is constructed by ordering the references to the referents oppositely than they are ordered in the ontology of rational things. You can see the inversion of order from the latter graph. As you can see, subjective references consist of various kinds of beliefs, and mystical references consist of various kinds of synchronicities. These are irrational, which roughly means that no argument suffices to justify their existence, but their existence is obvious.

How do you like it?

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T03:38:15.848Z · LW · GW

Do you mean the sequence "Map and Territory"? I don't find it to include a comprehensive and well-defined taxonomy of ways of being rational and irrational. I was investigating whether I should present a certain theory here. Does this -4 mean you don't want it?

Insofar as LW is interested in irrationality, it seems interested in some kind of pseudo-irrationality: reasoning mistakes whose existence is affirmed by resorting to rational argumentation. I call that pseudo-irrationality, because its existence is affirmed rationally instead of irrationally.

I am talking about the kind of irrationality whose existence can be observed, but cannot be argued for, because it is obvious. Examples of such forms of irrationality include synchronicities. An example of a synchronicity would be you talking about a bee, and a bee appearing in the room. There is no rational reason (ostensibly) why these two events would happen simultaneously, and it could rightly be deemed a coincidence. But how does it exist as a coincidence? If we notice it, it exists as something we pay attention to, but is there any way we could be more specific about this?

If we could categorize such irrationally existing things comprehensively, we would have a clearer grasp on what is the rationality that we are advocating. We would know what that rationality is not.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T03:09:11.166Z · LW · GW

I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn't need to explicitly arrive at conclusions about whether one ought to be rational or not.

Comment by Tuukka_Virtaperko on Meta-rationality · 2012-10-10T03:07:36.217Z · LW · GW

That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn't describe meta-rationality as something that qualifies as a theory. It just says there is such a thing without telling exactly what it is.

Comment by Tuukka_Virtaperko on What do you mean by rationalism? · 2012-10-10T01:49:11.282Z · LW · GW

How is Buddhism tainted? Christianity could have been tainted during the purges in the early centuries, but I don't find Buddhism to have deviated from its original teachings in such a way that Buddhists themselves would no longer recognize them. There are millions of Buddhists in the world, so there are bound to be weirdos in that lot. But consider the question: "What is Buddhism, as defined by prominent Buddhists themselves, whose prominence is recognized by traditional institutions that uphold Buddhism?" It doesn't seem to me the answer to this question would have changed much during the last millennium.

Likewise, rationalism could not be tainted by some Christian preacher who claims he is a rationalist, but whose preaching implicitly opposes rationalism and who is not considered a rationalist by anyone on LW.

Comment by Tuukka_Virtaperko on Can we stop using the word "rationalism"? · 2012-10-10T01:47:08.017Z · LW · GW

The rationalism-empiricism philosophical debate is somewhat dead. I see no problem in using "rationalism" to mean LW rationalism. "Rationality" (1989) by Rescher defines rationality in the way LW uses the word, but doesn't use "rationalism", ostensibly because of the risk of confusion with the rationalism-empiricism debate. Neither LW nor average people are subject to the same limitations as the academic Rescher, so I think it is prudent to overwrite the meaning of the word "rationalism" now.

Maybe "rationalism" used to mean "rationalism in the rationalism-empiricism debate", but the concept of "rationality" has become very important during the past century, and that "rationality" means the LW type of rationality. Yet, "rationality" is only a method. What LW clearly advocates is that this method is somehow the best, the only right method, the only method, a superior method, or a method that ought to be used. Hence, LW is somewhat founded on a prescriptive belief that "rationality" is a good method. It is very reasonable to call such a belief "rationalism", as someone without belief in the superiority of rationality could still use rationality without being a rationalist.

Comment by Tuukka_Virtaperko on Note on Terminology: "Rationality", not "Rationalism" · 2012-10-10T01:35:49.680Z · LW · GW

According to Rationality (1989) by Nicholas Rescher, who is for all intents and purposes a rationalist in the sense LW (not academic philosophy) uses the word, LW rationality is a faith-based ideology. See confirmation holism by Quine, outlined in "Two Dogmas of Empiricism". Rationality is insufficient to justify rationality by rational means, because to do so would presuppose that all means of justification are rational, which already implicitly assumes rationality. Hence, it cannot be refuted that rationality is based on faith. Rescher urges people to accept rationality nevertheless.

Comment by Tuukka_Virtaperko on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2012-08-24T03:01:21.917Z · LW · GW

Hehe. I'm a psych patient and I'm allowed to visit LessWrong.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-02-15T21:31:54.424Z · LW · GW

Commenting the article:

"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."

I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.

Wait a second. Wikipedia already knows this stuff is a formalization of Occam's razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable. Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam's razor, and aware of it having, at least probably, been formalized?

Okay then, but this doesn't solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference, and leaves room for various relevance operators to take place. Nobody else has done that either, though. I should get back to this later.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-02-15T20:35:33.417Z · LW · GW

Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"

However, it seems like advancements in computation theory have made people able to do at least remotely practical stuff in areas that bear resemblance to more inert philosophical ponderings. That's good, and this article might even be used as justification for my theory RP - given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory's Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.
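Kolmogorov complexity itself is uncomputable, but an ordinary compressor gives a crude upper bound, and that is roughly the sense in which I mean that removing arbitrary declarations and favoring symmetry made the theory's description shorter. A toy illustration in Python (the strings are placeholders):

```python
import random
import zlib

def compressed_length(s: str) -> int:
    """Crude upper-bound proxy for Kolmogorov complexity: compressed length in bytes."""
    return len(zlib.compress(s.encode("utf-8")))

structured = "abc" * 1000  # 3000 characters of pure repetition
random.seed(0)
noisy = "".join(random.choice("abc") for _ in range(3000))  # same length, no structure

print(compressed_length(structured))  # small: the regularity compresses away
print(compressed_length(noisy))       # much larger: little regularity to exploit
```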

I would say that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone other than Robert Pirsig and experts on Buddhism, but these writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.

In any case, even though the widely rejected "statistical relevance" and this "Kolmogorov complexity relevance" share the same flaw, if presented as an explanation of inductive justification, the approach is interesting. Perhaps, even, this paper should be titled: "A Formalization of Occam's Razor Principle". Because that's what it surely seems to be. And I think it's actually an achievement to formalize that principle - an achievement more than sufficient to justify the writing of the article.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-02-08T11:59:46.893Z · LW · GW

I've read some of this Universal Induction article. It seems to operate from flawed premises.

If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.

Suppose the brain uses algorithms. An uncontroversial supposition. From a computational point of view, the quoted passage is like saying: "In order for a computer to not run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of "DoNotExecuteProgram('IndianaJonesAndTheFateOfAtlantis')".

That's not how computers operate. They just don't run the program. They don't need a special process for not running the program. Instead, not running the program is "implicitly contained" in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can't process the state of affairs that it is not running any of them.

Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know what information regarding "everything" is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.

Furthermore:

This is in some way a contradiction to the well-known no-free-lunch theorems which state that, when averaged over all possible data sets, all learning algorithms perform equally well, and actually, equally poorly [11]. There are several variations of the no-free-lunch theorem for particular contexts but they all rely on the assumption that for a general learner there is no underlying bias to exploit because any observations are equally possible at any point. In other words, any arbitrarily complex environments are just as likely as simple ones, or entirely random data sets are just as likely as structured data. This assumption is misguided and seems absurd when applied to any real world situations. If every raven we have ever seen has been black, does it really seem equally plausible that there is equal chance that the next raven we see will be black, or white, or half black half white, or red etc. In life it is a necessity to make general assumptions about the world and our observation sequences and these assumptions generally perform well in practice.

The author says that there are variations of the no-free-lunch theorem for particular contexts. But he goes on to generalize that the notion of a no-free-lunch theorem means something independent of context. What could that possibly be? Also, such notions as "arbitrary complexity" or "randomness" seem intuitively meaningful, but what is their context?

The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.

In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-19T22:38:46.785Z · LW · GW

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.

At first, I didn't quite understand this. But I'm reading Introduction to Automata Theory, Languages, and Computation. Are you using the * here in the same sense as it is used in the following UNIX-style regular expression?

  • '[A-Z][a-z]*'

This expression is intended to match all words that begin with a capital letter and do not contain any surprising characters such as ö or -. Examples: "Jennifer", "Washington", "Terminator". The * means [a-z] may be repeated an arbitrary number of times (including zero).
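To make sure I am reading the notation right, this is how I would check that expression with Python's re module:

```python
import re

pattern = re.compile(r"[A-Z][a-z]*")

for word in ["Jennifer", "Washington", "Terminator", "A", "washington", "Töölö"]:
    print(word, bool(pattern.fullmatch(word)))
# "A" matches too, because * allows zero iterations of [a-z].
# "washington" fails (no capital letter), "Töölö" fails (ö is outside [a-z]).
```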

Comment by Tuukka_Virtaperko on Reductionism · 2012-01-17T11:02:51.013Z · LW · GW

Okay. That sounds very good. And it would seem to be in accordance with this statement:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which is good. I would say that "the notion of higher levels being out there in the territory" is meaningless, but expressing disbelief in that notion is apparently intended to convey approximately the same meaning.

RP doesn't yet actually include reduction. It's next on the to-do list. Currently it includes an emergence loop that is based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because instead of accounting for emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn't yet have any kind of an algorithm part. I'm not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He's not interested in the metaphysical part of the theory, and even said he doesn't want to know too much about it. :) I'm not guaranteeing RP can be used for anything at all, but it's interesting.
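To see how fast the power set loop blows up, here is a toy version of the cycle in Python. The starting elements are placeholders; only the counting matters:

```python
from itertools import chain, combinations

def power_set(elements):
    """All subsets of the given collection, as frozensets."""
    elements = list(elements)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(elements, r)
                                         for r in range(len(elements) + 1))]

level = [frozenset({"a"}), frozenset({"b"})]
for cycle in range(3):
    level = power_set(level)
    print(f"after cycle {cycle + 1}: {len(level)} elements")
# 2 elements -> 4 -> 16 -> 65536; one more cycle would already be 2**65536.
```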

Comment by Tuukka_Virtaperko on Reductionism · 2012-01-17T03:19:22.155Z · LW · GW

I got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, which would have been something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn't take into account that reductionism could be taken seriously here. But that just means I misjudged. Of course I am not necessarily even supposed to be on this site. I am looking for people who might give useful ideas for theoretical work that could be relevant for constructing AI, and I'm trying to check whether my approach is deemed intelligible here.

"Realism" is the belief that there is an external world, usually thought to consist of quarks, leptons, forces and such. It is typically thought of as a belief or a doctrine that is somehow true, instead of just an assumption an AI or a human makes because it needs to. Depending on who labels themself as a realist and on what mood he is in, this can entail that everybody who is not a realist is considered mistaken.

An example of a problem whose solution does not need to involve realism is: "John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?" Possible answers would be: "He thinks his brother is cool" or "He wants to annoy his brother" or "He doesn't emulate his brother, they are just very similar". Of course you could just brain scan John. But if you really knew John, that's not what you would do, unless brain scanners were about as common and inexpensive as laptops. And have much better functionality than they currently do.

In the John problem, there's no need to construct the assumptions of a physical world, because the problem would be intelligible even in the case you meet John in a dream. You can't take any physical brain scanner with you in a dream, so you can't brain scan John. But you can analyze John's behavior with the same criteria according to which you would analyze him had you met him while awake.

I'm not trying to impose any views on you, because I'm basically just trying to find out whether someone is interested in this kind of stuff. The point is that I'm trying to construct a framework theory for AI that is not grounded on anything else than sensory (or emotional etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. The theory would be pretty much both philosophy and AI.
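By "dynamic typing, as opposed to static typing" I mean roughly this difference, illustrated in generic Python (nothing here is specific to RP):

```python
# Dynamic typing: a name can be rebound to values of different types at runtime,
# and the type of a value is checked when it is used, not declared up front.
thing = "a sensory perception"
print(type(thing).__name__)  # str

thing = 42                   # rebinding to a different type is allowed
print(type(thing).__name__)  # int

# In a statically typed language the corresponding declaration would fix the
# type once, and the rebinding above would be rejected at compile time.
```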

The problem I see now is this. My theory, RP, is founded on the notion that important parts of thinking are based on metaphysical emergence. The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed. I would allow both, but if the people on LW are reductionists, I would suppose that the logical consequence is that they believe my theory cannot work. And that's why I'm a bit troubled by the notion that you might accept reductionism as some sort of an axiom, because you don't want to have a long philosophical conversation and would prefer to settle down with something that currently seems reasonable. So should I expect you to not want to consider other options? It's strange that I should go elsewhere with my project, because that would amount to you rejecting an AI theory on the grounds that it contradicts your philosophical assumptions. Yet, my common sense expectation would be that you'd find AI more important than philosophy.

Comment by Tuukka_Virtaperko on Reductionism · 2012-01-16T22:31:07.273Z · LW · GW

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Can you handle the truth then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded in logic in order to work for a practical purpose. But you are making extremely abstract statements here. They just don't mean anything unless you define truth and solve the symbol grounding problem. You have criticized philosophy in other threads, yet here you are making dubious arguments. The arguments are dubious because they are not clearly mere rhetoric, and not clearly philosophy. If someone tries to require you to explain their meaning, you could say you're not interested in philosophy, so philosophical counterarguments are irrelevant to you. But you can't be uninterested in philosophy if you make philosophical claims like that and actually consider them important.

I don't like contemporary philosophy either, but I would suppose you are in trouble with these things, and I wonder if you are open to a solution? If not, fine.

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)

But you haven't defined reality. As long as you haven't done so, "reality" will be a metaphorical, vague concept, which frequently changes its meaning in use. This means if you state something to be "reality" in one discussion, logical analysis would probably reveal you didn't use it in the same meaning in another discussion.

You can have a deterministic definition of reality, but that will be arbitrary. Then people will start having completely pointless debates with you, and to make matters worse, you will perceive these debates as people trying to undermine what you are doing. That's a problem caused by you not realizing you didn't have to justify your activities or approach in the first place. You didn't need to make these philosophical claims, and I don't suppose you would have done so had you not felt threatened by something, such as religion or mysticism or people imposing their views on you.

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If you categorize yourself as a reductionist, why don't you go all the way? You can't be both a reductionist and a realist. That is, you can't believe in reductionism and in the existence of a territory at the same time. You have to drop either one of them. But which one?

Drop the one you need to drop. I'm serious. You don't need this metaphysical nonsense to justify something you are doing. Neither reductionism nor realism is "true" in any meaningful way. You are not doing anything wrong if you are a reductionist for 15 minutes, then switch to realism (i.e. the belief in a "territory") for ten seconds, then switch again to reductionism and then maybe to something else. And that is also the way you really live your life. I mean, think about your mind. I suppose it's somewhat similar to mine. You don't think about that metaphysical nonsense when you're actually doing something practical. So you are not a metaphysicist when you're riding a bike and enjoying the wind or something.

It's just some conception of yourself which you have, that you have defined as someone who is an advocate of "reductionism and realism". This conception is true only when you indeed are either one of those. It's not true when you're neither of those. But you are operating in your mind. Suppose someone says to you that you're not a "reductionist and a realist" when you are, for example, in intense pain for some reason and are very unlikely to think about philosophy. Well, even in that case you could remind yourself of your own conception of yourself, that is, that you are a "reductionist and a realist", and argue that the person who said you are not was wrong. But why would you want to do so? The only reasons I see are some naive or egoistic or defensive ones, such as:

  • You are afraid the person who said you're not a "reductionist or realist" will try to waste your time by presenting stupid arguments according to which you may or may not or should or should not do something.
  • You believe your image of yourself as a "reductionist and realist" is somehow "true". But you are able to decide at will whether that image is true. It is true when you are thinking in a certain way, and false when you are not thinking that way. So the statement conveys no useful information, except maybe about something you would like to be or something like that. But that is no longer philosophy.
  • You have some sort of a need to never get caught uttering something that's not true. But in philosophy, it's a really bad idea to want to make true statements all the time. Metaphysical theories in and of themselves are neither true nor false. Instead, they are used to define truth and falsehood. They can be contradictory or silly or arbitrary, but they can't be true or false.

If you state that you regard one state of mind or one theory, such as realism or reductionism, as some sort of ultimate truth, you are simply putting yourself into a prison of words for no reason, except that you apparently perceive some sort of safety in that prison or something like that. But it's not safe. It exposes you to philosophical criticism you previously were invulnerable to, because before you went to that prison, you didn't even participate in that game.

If you actually care about philosophy, great. But I haven't yet gotten such an impression. It seems like philosophy is an unpleasant chore to you. You want to use philosophy to obtain justification, a sense of entitlement, or something, and then throw it away because you think you're already finished with it - that you've obtained a framework theory which already suits your needs, and you can now focus on the needs. But you're not a true reductionist in the sense you defined reductionism, unless you also scrap the belief in the territory. I don't care what you choose as long as you're fine with it, but I don't want you to contradict yourself.

There is no way to express the existence of the "territory" as a meaningfully true statement. Or if there is, I haven't heard of it. It is a completely arbitrary declaration you use to create a framework for the rest of the things you do. You can't construct a "metatheory of reality" which is about the territory, which you suppose to exist, and have that same territory prove the metatheory is right. The territory may contain empirical evidence that the metatheory is okay, but no algorithm can use that evidence to produce proof for the metatheory, because:

  • From the "territory's" point of view, the metatheory is undefined.
  • But the notion of gathering empirical evidence is meaningless if the metatheory, according to which the "territory" exists, is undefined.

Therefore, you have to define it if you want to use it for something, and just accept the fact that you can't prove it to be somehow true, much less use its alleged truth to prove something else false. You can believe what you want, but you can't make an AI that would use "territory" to construct a metatheory of territory, if it's somehow true to the AI that territory is all there is. The AI can't even construct a metatheory of "map and territory", if it's programmed to hold as somehow true that map and territory are the only things that exist. This entails that the AI cannot conceptualize its own metaphysical beliefs even as well as you can. It could not talk about them at all. To do so, it would have to be able to construct arbitrary metatheories on its own. This can only be done if the AI holds no metaphysical belief as infallible, that is, the AI is a reductionist in your meaning of the word.

I've seen some interest towards AI on LW. If you really would like to one day construct a very human-like AI, you will have problems if you cannot program an AI that can conceptualize the structure of its own cognitive processes also in terms that do not include realism. Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task. So if you want to have that assumption around all the time, you'll just end up adding unnecessary extra baggage to the AI which will probably also make the code very difficult to comprehend. You don't want to lug the assumption around all the time just because it's supposed to be true in some way nobody can define.

You could as well have a reductionist theory which only constructs realism (ie. the declaration that an external world exists) under certain conditions. Now, philosophy doesn't usually include such theories, because the discipline is rather outdated, but there's no inherent reason why it can't be done. Realism is neither true nor false in any meaningful and universal way. You are free to assert it if you are going to use that assertion for something. But if you just say it, as if it would mean something in and of itself, you are not saying anything meaningful.
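
To make this concrete, here is a minimal sketch of what such a conditionally constructed assumption could look like, assuming nothing about how a real AI would be organized; the class and method names are purely illustrative and are mine, not part of any existing theory:

```python
# A toy agent that treats "realism" as a retractable working assumption,
# constructed only when a task calls for it, rather than a hard-coded truth.

class Agent:
    def __init__(self):
        self.assumptions = set()     # currently active framework assumptions

    def adopt(self, assumption):
        self.assumptions.add(assumption)

    def drop(self, assumption):
        self.assumptions.discard(assumption)

    def plan_bike_ride(self):
        # A practical task constructs realism for its own duration.
        self.adopt("an external world exists")
        try:
            return "route planned against the assumed external world"
        finally:
            self.drop("an external world exists")   # nothing forces us to keep it

    def inspect_own_assumptions(self):
        # Metatheoretic work is possible because no assumption is held as infallible.
        return sorted(self.assumptions)


agent = Agent()
print(agent.plan_bike_ride())
print(agent.inspect_own_assumptions())   # -> []
```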

I hope you were interested in my rant.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-16T18:18:01.109Z · LW · GW

I commented on Against Modal Logics.

Comment by Tuukka_Virtaperko on Against Modal Logics · 2012-01-16T18:16:00.197Z · LW · GW

I wrote a bunch of comments to this work while discussing with Risto_Saarelma. But I thought I should rather post them here. I came here to discuss certain theories that are on the border between philosophy and something which could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very "old" stuff. The formation of that tradition of discussion began in 1974. So that's my background.

The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

What would I answer to the question whether my work is about language? I'd say it's both about language and algorithms, but it's not some Chomsky-style stuff. It does account for the symbol grounding problem in a way that is not typically expected of language theory. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent manner. So how are people going to reduce something undefined to purely causal models? Well, that doesn't sound very possible, so I'd say the goals of RP are relevant.

But this kind of reductionism is hard work.

I would imagine mainstream philosophy to be hard work, too. This work, unfortunately, would, to a great extent, consist of making correct references to highly illegible works.

Modern philosophy doesn't enforce reductionism, or even strive for it.

Well... I wouldn't say RP enforces reductionism or that it doesn't enforce reductionism. It kinda ruins RP if you develop a metatheory where theories are classified either as reductionist or nonreductionist. You can do that - it's not a logical contradiction - but the point of RP is to be such a theory, that even though we could construct such metatheoretic approaches to it, we don't want to do so, because it's not only useless, but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but trying to devise some very grand philosophy of which I'm not sure what it could be used for. My intention is that things like "reductionism" are placed within RP instead of placing RP into a box labeled "reductionism".

RP is supposed to define things recursively. That is not, to my knowledge, impossible. So I'm not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I'm not sure what Eliezer means by "reductive". It seems like yet another philosophical concept. I'd better check if it's defined somewhere on LW...

And then they publish it and say, "Look at how precisely I have defined my language!"

I'm not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to find out things I would not have otherwise noticed. That's why I want to understand the formal definitions myself despite sometimes having other people practically do them for me.

Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world?

I think that's pretty cogent criticism. I've found the same kind of things troublesome.

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.

I understand how Eliezer feels. I guess I don't even tell people they need to look at philosophy for its own sake. How should I know what someone else wants to do for its own sake? But it's not so simple with RP, because it could actually work for something. Good philosophy is simply hard to find, and if I hadn't studied the MOQ, I might very well now be laughing at Langan's CTMU with many others, because I wouldn't understand the thing he is, a bit awkwardly, trying to express.

I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?

  • Ten pages?
  • Hundred pages?
  • Thousand pages?
  • Does it contain no formulae or few formulae?
  • Does it contain a lot of formulae?

I've read academic publications to the point that I don't believe there is any work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don't believe many scholars think there really can be such a thing. They are interested in "refining" the debate somehow. They don't treat it as some matter that needs to be solved because it actually means something.

This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-16T13:01:36.692Z · LW · GW

Sorry if this feels like dismissing your stuff.

You don't have to apologize, because you have been useful already. I don't require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.

That's a good point. The philosophical tradition of discussion I belong to was started in 1974 as a radical deviation from contemporary philosophy, which makes it pretty fresh. My personal opinion is that within decades or centuries, the largely obsolete mode of investigation you referred to will be mostly replaced by something that resembles what I and a few others are currently doing. This is because the old mode of investigation does not produce results. Despite intense scrutiny for 300 years, it has not provided an answer to such a simple philosophical problem as the problem of induction. Instead, it has corrupted the very writing style of philosophers. When one is reading philosophical publications by authors with academic prestige, every other sentence seems somehow defensive, and the writer seems to be squirming in the inconvenience caused by his intuitive understanding that what he's doing is barren but he doesn't know of a better option. It's very hard for a distinguished academic to go into the freaky realm and find out whether someone made sense but had a very different approach than the academic approach. Aloof but industrious young people, with lots of ability but little prestige, are more suitable for that.

Nowadays the relatively simple philosophical problem of induction (the proof of the Poincaré conjecture is, relatively speaking, extremely complex) has been portrayed as such a difficult problem that, if someone devises a theoretical framework which facilitates a relatively simple solution to the problem, academic people are very inclined to state that they don't understand the solution. I believe this is because they insist the solution should be something produced by several authors working together for a century. Something that will make theoretical philosophy again appear glamorous. It's not that glamorous, and I don't think it was very glamorous to invent 0 either - whoever did that - but it was pretty important.

I'm not sure what good this ranting of mine is supposed to do, though.

I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.

The metaphysics of quality, of which my RP is a much-altered instance, is an empiricist theory, written by someone who has taught creative writing in Uni, but who has also worked writing technical documents. The author has a pretty good understanding of evolution, social matters, computers, stuff like that. Formal logic is the only thing in which he does not seem proficient, which maybe explains why it took so long for me to analyze his theories. :)

If you want, you can buy his first book, Zen and the Art of Motorcycle Maintenance, from Amazon at the price of a pint of beer. (Tap me on the shoulder if this is considered inappropriate advertising.) You seem to be logically rather demanding, which is good. It means I should tell you that in order to attain an understanding of the MOQ that explains a lot more of the metaphysical side of RP, you should also read his second book. They are also available in every Finnish public library I have checked (maybe three or four libraries).

What more to say... Pirsig is extremely critical of the philosophical tradition starting from antiquity. I already know LW does not think highly of contemporary philosophy, and that's why I thought we might have something in common in the first place. I think we belong to the same world, because I'm pretty sure I don't belong to Culture One.

The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at any chance you get

Okay, but nobody truly understands that hairball, if it's the brain.

the end result of what you're trying to do should be a computable algorithm.

That's what I'm trying to do! But it is not my only goal. I'm also trying to have at least some discourse with World One, because I want to finish a thing I began. My friend is currently in the process of writing a formal definition related to that thing, and I won't get far with the algorithm approach before he's finished that and is available for something else. But we are actually planning that. I'm not bullshitting you or anything. We have been planning to do that for some time already. And it won't be fancy at first, but I suppose it could get better and better the more we work on it, or the approach would maybe prove a failure, but that, again, would be an interesting result. Our approach is maybe not easily understood, though...

My friend understands philosophy pretty well, but he's not extremely interested in it. I have this abstract model of how this algorithm thing should be done, but I can't prove to anyone that it's correct. Not right now. It's just something I have developed by analyzing an unusual metaphysical theory for years. The reason my friend wants to do this apparently is that my enthusiasm is contagious and he does enjoy maths for the sake of maths itself. But I don't think I can convince people to do this with me on grounds that it would be useful! And some time ago, people thought number theory was a completely useless but somehow "beautiful" form of mathematics. Now the products of number theory are used in top-secret military encryption, but the point is, nobody who originally developed number theory could have convinced anyone the theory would have such use in the future. So, I don't think I can have people working with me in hopes of attaining grand personal success. But I think I could meet someone who finds this kind of activity very enjoyable.

The "state basic assumptions" approach is not good in the sense that it would go all the way to explaining RP. It's maybe a good starter, but I can't really transform RP into something that could be understood from an O point of view. That would be like me needing to express equation x + 7 = 20 to you in such terms that x + y = 20. You couldn't make any sense of that.

I really have to go now, actually I'm already late from somewhere...

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-15T23:39:03.664Z · LW · GW

According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But the RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand what Buddhists could conceivably believe they achieve by meditation. They have practiced it for millennia, yet they did not do brain scans that would have revealed its beneficial effects, and they did not perform questionnaires either and compile the results into a statistic. But they believed it is good to meditate, and were not very interested in knowing why it is good. That belongs to the realm of S.

In any case, this illustrates an essential feature of RP. It's not so much a theory about "things", you know, cars, flowers, finances, as a theory about what the most basic kinds of things are, or about what kind of options for the scope of any theory or statement are intelligible. It doesn't currently do much more because the algorithm part is missing. It's also not necessarily perfect or anything like that. If something apparently coherent cannot be included in the scope of RP in a way that makes sense, maybe the theory needs to be revised.

Perhaps I could give a weird link in return. This is written by someone who is currently a Professor of Analytic Philosophy at the University of Melbourne. I find the theory to mathematically outperform that of Langan in that it actually has mathematical content instead of some sort of a sketch. The writer expresses himself coherently and appears to understand in what style people expect to read that kind of text. But the theory does not recurse in interesting ways. It seems quite naive and simple to me and ignores the symbol grounding problem. It is practically an N-type theory, which only allegedly has S or O content. The writer also seems to make exaggerated interpretations of what Nagarjuna said. These exaggerated interpretations lead to making the same assumptions which are the root of the contradiction in the CTMU, but The Structure of Emptiness is not described as a Wheeler-style reality theory, so in that paper, the assumptions do not lead to a contradiction although they still seem to misunderstand Nagarjuna.

By the way, I have thought about your way of asking for basic assumptions. I guess I initially confused it with you asking for some sort of axioms, but since you weren't interested in the formalisms, I didn't understand what you wanted. But now I have the impression that you asked me to make general statements of what the theory can do that are readily understood from the O viewpoint, and I think it has been an interesting approach for me, because I didn't use that in the MOQ community, which would have been unlikely to request that approach.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-15T20:07:15.461Z · LW · GW

It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?

I thought I've given you links to my actual work, but I can't find them. Did I forget? Hmm...

If you dislike metaphysics, only the latter is for you. I can't paste the content, because the formatting on this website apparently does not permit html formulae. Wait a second, it does permit formulae, but only LaTeX. I know LaTeX, but the formulae aren't in that format right now. I should maybe convert them.

You won't understand the flowchart if you don't want to discuss metaphysics. I don't think I can prove that something, of which you don't know what it is, could be useful to you. You would have to know what it is and judge for yourself. If you don't want to know, it's ok.

I am currently not sure why you would want to discuss this thing at all, given that you do not seem quite interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff to you in terms of something that is familiar to you, yet you don't seem very interested in having a discussion where I would actually do that. If you don't know why you are having this discussion, maybe you would like to do something else?

There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your abilities to casually namedrop a bunch of things I will probably spend days thinking about.

But I don't know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I'm some loony hippie who actually needs a recap on those things. I am not yet feeling very comfortable in this forum because I perceive myself as vulnerable to being misrepresented as some sort of a fool by people who don't understand what I'm doing.

I'm not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue of what it is, then I attain a certain feeling of entitlement. You can't just go badmouthing people and their theories and not expect any consequences if you are mistaken. You don't need to defend yourself either, because I'm here to tell you what recursive metaphysical theories such as the CTMU are about, or recommend you to shut up about the CTMU if you are not interested in metaphysics. I'm not here to bloat my ego by portraying other people as fools with witty rhetoric, and if you Google the CTMU, you'll find a lot of people doing precisely that to the CTMU, and you will understand why I fear that I, too, could be treated in such a way.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-15T18:12:59.574Z · LW · GW

A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

You want a sweater. I give you a baby sheep, and it is the only baby sheep you have ever seen that is not completely lame or retarded. You need wool to produce the sweater, so why are you disappointed? Look, the mathematical part of the theory is something we wrote less than a week ago, and it is already better than any theory of this type I have ever heard of (three or four). The point is not that this would be excruciatingly difficult. The point is that for some reason, almost nobody is doing this. It probably has something to do with the severe stagnation in the field of philosophy. The people who could develop philosophy find the academic discipline so revolting they don't.

I did not come to LessWrong to tell everyone I have solved the secrets of the universe, or that I am very smart. My ineptitude in math is the greatest single obstacle in my attempts to continue development. If I didn't know exactly one person who is good at math and wants to do this kind of work with me, I might be in an insane asylum, but no more about that. I came here because this is my life... and even though I greatly value the MOQ community, everyone on those mailing lists is apparently even less proficient in maths and logic than I am. Maybe someone here thinks this is fun and wants to have a fun creative process with me.

I would like to write a few of those 100 000 pages that we need. I don't get your point. You seem to require me to have written them before I have written them.

My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.

Do you expect to build the digital sauce kernel without any kind of a plan - not even a tentative one? If not, a few pages of extremely abstract formulae is all I have now, and frankly, I'm not happy about that either. I can hardly teach you anything you seem interested in, but I could really use some discussion with interested people. And you have already been helpful. You don't need to consider me someone who is aggressively imposing his views on individual people. I would love to find people who are interested in these things because there are so few of them.

I had a hard time figuring out what you mean by basic assumptions, because I've been doing this for such a long time I tend to forget what kind of metaphysical assumptions are generally held by people who like science but are disinterested of metaphysics. I think I've now caught up with you. Here are some basic assumptions.

  • RP is about definable things. It is not supposed to make statements about undefinable things - not even that they don't exist, like you would seem to believe.
  • Humans are before anthropology in RP. The former is in O2 and the latter in O4. I didn't know how to tell you that because I didn't know you wanted to hear that and not some other part of the theory in order to not go whaaa. I'd need to tell you everything but that would involve a lot of metaphysics. But the theory is not a theory of the history of the world, if "world" is something that begins with the Big Bang.
  • From your empirical scientific point of view, I suppose it would be correct to state that RP is a theory of how the self-conscious part of one person evolves during his lifetime.
  • At least in the current simple instance of RP, you don't need to know anything about the metaphysical content to understand the math. You don't need to go out of math-mode, because there are no nonstandard metaphysical concepts among the formulae.
  • If you do go out of the math mode and want to know what the symbols stand for, I think that's very good. But this can only be explained to you in terms of metaphysics, because empirical science simply does not account for everything you experience. Suppose you stop by the grocery store. Where's the empirical theory that accounts for that? Maybe some general sociological theory would. But my point is, no such empirical theory is actually implemented. You don't acquire a scientific explanation for the things you did in the store. Still you remember them. You experienced them. They exist in your self-conscious mind in some way, which is not dependent on your conceptions of what the relationship between topology and model theory is, or of your understanding of why fission of iron does not produce energy, or how one investor could single-handedly significantly affect whether a country joins the Euro. From your personal, what you might perhaps call "subjective", point of view, it does not even depend on your conception of cognitive science, unless you actually apply that knowledge to it. You probably don't do that all the time although you do that sometimes.
  • I don't subscribe to any kind of "subjectivism", whatever that might be in this context, or idealism, in the sense that something like that would be "true" in a meaningful way. But you might agree that when trying to develop the theory underlying self-conscious phenomenal and abstract experience, you can't begin from the Big Bang, because you weren't there.
  • You could use RP to describe a world you experience in a dream, and the explanation would work as well as when you are awake. Physical theories don't work in that world. For example, if you look at your watch in a dream, then look away, and look at it again, the watch may display a completely different time. Or the watch may function, but when you take it apart, you find that instead of clockwork, it contains something a functioning mechanical watch will not contain, such as coins.
  • RP is intended to relate abstract thought (O, N, S) to sensory perceptions, emotions and actions (R), but to define all relations between abstract entities to other abstract entities recursively.
  • One difference between RP and the empirical theories of cosmology and such that you mentioned is that the latter will not describe the ability of person X to conceptualize his own cognitive processes in a way that can actually be used right now to describe what, or rather, how, some person is thinking with respect to abstract concepts. RP does that.
  • RP can be used to estimate the metaphysical composure of other people. You seem to place most of the questions you label "metaphysical" or "philosophical" in O.
  • I don't yet know if this forum tolerates much metaphysical discussion, but my theory is based on about six years of work on the Metaphysics of Quality. That is not mainstream philosophy and I don't know how people here will perceive it. Its latest "authorized" variant, from 1991, decisively included mostly just the O patterns. Analyzing the theory was very difficult for me in general. But maybe I will confuse people if I say nothing about the metaphysical side. So I'll think what to say...
  • RP is not an instance of relativism (except in the Buddhist sense), absolutism, determinism, indeterminism, realism, antirealism or solipsism. Also, I consider all those theories to be some kind of figures of speech, because I can't find any use for them except to illustrate a certain point in a certain discussion in a metaphorical fashion. In logical analysis, these concepts do not necessarily retain the same meaning when they are used again in another discussion. These concepts acquire definable meaning only when detached from philosophical use and placed within a specific context.
  • Structurally RP resembles what I believe computer scientists call context-free languages, or programming languages with dynamic typing. I am not yet sure what the exact definition of the former is, but having written a few programs, I do understand what it means to do typing at run-time; a rough illustration of that side of the analogy follows after this list. The Western mainstream philosophical tradition does not seem to include any theories that would be analogues of these computer science topics.
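
As promised above, a rough illustration of the dynamic typing side of the analogy. The example is generic Python and is mine; it is not a formal statement about RP:

```python
# Illustrative only: how an entity is handled is decided at run-time,
# from what the entity turns out to be, not fixed in advance by a declaration.

def describe(entity):
    if isinstance(entity, set):
        return f"a collection with {len(entity)} members"
    if isinstance(entity, str):
        return f"a symbol: {entity!r}"
    return f"something else: {type(entity).__name__}"

for entity in [{"red patch", "loud noise"}, "beauty", 3.14]:
    print(describe(entity))
```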

I have read GEB but don't remember much. I'll recap what a quine is. I tend to need to discuss mathematical things with someone face to face before I understand them, which slows down progress.

The cat/line thing is not very relevant, but apparently I didn't remember the experiment right. However, if the person and the robot could not see the lines at the same time for some reason - such as the robot needing to operate the scanner and thus not seeing inside the scanner - the robot could alter the person's brain to produce a very strong response to parallel lines in order to verify that the screen inside the scanner, which displays the lines, does not malfunction, is not unplugged, the person is not blind, etc. There could be more efficient ways of finding such things out, but if the robot has replaceable hardware and can thus live indefinitely, it has all the time in the world...

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-15T10:56:55.871Z · LW · GW

I don't find the Chinese room argument related to our work - besides, it seems to possibly vaguely try to state that what we are doing can't be done. What I meant is that AI should be able to:

  • Observe behavior
  • Categorize entities into deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
  • Categorize entities into agencies who process information recursively and can consciously alter their own data processing or explain it to others.
  • Use this categorization ability to differentiate entities whose behavior can be corrected or explained by means of social interaction.
  • Use the differentiation ability to develop the "common sense" view that, given permission by the owner of the scanner and if deemed interesting, the robot could take the scanner apart and fix it without asking the scanner itself for consent, since the scanner cannot meaningfully give any.
  • Understand that even if the robot were capable of performing incredibly precise neurosurgery, the person will understand the notion, that the robot wishes to use surgery to alter his thoughts to correspond with the result of the brain scanner, and could consent to this or deny consent.
  • Possibly try to have a conversation with the person in order to find out why they said that they were not thinking of a cat.

Failure to understand this could make the robot naively both take machines apart and cut people's brains in order to experimentally verify which approach produces better results. Of course there are also other things to consider when the robot tries to figure out what to do.

I don't consider robots and humans fundamentally different. If the AI were complex enough to understand the aforementioned things, it also would understand the notion that someone wants to take it apart and reprogam it, and could consent or object.

The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat.

The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved in that. If the cat is a problem, let's simplify the image to the black and white lines.

Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Even the simplest entities, such as irrational numbers or cellular automata, can have complex behavior. Humans, too, could be deterministic and predictable given that the one analyzing a human has enough data and computing power. But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself. Every iteration of the map representing itself needs also to be included in the map, resulting in a requirement that the map should contain an infinite amount of information. Only an external observer could make a finite map, but that's not what I had in mind when beginning this RP project. I do consider the goals of RP somehow relevant to AI, because I don't suppose it's OK for a robot to be unable to conceptualize its own thought very elaborately, if it is intended to be as human as possible, and maybe even be able to write novels.
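
A toy way to see the regress; the encoding is entirely my own and not part of RP. A map that is required to describe a territory containing the map itself never finishes describing itself:

```python
# Toy illustration of the self-containing map regress: each attempt to
# complete the map adds a new level that the map must then also describe.

def complete_map(map_so_far, depth, max_depth=5):
    if depth == max_depth:
        return map_so_far + ["... and so on, without end"]
    next_level = f"level {depth}: the territory, plus the map drawn so far"
    return complete_map(map_so_far + [next_level], depth + 1, max_depth)

for line in complete_map([], 0):
    print(line)
```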

I am interested in the ability to genuinely understand the worldviews of other people. For example, the gap between scientific and religious people. In the extreme, these people think of each other in such a derogatory way, that it would be as if they would view each other as having failed the Turing test. I would like robots to understand also the goals and values of religious people.

I'm still not really grasping the underlying assumptions behind this approach.

Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize idealistic ontology or you believe "mind" to refer to an empty set.

I see here the danger for rather trivial debates, such as whether I believe an AI could "experience" consciousness or reality. I don't know what such a question would even mean. I am interested in whether it can conceptualize them in ways a human could.

(The underlying approach in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else"

The CTMU also states something to the effect of this. In that case, Langan is making a mistake, because he believes the CTMU to be a Wheeler-style reality theory, which contradicts the earlier statement. In your case, I guess it's just an opinion, and I don't feel a need to say you should believe otherwise. But I suppose I can present a rather cogent argument against that within a few days. The argument would be in the language of formal logic, so you should be able to understand it. Stay tuned...

, "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")

I don't wish to be impolite, but I consider these topics boring and obvious. Hopefully I haven't missed anything important when making this judgement.

Your strange link is very intriguing. I like very much being given this kind of links. Thank you.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-14T13:24:11.816Z · LW · GW

You probably have a much more grassroot-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have formal understanding of its nature.

In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.

More generally, I would find RP to be useful as an extremely general framework of how AI or parts of AI can be constructed in relation to each other, especially with regard to understanding language and the notion of consciousness. This doesn't necessarily have anything to do with some more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.

At some point, philosophical questions and AI will collide. Suppose the following thought experiment:

We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?

  • 1) The brain scanner is broken
  • 2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
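
The contrast I have in mind can be caricatured in a few lines. This is only my own toy picture, not a claim about real scanners or minds:

```python
# Caricature: the scanner is a fixed mapping from input to output, and running
# it never changes the mapping; the person's processing becomes part of the
# person's state, so later answers can depend on it.

def scanner(brain_state):
    return "cat" if "cat-pattern" in brain_state else "no cat"

class Person:
    def __init__(self):
        self.history = []

    def answer(self, question):
        self.history.append(question)
        if len(self.history) > 1:
            return "As I already said, I am not thinking of a cat."
        return "I am not thinking of a cat."


print(scanner({"cat-pattern"}))              # -> cat
person = Person()
print(person.answer("What are you thinking of?"))
print(person.answer("What are you thinking of?"))
```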

RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework of AI is not something that says: "Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà." Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, "categorizes the parts of the actual solution to the symbol grounding problem").

In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.

But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.

  • One problem is that ℘(T) does not seem to define actual emergences, but only all possible emergences.
  • We should define functions for "generalizing" and "specifying" sets or predicates, in which generalization would create a new set or predicate from an existing one by adding members, and specifying would do so by reducing members. (A rough sketch of these functions, together with the discard order from the next item, follows after this list.)
  • We should add a discard order to sets. Sets that are used often have a high discard order, but sets that are never used end up erased from memory. This is similar to nonused pathways in the brain dying out, and often used pathways becoming stronger.
  • The theory does not yet have an algorithmic part, but it should have. That's why it doesn't yet do anything.
  • ℘(Rn) should be defined to include a metatheoretic approach to the theory itself, facilitating modification of the theory with the yet-undefined generalizing and specifying functions.
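
A minimal sketch of the generalizing and specifying functions and of one possible discard order, under my own assumptions (a plain use counter standing in for pathway strength); none of these names are part of the formal definitions at moq.fi:

```python
# Sketch: generalizing adds members, specifying removes them, and a use
# counter decides which stored sets are forgotten first (like unused pathways).

def generalize(s, new_members):
    """New set from an existing one by adding members."""
    return frozenset(s) | frozenset(new_members)

def specify(s, removed_members):
    """New set from an existing one by removing members."""
    return frozenset(s) - frozenset(removed_members)

class Memory:
    def __init__(self):
        self.sets = {}    # name -> frozenset
        self.uses = {}    # name -> how many times the set has been used

    def store(self, name, s):
        self.sets[name] = frozenset(s)
        self.uses.setdefault(name, 0)

    def use(self, name):
        self.uses[name] += 1
        return self.sets[name]

    def forget_least_used(self):
        # The least-used set is erased first; often-used sets survive.
        name = min(self.uses, key=self.uses.get)
        del self.sets[name], self.uses[name]


mem = Memory()
mem.store("animals", {"cat", "dog"})
mem.store("numbers", {1, 2, 3})
mem.store("animals+", generalize(mem.use("animals"), {"sheep"}))
mem.use("animals+")
mem.forget_least_used()          # "numbers" was never used, so it goes first
print(sorted(mem.sets))          # -> ['animals', 'animals+']
```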

Questions to you:

  • Is T -> U the Cartesian product of T and U?
  • What is *?

I will not guarantee having discussions with me is useful for attaining a good job. ;)

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-13T13:25:40.169Z · LW · GW

Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:

Let n be 4.

R contains everything that could be used to ground the meaning of symbols.

  • R1 contains sensory perceptions
  • R2 contains biological needs such as eating and sex, and emotions
  • R3 contains social needs such as friendship and respect
  • R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes reducible to the Golden ratio)

N contains relations of purely abstract symbols.

  • N1 contains the elementary abstract entities, such as symbols and their basic operations in a formal system
  • N2 contains functions of symbols
  • N3 contains functions of functions. In mathematics I suppose this would include topology.
  • N4 contains information about the limits of the system, such as completeness or consistency. This information forms the basis of what "truth" is like.

Let ℘(T) be the power set of T.

The solving of the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rn) ⊆ R(n+1), that is, the power set of each level of R is contained in the next level. R5 hasn't been defined, though. If we don't assume subsets of R to emerge from each other, we'll have to construct a lot more complicated theories that are more difficult to understand.

This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.

O set includes the "realistic" theories, which assume the existence of an "objective reality".

  • ℘(R1) ⊆ O1 includes theories regarding sensory perceptions, such as physics.
  • ℘(R2) ⊆ O2 includes theories regarding biological needs, such as the theory of evolution
  • ℘(R3) ⊆ O3 includes theories regarding social affairs, such as anthropology
  • ℘(R4) ⊆ O4 includes theories regarding rational analysis and judgement of the way in which social affairs are conducted

The relationship between O and N:

  • N1 ⊆ O1 means that physical entities are the elementary entities of the objective portion of the theory of reality. Likewise:
  • N2 ⊆ O2
  • N3 ⊆ O3
  • N4 ⊆ O4

S set includes "solipsistic" ideas in which "mind focuses to itself".

  • ℘(R4) ⊆ S1 includes ideas regarding what one believes
  • ℘(R3) ⊆ S2 includes ideas regarding learning, that is, adoption of new beliefs from one's surroundings. Here social matters such as prestige, credibility and persuasiveness affect which beliefs are adopted.
  • ℘(R2) ⊆ S3 includes ideas regarding judgement of ideas. Here, ideas are mostly judged by how they feel. Ie. if a person is revolted by the idea of creationism, they are inclined to reject it even without rational grounds, and if it makes them happy, they are inclined to adopt it.
  • ℘(R1) ⊆ S4 includes ideas regarding the limits of the solipsistic viewpoint. Sensory perceptions of objectively existing physical entities obviously present some kind of a challenge to it.

The relationship between S and N:

  • N4 ⊆ S1 means that beliefs are the elementary entities of the solipsistic portion of the theory of reality. Likewise:
  • N3 ⊆ S2
  • N2 ⊆ S3
  • N1 ⊆ S4

That's the metaphysical portion in a nutshell. I hope someone was interested!
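
For readers who prefer code to prose, the containment relations above can be transcribed mechanically. The transcription below is mine and only restates what is already listed, with n = 4:

```python
# Restating the listed relations for n = 4; nothing new is added here,
# the loops only spell out the same containments as the bullet lists above.

n = 4
levels = range(1, n + 1)

# Emergence within R: the power set of each level is contained in the next.
r_emergence = [f"℘(R{k}) ⊆ R{k + 1}" for k in range(1, n)]

# O connects R and N in the same order; S connects them in inverse order.
o_relations = [f"℘(R{k}) ⊆ O{k},  N{k} ⊆ O{k}" for k in levels]
s_relations = [f"℘(R{n + 1 - k}) ⊆ S{k},  N{n + 1 - k} ⊆ S{k}" for k in levels]

for line in r_emergence + o_relations + s_relations:
    print(line)
```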

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-13T01:02:05.626Z · LW · GW

It's not like your average "competent metaphysicist" would understand Langan either. He wouldn't possibly even understand Wheeler. Langan's undoing is to have the goals of a metaphysicist and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicists do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and have a finite preset amount of states that an object can have. I find the CTMU paper a bit sketchy and missing important content besides having the mistake. If you're interested in the mathematical structure of a recursive metaphysical theory, here's one: http://www.moq.fi/?p=242

Formal RP doesn't require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the amount of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.
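
To give a feel for "staggering": starting from a set of only two members (my arbitrary choice) and tracking nothing but sizes, iterating the power set a few times already leaves the reach of ordinary notation:

```python
# |℘(S)| = 2 ** |S|; track only the sizes through a few emergence cycles.

size = 2
for cycle in range(1, 5):
    size = 2 ** size
    print(f"after cycle {cycle}: a {len(str(size))}-digit number of members")
# cycle 3 already gives 65536 members; cycle 4 gives a 19729-digit count.
```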

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-10T15:28:52.263Z · LW · GW

To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.

Comment by Tuukka_Virtaperko on Welcome to Less Wrong! · 2012-01-05T22:04:40.045Z · LW · GW

That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.

Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig, he's just really smart otherwise.

I've never heard anyone present criticism of the CTMU that would actually imply understanding of what Langan is trying to do. The CTMU has a mistake. It's that Langan believes (p. 49) the CTMU to satisfy the Law Without Law condition, which states: "Concisely, nothing can be taken as given when it comes to cosmogony." (p. 8)

According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle "makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax". (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.

The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle "tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses". (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.

If that makes the CTMU rubbish, then Russell's Principia Mathematica is also rubbish, because it has a similar problem which was pointed out by Gödel. EDIT: Actually the problem is somewhat different than the one addressed by Gödel.

Langan's paper can be found here EDIT: Fixed link.