# Paraconsistent Tiling Agents (Very Early Draft)

post by IAFF-User-4 (Imported-IAFF-User-4) · 2015-04-02T07:27:07.000Z · score: 5 (5 votes) · LW · GW

This is a link post for https://github.com/from3004/misc-writing/blob/master/paraconsistent_tiling/paraconsistent_tiling.pdf

## Comments

Comments sorted by top scores.

This is very interesting!

My main question, which I didn't see an answer to on my first read, is: If the agent proves that action leads to the goal being achieved, are there any conditions under which this implies that the goal *actually* is achieved? The procrastination paradox shows that this isn't true in general. Is there a class of sentences (e.g., all sentences, though I don't expect that to be true in this case) such that if then ? In other words, do we have some guarantee that we do better at actually achieving than an agent which uses the system ?

A more technical point: Given that you claim that Theorem 3.2 (reflectively coherent quantified belief) follows "trivially" from Theorem 2.4, you presumably mean Theorem 2.4 to state that , rather than . (Even if you consider to implicitly contain an occurrence of , it is essential that in the free variable is replaced by the numeral corresponding to the outer value of .)

Nice work!

Minor note: in equation 1, I think the should be an .

I'm not all that familiar with paraconsistent logic, so many of the details are still opaque to me. However, I do have some intuitions about where there might be gremlins:

Solution 4.1 reads, "The agent could, upon realizing the contradiction, ..." You've got to be a bit careful here: the formalism you're using doesn't contain a reasoner that does something like "realize the contradiction." As stated, the agent is simply constructed to execute an action if it can prove ; it is not constructed to *also* reason about whether that proof was contradictory.

You could perhaps construct a system with an action condition of , but I expect that this will re-introduce many of the difficulties faced in a consistent logic (because this basically says "execute if consistently achieves ," and my current guess is that it's pretty hard to say "consistently" in a paraconsistent logic).

Or, in other words, I pretty strongly suspect that if you attempt to formalize a solution such as solution 4.1, you'll find lots of gremlins.

For similar reasons, I also expect solution 4.2 to be very difficult to formalize. What precisely is the action condition of an agent that "notices" when both and ? I don't know paraconsistent logic well enough yet to know how the obvious agent (with the action condition from two paragraphs above) behaves, but I'm guessing it's going to be a little difficult to work with.

Regardless, there do seem to be some promising aspects to the paraconsistent approach, and I'm glad you're looking into it!

Section 4.1 is fairly unpolished. I'm still looking for better ways of handling the problems it brings up; solutions 4.1 and 4.2 are very preliminary stabs in that direction.

The action condition you mention might work. I don't think it would re-introduce Löbian or similar difficulties, as it merely requires that implies that is *only true*, which is a truth value found in . Furthermore, we still have our internally provable T-schema, which does not depend on the action condition, from which we can derive that if the child can prove , then so can the parent. It is important to note that "most" (almost everything we are interested in) of is consistent without problem.

Now that I think about it, your action condition should be a requirement for paraconsistent agents, as otherwise they will be willing to do things that they can prove will not accomplish . There may yet be a situation which breaks this, but I have not come across it.

Thanks for writing this up! A couple questions:

- Does LP⊃ from this paper also work instead of LP? If not, do you have a link to a presentation of LP's axioms and rules of inference?
- I'm curious which step in Löb's theorem fails in PA*. Obviously the proof uses modus ponens many times, but I don't have a good intuition for which applications are allowed in PA* and which aren't.

Any extension of will work as well unless it leads to triviality; however, we cannot in general validate the T-schema for an arbitrary conditional. For example, any conditional that satisfies the inference of contraction ($A \to (A \to B) \vdash A \to B$) will succumb to Curry's paradox and lead to triviality (consider the fixed-point formula of the form $C \leftrightarrow (C \to \bot)$).
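For concreteness, here is the standard Curry derivation (my own reconstruction, not taken from the paper): given a fixed-point sentence $C \leftrightarrow (C \to \bot)$ supplied by diagonalization, a conditional validating contraction collapses the system:

```latex
\begin{align*}
&1.\ C \leftrightarrow (C \to \bot) && \text{diagonal fixed-point sentence}\\
&2.\ C \to (C \to \bot)             && \text{from 1, left to right}\\
&3.\ C \to \bot                     && \text{contraction applied to 2}\\
&4.\ C                              && \text{from 1 (right to left) and 3}\\
&5.\ \bot                           && \text{modus ponens on 3 and 4}
\end{align*}
```

Once $\bot$ is derivable, everything is, which is the triviality in question.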

Now, as far as I can tell, we can validate the T-schema of the conditional for ; since 's conditional is expressively equivalent, it seems like it shouldn't be too hard to show that we can also validate its T-schema. For further discussion on the topic (with proofs, etc.), see chapter 8 of Priest's part of volume XI of the second edition of Gabbay and Guenthner's Handbook of Philosophical Logic (what a mouthful of of's).

A thorough presentation of can be found in Priest's An Introduction to Non-Classical Logic, section 7.4. Sections 7.1-7.3 introduce the general structures of such logics and delineate their differences. This book does not say much about truth schemas.

Priest's paper "The Logic of Paradox" from '79 also describes , although I find the presentation a bit clunky. For a better grasp of how inference works in , look at section III.9; there are examples of both inferences that are valid and inferences that do not always hold even though we may expect them to.
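For readers without those references at hand, LP's three-valued semantics can be sketched directly. The tables below are the standard ones (values "true only," "both," and "false only," with the first two designated); the encoding and function names are my own. The final check exhibits the well-known counterexample showing that modus ponens for the material conditional is not truth-preserving in LP:

```python
# Sketch of LP (Logic of Paradox) semantics.
# Truth values: 1.0 = true only, 0.5 = both true and false, 0.0 = false only.
# A value is "designated" (counts as holding) if it is at least 0.5.

T, B, F = 1.0, 0.5, 0.0

def neg(a):      # negation swaps true and false; "both" is a fixed point
    return 1.0 - a

def disj(a, b):  # disjunction takes the maximum value
    return max(a, b)

def conj(a, b):  # conjunction takes the minimum value
    return min(a, b)

def impl(a, b):  # material conditional, defined as (not A) or B
    return disj(neg(a), b)

def designated(a):
    return a >= B

# Counterexample to modus ponens: let A be "both" and B be "false only".
a, b = B, F
assert designated(a)           # premise A holds (it is at least "both")
assert designated(impl(a, b))  # premise A -> B also holds: its value is "both"
assert not designated(b)       # ...yet the conclusion B fails
print("Modus ponens is invalid in LP with the material conditional.")
```

This is exactly the sort of failure the section III.9 examples illustrate: every classical tautology remains LP-valid under these tables, but some classical *inferences* stop being truth-preserving.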

I will probably be writing up a minimal formal description soon, as I code the Prolog meta-interpreter.

We can prove non-triviality of and that the T-schema holds, so as a consequence Löb must fail. This seems somewhat unsatisfactory as an answer to *which* step fails. I can think of a few reasons why Löb might fail; I haven't seen a discussion of this in the literature, so I will try to cobble together an answer, or the shape thereof, from what I know.

One possible reason is that pseudo modus ponens fails in LP. That is, though might entail for a well-chosen conditional, must fail or else lead to triviality due to Curry's paradox.

More likely is that this is a consequence of the way paraconsistency interacts with modal logic. From the Wikipedia article you linked, we have that "in normal modal logics, Löb's axiom is equivalent to the conjunction of the axiom schema 4 ($\Box A \to \Box\Box A$) and the existence of modal fixed points." We have both of those, so the difference must stem from the phrase "normal modal logics." The proof uses the existence of a modal fixed point of a certain form; this may not hold in a modal extension of (which would be analogous to , the modal extension of , just as is the modal extension of classical logic). More details on modal extensions of paraconsistent logics can be found in McGinnis' paper "Tableau Systems for Some Paraconsistent Modal Logics," though this paper does not mention modal fixed points. In fact, none of the papers I've read so far on paraconsistent modal logics have mentioned fixed points. I would construct a proof on my own, but I am currently too unfamiliar with modal logic; if it is important, I can go learn enough and do it.
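As a reference point for locating the failing step, the standard derivation of Löb's theorem in a normal modal logic runs roughly as follows (my own summary of the textbook proof); one assumes $\vdash \Box A \to A$ and a modal fixed point $\Psi \leftrightarrow (\Box\Psi \to A)$:

```latex
\begin{align*}
&1.\ \Psi \to (\Box\Psi \to A)              && \text{fixed point, left to right}\\
&2.\ \Box(\Psi \to (\Box\Psi \to A))        && \text{necessitation on 1}\\
&3.\ \Box\Psi \to \Box(\Box\Psi \to A)      && \text{distribution (K) on 2}\\
&4.\ \Box\Psi \to (\Box\Box\Psi \to \Box A) && \text{distribution (K) on 3}\\
&5.\ \Box\Psi \to \Box\Box\Psi              && \text{axiom schema 4}\\
&6.\ \Box\Psi \to \Box A                    && \text{from 4 and 5}\\
&7.\ \Box\Psi \to A                         && \text{from 6 and } \Box A \to A\\
&8.\ \Psi                                   && \text{fixed point, right to left, with 7}\\
&9.\ \Box\Psi                               && \text{necessitation on 8}\\
&10.\ A                                     && \text{modus ponens on 7 and 9}
\end{align*}
```

Note that line 6 uses the antecedent $\Box\Psi$ twice, a contraction-like move, so a conditional that blocks Curry-style reasoning would plausibly break the proof there; but I have not verified which step actually fails in a paraconsistent modal extension.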

Thanks for your comments! I hope to have at least somewhat answered your questions. Paraconsistency is still a very strange subject to me, so my studies and understanding are very much incomplete.