Posts

Bayes, Backwards 2017-12-03T18:17:10.735Z · score: 9 (4 votes)
[link] Is Alu Life? 2012-04-07T21:24:22.557Z · score: -8 (9 votes)
The Apparent Reality of Physics 2011-09-23T20:10:16.577Z · score: -4 (16 votes)
Syntacticism 2011-09-23T06:49:40.818Z · score: -3 (16 votes)

Comments

Comment by ec429 on Are long-term investments a good way to help the future? · 2018-05-04T21:29:14.166Z · score: 2 (1 votes) · LW · GW

I'm not quite sure how you're defining "causal model" here, but the bit about "get paid to build a factory, which then produces goods, meanwhile you don't consume the goods you were paid" seems causal to me. By not consuming the proceeds of your work, you have caused society to have more capital than otherwise. Heck, the paragraph beginning "But suppose…" is also describing a series of causes and effects, although it glosses over exactly how removing money from circulation drives up the value of money (that's just basic microecon, though, I'm assuming you already understand that).

The rate of GDP growth isn't really the right thing to use (GDP is the total value of all transactions in the economy, which is fundamentally a meaningless number and is certainly irrelevant here). Burying the money creates capital that grows at the rate of return on capital. Investing the money does the exact same thing. The only difference is to whom the interest is paid.

Actually, on re-reading your post, I see another sign of confusion where you talk about "real value" and claim that trades can't create it. Value is a two-argument function; when you buy the stock, it's because the stock (or rather, the future consumption-opportunity it represents) is worth more to you than the present consumption you could buy with it. Meanwhile, the seller values present consumption (or whatever else he buys with the money today, which may even be just the security of having larger reserves of liquid currency as opposed to holding less-liquid, potentially risky shares of stock) more than the future consumption the stock represents. Value has been created, not by creating 'stuff', but by moving existing stuff to higher-valued uses.

Similarly, when you defer consumption into the future, you are moving current stuff from consumption to capital uses (because at least some of the resources you are no longer consuming will end up as capital formation; not all of them, because the price of consumption goods will fall slightly relative to the price of capital goods, though it can only do so because the equilibrium quantities shift), and simultaneously moving future stuff (some of which is created by the capital uses of the current stuff) to consumption uses. As long as the rate of return on capital × your value ratio of future to present consumption is greater than 1, the combined effect is an increase in value. If you invest the money saved, then you capture the value thereby created (you're trading your future consumption against your current consumption); if you bury it, you don't: you're trading society's future consumption against your current consumption, but if you're a selfless utilitarian that's irrelevant.
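A toy arithmetic sketch of that inequality (all numbers hypothetical, purely to make the condition concrete):

```python
# Toy illustration (hypothetical numbers): deferring consumption is a net
# gain in value when (return on capital) x (your future/present value ratio) > 1.

def deferral_worthwhile(gross_return, future_vs_present_value):
    """gross_return: units of future consumption received per unit deferred
    (e.g. 1.05 for a 5% return).
    future_vs_present_value: how much you value a unit of future consumption
    relative to a unit of present consumption (e.g. 0.98)."""
    return gross_return * future_vs_present_value > 1.0

print(deferral_worthwhile(1.05, 0.98))  # True: 1.05 * 0.98 = 1.029 > 1
print(deferral_worthwhile(1.05, 0.90))  # False: 1.05 * 0.90 = 0.945 < 1
```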

Comment by ec429 on Nightmare of the Perfectly Principled · 2018-05-04T02:00:50.469Z · score: 4 (2 votes) · LW · GW

I suppose a world without law at all would be one in which people habitually defect on the Prisoner's Dilemma. Even when it's the least 'true' a PD can be and still be a PD (so, iterated, with reputational incentives, all that stuff). There are no Schelling points, and thus no coördination: Nash equilibria and Moloch for all.

The "rule of law", perhaps, is a list of particular properties of the law (collection of Schelling points): that they contain no proper nouns, for example (the same law binds the King), that they do not discriminate on irrelevant/prejudicial grounds, etc. That is to say, that the law is neither arbitrary nor capricious (I will hereinafter refer to this synecdochically as: the law is just). Of course, a society's beliefs about justice will be reflected in its informal law; thus a society will typically only see itself as lacking the rule of law inasmuch as its legislation, as enforced, is contrary to the principles of its informal law.

This is in contrast with your definition, under which (if I understand correctly) a society without the rule of law is one where enforcement doesn't follow the written legislation. Of course, the patterns in this mismatch are themselves part of (my-model) law; so if everyone agrees that a piece of legislation (say, the Fugitive Slave Law) is unjust and the refusal to enforce it is just, then the society has (or believes it has) my-rule-of-law but does not have your-rule-of-law (unjust legislation is not enforced).

Conversely, if the legislation is enforced even though most people believe it's unjust, then the society has your-rule-of-law but not (it believes) my-rule-of-law (unjust legislation is enforced).

If just legislation is enforced, then the society has both kinds of rule of law (since successful enforcement of legislation makes it law). If the legislation is just but selectively enforced in a way that is unjust, then the society has neither (because the selective enforcement creates law that is unjust).

Appeals to justice, and hence arguments based on my-rule-of-law, are subjective. But I would argue (had I not spent long enough on this response already) that justice itself is objective, even if we have not yet discovered all of its principles — and thus the same is true of my-rule-of-law.

Comment by ec429 on The Case Against Education: Splitting the Education Premium Pie and Considering IQ · 2018-05-04T00:26:31.946Z · score: 3 (2 votes) · LW · GW

One cause might be that some of the underlying drivers of performance (of which IQ is but one) correlate with race. For a non-IQ-related example, if your interview process for a basketball team includes a jumping test, this will have a "disparate impact" because on average blacks jump better than whites. Therefore, even if you can demonstrate that you use the jump test because the regression analysis showed it was a good predictor of performance, the usual suspects will scream "algorithmic bias" and now even if you prove (possibly in both a court of law and the court of public opinion) that your regression analysis was correct and valid, you're still gonna get denounced by someone's Twitter campaign. Except that that doesn't happen because the colours are the wrong way round for the outrage mob, and besides the jump test is obvious enough without the regression analysis that people arguing against it sound dumb and people arguing in its favour don't sound abstruse and sophistic.

But if your defence is that "complicated statistical techniques say this metric predicts performance", suddenly it's a lot easier for people to accuse you of *ism as soon as any statistical imbalance shows up in the results of your hiring process.

Comment by ec429 on Are long-term investments a good way to help the future? · 2018-05-03T23:58:31.811Z · score: 2 (1 votes) · LW · GW

Curiously, from a civilisational perspective it doesn't matter whether you invest the money or just stuff it in your mattress; either way you're creating capital relative to the alternative, which is to spend it now on some form of consumption. (Note that, in this view, charitable giving is a consumption good rather than a capital good. In practice, of course, it's more complicated, because Africans with mosquito nets are more likely to generate endogenous economic growth than Africans with malaria. But to first order, saving African lives is that-which-you-value, rather than a means of producing more of that-which-you-value; ergo, it's a consumption good.)

The key idea I think you might be missing is that capital is deferred consumption. That is, whenever we postpone our consumption from time X to time Y, we create capital that exists for the interval [X, Y). If we invest the money, this is obvious: the factory built with our investment is a capital good, and the profit we make depends on the stuff the factory makes being worth more than was invested to build it.

But suppose instead of investing the money, we just bury it for ten years. During those ten years, there is less money in circulation than otherwise, which drives up the value of money. I.e., it drives down the prices of goods. This means that all other holders of money now have access to slightly more capital, and can fund larger investments with the same number of banknotes. The capital is still created, and still produces a dividend, it's just that that dividend is now paid to people who are actively investing, instead of to us.
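As a minimal sketch of the price mechanism here, assuming a crude quantity-theory economy (MV = PQ with velocity and real output held fixed; all numbers hypothetical):

```python
# Crude quantity-theory sketch (MV = PQ with V and Q held fixed; all numbers
# hypothetical). Burying money shrinks M, so the price level P falls and each
# remaining banknote commands more real goods.

def price_level(money_supply, velocity=10.0, real_output=1000.0):
    return money_supply * velocity / real_output

m_before, buried = 100.0, 10.0
p_before = price_level(m_before)           # 100 * 10 / 1000 = 1.0
p_after = price_level(m_before - buried)   # 90 * 10 / 1000 = 0.9

# Real purchasing power of the 90 notes still circulating:
real_before = (m_before - buried) / p_before  # 90 units of goods
real_after = (m_before - buried) / p_after    # ~100 units of goods
print(p_before, p_after, real_before, real_after)
```

The buried 10 notes stop bidding for goods, so the remaining 90 notes buy what 100 used to; the purchasing power, and the capital it can fund, has shifted to the other money-holders.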

How can this be? Well, how did we earn the money in the first place? We produced some good or performed some service, and in return received only these green pieces of paper. If we just sit on them, they are sterile to us; but we still performed the service. Essentially, we are only paid for our work when we spend our wages on consumption goods (or services); it is in those goods that we are really paid. So burying our money for ten years is waiting ten years to present our claim upon society to be rewarded for the work we previously furnished. If that work was to build someone a factory, then for those ten years the factory is producing goods for a society that hasn't paid for the factory yet; if that work was to supply someone with consumption goods, then just follow the chain of substitutions / opportunity costs until you get to a capital good.

Of course, for this to work, someone has to be investing in actual capital goods; and the more people are doing so independently, the more loci of initiative are searching for profitable opportunities (and the fewer possibly-irrational trades you have to follow before hitting a capital good). So investing the money does slightly improve the system's efficiency at finding and making the best available investments. But it's the earning the money, not the investing it, that creates the value which, by deferral of consumption, becomes capital. (Thus, investing unearned income / ill-gotten gains does not create capital!)

Comment by ec429 on Eight political demands that I hope we can agree on · 2018-05-03T22:34:46.016Z · score: 8 (5 votes) · LW · GW

I don't know if I count as part of the "movement", but I can't agree on these demands, because they all assume that notions such as "public policy" and "government" are valid and legitimate.

Suppose we were to turn them round and write them as negative demands; 5, 6, and 8 all reduce to freedom of contract. 7 is covered by "no crimes, only torts" since the concept of a "victimless tort" is obviously meaningless. 1 and 2 are fundamentally just questions of governance procedure, and become a lot less odious when you're not forced to live under / pay for a policy you don't support chosen by broken systems of governance. 3 and 4 are demands that other people's money be spent, and allowing such things is a gift to Moloch (see also: Olsonian scramble, rent-seeking); if we rule that out, we're left with "we want to be able to give our money to $CAUSE instead of having it taxed to fund causes we don't support" (causes like #7, come to think of it).

So rather than 8 demands, one principle would seem to suffice: "The support of a plurality of irrational humans is no license to trespass upon individual liberties". Then again, nothing provides such a license; a superintelligent AGI doesn't have the ethical right to do so either. Both the elected government and the AGI, in practice, have the power to do so (in the former case it's because most people believe democracy legitimises government and will support it), but if might makes right then we may as well not bother with AI alignment research ;-)

Comment by ec429 on Nightmare of the Perfectly Principled · 2017-11-15T02:25:13.779Z · score: 3 (2 votes) · LW · GW

I think you're misusing the notion of "rule of law", possibly because of the Jewish-upbringing factors you mention. Economist Don Boudreaux would argue, after Hayek, that legislation and law are different things. My version of this, heavily influenced by David D. Friedman, is that law is really a collection of Schelling points, and legislation (the statute book) is one way (of many) in which new Schelling points can be created. This is far more obvious to someone who was brought up under the common-law system — and the US's legal tradition is, after all, derived mainly from English common law, which may help to explain why the US is not now a lawless hellscape.

Thus, your "Collaborator" is not really some kind of inconsistent or irrational person, just someone who follows the law even when it conflicts with legislation. And while that can lead to 'collaboration' with an unjust regime, it can also lead to civil disobedience and other forms of non-collaboration.

So while it may be fruitful to consider where people fall on your graph, I don't think your "Collaborators" fall in the upper left quadrant; I think you place them there because you misunderstand what ordered system such people are referring to when they talk about "the rule of law".

Comment by ec429 on Cached Thoughts · 2012-08-22T04:49:19.943Z · score: 3 (3 votes) · LW · GW

most of us probably won't be able to find much of a trunk build that we can agree on

I think you're wrong as a question of fact, but I love the way you've expressed yourself.

It's more like a non-monotonic DVCS; we may all have divergent head states, but almost every commit you have is replicated in millions of other people's thought caches.

Also, I don't think the system needs to be Byzantine fault tolerant; indeed we may do well to leave out authentication and error correction in exchange for a higher raw data rate, relying on Release Early Release Often to quash bugs as soon as they arise.

(Rationality as software development; it's an interesting model, but perhaps we shouldn't stretch the analogy too far)

Comment by ec429 on If a tree falls on Sleeping Beauty... · 2012-08-21T20:09:22.190Z · score: 2 (2 votes) · LW · GW

On the other hand, if you’re Dr. Evil and you’re in your moon base preparing to fire your giant laser at Washington, DC when you get a phone call from Austin “Omega” Powers

So, does this mean ata is going to write an Austin Powers: Superrational Man of Mysterious Answers fanfic?

Comment by ec429 on Natural Laws Are Descriptions, not Rules · 2012-08-14T22:37:26.534Z · score: 0 (0 votes) · LW · GW

How exactly are abstract, non-physical objects -- laws of nature, living in their "transcendent aerie" -- supposed to interact with physical stuff? What is the mechanism by which the constraint is applied? Could the laws of nature have been different, so that they forced electrons to attract one another?

I feel I should link to my post The Apparent Reality of Physics right now. To summarise: both the "descriptions" and "rules" views are wrong as they suppose there is something to be described or ruled. The (to me, obvious) dissolution is to state that a Universe is its rules.

Comment by ec429 on The Crackpot Offer · 2012-08-14T18:39:14.189Z · score: 2 (1 votes) · LW · GW

There is a further subtlety here. As I discussed in "Syntacticism", in Gödel's theorems number theory is in fact talking about "number theory", and we apply a metatheory to prove that "number theory is "number theory"", and think we've proved that number theory is "number theory". The answer I came to was to conclude that number theory isn't talking about anything (ie. ascription of semantics to mathematics does not reflect any underlying reality), it's just a set of symbols and rules for manipulating same, and that those symbols and rules together embody a Platonic object. Others may reach different conclusions.

Comment by ec429 on [link] Is Alu Life? · 2012-04-08T20:28:58.300Z · score: 0 (0 votes) · LW · GW

I don't believe it, but it sounds like it should be testable, and if it hasn't been tested I'd be somewhat interested in doing so. I believe there are standard methods of comparing legibility or readability of two versions of a text (although, IIRC, they tend to show no statistically significant difference between perfect typesetting and text that would make a typographer scream).

You're probably not the only one bothered by the colour scheme, though; historically, every colour scheme I've used on the various iterations of my website has bothered many people. The previous one was bright green on black :S

Comment by ec429 on [link] Is Alu Life? · 2012-04-08T20:23:24.298Z · score: 0 (0 votes) · LW · GW

That's interesting, because I would see a difference. Given the choice, I'd test it on the barren rock. However, I can't justify that, nor am I sure how much benefit I'd have to derive to be willing to blow up Eta Kudzunae.

Comment by ec429 on [link] Is Alu Life? · 2012-04-08T13:00:32.606Z · score: 0 (0 votes) · LW · GW

Agreed, and I think your changes alter the question I was trying to ask, which is not whether destroying Xenokudzu Planet would be absolutely unacceptable (as a rule, most things aren't), but whether we'd need a sufficiently good reason.

which has choked out all other life

I think the LCPW for you here is to suppose that this planet is only capable of supporting this xenokudzu, and no other kind of life. (Maybe the xenokudzu is plasma helices, and the 'planet' is actually a sun, and suppose for the sake of argument that that environment can't support sentient life)

So, more generally, let the gain (to non-xenokudzu utility) from destroying Xeno Planet tend to zero. Is there a point at which you choose not to destroy, or will any nonzero positive gain to sentient life justify wiping out Xeno Planet?

Comment by ec429 on Rationality Quotes April 2012 · 2012-04-08T02:04:04.945Z · score: 0 (0 votes) · LW · GW

Well, my source is Dr Bursill-Hall's History of Mathematics lectures at Cambridge; I presume his source is 'the literature'. Sorry I can't give you a better source than that.

Comment by ec429 on [link] Is Alu Life? · 2012-04-08T01:02:14.553Z · score: 4 (4 votes) · LW · GW

Hmm. I do understand that, but I still don't think it's relevant. I don't try to argue that Premise 1 is true (except in a throwaway parenthetical which I am considering retracting), rather I'm arguing that Premise 2 is true, and that consequently Premise 1 implies the conclusion ("transposons have ethical value") which in turn implies various things ranging from the disconcerting to the absurd. In fact I believed Premise 1 (albeit without great examination) until I learned about transposons, and now I doubt it (though I haven't rejected it so far; I'm cognitively marking it as "I'm confused about this"). That's why I felt there was something worth writing about: namely, that transposons expose the absurdity of an assumption that had previously been part of my moral theory, and by extension may perhaps be part of others'.

Edit: well, that's one reason I wrote the article. The other reason was to raise the questions in the hope of creating a discussion through which I might come to better understand the problem.

Further edit: actually, I'm not sure the first reason was my reason for writing the article; I think I was indeed (initially) arguing for Premise 1, and I have been trying to make excuses and pretend I'd never argued for it. Yet I still can't let go of Premise 1 completely. Thought experiment: imagine a planet with a xenobiology that only supports plant life - nothing sentient lives there or could do so - and there is (let us assume) no direct benefit to us to be derived from its existence. Would we think it acceptable to destroy that planet? I think not, yet the obvious "feature conferring ethical value on humans and chimps" would be sentience. I remain confused.

Comment by ec429 on [link] Is Alu Life? · 2012-04-07T22:43:42.668Z · score: 0 (4 votes) · LW · GW

My ethics were influenced a nonzero amount by reading Orson Scott Card. More to the point, OSC provided terminology which I felt was both useful and likely to be understood by my audience.

I now think that my use of the word "must" in the above-quoted passage was a mistake.

Comment by ec429 on [link] Is Alu Life? · 2012-04-07T22:30:41.273Z · score: 0 (4 votes) · LW · GW

Your comment is a very good argument against a position - but unfortunately not the position I hold. I may have poorly expressed my meaning; it's not strictly the definition of the English word 'life' that I care about, but rather the exploration of my utility function, and whether my preferences are consistent and coherent, or whether they make an arbitrary distinction between "life with moral status" (people, chimps, and kittens) and "life without moral status" (cockroaches, E. coli, and transposons).

Can you suggest a good way for me to explain this in the article itself?

Comment by ec429 on Rationality Quotes April 2012 · 2012-04-07T22:11:02.526Z · score: 0 (0 votes) · LW · GW

Sorry to have to tell you this, but Pythagoras of Samos probably didn't even exist. More generally, essentially everything you're likely to have read about the Pythagoreans (except for some of their wacky cultish beliefs about chickens) is false, especially the stuff about irrationals. The Pythagoreans were an Orphic cult, who (to the best of our knowledge) had no effect whatsoever on mainstream Greek mathematics or philosophy.

Comment by ec429 on So You Want to Save the World · 2011-12-31T18:05:05.317Z · score: 0 (0 votes) · LW · GW

Hmm, infinitary logic looks interesting (I'll read right through it later, but I'm not entirely sure it covers what I'm trying to do). As for Platonism, mathematical realism, and Tegmark, before discussing these things I'd like to check whether you've read http://lesswrong.com/r/discussion/lw/7r9/syntacticism/ setting out my position on the ontological status of mathematics, and http://lesswrong.com/lw/7rj/the_apparent_reality_of_physics/ on my version of Tegmark-like ideas? I'd rather not repeat all that bit by bit in conversation.

Comment by ec429 on So You Want to Save the World · 2011-12-28T21:21:17.924Z · score: 0 (0 votes) · LW · GW

The computer program 'holds the belief that' this way-powerful system exists; while it can't implement arbitrary transfinite proofs (because it doesn't have access to hypercomputation), it can still modify its own source code without losing a meta each time: it can prove its new source code will increase utility over its old, without its new source code losing proof-power (as would happen if it only 'believed' PA+n: after n provably-correct rewrites it would only believe PA, and not PA+1). Once you get down to just PA, you have a What The Tortoise Said To Achilles-type problem: just because you've proved it, why should you believe it's true?

The trick to making way-powerful systems is not to add more and more Con() or induction postulates - those are axioms. I'm adding transfinite inference rules. As well as all the inference rules like modus ponens, we have one saying something like "if I can construct a transfinite sequence of symbols, and map those symbols to syntactic string-rewrite operations, then the-result-of the corresponding sequence of rewrites is a valid production". Thus, for instance, w-inference is stronger than adding w layers of Con(), because it would take a proof of length at least w to use all w layers of Con().
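For comparison, the simplest rule of this shape is the classical ω-rule (standard notation; a single-step special case, not the full transfinite schema described above):

```latex
\frac{\;\vdash \varphi(0) \qquad \vdash \varphi(1) \qquad \vdash \varphi(2) \qquad \cdots\;}
     {\vdash \forall n\,\varphi(n)}
\qquad (\omega\text{-rule})
```

A single application has infinitely many premises, so proofs using it are no longer finite objects; this is the sense in which such a rule outruns any tower of Con() axioms that finite-length proofs could actually use.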

This is why I call it metasyntax; you're considering what would happen if you applied syntactic productions transfinitely many times.

I don't know, in detail, how to express the notion that the program should "trust" such a system, because I don't really know how a program can "trust" any system: I haven't ever worked with/on automated theorem provers, nor any kind of 'neat AI'; my AGI experiments to date have all been 'scruffy' (and I stopped doing them when I read EY on FAI, because if they were to succeed (which they obviously won't; my neural net did nothing but talk a load of unintelligible gibberish about brazil nuts and tetrahedrite) they wouldn't even know what human values were, let alone incorporate them into whatever kind of decision algorithm they ended up having).

I'm really as much discussing how human mathematicians can trust mathematics as I am how AIs can trust mathematics - when we have all that Gödel and Löb and Tarski stuff flying around, some people are tempted to say "Oh, mathematics can't really prove things, therefore not-Platonism and not-infinities", which I think is a mistake.

Comment by ec429 on So You Want to Save the World · 2011-12-28T19:00:08.056Z · score: 0 (0 votes) · LW · GW

Can you explain more formally what you mean by "proves that it itself exists"?

The fundamental principle of Syntacticism is that the derivations of a formal system are fully determined by the axioms and inference rules of that formal system. By proving that the ordinal kappa is a coherent concept, I prove that PA+kappa is too; thus the derivations of PA+kappa are fully determined and exist-in-Tegmark-space.

Actually it's not PA+kappa that's 'reflectively consistent'; it's an AI which uses PA+kappa as the basis of its trust in mathematics that's reflectively consistent, for no matter how many times it rewrites itself, nor how deeply iterated the metasyntax it uses to do the maths by which it decides how to rewrite itself, it retains just as much trust in the validity of mathematics as it did when it started. Attempting to achieve this more directly, by PA+self, runs into Löb's theorem.

Comment by ec429 on So You Want to Save the World · 2011-12-28T18:52:50.805Z · score: 1 (1 votes) · LW · GW

Well, I'm not exactly an expert either (though next term at uni I'm taking a course on Logic and Set Theory, which will help), but I'm pretty sure this isn't the same thing as proof-theoretic ordinals.

You see, proofs in formal systems are generally considered to be constrained to have finite length. What I'm trying to talk about here is the construction of metasyntaxes in which, if A1, A2, ... are valid derivations (indexed in a natural and canonical way by the finite ordinals), then Aw is a valid derivation for ordinals w smaller than some given ordinal. A nice way to think about this is, in (traditionally-modelled) PA the set of numbers contains the naturals, because for any natural number n, you can construct the n-th iterate of ⁺ (successor) and apply it to 0. However, the set of numbers doesn't contain w, because to obtain that by successor application, you'd have to construct the w-th iterate of ⁺, and in the usual metasyntax infinite iterates are not allowed.
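A throwaway computational sketch of the successor-iteration point (Python standing in for the formal metasyntax; the functions are illustrative, not part of PA):

```python
# Illustrative sketch: in (traditionally-modelled) PA, each natural n is the
# n-th iterate of successor applied to 0. Omega is not, since any iterate we
# can concretely construct in the usual (finitary) metasyntax is finite.

def succ(n):
    return n + 1

def iterate(f, n, x):
    """Apply f to x, n times (n a finite natural)."""
    for _ in range(n):
        x = f(x)
    return x

assert iterate(succ, 5, 0) == 5                            # 5 = 0⁺⁺⁺⁺⁺
assert all(iterate(succ, n, 0) == n for n in range(100))
# No finite n makes iterate(succ, n, 0) reach omega; that would require the
# w-th iterate of succ, which the usual metasyntax does not allow.
```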

Higher-order logics are often considered to talk about infinite proofs in lower-order logics (eg. every time you quantify over something infinite, you do something that would take infinite proving in a logic without quantifiers), but they do this in a semantic way, which I as a Syntacticist reject; I am doing it in a syntactic way, considering only the results of transfinitely iterated valid derivations in the low-order logic.

As far as I'm aware, there has not been a great deal of study of metasyntax. It seems to me that (under the Curry-Howard isomorphism) transfinite-iteration metasyntax corresponds to hypercomputation, which is possibly why my kappa is (almost certainly) much, much larger than the Church-Kleene ordinal which (according to Wikipedia, anyway) is a strict upper bound on proof-theoretic ordinals of theories. w₁CK is smaller even than w₁, so I don't see how it can be larger than kappa.

Comment by ec429 on The Apparent Reality of Physics · 2011-12-28T18:32:24.982Z · score: 0 (0 votes) · LW · GW

I think that's a very good summary indeed, in particular that the "unique non-ambiguous set of derivations" is what imbues the syntax with 'reality'.

Symbols are indeed not defined, but the only means we have of duck-typing symbols is to do so symbolically (a symbol S is an object supporting an equality operator = with other symbols). You mention Lisp; the best mental model of symbols is Lisp gensyms (which, again, are objects supporting only one operator, equality).

Conses of conses are indeed a common model of strings, but I'm not sure whether that matters - we're interested in the syntax itself considered abstractly, rather than any representation of the syntax. Since ad-hoc infinite regress is not allowed, we must take something as primal (just as formal mathematics takes the 'set' as primal and constructs everything from set theory) and that is what I do with syntax.

As mathematics starts with axioms about sets and inference rules about sets, so I begin with meta-axioms about syntax and meta-inference rules about syntax. (I then - somewhat reflexively - consider meta²-axioms, then transfinitely induct. It's a habit I've developed lately; a current project of mine is to work out how large a large ordinal kappa must be such that meta^kappa -syntax will prove the existence of ordinals larger than kappa, and then (by transfinite recursion shorter than kappa) prove the existence of [a given large cardinal, or the Von Neumann universe, or some other desired 'big' entity]. But that's a topic for another post, I fear)

Comment by ec429 on So You Want to Save the World · 2011-12-28T18:07:41.110Z · score: 5 (4 votes) · LW · GW

Now I see why TDT has been causing me unease - you're spot on that the 5-and-10 problem is Löbbish, but what's more important to me is that TDT in general tries to be reflective. Indeed, Eliezer on decision theory seems to be all about reflective consistency, and to me reflective consistency looks a lot like PA+Self.

A possible route to a solution (to the Löb problem Eliezer discusses in "Yudkowsky (2011a)") that I'd like to propose is as follows: we know how to construct P+1, P+2, ... P+w, etc. (forall Q, Q+1 = Q u {forall S, [](Q|-S)|-S}). We also know how to do transfinite ordinal induction... and we know that the supremum of all transfinite countable ordinals is the first uncountable ordinal, which corresponds to the cardinal aleph_1 (though ISTR Eliezer isn't happy with this sort of thing). So, P+Ω won't lose reflective trust for any countably-inducted proof, and our AI/DT will trust maths up to countable induction.
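In standard notation, that successor step reads (taking [] as the provability predicate of the stage it is applied to, and with limit stages naturally taken as unions):

```latex
Q+1 \;=\; Q \,\cup\, \bigl\{\, \Box_{Q}\ulcorner S\urcorner \rightarrow S \;:\; S \text{ a sentence} \,\bigr\},
\qquad
Q+\lambda \;=\; \bigcup_{\alpha<\lambda} (Q+\alpha) \quad \text{for limit } \lambda .
```

That is, each stage adds the reflection schema for the stage below; this is one standard reading of the ASCII formula, not the only possible one.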

However, it won't trust scary transfinite induction in general - for that, we need a suitably large cardinal, which I'll call kappa, and then P+kappa reflectively trusts any proof whose length is smaller than kappa; we may in fact be able to define a large cardinal property, of a kappa such that PA+(the initial ordinal of kappa) can prove the existence of cardinals as large as kappa. Such a large cardinal may be too strong for existing set theories (in fact, the reason I chose the letter kappa is because my hunch is that Reinhardt cardinals would do the trick, and they're inconsistent with ZFC). Nonetheless, if we can obtain such a large cardinal, we have a reflectively consistent system without self-reference: PA+w_kappa doesn't actually prove itself consistent, but it does prove kappa to exist and thus proves that it itself exists, which to a Syntacticist is good enough (since formal systems are fully determined).

Comment by ec429 on The Apparent Reality of Physics · 2011-12-28T02:46:54.164Z · score: 0 (0 votes) · LW · GW

Oh, I'm willing to admit variously infinite numbers of applications of the rules... that's why transfinite induction doesn't bother me in the slightest.

But, my objection to the existence of abstract points is: what's the definition of a point? It's defined by what it does, by duck-typing. For instance, a point in R² is an ordered pair of reals. Now, you could say "an ordered pair (x,y) is the set {x,{x,y}}", but that's silly, that's not what an ordered pair is, it's just a construction that exhibits the required behaviour: namely, a constructor from two input values, and an equality axiom "(a,b)==(c,d) iff a==c and b==d". Yet, from a formal perspective at least, there are many models of those axiomata, and it's absurd to claim that any one of those is what a point "is" - far more sensible to say that the point "is" its axiomata. Since those axiomata essentially consist of a list of valid string-rewriting rules (like (a,b)==(c,d) |- a==c), they are directly and explicitly syntactic.
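A sketch of the duck-typing point, with Python standing in for the formal system (both classes are hypothetical models, not claims about what a pair 'is'): the Kuratowski construction and a plain record both satisfy the pair axiom, so the axiom, not either model, is the point.

```python
# Two models of the ordered-pair axiom "(a,b)==(c,d) iff a==c and b==d".
# Neither model is what a pair 'is'; each merely exhibits the behaviour.

class KuratowskiPair:
    """Set-theoretic model: (x, y) := {x, {x, y}}."""
    def __init__(self, x, y):
        self.rep = frozenset([x, frozenset([x, y])])
    def __eq__(self, other):
        return self.rep == other.rep

class RecordPair:
    """Plain-record model: just store the two components."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

# Both models satisfy the same equality axiom:
assert KuratowskiPair(1, 2) == KuratowskiPair(1, 2)
assert not (KuratowskiPair(1, 2) == KuratowskiPair(2, 1))
assert RecordPair(1, 2) == RecordPair(1, 2)
assert not (RecordPair(1, 2) == RecordPair(2, 1))
```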

Perhaps, indeed, there is a system more fundamental to mathematics than syntactics - but given that the classes of formal languages even over finite strings are "variously infinite" (since language classes are equivalent to computability classes, by something Curry-Howardesque), it seems to me that, by accepting variously infinite strings and running-times, one should find that all mathematical systems are inherently syntactic in nature.

Sadly this is difficult to prove, as all our existing formal methods are themselves explicitly syntactic and thus anything we can express formally by current means, we can express as syntax. If materialistic and mechanistic ideas about the nature of consciousness are valid, then in fact any mathematics conceivable by human thought is susceptible to syntactic interpretation (for, ultimately, there exists a "validity" predicate over mathematical deductions, and assuming that validity predicate is constant in all factors other than the mathematical deduction itself (which assumption I believe to hold, as I am a Platonist), that predicate has a syntactic expression, though possibly one derived via the physics of the brain). This does not, however, rule out the possibility that there are things we might want to call 'formal systems' which are not syntactic in nature. It is my belief - and nothing more than that - that such things do not exist.

Comment by ec429 on The Apparent Reality of Physics · 2011-12-18T18:10:52.312Z · score: 0 (0 votes) · LW · GW

The locus exists, as a mathematical object (it's the string "{x \in R²: |x|=r}", not the set {x \in R² : |x|=r}). The "circle" on the other hand is a collection of points. You can apply syntactic (ie. mathematical) operators to a mathematical object; you can't apply syntactic operators to a collection of points. It is syntactic systems and their productions (ie. mathematical systems and their strings) which exist.

Comment by ec429 on The mathematical universe: the map that is the territory · 2011-12-06T21:01:51.435Z · score: 0 (0 votes) · LW · GW

I have also (disappointingly/validatingly) thought of this and then read Tegmark. (It's even more disappointing/validating than that, though, since as well as Tegmark, you appear to have invented Syntacticism. You even have all my arguments, like subverting the simulation hypothesis and talking about 'closure'). However, I have one more thing to add, which may answer the problem of regularity. That one thing is what I call the 'causality manifold': Obviously by simulating a universe we have no causal effect upon it (if we are assuming the mathematical universe hypothesis); but it has a causal effect upon us, because it defines the results of our computation. I explore this theme somewhat in The Apparent Reality of Physics, a footnote to which mentions the problem of consistency when you have a closed loop of universes, and its putative solvability by loop unfolding / closure. Considering the ensemble of mathematical structures with the natural topology, we see that locally it's either a graph or a manifold (almost everywhere), and it has this flow defined by the causal relations (with the flow in the opposite direction to simulation), which we can consider as being a flow of subjective probability (with some equilibrium state). Of course it contains both regular and irregular universes (henceforth RUs and IUs), because adding a delta function to a differential equation gives you, simply, a different DE (well, that's 'morally' why it's true; it's more complicated in practice because not all mathematical structures are DEs; but any continuous mathematical structure can be continuously corrupted). IUs typically cannot simulate RUs, because any simulation is going to keep hitting the delta functions and being corrupted; RUs, on the other hand, can simulate both other RUs and IUs (a cosmic ray can turn your RU simulation into an IU simulation).
Consequently, subjective probability flows from {IUs} to {RUs} much more strongly than the other way, so the equilibrium has most subjective probability on RUs. Thus, anthropics and cake for everyone :)

I should add that I haven't yet been able to mathematically formalise the above argument, because I haven't yet worked out the correct definitions/characterisation of the 'causality manifold' (which is, incidentally, not a manifold), and it's possible that the small probability of an IU simulating a RU screws things up, and that we should (perhaps) expect to find ourselves in a Universe with some (say) Poisson-distributed degree of irregularity. Or something like that. But, at least it does allow for a mathematical universe in which anthropic experience can actually be given a probability distribution.

Comment by ec429 on Counterfactual Mugging · 2011-12-05T22:30:29.226Z · score: 1 (1 votes) · LW · GW

Yes, it still works, because of the way the subjective probability flow on Tegmark-space works. (Think of it like PageRank, and remember that the s.p. flows from the simulated to the simulator)

It is technically possible that the differences between how much the two Universes simulate each other, when combined with differences in how much they are simulated by other Universes, can cause the coupling between the two not to be strong enough to override some other couplings, with the result that the s.p. expectation of "giving Omega the $100" is negative. However, under my current state of logical uncertainty about the couplings, that outcome is rather unlikely, so taking a further expectation over my guesses of how likely various couplings are, the deal is still a good one.

Actually, in my own thinking I no longer call it "Tegmark-space", instead I call it the "Causality Manifold" and I'm working on trying to find a formal mathematical expression of how causal loop unfolding can work in a continuous context. Also, I'm no longer worried about the "purer and more elegant version" of syntacticism, because today I worked out how to explain the subjective favouring of regular universes (over irregular ones, which are much more numerous). One thing that does worry me, though, is that every possible Causality Manifold is also an element of the CM, which means either stupidly large cardinal axioms or some kind of variant of the "No Gödels" argument from Syntacticism (the article).

Comment by ec429 on Counterfactual Mugging · 2011-12-04T15:48:43.227Z · score: 2 (2 votes) · LW · GW

Under my syntacticist cosmology, which is a kind of Tegmarkian/Almondian crossover (with measure flowing along the seemingly 'backward' causal relations), the answer becomes trivially "yes, give Omega the $100" because counterfactual-me exists. In fact, since this-Omega simulates counterfactual-me and counterfactual-Omega simulates this-me, the (backwards) flow of measure ensures that the subjective probabilities of finding myself in real-me and counterfactual-me must be fairly close together; consequently this remains my decision even in the Almondian variety. The purer and more elegant version of syntacticism doesn't place a measure on the Tegmark-space at all, but that makes it difficult to explain the regularity of our universe - without a probability distribution on Tegmark-space, you can't even mathematically approach anthropics. However, in that version counterfactual-me 'exists to the same extent that I do', and so again the answer is trivially "give Omega the $100".

Counterfactual problems can be solved in general by taking one's utilitarian summation over all of syntax-space rather than merely one's own Universe/hubble bubble/Everett branch. The outstanding problem is whether syntax-space should have a measure and if so what its nature is (and whether this measure can be computed).

Comment by ec429 on The Apparent Reality of Physics · 2011-12-04T15:45:52.249Z · score: 0 (0 votes) · LW · GW

Indeed. Circles are merely a map-tool geometers use to understand the underlying territory of Euclidean geometry, which is precisely real vector spaces (which can be studied axiomatically without ever using the word 'circle'). So, circles don't exist, but {x \in R² : |x|=r} does. (Plane geometry is one model of the formal system)

Comment by ec429 on Syntacticism · 2011-09-24T19:25:32.404Z · score: 0 (0 votes) · LW · GW

It doesn't seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator).

Except that if you examine the workings of a calculator that does agree with us, you're much much less likely to find a wiring fault (that is, that it's implementing a different algorithm).

if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [The human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.

If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case "Has been asked 2+2", which overrides the usual algorithm and just outputs 4... would the human then claim they'd "made it implement arithmetic"? I don't think so.

I'll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense) - we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that "these {foo} are the concepts which can be compressed by thus-and-such algorithm" (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven't heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
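The calculator-versus-Artificial-Arithmetician contrast can be made concrete with a small sketch (the function names and sizes here are purely illustrative):

```python
# A sketch of the compression claim. A general adder is a constant-size
# rule set; a lookup-table "Artificial Arithmetician" covering numbers
# up to N must store O(N^2) entries.
import math

def general_add(a, b):
    # Positional addition: the same small rule set works for inputs of any size.
    return a + b

def build_lookup_arithmetician(n):
    # Memorises every answer instead of compressing the pattern.
    return {(a, b): a + b for a in range(n) for b in range(n)}

N = 100
table = build_lookup_arithmetician(N)
assert table[(42, 17)] == general_add(42, 17) == 59
assert len(table) == N * N  # storage grows quadratically with the range covered
# The general rule also handles inputs the table has never seen:
assert general_add(N + 1, N + 2) == 2 * N + 3

# Writing n positionally takes O(log n) symbols, versus O(n) in unary:
n = 10**6
assert len(str(n)) <= math.ceil(math.log10(n)) + 1  # 7 digits
assert len("1" * 10) == 10                          # unary: n symbols for n
```

The point is only that the positional rule set compresses; by the counting argument in the comment, most concepts admit no such compression.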

I'll look over it, but given what you say here I'm not confident that it won't be an attempt at a resurrection of Platonism.

What's wrong with resurrecting (or rather, reformulating) Platonism? Although, it's more a Platonic Formalism than straight Platonism.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T19:07:48.399Z · score: 1 (1 votes) · LW · GW

I don't understand the meaning of the word "symbols" in the abstract, without a brain to interpret them with and map them onto reality.

Think in terms of LISP gensyms - objects which themselves support only one operation, ==. The only thing we can say about (rg45t) is that it's the same as (rg45t) but not the same as (2qox), whereas we think we know what (forall) means (in the game of set theory) - in fact the only reason (forall) has a meaning is because some of our symbol-manipulating rules mention it.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T19:04:31.252Z · score: 0 (0 votes) · LW · GW

I think ec429 “sides” with the first intuition, and you tend more towards the second. I just noticed I am confused.

No, I'd say nearer the second - the mathematical expression of the world of P2 "exists" independently of us, and has just as much "existence" as we do. Rocks and trees and leptons, and their equivalents in P2-world, however, don't "exist"; only their corresponding 'pieces of math' flowing through the equations can be said to "exist".

Comment by ec429 on Syntacticism · 2011-09-24T07:12:55.343Z · score: 0 (0 votes) · LW · GW

What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been 'set up' by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as "the correct answer")

But note that there are also patterns of light which we would interpret as "the wrong answer". If arithmetic is implementation-dependent, isn't it a bit odd that whenever we build a calculator that outputs "5" for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.

Are we then to take computation as "more fundamental" than physics?

Yes! (So long as we define computation as "abstract manipulation-rules on syntactic tokens", and don't make any condition about the computation's having been implemented on any substrate.)

Comment by ec429 on Syntacticism · 2011-09-24T07:05:39.872Z · score: 0 (0 votes) · LW · GW

When you look at the statement 2+2=4 you think some form of "hey, that's true". When I look at the statement, I also think some form of "hey, that's true". We can then talk and both come to our own unique conclusion that the other person agrees with us.

I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it's not reflective. Putting two pebbles next to two pebbles also agrees.

Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens - and that this implies there is something to be converged upon. At the least, it is 'true' that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like "1=0").

These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.

Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The 'physical existence hypothesis' has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the 'mathematical realism hypothesis'. Thus by Occam, I conclude mathematical realism and no physical existence.

I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it's 8AM, I've been up all night, and I need some sleep :(

Comment by ec429 on Syntacticism · 2011-09-24T06:51:21.185Z · score: 0 (2 votes) · LW · GW

But you don't have to have unlimited resources, you just have to have X large but finite amount of resources, and you don't know how big X is.

Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited - because you don't know how big X is. But you still know it's finite. "Feasibly computable" is not the same thing as "computable". "In principle" is, in principle, well defined. "In practice" is not well-defined, because as soon as you have X resources, it becomes possible "in practice" for you to find the proof.

I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such might be "unfeasible" (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.

Comment by ec429 on Syntacticism · 2011-09-24T04:51:38.827Z · score: 1 (1 votes) · LW · GW

It's P(I will find a proof in time t) that is asking for the probability of a definite event. It's not that evaluating this number at large t is so problematic, it's that it doesn't capture what people usually mean by "provable in principle."

Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose that you can determine whether a given sequence constitutes a proof, in finite time (not necessarily bounded). Then construct an ordering on sequences (this can be done: the set of sequences is the union, over n in the countable set N, of the finitely many sequences of length n, and is thus countable), and apply the determination procedure to each one in turn. Then, if a proof exists, you will find it in finite time by this method; thus P(you will find a proof by time t) tends to 1 as t->infty if a proof exists, and is constant 0 forall t if no proof exists.

There's an obvious problem: we can't determine with P=1 that a given sequence constitutes a proof (or does not do so). But suppose that becoming 1 bit more sure, when not certain, of the proof-status of a given sequence can always be done in finite time. Then learn 1 bit about sequence 1; then 1 bit each about sequences 1 and 2; then 1 bit each about sequences 1, 2, and 3; and so on. Then for any sequence, any desired level of certainty is obtained in finite time.

If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
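The enumerate-and-check search described above can be sketched as follows, with a trivial predicate standing in for a real finite-time proof-validity check:

```python
from itertools import count, product

# Toy sketch of the diagonal search: enumerate every finite string over
# a finite alphabet and test each in turn. The "checker" here is a
# stand-in assumption, not a real proof checker.

ALPHABET = "abc"

def all_strings():
    # Length 0, then 1, then 2, ... - a countable union of finite sets,
    # so every finite string is reached after finitely many steps.
    for n in count(0):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def search(is_proof, limit=100000):
    # If any string passes the checker, this finds one in finite time;
    # the limit is only there to keep the sketch terminating.
    for i, s in enumerate(all_strings()):
        if i >= limit:
            return None
        if is_proof(s):
            return s

assert search(lambda s: s == "cab") == "cab"   # found in finite time
assert search(lambda s: False) is None          # no "proof" exists
```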

Comment by ec429 on Syntacticism · 2011-09-24T04:38:54.042Z · score: 0 (2 votes) · LW · GW

One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) > P(axioms) or P($n$ digit is odd) > P(axioms).

I don't think that's valid - even if I know (P=1) that there is a fact-of-the-matter about whether the nth digit is even, if I don't have any information causally determined by whether the nth digit is even then I assign P(even) = P(odd) = ½. If I instead only believe with P=P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ * P(axioms). Axioms ⇏ even. Axioms ⇒ (even or odd). P(axioms) = P(even or odd) = P(even)+P(odd) = (½ + ½) * P(axioms) = P(axioms), no problem. "A fact-of-the-matter exists for statement A" is (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).

Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you're confusing map and territory.

Comment by ec429 on Syntacticism · 2011-09-24T04:07:50.687Z · score: 1 (1 votes) · LW · GW

I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.

I wasn't suggesting you were trying to get on my nerves. I just think we're talking past each other.

"A proof exists" is a much murkier statement and it is much more difficult to discuss its probability.

As a first approximation, what's wrong with "\lim_{t -> \infty} P(I can find a proof in time t)"?

Also, I don't see why the prior has to be oracular; what's wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½? But then if the digit is X, then surely a proof exists that it is X (because, in principle, the digit can be computed in finitely many steps); it must be some X in [[:digit:]], so if it is even a proof exists that it is even; otherwise (sharp swerve) one does not, and P=½. Not sure about that sharp swerve; if I condition all my probabilities on |arithmetic is consistent) then it's ok. But then, assuming I actually need to do so, the probabilities would be different if conditioned on |arithmetic is inconsistent), and thus by finding a proof, you find evidence for or against the assertion that arithmetic is consistent. But things you can find evidence on, exist! (They are the sheep that determine your pebbles.) So where did I go wrong? (Did I slip a meta-level somewhere? It's possible; I was juggling them a bit.)

Comment by ec429 on Syntacticism · 2011-09-24T03:57:54.865Z · score: 0 (2 votes) · LW · GW

lengthiness is not expected to be the only obstacle to finding a proof

True; stick a ceteris paribus in there somewhere.

You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources.

Not so; I am reasoning about reality in terms of what it is theoretically possible we might conclude with finite resources. It is just that enumerating the collection of things it is theoretically possible we might conclude with finite resources requires infinite resources (and may not be possible even then). Fortunately I do not require an enumeration of this collection.

I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.

So either things that are unfeasible to prove can nonetheless be true, or nothing is true. So why does feasibility matter again?

P(I will prove the negation of your theorem in fewer than m+1 minutes) = p

No, it is > p. P(I will prove 1=0 in fewer than m+1 minutes) = p + epsilon. P(I will prove 1+1=2 in fewer than m+1 minutes) = nearly 1. This is because you don't know whether my proof was correct.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T03:41:18.485Z · score: 1 (1 votes) · LW · GW

Paul Almond

To Minds, Substrate, Measure and Value Part 2: Extra Information About Substrate Dependence, I make his Objection 9 and am not satisfied with his answer to it. I believe there is a directed graph (possibly cyclic) of mathematical structures containing simulations of other mathematical structures (where the causal relation proceeds from the simulated to the simulator), and I suspect that if we treat this graph as a Markov chain and find its invariant distribution, that this might then give us a statistical measure of the probability of being in each structure, without having to have a concept of a physical substrate which all other substrates eventually reduce to.
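The invariant-distribution idea can be sketched by power iteration on a tiny made-up simulation graph (the three structures and every edge weight below are illustrative assumptions, not derived from anything):

```python
# P[i][j] is the fraction of structure i's subjective probability that
# flows to structure j per step (flowing from simulated to simulator).
# All weights here are invented purely for illustration.
P = [
    [0.1, 0.6, 0.3],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
]
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)  # row-stochastic

def step(dist, P):
    # One step of the chain: push probability mass along the edges.
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Power iteration: iterate until the distribution stops changing --
# the invariant (stationary) distribution of the chain.
dist = [1 / 3] * 3
for _ in range(1000):
    dist = step(dist, P)

assert all(abs(a - b) < 1e-9 for a, b in zip(dist, step(dist, P)))  # fixed point
assert abs(sum(dist) - 1.0) < 1e-9
```

This is essentially PageRank without the damping term; structures that are simulated by well-visited simulators end up with more mass at equilibrium.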

However, I'm not sure that any of this is essential to my OP claims; the measure I assign to structures for purposes of forecasting the future is a property of my map, not of the territory, and there needn't be a territorial measure of 'realness' attached to each structure, any more than there need be a boolean property of 'realness' attached to each structure. I note, though, that, being unable to explain why I find myself in an Everett branch in which experiments have confirmed the Born rule (even though in many worlds (without mangling) there should be a 'me' in a branch in which experiments have consistently confirmed the Equal Probabilities rule), I clearly do not have an intuitive grasp of probabilities in a possible-worlds or modal-realistic universe, so I may well be barking up the wrong giraffe.

EDIT: In part 3, Almond characterises the Strong AI Hypothesis thus:

A mind exists when the appropriate algorithm is being run on a physical system.

I characterise my own position on minds thus:

A mind exists when there is an appropriate algorithm, whether that algorithm is being run on a physical system or not. If the existence-of-mind inheres in the interpretative algorithm rather than the algorithm-that-might-be-run, then the interpretative algorithm is the appropriate one; but the mind still exists, whether the interpretative algorithm is being run on a physical system or not.

This is because the idea of a 'physical system' is an attachment to physical realism which I reject in the OP.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T02:39:19.215Z · score: 0 (0 votes) · LW · GW

But then, how do you determine whether information exists-in-the-universe at all? Does the number 2 exist-in-the-universe? (I can pick up 2 pebbles, so I'm guessing 'yes'.) Does the number 3^^^3 exist-in-the-universe? Does the number N = total count of particles in the universe exist-in-the-universe? (I'm guessing 'yes', because it's represented by the universe.) Does N+1 exist-in-the-universe? (After all, I can consider {particles in the universe} union {{particles in the universe}}, with cardinality N+1) If you allow encodings other than unary, let N = largest number which can be represented using all the particles in the universe. But I can coherently talk about N+1, because I don't need to know the value of a number to do arithmetic on it (if N is even, then N+1 is odd, even though I can't represent the value of N+1). Does the set of natural numbers exist-in-the-universe? If so, I can induct - and therefore, by induction on induction itself, I claim I can perform transfinite induction (aka 'scary dots') in which case the first uncountable ordinal exists-in-the-universe, which is something I'd quite like to conclude.
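The union construction in that paragraph can be checked mechanically (a toy sketch with three stand-in "particles"):

```python
# {particles} union {{particles}} has cardinality N+1, because the set
# of all particles is not itself a particle. The particle names are
# placeholders, obviously.
particles = frozenset({"p1", "p2", "p3"})   # stand-in universe, N = 3
bigger = particles | {particles}            # N + 1 = 4 elements

assert len(bigger) == len(particles) + 1
assert particles in bigger          # the new element is the old set itself
assert particles not in particles   # ...which was not already a member
```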

So where does it stop being a heap?

Comment by ec429 on Syntacticism · 2011-09-24T02:31:31.480Z · score: -1 (1 votes) · LW · GW

I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that "Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems" was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount - from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it - a step I have replaced - so does that not screen off my conclusion from that step - so why are you still arguing with me?

Comment by ec429 on Syntacticism · 2011-09-24T02:10:20.153Z · score: 2 (2 votes) · LW · GW

Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.

I think a meta- has gone missing here: I can't be certain that others tend to reach the same truth (rather than funny hats), and I can't be certain that 2+2=4. I can't even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).

That "truth" in the map doesn't imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude "5!", then you actually implemented the computation "2+2 with such-and-such a cosmic ray bitflip".

I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.

Comment by ec429 on Syntacticism · 2011-09-24T01:55:24.935Z · score: 0 (0 votes) · LW · GW

A positive but minuscule amount.

Right - but if there were no 'fact-of-the-matter' as to whether a proof exists, why should it be non-zero at all?

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T00:53:12.376Z · score: 0 (0 votes) · LW · GW

we find it hard to taboo words that are truly about the fundamentals of our universe, such as 'causality' or 'reality' or 'existence' or 'subjective experience'.

I tabooed "exist", above, by what I think it means. You think 'existence' is fundamental, but you've not given me enough of a definition for me to understand your arguments that use it as an untabooable word.

words like 'mathematical equations'

I'd say that (or rather 'mathematics') is just 'the orderly manipulations of symbols'. Or, as I prefer to phrase it, 'symbol games'.

'correspond to concepts in the material universe in order to predict happenings in said material universe'

That's applied mathematics (or, perhaps, physics), an entirely different beast with an entirely different epistemic status.

Why don't you taboo the words "mathematics" and "equations" first, and see if your argument still makes any sense

Manipulations of symbols according to formal rules are the ontological basis, and our perception of "physical reality" results merely from our status as collections of symbols in the abstract Platonic realm that defines the convergent results of those manipulations, "existence" being merely how the algorithm feels from inside.

Yup, still makes sense to me!

Comment by ec429 on Syntacticism · 2011-09-24T00:26:35.817Z · score: 1 (3 votes) · LW · GW

But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof. But saying that things are true only by virtue of their proof being feasible... is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?

Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards. (If it doesn't, I'd be very surprised.) Consequently, there is something common.

Similarly, no matter how low your prior probability for "PA is consistent", so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T00:16:18.849Z · score: 0 (0 votes) · LW · GW

Ok, now taboo your uses of "reality" and "preexisted" in the above comment, because I can't conceive of meanings of those words in which your comment makes sense.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T00:14:24.820Z · score: 0 (0 votes) · LW · GW

Surely can't be exactly what you mean, as exists(our Universe) and ¬exists(everything else) seems coherent if rather unlikely

I would dispute this, on the grounds that my deductions in formal systems come from somewhere that has a causal relation to my brain - the formal system causes me to be more likely to deduce the things which are valid deductions than the things that aren't. So, if I 'exist', I maintain that the formal systems have to 'exist' too, unless you're happy with 'existing' things being causally influenced by 'non-existing' things - in which case there's not a lot of point in asserting that ¬exists(infinite sets). A definition of 'exists' which doesn't satisfy my coherence requirements is, I am attempting to argue, simply a means of sneaking in connotations.

Comment by ec429 on The Apparent Reality of Physics · 2011-09-24T00:08:10.098Z · score: 1 (1 votes) · LW · GW

So if you've ever read Probability Theory, by E.T. Jaynes

I haven't; I probably should.

the position that, in order to make sense when applied to the real world, infinite things have to behave like limits of finite things.

Is this "limits" in the sense of analysis (epsilon-delta limits), or is it "limit points" (like ω)? If the former, then that position involves not believing that arithmetic makes sense when applied to the real world. If the latter, then the position doesn't seem different from what most mathematicians believe, because allowing limit points gets you transfinite induction... but then, as I intend to show, that gets you the first uncountable ordinal, by the power of 'scary dots'. So... either EY doesn't believe arithmetic applies to the real world, or EY doesn't know the logical consequences of his beliefs, or EY doesn't believe the above position, or I've made an error. Of course, at this point the last of those is rather likely, which is exactly why I want to formalise my argument and lay it out as coherently as possible.

Also, if I can't say "exists", how come you can talk about "the real world"? Double standards if you ask me ;)