Comments

Comment by Kutta on Godel in second-order logic? · 2020-07-27T09:56:29.778Z · LW · GW

Yes. To expand a bit, in fact the straightforward way to show that second-order arithmetic isn't complete in the first sense is by using the Gödel sentence G.

G says via an encoding that G is not provable in second-order arithmetic. Since the only model (up to isomorphism) is the model with the standard natural numbers, an internal statement which talks about encoded proofs is interpreted in the semantics as a statement which talks about actual proof objects of second-order arithmetic. This is in contrast to first-order arithmetic where we can interpret an internal statement about encoded proofs as ranging over nonstandard numbers as well, and such numbers do not encode actual proof objects.

Therefore, when we interpret second-order G, we always get the semantic statement "G is not provable in second-order arithmetic". From this and the soundness of the proof system, it follows that G is not provable. Hence, G holds in all models (recall that there is just one model up to isomorphism), but is not provable.

Comment by Kutta on Godel in second-order logic? · 2020-07-26T13:20:48.617Z · LW · GW

It is important to distinguish the sense of "complete" in Gödel's completeness theorem from the sense in the incompleteness theorems, because they are not the same.

The first one presupposes two things: a) a syntax for a logical language, which also includes a syntax for proofs, and b) a notion of model. A combination of a) and b) is then complete if every sentence which holds in all models is also syntactically provable.

The second one can, for our purposes, be summarized as follows: a syntax for a logical language is complete if for every sentence A, either A or its negation is provable. This sense of completeness does not mention models or semantics!

The main point in Gödel's first incompleteness theorem is the ability to internally represent the syntax and proof theory of the language. In the case of first-order PA, all syntactic objects appear as certain finitely deep and finitely branching trees, and the job is to use arithmetic operations and properties to encode various finitary trees as just unary branching trees, i.e. the natural numbers. The syntax of second-order arithmetic does not differ in a relevant way; it is also just finitary trees, and since second-order arithmetic is a strict extension of first-order arithmetic, analogous internal encoding works in it as well.
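The tree-to-number idea can be sketched in Python. The pairing function below (Cantor pairing) and the tree representation are my own illustrative choices, not the specific arithmetization used in PA:

```python
def pair(a: int, b: int) -> int:
    """Cantor pairing: a bijection between pairs of naturals and naturals."""
    return (a + b) * (a + b + 1) // 2 + b

def encode(tree) -> int:
    """Encode a finitely branching tree, given as (label, [children]),
    into a single natural number."""
    label, children = tree
    code = 0  # the empty child list encodes as 0
    for child in reversed(children):
        # +1 distinguishes a nonempty list from the empty one,
        # so the encoding stays injective
        code = pair(encode(child), code) + 1
    return pair(label, code)
```

Because `pair` is a bijection, distinct trees always receive distinct codes, which is all the internalization argument needs: every syntactic object (formula, proof) becomes one natural number, and syntactic operations become arithmetic operations on codes.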

That second-order logic "doesn't have a computable deduction system" sounds false to me. It certainly does not have the completeness property (in the first sense) with respect to the standard notion of model, precisely because of Gödel's incompleteness + ruling out nonstandard semantic natural numbers. But second-order logic still has finitary, recursively enumerable syntax and proofs, with decidable proof validation.

Comment by Kutta on A quick sketch on how the Curry-Howard Isomorphism kinda appears to connect Algorithmic Information Theory with ordinal logics · 2015-04-24T11:23:09.891Z · LW · GW

I'd like to ask a not-too-closely related question, if you don't mind.

A Curry-Howard analogue of Gödel's Second Incompleteness Theorem is the statement "no total language can embed its own interpreter"; see the classic post by Conor McBride. But Conor also says (and it's quite obviously true) that total languages can still embed their coinductive interpreters, for example one that returns in the partiality monad.

So, my question is: what is the logical interpretation of a coinductive self-interpreter? I feel I'm not well-versed enough in mathematical logic for this. I imagine it would have a type like "forall (A : 'Type) -> 'Term A -> Partiality (Interp A)", where 'Type and 'Term denote types and terms in the object language and "Interp" returns a type in the meta-language. But what does "Partiality" mean logically, and is it anyhow related to the Löbstacle?
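To make the question concrete, here is a rough Python rendering of the partiality (delay) monad, with thunks standing in for coinduction; all the names here are mine and purely illustrative, not anyone's actual formalization:

```python
class Now:
    """A computation that has finished with a value."""
    def __init__(self, value):
        self.value = value

class Later:
    """A computation that takes at least one more step;
    the thunk yields the rest of it."""
    def __init__(self, thunk):
        self.thunk = thunk  # () -> Now | Later

def run(d, fuel):
    """Unfold a delayed computation for at most `fuel` steps;
    return None if it still hasn't finished."""
    for _ in range(fuel):
        if isinstance(d, Now):
            return d.value
        d = d.thunk()
    return d.value if isinstance(d, Now) else None

# A terminating program bottoms out in a Now after finitely many steps:
def countdown(n):
    return Now("done") if n == 0 else Later(lambda: countdown(n - 1))

# A looping program is an infinite chain of Laters -- yet each
# individual step is total, which is the whole point:
def spin():
    return Later(spin)
```

An interpreter with result type `Partiality (Interp A)` can honestly describe a possibly non-terminating object program while remaining total itself, because producing one more `Later` constructor is always a terminating act.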

Comment by Kutta on An Introduction to Löb's Theorem in MIRI Research · 2015-03-24T17:14:48.360Z · LW · GW

You're right, I mixed it up. Edited the comment.

Comment by Kutta on An Introduction to Löb's Theorem in MIRI Research · 2015-03-24T17:02:07.690Z · LW · GW

Suppose "mathemathics would never prove a contradiction". We can write it out as ¬Provable(⊥). This is logically equivalent to Provable(⊥) → ⊥, and it also implies Provable(Provable(⊥) → ⊥) by the rules of provability. But Löb's theorem expresses ∀ A (Provable(Provable(A) → A) → Provable(A)), which we can instantiate to Provable(Provable(⊥) → ⊥)→ Provable(⊥), and now we can apply modus ponens and our assumption to get a Provable(⊥).

Comment by Kutta on Preferences without Existence · 2014-02-10T00:46:24.939Z · LW · GW

To clarify my point, I meant that Solomonoff induction can justify caring less about some agents (and I'm largely aware of the scheme you described), but simultaneously rejecting Solomonoff and caring less about agents running on more complex physics is not justified.

Comment by Kutta on L-zombies! (L-zombies?) · 2014-02-09T16:44:21.169Z · LW · GW

The unsimulated life is not worth living.

-- Socrates.

Comment by Kutta on Preferences without Existence · 2014-02-09T16:34:50.775Z · LW · GW

You conflate two very different things here, as I see it.

First, there are the preferences for simpler physical laws or simpler mathematical constructions. I don't doubt that they are real amongst humans; after all, there is an evolutionary advantage to using simpler models, ceteris paribus, since they are easier to memorize and easier to reason about. Such evolved preferences probably contribute to a mathematician's sense of elegance.

Second, there are preferences about the concrete evolutionarily relevant environment and the relevant agents in it. Naturally, this includes our fellow humans. Note here that we might also care about animals, uploads, AIs or aliens because of our evolved preferences and intuitions regarding humans. Of course, we don't care about aliens because of a direct evolutionary reason. Rather, we simply execute the adaptations that underlie our intuitions. For instance, we might disprefer animal suffering because it is similar enough to human suffering.

This second level has very little to do with the complexity of the underlying physics. Monkeys have no conception of cellular automata; you could run them on cellular automata of vastly differing complexity and they wouldn't care. They care about the kind of simplicity that is relevant to their day-to-day environment. Humans also care about this kind of simplicity; it's just that they can generalize this preference to more abstract domains.

(On a somewhat unrelated note, you mentioned bacteria. I think your point is a red herring; you can build agents under an assumption about the underlying physics, but that doesn't mean that the agent itself has any conception of that physics, or even that the agent is consequentialist in any sense.)

So, what I'm trying to get at: you might prefer simple physics and you might care about people, but it makes little sense to care less about people because they run on uglier physics. People are not physics; they are really high-level constructs, and a vast range of different universes could contain (more or less identical) instances of people whom I care about, or even simulations of those people.

If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less "measure" to them. But you rejected this kind of reasoning in your post, and I can't exactly come to grips with the "physics racism" that seems to logically follow from that.

Comment by Kutta on Why CFAR? · 2013-12-29T11:17:49.256Z · LW · GW

Donated $500.

Comment by Kutta on 2013 Less Wrong Census/Survey · 2013-11-22T10:26:44.391Z · LW · GW

Survey taken.

Comment by Kutta on March 2013 Media Thread · 2013-03-05T09:12:48.262Z · LW · GW

Very recently decent fansubs have surfaced for Mamoru Hosoda's new movie The Wolf Children Ame and Yuki. I wholeheartedly recommend it.

Comment by Kutta on Participation in the LW Community Associated with Less Bias · 2012-12-09T17:51:37.653Z · LW · GW

I figured that ~36% discount over two months would be way too high.

Comment by Kutta on 2012 Winter Fundraiser for the Singularity Institute · 2012-12-08T08:33:02.583Z · LW · GW

Thanks for your effort. I'll contact my bank.

Comment by Kutta on 2012 Winter Fundraiser for the Singularity Institute · 2012-12-07T11:40:55.075Z · LW · GW

I donated $250.

Update: No, I apparently did not. For some reason the transfer from Google Checkout got rejected, and now PayPal too. Does anyone have an idea what might've gone wrong? I've a Hungarian bank account. My previous SI donations were fine, even with the same credit card if I recall correctly, and I'm sure that my card is still perfectly valid.

Comment by Kutta on 2012 Less Wrong Census/Survey · 2012-11-04T14:17:06.782Z · LW · GW

Took all of them.

Comment by Kutta on Good transhumanist fiction? · 2012-10-15T12:38:20.236Z · LW · GW

Luminosity, Radiance and Flashes by Alicorn.

Comment by Kutta on How is your mind different from everyone else's? · 2012-06-22T08:53:38.255Z · LW · GW

What do you mean by real depth? In cinema, isn't skilled cinematography included in that? If I recall correctly, I've read from you somewhere that you think most of NGE's narrative/mythological background is an impromptu, leaky mess (which I mostly agree with), so you might mean that by lack of real depth, but that doesn't subtract much from NGE's overall success at thematic exposition, so I'm still not fully getting it.

Comment by Kutta on Arrison, Vassar and de Grey speaking at Peter Thiel's startup class [link] · 2012-06-17T07:21:39.383Z · LW · GW

Typo: Arrison.

Comment by Kutta on Organic food, conventional food · 2012-06-15T06:16:04.639Z · LW · GW

Seconded all three. The health impact of the quality of a particular foodstuff (within the variance allowed by developed country regulations) is often overstated compared to the health impact of the overall composition of the calories you eat.

Comment by Kutta on [Link] The Greek Heliocentric Theory · 2012-06-13T07:37:14.253Z · LW · GW

I first encountered this idea in Terence Tao's excellent slides.

Comment by Kutta on Rationality Quotes April 2012 · 2012-04-01T13:00:30.729Z · LW · GW

He who knows how to do something is the servant of he who knows why that thing must be done.

-- Isuna Hasekura, Spice and Wolf vol. 5 ("servant" is justified by the medieval setting).

Comment by Kutta on Harry Potter and the Methods of Rationality discussion thread, part 12 · 2012-03-27T09:50:44.042Z · LW · GW

I've the impression that Harry actually has some kind of censor inside his head that prevents him from thinking about the sense of doom concerning Quirrell. He is never shown remembering it and reflecting on it, even though it should be a pretty damn conspicuous and important fact. EDIT: not never, as seen below, but the amount of thought he expends on the matter still seems weirdly small.

Comment by Kutta on Meetup : First meetup in Budapest · 2012-03-21T14:07:12.342Z · LW · GW

I'm probably coming.

Also, will the meetup's language be English? AlexeyM's username suggests so.

Comment by Kutta on POSITION: Design and Write Rationality Curriculum · 2012-01-19T12:25:24.527Z · LW · GW

The presentation and exercise booklets seem to be pretty awesome.

Comment by Kutta on More intuitive explanations! · 2012-01-07T07:35:05.549Z · LW · GW

1) Here is a nice prove of Pythagorean theorem:

Typo: proof.

Comment by Kutta on Rationality quotes January 2012 · 2012-01-01T11:23:25.810Z · LW · GW

Most people you know are probably weak skeptics, and I would probably fit this definition in several ways. "Strong skeptics" are the people who write The Skeptics' Encyclopedia, join the California Skeptics' League, buy the Complete Works of James Randi, and introduce themselves at parties saying "Hi, I'm Ted, and I'm a skeptic!". Of weak skeptics I approve entirely. But strong skeptics confused me for a long while. You don't believe something exists. That seems like a pretty good reason not to be too concerned with it.

Edit: authorial instance specified on popular demand.

Comment by Kutta on Tell Your Rationalist Origin Story · 2011-12-31T00:06:28.809Z · LW · GW

Welcome to Less Wrong!

You might want to post your introduction in the current official "welcome" thread.

... then I am an ex-rationalist.

LW's notion of rationality differs greatly from what you described. You may find our version more palatable.

Comment by Kutta on What are you working on? December 2011 · 2011-12-30T23:00:40.166Z · LW · GW

Do you have evidence besides the username and the programming skill that it's Norvig? I also entertained the idea that it's him. At first I didn't examine his code deeply, but its conciseness inspired me to create a 12-line semi-obfuscated Python solution. I posted a clarified version of that in the thread. What do you think about it? Also, could you tell me your Euler username so I could look for your solutions (provided that you actually post there)?

Now that you've mentioned Norvig's solution, I investigated it, and after correcting some typos I got it to run on my PC. I concluded that it works pretty much the same way as my solution (but mine's considerably faster :) ).

Comment by Kutta on Holiday giving thread · 2011-12-24T17:00:26.314Z · LW · GW

By the way, it seems that the usual end-of-the-year SI fundraising is live now.

Comment by Kutta on How is your mind different from everyone else's? · 2011-12-17T23:59:12.503Z · LW · GW

Evangelion

Maybe someone should do a study on that peculiar group of depressed and/or psychopathological people who were significantly mentally affected by NGE. Of course it's all anecdotal right now, but I really have the impression (especially after spending some time at EvaGeeks...) that NGE produces a recurring pattern of effects in a cluster of people; moreover, that effect is much more dramatic than what is usual in art.

Comment by Kutta on A discarded review of 'Godel, Escher Bach: an Eternal Golden Braid' · 2011-12-16T08:34:08.458Z · LW · GW

GEB is great as many things: as an introduction to formal systems, self-reference, several computer science topics, Gödel's First Incompleteness Theorem, and other stuff. Often it is also a unique and very entertaining hybrid of art and nonfiction. Without denying any of those merits, the book's weakest point is actually its core message, quoted in the OP as

GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter... GEB approaches [this question] by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves... to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.

What Hofstadter does is the following: he identifies self-awareness and self-reference as core features of consciousness and/or intelligence, and he embarks on a long journey across various fields in search of phenomena that also have something to do with self-reference. This is some kind of weird essentialism; Hofstadter tries to reduce extremely high-level features of complex minds to (superficially) similar features that arise in enormously simpler formal and physical systems. Hofstadter doesn't believe in ontologically fundamental mental entities, so he's far from classic essentialism, yet he believes in very low-level "essences" of consciousness that percolate up to high-level minds. This abrupt jumping across levels of organization reminds me a bit of those people who try to derive practical everyday epistemic implications from the First Incompleteness Theorem (or get dispirited because of some implied "inherent unknowability" of the world).

Now, to be fair, GEB considers medium levels of organization in its two chapters on AI, but it is far less elaborate on those matters than on formal systems, for instance. The AI chapters are also the most outdated now, and even there Hofstadter is not really attempting any noteworthy reduction of minds; instead he briefly ponders then-contemporary AI topics such as Turing tests, computer chess, SHRDLU, Bongard problems, symbol grounding, etc.

To be even fairer, valid reduction of high-level features of human minds is extremely difficult. Ev-psych and cognitive science can do it occasionally, but they don't yet attempt a reduction of general intelligence and consciousness itself. It is probably understandable that Hofstadter couldn't see that far ahead into the future of cogsci, evpsych and AI. Eliezer Yudkowsky's Levels of Organization in General Intelligence is the only reductionist work I know that tries to wrestle with all of it at once, and while it is of course not definitive or even fleshed-out, I think it represents the kind of mode of thinking that could possibly yield genuine insights about the mysteries of consciousness. In contrast, GEB never really enters that mode.

Comment by Kutta on What are you working on? December 2011 · 2011-12-14T10:53:36.706Z · LW · GW

Wow, thanks. That's probably the subjectively best feeling thing anyone's said to me in 2011 so far.

Comment by Kutta on What are you working on? December 2011 · 2011-12-13T23:10:39.346Z · LW · GW

In September I picked up programming. Following many people's recommendations I chose the Project Euler + Python combination. So far it seems to be quite addictive (and effective). I'm currently at 90 solved problems, although I'm starting to feel a bit out of my (rather non-deep) depth, so I'm considering temporarily switching to PyGame for a while and coding remakes of simple old games, while getting hold of several CS and coding textbooks.
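For anyone curious what these look like: the very first problem is nearly a one-liner in Python (Problem 1 asks for the sum of all multiples of 3 or 5 below 1000):

```python
def solve(limit=1000):
    # Sum every number below `limit` that is divisible by 3 or by 5.
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(solve())  # → 233168
```

The later problems are where the addictiveness comes from; the brute-force style above stops scaling and you're forced to learn actual number theory and algorithms.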

Comment by Kutta on Announcing the Quantified Health Prize · 2011-12-11T17:23:51.396Z · LW · GW

Pollan's book is horrible. It actually argues against nutrition science per se, continuously bringing up the supposed holistic irreducibility of diets and emphasizing "common sense", "tradition" and "what our grandparents ate" as primary guidelines. Pollan presents several cherry-picked past mistakes of nutrition science, and from them concludes that nutrition science in general is evil.

I am not fundamentally against heuristics derived from tradition and/or evolution, but Pollan seems to use such heuristics whimsically, mostly to push a personal agenda of vegetarianism, organic foods and an extremely warm and fuzzy philosophical baggage of food culture and lifestyle. Also, Pollan's arguments are almost exclusively based on affect (nature = good, artificial = evil, people selling artificial food = monstrous, etc.). Actually, looking into the book a bit to refresh my memory, Pollan doesn't even use traditions to make inferences about foods' healthiness; they are merely convenient sources of positive affect.

Comment by Kutta on [Link] A Short Film based on Eliezer Yudkowsky's AI Box experiment · 2011-12-08T08:37:52.112Z · LW · GW

I am a bit worried by the fact that this trailer has a robot squad infiltrating a warehouse with mannequins and antique recording devices, as opposed to things more unambiguously AI-Box-related. The synopsis also sounds rather wooey. Anyway, the full movie will be the judge of my worries.

Comment by Kutta on Facing the Intelligence Explosion discussion page · 2011-11-29T08:59:08.352Z · LW · GW

My central point is contained in the sentence after that. A positive Singularity seems extremely human to me when contrasted to paperclip Singularities.

Comment by Kutta on Facing the Intelligence Explosion discussion page · 2011-11-26T14:24:20.879Z · LW · GW

Re: Preface and Contents

I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans cease to exist after any Singularity. Also, a positive Singularity could be characterised as the beginning of the humane era, in which case it is somewhat inappropriate to refer to the era afterwards as non-human. In contrast, negative Singularities typically result in universes devoid of human-related things.

Comment by Kutta on How did you come to find LessWrong? · 2011-11-22T09:39:47.220Z · LW · GW

2008: Life extension -> Immortality Institute -> OB

Comment by Kutta on Rationality Quotes November 2011 · 2011-11-04T20:27:44.917Z · LW · GW

That's withholding potentially important information. Also, you still have to address other people's erroneous beliefs about their points.

Comment by Kutta on 2011 Less Wrong Census / Survey · 2011-11-04T17:30:23.282Z · LW · GW

13 years off, 50% confidence.

Comment by Kutta on 2011 Less Wrong Census / Survey · 2011-11-01T07:44:03.558Z · LW · GW

I praise Yvain for this.

Comment by Kutta on [FICTION] Hamlet and the Philosopher's Stone · 2011-10-30T17:33:36.514Z · LW · GW

I've paid the 3 dollars because it is such a small amount. The marvelously awful Harry Potter puns alone made it a bargain.

Comment by Kutta on Rhetoric for the Good · 2011-10-25T06:51:26.906Z · LW · GW

Begin with movement. Excitement. Humor. Surprise. Insight. Explosions.

I know I am a total nitpicker here but I think there is such a thing as too short a sentence.

Comment by Kutta on [LINK] Loss of local knowledge affecting intellectual trends · 2011-10-23T07:21:07.892Z · LW · GW

A popular answer to that nowadays is something like "creativity".

Comment by Kutta on Things you are supposed to like · 2011-10-21T09:29:38.370Z · LW · GW

You can draw a lot of motivation from peer pressure; the trick is to expose yourself to specific kinds of peer pressure that propel you towards some desirable goal.

In regards to art, I once made a considerable effort to like extreme metal, because a respected art-geek friend recommended it. He's a professional poker player with little to no social engagement in art circles, and thus his tastes have remarkably social-pressure-free origins; I figured that would make his social pressure on me more valuable. Currently, on reflection, I believe that some extreme metal is extremely good, and I enjoy such music immensely, and the fact that I could only reach this state via peer pressure doesn't matter that much.

"Try to minimize information cascades regarding art recommendations" seems to be a good heuristic in general. Another would be: "value the recommendations of people who have complex boundaries of liked-disliked art". Someone who likes some classical music, but not most, and also likes some extreme metal, but not most, maybe considers the actual music more carefully than someone who likes most music from one genre but completely dismisses certain other genres.

Comment by Kutta on The Outside View Of Human Complexity · 2011-10-09T06:22:37.663Z · LW · GW

The points overestimating complexity (especially point 1, about brain proteins) seem to be attempts to reduce high-level complexity to low-level complexity instead of low-level simplicity.

Comment by Kutta on Should I play World of Warcraft? · 2011-10-07T06:39:48.766Z · LW · GW

You want to destroy PhilGoetz's life?

Comment by Kutta on On self-deception · 2011-10-06T10:42:18.953Z · LW · GW

The OP argued that self-deception occurs even if your brain remains unbroken. I would characterize "not breaking my brain" as allowing my prior belief about the book's bias to make a difference in my posterior confidence in the book's thesis. In that case the book might be arbitrarily convincing; but I might start with an arbitrarily high confidence that the book is biased, and then it boils down to an ordinary Bayesian tug o' war, and Yvain's comment applies.

On the other hand, I'd view a brain-breaking book as a "press X to self-modify to devout Y-believer" button. If I know the book is such, I decide not to read it. If I'm ignorant of the book's nature, and I read it, then I'm screwed.

Comment by Kutta on Willpower and diet: advice? · 2011-09-22T07:48:28.245Z · LW · GW

I stopped eating wheat two years ago (no relapse since then). I've found that the following technique makes the switch tremendously easier:

Eat a cup of whipping cream before meals.

Explanation: the easiest way to make wheat craving go away is to already consume enough calories without wheat; I suspect that much of wheat craving is the result of overestimating the caloric value of a wheat-free diet. Also, low-carbers often have to push themselves to eat slightly more than what they'd otherwise eat, because they tend to be more sated than is common among high-carb people. Moreover, a barrier to major diet change is that time expenditure on food preparation and purchase will inevitably go up for a while after a change. People with scarce free time fall back to old diet patterns because they often find themselves urgently needing calories and grab whatever is most convenient.

Whipping cream is cheap, very dense in high-quality calories, contains almost no health-controversial nutrients, is very filling, comes packed in easily measurable and quantifiable units, is reasonably palatable (especially if you make an effort to seek out the tastiest available one) and requires zero preparation. Three 175 g cups of 20% fat whipping cream are roughly 1000 calories, which can easily crowd out wheat urges. Even if you dislike the idea of eating whipping cream by the spoon in the long term, it can perfectly well help you bridge the critical newly wheat-free period, during which you gradually grow accustomed enough to your new diet that you can eat diet-compliant, sophisticated foods without a significant hit to your free time.
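As a rough sanity check on that calorie figure (the macronutrient numbers below are my assumptions from typical 20%-fat cream labels, about 20 g fat, 3 g carbs and 3 g protein per 100 g):

```python
def kcal_per_100g(fat_g=20.0, carb_g=3.0, protein_g=3.0):
    # Atwater factors: 9 kcal/g for fat, 4 kcal/g for carbs and protein.
    return 9 * fat_g + 4 * (carb_g + protein_g)

total = 3 * 175 / 100 * kcal_per_100g()  # three 175 g cups
print(round(total))  # ≈ 1071 kcal, consistent with "roughly 1000"
```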

Comment by Kutta on Word Pronunciation · 2011-09-11T06:36:27.586Z · LW · GW

Did you click on the listen icons on the right side, those that activate the Hungarian parser? I'm Hungarian and Google's "Erdős" and "Szilárd" are basically indistinguishable from common speech versions, while "Csíkszentmihályi" has only one minor flaw, namely that it leaves a bit too much space between Csík and szent.