**incorrect** on Visual Mental Imagery Training · 2013-02-22T15:12:12.851Z · score: 0 (0 votes) · LW · GW

My visualization ability improves the closer I am to sleep, being near perfect during a lucid dream.

**incorrect** on Falsifiable and non-Falsifiable Ideas · 2013-02-19T09:44:34.945Z · score: -1 (1 votes) · LW · GW

You can generally throw unfalsifiable beliefs into your utility function, but you might consider this intellectually dishonest.

As a quick analogy, a solipsist can still care about other people.

**incorrect** on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T04:22:26.803Z · score: 1 (3 votes) · LW · GW

I escape by writing a program that simulates 3^^3 copies of myself escaping and living happily ever after (generating myself by running Solomonoff Induction on a large amount of text I type directly into the source code).

**incorrect** on Isolated AI with no chat whatsoever · 2013-01-28T21:49:34.549Z · score: 3 (5 votes) · LW · GW

You might be able to contain it with a homomorphic encryption scheme

**incorrect** on I attempted the AI Box Experiment (and lost) · 2013-01-22T00:22:23.365Z · score: 11 (11 votes) · LW · GW

I'm guessing Eliezer would lose most of his advantages against a demographic like that.

**incorrect** on I attempted the AI Box Experiment (and lost) · 2013-01-21T13:58:35.354Z · score: 2 (2 votes) · LW · GW

Oh god, remind me to never play the part of the gatekeeper… This is terrifying.

**incorrect** on Some scary life extension dilemmas · 2013-01-03T19:33:42.133Z · score: 0 (2 votes) · LW · GW

The lifespan dilemma applies to any unbounded utility function combined with expected value maximization; it does not require simple utilitarianism.
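A minimal numeric sketch of that pattern (the gamble numbers are invented for illustration): each trade multiplies utility faster than it shrinks survival probability, so an expected-value maximizer with an unbounded utility function takes every trade.

```python
p_survive, utility = 1.0, 1.0
for step in range(10):
    new_p, new_u = p_survive * 0.8, utility * 10.0
    # each trade strictly increases expected value...
    assert new_p * new_u > p_survive * utility
    p_survive, utility = new_p, new_u
# ...yet after ten trades survival probability is about 0.107
print(p_survive)
```

No aggregation over people appears anywhere; unboundedness plus expected value maximization is enough to generate the dilemma.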

**incorrect** on New censorship: against hypothetical violence against identifiable people · 2012-12-24T04:40:09.933Z · score: 8 (8 votes) · LW · GW

Would your post on eating babies count, or is it too nonspecific?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1

(I completely agree with the policy, I'm just curious)

**incorrect** on Causal Universes · 2012-12-01T02:18:06.640Z · score: 0 (0 votes) · LW · GW

There are people who claim to be less confused about this than I am

Solipsists should be able to dissolve the whole thing easily.

**incorrect** on Gap in understanding of Logical Pinpointing · 2012-11-13T23:23:05.494Z · score: 0 (0 votes) · LW · GW

Thanks, can you recommend a textbook for this stuff? I've mostly been learning off Wikipedia.

I can't find a textbook on logic in the lesswrong textbook list.

**incorrect** on Gap in understanding of Logical Pinpointing · 2012-11-13T01:42:55.052Z · score: 0 (0 votes) · LW · GW

Therefore a theory could be ω-consistent because it fails to prove P(n), even though P(n) is true in the standard model.

I thought that for ω-consistency to even be defined for a theory, it must interpret the language of arithmetic?

**incorrect** on Gap in understanding of Logical Pinpointing · 2012-11-12T20:00:53.855Z · score: 0 (0 votes) · LW · GW

Just use the axiom schema of induction instead of the second-order axiom of induction, and you will be able to produce theorems, though.

But you won't be able to prove all true statements of SOL PA, that is, PA with the standard model, because of the incompleteness theorems. To be able to prove a larger subset of such statements, you would have to add new axioms to PA. If adding an axiom T to PA does not prevent the standard model from being a model of PA+T, that is, if it does not prove any statements that require the existence of nonstandard numbers, then PA+T is ω-consistent.

So, why can't we just keep adding axioms T to PA, check whether PA+T is ω-consistent, and have a more powerful theory? Because we can't in general determine whether a theory is ω-consistent.

Perhaps a probabilistic approach would be more effective. Anyone want to come up with a theory of logical uncertainty for us?

## Gap in understanding of Logical Pinpointing

2012-11-12T17:33:47.929Z · score: 6 (9 votes)

**incorrect** on Logical Pinpointing · 2012-11-04T04:56:55.064Z · score: 0 (0 votes) · LW · GW

There's no complete deductive system for second-order logic.

**incorrect** on Logical Pinpointing · 2012-11-04T01:17:30.135Z · score: 1 (1 votes) · LW · GW

An infinite number of axioms like in an axiom schema doesn't really hurt anything, but you can't have infinitely long single axioms.

```
∀x((x = 0) ∨ (x = S0) ∨ (x = SS0) ∨ (x = SSS0) ∨ ...)
```

is not an option. And neither is the axiom set

```
P0(x) iff x = 0
PS0(x) iff x = S0
PSS0(x) iff x = SS0
...
∀x(P0(x) ∨ PS0(x) ∨ PSS0(x) ∨ PSSS0(x) ∨ ...)
```

We could instead try the axioms

```
P(0, x) iff x = 0
P(S0, x) iff x = S0
P(SS0, x) iff x = SS0
...
∀x(∃n(P(n, x)))
```

but then again we have the problem of n being a nonstandard number.

**incorrect** on Logical Pinpointing · 2012-11-03T23:57:49.788Z · score: 1 (1 votes) · LW · GW

I don't see what the difference is... They look very similar to me.

At some point you have to translate it into a (possibly infinite) set of first-order axioms or you won't be able to perform first-order resolution anyway.

**incorrect** on Logical Pinpointing · 2012-11-03T23:08:38.920Z · score: 1 (1 votes) · LW · GW

I don't see how you would define Pn(x) in the language of PA.

Let's say we used something like this:

```
Pn(x) iff ((0 + n) = x)
```

Let's look at the definition of +, a function symbol that our model is allowed to define:

```
a + 0 = a
a + S(b) = S(a + b)
```

"x + 0 = x" should work perfectly fine for nonstandard numbers.
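For standard numerals, the two recursion equations above can be mirrored directly in code (a sketch: the numeral SS…S0 is represented by its nesting depth):

```python
def add(a, b):
    if b == 0:
        return a              # a + 0 = a
    return 1 + add(a, b - 1)  # a + S(b) = S(a + b)

print(add(2, 3))  # SS0 + SSS0 = SSSSS0, i.e. 5
```

The recursion only ever reaches standard numerals, which is exactly the point at issue: nothing in the first-order equations forbids extra, nonstandard elements.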

So going back to P(x):

"there exists some n such that ((0 + n) = x)"

for a nonstandard number x, does there exist some number n such that ((0+n) = x)? Yup, the nonstandard number x! Set n=x.

Oh, but when you said nth successor you meant n had to be standard? Well, that's the whole problem, isn't it!

**incorrect** on Logical Pinpointing · 2012-11-03T18:29:50.130Z · score: 1 (1 votes) · LW · GW

Are you saying that in reality every property P has actually three outcomes: true, false, undecidable?

By Gödel's incompleteness theorem, yes, unless your theory of arithmetic has a non-recursively-enumerable set of axioms or is inconsistent.

And that those always decidable, like e.g. "P(n) <-> (n = 2)" cannot be true for all natural numbers, while those which can be true for all natural numbers, but mostly false otherwise, are always undecidable for... some other values?

I'm having trouble understanding this sentence but I think I know what you are asking about.

There are some properties P(x) which are true for every x in the 0 chain, however, Peano Arithmetic does not include all these P(x) as theorems. If PA doesn't include P(x) as a theorem, then it is independent of PA whether there exist nonstandard elements for which P(x) is false.

Let's suppose that for any specific value V in the separated chain it is possible to make such property PV. What would that prove? Would it contradict the article? How specifically?

I think this is what I am saying I believe to be impossible. You can't just say "V is in the separated chain": V is a constant symbol, and the model can assign constants to whatever object in the domain of discourse it wants unless you add axioms forbidding it.

Honestly I am becoming confused. I'm going to take a break and think about all this for a bit.

**incorrect** on Open Thread, November 1-15, 2012 · 2012-11-03T17:14:16.174Z · score: 1 (1 votes) · LW · GW

Could someone please confirm my statements in the new sequence post about first-order logic? I want to be sure my understanding is correct.

http://lesswrong.com/lw/f4e/logical_pinpointing/7qv6?context=1#7qv6

**incorrect** on Logical Pinpointing · 2012-11-03T09:59:14.814Z · score: 1 (1 votes) · LW · GW

If our axiom set T is independent of a property P about numbers, then by definition there is nothing inconsistent about the theory T1 = "T and P" and also nothing inconsistent about the theory T2 = "T and not P".

To say that they are not inconsistent is to say that they are satisfiable, that they have possible models. As T1 and T2 are inconsistent with each other, their models are different.

The single zero-based chain of numbers without nonstandard numbers is a single model. Therefore, if there exists a property of numbers that is independent of a given theory of arithmetic, that theory does not logically exclude the possibility of nonstandard elements.

By Gödel's incompleteness theorems, a theory must have statements that are independent of it unless it is either inconsistent or has a non-recursively-enumerable theorem set.

Each instance of the axiom schema of induction can be constructed from a property. The set of properties is recursively enumerable, therefore the set of instances of the axiom schema of induction is recursively enumerable.

Every theorem of Peano Arithmetic must use a finite number of axioms in its proof. We can enumerate the theorems of Peano Arithmetic by adding increasingly larger subsets of the infinite set of instances of the axiom schema of induction to our axiom set.

Since the theory of Peano Arithmetic has a recursively enumerable set of theorems, it is either inconsistent or independent of some property, and thus allows for the existence of nonstandard elements.
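That enumeration can be sketched with a standard dovetailing loop (illustrative only, not an actual proof enumerator): index the finite subsets of schema instances by k and the candidate proofs by n, and walk the anti-diagonals so every pair (k, n) is visited after finitely many steps.

```python
from itertools import count, islice

def dovetail():
    # walk anti-diagonals so every (k, n) pair is eventually reached
    for total in count():
        for k in range(total + 1):
            yield (k, total - k)

print(list(islice(dovetail(), 6)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

This is the usual trick for showing a set built from an infinite family of infinite enumerations is still recursively enumerable.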

**incorrect** on Logical Pinpointing · 2012-11-03T09:34:29.710Z · score: 2 (2 votes) · LW · GW

"Because if you had another separated chain, you could have a property P that was true all along the 0-chain, but false along the separated chain. And then P would be true of 0, true of the successor of any number of which it was true, and not true of all numbers."

But the axiom schema of induction does not completely exclude nonstandard numbers. Sure, if I prove some property P(0), and that for all n, P(n) => P(n+1), then I can conclude that for all n, P(n); I have excluded the possibility of some nonstandard number n for which not P(n). But there are some properties which cannot be proved true or false in Peano Arithmetic, and therefore whose truth can be altered by the presence of nonstandard numbers.

Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible but I may be mistaken.

**Incorrect** on [deleted post] 2012-09-22T22:56:04.428Z

**incorrect** on New study on choice blindness in moral positions · 2012-09-21T04:58:43.619Z · score: 9 (9 votes) · LW · GW

Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?

**incorrect** on Less Wrong Polls in Comments · 2012-09-19T16:33:47.977Z · score: 1 (1 votes) · LW · GW

Bug:

**incorrect** on Rationality Quotes September 2012 · 2012-09-17T04:19:29.423Z · score: 1 (1 votes) · LW · GW

Are you saying that dying after a billion years sounds sad to you?

And therefore you would have a thousand-year-old brain that can make trillion-year plans.

**incorrect** on Open Thread, September 15-30, 2012 · 2012-09-15T17:14:30.400Z · score: 0 (0 votes) · LW · GW

I think I have a better understanding now.

For every statement S and for every action A except the one Myself() actually returns, PA will contain a theorem of the form (Myself()=A) => S, because a falsehood implies anything. Unless Myself() doesn't halt, in which case the value of Myself() can be undecidable in PA and Myself's theorem prover won't find anything, consistent with the fact that Myself() doesn't halt.

I will assume Myself() is also filtering theorems by making sure Universe() has some minimum utility in the consequent.

If Myself() halts, then if the first theorem it finds has a false consequent, PA would be inconsistent (because Myself() will return A, proving the antecedent true, and hence the consequent true). I guess if this *would* have happened, then Myself() would be undecidable in PA.

If Myself() halts and the first theorem it finds has a true consequent then all is good with the world and we successfully made a good decision.

Whether or not ambient decision theory works on a particular problem seems to depend on the ordering of theorems it looks at. I don't see any reason to expect this ordering to be favorable.
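A toy mock-up of that dependence (the theorem list, utility numbers, and threshold are all invented; this is not a real PA proof search): Myself() returns the action from the first qualifying theorem it encounters, so the enumeration order alone fixes the decision.

```python
def myself(theorems, min_utility=5):
    # scan "theorems" (action, utility-of-consequent) in proof-search order
    for action, utility in theorems:
        if utility >= min_utility:
            return action
    return None  # nothing found: analogous to not halting

print(myself([("A", 6), ("B", 9)]))  # A
print(myself([("B", 9), ("A", 6)]))  # B: same theorems, other order
```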

**incorrect** on Open Thread, September 15-30, 2012 · 2012-09-15T06:37:27.088Z · score: 1 (1 votes) · LW · GW

How does ambient decision theory work with PA which has a single standard model?

It looks for statements of the form Myself()=C => Universe()=U

(Myself()=C) and (Universe()=U) should each have no free variables. This means that within a single model, their truth values are constant. Thus such statements of implication establish no relationship between your action and the universe's utility; each is simply a boolean function of two constant values.

What am I missing?

**incorrect** on The Yudkowsky Ambition Scale · 2012-09-12T21:13:01.933Z · score: 5 (7 votes) · LW · GW

15: discover ordinal hierarchy of Tegmark universes, discover method of constructing the set of all ordinals without contradiction, create level n Tegmark universe for all n

**incorrect** on Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments? · 2012-09-12T16:47:56.624Z · score: 8 (8 votes) · LW · GW

It was supposed to be a sarcastic response about being too strict with definitions, but it obviously didn't end up being funny.

I am not a Will Newsome sockpuppet. I'll refrain from making the lower quality subset of my comments henceforth.

**incorrect** on Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments? · 2012-09-12T03:41:15.015Z · score: -13 (19 votes) · LW · GW

Define human, moderator, judgement call, makes, and "when".

**incorrect** on Rationality Quotes September 2012 · 2012-09-09T04:18:18.036Z · score: 4 (12 votes) · LW · GW

Here's a conversation I had with Will a while back:

http://lesswrong.com/lw/cw1/open_problems_related_to_solomonoff_induction/6rlr?context=1#6rlr

**incorrect** on Rationality Quotes September 2012 · 2012-09-09T04:09:26.394Z · score: -5 (7 votes) · LW · GW

Cuteness is a subjective evaluation, a way to interpret reality, not a fact.

**incorrect** on Rationality Quotes September 2012 · 2012-09-09T04:04:00.150Z · score: -8 (8 votes) · LW · GW

Even calling someone's behavior non-cute is mean. Even meanness is cute. Once you start calling humans or the things they do non-cute you open the door to finding humans disgusting.

Even if we were to assume his behavior was trollish, damaging to lesswrong, and/or unproductive, that shouldn't make it non-cute.

**incorrect** on Rationality Quotes September 2012 · 2012-09-09T03:53:51.824Z · score: -8 (10 votes) · LW · GW

I'd rather live in a world where even if we disagree with each other, annoy each other, or waste each other's time we still don't say anybody isn't cute.

The opposite of cute is disgusting and is not a concept that should be applied to humans.

**incorrect** on Friendship is Optimal: A My Little Pony fanfic about an optimization process · 2012-09-09T03:40:48.692Z · score: 0 (4 votes) · LW · GW

He's really wondering whether the voxel-space is a directed graph or whether up∘down=down∘up=identity (and for left/right too). Movement could be commutative with up∘down≠identity.

Consider

```
voxels = {"a", "b"}

# every move from either voxel lands on a
left = right = up = down = {"a": "a", "b": "a"}

# movement is commutative: f(g(x)) = g(f(x)) for each opposite pair
for f, g in [(left, right), (right, left), (up, down), (down, up)]:
    for x in voxels:
        assert f[g[x]] == g[f[x]] == "a"

# but up∘down is not the identity:
assert up[down["b"]] == "a"   # whereas identity(b) = b
```

**incorrect** on Rationality Quotes September 2012 · 2012-09-09T01:37:12.385Z · score: -7 (11 votes) · LW · GW

It's really mean to say someone isn't cute and although this entire thread isn't very productive I find it mean that my comment rejecting the meanness to WN was selectively deleted.

**incorrect** on Rationality Quotes September 2012 · 2012-09-08T22:20:25.033Z · score: -5 (11 votes) · LW · GW

This behavior isn't cute.

Yes it is, and not just a little bit.

**incorrect** on Rationality Quotes September 2012 · 2012-09-08T18:27:10.152Z · score: 2 (4 votes) · LW · GW

If dying after a billion years doesn't sound sad to you, it's because you lack a thousand-year-old brain that can make trillion-year plans.

If only the converse were true...

**incorrect** on Jews and Nazis: a version of dust specks vs torture · 2012-09-08T16:38:17.782Z · score: 1 (1 votes) · LW · GW

They aren't adding qualia, they are adding the utility they associate with qualia.

**incorrect** on Jews and Nazis: a version of dust specks vs torture · 2012-09-07T23:49:36.937Z · score: -1 (3 votes) · LW · GW

Sorry, generic you.

**incorrect** on Jews and Nazis: a version of dust specks vs torture · 2012-09-07T23:34:25.343Z · score: -1 (5 votes) · LW · GW

What's more important to you, your desire to prevent genocide or your desire for a simple consistent utility function?

**incorrect** on Jews and Nazis: a version of dust specks vs torture · 2012-09-07T23:22:59.368Z · score: 4 (8 votes) · LW · GW

If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?

Maybe you should hope that morality isn't written into the structure of the universe. What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean... if an external objective morality tells you to kill people, why should you even listen?

- The Moral Void, Eliezer Yudkowsky

**incorrect** on How to deal with someone in a LessWrong meeting being creepy · 2012-09-07T18:54:45.019Z · score: 19 (25 votes) · LW · GW

Do not initiate intimate physical contact (hugs, touching shoulder, etc) unless the target has previously made similar contact with you.

If everyone follows this rule nobody will ever initiate physical contact.

**incorrect** on Meta: What do you think of a karma vote checklist? · 2012-09-01T21:19:51.549Z · score: 2 (2 votes) · LW · GW

Slashdot has something like this.

**incorrect** on The noncentral fallacy - the worst argument in the world? · 2012-09-01T14:22:29.284Z · score: 2 (2 votes) · LW · GW

Genuine agreement with whimsical annoyance about having to consider actual situations and connotations.

**incorrect** on The noncentral fallacy - the worst argument in the world? · 2012-08-31T18:34:06.829Z · score: 0 (2 votes) · LW · GW

Sounds good to me if you're going to get all connotative about it.

**incorrect** on The noncentral fallacy - the worst argument in the world? · 2012-08-31T16:10:02.135Z · score: 2 (2 votes) · LW · GW

What about going from "members of subcategory X of category Y are more likely to possess characteristic C" to "In the absence of further information, a particular member of subcategory X is more likely to possess characteristic C than a non-X member of category Y".

You are saying you can't go from probabilistic information to certainty. This is a strawman.
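With made-up frequencies, the licensed inference looks like this: membership in X shifts the probability of C without ever producing certainty.

```python
p_c_given_x = 0.30     # invented: fraction of subcategory X with C
p_c_given_rest = 0.05  # invented: fraction of the rest of category Y with C

# a probabilistic update, not a certainty claim
assert p_c_given_rest < p_c_given_x < 1.0
```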

**incorrect** on Open Thread, August 16-31, 2012 · 2012-08-31T09:27:55.703Z · score: 1 (1 votes) · LW · GW

I wrote my comment above under the assumption of mjgeddes' honesty but I also believe they are more likely lying than not lying.

My alternative theories are: mjgeddes is just trolling without any real plan (40%); mjgeddes is planning to laugh at us all for believing something with such an explicitly low prior (40%); something else (19%); mjgeddes actually won the lottery (<1%).

Yet still I feel the need to give them the benefit of the doubt. I wonder precisely when that social heuristic should be abandoned...

**incorrect** on Open Thread, August 16-31, 2012 · 2012-08-31T05:39:57.184Z · score: 0 (0 votes) · LW · GW

http://lesswrong.com/lw/sc/existential_angst_factory/

You could try self-modifying to not hate evil people ("hate the sin, not the sinner"). Here are some emotional arguments that might help (I make no claim as to their logical coherence):

If there was only one person in existence and they were evil, would you want them to be punished or blessed? Who would it serve to punish them?

If you are going to excuse people with mental illness you are going to have to draw some arbitrary line along the gradient from "purposely evil" to "evil because of mental illness." Also consider the gradient of moral responsibility from child to adult.

If someone who was once evil completely reformed would you still see value in punishing them? Would you wish you hadn't punished them while they were still evil?

Although someone may have had a guilty mind at the moment of their crime, do they still at the moment of punishment? What if you are increasing the quantum measure of an abstracted isomorphic experience of suffering?

**incorrect** on [META] Karma for last 30 days? · 2012-08-31T05:10:01.224Z · score: 0 (0 votes) · LW · GW

(bit of irony here :P)

Perhaps acceptable casualties.

**incorrect** on Open Thread, August 16-31, 2012 · 2012-08-31T04:46:55.761Z · score: 1 (1 votes) · LW · GW

I just won the New Zealand national lottery.

Congratulations!

For the sake of people reading this post who may not be familiar with the concept of backwards causality:

As a fun test, I called on any future super intelligences to come to my aid, appealing to the notion of backward causality. Asking for clear evidence of the hand of a superintelligence in the event I won, I choose a number of high significance to me personally. The number I chose was 27, which I placed in all lines of the ticket. (All the other numbers I selected at random).

This is not the typical LW understanding of decision theory. Here's an example of what "backwards causality" could actually mean:

mjgeddes and lottery employee both believe an agent will be created in the future that likes to grant wishes and will reward people who help grant wishes. The lottery employee somehow knows mjgeddes made a wish, and fudges the lottery results in the hope of a future reward from the wish-granting agent.

Thinking of it as "backwards causality" enacted by the hypothetical future wish-granting agent is a useful way of thinking about certain decision problems but should never preclude a normal, traditional explanation.

Lest anyone claim I am ruining the mood: Praise be to the glorious Eschaton; that acausal spring from which all blessings flow!