## Posts

## Comments

**potato**on Does Evidence Have To Be Certain? · 2016-03-30T12:49:45.256Z · score: 0 (0 votes) · LW · GW

Yeah, the problem I have with that, though, is that I'm left asking: why did I change my probability in that? Is it because I updated on something else? Was I certain of that something else? If not, then why did I change my probability of that something else? And on we go down the rabbit hole of an infinite regress.

**potato**on Bayes Slays Goodman's Grue · 2015-11-08T21:24:10.946Z · score: 1 (1 votes) · LW · GW

Wait, actually, I'd like to come back to this. What programming language are we using? If it's one where either grue is primitive, or one where there are primitives that make grue easier to write than green, then grue seems simpler than green. How do we pick which language we use?
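A toy sketch of the worry (the primitive sets and the length measure here are made up for illustration): whether "grue" counts as simpler than "green" depends entirely on which predicates the language treats as primitive.

```python
# Toy illustration: description length of a predicate depends on the
# language's primitives. "grue" = green before time T, blue after;
# "bleen" = blue before time T, green after.

def description_length(predicate, primitives):
    """Crude proxy for complexity: 1 if the predicate is a primitive,
    otherwise the number of primitives needed to define it."""
    if predicate in primitives:
        return 1
    definitions = {
        "green": ["grue", "bleen", "time<T"],  # green = grue before T, bleen after
        "grue":  ["green", "blue", "time<T"],  # grue = green before T, blue after
    }
    return len(definitions[predicate])

english_like = {"green", "blue", "time<T"}
goodman_like = {"grue", "bleen", "time<T"}

assert description_length("green", english_like) == 1
assert description_length("grue", english_like) == 3   # grue is complex here...
assert description_length("grue", goodman_like) == 1   # ...but primitive here
assert description_length("green", goodman_like) == 3
```

The asymmetry vanishes once you let the language choose its own primitives, which is exactly the question raised above.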

**potato**on Causal Universes · 2015-10-09T17:13:23.586Z · score: 0 (0 votes) · LW · GW

Here's my problem. I thought we were looking for a way to categorize meaningful statements. I thought we had agreed that a meaningful statement must be interpretable as, or consistent with, at least one DAG. But now it seems that there are ways the world could be which cannot be interpreted as even one DAG, because they require a directed cycle. So have we now decided that a meaningful sentence must be interpretable as a directed graph, cyclic or acyclic?

In general, if I say that all and only statements satisfying P are meaningful, then any statement that doesn't satisfy P must be meaningless, and all meaningless statements should be unobservable; therefore a statement like "all and only statements that satisfy P are meaningful" should be unfalsifiable.

**potato**on Causality: a chapter by chapter review · 2015-10-07T05:34:13.223Z · score: 0 (0 votes) · LW · GW

What does "Markov relative" mean?

**potato**on The Fabric of Real Things · 2015-10-07T05:25:16.415Z · score: 0 (0 votes) · LW · GW

Does EY give his own answer to this elsewhere?

**potato**on Godel's Completeness and Incompleteness Theorems · 2015-10-07T03:40:04.295Z · score: 1 (1 votes) · LW · GW

Wait... this will seem stupid, but can't I just say: "there does not exist x where sx = 0"?

nevermind

**potato**on Tell Culture · 2015-08-04T00:21:11.455Z · score: 2 (2 votes) · LW · GW

Here's a new strategy.

Use Guess culture as a default. Use Guess tricks to figure out whether the other communicator speaks Ask. Use Ask tricks to figure out whether the communicator speaks Tell.

**potato**on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T23:50:44.466Z · score: 0 (0 votes) · LW · GW

Let's forget about the oracle. What about the program that outputs X if 1 + 1 = 2, and prints 0 otherwise? Let's call it A(1,1). The formalism requires that P(X|A(1,1)) = 1, and it requires that P(A(1,1)) = 2^-K(A(1,1)), but does it need to know that "1 + 1 = 2" is somehow *proven* by A(1,1) printing X?

In either case, you've shown me something that I explicitly doubted before: one can prove any provable theorem if one has access to a Solomonoff agent's distribution and knows how to write a program that prints X iff theorem S is provable. All one has to do is check the probability the agent assigns to X conditional on that program.
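A sketch of that procedure in miniature. Here a trivially checkable arithmetic statement stands in for a real theorem prover, and an agent that can simply run the program stands in for a genuine (uncomputable) Solomonoff distribution; both substitutions are my own simplifications.

```python
# Sketch: a program that prints "X" iff a statement holds, so that
# P(X | program) encodes the statement's truth for an idealized agent.

def proves(statement):
    # Stand-in for a theorem checker; here the "theorems" are plain
    # Python arithmetic expressions.
    return eval(statement)

def program_output(statement):
    return "X" if proves(statement) else "0"

def conditional_probability_of_X(statement):
    # An idealized agent that can run the program assigns
    # P(X | program) in {0, 1}.
    return 1.0 if program_output(statement) == "X" else 0.0

assert conditional_probability_of_X("1 + 1 == 2") == 1.0
assert conditional_probability_of_X("1 + 1 == 3") == 0.0
```

Reading off the conditional probability is exactly the "check the probability the agent assigns to X" move described above.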

**potato**on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T22:53:04.679Z · score: 0 (0 votes) · LW · GW

Awesome. I'm pretty sure you're right; that's the most convincing counterexample I've come across.

I have a weak doubt, but I think you can get rid of it:

Let's name the program FLT().

I'm just not sure this means that the theorem itself is assigned a probability. Yes, I have an oracle, but it doesn't assign a probability to a program halting; it tells me whether it halts or not. What the Solomonoff formalism requires is that "if (halts(FLT()) == true) then P(X|FLT()) = 1", "if (halts(FLT()) == false) then P(X|FLT()) = 0", and "P(FLT()) = 2^-K(FLT())". Where in all this is the probability of Fermat's Last Theorem? Having an oracle may imply knowing whether or not FLT is a theorem, but it does not imply that we must assign that theorem a probability of 1. (Or maybe it does, and I'm not seeing it.)

Edit: Come to think of it... I'm not sure there's a relevant difference between knowing whether a program that outputs True iff theorem S is provable will end up halting, and assigning probability 1 to theorem S. It does seem that I must assign 1 to statements of the form "A or ~A" or else it won't work; whereas if theorem S is not in the domain of our probability function, nothing seems to go wrong.

In either case, this probably isn't the standard reason for believing in, or thinking about, logical omniscience, because the concept of logical omniscience is probably older than Solomonoff induction. (I am of course only realizing that in hindsight, now that I've seen a powerful counterexample to my argument.)

**potato**on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T14:51:14.962Z · score: 0 (0 votes) · LW · GW

Upvoted for cracking me up.

**potato**on Transhumanism and the denotation-connotation gap · 2015-08-03T09:44:58.438Z · score: 0 (0 votes) · LW · GW

Terminology quibble:

I get where you get this notion of connotation from, but there's a more formal one that Quine used, which is at least related. It's the difference between an extension and a meaning. So the extensions of "vertebrate" and "things with tails" could have been identical, but that would not mean that the two predicates have the same meanings. To check if the extensions of two terms are identical, you check the world; it seems like to check whether two meanings are identical, you have to check your own mind.

Edit: Whoops, somebody already mentioned this.

**potato**on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T09:16:05.996Z · score: 1 (1 votes) · LW · GW

I agree. I am saying that we need not assign it a probability at all. Your solution assumes that there is a way to express "two" in the language. Also, the proposition you made is more like "one elephant and another elephant make two elephants", not "1 + 1 = 2".

I think we'd be better off trying to find a way to express 1 + 1 = 2 as a boolean function on programs.

**potato**on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-03T09:10:30.053Z · score: 1 (1 votes) · LW · GW

This is super interesting. Is this based on UDT?

**potato**on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T08:51:11.074Z · score: 0 (0 votes) · LW · GW

How do you express Fermat's Last Theorem, for instance, as a Boolean combination of the language I gave, or as a Boolean combination of programs? Boolean algebra is not strong enough to derive, or even express, all of math.

edit: Let's start simple. How do you express 1 + 1 = 2 in the language I gave, or as a Boolean combination of programs?

**potato**on How An Algorithm Feels From Inside · 2013-09-18T19:30:15.239Z · score: 4 (4 votes) · LW · GW

> Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a "rube" instead? You're going to put it in the rube bin—why not call it a "rube"?
>
> But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.
>
> So if you find a blue egg-shaped object that contains palladium, and you ask "Is it a blegg?", the answer depends on what you have to do with the answer: If you ask "Which bin does the object go in?", then you choose as if the object is a rube. But if you ask "If I turn off the light, will it glow?", you predict as if the object is a blegg. In one case, the question "Is it a blegg?" stands in for the disguised query, "Which bin does it go in?". In the other case, the question "Is it a blegg?" stands in for the disguised query, "Will it glow in the dark?"

This is amazing, but too fast. It's too important and counterintuitive to go over that fast, and we absolutely, devastatingly, painfully need it in philosophy departments. Please help us; this is an S.O.S., our ship is sinking. Write this again, longer, so that I can show it to people and change their minds: people who are not LessWrong-literate. I also ask that you, or anyone for that matter, find a simple real-world example with roughly analogous parameters to the ones you specified, and use that as the example instead. Somebody do it, please; I'm too busy arguing with philosophy professors about it, and there are better writers on this site who could take up the endeavor. Chances are it would be useful and well liked, and I'll give what rewards I can.
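The quoted blegg/rube setup can be put in code, which makes the "disguised query" point mechanical: the same object gets different answers depending on which query the word "blegg" is standing in for. (The object representation here is my own.)

```python
# The same object, two disguised queries, two different answers.

def which_bin(obj):
    # Sorting is driven by contents: palladium goes in the rube bin.
    return "rube bin" if obj["contains"] == "palladium" else "blegg bin"

def will_glow(obj):
    # Glowing tracks surface features, not contents.
    return obj["color"] == "blue" and obj["shape"] == "egg"

odd_object = {"color": "blue", "shape": "egg", "contains": "palladium"}

assert which_bin(odd_object) == "rube bin"   # choose as if it's a rube...
assert will_glow(odd_object) is True         # ...but predict as if it's a blegg
```

Asking "is it *really* a blegg?" adds nothing once both queries are answered.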

**potato**on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-09T01:52:27.467Z · score: 0 (0 votes) · LW · GW

Here's a question: if we had the ability to input a sensory event with a likelihood ratio of 3^^^^3:1, would this whole problem be solved?

**potato**on The Fabric of Real Things · 2012-10-14T23:04:26.115Z · score: 0 (0 votes) · LW · GW

Hmm, it depends on whether or not you can give finite complete descriptions of those algorithms; if so, I don't see the problem with just tacking them on. If you can give a finite description of the algorithm, then its Kolmogorov complexity will be finite, and the prior 2^-K(h) will still give nonzero probabilities to hyper environments.

If there are no such finite complete descriptions, then I gotta go back to the drawing board, cause the universe could totally allow hyper computations.

On a side note, where should I go to read more about hyper-computation?
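A minimal sketch of the point about the prior, using description length in bits as a crude stand-in for Kolmogorov complexity (the description strings are made up for illustration): any finitely describable environment, hypercomputing or not, gets nonzero weight under 2^-K(h).

```python
# Finite description => finite "complexity" => nonzero prior weight.

def prior_weight(description):
    k = 8 * len(description)   # bits, assuming one byte per character
    return 2.0 ** (-k)

ordinary = prior_weight("turing machine simulating physics")
hyper = prior_weight("halting-oracle machine simulating physics")

assert ordinary > 0 and hyper > 0   # both get nonzero prior probability
assert hyper < ordinary             # the longer description just weighs less
```

The hyper environment is penalized for its longer description, but never ruled out, which is the claim made above.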

**potato**on The Fabric of Real Things · 2012-10-14T09:08:06.371Z · score: 2 (2 votes) · LW · GW

At first thought: it seems that if it could be falsified, then it would fail the criterion of containing all and only those hypotheses which could in principle be falsified. It's kind of like a meta-reference problem: if it does constrain experience, then there are hypotheses which are not interpretable as causal graphs but which constrain experience (no matter how unlikely). This is so because the sentence says "all and only those hypotheses that can be interpreted as causal graphs are falsifiable", and for it to be falsified means verifying that there is at least one falsifiable hypothesis which cannot be interpreted as a causal graph. Short answer: not if we got it right this time.

(term clarification) All and only hypotheses that constrain experience are falsifiable and verifiable, for there exists a portion of experience space which, if observed, falsifies them, and the rest verifies them (probabilistically).

**potato**on The Fabric of Real Things · 2012-10-14T08:54:09.342Z · score: -1 (1 votes) · LW · GW

I have to ask, how does this metaphysics (cause that's what it is) account for mathematical truths? What causal models do those represent?

My bad:

Someone already asked this more cleverly than I did.

**potato**on The Fabric of Real Things · 2012-10-14T08:46:56.158Z · score: 4 (4 votes) · LW · GW

I have a plausibly equivalent candidate (or at least one that implies EY's) for the fabric of real things, i.e., the space of hypotheses which could in principle be true, i.e., the space of beliefs which have sense:

A hypothesis has nonzero probability iff it's computable or semi-computable.

It's rather obviously inspired by Solomonoff abduction, and is a sound principle for any being attempting to approximate the universal prior.

**potato**on The Fabric of Real Things · 2012-10-13T19:10:30.576Z · score: 3 (3 votes) · LW · GW

It seems to me that this is the primary thing that we should be working on. If probability is subjective, and causality reduces to probability, then isn't causality subjective, i.e., a function of background knowledge?

**potato**on Causality: a chapter by chapter review · 2012-10-11T11:02:43.609Z · score: 0 (0 votes) · LW · GW

Looking it over, I could have been much clearer (sorry). Specifically, I want to know: given a DAG of the form

A -> C <- B

is it true that (in all prior joint distributions where A is independent of B, but A is evidence of C, and B is evidence of C) A is non-independent of B, given that C is held constant?

I proved that this is so when A & B is evidence against C, and also when A & B is independent of C; the only case I am missing is when A & B is evidence for C.

It's clear enough to me that when you have one non-colliding path between any two variables, they must not be independent, and that if we were to hold any of the variables along that path constant, those variables would become independent. This can all be shown given standard probability theory and correlation alone. It can also be shown that if there are only colliding paths between two variables, those two variables are independent. If I have understood the theory of d-separation correctly, then if we hold the collision variable (assuming there is only one) on one of these paths constant, the two variables should become non-independent (either evidence for or against one another). I have proven that this is so in two of the (at least) three cases that fit the given DAG, using standard probability theory.

Those are the proofs I gave above.

**potato**on Causality: a chapter by chapter review · 2012-10-02T04:43:50.688Z · score: 3 (3 votes) · LW · GW

I have a question: is D-separation implied by the Kolmogorov axioms?

I've proven that it is in some cases:

Premises (writing A for P(A), A|B for P(A|B), and AB for P(A and B); the claim is that these imply A|BC ≤ A|C):

1) A = A|B

2) C < C|A

3) C < C|B

4) C|AB < C

Proof:

1) B|C > B {via premise 3}

2) A|BC = A * B * C|AB / (C * B|C) {via premise 1}

3) A|BC * C = A * B * C|AB / B|C

4) A|BC * C / A = B * C|AB / B|C

5) B * C|AB / B|C < C|AB {via line 1}

6) B * C|AB / B|C < C {via line 5 and premise 4}

7) A|BC * C / A < C {via lines 4 and 6}

8) A|C = A * C|A / C

9) A|C * C = A * C|A

10) A|C * C / A = C|A

11) C < A|C * C / A {via line 10 and premise 2}

12) A|BC * C / A < A|C * C / A {via lines 7 and 11}

13) A|BC < A|C

Q.E.D.

Premises (again, the claim is that these imply A|BC ≤ A|C):

1) A = A|B

2) C < C|A

3) C < C|B

4) C|AB = C

Proof:

1) A|C = A * C|A / C

2) A|BC = A * B * C / (B * C|B) {via premises 1 and 4}

3) A|BC = A * C / C|B

4) A * C < A * C|A {via premise 2}

5) A * C / C|B < A * C|A / C {via line 4 and premise 3}

6) A|BC < A|C {via lines 1, 3, and 5}

Q.E.D.

If it is implied by classical probability theory, could someone please refer me to a proof?
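The remaining case (A and B jointly being evidence for C) can at least be spot-checked numerically on one explicit joint distribution; this checks a single instance, not the general theorem. The distribution here is my own choice: A and B fair independent coins, C = A or B.

```python
from itertools import product

# Numerical check of the A -> C <- B claim on one explicit joint
# distribution where A & B is evidence *for* C (the missing case).

def p(event):
    """Probability of `event` under the uniform distribution on (A, B)."""
    worlds = [(a, b, int(a or b)) for a, b in product([0, 1], repeat=2)]
    return sum(0.25 for w in worlds if event(*w))

p_a = p(lambda a, b, c: a == 1)                                    # 0.5
p_a_given_b = p(lambda a, b, c: a == 1 and b == 1) / p(lambda a, b, c: b == 1)
assert p_a_given_b == p_a                                          # A indep. of B

p_c = p(lambda a, b, c: c == 1)                                    # 0.75
p_a_given_c = p(lambda a, b, c: a == 1 and c == 1) / p_c           # 2/3
p_a_given_bc = (p(lambda a, b, c: a == 1 and b == 1 and c == 1)
                / p(lambda a, b, c: b == 1 and c == 1))            # 1/2

# Conditioning on the collider C makes B evidence against A:
assert p_a_given_bc < p_a_given_c
```

So at least on this distribution, the conclusion A|BC < A|C holds in the third case too.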

**potato**on Terminal Values and Instrumental Values · 2012-09-16T07:50:30.774Z · score: 0 (0 votes) · LW · GW

A real deadlock I have with using your algorithmic metaethics to think about object-level ethics is that I don't know whose volition, or "should" label, I should extrapolate from. It allows me to figure out what's right for me, and what's right for any group given certain shared extrapolated terminal values, but it doesn't tell me what to do when I am dealing with a population with non-converging extrapolations, or with someone who has different extrapolated values from me (hypothetically).

These individuals are rare, but they likely exist.

**potato**on Math is Subjunctively Objective · 2012-09-16T07:27:11.184Z · score: 0 (0 votes) · LW · GW

You've misunderstood me. It's really not at all conspicuous to allow a non-empty "set" into your ontology, but if you'd prefer, we can talk about heaps; they serve for my purposes here (of course, by "heap", I mean any random pile of stuff). Every heap has parts: you're a heap of cells, decks are heaps of cards, masses are heaps of atoms, etc. Now, if you apply a level filter to the parts of a heap, you can count them. For instance, I can count the organs in your body, count the organ cells in your body, and end up with two different values, though I counted the same object. The same object can constitute many heaps, as long as there are several ways of dividing the object into parts. So what we can do is talk about the laws of heap combination, rather than the laws of numbers. We don't require any further generality in our mathematics to do all our counting, and yet the only objects I've had to adopt into my ontology are heaps (rather inconspicuous material fellows, IMHO).

I should mention that this is not my real suggestion for a foundation of mathematics, but when it comes to the challenge of interpreting the theory of natural numbers without adopting any ghostly *quantities*, heaps work just fine.

(edit):
I should mention that heaps, requiring only that you accept a whole with parts and a *level test* on any given part, are much more ontologically inconspicuous than pure sets. Where exactly is the null set? Where is any pure set? I've never seen any of them. Of course, I see heaps all over the place.

**potato**on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2012-09-11T03:57:41.934Z · score: 0 (0 votes) · LW · GW

"

"You have brain damage" is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?"

IMHO, all human psychologies have a hard time updating toward believing they're poorly built. We are by nature arrogant. Do not forget that common folk often "choose" what to believe after they think about how it feels to believe it.

(Brilliant article btw)

(edit):

> Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?

IMHO, there are plenty of cognitive biases that can explain that sort of behavior in healthy patients. Confirmation bias and the affect heuristic are the first to come to mind.

**potato**on [SEQ RERUN] Math is Subjunctively Objective · 2012-08-13T15:22:37.746Z · score: 3 (5 votes) · LW · GW

Why not call the set of all sets of actual objects with cardinality 3 "three", the set of all sets of physical objects with cardinality 2 "two", and the set of all sets of physical objects with cardinality 5 "five"? Then when I said that 2+3=5, all I would mean is that for any x in two and any y in three, the union of x and y is in five. If you allow sets of physical objects, and sets of sets of physical objects, into your ontology, then you get this: 2+3=5 no matter what anyone thinks, and two and three are real objects existing out there.
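A miniature of this proposal, with frozensets of labeled "physical objects" drawn from a small toy universe; note that the union claim needs x and y to be disjoint, a qualification I've added here.

```python
from itertools import combinations

# "two" as the set of all 2-element sets of objects, etc.

def sets_of_size(n, universe):
    return {frozenset(c) for c in combinations(universe, n)}

universe = {"a", "b", "c", "d", "e", "f", "g"}
two, three, five = (sets_of_size(n, universe) for n in (2, 3, 5))

# "2 + 3 = 5" then reads: for any x in two and any y in three that are
# disjoint, the union of x and y is in five.
for x in two:
    for y in three:
        if x.isdisjoint(y):
            assert (x | y) in five
```

The fact the loop verifies holds regardless of what anyone thinks, which is the point being made above.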

**potato**on Math is Subjunctively Objective · 2012-08-13T15:20:07.846Z · score: 1 (1 votes) · LW · GW

Why not call the set of all sets of actual objects with cardinality 3 "three", the set of all sets of physical objects with cardinality 2 "two", and the set of all sets of physical objects with cardinality 5 "five"? Then when I said that 2+3=5, all I would mean is that for any x in two and any y in three, the union of x and y is in five. If you allow sets of physical objects, and sets of sets of physical objects, into your ontology, then you get this: 2+3=5 no matter what anyone thinks, and two and three are real objects existing out there.

**potato**on Rationality Quotes August 2012 · 2012-08-13T14:42:09.673Z · score: 1 (1 votes) · LW · GW

It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.

**potato**on [SEQ RERUN] The Meaning of Right · 2012-08-13T14:37:12.071Z · score: 0 (2 votes) · LW · GW

There's no purpose to purpose, but there's still plenty of purpose in the object level.

**potato**on [SEQ RERUN] The Meaning of Right · 2012-08-13T14:34:09.921Z · score: 1 (1 votes) · LW · GW

Attempt at a four sentence summary for practicing ethical agents:

You decide just how right some event is by approximating an ideal computation. This is why if you think about it longer, sometimes you change your mind about how right an event was. This solves the problem of metaethics. However, most of the work for object level ethicologists remains open, e.g., specifying the ideal computation we approximate when we decide how right some event is.

**potato**on Torture vs. Dust Specks · 2012-08-10T22:06:00.541Z · score: 1 (1 votes) · LW · GW

Here's a suggestion: if someone going through fate A is incapable of noticing whether or not they're going through fate B, then fate A is infinitely worse than fate B.

**potato**on Torture vs. Dust Specks · 2012-08-10T22:00:54.348Z · score: 3 (3 votes) · LW · GW

If asked independently whether or not I would take an eyeball speck in the eye to spare a stranger 50 years of torture, I would say "sure". I suspect most people would if asked independently. It should make no difference to each of those 3^^^3 dust-speck victims that there are another (3^^^3)-1 people who would also take the dust speck if asked.

It seems, then, that there are thresholds in human value. Human value might be better modeled by surreals than reals. In such a system we could represent the utility of 50 years of torture as -Ω and represent the utility of a dust speck in one's eye as -1. This way, no matter how many dust specks end up in eyes, they don't add up to torturing someone for 50 years. However, we would still minimize torture, and minimize dust specks.

The greater problem is to exhibit a general procedure for when we should treat one fate as being infinitely worse than another, vs. treating it as merely being some finite amount worse.
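Surreal numbers aren't readily computable, but the threshold behavior described above can be sketched with lexicographically compared tuples (the payoff encoding here is my own illustration): the torture component always dominates, so no finite number of specks outweighs it.

```python
# Lexicographic utilities: (torture_term, dust_speck_term), compared
# component-wise, so the first component always dominates the second.

def utility(tortures, dust_specks):
    # More negative is worse.
    return (-tortures, -dust_specks)

fifty_years_of_torture = utility(tortures=1, dust_specks=0)
many_specks = utility(tortures=0, dust_specks=10**100)

# Python compares tuples lexicographically, so:
assert many_specks > fifty_years_of_torture   # any finite number of specks
                                              # is preferred to one torture
assert utility(0, 5) < utility(0, 3)          # ...but fewer specks is still better
```

The open problem stated above is then deciding which fates get their own tuple position, i.e., which differences are lexicographic and which are merely finite.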

**potato**on No Logical Positivist I · 2012-07-30T15:56:17.866Z · score: 0 (0 votes) · LW · GW

Your "general frameworks for combining" do exactly the work that logical positivists did by building statements from verifiable constituents using logical connectives....So, even without invoking omnipotent beings to check whether the cake is there, the logical positivist would attribute meaning to that claim in essentially the same way that you do.

I agree that EY's attacking a certain straw man of positivism, and that EY is ultimately a logical positivist with respect to how he showed the meaningfulness of the Boltzmann cake hypotheses. But, assuming EY submits to a computational-complexity prior, his position is distinct, in that there could be two hypotheses which we fundamentally cannot tell apart, e.g., Copenhagen and MWI, and yet we have good reason to believe one over the other, even though there will never be any test that justifies belief in one over another. (If you think you can test MWI vs. Copenhagen, just replace them with "the universe spawns humans with 10^^^^^10 more quanta in it" vs. "it doesn't"; we clearly can't test these, as there aren't enough quanta in the universe.)

**potato**on Timeless Decision Theory and Meta-Circular Decision Theory · 2012-07-30T15:00:55.212Z · score: 3 (3 votes) · LW · GW

Does

Argmax[A in Actions] of Sum[O in Outcomes] (Utility(O) * P(this computation yields A []-> O | rest of universe))

evaluate to:

```python
def Tdt(Actions, Outcomes):
    currentMax = 0   # assumes utilities are non-negative
    output = None
    for A in Actions:
        total = 0
        for O in Outcomes:
            # Condition on the logical fact that this very computation outputs A
            total += U(O) * P(O | Tdt(Actions, Outcomes) == A and background_knowledge)
        if total >= currentMax:
            currentMax = total
            output = A
    return output
```

Or am I missing some subtlety? I am assuming that "P" and "U" have been defined elsewhere, and that Python can deal with referencing the outcome of a computation inside itself before it has been completed (or at least that the probability function halts when given a yet-to-be-computed function evaluating to a certain output as its input statement). (edit): I couldn't get the tabs to work; it's supposed to be pseudo-Python, but it's probably just as readable. Is there a way to typeset tabs in the comments?

**potato**on No Logical Positivist I · 2012-07-23T00:42:43.438Z · score: 1 (1 votes) · LW · GW

Yes, the point I was trying to make was that for a sentence to be meaningful, there must be a physical state which it encodes, even if that physical state is inaccessible to us. "At 8:00 pm last night a tea kettle spontaneously formed around Saturn." is meaningful, because it encodes a state located in space-time.

**potato**on Were atoms real? · 2012-07-05T20:17:49.698Z · score: 0 (0 votes) · LW · GW

If I thought that atoms were unreal, I would not expect to be able to *photograph* them. I also wouldn't expect a single atom to be capable of casting a shadow. Those are some ways (and there are many more) in which I could be shown wrong if I believed atoms were unreal, mere pedagogical tools.

**potato**on The Power of Reinforcement · 2012-06-21T17:55:31.303Z · score: 1 (1 votes) · LW · GW

Does this still work if I reinforce myself? Every time I read 5 LessWrong articles in a day, I give myself a reward. Or every time I have a cigarette, I kick a brick wall with no shoes on. If I were consistent with this for a long time, would it work?

**potato**on Only say 'rational' when you can't eliminate the word · 2012-06-11T00:31:45.707Z · score: 1 (1 votes) · LW · GW

"I believe that 'P'." is only deflationary because it treats belief as if it were binary, but it isn't. "I have 0.8 belief in 'P'." is certainly not the same as "It is true that 'P'." Yes? One is a claim about the world, and one is a claim about my model of the world.

**potato**on Reductionism · 2012-06-08T23:52:41.650Z · score: 1 (3 votes) · LW · GW

Because things happen. If there were no most basic level, figuring out what happens would be an infinite recursion with no base case. Not even the universe's computation could find the answer.

**potato**on Reductionism · 2012-06-08T23:50:05.906Z · score: 0 (0 votes) · LW · GW

This post represents, for me, the typical LW response to something like the object-oriented ontologies of Paul Levi Bryant and DeLanda. These ontologies attempt to give things like numbers, computations, atoms, fundamental particles, galaxies, higher-level laws, fundamental laws, concepts, referents of concepts, etc. equal ontological status. Hence, they are strictly against making a distinction between map and territory: there is only territory, and all things that are, are objects.

I'm a confident reductionist, model/reality (Bayesian) type of guy, and I'm not having major second thoughts about that right now. But engaging in productive debate with object-oriented philosophers might be a good chance for us to check ourselves, i.e., to see how confident we really should be in our reductionist ontology. There are leading philosophers, and other scientists, who are opposed to reductionism, and opposed to *correlationism*. They have blogs, and are often open to debate. There's no point missing out on talking with someone who sees the universe fundamentally differently from you in a way that is technically derivable.

**potato**on [SEQ RERUN] Timeless Causality · 2012-06-08T23:17:16.482Z · score: 0 (0 votes) · LW · GW

I guess that means you don't know that it's going to end up low-entropy; most universes don't end up low-entropy, so you expect *this one* won't either.

**potato**on [SEQ RERUN] Timeless Causality · 2012-06-07T20:17:04.707Z · score: 0 (0 votes) · LW · GW

Is there any clever maneuver we can use to distinguish between *right* and *left* causality if the system is assumed to be deterministic? Can we distinguish between *right* and *left* causality under the following conditions?

- We allow the functions from (L1, L2) to R1 and from (L1, L2) to R2 not to be identical (assuming *rightward* causality). In other words, the rule that the system uses to produce the next state of V1 from the current states of V1 and V2 doesn't have to be the same rule as the one used to produce the next state of V2 from the current states of V1 and V2.

- Both functions are known to be surjective.

- We don't know the functions.

- Both V1 and V2 may have any natural number of states, and they need not have the same number of states.

(edit):

Those conditions are really just a suggestion, if you have better ones, use them. And share 'em too plz.

**potato**on Fake Causality · 2012-06-07T20:13:26.661Z · score: 0 (0 votes) · LW · GW

I don't want to revise my objection, because it's not really a material implication that you're using. You're using probabilistic reasoning in your argument, i.e., pointing out certain pressures that exist which rule out certain ways that people could be getting smarter, and therefore increase our probability that people are not getting smarter. But if people are in fact getting smarter, this reasoning is either too confident in the pressures, or is far from Bayesian updating.

Either way, I feel like we took up too much space already. If you would like to continue, I would love to do so in a private message.

**potato**on Fake Causality · 2012-06-07T03:48:44.610Z · score: 0 (0 votes) · LW · GW

> if it [...] has a false conclusion, you should forget about the reasoning altogether

and

> some people don't seem to understand contexts in which the truth value of a statement is unimportant.

You see no problem here?

**potato**on Timeless Causality · 2012-06-07T03:42:16.093Z · score: 0 (0 votes) · LW · GW

That's what it seems he's getting at in the linked essay.

**potato**on Fake Causality · 2012-06-07T03:33:48.104Z · score: 0 (0 votes) · LW · GW

Above, you said that you weren't sure if the conclusion of some argument you were using was true; don't do that. That is all the advice I wanted to give.

**potato**on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2012-06-06T20:10:20.224Z · score: 0 (0 votes) · LW · GW

I have my anti-kibitzer on, and I've had it on for two days. I too read certain posters more carefully than others, but now, rather than deciding whom to read carefully by status, I decide whom to read carefully by overviewing the contents of their posts. Of course, you want to give more resources and time to a great master of the art than to a moderate master. But deciding who is who by status, or letting status weigh in as much as it does in humans, is almost as bad as not having any time management at all. It's like time managing where you also falsely think that some independent variable has something to do with the content.

**potato**on Timeless Causality · 2012-06-06T19:14:56.816Z · score: 0 (0 votes) · LW · GW

Do you know that it doesn't work if we use a deterministic rule, or have you just not tried? Cause I'm trying right now.

**potato**on The True Prisoner's Dilemma · 2012-06-06T18:11:47.462Z · score: 0 (2 votes) · LW · GW

It's really about the iteration. I would continually cooperate with the paperclip maximizer if I had good reason to believe it would not defect. For instance, if I knew that the paperclip maximizer was Eliezer Yudkowsky without morals and with a great urge for paperclip creation, I would cooperate. Assuming that it knows that playing with the defect button can make it lose 1 billion paperclips from here on, and I know the same for human lives, cooperating seems right. It has the highest expected payoff, if we're using each other's known intentions and past plays as evidence about our future plays.

If there is only one trial, and I can't talk to the paper clip maximizer, I will defect.
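The payoff logic of that last point can be sketched with made-up payoff numbers: against a reciprocating opponent, sustained cooperation beats defection over many rounds, while defection wins a single anonymous round.

```python
# Sketch of iterated vs. one-shot play (payoff numbers are made up):
# the opponent reciprocates, playing whatever I played last round.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

def iterated_payoff(my_strategy, rounds=100):
    """Total payoff for always playing `my_strategy` ("C" or "D")
    against a tit-for-tat opponent that opens with cooperation."""
    total, their_move = 0, "C"
    for _ in range(rounds):
        total += PAYOFF[(my_strategy, their_move)]
        their_move = my_strategy   # tit-for-tat response next round
    return total

assert iterated_payoff("C") > iterated_payoff("D")   # cooperation wins iterated
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]       # defection wins one-shot
```

With 100 rounds, steady cooperation earns 200 while steady defection earns 102: one round of exploitation, then mutual punishment, which matches the reasoning above.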