## Comments

**CalmCanary** on The Brain as a Universal Learning Machine · 2015-06-21T18:34:58.591Z · LW · GW

So if I spouted 100 billion true statements at you, then said, "It would be good for you to give me $100,000," you'd pay up?

**CalmCanary** on Hedonium's semantic problem · 2015-04-09T23:56:32.271Z · LW · GW

Very interesting post, but your conclusion seems too strong. Presumably, if instead of messing around with artificial experiencers we just fill the universe with humans being wireheaded, we should be able to get large quantities of real pleasure with very little actually worthwhile experience; we might even be able to get away with just disembodied human brains. Given this, it seems highly implausible that if we try to transfer this process to a computer, we are forced to create agents so rich and sophisticated that their lives are actually worth living.

**CalmCanary** on Request for Steelman: Non-correspondence concepts of truth · 2015-03-25T02:41:04.462Z · LW · GW

> Correspondence (matching theories to observations) is a subset of coherence (matching everything with everything)

Correspondence is not just matching theories to observation. It is matching theories to reality. Since we don't have pure transcendent access to reality, this involves a lot of matching theories to observation and to each other, and rejecting the occasional observation as erroneous; however, the ultimate goal is different from that of coherence, since perfectly coherent sets of statements can still be wrong.

If your point is that "reality" is not a meaningful concept and we should write off the philosophizing of correspondence theorists and just focus on what they actually do, then what they actually do is identical to what coherentists actually do, not a subset.

**CalmCanary** on What Scarcity Is and Isn't · 2015-03-03T18:41:08.029Z · LW · GW

Those are entirely valid points, but they only show that human desires are harder to satiate than you might think, not that satiating them would be insufficient to eliminate scarcity. And in fact, that could not possibly be the case even granting the economic definition of scarcity, because if you have no unmet desires, you do not need to make choices about what uses to put things to. If, once you digitize your books, you want nothing in life except to read all the books you can now store, you don't need to put the shelf space to another use; you can just leave it empty.

**CalmCanary** on What Scarcity Is and Isn't · 2015-03-03T05:10:22.206Z · LW · GW

You should add a link to the previous post at the top, so people who come across this don't get confused by the sand metaphor.

This will hopefully be addressed in later posts, but on its own, this reads like an attempt to legislate a definition of the word 'scarcity' without a sufficient justification for why we should use the word in that way. (It could also be an explanation of how 'scarcity' is used as a technical term in economics, but it is not obvious to me that the alternate uses/unsatiated desires distinction is relevant to what most economists spend their time on. If this is your intention, could you elaborate and give evidence that economists do in fact use the term in this way?) Naively, it seems that if we have enough Star Trek replicators or whatever to satiate all human desires, then for all practical purposes scarcity has been eliminated, and insisting that it hasn't really been eliminated because some things still have alternative uses seems like playing unproductive word games. Can you explain why thinking of scarcity in your terms is advantageous?

Overall, this series has so far been quite fun to read. I look forward to more.

**CalmCanary** on Deconstructing the riddle of experience vs. memory · 2015-02-17T19:05:19.309Z · LW · GW

From Yudkowsky's Epistle to the New York Less Wrongians:

> Knowing about scope insensitivity and diminishing marginal returns doesn't just mean that you donate charitable dollars to "existential risks that few other people are working on", instead of "The Society For Curing Rare Diseases In Cute Puppies". It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn't in front of you and you don't have the unpleasant memory of exerting willpower not to eat it.

If you mainly value brownie-eating memories, this is perfectly reasonable advice. If instead you eat brownies for the experience, it is unhelpful, since eating half a brownie means the experience is either half as long or less intense.

Is this the sort of thing you're looking for?

**CalmCanary** on Does the Utility Function Halt? · 2015-01-28T15:13:37.179Z · LW · GW

OrphanWilde appears to be talking about morality, not decision theory. The moral Utility Function of utilitarianism is not necessarily the decision-theoretic utility function of any agent, unless you happen to have a morally perfect agent lying around, so your procedure would not work.

**CalmCanary** on Knightian Uncertainty: Bayesian Agents and the MMEU rule · 2014-08-05T05:14:58.997Z · LW · GW

The most obvious explanation for this is that utility is not a linear function of response time: the algorithm taking 20 s is very, very bad, and losing 25 ms on average is worthwhile to ensure that this never happens. Consider that if the algorithm is just doing something immediately profitable with no interactions with anything else (e.g. producing some cryptocurrency), the first algorithm is clearly better (assuming you are just trying to maximize expected profit), since on the rare occasions when it takes 20 s, you just have to wait almost 200 times as long for your unit of profit. This suggests that the only reason the second algorithm is typically preferred is that most programs do have to interact with other things, and an extremely long response time will break everything. I don't think any more convoluted decision-theoretic reasoning is necessary to justify this.
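A minimal sketch of the point, with hypothetical numbers (the stall probability, the deadline, and the crash cost are all assumptions for illustration): under a linear utility the faster-on-average algorithm wins, but once any response past a deadline is catastrophic, the preference flips.

```python
# Hypothetical numbers: algorithm A averages ~25 ms faster than B,
# but with small probability p it stalls for 20 s; B always takes 100 ms.
p = 0.001          # assumed probability of A's 20 s stall
t_a_fast = 0.075   # seconds, A's typical case (assumed)
t_a_slow = 20.0    # seconds, A's rare stall
t_b = 0.100        # seconds, B's constant response time

# If utility is linear in time (e.g. waiting for a unit of profit),
# only the mean matters, and A is better:
mean_a = (1 - p) * t_a_fast + p * t_a_slow   # = 0.094925 s
assert mean_a < t_b

# But if any response over 1 s "breaks everything", utility is nonlinear:
def utility(t, deadline=1.0, crash_cost=1000.0):
    """Negative time cost, with a large penalty for missing the deadline."""
    return -t if t <= deadline else -crash_cost

eu_a = (1 - p) * utility(t_a_fast) + p * utility(t_a_slow)
eu_b = utility(t_b)
assert eu_b > eu_a   # B is preferred once stalls are catastrophic
```

The crossover depends entirely on how bad a stall is relative to the average saving, which is the comment's point: no exotic decision theory is needed, just a non-linear utility.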

**CalmCanary** on Intuitive cooperation · 2014-07-25T19:34:37.150Z · LW · GW

Part of the issue is that you are not subject to the principle of explosion. You can assert contradictory things without also asserting that 2+2=3, so you can be confident that you will never tell anyone that 2+2=3 without being confident that you will never contradict yourself. Formal systems using classical logic can't do this: if they prove any contradiction at all, they also prove that 2+2=3, so proving that they don't prove 2+2=3 is exactly the same thing as proving that they are perfectly consistent, which they can't consistently do.
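For reference, the explosion derivation the comment relies on, in standard classical natural deduction (this is the textbook argument, not anything specific to PA):

```latex
% Principle of explosion: from P and \neg P, any Q follows.
\begin{align*}
1.\;& P          && \text{(premise)} \\
2.\;& \neg P     && \text{(premise)} \\
3.\;& P \lor Q   && \text{(from 1, $\lor$-introduction)} \\
4.\;& Q          && \text{(from 2 and 3, disjunctive syllogism)}
\end{align*}
```

Taking $Q$ to be $2+2=3$ gives exactly the situation in the comment: any single contradiction in a classical system already yields a proof that $2+2=3$.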

**CalmCanary** on Utilitarianism and Relativity Realism · 2014-06-22T19:27:35.557Z · LW · GW

You cannot possibly gain new knowledge about physics by doing moral philosophy. At best, you have shown that any version of utilitarianism which adheres to your assumptions must specify a privileged reference frame in order to be coherent, but this does not imply that this reference frame is the true one in any physical sense.

**CalmCanary** on Naturalistic trust among AIs: The parable of the thesis advisor's theorem · 2013-12-15T20:31:41.727Z · LW · GW

Strictly speaking, Löb's Theorem doesn't show that PA fails to prove, for some statement, that the provability of that statement implies the statement. It just shows that if you have a statement in PA of the form (if S is provable, then S), you can use this to prove S. The claim that PA proves no implication of that form for a false S only follows if we assume that PA is sound.

Therefore, replacing PA with a stronger system, or adding a primitive concept of provability in place of PA's complicated arithmetical construction, won't help. As long as the system can do everything PA can do (for example, prove that it can prove the things it can prove), it will always be able to get from (if S is provable, then S) to S, even if S is 3*5=56.
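In provability-logic notation (with $\Box$ read as "provable in the system"), the theorem the comment is describing can be written as:

```latex
% Löb's theorem for PA:
\text{if } \vdash_{PA} \Box S \to S, \text{ then } \vdash_{PA} S.
% Internalized as an axiom schema of the provability logic GL:
\Box(\Box S \to S) \to \Box S
```

The schema form makes the comment's point visible: it holds for any system whose provability predicate satisfies the standard derivability conditions, so strengthening PA or swapping in a primitive $\Box$ does not escape it.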

**CalmCanary** on Chocolate Ice Cream After All? · 2013-12-10T19:56:11.611Z · LW · GW

Presumably, if you use E to decide in Newcomb's soda, the decisions of agents not using E are screened off, so you should only calculate the relevant probabilities using data from agents using E. If we assume E does in fact recommend eating the chocolate ice cream, 50% of E agents will drink the chocolate soda, 50% will drink the vanilla soda (assuming reasonable experimental design), and 100% will eat the chocolate ice cream. Therefore, given that you use E, there is no correlation between your decision and receiving the $1,000,000, so you might as well eat the vanilla ice cream and get the $1000. Therefore E does not actually recommend eating the chocolate ice cream.
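The screening-off argument can be sketched as a small expected-value calculation (the 50/50 soda split and the $1,000,000 / $1000 payoffs are from the problem setup; conditioning only on E-agents is the comment's assumption):

```python
# Among agents all following decision procedure E, the soda assignment
# is random, and if E makes everyone eat chocolate ice cream, the
# ice-cream choice carries no information about which soda you drank:
p_chocolate_soda = 0.5        # unconditional, by experimental design
p_soda_given_icecream = 0.5   # screened off: still 0.5 after conditioning
assert p_soda_given_icecream == p_chocolate_soda

# Expected payoffs for an E-agent, given that independence:
ev_chocolate_icecream = p_chocolate_soda * 1_000_000            # $1M iff chocolate soda
ev_vanilla_icecream   = p_chocolate_soda * 1_000_000 + 1_000    # same soda odds + $1000
assert ev_vanilla_icecream > ev_chocolate_icecream
```

So conditional on using E, eating vanilla strictly dominates, which is why E cannot consistently recommend chocolate.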

Note that this reasoning does not generalize to Newcomb's problem. If E agents take one box, Omega will predict that they will all take one box, so they all get the payoff and the correlation survives.

**CalmCanary** on Weak repugnant conclusion need not be so repugnant given fixed resources · 2013-11-17T18:23:17.901Z · LW · GW

Are you saying we should maximize the average utility of all humans, or of all sentient beings? The first one is incredibly parochial, but the second one implies that how many children we should have depends on the happiness of aliens on the other side of the universe, which is, at the very least, pretty weird.

Not having an ethical mandate to create new life might or might not be a good idea, but average utilitarianism doesn't get you there. It just changes the criteria in bizarre ways.