Posts

What's Wrong with Evidential Decision Theory? 2012-08-23T00:09:14.404Z

Comments

Comment by aaronde on Looking to restart Madison LW meetups, in need of regulars · 2015-05-13T18:24:56.332Z · LW · GW

I would definitely make it every other week, if it's weekly.

Comment by aaronde on Politics Discussion Thread February 2013 · 2013-02-10T21:49:49.233Z · LW · GW

I agree that something unusual is going on. Humans, unlike any other species I'm aware of, are voluntarily restricting our own population growth. But I don't know why you say that there's "no reason" to believe that this strange behavior might benefit us. Surely you can think of at least one reason? After all, all those other species that don't voluntarily limit their own reproduction eventually see their populations crash, or level off in the face of fierce competition over resources, when they meet or exceed their environment's carrying capacity. The laws of physics as we currently understand them dictate that exponential growth cannot continue forever.

I'm not saying that there are no foreseeable downsides to population leveling off. And I'm not saying that there's no risk of unforeseeable consequences of the social changes underlying this demographic shift. But I am saying that (amid all the pros and cons) there is one obvious, important reason why human population leveling off might be a good thing. The downsides are neither so obvious nor so potentially dramatic. To illustrate this, let's look at Last's (awful) WSJ article quoted in the Marginal Revolution post.

Last does his best to paint declining fertility as a nightmare scenario. But the data he presents simply don't support his tone. For example:

Low-fertility societies don't innovate because their incentives for consumption tilt overwhelmingly toward health care. They don't invest aggressively because, with the average age skewing higher, capital shifts to preserving and extending life and then begins drawing down.

In other words, low-fertility societies do have an incentive to innovate - in medicine and life extension. And not just for the benefit of the old - they also have an incentive to keep the young healthy and productive as long as possible, to maintain their shrinking workforce (which may go some ways toward explaining Japan's excellent school nutrition program, and low, declining childhood obesity rates). They also have an incentive to develop automation to replace aging workers, which I know is a major reason that Japan is a leader in robotics. Let's take a closer look at Japan:

From 1950 to 1973, Japan's total-factor productivity—a good measure of economic dynamism—increased by an average of 5.4% per year. From 1990 to 2006, it increased by just 0.63% per year. Since 1991, Japan's rate of GDP growth has exceeded 2.5% in only four years; its annual rate of growth has averaged 1.03%.

Wait, did he just admit that Japan's economy is still growing? Yep, both GDP and GDP/capita have continued to grow, albeit more slowly, since the 1990s. Let that sink in a moment. The Japanese are, on average, working less than they used to. They're older and more likely to be retired. And yet they still get to enjoy having more stuff. (Largely thanks to innovations in automation driven, in part, by aging demographics.) And thanks to medical innovations, driven in part by aging demographics, they will continue enjoying that stuff longer than any generation before. So where's the grim cautionary tale? Last has none, just this:

At the current fertility rate, by 2100 Japan's population will be less than half what it is now.

Which would still be more than it was in 1900. So, where's the problem? Why is it preferable to keep taxing the earth's resources with more and more people, with no foreseeable prospect of space colonization? On overpopulation, Last says,

First, global population growth is slowing to a halt and will begin to shrink within 60 years.

This is just unforgivably bad logic: 'Overpopulation isn't a problem, because population is leveling off, because fertility is declining. Therefore we must act immediately to put a stop to declining fertility!' If we ever do face a shrinking population, I'd rather deal with it by increasing healthy lifespans than by increasing birthrates.

Comment by aaronde on Infinitesimals: Another argument against actual infinite sets · 2013-01-26T08:53:29.129Z · LW · GW

I downvoted common_law's post, because of some clear-cut math errors, which I pointed out. I'm downvoting your comment because it's not saying anything constructive.

There's nothing wrong with what common_law was trying to do here, which is to show that infinite sets shouldn't be part of our ontology. Experience can't be the sole arbiter of which model of reality is best; there is also parsimony. Whether infinite quantities are actually real, is no less worthy of discussion than whether MWI is actually real, or merely a computational convenience. I only agree with you that the math lacked rigor. This is discussion, so I don't see a problem with posting things that need to be corrected, but I had to downvote the post because it might have confused someone who didn't notice the errors.

Comment by aaronde on Infinitesimals: Another argument against actual infinite sets · 2013-01-26T06:06:16.794Z · LW · GW

I agree with premise (1), that there is no reason to think of infinitesimal quantities as actually part of the universe. I don't agree with premise (2), that actual infinities imply actual infinitesimals. If you could convince me of (2), I would probably reject (1) rather than accept (3), since an argument for (2) would be a good argument against (1), given that our universe does seem to have actual infinities.

the points on a line are of infinitesimal dimension ... yet compose lines finite in extent.

No. Points have zero dimension. "Infinitesimal" is something else. There are no infinitesimal numbers on the real line (or in the complex plane, for that matter), and no subinterval of the real line has infinitesimal length, so we would have to extend the number system if we wanted to think of infinitesimals as numbers.

When I raise the same argument about an infinite set, you can't reply that you can always make the set bigger; if I say add an element, you reply that the sets are still the same size (cardinality).

But there is a way to use an infinite set to construct a larger infinite set: the power set. I don't understand the rest of this paragraph.

Consider again the points on a 3-inch line segment. If there are infinitely many, then each must be infinitesimal.

Again, single points have zero length, not infinitesimal length. Note, though, that there are ways to partition a finite line segment into infinitely many finite line segments, including the partition that Zeno proposed: 1/2 + 1/4 + 1/8 + ... In integration, we (conceptually) break up the domain into infinitely many infinitesimally wide intervals, but this is just an intuition. None of the formal definitions of integrals I've seen actually say anything about an infinitesimally wide interval.

The series comes infinitesimally close to the limit, and in this context, we treat the infinitesimal as if it were zero.

Actually, we don't have to treat an infinitesimal as zero, we just have to treat zero as zero. If I move along a meter stick at one meter per second, then according to Zeno's construction, I traverse half the distance in 1/2 second, 3/4 of the distance in 3/4 of a second, and so on. As you say, after one second, I have traversed every point on the meter stick except the very last point, because the union of the closed intervals [0,1/2], [1/2,3/4], [3/4,7/8], ... is the half-open interval [0,1). So how much longer does it take me to traverse that last point? Zero seconds, because a single point has zero length. There is no contradiction, and no need to use infinitesimals.
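
A quick numeric sketch of that traversal (assuming a speed of one meter per second, as above):

    # Time elapsed after traversing the first n Zeno intervals [0,1/2], [1/2,3/4], ...
    def time_after(n):
        return sum(2.0 ** -(k + 1) for k in range(n))

    for n in (1, 10, 50):
        print(n, time_after(n))  # 0.5, ~0.999, ~1.0

    # The intervals cover [0, 1); the single remaining point has length zero,
    # so crossing it takes zero additional seconds - no infinitesimals required.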

Comment by aaronde on [LINK] Cholesterol and mortality · 2013-01-15T23:39:53.282Z · LW · GW

Fascinating. But note that these are still very old people with declining cholesterol as they age. The study is more relevant to physicians deciding whether to prescribe statins to their elderly patients, and less relevant to young people deciding whether to keep cholesterol low throughout life with diet.

I'd need to read the whole study, but what I see so far doesn't even contradict the hypothesis I outlined. The abstract says that people who had low cholesterol at the last two examinations did worse than people who had low cholesterol at only the last examination. But most of these old people had declining cholesterol. So maybe this just means that the earlier your cholesterol starts to decline from aging, the sooner you die.

Anyway, I put more stock in the cross-cultural epidemiology and intervention trials, than in these observational studies trying to parse small differences within relatively homogeneous, free-living populations. We know that the longest-lived, healthiest populations in the world ate a low saturated-fat diet that induces low cholesterol. And we know that Dean Ornish was able to reverse heart disease with a lifestyle intervention including a cholesterol-lowering diet. Show me a population as healthy as the Okinawans with high cholesterol, or an intervention as effective as Ornish's without lowering cholesterol, and I'll reconsider. Otherwise, I do consider the issue settled from a pragmatic perspective, even if some of the academic questions remain to be answered. That is, it may be possible to have a healthy lifestyle that raises cholesterol, but we don't have any proven examples of such a lifestyle to emulate, do we? Mike Darwin gave a good explanation of this idea in "Interventive Gerontology".

Comment by aaronde on [LINK] Cholesterol and mortality · 2013-01-15T21:56:02.383Z · LW · GW

http://www.youtube.com/watch?v=xiNvQ-g1XGs&list=PLDBBB98ACA18EF67C&index=19

This (admittedly biased) youtuber has a pretty thorough criticism of the study. The bottom line is that cholesterol tends to drop off before death (6:26 in the video), not just because cholesterol-lowering medications are administered to those at highest risk of heart attack (as Kawoomba points out), but also because of other diseases. When you correct for this, or follow people throughout their lives, this reverse causation effect disappears, and you find exactly the association you would expect: higher cholesterol associates with higher cardiovascular and total mortality (10:21).

I think that studies like this one are like studies showing that overweight is "protective" against mortality - when the obvious alternative explanation is that smoking, cancer, and other diseases can prevent weight gain, or cause weight loss, just before they kill you! Obviously, this would mask or even reverse the association between overweight (high cholesterol) and death, even if overweight (high cholesterol) causes death.

Comment by aaronde on So you think you understand Quantum Mechanics · 2012-12-23T16:43:39.325Z · LW · GW

Agreed. The multiverse idea is older than, and independent of quantum theory. Actually, a single infinitely large classical universe will do, since statistically, every possibility should play out. Nietzsche even had a version of immortality based on an infinitely old universe. Though it's not clear whether he ever meant it literally, he very well could have, because it was consistent with the scientific understanding of the time.

That said, I like the idea of shminux's post. I try to steer clear of quantum language myself, and think others should too, if all they mean by "quantum" is "random".

Comment by aaronde on How to Avoid the Conflict Between Feminism and Evolutionary Psychology? · 2012-12-05T07:50:22.146Z · LW · GW

All the possible reasons for the conflict you listed suggest that the solution is to help feminists understand evolutionary psychology better, so they won't have a knee-jerk defensive reaction against it. This could come off as a little condescending, but more importantly, it misses the other side of the issue. In order to leave itself less open to criticism, evolutionary psychology could be more rigorous, just as other "soft" sciences like medicine and nutrition could be more rigorous. This would make it harder for critics to find things to object to, increasing trust in the field over time, and would probably be a good thing in itself anyway.

So I would add to your list: 8) Concerns about lack of rigor in the field of evolutionary psychology.

Comment by aaronde on [LINK] AmA by computational neuroscientists behind 'the world's largest functional brain model' · 2012-12-04T00:13:01.072Z · LW · GW

Actually I'm not sure if any of that is a problem. Spaun is quite literally "anthropomorphic" - modeled after a human brain. So it's not much of a stretch to say that it learns and understands the way a human does. I was just pointing out that the more progress we make on human-like AIs, without progress on brain scanning, the less likely a Hansonian singularity (dominated by ems of former humans) becomes. If Spaun as it is now really does work "just like a human", then building a human-level AI is just a matter of speeding it up. So by the time we have computers capable of supporting a human mind upload, we'll already have computer programs at least as smart as humans, which learn their knowledge on their own, with no need for a knowledge transplant from a human.

Comment by aaronde on [LINK] AmA by computational neuroscientists behind 'the world's largest functional brain model' · 2012-12-03T21:29:09.261Z · LW · GW

I think we need to separate the concept of whole brain emulation, from that of biology-inspired human-like AI. This actually looks pretty bad for Robin Hanson's singularity hypothesis, where the first emulations to perfectly emulate existing humans suddenly make the cost of labor drop dramatically. If this research pans out, then we could have a "soft takeoff", where AI slowly catches up to us, and slowly overtakes us.

CNRG_UWaterloo, regarding mind uploads:

Being able to simulate a particular person's brain is incredibly far away. There aren't any particularly good ideas as to how we might be able to reasonably read out that sort of information from a person's brain. That said, there are also lots of uses that a repressive state would have for any intelligent system (think of automatically scanning all surveillance camera footage). But, you don't want a realistic model of the brain to do that -- it'd get bored exactly as fast as people do.

So we should expect machine labor to gradually replace human labor, exactly as it has since the beginning of the industrial revolution, as more and more capabilities are added, with "whole brain emulation" being one of the last features needed to make machines with all the capabilities of humans (if this step is even necessary). It's possible, of course, that we could wind up in a situation where the "last piece of the puzzle" turns out to be hugely important, but I don't see any particular reason to think that will happen.

Comment by aaronde on [SEQ RERUN] I Heart CYC · 2012-12-03T19:38:30.552Z · LW · GW

From the article:

AIs learn slowly now mainly because they know so little.

This seems implausible, because humans learn almost everything we know over our lifetimes, from a starting instruction set smaller than existing pieces of software. If architecture really is overrated (meaning that existing architectures are already good enough), isn't it more likely that AIs learn so slowly now simply because the computers aren't powerful enough yet?

Comment by aaronde on A solvable Newcomb-like problem - part 1 of 3 · 2012-12-03T19:11:38.865Z · LW · GW

If Omega gets it right more than 99% of the time, then why would Alpha take 10-to-1 odds against Omega messing up?

Comment by aaronde on Open Thread, December 1-15, 2012 · 2012-12-02T05:51:03.242Z · LW · GW

Yeah, that was my impression. One of the things that's interesting about the article is that many of the technologies Taleb disparages already exist. He lists space colonies and flying motorcycles right alongside mundane tennis shoes and video chat. So it's hard to tell when he's criticizing futurists for expecting certain new technologies, and when he's criticizing them for wanting those new technologies. When he says that he's going to take a cab driven by an immigrant, is he saying that robot cars won't arrive any time soon? Or that it wouldn't make a difference if they did? Or that it would be bad if they did? I think his point is a bit muddled.

One thing he gets right is that cool new technologies need not be revolutionary. Don't get me wrong; I take the possibility of truly transformative tech seriously, but futurists do overestimate technology for a simple reason. When imagining what life will be like with a given gadget, you focus on those parts of your life when you could use the gadget, and thus overestimate the positive effect of the gadget (This is also why people's kitchens get cluttered over time). For myself, I think that robot cars will be commonplace in ten years, and that will be friggin' awesome. But it won't transform our lives - it will be an incremental change. The flip side is that Taleb may underestimate the cumulative effect of many incremental changes.

Comment by aaronde on Open Thread, December 1-15, 2012 · 2012-12-02T02:56:09.395Z · LW · GW

I think the real difference between people like Taleb and the techno-optimists is that we think the present is cool. He brags about going to dinner in minimalist shoes, and eating food cooked over a fire, whereas I think it's awesome that I can heat things up instantly in a microwave oven, and do just about anything in meticulously engineered and perfectly fitted, yet cheaply mass-produced, running shoes without worrying about damaging my feet. I also like keyboards, and access to the accumulated knowledge of humanity from anywhere, and contact lenses. And I thought it was funny when he said that condoms were one of the most important new technologies, but aren't talked about much, as if to imply that condoms aren't cool. I think that condoms are cool! I remember when I first got condoms, and took one out to play with. After testing it a couple different ways, I thought: "How does anyone manage to break one of these!?" It's easy to extrapolate that no "cool" technology will exist in the future, if you don't acknowledge that any cool technology currently exists.

But I think Taleb's piece is valuable, because it illustrates what we are up against, as people trying to get others to take seriously the risks, and opportunities, presented by future technologies. Taleb seems very serious and respectable, precisely because he is so curmudgeonly and conservative, whereas we seem excitable and silly. And he's right that singularitarian types tend to overemphasize changes relative to everything that remains the same, and often conflate their predictions of the future with their desires for the future. I think that lesswrong is better than most in this regard, with spokespeople for SI taking care to point out that their singularity hypothesis does not predict accelerating change, and that the consequences of disruptive technology need not be good. Still, I wonder if there's any way to present a more respectable face to the public, without pretending that we don't believe what we do.

Comment by aaronde on Mathematical Measures of Optimization Power · 2012-11-27T07:16:53.047Z · LW · GW

Oops, looks like I was wrong about what you meant (ignore the edit). But yes, if you give a stupid thing lots of power you should expect bad outcomes. A car directed with zero intelligence is not a car sitting still, but precisely what you said was dangerous: a car having its controls blindly fiddled with. But if you just run a stupid program on a computer, it will never acquire power in the first place. Most decisions are neutral, unless they just happen to be plugged into something that has already been optimized to have large physical effects (like a bulldozer). Of those decisions that do have large effects, most will be destructive, but that's exactly what we should expect from a stupid optimization process acting on something that has already been finely honed by a smart optimization process.

what does "could" mean?

Good question. I think it has something to do with simply defining some set of actions to be your "options", and temporarily putting all your options on an equal footing, so that you end up with the one with the best consequences, rather than the one you'd have predicted you were most likely to choose. I don't think it even has much to do with probabilities, because then you run into self-fulfilling prophecies - doing what you predicted you'd do, thereby justifying the prediction.

In this case, we want to measure how good an agent did, relative to how it could have done. That is, how good were the consequences of the option it chose, relative to its other options. I don't see any reason to weight those options according to a probability distribution, unless you know what "half an option" means. And choosing a distribution poses huge problems. After all, we know the agent chose one of the options with probability 1.0, and all the others with probability 0.0.

Forest fires are definitely OPs under my intuitive concept. They consistently select a subset of possible futures (burnt forests).

Well, you could just compare the rate of oxidation under a flame, to the average rate of oxidation of all surfaces (including those that happen to be on fire) within whichever reference class you prefer. (I think choosing a reference class (set of options) is just part of how you define the OP. And you just define the OP whichever way helps you understand the world best.)

Thanks for all your comments!

Is this actually helpful? I try to read up on the background for this stuff, but I never know if I'm just rehashing what's already been discussed, and if so, whether reviewing that here would be useful to anyone.

Comment by aaronde on Mathematical Measures of Optimization Power · 2012-11-27T04:15:58.704Z · LW · GW

1) You'd need a way to even specify the set of "output" of any possible OP. This seems hard to me because many OPs do not have clear boundaries or enumerable output channels, like forest fires or natural selection or car factories.

How do you define an optimization process without defining its output? If you want to think of natural selection as a force that organizes matter into self-replicators, then compare the reproductive ability of an organism to the reproductive ability of a random clump of matter, to find out how much natural selection has acted on it. If you want to think of it as a force that produces genomes, then compare an evolved genome to a random strand of DNA (up to some maximum length).

I can't think of a way of fitting a forest fire into this model either, which suggests it isn't useful to think of forest fires under this paradigm. But isn't that a good sign? If anything could be usefully modeled as an optimizer, wouldn't that hint that the concept is overly broad?

2) This is equal to a flat prior over your OPs outputs. You need some kind of specification for what possibilities are equally likely, and a justification thereof.

Why? Isn't the crux of the decision-making process pretending that you could choose any of your options, even though, as a matter of fact, you will choose one? I can see how you would run into some fuzziness if you tried to apply it to natural selection or even brains. But for the mathematical model, where the process selects from some abstract set of options, equal weighting seems appropriate. And this maps fairly straightforwardly onto an AI acting over a physical wire.

3) Even if we consider an AGI with well-defined output channels, it seems to me that random outputs are potentially very very very destructive, and therefore not the "default" or "status quo" against which we should measure.

(EDIT: D'oh! I just realized what you meant by random outputs being "destructive". You mean that if an AGI were to take its options to be "configurations of matter in the universe", then its baseline would be a randomly shuffled universe that was almost completely destroyed from our perspective. But I don't think this makes sense. Just because an AGI is smart enough to reorganize all matter in the universe doesn't mean that it makes sense for it to output decisions in that form. That would basically be a type error, just like if I were to decide "be in New York" instead of "drive to New York". The options the AGI has to choose from are outputs of a subroutine running inside of itself. So if it has a robot body, then the "default" or unoptimized output is random flailing about, or if it interacts through a text terminal, it would be printing random gibberish, most of which does nothing and leaves the configuration of the universe largely unchanged (and a few of which convince the programmer to give it access to the internet so it can take over the world.)).

Are you saying that an "AI" outputting random noise could do worse than an "AI" with optimization power measured at zero (i.e. zero intelligence)? Seems to me that, to reliably do worse than random, you would have to be trying to do badly. And you would have to be doing so with a strictly positive level of skill.

(Note: for a model of natural selection that might actually be usable in practice, suppose that we know a set X of mutations has occurred in a population over a given time, and that a subset X* of these has become fixed in the population (the rest have been weeded out). To calculate how "optimized" X* is, compare the reproductive fitness of the actual population to the average fitness of hypothetical populations which, instead of X*, had retained some random subset of the mutations from X (that is, selected with uniform probability from the power set of X). The measure of "reproductive fitness" could be as simple as population size.)
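
A minimal sketch of that calculation, assuming a hypothetical fitness function that maps a retained set of mutations to a population-level fitness, and assuming X is small enough that its power set can be enumerated:

    import itertools

    def selection_optimization(X, X_star, fitness):
        # Fitness of the actual population (which retained X_star), minus the
        # average fitness over every subset of X, weighted uniformly.
        subsets = [set(s) for r in range(len(X) + 1)
                   for s in itertools.combinations(X, r)]
        baseline = sum(fitness(s) for s in subsets) / len(subsets)
        return fitness(set(X_star)) - baseline

For a large X, you would sample subsets uniformly instead of enumerating all 2^|X| of them.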

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-25T23:44:51.746Z · LW · GW

Good questions. I don't know the answers. But like you say, UDT especially is basically defined circularly - where the agent's decision is a function of itself. Making this coherent is still an unsolved problem. So I was wondering if we could get around some of the paradoxes by giving up on certainty.

Comment by aaronde on Mathematical Measures of Optimization Power · 2012-11-25T06:05:31.180Z · LW · GW

Caveat: if someone is paralyzed because of damage to their brain, rather than to their peripheral nerves or muscles, then this is not true,

That's why I specified that you don't get penalized for disabilities that have nothing to do with the signals leaving your brain.

which creates an undesirable dependency of the measured optimization power on the location of the cause of the disability.

I disagree. I think that's kind of the point of defining "optimization power" as distinct from "power". A man in a prison cell isn't less intelligent just because he has less freedom.

No, that clearly makes no sense if EU[av] <= 0. If you want to divide by something to normalize the measured optimization power (so that multiplying the utility function by a constant doesn't change the optimization power), the standard deviation of the expected utilities of the counterfactual probability distributions over world states associated with each of the agent's options would be a better choice.

Great idea! I was really sloppy about that, realized at the last minute that taking a ratio was clearly wrong, and just wanted to make sure that you couldn't get different answers by scaling the utility function. I guess |EU[av]| does that, but now we can get different answers by shifting the utility function, which shouldn't matter either. Standard deviation is infinitely better.
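
A quick numeric check of that suggestion (toy numbers only), showing that dividing by the standard deviation of the counterfactual expected utilities makes the measure invariant under both scaling and shifting of the utility function:

    import statistics

    def normalized_op(counterfactual_eus, chosen_eu):
        # (EU of the chosen option - mean counterfactual EU) / standard deviation
        mean = statistics.mean(counterfactual_eus)
        sd = statistics.pstdev(counterfactual_eus)
        return (chosen_eu - mean) / sd

    eus = [1.0, 2.0, 5.0, 10.0]
    print(normalized_op(eus, 10.0))
    # Rescale and shift the utility function: the measure is unchanged.
    print(normalized_op([3 * u + 7 for u in eus], 3 * 10.0 + 7))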

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-25T05:12:58.529Z · LW · GW

What I am saying is that I don't assume that I maximize expected utility. I take the five-and-ten problem as a proof that an agent cannot be certain that it will make the optimal choice, while it is choosing, because this leads to a contradiction. But this doesn't mean that I can't use the evidence that a choice would represent, while choosing. In this case, I can tell that U($10) > U($5) directly, so conditioning on A=$10 or A=$5 is redundant. The point is that it doesn't cause the algorithm to blow up, as long as I don't think my probability of maximizing utility is 0 or 1.

It's true that A=$5 could be stronger evidence for U($5)>U($10) than A=$10 is for U($10)>U($5). But there's no particular reason to think it would be. And as long as P(U($10)>U($5)) is large enough a priori, it will swamp out the difference. As long as making a choice is evidence for that being the optimal choice, only insofar as I am confident that I make the optimal choice in general, it will provide equally strong evidence for every choice, and cancel itself out. But in cases where a particular choice is evidence of good things for other reasons (like Newcomb's problem), taking this evidence into consideration can affect my decision.

So why can't I just use the knowledge that I'll go through this line of reasoning to prove that I will choose $10 and yield a contradiction? Because I can't prove that I'll go through this line of reasoning. Simulating my decision process as part of my decision would result in infinite recursion. Now, there may be a shortcut I could use to prove what my choice will be, but the very fact that this would yield a contradiction means that no such proof exists in a consistent formal system.

(BTW, I agree that CDT is the only decision theory that works in practice, as is. I'm only addressing one issue with the various timeless decision theories)

Comment by aaronde on Mathematical Measures of Optimization Power · 2012-11-25T01:48:59.602Z · LW · GW

What I can't figure out is how to specify possible worldstates “in the absence of an OP”.

Can we just replace the optimizer's output with random noise? For example, if we have an AI running in a black box, that only acts on the rest of the universe through a 1-gigabit network connection, then we can assign a uniform probability distribution over every signal that could be transmitted over the connection over a given time (all 2^(10^9) possibilities per second), and the probability distribution of futures that yields is our distribution over worlds that "could have been". We could do the same thing with a human brain and, say, all combinations of action potentials that could be sent down the spinal cord over a given time. This is desirable, because it separates optimization power from physical power. So paralyzed people aren't less intelligent just because "raise arm" isn't an option for them (That is, no combination of action potentials in their head will cause their arm to move).

More formally, an agent is a function or program that has a range or datatype. The range/datatype is the set of what we would call the agent's options. So assume we can generate counterfactual outcomes for each option in the range, the same way your favorite decision theory does. Then we can take optimization power to be the difference between EU given what the agent actually does, and the average EU over all the counterfactuals.*
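
A minimal sketch of that measure, assuming we already have a way to compute the expected utility of the counterfactual attached to each option (the names here are placeholders, not anything defined above):

    def optimization_power(options, chosen, expected_utility):
        # EU of what the agent actually does, minus the average EU
        # over every option in its range, weighted uniformly.
        baseline = sum(expected_utility(o) for o in options) / len(options)
        return expected_utility(chosen) - baseline

For the black-box AI above, the options would be the possible signals on the network connection, sampled rather than enumerated.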

If the OP is some kind of black-box AI agent, it's easier to imagine this. But if the OP is evolution, or a forest fire, it's harder to imagine.

I'm not so sure. Choosing to talk about natural selection as an agent means defining an agent which values self-replication and outputs a replicator. So if you have a way of measuring how good a genome is at replicating, you could just subtract from that how good a random sequence of base-pairs is, on average, at replicating, to get a measure of how much natural selection has optimized that genome. Of course, you could do the same thing with an entire animal versus a random clump of matter, because the range of the agent is just part of the definition.

EDIT: * AlexMennen had a much better idea for normalizing this than I did ;)

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-24T03:26:12.585Z · LW · GW

I get that. What I'm really wondering is how this extends to probabilistic reasoning. I can think of an obvious analog. If the algorithm assigns zero probability that it will choose $5, then when it explores the counterfactual hypothesis "I choose $5", it gets nonsense when it tries to condition on the hypothesis. That is, for all U,

  • P(utility=U | action=$5) = P(utility=U and action=$5) / P(action=$5) = 0/0

is undefined. But is there an analog for this problem under uncertainty, or was my sketch correct about how that would work out?

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-21T19:17:12.571Z · LW · GW

That's exactly the impression that I got. That it was awkward phrasing, because you just didn't know how to phrase it - but that it wasn't a coincidence that you defaulted to that particular awkward phrasing. It seems that, on some level, you were surprised to see people outside lesswrong discussing "lesswrong ideas." Even though, intellectually, you know that most of the good ideas on lesswrong didn't originate here. Don't be too hard on yourself. I probably have the opposite problem, where, as a meta-contrarian, I can't do anything but criticize lesswrong.

If you want to avoid sounding like a cheerleader, I think the best rule of thumb is to just not name-drop. It's great if you get a lot of ideas from Eliezer and lesswrong, but then communicate those ideas in a way that makes it difficult to trace them back to lesswrong. This should come naturally, because you shouldn't believe everything you hear on lesswrong anyway. Confirm what you hear with an independent source, and then you can refer to that source instead of lesswrong, just like you would with information you learned on wikipedia.

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-21T00:05:39.805Z · LW · GW

I don't understand how Uspensky's definition is different from Eliezer's. Is there some minimum number of people a proof has to convince? Does it have to convince everyone? If I'm the only person in the world, is writing a proof impossible, or trivial? It seems that both definitions are saying that a proof will be considered valid by those people who find it absolutely convincing. And those people who do not find it absolutely convincing will not consider it valid. More importantly, it seems that this is all those two definitions are saying, which is why neither of them is very helpful if we want something more concrete than the colloquial sense of proof.

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-20T23:24:40.330Z · LW · GW

I liked the fact that the author didn't use cognitive bias as an excuse to give up on talking about politics altogether (which seems to be LWian consensus), but instead made demonstrable claims about politics.

EDIT: in response to the previous version of Michaelos' post, I said:

It makes me uncomfortable when LWers say things like:

"Politics is the Mindkiller" appears to be acknowledged as early as the second sentence.

It smacks of, "Oh, look at the unenlightened people finally catching on." Lesswrong didn't invent cognitive science, and "politics is the mindkiller" is just our term for a well-established result of cognitive science. The article is about motivated reasoning, and the author isn't "acknowledging" it, but explaining it.

Comment by aaronde on Open Thread, November 16–30, 2012 · 2012-11-20T21:00:22.864Z · LW · GW

Can anyone point me toward work that's been done on the five-and-ten problem? Or does someone want to discuss it here? Specifically, I don't understand why it is a problem for probabilistic algorithms. I would reason:

There is a high probability that I prefer $10 to $5. Therefore I will decide to choose $5, with low probability.

And there's nowhere to go from there. If I try to use the fact that I chose $5 to prove that $5 was the better choice all along (because I'm rational), I get something like:

The probability that I prefer $5 to $10 is low. But I have very high confidence in my rationality, meaning that I assign high probability, a priori, to any choice I make being the choice I prefer. Therefore, given that I choose $5, the probability that I prefer $5 is high. So $5 doesn't seem like a bad choice, since I'll probably end up with what I prefer.

But things still turn out right, because:

However, the probability that I prefer $10, given that I choose $10, is even higher, because the probability that I prefer $10 was high to begin with. Therefore, $10 is a better choice than $5, because the probability that (I prefer $10 to $5 given that I choose $10) is higher than the probability that (I prefer $5 to $10 given that I choose $5).

So unless I'm missing something, the five-and-ten problem is just a problem of overconfidence.
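
A toy numeric version of that argument (the specific probabilities are illustrative assumptions, not anything derived above):

    # Prior: I probably prefer $10, and I'm probably "rational", where rational
    # here just means choosing whichever option I actually prefer.
    p_prefer_10 = 0.99
    p_rational = 0.95

    p_choose_5 = (1 - p_prefer_10) * p_rational + p_prefer_10 * (1 - p_rational)
    p_choose_10 = p_prefer_10 * p_rational + (1 - p_prefer_10) * (1 - p_rational)

    p_prefer_5_given_5 = (1 - p_prefer_10) * p_rational / p_choose_5
    p_prefer_10_given_10 = p_prefer_10 * p_rational / p_choose_10

    print(p_prefer_5_given_5)    # ~0.16: choosing $5 wouldn't look terrible...
    print(p_prefer_10_given_10)  # ~0.9995: ...but choosing $10 still looks better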

Comment by aaronde on Instrumental rationality for overcoming disability and lifestyle failure (a specific case) · 2012-11-14T23:53:15.893Z · LW · GW

Reading Less Wrong is consumptive, not productive. You need to have something to show for your work, ex. a novel draft, a fitter body, a cleaner house.

Isn't easy/hard a more useful distinction than consumptive/productive? After all, reading the news is productive in the sense of having something to show for it, because you will seem more informed in conversation. And working out can be a form of consumption, if you buy a gym membership.

Personally, I've always loved working out. So I don't have much to gain by trying to motivate myself to work out even more, because I'm obviously already very fit. And "forcing" myself to work out isn't going to test my self-discipline either. If I'm going to put in 40 hours of scheduled "work" next week, then at least some of it should be spent on things I find hard, and therefore don't do often enough.

Similarly, if reading geeky blog articles is what you do for fun, CAE_Jones, (which seems probable since you're here) it's unlikely that reading even more geeky blog articles will improve your life. That said, you might want to start off scheduling things you would expect yourself to do anyway, for the same reason that you might want to start off scheduling less than 40 hours a week, and slowly work your way up. Just to ease into it.

Comment by aaronde on Struck with a belief in Alien presence · 2012-11-11T20:26:03.453Z · LW · GW

little gray men emerging from airborne thingies is HUGE in itself.

Um, no. A short guy in a grey suit stepping off a helicopter is a little grey man emerging from an airborne thingy.

Or did you go through all previous sightings and came to that conclusion in every one case?

No. I don't see the point in digging through all the reports, when the reports I have heard about have been so underwhelming. I was skipping around, watching bits and pieces of the video you linked, until Manfred pointed this out:

The geiger counter reading is reported as "10 times background," which sounds impressive if you've never held a geiger counter, but really just means a nearby rock had some potassium in it, or a dozen other possibilities.

So they basically lied. I actually haven't ever held a geiger counter, so I had no way of knowing this. If asked to explain it, I would have had to admit that something weird was going on that I couldn't explain. Except there's a perfectly mundane explanation, and the only reason I was confused is because I was misled about the significance of the reading in the first place. After that I didn't see the value in watching the rest of the documentary.

So I have a better idea. You tell me what you think is the single most convincing incident, and I will tell you,

  • How convincing I find the report on its own, and
  • How convincing it would be, assuming that there were thousands of similar, equally reliable reports.

Comment by aaronde on Struck with a belief in Alien presence · 2012-11-11T15:38:23.010Z · LW · GW

Even if you could rule out man-made and weather-related causes for some UFOs, that wouldn't imply that they were caused by an extra-terrestrial civilization either. Some UFOs may still be unexplained, but all that means is that we don't know enough about them to say what they are.

That said, I don't think you can rule out weather and human craft. Others have already explained why I find the "primary" evidence unconvincing.

This is very speculative to me. I don't think we can use it as evidence for or against.

Let me put it this way. My guess of what an interstellar civilization would look like makes predictions about what it would be like to encounter that civilization. Those predictions are not satisfied. This is strong evidence that no extra-terrestrial civilization (as I understand the term) has made it anywhere near us.

One of the reasons you were downvoted is that you asked us to evaluate evidence for "Aliens". But that is impossible until you explain what you mean by "Aliens". Obviously, there is something about these UFO sightings that makes you think they are more likely to be caused by aliens than by weather. Which implies that you think you know something about aliens that makes them a better explanation.

So what is it that you think you know about these "Aliens"?

Comment by aaronde on Struck with a belief in Alien presence · 2012-11-11T05:34:07.307Z · LW · GW

Yes, I was wrong. I was explaining why I got so focused on the blank-slate version of the prior.

Comment by aaronde on Struck with a belief in Alien presence · 2012-11-11T03:45:12.718Z · LW · GW

Right. What I want to do is calculate the probability that a random conscious entity would find itself living in a world where someone satisfying the definition of Julius Caesar had existed. And then calculate the conditional probability given the evidence, which is everything I've ever observed about the world including the newly discovered account.

Obviously that's not what you do in real life, but the point remains that everything after the original prior (based on Kolmogorov complexity or something) is just conditioning. If we're going to talk about how and why we should formulate priors, rather than what Bayes' rule says, this is what we're interested in.

Comment by aaronde on Struck with a belief in Alien presence · 2012-11-11T03:08:58.511Z · LW · GW

I think you may be confused by an oversimplification of Occam's Razor: "Extraordinary claims require extraordinary evidence." That's not actually how you derive a prior - the very word "extraordinary" implies that you already have experience about what is ordinary and what isn't. If we really throw out all evidence that could tell us how likely aliens are, we end up with a probability which (by the usual method of generating priors), depends on the information-theoretic complexity of the statement "There are aliens on earth." Which in turn depends on how precisely you define the word "aliens". Aliens that fly around in saucers are more likely than aliens that fly around in saucers and want to probe our butts. And aliens that fly around in saucers and probe our butts are more likely than aliens that fly around in saucers and probe our butts and are abducting our politicians one by one to replace them with reptilian impersonators. Every extra caveat makes a statement less likely. Every extra belief you take on is one more way that you could be wrong. This is why you need to justify all your beliefs.

I don't think that generic aliens should be considered especially improbable a priori - before the evidence is considered. I think that they are unlikely a posteriori - based on the fact that we don't see them. I think that any intelligent space-faring life would be busy building spheres around stars (if not outright disassembling the stars) as quickly as they spread out into the cosmos. So we'd notice them by the wake of solar systems going dark. At the very least, there's no reason to think that they would hide from us, which is what these scenarios tend to require (though I haven't watched the documentary).

I'm not sure what you're trying to say about the black swan. What a bayesian would do is assign a prior probability distribution over possible colorations of swans (say 1/2 white, 1/2 black), then calculate, based on the fact that ey has seen, say, a million white swans in a row, what the probability is that the next swan ey sees will be black. Needless to say, ey will be very surprised if the next swan actually is black. But this is a good thing, because, for the same reason, ey was very unsurprised by the previous swan, which was white, as well as swans number 999,999 and 999,998 and 999,997 and so on.
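
One way to make that calculation concrete (a minimal sketch, assuming a Beta-Bernoulli model with a uniform prior over the unknown fraction of black swans, which is only one of several ways to read the setup):

    # Beta(1, 1) prior over the fraction of black swans, updated on a million
    # white swans in a row (Laplace's rule of succession).
    alpha_black, beta_white = 1, 1
    beta_white += 1_000_000

    p_next_black = alpha_black / (alpha_black + beta_white)
    print(p_next_black)  # ~1e-6: a black swan would be very surprising, but never impossible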

Anyway, I found this amusing.

Comment by aaronde on Struck with a belief in Alien presence · 2012-11-11T01:54:28.544Z · LW · GW

Wait, what? Bayesians never assign 0 probability to anything, because it means the probability will always remain 0 regardless of future updates. And "prior probability", by definition, means that we throw out all previous evidence.

Comment by aaronde on FAI, FIA, and singularity politics · 2012-11-08T18:34:49.161Z · LW · GW

I endorse this idea, but have a minor nitpick:

In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that.

This certainly gets proposed a lot. But isn't it lesswrongian consensus that this is backwards? That the only way to build a FAI is to build an AI that will extrapolate and adopt the humane utility function on its own? (since human values are too complicated for mere humans to state explicitly).

Comment by aaronde on Things philosophers have debated · 2012-10-31T16:39:33.962Z · LW · GW

How does trivialism differ from assuming the existence of a Tegmark IV universe?

Tegmark IV is the space of all computable mathematical structures. You can make true and false statements about this space, and there is nothing about it that implies a contradiction. You may think that any coherent empirical claim is true in Tegmark IV, in that anything we say about the world is true of some world. But being true in some world does not make it true in this world. If I say that the sky is green, I am implicitly referring to the sky that I experience, which is blue. That is, I am saying that the sky which is blue is green. So I'm contradicting myself, and the statement is false. You don't even need to think of alternate universes to reason through this. After all, some planet in our galaxy surely has a green sky.

A spectral argument given in defense of trivialism in the dissertation runs like this...

It all looks shaky, but most obviously, just because every classical proposition may be interpreted in natural language doesn't mean that every natural language proposition may be interpreted in classical logic. In particular, the aspects of natural language that make it inconsistent probably can't be translated into classical logic. After all, that's why we invented classical logic in the first place.

Did these points come up in the dissertation?

Comment by aaronde on Naive TDT, Bayes nets, and counterfactual mugging · 2012-10-23T19:28:51.623Z · LW · GW

Isn't temporal inconsistency just selfishness? That is, before you know whether the coin came up heads or tails, you care about both possible futures. But after you find out that you're in the tails' universe you stop caring about the heads' universe, because you're selfish. UDT acts differently, because it is selfless, in that it keeps the same importance weights over all conceivable worlds.

It makes perfect sense to me that a rational agent would want to restrict the choices of its future self. I want to make sure that future-me doesn't run off and do his own thing, screwing current-me over.

Comment by aaronde on Firewalling the Optimal from the Rational · 2012-10-19T19:54:14.628Z · LW · GW

This question is for anyone who says they saw a benefit from supplementation, not just Kevin.

What was your diet like at the time? Were you taking a daily multivitamin?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-25T01:18:54.254Z · LW · GW

I also think that I am conscious, but you keep telling me I have the wrong definitions of words like this, so I don't know if we agree. I would say being conscious means that some part of my brain is collating data about my mental states, such that I could report accurately on my mental states in a coherent manner.

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-24T23:01:55.046Z · LW · GW

How do I know whether I am having a conscious subjective experience of a sensation or emotion?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-23T17:17:13.202Z · LW · GW

Okay, I've tabooed my words. Now it's your turn. What do you mean by "feeling"?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-23T14:51:23.840Z · LW · GW

You're right, we're starting to go around in circles. So we should wrap this up. I'll just address what seems to be the main point.

I find it obvious that there is a huge, important aspect of what it is to be in pain that [your definition] completely misses.

This is the crux of our disagreement, and is unlikely to change. But you still seem to misunderstand me slightly, so maybe we can still make progress.

You have decided that pain is a certain kind of behaviour displayed by entities other than yourself and seen from the outside, and you have coded that up.

No, I have decided that pain is any stimulus - that is, a feeling - that causes a certain kind of behavior. This is not splitting hairs. It is relevant, because you keep telling me that my view doesn't account for feelings, when it is all about feelings! What you really mean is that my view doesn't account for qualia, which really just means I'm being consistent, because I don't believe in qualia.

you can't prefer to personally have certain experiences if there is no such thing as subjective experience.

Here for example, you seem to be equivocating between "experience" and "subjective experience". If "subjective experience" means the same thing as "experience", then I don't think there is no such thing as subjective experience. But if "subjective experience" means something different, like "qualia", then this statement doesn't follow at all.

P.S. This may be off-point, but I just have to say, this:

I inspect the code, and find nothing that relates in any way to how I introspect pain or any other feeling.

...is because the code has no capacity for introspection - not because it has no capacity for pain.

Edit: maybe this last point presents room for common ground, like: "Qualia is awareness of one's own feelings, and therefore is possessed by anything that can accurately report on how it is responding to stimuli."?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-21T20:18:11.722Z · LW · GW

I thought you were denying "pains hurt"

Not at all. I'm denying that there is anything left over to know about pain (or hurting) after you understand what pain does. As my psych prof. pointed out, you often see weird circular definitions of pain in common usage, like "pain is an unpleasant sensation". Whereas psychologists use functional definitions, like "a stimulus is painful, iff animals try to avoid it". I believe that the latter definition of pain is valid (if simplistic), and that the former is not.

If you think you can make the Hard Problem easy by tabooing "qualia", lets see you try.

I did that here, on another branch of this conversation. Again, this is simplistic, probably missing a few details, maybe slightly wrong. But I find it implausible that there is a huge, important aspect of what it is to be in pain that this completely misses.

Do you send disadvantaged kids to Disneyland, or just send them the brochure?

Depends on the kid. I would have preferred a good book to Disneyland (I don't like crowds or roller coasters). Again, it's about preferences, not qualia. And what someone prefers is simply what they would choose, given the option. (And if we want to get into CEV, it's what they would choose, given the option, and unlimited time to think about it, etc...)

Even if you don't personally care about experiencing things for yourself...

Woah, did I say that? Just because I don't value feelings in themselves doesn't mean that I can't care about anything that involves feelings. There's no meta-ethical reason, for example, why I can't prefer to have a perpetual orgasm for the rest of my life. I just don't. On the other hand, I am a big fan of novelty. And if novel things are going to happen, then something has to do them. That thing may as well be me. And to do something is to experience it. There is no distinction. So I certainly want to experience novel things.

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-21T17:19:32.094Z · LW · GW

My pains hurt. My food tastes. Voices and music sound like something.

Um, those are all tautologies, so I'm not sure how to respond. If we define "qualia" as "what it feels like to have a feeling", then, well - that's just a feeling, right? And "qualia" is just a redundant and pretentious word, whose only intelligible purpose is to make a mystery of something that is relatively well understood (e.g: the "hard problem of consciousness"). No?

Erm, sorry for the snark, but seriously: has talk of qualia, as distinct from mere perceptions, ever achieved any useful or even interesting results? Consciousness will continue to be a mystery to people as long as they refuse to accept any answers - as long as they say: "Okay, you've explained everything worth knowing about how I, as an information processing system, perceive and respond to my environment. And you've explained everything worth knowing about how I perceive my own perceptions of my environment, and perceive those perceptions, and so on ad infinitum - but you still haven't explained why it feels like something to have those perceptions."

Do you go drink the wine or just read the label? Do you go on holiday or just read the brochure?

Ha! That's actually not far off. But it's because I'm a total nerd who tries to eat healthy and avoid unnecessary expenses - not because of how I feel about qualia. I think that happiness should be a consequence of good things happening, not that happiness is a good thing in itself. So I try to avoid doing things (like drugs) that would decouple my feelings from outcomes in the real world. In fact, if I just did whatever I felt like at any given time, I would end up even less outgoing - less adventurous.

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-20T21:11:39.708Z · LW · GW

I don't want to experience pain even in ways that promote my goals

Don't you mean that avoiding pain is one of your goals?

It would have been helpful to say why you reject it.

It just seems like the default position. Can you give me a reason to take the idea of qualia seriously in the first place?

would you maintain that personally experiencing pain for the first time would teach you nothing?

Yes.

Comment by aaronde on Any existential risk angles to the US presidential election? · 2012-09-20T14:56:10.789Z · LW · GW

Only in their rhetoric, which is at most weakly correlated with their actual policy decisions.

Yes, but in this case, the rhetoric matters. I believe this was Stuart's point. If we want to raise the "sanity waterline", then, all else being equal, saner political dialog is a good thing. Right?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-19T19:59:13.235Z · LW · GW

I don't think a program has to be very sophisticated to feel pain. But it does have to exhibit some kind of learning. For example:

    import random

    def wanderer(locations, utility, X):
        current_time = 0
        while True:
            # Pick two locations at random and move to the one with higher utility.
            l1, l2 = random.sample(locations, 2)
            if utility[l1] < utility[l2]:
                my_location = l2
            else:
                my_location = l1

            # If the stimulus X fires here, lower this location's utility,
            # so the wanderer tends to avoid it in the future.
            if X(my_location, current_time):
                utility[my_location] = utility[my_location] - 1

            current_time = current_time + 1

This program aimlessly wanders over a space of locations, but eventually tends to avoid locations where X has returned True at past times. It seems obvious to me that X is pain, and that this program experiences pain. You might say that the program experiences less pain than we do, because the pain response is so simple. Or you might argue that it experiences pain more intensely, because all it does is implement the pain response. Either position seems valid, but again it's all academic to me, because I don't believe pain or pleasure are good or bad things in themselves.
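
For concreteness, a hypothetical way to call it (the locations, utilities, and pain condition are made up for illustration; like the loop above, it never terminates):

    locations = list(range(5))
    utility = {l: 0 for l in locations}
    # "Pain" fires whenever the wanderer is at location 0.
    wanderer(locations, utility, lambda loc, t: loc == 0)  # runs forever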

To answer your question, a thermostat that is blocked from changing the temperature is frustrated, not necessarily in pain. Although, changing the setting on a working thermostat may be pain, because it is a stimulus that causes a change in the persistent behavior of a system, directing it to extricate itself from its current situation.


Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-19T07:59:39.168Z · LW · GW

Probably some of them do (I don't play video games). But they aren't even close to being people, so I don't really care.

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-19T07:57:47.404Z · LW · GW

I think that split-brain study shows the opposite of what you think it shows. If you observed yourself to be writhing around in agony, then you would conclude that you were experiencing the qualia of pain. Try to imagine what this would actually be like, and think carefully about what "trying to avoid similar circumstances in the future" actually means. You can't sit still, can't think about anything else. You plead with anyone around to help you - put a stop to whatever is causing this - insisting that they should sympathize with you. The more intense the pain gets, the more desperate you become. If not, then you aren't actually in pain (as I define it) because you aren't trying very hard to avoid the stimulus. I'd sympathize with you. Are you saying you wouldn't sympathize with yourself?

BTW, how do you think I'd respond, if subjected to pain and asked about my "qualia"? By this reasoning, is my pain irrelevant?

In practice it seems that the only reason that it frustrates a person's goals to receive pain is because they have a goal, "I don't want to be in pain."

I think you have the causation backwards. Pain causes a person to acquire the goal of avoiding whatever the source of the pain is, even if they didn't have that goal before. (Think about someone confidently volunteering to be water-boarded to prove a point, only to immediately change his mind when the torture starts.) That's how I just defined pain above. That's all pain is, as far as I know. Of course, in animals, the pain response happens to be associated with a bunch of biological quirks, but we could recognize pain without those minutiae.

If the sophisticated intelligence HAS qualia but doesn't have as a goal avoidance of pain, that suggests your ethical system would be OK to subject it to endless punishment (a sentiment with which I may agree).

Well, you just described an intelligence that doesn't feel pain. So it doesn't make sense to ask whether it would be OK to inflict pain on it. Could you clarify what it would mean to punish something that has no desire to avoid the punishment?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-18T22:11:07.214Z · LW · GW

If an actor stays in character his entire life, making friends and holding down a job, in character - and if, whenever he seemed to zone out, you could interrupt him at any time to ask what he was thinking about, and he could give a detailed description of the day dream he was having, in character...

Well then I'd say the character is a lot less fictional than the actor. But even if there is an actor - an entirely different person putting on a show - the character is still a real person. This is no different from saying that a person is still a person, even if they're a brain emulation running on a computer. In this case, the actor is the substrate on which the character is running.

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-18T22:10:30.979Z · LW · GW

As far as I know, to feel is to detect, or perceive, and pain is positive punishment, in the jargon of operant conditioning. So to say "I feel pain" is to say that I detect a stimulus, and process the information in such a way that (all else equal) I will try to avoid similar circumstances in the future. Not being a psychologist, I don't know much more about pain. But (not being a psychologist) I don't need to know more about pain. And I reject the notion that we can, through introspection, know something more about what it "is like" to be in pain.

I believe it's unethical to inflict pain on people (or animals, unnecessarily), because to hold something in a state of pain is to frustrate its goals. I don't think that it is any qualia associated with pain that makes it bad. Indeed, this seems to lead to morally repugnant conclusions. If we could construct a sophisticated intelligence that can learn by operant conditioning, but somehow remove the qualia, does it become OK to subject it to endless punishment?

Comment by aaronde on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-17T07:25:14.116Z · LW · GW

When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about. So, sure: I don't feel pain in that sense. That's not going to stop me from complaining about having my hand chopped off!