Comments
Jonah, I agree with what you say at least in principle, even if you would claim I don't follow it in practice. A big advantage of being Bayesian is that you retain probability mass on all the options rather than picking just one. (I recall many times being dismayed with hacky approximations like MAP that let you get rid of the less likely options. Similarly when people conflate the Solomonoff probability of a bitstring with the shortest program that outputs it, even though I guess in that case, the shortest program necessarily has at least as much probability as all the others combined.)
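For concreteness, the quantity being conflated is given by the textbook definition (nothing specific to Jonah's post):

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|} \;\;\ge\;\; 2^{-K(x)},
\]

where U is a prefix universal machine, the sum ranges over all programs that output x, and K(x) is the length of the shortest such program. Keeping only the 2^{-K(x)} term is the approximation in question; the coding theorem does say the two quantities agree up to a multiplicative constant, which is presumably why the conflation is so common.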
My main comment on your post is that it's hard to keep track of all of these things computationally. Probably you should try, but it can get messy. It's also possible that in keeping track of too many details, you introduce more errors than if you had kept the analysis simple. On many questions in physics, ecology, etc., there's a single factor that dominates all the rest. Maybe this is less true in human domains, because rational agents tend to eat up the free lunches, which smooths away any single dominating factor.
So, I'm in favor of this approach if you can do it and make it work, but don't let the best be the enemy of the good. Focus on the strong arguments first, and only if you have the bandwidth should you go on to think about the weak ones too.
I used to eat a lot of chicken and eggs before I read Peter Singer. After that, I went cold turkey (pardon the expression).
Some really creative ideas, ChristianKl. :)
Even with what you describe, humans wouldn't become extinct, barring other outcomes like really bad nuclear war or whatever.
However, since the AI wouldn't be destroyed, it could bide its time. Maybe it could ally with some people and give them tech/power in exchange for carrying out its bidding. They could help build the robots, etc. that would be needed to actually wipe out humanity.
Obviously there's a lot of conjunction here. I'm not claiming this scenario specifically is likely. But it helps to stimulate the imagination to work out an existence proof for the extinction risk from AGI.
It's not at all clear that an AGI will be human-like, any more than humans are dog-like.
Ok, bad wording on my part. I meant "more generally intelligent."
How do you fight the AGI past that point?
I was imagining people would destroy their computers, except the ones not connected to the Internet. However, if the AGI is hiding itself, it could go a long way before people realized what was going on.
Interesting scenarios. Thanks!
As we begin seeing robots/computers that are more human-like, people will take the possibility of AGIs getting out of control more seriously. These things will be major news stories worldwide, people will hold national-security summits about them, etc. I would assume the US military is already looking into this topic at least a little bit behind closed doors.
There will probably be lots of not-quite-superhuman AIs / AGIs that cause havoc along the road to the first superhuman ones. Yes, it's possible that FOOM will take us from roughly a level like where we are now to superhuman AGI in a matter of days, but this scenario seems relatively unlikely to me, so any leverage you hope to gain on it has to be multiplied by that small probability of it happening.
--
BTW, I'm curious to hear more about the mechanics of your scenario. The AGI hacks itself onto every (Internet-connected) computer in the world. Then what? Presumably this wouldn't cause extinction, just a lot of chaos and maybe years' worth of setback to the economy? Maybe it would increase chances of nuclear war, especially if the AGI could infect nuclear-warhead-related computer systems.
This could be an example of the non-extinction-level AGI disasters that I was referring to. Let me know if there are more ways in which it might cause total extinction, though.
This is a good point. :) I added an additional objection to the piece.
As an empirical matter, extinction risk isn't being funded as much as you suggest it should be if almost everyone has some incentives to invest in the issue.
There's a lot of "extinction risk" work that's not necessarily labeled as such: biosecurity, nuclear non-proliferation, general efforts to prevent international hostility by nation states, general efforts to reduce violence in society and alleviate mental illnesses, etc. We don't necessarily see huge investments in AI safety yet, but this will probably change in time, as we begin to see more AIs that get out of control and cause problems on a local scale. 99+% of catastrophic risks are not extinction risks, so as the catastrophes begin happening and affecting more people, governments will invest more in safeguards than they do now. The same can be said for nanotech.
In any event, even if budgets for extinction-risk reduction are pretty low, you also have to look at how much money can buy. Reducing risks is inherently difficult, because so much is out of our hands. It seems relatively easier to win over hearts and minds to utilitronium (especially at the margin right now, by collecting the low-hanging fruit of people who could be persuaded but aren't yet). And because so few people are pushing for utilitronium, it seems far easier to achieve a 1% increase in support for utilitronium than a 1% decrease in the likelihood of extinction.
Thanks, Luke. See also this follow-up discussion to Ord's essay.
As you suggest with your "some" qualifier, my essay that benthamite shared doesn't make any assumptions about negative utilitarianism. I merely inserted parentheticals about my own views into it to avoid giving the impression that I'm personally a positive-leaning utilitarian.
Thanks, Jabberslythe! You got it mostly correct. :)
The one thing I would add is that I personally think people don't usually take suffering seriously enough -- at least not really severe suffering like torture or being eaten alive. Indeed, many people may never have experienced something that bad. So I put high importance on preventing experiences like these relative to other things.
Interesting story. Yes, I think our intuitions about what kinds of computations we want to care about are easily bent and twisted depending on the situation at hand. In analogy with Dennett's "intentional stance," humans have a "compassionate stance" that we apply to some physical operations and don't apply to others. It's not too hard to manipulate these intuitions by thought experiments. So, yes, I do fear that other people may differ (perhaps quite a bit) in their views about what kinds of computations are suffering that we should avoid.
I bet there are a lot more people who care about animals' feelings and who care a lot more, than those who care about the aesthetics of brutality in nature.
Well, at the moment, there are hundreds of environmental-preservation organizations and basically no organizations dedicated to reducing wild-animal suffering. Environmentalism as a cause is much more mainstream than animal welfare. Just like the chickens that go into people's nuggets, animals suffering in nature "are out of sight, and the connection between [preserving pristine habitats] and animals living terrible lives elsewhere is hard to visualize."
It's encouraging that more LessWrongers are veg than average, although I think 12.4% is pretty typical for elite universities and the like as well. (But maybe that underscores your point.)
The biggest peculiarity of Brian Tomasik's utility function, that is least likely to ever be shared by the majority of humanity, is probably not that he cares about animals (even that he cares about insects) but that he cares so much more about suffering than happiness and other good things.
An example post. I care a lot about suffering, a little about happiness, and none about other things.
The exchange rate in your utility function between good things and bad things is pretty relevant to whether you should prefer CEV or paperclipping (and what the changes in the probabilities of each, based on actions you might take, would have to be in order to justify them) and whether you think lab universes would be a good thing.
Yep!
This is what you value, what you chose.
Yes. We want utilitarianism. You want CEV. It's not clear where to go from there.
Not the hamster's one.
FWIW, hamsters probably exhibit fairness sensibility too. At least rats do.
Do you think the typical person advocating ecological balance has evaluated how the tradeoffs would change given future technology?
Good point. Probably not, and for some, their views would change with new technological options. Others (environmentalist types especially) would probably retain their old views.
That said, the future-technology sword cuts both ways: Because most people aren't considering post-human tech, they're not thinking of (what some see as) the potential astronomical benefits from human survival. If 10^10 humans were only going to live at most another 1-2 billion years on Earth, their happiness could never outweigh the suffering of the 10^18 insects living on Earth at the same time. So if people aren't thinking about space colonization, why do they care so much about preserving humanity anyway? Two possible reasons are because they're speciesist and care more about humans or because they value things other than happiness and suffering. I think both are true here, and both are potentially problematic for CEV values.
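Just to make the magnitudes explicit (using only the rough figures above, nothing more precise):

\[
\frac{10^{18}\ \text{insects}}{10^{10}\ \text{humans}} \;=\; 10^{8},
\]

i.e., on these admittedly crude numbers there are about a hundred million insects alive for every human at any given time, which is why I don't think the happiness could outweigh the suffering without appealing to post-human scenarios.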
Though if people without such strong intuitions are likely to become more rational, this would not be strong evidence.
Yeah, that would be my concern. These days, "being rational" tends to select for people who have other characteristics, including being more utilitarian in inclination. Interesting idea about seeing how deep ecologists' views would change upon becoming more rational.
The suffering is bad, but there are other values to consider here, which the scenario includes in far greater quantities.
We have different intuitions about how bad suffering is. My pain:pleasure exchange rate is higher than that of most people, and this means I think the expected suffering that would result from a Singularity isn't worth the potential for lots of happiness.
Thanks, JGWeissman. There are certainly some deep ecologists, like presumably Hettinger himself, who have thought long and hard about the scale of wild-animal suffering and still support preservation of ecology as is. When I talk with ecologists or environmentalists, almost always their reply is something like, "Yes, there's a lot of suffering, but it's okay because it's natural for them." One example:
As I sit here, thinking about the landscape of fear, I watch a small bird at my bird feeder. It spends more time looking around than it does eating. I try to imagine the world from its point of view — the startles, the alarms, the rustle of wings, the paw of the cat. And although I wish it well, I wouldn’t like its predators to disappear.
You can see many more examples here. A growing number of people have been convinced that wild-animal suffering should be reduced where feasible, but I think this is still a minority view. If more people thought about it harder, probably there would be more support, but ecological preservation is also a very strong intuition for some people. It's easy not to realize this when we're in our own bubbles of utilitarian-minded rationalists. :)
Spreading life far and wide is less widespread as a value, but it's popular enough that the Panspermia Society is one of a few groups that feels this way. I also have a very smart friend who happens to share this goal, even though he acknowledges this would create a lot of suffering.
As far as insects, it's not obvious that post-humans would care enough to undertake the approximation of their brains that you mention, because maybe it would make the simulation more complicated (=> expensive) or reduce its fidelity. There's an analogy with factory farming today: Sure, we could prevent animal suffering, but it's more costly. Still, yes, we can hope that post-humans would give enough weight to insect suffering to avoid this. And I agree insects may very well not be sentient, though if they are, the numbers of suffering minds would be astronomical.
The work on nonperson predicates and computational hazards is great -- I'm glad you guys are doing that!
Thanks, Benito. Do we know that we shouldn't have a lot of chicken feed? My point in asking this is just that we're baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers -- I want to bake in my answers -- but I'm just highlighting that it's not obvious that the set of human minds is the right one to extrapolate.
BTW, I think the "brain reward pathways" between humans and chickens aren't that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.
Why not include primates, dolphins, rats, chickens, etc. into the ethics?
Future humans may not care enough about animal suffering relative to other things, or may not regard suffering as being as bad as I do. As noted in the post, there are people who want to spread biological life as much as possible throughout the galaxy. Deep ecologists may actively want to preserve wild-animal suffering (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support."). Future humans might run ancestor sims that happen to include astronomical numbers of sentient insects, most of which die (possibly painfully) shortly after birth. In general, humans have motivations to simulate minds similar to theirs, which means potentially a lot more suffering along for the ride.
Preventing suffering is what I care about, and I'm going to try to convince other people to care about it. One way to do that is to invent plausible thought experiments / intuition pumps for why it matters so much. If I do, that might help with evangelism, but it's not the (original) reason why I care about it. I care about it because of experience with suffering in my own life, feeling strong empathy when seeing it in others, and feeling that preventing suffering is overridingly important due to various other factors in my development.
My understanding is that CEA exists in order to simplify the paperwork of multiple projects. For example, Effective Animal Activism is not its own charity; instead, you donate to CEA and transfer the money to EAA. As bryjnar said, there's not really any overhead in doing this. Using CEA as an umbrella is much simpler than trying to get 501(c)(3) status for EAA on its own, which would be a painstaking process.
I appreciate personal anecdotes. Sometimes I think anecdotes are the most valuable parts of an essay. It all depends on the style and the preferences of the audience. I don't criticize HPMOR on the grounds that it focuses too much on Harry and not enough on rationality concepts...
Three friends independently pointed me to Overcoming Bias in fall/winter 2006.
Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".
Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."
I like the way you phrased your concern for "subjective experience" -- those are the types of characteristics I care about as well.
But I'm curious: What does ability to learn simple grammar have to do with subjective experience?
the lives of the cockroaches are irrelevant
I'm not so sure. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
If only for the cheap signaling value.
My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)
I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.
Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one's routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It's easy to say, "Oh, that's not the most cost-effective use of my time," but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. ("If saving worms is good, then working toward technology to help all kinds of suffering wild animals is even better. So let me do that instead.")
The above point applies primarily to those who find themselves devoting less effort to charitable projects than they could. For people who already come close to burning themselves out by their dedication to efficient causes, taking on additional burdens to reduce just a bit more suffering is probably not a good idea.
Sure. Then what I meant was that I'm an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don't think utilitarianism is "true" (I don't know what that could possibly mean), but I want to see it carried out.
Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one's own emotions, rather than arbitrary external events.
Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, "Bambi Lovers versus Tree Huggers: A Critique of Rolston's Environmental Ethics": "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support."
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.
Yes, that's the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that's precisely why we're having this conversation, as well as why SIAI's research is so important. :)
but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.
I hope so. Of course, it's not as though the only two possibilities are "CEV" or "extinction." There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political "realist" scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%.
If you include paperclippers or suffering-maximizers in your definition of "anyone," then I'd put the probability close to 0%. If "anyone" just includes humans, I'd still put it less than, say, 10^-3.
Just so long as they don't force any other minds to experience pain.
Yeah, although if we take the perspective that individuals are different people over time (a "person" is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to "forcing someone" to feel pain....
Bostrom's estimate in "Astronomical Waste" is "10^38 human lives [...] lost every century that colonization of our local supercluster is delayed," given various assumptions. Of course, there's reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.
Still, I'm concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans might actually increase the spread of wild-animal suffering through directed panspermia or lab-universe creation or various other means. The point of spreading the meme that wild-animal suffering matters and that "pristine wilderness" is not sacred would largely be to ensure that our post-human descendants place high ethical weight on the suffering that they might create by doing such things. (By comparison, environmental preservationists and physicists today never give a second thought to how many painful experiences are or would be caused by their actions.)
As far as CEV, the set of minds whose volitions are extrapolated clearly does make a difference. The space of ethical positions includes those who care deeply about sorting pebbles into correct heaps, as well as minds whose overriding ethical goal is to create as much suffering as possible. It's not enough to "be smarter" and "more the people we wished we were"; the fundamental beliefs that you start with also matter. Some claim that all human volitions will converge (unlike, say, the volitions of humans and the volitions of suffering-maximizers); I'm curious to see an argument for this.
PeerInfinity, I'm rather struck by a number of similarities between us:
- I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
- I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won't materialize.
- As you might guess from my user name, I'm also a Utilitronium-supporting hedonistic utilitarian who is somewhat alarmed by Eliezer's change of values but who feels that SIAI's values are sufficiently similar to mine that it would be unwise to attempt an alternative friendly-AI organization.
- I share the seriousness with which you regard Pascal's wager, although in my case, I was pushed toward religion from atheism rather than the other way around, and I resisted Christian thinking the whole time I tried to subscribe to it. I think we largely agree in our current opinions on the subject. I do sometimes have dreams about going to the Christian hell, though.
I'm not sure if you share my focus on animal suffering (since animals outnumber current humans by orders of magnitude) or my concerns about the implications of CEV for wild-animal suffering. Because of these concerns, I think a serious alternative to SIAI in cost-effectiveness is to donate toward promoting good memes like concern about wild animals (possibly including insects) so that, should positive Singularity occur, our descendants will do the right sorts of things according to our values.
the largest impact you can make would be to simply become a vegetarian yourself.
You can also make a big impact by donating to animal-welfare causes like Vegan Outreach. In fact, if you think the numbers in this piece are within an order of magnitude of correct, then you could prevent the 3 or 4 life-years of animal suffering that your meat-eating would cause this year by donating at most $15 to Vegan Outreach. For many people, it's probably a lot easier to offset their personal contribution to animal suffering by donating than by going vegetarian.
Of course, the idea of "offsetting your personal contribution" is a very non-utilitarian one, because if it's good to donate at all, then you should have been doing that already and should almost certainly do so at an amount higher than $15. But from the perspective of behavior hacks that motivate people in the real world, this may not be a bad strategy.
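Spelling out the arithmetic behind the $15 figure (taking the linked estimates at face value and using the midpoint of the 3-4 range):

\[
\frac{\$15}{3.5\ \text{life-years of suffering}} \;\approx\; \$4 \text{ to } \$5 \text{ per life-year averted.}
\]

If those cost-effectiveness estimates are off by an order of magnitude in either direction, the offset figure scales up or down accordingly.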
By the way, Vegan Outreach -- despite the organization's name -- is a big advocate of the "flexitarian" approach. One of their booklets is called, "Even if You Like Meat."
Actually, you're right -- thanks for the correction! Indeed, in general, I want altruistic equal consideration of the pleasure and pain of all sentient organisms, but this need have little connection with what I like.
As it so happens, I do often feel pleasure in taking utilitarian actions, but from a utilitarian perspective, whether that's the case is basically trivial. A miserable hard-core utilitarian would be much better for the suffering masses than a more happy only-sometimes-utilitarian (like myself).
I am the kind of donor who is much more motivated to give by seeing what specific projects are on offer. The reason boils down to the fact that I have slightly different values (namely, hedonistic utilitarianism focused on suffering) than the average of the SIAI decision-makers and so want to impose those values as much as I can.
Great post! I completely agree with the criticism of revealed preferences in economics.
As a hedonistic utilitarian, I can't quite understand why we would favor anything other than the "liking" response. Converting the universe to utilitronium producing real pleasure is my preferred outcome. (And fortunately, there's enough of a connection between my "wanting" and "liking" systems that I want this to happen!)
Agreed. And I think it's important to consider just how small 1% really is. I doubt the fuzzies associated with using the credit card would actually be as small as 1% of the fuzzies associated with a 100% donation -- fuzzies just don't have high enough resolution. So I would fear, a la scope insensitivity, people getting more fuzzies from the credit card than are actually deserved from the donation. If that's necessary in order for the fuzzies to exceed a threshold for carrying out the donation, so be it; but usually the problem is in the other direction: People get too many fuzzies from doing too little and so end up not doing enough.
What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or ones complement vs. twos complement? What physical details of the computations matter? Does it regard carbon differently from silicon?
I like all of the responses to the value-of-nature arguments you give in your second paragraph. However, as a hedonistic utilitarian, I would disagree with your claim that nature has value apart from its value to organisms with experiences. And I think we have an obligation to change nature in order to avert the massive amounts of wild-animal suffering that it contains, even if doing so would render it "unnatural" in some ways.
The 12-billion-utils example is similar to one I mention on this page under "What about Isolated Actions?" I agree that our decision here is ultimately arbitrary and up to us. But I also agree with the comments by others that this choice can be built into the standard expected-utility framework by changing the utilities. That is, unless your complaint is, as Nick suggests, with the independence axiom's constraint on rational preference orderings in and of itself (for instance, if you agreed -- as I don't -- that the popular choices in the Allais paradox should count as "rational").
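For readers who haven't seen the Allais paradox spelled out, the textbook version (these specific payoffs are the standard illustration, not something from the 12-billion-utils example) is:

\[
\begin{aligned}
\text{1A: } & \$1\text{M with certainty} & \qquad \text{1B: } & \$1\text{M w.p. } 0.89,\ \$5\text{M w.p. } 0.10,\ \$0 \text{ w.p. } 0.01 \\
\text{2A: } & \$1\text{M w.p. } 0.11,\ \$0 \text{ w.p. } 0.89 & \qquad \text{2B: } & \$5\text{M w.p. } 0.10,\ \$0 \text{ w.p. } 0.90
\end{aligned}
\]

Each pair differs only in a common 0.89 chance of a fixed outcome ($1M in the first pair, $0 in the second), so the independence axiom requires choosing 1A if and only if you choose 2A; the popular pattern of 1A together with 2B is the one I said I don't count as rational.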
Indeed. Gaverick Matheny and Kai M. A. Chan have formalized that point in an excellent paper, "The Illogic of the Larder."
For example if you claim to prefer non-existence of animals to them being used as food, then you clearly must support destruction of all nature reserves, as that's exactly the same choice. And if you're against animal suffering, you'd be totally happy to eat cows genetically modified not to have pain receptors. And so on. All positions never taken by any vegetarians.
I think most animal-welfare researchers would agree that animals on the nature reserve suffer less than those in factory farms, where conditions run contrary to the animals' evolved instincts. As far as consistent vegetarians, I know at least 5-10 people (including myself) who are very concerned about the suffering of animals in the wild and who would strongly support genetically modified cows without pain receptors. (Indeed, one of my acquaintances has actually toyed with the idea of promoting the use of anencephalic farm animals.) Still, I sympathize with your frustration about the dearth of consequentialist thinking among animal advocates.
I like Peter Singer's "drowning child" argument in "Famine, Affluence, and Morality" as a way to illustrate the imperative to donate and, by implication, the value of money. As he says, "we ought to give the money [spent on fancy clothes] away, and it is wrong not to do so."
I do think there's a danger, though, in focusing on the wrongness of frivolous spending, which is relatively easy to criticize. It's harder to make people think about the wrongness of failing to make money that they could have donated. Opportunity costs are always harder to feel viscerally.
I intuitively sympathize with the complaints of status-quo bias, though it's of course also true that more changes from evolution's current local optimum entail more risk.
Here is another interesting reference on one form of congenital lack of pain.
it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".
That's an interesting intuition, but one that I don't share. I concur with Steven and Vladimir. The whole point of the classical-utilitarian "Each to count for one and none for more than one" principle is that the identity of the collection of atoms experiencing an emotion is irrelevant. What matters is increasing the number of configurations of atoms in states producing conscious happiness and reducing those producing conscious suffering -- hence regular total utilitarianism. (Of course, figuring out what it means to "increase" and "reduce" things that occur infinitely many times is another matter.)
As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. [...] I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.
It seems a strong claim to suggest that the limits you impose on yourself due to epistemological deficiency line up exactly with the mores and laws imposed by society. Are there some conventional ends-don't-justify-means notions that you would violate, or non-socially-taboo situations in which you would restrain yourself?
Also, what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?
The future--what will happen--is necessarily "fixed". To say that it isn't implies that what will happen may not happen, which is logically impossible.
Pablo, I think the debate is over whether there is such a thing as "what will happen"; maybe that question doesn't yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.
So, can someone please explain just exactly what "free will" is such that the question of whether I have it or not has meaning?
As I see it, the real issue is whether it's possible to "have an impact on the way the world turns out." For example, imagine that God is deciding whether or not to punish you in hell. "Free will" is the hope that "there's still a chance for me to affect God's decision" before it happens. If, say, he's already written down the answer on a piece of paper, there's nothing to be done to change your fate.
What I said above shouldn't be taken too literally--I was trying to convey an intuition for a concept that can't really be described well in words. 'Having your fate written down on a piece of paper' is somewhat misleading if interpreted to imply that 'since the answer has been decided, I can now do anything and my fate won't change.' In the scenario where we lack free will, the physical actions taking place right now in our heads and the world around us are the writing down of the answer on the paper, because those are precisely what produce the results that happen.
"Free will" is the idea that there's some sort of "us" whose choices could make it the case that the question of "What will happen?" doesn't yet have an answer (even in a Platonic realm of 'truth') and that this choice is somehow nonarbitrary. I actually have no idea how this could work, or what this even really means, but I maintain some probability that I'm simply not smart enough to understand it.
I do know that if the future is determined, then whether I believe the right answer about free will (or, perhaps, whether I accede to an incoherent concept people call "free will") is fixed, in the sense of being 'already written down' in some realm of Platonic knowledge. But if not, might there be something I can do (where the 'I' refers to something whose actions aren't yet decided even in a Platonic realm) to improve the truth / coherence of my beliefs?
Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).
Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won't be. There's too much evidence in the world, and too many strong claims about these matters, for me to imagine that posteriors would come out even. Besides, even if two religions are equally probable, there may certainly be non-epistemic reasons to prefer one over the other.
However, if after chugging through the math, it didn't balance out and still the expected disutility from the existence of the disutility threat was greater, then perhaps allowing oneself to be vulnerable to such threats is genuinely the correct outcome, however counterintuitive and absurd it would seem to us.
I agree. If we really trust the AI doing the computations and don't have reason to think that it's biased, and if the AI has considered all of the points that have been raised about the future consequences of showing oneself vulnerable to Pascalian muggings, then I feel we should go along with the AI's conclusion. 3^^^^3 people is too many to get wrong, and if the probabilities come out asymmetric, so be it.
Maybe the origin of the paradox is that we are extending the principle of maximizing expected return beyond its domain of applicability.
In addition to a frequency argument, one can in some cases make a different argument for maximizing expected value even in one-time-only scenarios. For instance, if you knew you would become a randomly selected person in the universe, and if your only goal was to avoid being murdered, then minimizing the expected number of people murdered would also minimize the probability that you personally would be murdered. Unfortunately, arguments like this assume that your utility function on outcomes takes only one of two values ("good," i.e., not murdered, and "bad," i.e., murdered); they don't capture the fact that being murdered in one way may be twice as bad as being murdered in another way.
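A minimal formalization of that argument, just restating the assumptions in the paragraph above:

\[
\Pr(\text{you are murdered}) \;=\; \frac{1}{N}\sum_{i=1}^{N}\Pr(\text{person } i \text{ is murdered}) \;=\; \frac{\mathbb{E}[\text{number of people murdered}]}{N},
\]

assuming you are equally likely to be any of the N people in the universe. With N fixed, minimizing the expected number of murders is therefore identical to minimizing your own chance of being murdered; the identity breaks down as soon as outcomes have more than two levels of badness, which is the limitation noted above.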