I think that the point is that emergence is in the mind of the observer. If the observer is describing the situation at the particle level, then superconductivity is not there regardless of the size of the collection of particles considered. But, when you describe things at the flowing-electric-fluid level, then superconductivity may emerge.
Right - but there are surely also ultimate values.
Those are the ones that are expected to be resistant to change.
Correct. My current claim is that almost all of our moral values are instrumental, and thus subject to change as society evolves. And I find the source of our moral values in an egoism which is made more effective by reciprocity and social convention.
My position here is roughly that all 'moral' values are instrumental in this sense. They are ways of coordinating so that people don't step on each other's toes.
Not sure I completely believe that, but it is the theory I am trying on at the moment. :)
I think you are right to call attention to the issue of drift.
Drift is bad in a simple value - at least in agents that consider temporal consistency to be a component of rationality. But drift can be acceptable in those 'values' which are valued precisely because they are conventions.
It is not necessarily bad for a teen-age subculture if their aesthetic values (on makeup, piercing, and hair) drift. As long as they don't drift so fast that nobody knows what to aim for.
I think the argument is interesting and partly valid. Explaining which part I like will take some setup.
Many of our problems thinking about morality, I think, arise from a failure to make a distinction between two different things.
- Morality in daily life
- Morality as an ideal
Morality of daily life is a social convention. It serves its societal and personal (egoistically prudent) function precisely because it is a (mostly) shared convention. Almost any reasonable moral code, if common knowledge, is better than no common code.
Morality as an ideal is the morality-of-daily-life toward which moral reformers should be trying to slowly shift their societies. A wise person will interpolate their behavior between the local morality-of-daily-life and their own morality-as-an-ideal. And probably closer to the local norm than to the personal ideal.
So, with that said, I think that your Christian friend's argument is right-on wrt morality-of-daily-life. But it is inapplicable, IMHO, to morality-as-an-ideal.
ETA: I notice, after writing, that Manfred said something very similar.
I think you misinterpreted the context. I endorsed kin selection, together with discounting the welfare of non-kin. Someone (not me!) wishing to be a straight utilitarian and wishing to treat kin and non-kin equally needs to endorse group selection in order to give their ethical intuitions a basis in evolutionary psychology. Because it is clear that humans engage in kin recognition.
I have tried to suggest that bacterial purposes are 'merely' teleonomic - to borrow the useful term suggested by timtyler - but that human purposes must be of a different order. ...
As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues gets introduced. I will shy away from these.
My position is that, to the extent that the notion of purpose is at all spooky, that spookiness was already present in a virus. The profound part of teleology is already there in teleonomy.
Which is not to say that humans are different from viruses only in degree. They are different in quality with regard to some other issues involved in rationality. Cognitive issues. Symbol processing issues. Issues of intentionality. But not issues of pure purpose and telos. So why don't you and I just shy away from this conversation. We've both stated our positions with sufficient clarity, I think.
...seemed to me to be a kind of claim that a utilitarian could make with equal credibility.
Well, he could credibly make that claim if he could credibly assert that the ancestral environment was remarkably favorable for group selection.
... you're now saying that you feel noble and proud that your values come from biological instead of cultural evolution...
What I actually said was "my own (genetic) instincts derive a kind of nobility from their origin ...". The value itself claims a noble genealogy, not a noble essence. If I am proud on its behalf, it is because that instinct has been helping to keep my ancestral line alive for many generations. I could say something similar for a meme which became common by way of selection at the individual or societal level. But what do I say about a selfish meme? That I am not the only person that it fooled and exploited? I'm going to guess that most people do have that kind of feeling.
... except possibly for the part about no prior metaphysical meaning.
I think I see the source of the difficulty now. My fault. BobTheBob mentioned the mistake of replicating with errors. I took this to be just one example of a possible mistake by a virus, and thought of several more - inserting into the wrong species of host, for example, or perhaps incorporating an instance of the wrong peptide into the viral shell after replicating the viral genome.
I then sought to define 'mistake' to capture the common fitness-lowering feature of all these possible mistakes. However, I did not make clear what I was doing and my readers naturally thought I was still dealing with a replication error as the only kind of mistake.
Sorry to have caused this confusion.
...teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this?
No, I cannot. It presumes (or is it argues?) that human rationality is not part of nature.
My apologies for using the phrase "illusion of teleology in nature". It seems to have created confusion. Tabooing that use of the word "teleology", what I really meant was the illusion that living things were fashioned by some rational agent for some purpose of that agent. Tabooing your use of the word, on the other hand, in your phrase "the kind of teleology needed to make sense of rationality" leads elsewhere. I would taboo and translate that use to yield something like "To make sense of rationality in an agent, one needs to accept/assume/stipulate that the agent sometimes acts with a purpose in mind. We need to understand 'purpose', in that sense, to understand rationality."
Now if this is what you mean, then I agree with you. But I think I understand this kind of purpose, identifying it as the cognitive version of something like "being instrumental to survival and reproduction". That is, it is possible for an outside observer to point to behaviors or features of a virus that are instrumental to viral survival and reproduction. At the level of a bacterium, there are second-messenger chemicals that symbolize or represent situations that are instrumental to survival and reproduction. At the level of the nematode, there are neuron firings serving as symbols. At the level of a human the symbols can be vocalizations: "I'm horny; how about you?". I don't see anything transcendently new at any stage in this progression, nor in the developmental progression that I offered as a substitute.
In case I'm giving the wrong impression, I don't mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I'm not giving a theory of the nature of norms - that's just too hard. All I'm saying for the moment is that if you stick to purely natural science, you won't find a place for them.
Let me try putting that in different words: "Norms are in the eye of the beholder. Natural science tries to be objective - to avoid observer effects. But that is not possible when studying rationality. It requires a different, non-reductionist and observer dependent way of looking at the subject matter." If that is what you are saying, I may come close to agreeing with you. But somehow, I don't think that is what you are saying.
A utilitarian might well be indifferent to the self-serving nature of the meme. But, as I recall, you brought up the question in response to my suggestion that my own (genetic) instincts derive a kind of nobility from their origin in the biological process of natural selection for organism fitness. Would our hypothetical utilitarian be proud of the origin of his meme in the cultural process of selection for meme self-promotion?
Hmmm. I may be using "metaphysical" inappropriately here. I confess that I am currently reading something that uses "metaphysical" as a general term of deprecation, so some of that may have worn off. :)
Let me try to answer your excellent question by analogy to geometry, without abandoning "metaphysical". As is well known, in geometry, many technical terms are given definitions, but it is impossible to define every technical term. Some terms (point, line, and plane are examples) are left undefined, though their meanings are supplied implicitly by way of axioms. Undefined terms in mathematics correspond (in this analogy) to words with prior metaphysical meaning in philosophical discourse. You can't define them, because their meaning is somehow "built in".
To give a rather trivial example, when trying to generate a naturalistic definition of ought, we usually assume we have a prior metaphysical meaning for is.
Hope that helped.
With apologies to Ludwig Wittgenstein, if we can't talk about the singularity, maybe we should just remain silent. :)
I happen to agree with you that the SIAI mission will never be popular. But a part of the purpose of this website is to create more people willing and capable to work (directly or indirectly) on that mission. So, not mentioning FAI would be a bit counterproductive - at least at this stage.
Is this fair?
Not really. You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable. Then, when I suggested the standard evolutionary explanation for the illusion of teleology in nature, you shifted the playing field. In option 1, you demand that I supply standard scientific expositions of the natural history of your chosen biological examples. In option 2 you suggest that you were just kidding in even mentioning viruses, bacteria and nematodes. Unless an organism has the cognitive equipment to make mistakes in probability theory, you simply are not interested in speaking about it normatively.
Do I understand that you are claiming that humans are qualitatively exceptional in the animal kingdom because the word "ought" is uniquely applicable to humans? If so, let me suggest a parallel sequence to the one you suggested starting from viruses. Zygote, blastula, fetus, infant, toddler, teenager, adult. Do you believe it is possible to tell a teenager what she "ought" to do? At what stage in development do normative judgements become applicable?
Here is a cite for sorites. Couldn't resist the pun.
As you may have noticed, that definition was labeled as a "first attempt". It captures some of our intuitions about morality, but not all. In particular, its biggest weakness is that it fails to satisfy moral realists for precisely the reason you point out.
I have a second quill in my quiver. But before using it, I'm going to split the concept of morality into two pieces. One piece is called "de facto morality". I claim that the definition I provided in the grandparent is a proper reductionist definition of de facto morality and captures many of (some) people's intuitions about morality. The second piece is called "ideal morality". This piece is essentially what de facto morality ought to be.
So, your conformist may well be automatically in the right with respect to de facto morality. But it is possible for a moral reformer to point out that he and all of his fellows are in the wrong with respect to ideal morality. That is, the reformer claims that the society would be better off if its de facto conventions were amended from their present unsatisfactory status to become more like the ideal. And, I claim, given the right definition of "society would be better off", this "ideal morality" can be given an objective and naturalistic definition.
For more details, see Binmore - Game Theory and the Social Contract
Outstanding comment - particularly the point at the end about the candle cooling the room.
It might be worthwhile to produce a sequence of postings on the control systems perspective - particularly if you could use better-looking block diagrams as illustrations. :)
Consider then a virus particle ... Surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).
No. The distinction between those viral behaviors that tend to contribute to the virus replicating and those viral behaviors that do not contribute does issue from science. It is not a metaphor to call actions that detract from reproduction "mistakes" on the part of the virus, any more than it is a metaphor to call certain kinds of chemical reactions "exothermic". There is no 'open question' issue here - "mistake", like "exothermic", does not have any prior metaphysical meaning. We are free to define it as we wish, naturalistically.
So much for the practical ought, the version of ought for which ought not is called a mistake because it generates consequences contrary to the agent's interests. What about the moral ought, the version of ought for which ought not is called wrong? Can we also define this kind of ought naturalistically? I think that we can, because once again I deny that "wrong" has any prior metaphysical meaning. The trick is to make the new (by definition) meaning not clash too harshly with the existing metaphysical connotations.
How is this for a first attempt at a naturalistic definition of the moral ought as a subset of the practical ought? An agent morally ought not to do something iff it tends to generate consequences contrary to the agent's interests, those negative consequences arising from the reactions of disapproval coming from other agents.
In general, it is not difficult at all to define either kind of ought naturalistically, so long as one is not already metaphysically committed to the notion that the word 'ought' has a prior metaphysical meaning.
This article lists the top Google+ users by # of followers. Worth a chuckle.
ETA: in general, bare links are usually not appreciated here. Still, here are two more links to interesting articles in the tech blogosphere discussing Google+.
You may have missed a subtlety in my comment. In your grandparent, you said "people's thoughts and words are a byproduct ...". In my comment, I suggested "Thoughts are at one with ...". I didn't mention words.
If we are going to focus on words rather than thoughts, then I am more willing to accept your model. Spoken words are indeed behaviors - behaviors that purport to be accurate reports of thoughts, but probably are not.
Perhaps we should taboo "thought", since we may not be intending the word to designate the same phenomenon.
I take this to be an elliptical way of suggesting that Yvain is offering a false dichotomy in suggesting a choice between the notion of thoughts being in control of the processes determining behavior and the notion of thoughts being a byproduct of those processes.
I agree. Thoughts are at one with (are a subset of) the processes that determine behavior.
Your just-so story is more complicated than you seem to think. It involves an equilibrium of at least two memes: an evangelical utilitarianism which damages the host but propagates the meme, plus a cryptic egoism which presumably benefits the host but can't successfully propagate (it repeatedly arises by spontaneous generation, presumably).
I could critique your story on grounds of plausibility (which strategy do crypto-egoists suggest to their own children?) but instead I will ask why someone infected by the evangelical utilitarianism meme would argue as you suggested in the great-grandparent:
"Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes."
Isn't it more likely that someone realizing that they have been subverted by a selfish meme would be trying to self-modify?
I'm convinced by mathematical arguments that utility should be additive. If the value of N things in the real world is not N times the value of 1 thing, then I handle that in how I assign utility to world states.
I don't disagree. My choice of slogan wording - "utility is not additive" - doesn't capture what I mean. I meant only to deny that the value of something happening N times is (N x U) where U is the value of it happening once.
what would you say to a utilitarian who says: "Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes."
There are two separate issues here. I assume that by "linearly" you are referring to the subject that started this conversation: my claim that utilities "are not additive", an idea also expressed as "diminishing returns", or diminishing marginal utility of additional people. I probably would not dispute the memetic evolution claim if it focused on "linearity".
The second issue is a kind of universality - all people valued equally regardless of kinship or close connectedness in a network of reciprocity. I would probably express skepticism at this claim. I would probe the claim to determine whether the selection operates at the level of the meme, the individual, or the society. And then I would ask how that meme contributes to its own propagation at that level.
By "orthodox position" are you referring to TDT-related ideas?
Mostly, I am referring to views expressed by EY in the sequences and frequently echoed by LW regulars in comments. Some of those ideas were apparently repeated in the TDT writeup (though I may be wrong about that - the write-up was pretty incoherent.)
... the mistake began as soon as we started calling it a "blue-minimizing robot".
Agreed. But what kind of mistake was that?
Is "This robot is a blue-minimizer" a false statement? I think not. I would classify it as more like the unfortunate selection of the wrong Kuhnian paradigm for explaining the robot's behavior. A pragmatic mistake. A mistake which does not bode well for discovering the truth, but not a mistake which involves starting from objectively false beliefs.
I think that we should follow Jaynes and insist upon 'probability' as the name of the subjective entity. But so-called objective probability should be called 'propensity'. Frequency is the term for describing actual data. Propensity is objectively expected frequency. Probability is subjectively expected frequency. That is the way I would vote.
As I understand it, EY's commitment to MWI is a bit more principled than a choice between soccer teams. MWI is the only interpretation that makes sense given Eliezer's prior metaphysical commitments. Yes rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.
Does your utility function treat "a life saved by Perplexed" differently from just "a life"?
I'm torn between responding with "Good question!" versus "What difference does it make?". Since I can't decide, I'll make both responses.
Good question! You are correct in surmising that the root justification for much of the value that I attach to other lives is essentially instrumental (via channels of reciprocity). But not all of the justification. Evolution has instilled in me the instinct of valuing the welfare (fitness) of kin at a significant fraction of the value of my own personal welfare. And then there are cases where kinship and reciprocity become connected in serial chains. So the answer is that I discount based on 'remoteness' where remoteness is a distance metric reflecting both genetic and social-interactive inverse connectedness.
What difference does it make? This is my utility function we are talking about, and it is only operational in deciding my own actions. So, even if my utility function attached huge value to lives saved by other people, it is not clear how this would change my behavior. The question seems to be whether people ought to have multiple utility functions - one for directing their own rational choices; the others for some other purpose.
I am currently reading Binmore's two-volume opus Game Theory and the Social Contract. I strongly recommend it to everyone here who is interested in decision theory and ethics. Although Binmore doesn't put it in these terms, his system does involve two different sets of values, which are used in two different ways. One is the set of values used in the Game of Life - a set of values which may be as egoistic as the agent wishes (or as altruistic). However, although the agent is conceptually free in the Game of Life, as a practical matter, he is coerced by everyone else to adhere to a Social Contract. Due to this coercion, he mostly behaves morally.
But how does the Social Contract arise? In Binmore's normative fiction, it arises by negotiated consensus of all agents. The negotiation takes place in a Rawlsian Original Position under a Veil of Ignorance. Since the agent-while-negotiating has different self-knowledge than does the agent-while-living, he manifests different values in the two situations - particularly with regard to utilities which accrue indexically. So, according to Binmore, even an agent who is inherently egoistic in the Game of Life will be egalitarian in the Game of Morals where the Social Contract is negotiated. Different values for a different purpose.
That is the concise summary of the ethical system that Binmore is constructing in the two volumes. But he does a marvelously thorough job of ground-clearing - addressing mistakes made by Kant, Rawls, Nozick, Parfit, and others regarding the Prisoner's Dilemma, Newcomb's 'paradox', whether it is rational to vote (probably wasted), etc. And in the course of doing so, he pretty thoroughly demolishes what I understand to be the orthodox position on these topics here at Less Wrong.
Really, really recommended.
Correct. In fact, I probably confused things here by using the word "discount" for what I am suggesting here. Let me try to summarize the situation with regard to "discounting".
Time discounting means counting distant future utility as less important than near future utility. EY, in the cited posting, argues against time discounting. (I disagree with EY, for what it is worth.)
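For concreteness, the standard exponential form of time discounting can be sketched as follows. This is my own toy illustration; the discount rate is an arbitrary choice, not a value anyone in this thread endorses:

```python
def discounted_sum(utilities, gamma=0.95):
    """Exponential time discounting: utility arriving t steps in the
    future is weighted by gamma**t. The rate gamma=0.95 is an arbitrary
    illustrative choice."""
    return sum(u * gamma ** t for t, u in enumerate(utilities))

# With gamma = 1 (no discounting, EY's position) timing is irrelevant;
# with gamma < 1 (my position) later utility counts for less.
no_discount = discounted_sum([10, 10], gamma=1.0)  # 20.0
discounted = discounted_sum([10, 10], gamma=0.5)   # 15.0
```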
"Space discounting" is a locally well-understood idea that utility accruing to people distant from the focal agent is less important than utility accruing to the focal agent's friends, family, and neighbors. EY presumably disapproves of space discounting. (My position is a bit complicated. Distance in space is not the relevant parameter, but I do approve of discounting using a similar 'remoteness' parameter.)
The kind of 'discounting' of large utilities that I recommended in the great-grandparent probably shouldn't be called 'discounting'. I would sloganize it as "utilities are not additive." The parent used the phrase 'diminishing returns'. That is not right either, though it is probably better than 'discounting'. Another phrase that approximates what I was suggesting is 'bounded utility'. (I'm pretty sure I disagree with EY on this one too.)
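To make "bounded utility" concrete, here is a toy sketch of my own; the functional form and the scale parameter are arbitrary illustrative choices:

```python
import math

def bounded_utility(n):
    """Toy bounded utility of n good events (e.g. lives saved).
    It increases with n but approaches an upper bound of 1.0, so
    utilities are not additive: U(2n) < 2 * U(n). The scale 1000.0
    is an arbitrary illustrative choice."""
    return 1.0 - math.exp(-n / 1000.0)

# Each additional event adds a little less utility than the one before,
# and total utility never exceeds the bound.
assert bounded_utility(2) < 2 * bounded_utility(1)
assert bounded_utility(10_000) < 1.0
```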
The fact that I disagree with EY on discounting says absolutely nothing about whether I agree with EY on AI risk, reductionism, exercise, and who writes the best SciFi. That shouldn't need to be said, but sometimes it seems to be necessary in your (XiXiDu's) case.
What benelliot said.
Sheesh! Please don't assume that everyone who disagrees with one point you made is doing so because he disagrees with the whole thrust of your thinking.
Baez: ... you shouldn’t always maximize expected utility if you only live once.
BenElliot: [Baez is wrong] Expected utilities do not work like that.
XiXiDu: If a mathematician like John Baez can be that wrong ...
A mathematician like Baez can indeed be that wrong, when he discusses technical topics that he is insufficiently familiar with. I'm sure Baez is quite capable of understanding the standard position of economists on this topic (the position echoed by BenElliot). But, as it apparently turns out, Baez has not yet done so. No big deal. Treat Baez as an authority on mathematical physics, category theory, and perhaps saving the environment. He is not necessarily an authority on the foundations of microeconomics.
People shouldn't neglect small probabilities. The math works for small probabilities.
People should discount large utilities. Utilities are not additive. Nothing in economic theory suggests that they are additive.
It is well understood that the utility of two million dollars is not necessarily twice the utility of one million dollars. Yet it is taken here as axiomatic that the utility of two saved lives is twice the utility of one saved life. Two people tortured is taken to be twice the disutility of one person tortured. Why? As far as I can tell, the only answer given here is that our moral intuitions (against simple additivity) are wrong and will be corrected by sufficient reflection.
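The money case can be made concrete with Bernoulli's classic log utility, a standard textbook model of diminishing marginal utility, used here purely as an illustration:

```python
import math

def u(wealth_dollars):
    # Bernoulli-style log utility of wealth: a standard textbook model
    # of diminishing marginal utility, not a claim about anyone's
    # actual utility function.
    return math.log(wealth_dollars)

u_one_million = u(1_000_000)
u_two_million = u(2_000_000)

# The second million adds only about log(2) ~ 0.69 utils, far less than
# the first million did, so u($2M) is well under twice u($1M).
assert u_two_million < 2 * u_one_million
assert u_two_million - u_one_million < u_one_million
```

The open question is why saved lives should be treated linearly when money is not.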
That is my take on the issue.
You are correct that my comments are missing the mark. Still, there is a sense in which the kinds of non-determinism represented by Born probabilities present problems for Si. I agree that Si definitely does not pretend to generate its predictions based on observation of the whole universe. And it does not pretend to predict everything about the universe. But it does seem to pretend that it is doing something better than making predictions that apply to only one of many randomly selected "worlds".
Can anyone else - Cousin_it perhaps - explain why deterministic evolution of the wave function seems to be insufficient to place Si on solid ground?
Hmmm. Would you be happier if I changed my last line to read "... we need to discard the whole Si concept as inappropriate to our imperfectly-observed universe."?
My implicit point was this: Nesov2006 probably did not realize that Nesov2006 was a fool and Nesov2008 probably did not judge himself to be a crackpot. Therefore, a naive extrapolation ("obvious prediction") suggests that Nesov2011 has some cognitive flaws which he doesn't yet recognize; he will recognize this, though, in a few years.
JoshuaZ, as I understood him, was suggesting that one improves one's tolerance by enlarging one's prior for the hypothesis that one is currently flawed oneself. He suggests the time-machine thought experiment as a way of doing so.
You, as I understood you, claimed that JoshuaZ's self-hack doesn't work on you. I'm still puzzled as to why not.
Write suggestion-driven fiction.
What is "suggestion-driven fiction"? Googling was unhelpful.
What it sounds like is fiction in which the author has no particular story in mind as (s)he begins the narration, but rather the author generates plot and characters in response to reader suggestions as each chapter is published.
If that is the kind of thing you are talking about, it sounds very intriguing. But I wonder how a beginner captures enough initial readers to generate the suggestions. Reciprocity? If someone wants to organize a circle of three or four novice writers producing serialized fiction on their blogs and providing suggestions to each other, I would like to join the group.
What should I do?
Step 1: Stop being frustrated with them for not knowing/understanding. Instead, direct your frustration at yourself for not successfully communicating.
Step 2: Come to know that the reason for your failure to communicate is not a lack of mastery over your own arguments. It is a lack of understanding of their arguments. Cultivate the skill of listening. Ask which school of martial arts presents the best metaphor for your current disputation habits. Which school best matches the kind of disputation you would like to achieve?
Step 3: In the course of learning to listen, you may also learn other things. The people you are talking to are probably not idiots. Sometimes they will be right and you will be wrong. Notice those occasions. Examine, analyze, and cherish those occasions. Come to see them as winning. After all, you came out of that dispute having gained something useful (knowledge). Your teacher, on the other hand, gained nothing but ephemeral status.
I would easily bite the bullet and say that Nesov2008 was a crackpot and Nesov2006 was a shallow naive fool.
Ah. But would you make the obvious predictions about the opinion Nesov2013 and Nesov2015 will have regarding Nesov2011?
I disagree. Alicorn's version is more mathematically meaningful, to my mind, than WeiDai's. But to return to the original problem:
A. Two-boxing yields more money than would be yielded by counterfactually one-boxing.
B. Taboo "counterfactually". ...
I'm confused. Assuming that I "believe in" the validity of what I have been told of quantum mechanics, I fully expect that a million quantum coin tosses will generate an incompressible string. Are you suggesting that I cannot simultaneously believe in the validity of QM and also believe in the efficacy of Solomonoff induction - when applied to data which is "best explained" as causally random?
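That expectation can be checked crudely in practice. This is my own illustration using zlib, which is of course only a weak stand-in for Kolmogorov complexity, with a pseudo-random generator standing in for quantum coins:

```python
import random
import zlib

rng = random.Random(0)  # pseudo-random stand-in for quantum coin tosses
random_bytes = bytes(rng.getrandbits(8) for _ in range(125_000))  # ~1M bits
patterned_bytes = b"01" * 62_500                                  # same length

# A typical "coin toss" string does not compress (zlib even adds a
# little overhead), while an obviously patterned string of the same
# length collapses to a tiny fraction of its size.
random_compressed = zlib.compress(random_bytes, 9)
patterned_compressed = zlib.compress(patterned_bytes, 9)

assert len(random_compressed) > 0.95 * len(random_bytes)
assert len(patterned_compressed) < 0.01 * len(patterned_bytes)
```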
Off the top of my head, I am inclined to agree with this suggestion, which in turn suggests that Si is flawed. We need a variant of Si which allows Douglas_Knight's simple fair coins, without thereby offering a simple explanation of everything. Or, we need to discard the whole Si concept as inappropriate in our non-deterministic universe.
The randomness of a source of information is not an empirical fact which we can discover and test - rather, it is an assumption that we impose upon our model of the data. It is a null hypothesis for which we cannot find Bayesian evidence - we can at best fail to reject it. (I hope the Popper-clippers don't hear me say that!). Maybe what our revised Si should be looking for is the simplest explanation for data D[0] thru D[n], which explanation is not refuted by data D[n+1] thru D[n+k].
ETA: Whoops. That suggestion doesn't work. The simplest such explanation will always be that everything is random.
The universal prior implies you should say "substantially less than 1 million".
Why do you say this? Are you merely suggesting that my prior experience with quantum coins is << a million tosses?
Similarly, in 2000, a man named "Alex" (fake name, but real case) suddenly developed pedophilic impulses at age 40, and was eventually convicted of child molestation. Turns out he also had a brain tumor, and once it was removed, his sexual interests went back to normal. ...
At the very least, it seems like we would certainly be justified in judging Charles and Alex differently from people who don't suffer from brain tumors.
Alex was not punished for the impulses he felt. Rather, he was punished for molesting a child. Judge Charles and Alex differently, if you wish, because you think you understand the causality better. But don't punish them differently. Actions should be punished, not inclinations. Punish because they did a bad thing, not because they are bad people.
If you justify punishment as deterrence, you are still justified in punishing Charles and Alex. Feel sorry for them, if you wish, but don't forget to also feel sorry for their victims. Life sucks sometimes.
Maybe you should start by trying to figure out actual helpful suggestions, and then we can worry about whether people will be offended by them.
Good advice, I think. It is hard to tell (without an example) whether any gimmicks that work to get women into bed with you also work to get women to meetups. I would guess that any convincing way of communicating the message "Come on! It will be fun!!" would help with both.
Anyone have any brilliant ideas?
Call his attention to the stuff under the guise of soliciting a male opinion on whether it is offensively misogynist and nasty. Then let him make his own adult decisions as to whether he wants to make use of any of the information.
do they speculate that our universe is being computed (at a high-level) by a cellular automaton?
What does "at a high level" mean in this context?
Btw, thanks for posting this.
I'm not following you. Why is evil action XYZ going to be done regardless? Are you imagining that deontologists seek to have other people do their dirty deeds for them?
Will was rather more patient than he could have been.
Rather less careful, I would say. He failed to notice the typo above until nsheperd pointed it out - the original source of the confusion. And then later he began a comment with:
No, this is not the case. You have to cleverly choose B.
I have no idea at all what "is not the case". And I also don't know when anyone was offered the opportunity to cleverly choose B.
Will's description of his own limited motivation to communicate is the only portion of this thread which is crystal clear.
Yes, by working pretty hard, I was able to ignore the initial typo and to anticipate the explanation of A, B, and C. As I point out elsewhere on this thread, I have some objections to the scenario (as leaving out some details important to deontologists). Perhaps PeterDJones had similar objections. Please notice that neither of us could object to Will's A-B-C story until it was actually spelled out. And Will resisted making the effort of spelling it out for far too long.
My "STFU" was rude. But sometimes rudeness is appropriate.
isn't preventing the existence of people who have stolen a consequentialist goal?
Taking into account the existence of people who have stolen is one way for a consequentialist to model the thinking of deontologists. If a consequentialist includes history of who-did-what-to-whom in his world states, he is capturing all of the information that a deontologist considers. Now, all that is left is to construct a utility function that attaches value to the history in the way that a deontologist would.
Voila! Something that approximates successful communication between deontologist and consequentialist.
Voted down vigorously. If you can't make the effort to make yourself understood, STFU.
Does world B contain someone who stole Bill's money? Does world C contain someone who stole Alicorn's money?
One reason that you are having trouble seeing the world as a deontologist sees it is that you stubbornly refuse to even try.
Why does my intuition reject wireheading? Well, I think it has something to do with the promotion of instrumental values to terminal values.
Some pleasures I value for themselves (terminal) - the taste of good food, for example. As it happens, I agree with you that there is no true justification for rejecting wireheading for these kinds of pleasures. The semblance of pleasure is pleasure.
But some things are valued because they provide me with the capability or power (instrumental) to do what I want to do, including experiencing those terminal pleasures. Examples are money, knowledge, physical beauty, athletic abilities, and interpersonal skills.
Evolution has programmed me to derive pleasure from the acquisition and maintenance of these instruments of power. So, a really thorough wireheading installation would make me delight in my knowledge, beauty, charisma, and athleticism - even if I don't actually possess those attributes. And therein lies the problem. The semblance of power is not power.
What I am saying is that my intuitive rejection of wireheading arises because at least some of the pleasures that it delivers are a lie, a delusion. And I'm pretty sure that I don't want to be deluded.
But do I really need power if all of my more basic terminal values are guaranteed? That doesn't really matter. The way I am currently wired, I want the actual power - not just the terminal pleasure that the power can someday deliver, and certainly not just the satisfied feeling that believing that I am powerful can generate.
Hope that helps.