Comments

Comment by Multipartite on Sleeping Beauty gets counterfactually mugged · 2017-10-27T02:05:06.048Z · LW · GW

Running through this to check that my wetware handles it consistently.

Paying -100 if asked:

When the coin is flipped, one's probability branch splits into a 0.5 of oneself in the 'simulation' branch and 0.5 in the 'real' branch. For the 0.5 in the real branch, upon waking there is a subjective 50% probability of its being either of the two possible days, both of which one will be woken on. So: 0.5 of the time waking in simulation, 0.25 waking in real day 1, 0.25 waking in real day 2.

0.5 x (260) + 0.25 x (-100) + 0.25 x (-100) = 80. However, this is the expected cash-balance change over the course of a single choice, and doesn't take into account that Omega is waking you multiple times for the worse choice.

An equation relating the choice made to the expected gain/loss at the end of the experiment doesn't ask 'What is my expected loss according to which day in reality I might be waking up in?', but rather only 'What is my expected loss according to which branch of the coin toss I'm in?': 0.5 x (260) + 0.5 x (-100 - 100) = 30.

Another way of putting it: 0.5 x (260) + 0.25 x (-100 + (-100)) + 0.25 x (-100 + (-100)) = 30. (Making one choice in a 0.25 branch guarantees the same choice being made on the other side of a memory-partition; either you've already made the choice and don't remember it, or you're going to make the choice and won't remember this one, for a given choice that the expected gain/loss is being calculated for. The first '-100' is the immediate choice that you will remember (or won't remember); the second '(-100)' is the partition-separated choice that you don't remember (or will remember).)

--Trying to see what this looks like for an indefinite number of reality wakings: 0.5 x (260) + n x (1/n) x (1/2) x (-100 x n) = 130 - (50 x n), which is of the form that might be expected.
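(A minimal sketch of the branch-based accounting above, in Python; the 260/-100 payoffs are those of the scenario as I've described it, and refusing to pay is assumed to yield zero in both branches.)

```python
# Sketch of the per-branch expected value; payoffs taken from the scenario
# above, and refusing is assumed to pay nothing in either branch.

def expected_change(pay_if_asked: bool, n_real_wakings: int = 2) -> float:
    """Expected end-of-experiment cash change, accounted per coin branch."""
    sim_branch = 260 if pay_if_asked else 0                     # 0.5: 'simulation' branch
    real_branch = -100 * n_real_wakings if pay_if_asked else 0  # 0.5: asked once per real waking
    return 0.5 * sim_branch + 0.5 * real_branch

print(expected_change(True))      # 0.5 x 260 + 0.5 x (-200) = 30.0
print(expected_change(False))     # 0.0
print(expected_change(True, 5))   # 130 - 50 x 5 = -120.0
```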

(Edit: As with reddit, frustrating that line breaks behave differently in the commenting field and the posted comment.)

Comment by Multipartite on 60m Asteroid currently assigned a .022% chance of hitting Earth. · 2012-03-14T22:49:17.497Z · LW · GW

(For thoroughness, noting that the other approach was also wondered about a little earlier. Surface action is an alternative to look at if projectile-launching would definitely be ineffective, but if the projectile approach would in fact be better then there'd be no reason not to focus on it instead.)

Comment by Multipartite on 60m Asteroid currently assigned a .022% chance of hitting Earth. · 2012-03-14T01:22:05.390Z · LW · GW

A fair point. On the subject of pulling vast quantities of energy from nowhere, does any one country currently possess the knowledge and materials to build a bomb that, detonated on the surface, could {split the Earth like a grape}/{smash the Earth like an egg}/{dramatic verb the Earth like a metaphorical noun}?

And yes, not something to try in practice with an inhabited location. Perhaps a computer model, at most... actually, there's a thought regarding morbid fascination. I wonder what would be necessary to provide a sufficiently-realistic (uninhabited) physical (computer) simulation of a planet's destruction when the user pulled meteors, momentum, explosives et cetera out of nowhere as it pleased. Even subtle things, like fiddling with orbits and watching the eventual collision and consequences... hm. Presumably/Hopefully someone has already thought of this at some point, and created such a thing.

Comment by Multipartite on 60m Asteroid currently assigned a .022% chance of hitting Earth. · 2012-03-04T21:17:17.298Z · LW · GW

Not directly related, but an easier question: Do we currently have the technology to launch projectiles out of Earth's atmosphere into a path such that, in a year's time or so, the planet smashes into them from the other direction and sustains significant damage?

(Ignoring questions of targeting specific points, just the question of whether it's possible to arrange that without the projectiles falling into the sun or just following us eternally without being struck or getting caught in our gravity well too soon... hmm, if we could somehow put it into an opposite orbit then it could hit us very strongly, but in terms of energy... hmmm. Ah, and in the first place there's the issue that even that probably wouldn't hit with energy comparable to that of a meteor, though I am not an astrophysicist. In any case, definitely not something to do, but (as noted) morbidly fascinating if it turned out to be fairly easy to pull off. Just the mental image of all the 'AUGH' faces... again, not something one would actually want to do. )

Comment by Multipartite on Acausal romance · 2012-03-01T01:10:25.618Z · LW · GW

In practice, this seems to break down at a specific point: this can be outlined, for instance, with the hypothetical stipulation "...and possesses the technology or similar power to cross universe boundaries and appear visible before me in my room, and will do so in exactly ten seconds.".

As with the fallacy of a certain ontological argument, imagining or defining something does not bring it into existence, and even if a certain concept contains no apparent inherent logical impossibilities, that still does not mean that there could/would exist a universe in which it could come to pass.

'All possible worlds' does not mean 'All imaginable worlds'. 'All possible people' does not mean 'All imaginable people'. Past a certain threshold of specificity, one goes from {general types of people who exist almost everywhere, universally speaking} to {specific types of people who only exist in the imaginations of people like you who exist almost everywhere, universally speaking}.

(As a general principle, for instance/incidentally, causality still needs to apply.)

Comment by Multipartite on Acausal romance · 2012-03-01T00:45:25.223Z · LW · GW

(Absent(?) thought after reading: one can imagine someone, through a brain-scanner or similar, controlling a robot remotely. One can utter, through the robot, "I'm not actually here.", where 'here' is where one is doing the uttering through the robot, and 'I' (specifically 'where I am') is the location of one's brain. The distinction between the claim 'I'm not actually here' and 'I'm not actually where I am' is notable. Ahh, the usefulness of technology. For belated communication, the part about intention is indeed significant, as with whether a diary is written in the present tense (time of writing) or in the past tense ('by the time you read this[ I will have]'...).) I enjoyed the approach.

Comment by Multipartite on Longevity Insurance · 2012-02-21T01:36:56.271Z · LW · GW

To ask the main question that the first link brings to mind: What prevents a person from paying both a life insurance company and a longevity insurance company (possibly the same company) relatively-small amounts of money each, in exchange for either a relatively-large payout from the life insurance if the person dies early or a relatively-large payout from the longevity insurance if the person dies late?

To extend, what prevents a hypothetically large number of people from on average creating this effect (even if each is disallowed from having both rather than just one or the other) and so creating a guaranteed total loss overall on the part of an insurance company?
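(A toy sketch to make the bookkeeping of that combined strategy concrete; every number below is invented rather than actuarial. By linearity of expectation, the expected net from holding both policies is simply the sum of the expected nets of each policy held alone, so the combination can only be a better-than-fair deal for the buyers if at least one policy is individually better-than-fair.)

```python
# Toy model of the 'buy both policies' strategy above; all numbers invented.

def expected_net(premium: float, payout: float, p_trigger: float) -> float:
    """Buyer's expected gain from a single policy."""
    return p_trigger * payout - premium

p_die_early, p_die_late = 0.3, 0.3   # assumed, mutually exclusive outcomes

life = expected_net(premium=3_500, payout=10_000, p_trigger=p_die_early)       # -500.0
longevity = expected_net(premium=3_500, payout=10_000, p_trigger=p_die_late)   # -500.0

# Expected net of holding both is just the sum of the individual expectations.
print(life, longevity, life + longevity)   # -500.0 -500.0 -1000.0
```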

Comment by Multipartite on Bayesian RPG system? · 2012-02-11T21:36:24.013Z · LW · GW

Thank you!

Comment by Multipartite on Bayesian RPG system? · 2012-02-10T16:10:27.210Z · LW · GW

To answer the earlier question, an alteration which halved the probability of failure would indeed change an exactly-0% probability of success into a 50% probability of success.

If one is choosing between lower increases for higher values, unchanged increases for higher values, and greater increases for higher values, then the first has the advantage of not quickly giving numbers over 100%. I note though that the opposite effect (such as hexing a foe?) would require halving the probability of success instead of doubling the probability of failure.

The effect you describe, whereby a single calculation can give large changes for medium values and small changes for extreme values, is of interest to me: starting with (for instance) 5%, 50% and 95%, what exact procedure is taken to increase the log probability by log(2) and return modified percentages?
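(My guess at such a procedure, sketched below on the assumption that it is a log(2) shift in log-odds rather than in log-probability; this reproduces the described behaviour, with the middle value moving a lot and the extremes barely moving. The function here is my own illustration, not the system's actual code.)

```python
import math

def shift_log_odds(p: float, delta: float = math.log(2)) -> float:
    """Convert to log-odds, add delta (log(2) = doubling the odds), convert back."""
    log_odds = math.log(p / (1 - p))          # undefined at exactly 0% or 100%
    return 1 / (1 + math.exp(-(log_odds + delta)))

for p in (0.05, 0.50, 0.95):
    print(f"{p:.2f} -> {shift_log_odds(p):.3f}")
# 0.05 -> 0.095   (odds 1/19 doubled to 2/19)
# 0.50 -> 0.667   (odds 1 doubled to 2)
# 0.95 -> 0.974   (odds 19 doubled to 38)
```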


Edit: (A minor note that, from a gameplay standpoint, for things intended to have small probabilities one could just have very large failure-chance multipliers and so still have decreasing returns. Things decreed as effectively impossible would not be subject to dice rolling or similar in any case, and so need not be considered at length. In-game explanation for the function observed could be important; if it is desirable that progress begin slow, then speed up, then slow down again, rather than start fast and get progressively slower, then that is also reasonable.)

Comment by Multipartite on Bayesian RPG system? · 2012-02-08T18:06:04.996Z · LW · GW

For what it's worth, I'm reminded of systems which handle modifiers (multiplicatively) according to the chance of failure:

[quote]

For example, the first 20 INT increases magic accuracy from 80% to

(80% + (100% - 80%) * .01) = 80.2%

not to 81%. Each 20 INT (and 10 WIS) adds 1% of the remaining distance between your current magic accuracy and 100%. It becomes increasingly harder (technically impossible) to reach 100% in any of these derived stats through primary attributes alone, but it can be done with the use of certain items.

[/quote]

A clearer example might be that of a bonus which halves your chance of failure changing an 80% success likelihood to 90% success (20% failure to 10% failure), but another bonus of the same type changing that 90% success to 95% success (10% failure to 5% failure). Notably, one could instead combine the bonuses first in the calculation, taking a quarter of the 20% failure chance to get 5%, with no change to the end result.
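(A short sketch of that multiplicative scheme as I read it, checking both the 80% -> 90% -> 95% sequence and the order-independence noted above; the function is my own illustration rather than any game's actual code.)

```python
def apply_failure_multiplier(success: float, failure_multiplier: float) -> float:
    """Scale the remaining chance of failure; 0.5 halves it, 0.99 is the
    '1% of the remaining distance to 100%' bonus from the quoted example."""
    return 1 - (1 - success) * failure_multiplier

p = apply_failure_multiplier(0.80, 0.5)    # 0.90
p = apply_failure_multiplier(p, 0.5)       # 0.95 (up to floating-point rounding)

# Combining the two halvings first (0.5 x 0.5 = 0.25) gives the same end result.
print(p, apply_failure_multiplier(0.80, 0.25))   # 0.95 0.95

# The quoted INT example: 80% -> 80.2%.
print(apply_failure_multiplier(0.80, 0.99))      # 0.802 (approximately)
```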

Comment by Multipartite on Waterfall Ethics · 2012-01-31T13:23:37.635Z · LW · GW

The Turing machine doing the simulating does not experience pain, but the human being being simulated does.

Similarly, the waterfall argument found in the linked paper seems as though it could as-easily be used to argue that none of the humans in the solar system have intelligence unless there's an external observer to impose meaning on the neural patterns.

A lone mathematical equation is meaningless without a mind able to read it and understand what its squiggles can represent, but functioning neural patterns which respond to available stimuli causally(/through reliable cause-and-effect) are the same whether embodied in cell weights or in tape states. (So, unless one wishes to ignore one's own subjective consciousness and declare oneself a zombie...)


For the actual-versus-potential question, I am doubtful regarding the answer, but for the moment I imagine a group of people in a closed system (say, an experiment room), suddenly (non-lethally) frozen in ice by a scientist overseeing the experiment. If the scientist were to later unfreeze the room, then perhaps certain things would definitely happen if the system remained closed. However, if it were never unfrozen, then they would never happen. Also, if they were frozen yet the scientist decided to interfere in the experiment and make the system no longer a closed system, then different things would happen. As with the timestream in normal life, 'pain' (etc.) is only said to take place at the moment that it is actually carried out. (And if one had all states laid out simultaneously, like a 4D person looking at all events in one glance from past to present, then 'pain' would only be relevant for the one point/section that one could point to in which it was being carried out, rather than in the entire thing.)

Now though, the question of the pain undergone by the models in the predicting scientist's mind (perhaps using his/her/its own pain-feeling systems for maximum simulation accuracy) by contrast... hmm.

Comment by Multipartite on In the Pareto world, liars prosper · 2011-12-10T23:53:39.872Z · LW · GW

(Assuming that it stays on the line of 'what is possible', in any case a higher Y than otherwise, but finding it then according to the constant X -- 1 - ((19/31) * (1/19)) = 30/31, yes...)

I confess I do not understand the significance of the terms mixed outcome and weighted sum in this context, I do not see how the numbers 11/31 and 20/31 have been obtained, and I do not presently see how the same effect can apply in the second situation in which the relative positions of the symmetric point and its (Pareto?) lines have not been shifted, but I now see how in the first situation the point selected can be favourable for Y! (This represents my being convinced of the underlying concept that I was doubtful of.) Thank you very much for the time taken to explain this to me!

Comment by Multipartite on In the Pareto world, liars prosper · 2011-12-09T17:27:01.952Z · LW · GW

Rather than X or Y succeeding at gaming it by lying, however, it seems that a disinterested objective procedure that selects by Pareto optimalness and symmetry would then output a (0.6, 0.6) outcome in both cases, causing a -0.35 utility loss for the liar in the first case and a -0.1 utility loss for the liar in the second.

Is there a direct reason that such an established procedure would be influenced by a perceived (0.95, 0.4) option to not choose an X=Y Pareto outcome? (If this is confirmed, then indeed my current position is mistaken. )

Comment by Multipartite on In the Pareto world, liars prosper · 2011-12-08T21:24:23.414Z · LW · GW

I may be missing something: for Figure 5, what motivation does Y have to go along with perceived choice (0.95, 0.4), given that in this situation Y does not possess the information possessed (and true) in the previous situation that '(0.95, 0.4)' is actually (0.95, 0.95)?

In Figure 2, (0.6, 0.6) appears symmetrical and Pareto optimal to X. In Figure 5, (0.6, 0.6) appears symmetrical and Pareto optimal to Y. In Figure 2, X has something to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6) and Y has something to gain by choosing/{allowing the choice of} (0.95, 0.95) over (0.6, 0.6), but in Figure 5, while X has something to gain by choosing/{allowing the choice of} (0.6, 0.4) over (0.5, 0.5), Y has nothing to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6).

Is there a rule(/process) that I have overlooked?

Going through the setup again, it seems as though in the first situation (0.95, 0.95) would be chosen while looking to X as though Y was charitably going with (0.95, 0.4) instead of insisting on the symmetrical (0.6, 0.6), and that in the second situation Y would insist on the seemingly-symmetrical-and-(0.6, 0.6) (0.4, 0.6) instead of going along with X's desired (0.6, 0.4) or even the actually-symmetrical (0.5, 0.5) (since that would appear {non-Pareto optimal}/{Pareto suboptimal} to Y).

Comment by Multipartite on [SEQ RERUN] Two Cult Koans · 2011-12-01T11:42:32.231Z · LW · GW

A very interesting perspective: Thank you!

Comment by Multipartite on [SEQ RERUN] Two Cult Koans · 2011-11-30T23:37:13.908Z · LW · GW

' I am still mystified by the second koan.': The novice associates {clothing types which past cults have used} with cults, and fears that his group's use of these clothing types suggests that the group may be cultish.

In practice (though the clothing may have an unrelated advantage), the clothing one wears has no effect on the validity of the logical arguments used in reasoning/debate.

The novice fears a perceived connection between the clothing and cultishness (where cultishness is taken to be a state of faith over rationality, or in any case irrationality). The master reveals the lack of effect of clothing on the subjects under discussion with the extreme example of the silly hat, pointing out the absurdity of wearing it affecting one's ability to effectively use probability theory (or any practical use of rationality for that matter).

This is similar to the first koan, {in which}/{in that} what matters is whether the (mental/conceptual) tools actually /work/ and yield useful results.

The student, more-or-less enlightened by this, takes it to heart and serves as an example to others by always discussing important concepts in absurd clothing, to get across to his own students(, others whom he interacts with, et cetera) that the clothing someone wears has nothing to do with the validity/accuracy of their ideas.

(Or, at least, that's my interpretation.)

Edit: A similar way of describing this may be to imagine that the novice is treating clothing-cult correlation as though it were causation, and the master points out with use of absurdity that there cannot be clothing->cult causation for the same reason that there cannot be silly_hat->comprehension causation. (What counts being the usefulness of the hammer, the validity of the theories used, rather than unrelated things which coincide with them.)

Comment by Multipartite on Is latent Toxoplasmosis worth doing something about? · 2011-11-17T19:29:40.603Z · LW · GW

Depending on the cost, it at least seems to be worth knowing about. If one doesn't have it then one can be assured on that point, whereas if one does have it then one at least has appropriate grounds on which to second-guess oneself.

(I have been horrified in the past by tales of {people who may or may not have inherited a dominant gene for definite early disease-related death} who all refused to be tested, thus dooming themselves to lives of fear and uncertainty. If they were going to have entirely healthy lives then they would have lived in fear and uncertainty instead of being able to enjoy them, and if they were going to die early then they would have lived in fear and uncertainty (and stressful, gradually-increasing denial/acceptance) rather than quickly getting used to the idea, resetting their baseline, getting their loose ends in order and living as appropriate for their expected remaining lifespan. Whether or not one does (or can do) anything about one's state doesn't change that oneself having more information about oneself can (in most circumstances?) only be helpful.)

Comment by Multipartite on Bayes Slays Goodman's Grue · 2011-11-17T13:34:46.727Z · LW · GW

'I haven't seen a post on LW about the grue paradox, and this surprised me since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem.':

If of relevance, note http://lesswrong.com/lw/q8/many_worlds_one_best_guess/ .

Comment by Multipartite on Free to Optimize · 2011-11-12T04:10:59.145Z · LW · GW

'The second AI helped you more, but it constrained your destiny less.': A very interesting sentence.

On other parts, I note that the commitment to a range of possible actions can be seen as larger-scale than to a single action, even before which one is taken is chosen.

A particular situation that comes to mind, though:

Person X does not know of person Y, but person Y knows of person X. Y has an emotional (or other) stake in a tiebreaking vote that X will make; Y cannot be present on the day to observe the vote, but sets up a simple machine to detect what vote is made and fire a projectile through the head of X if X makes one vote rather than another (nothing happening otherwise).

Let it be given that in every universe that X votes that certain way, X is immediately killed as a result. It can also safely be assumed that in those universes Y is arrested for murder.

In a certain universe, X votes the other way, but the machine is later discovered. No direct interference with X has taken place, but Y who set up the machine (pointed at X's head, X's continued life unknowingly dependent on X's vote) presumably is guilty of a felony of some sort (which though, I wonder?).

Regardless of motivation, to have committed to potentially carry out a certain thing against X is treated as similarly serious to that of in fact having it carried out (or attempted to be carried out).

(This, granted, may focus on a concept within the above article without addressing the entire issue of planning another entity's life.)

Comment by Multipartite on The Mystery of the Haunted Rationalist · 2011-11-12T01:26:21.979Z · LW · GW

Thought 1: If hypothetically one's family was going to die in an accident or otherwise (for valid causal wish-unrelated reasons), the added mental/emotional effect on oneself would be something to avoid in the first place. Given that one is not infallible, one can never assert absolute knowledge of non-causality (direct or indirect), and that near-infinitesimal consideration could haunt one. Compare this possibility to the ease, normally, of taking other routes and thus avoiding that risk entirely.

...other thoughts are largely on the matter of integrity... respect and love felt for family members, thus not wishing to badmouth them or officially express hope for their death even given that neither they nor anyone else could hear it... hmm.

Pragmatically, one could cite a concern regarding taken behaviours influencing ease of certain thoughts: I do not particularly want to become someone who can more easily write a request that my family members die.

There are various things that I might wish that I would not carry out if I had the power to directly (and secretly) do so, but generally if doing such a thing I would prefer to wish for something I actually wanted (/would carry out if I had the power to do so myself), on the off-chance that some day if I do such to the knowledge of another the other is inclined to help me reach it in some way.

Given the existence of compensation, there is yet the question of what compensation would be sufficient to make me do something that made me feel sullied. Incidentally, I note there are many things that would make others feel sullied that I would do with no discomfort at all.

...a general practice of acting in a consistent way... a perception of karma not as something which operates outside normal causality, but instead similar-to-luck just those parts of normal causality that one cannot be aware of... ah, I've reached the point of redundancy were I to continue typing.

Comment by Multipartite on In favour of a selective CEV initial dynamic · 2011-10-24T15:51:01.969Z · LW · GW

CEV document: I have at this point somewhat looked at it, but indeed I should ideally find time to read through it and think through it more thoroughly. I am aware that the sorts of questions I think of have very likely already been thought of by those who have spent many more hours thinking about the subject than I have, and am grateful that the time has been taken to answer the specific thoughts that come to mind as initial reactions.


Reaction to the difference-showing example (simplified by the assumption that a sapient smarter-me is assumed to not exist in any form), in two examples:

Case 1: I hypothetically want enough money to live in luxury (and achieve various other goals) without effort (and hypothetically lack the mental ability to bring this about easily). Extrapolated, a smarter me looking at this real world from the outside would be a separate entity from me, have nothing in particular to gain from making my life easier in such a way, and so not take actions in my interests.

Case 2: A smarter-me watching the world from outside may hold a significantly different aesthetic sense than the normal me in the world, and may act to rearrange the world in such a way as to be most pleasing to that me watching from outside. This being done, in theory resulting in great satisfaction and pleasure of the watcher, the problem remains that the watcher does not in fact exist to appreciate what has been done, and the only sapient entities involved are the humans which have been meddled with for reasons which they presumably do not understand, are not happy about, and plausibly are not benefited by.

I note that a lot in fact hinges on the hypothetical benevolence of the smarter-me, and the assumption/hope/trust that it would after all not act in particularly negative ways toward the existing humans, but given a certain degree of selfishness one can probably assume a range of hopefully-at-worst-neutral significant actions which I personally would probably want to carry out, but which I certainly wouldn't want to be carried out without anyone pulling the strings in fact benefiting from what was being done.

...hmm, those can be summed up as 'The smarter-me wouldn't aid my selfishness!' and 'The smarter-me would act selfishly in ways which don't benefit anyone since it isn't sapient!'. There might admittedly be a lot of non-selfishness carried out, but that seems like a quite large variation from the ideal behaviour desired by the client-equivalent. I can understand the throwing-out of the individual selfishness for something based on a group and created for the sake of humanity in general, but the taking of selfish actions for a (possibly conglomerate) watcher who does not in fact exist (in terms of what is seen) seems as though it remains to be addressed.

...I also find myself wondering whether a smarter-me would want to have arrays built to make itself even smarter, and backup computers for redundancy created in various places each able to simulate its full sapience if necessary, resulting in the creation of hardware running a sapient smarter-me even though the decision-making smarter-me who decided to do so wasn't in fact sapient/{in existence}... though, arguably, that also wouldn't be too bad in terms of absolute results... hmm.

Comment by Multipartite on In favour of a selective CEV initial dynamic · 2011-10-23T19:25:22.066Z · LW · GW

Diamond: Ahh. I note that looking at the equivalent diamond section, 'advise Fred to ask for box B instead' (hopefully including the explanation of one's knowledge of the presence of the desired diamond) is a notably potentially-helpful action, compared to the other listed options which can be variably undesirable.


Varying priorities: That I change over time is an accepted aspect of existence. There is uncertainty, granted; on the one hand I don't want to make decisions that a later self would be unable to reverse and might disapprove of, but on the other hand I am willing to sacrifice the happiness of a hypothetical future self for the happiness of my current self (and different hypothetical future selves)... hm, I should read more before I write more, as otherwise redundancy is likely. (Given that my priorities could shift in various ways, one might argue that I would prefer something to act on what I currently definitely want, rather than on what I might or might not want in the future (yet definitely do not want (/want not to be done) /now/). An issue of possible oppression of the existing for the sake of the non-existent... hm.)

To check, does 'in order for it to be safe' refer to 'safe from the perspectives of multiple humans', compared to 'safe from the perspective of the value-set source/s'? If so, possibly tautologous. If not, then I likely should investigate the point in question shortly.

Another example that comes to mind regarding a conflict of priorities: 'If your brain was this much more advanced, you would find this particular type of art the most sublime thing you'd ever witnessed, and would want to fill your harddrive with its genre. I have thus done so, even though to you who owns the harddrive and can't appreciate it it consists of uninteresting squiggles, and has overwritten all the books and video files that you were lovingly storing.'


Digression: If such an entity acts according to a smarter-me's will, then does the smarter-me necessarily 'exist', theoretically speaking, as simulated/interpreted by the entity? Put another way, for a chatterbot to accurately create the exact interactions/responses that a sapient entity would, is it theoretically necessary for a sapient entity to effectively exist, simulated by the non-sapient entity, or could such an entity mimic a sapient entity without sapience entering into the matter? (Would then a mimicked-sapient entity exist in a meaningful sense, but only if there were sapient entities hearing its words and benefiting from its willed actions, compared to if there were only multiple mimicked-entities talking to each other? Hrm.) | If a smarter-me was necessarily simulated in a certain sense in order to carry out its will, I might be willing to accede to it in the same spirit as to extremely-intelligent aliens/robots wanting to wipe out humanity for their own reasons, but I would be unwilling to accept things which are against my interests being carried out for the interests of an entity which does not in fact in any sense exist.


Manifestation: It occurs to me that a sandbox version could be interesting to observe, one's non-extrapolated volition wanting our extrapolated volitions to be modelled in simulated world-section level 2, and as a result of such a contradiction instead the extrapolated volitions of those in level 2 /not/ being modelled in level 3, yet still being modelled in level 2... again, though, while such a tool might be extremely useful for second-guessing one's decisions and discussing with one very, very good reasons to rethink them (and thus in fact oneself changing hopefully-beneficially as a person (?) where applicable), something which directly defies one's will(/one's curiosity) lacks appeal as a goal (/stepping stone) to work towards.

Comment by Multipartite on In favour of a selective CEV initial dynamic · 2011-10-22T20:25:15.282Z · LW · GW

Reading other comments, I note my thoughts on the undesirability of extrapolation have largely been addressed elsewhere already.


Current thoughts on giving higher preference to a subset:

Though one would be happy with a world reworked to fit one's personal system of values, others likely would not be. Though selected others would be happy with a world reworked to fit their agreed system of values, others likely would not be. Moreover, assuming changes over time, even if such is held to a certain degree at one point in time, changes based on that may turn out to be regrettable.

Given that one's own position (and those of any other subset) are liable to be riddled with flaws, multiplying may dictate that some alternative to the current situation in the world be provided, but it does not necessarily dictate that one must impose one subset's values on the rest of the world to the opposition of that rest of the world.

Imposition of peace on those filled with hatred who strongly desire war results in a worsening of those individuals' situation. Imposition of war on those filled with love who strongly desire peace results in a worsening of those individuals' situation. Taking it as given that each subset's ideal outcome differs significantly from that of every other subset in the world, any overall change according to the will of one subset seems liable to yield more opposition and resentment than it does approval and gratitude.

Notably, when thinking up a movement worth supporting, such an action is frightening and unstable--people with differing opinions climbing over each other to be the ones who determine the shape of the future for the rest.

What, then, is an acceptable approach by which the wills coincide of all these people who are opposed to the wills of other groups being imposed on the unwilling?

Perhaps to not remake the world in your own image, or even in the image of people you choose to be fit to remake the world in their own image, or even the image of people someone you know nothing about chose to be fit to remake the world in their own image.

Perhaps a goal worth cooperating towards and joining everyone's forces together to work towards is that of an alternative, or perhaps many, which people can choose to join and will be imposed on all willing and only those who are willing.

For those who dislike the system others choose, let them stay as they are. For those who like such systems more than their current situation, let them leave and be happier.

Leave the technophiles to their technophilia, the... actually I can't select other groups, because who would join and who would stay depends on what gets made. Perhaps it might end up with different social groups existing under the separate jurisdictions of different systems, while all those who preferred their current state to any systems as yet created remained on Earth.

A non-interference arrangement with free-to-enter alternatives for all who prefer it to the default situation: while maybe not anyone's ideal, hopefully something that all can agree is better, and something that to no one is in fact worse.

(Well, maybe to those people who have reasons for not wanting chunks of the population to leave in search of a better life..? Hmm.)

Comment by Multipartite on In favour of a selective CEV initial dynamic · 2011-10-22T19:56:13.142Z · LW · GW

Ahh. Thank you! I was then very likely at fault on that point, being familiar with the phrase yet not recognising the acronym.

Comment by Multipartite on In favour of a selective CEV initial dynamic · 2011-10-22T12:17:21.557Z · LW · GW

I unfortunately lack time at the moment; rather than write a badly-thought-out response to the complete structure of reasoning considered, I will for the moment write fully-thought-out thoughts on minor parts thereof that my (?) mind/curiosity has seized on.


'As for “taking over the world by proxy”, again SUAM applies.': this sentence stands out, but glancing upwards and downwards does not immediately reveal what SUAM refers to. Ctrl+F and looking at all appearances of the term SUAM on the page does not reveal what SUAM refers to. The first page of Google results for 'SUAM' does not reveal what SUAM refers to.

Hopefully SUAM is a reference to an S U A M acronym used elsewhere in the article or in a different well-known article, but a suggestion may be helpful that if the first then S U A M (SUAM) would be convenient in terms of phrase->acronym, and if the second then a reference to the location or else the expanded form of the acronym would be convenient.


The diamond case: Even if I did want a diamond, I simulate that I would feel nervous, alarmed even, if I indicated that I wanted it to bring me one box and I was brought a different box instead. I'm reminded--though this is not directly relevant--of Google searches, where I on occasion look up a rare word I'm unfamiliar with, and instead am given a page of results for a different (more common) word, with a question at the top asking me if I instead want to search for the word I searched for.

For Google, I would be much less frustrated if it always gave me the results I asked for, and maybe asked if I wanted to search for something else. (That way, when I do misspell something, I'm rightfully annoyed at myself and rightfully pleased with the search engine's consistent behaviour.) For the diamond case, I would be happy if it for instance noticed that I wanted the diamond and alerted me to its actual location, giving me a chance to change my official decision.

Otherwise, I would be quite worried about it making other such decisions without my official consent, such as "Hmm, you say you want to learn about these interesting branches of physics, but I can tell that you say that because you anticipate doing so will make you happy, so I'll ignore your request and pump your brain full of drugs instead forever.". Even if in most cases the outcome is acceptable, for something to second-guess your desires at all means there's always the possibility of irrevocably going against your will.

People may worry that a life of getting whatever one wants(/asks for) may not be ideal, but I'm reminded of the immortality/bat argument in that a person who gets whatever that person wants would probably not want to give that up for the sake of the benefits that would arguably come with not having those advantages.

In a more general sense, given that I already possess priorities and want them to be fulfilled (and know how I want to fulfill them), I would appreciate an entity helping me to do so, but would not want an entity to fulfill priorities that I don't hold or try to fulfill them in ways which conflict with my chosen methods of fulfilling them. If creating something that would act according to what one would want if one /were/ more intelligent or more moral or more altruistic, then A) that would only be desirable if one were such a person currently instead of being the current self, or B) that would be a good upgraded-replacement-self to let loose on the universe while oneself ceasing to exist without seeking to have one's own will be done (other than on that matter of self-replacement).

Comment by Multipartite on [SEQ RERUN] Torture vs. Dust Specks · 2011-10-15T22:35:29.312Z · LW · GW

I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, than quickly for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or an unpleasant-but-survivable illness and (phobic) vaccination.

'prefer any number of people to experience the former pain, rather than one having to bear the latter': applying across time as well as across numbers, one can reach the state of comparing {one person suffering brief unbearable pain} to {a world of pain, every person constantly existing just at the threshold at which it's possible to not go insane}. Somewhat selfishly casting oneself in the position of potential sufferer and chooser, should one look on such a world of pain and pronounce it to be acceptable as long as one does not have to undergo a moment of unbearable pain? Is the suffering one would undergo truly weightier than the suffering the civilisation would labor under?

The above question is arguably unfair both in that I've extended across time without checking acceptability, and also in that I've put the chooser in the position of a sacrificer. For the second part, hopefully it can be resolved by letting it be given that the chooser does not notably value another's suffering above or below the importance of the chooser's own. (Then again, maybe not.)

As for time, can an infinite number of different people suffering a certain thing for one second be determined to be at least no less than a single person suffering the same thing for five seconds? If so, then one can hopefully extend suffering in time as well as across numbers, and thus validly reach the 'world of pain versus moment of anguish' situation.

(In regard to privileging, note that dealing with large numbers is known to cause failure of degree appreciation due to the brain's limitations, whereas induction tends to be reliable.)

Comment by Multipartite on [SEQ RERUN] Torture vs. Dust Specks · 2011-10-11T19:30:10.526Z · LW · GW

Is the distribution necessary (other than as a thought experiment)?

Simplifying to a 0->3 case: If changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 --for the reason that, for an even distribution, the 1s and 2s would stay the same number and the 3s would increase with the 0s decreasing-- then for what hypothetical distribution would it be even worse and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is it worse if there are only 2s who all become 3s? Is a dust speck classed as worse if you do it to someone being tortured than someone in a normal life or vice versa, or is it just as bad no matter what the distribution, in which case the distribution is unimportant?

...then again, if one weighs matters solely on magnitude of individual change, then that greater difference can appear and disappear like a mirage when one shifts back and forth considering those involved collectively or reductionistically... hrm. | Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N 1->2, C is N 2->3).

...the aim of the even distribution example is perhaps to show that by the magnitude-difference measurement the outcome can be worse, then break it down to show that for uneven cases too the suffering inflicted is equivalent and so for consistency one must continue to view it as worse...

(Again, this time shifting it to a 0-1-2, why would it be {unacceptable for N people to be 1->2 if and only if N people were also 0->1, but not unacceptable for N people to be 1->2 if 2N more people were 1->2} /and also/ {unacceptable for N people to be 0->1 if and only if N people were also 1->2, but not unacceptable for N people to be 0->1 if 2N more people were 0->1}?)


The arbitrary points concept, rather than a smooth gradient, is also a reasonable point to consider. For a smooth gradient, the more pain another person is going through the more objectionable it is. For an arbitrary threshold, one could find someone suffering greatly not to be an objectionable thing, yet find someone else suffering by a negligible amount more to be a significantly objectionable thing. Officially adopting such a cut-off point for sympathy--particularly one based on an arbitrarily-arrived-at brain structure rather than well-founded ethical/moral reasoning--would seem to be incompatible with true benevolence and desire for others' well-being, suggesting that even if such arbitrary thresholds exist we should aim to act as though they did not.

(In other words, if we know that we are liable to not scale our contribution depending on the scale of (the results of) what we're contributing towards, we should aim to take that into account and deliberately, manually, impose the scaling that otherwise would have been left out of our considerations. In this situation, if as a rule of thumb we tend to ignore low suffering and pay attention to high suffering, we should take care to acknowledge the unpleasantness of all suffering and act appropriately when considering decisions that could control such suffering.)

(Preferable to not look back in the future and realise that, because of overreliance on hardwired rules of thumb, one had taken actions which betrayed one's true system of values. If deliberately rewiring one's brain to eliminate the cut-off crutches, say, one would hopefully prefer to at that time not be horrified by one's previous actions, but rather be pleased at how much easier taking the same actions has become. Undesirable to resign oneself to being a slave of one's default behaviour.)

Comment by Multipartite on Link: WJS article that uses Steve Jobs' death to mock cryonics and the Singularity · 2011-10-08T16:07:16.735Z · LW · GW

('Should it fit in a pocket or backpack?': Robot chassis, please. 'Who is the user?': Hopefully the consciousness itself. O.O)

Comment by Multipartite on Should I play World of Warcraft? · 2011-10-07T09:22:41.750Z · LW · GW
  • In general, make decisions according to the furtherance of your current set of priorities.
  • Personally, though I enjoy certain persistent-world games for their content and lasting internal advantages, the impression I've gotten from reading others' accounts of World of Warcraft compared to other games is that it takes up a disproportionate amount of time/effort/money compared to other sources of pleasure.

For that game, the sunk-costs fallacy and the training-to-do-random-things-infinitely phenomenon may help in speculating about why so many sink and continue to sink time into it. I've noticed that people who bite the bullet and quit speak not as though they were dependent and longing to relapse into remembered joy, but rather as though horrified in retrospect at how they let themselves get used to essentially playing to work, that is doing something which in theory they enjoyed yet which in practice was itself a source of considerable stress/boredom/frustration. (Again, I have had no direct experience with the game.)

For cocaine, straightforwardly there's the expectation that it would do bad things to your receptors (as well as your nose lining...) such that you would gain dependency and require it for normality, as with caffeine and nicotine and alcohol. Your priorities would be forcibly changed to a state incompatible with your current priorities, thus it is worth avoiding. If there were a form or similar thing which in fact had no long-term neurological effects, that is one which actually gave you the high without causing any dependency (is that even theoretically possible, though, considering how the brain works? Well, if dropped to the level of most things then instead, say...), it might be worth trying in the same way that music is helpful to cheer oneself up (if it cost less as well?), or perhaps sweet foods would be a better example there.

The standard answer for sex is that it's already part of your system of priorities, and so there's little helping it. Practically speaking, it would probably be far easier if one could just turn off one's interest in that regard and focus one's energy elsewhere--particularly, in terms of the various psychological/physiological health benefits, if one already cannot experience it yet is near-futilely driven to seek it. Again though, there one more wants to turn off 'the impulse to have sex' rather than sex itself, since there are advantages if you want to have sex and do compared to if you want to and can't, and also advantages if you don't want to and don't compared to if you want to and can't.

Hm... returning to the original question wording, if one treats World of Warcraft as a potentially-addictive use of time that may truly or otherwise effectively rewire one's system of priorities to the point of interference with one's current priorities, then it is likely reasonable to avoid it for that reason. It's again important to note which priorities are true priorities (such as improvement of the world?) that one wishes to whole-heartedly support, and which are priorities which, when one stops to think about them, one doesn't have a particular reason to value (such as the sex drive issue, which actually doesn't have much going for it compared to other ways of pursuing pleasure).

(Species-wide reproductive advantages are acknowledged.)

Comment by Multipartite on Rationality Lessons Learned from Irrational Adventures in Romance · 2011-10-04T22:05:20.059Z · LW · GW

Crocker's Rules: A significantly interesting formalisation that I had not come across before! Thank you!

On the one hand, even if someone doesn't accept responsibility for the operation of their own mind it seems that they nevertheless retain responsibility for the operation of their own mind. On the other hand, from a results-based (utilitarian?) perspective I see the problems that can result from treating an irresponsible entity as though they were responsible.

Unless one judged it as having a significant probability that one would shortly be stabbed, have one's reputation tarnished, or otherwise suffer an unacceptable consequence, there seem to be significant ethical arguments against {acting to preserve a softening barrier/buffer between a fragile ego and the rest of the world} and for {providing information either for possible refutation or for helpful greater total understanding}.

Then again, this is the same line of thought which used to get me mired in long religion-related debates which I eventually noticed were having no effect, so--especially given the option of decreasing possible reprisals' probabilities to nearly zero--treating others softly as lessers to be manipulated and worked around instead of interacting with on an equal basis has numerous merits.

--Though that then triggers a mental reminder that there's a sequence (?) somewhere with something to say about {not becoming arrogant and pitying others} that may have a way to {likewise treat people as irresponsible and manipulate them accordingly, but without looking down on them} if I reread it.

Comment by Multipartite on Reversed stupidity sometimes provides useful information · 2011-09-28T22:56:19.881Z · LW · GW

Note that these people believing this thing to be true does not in fact make it any likelier to be false. We judge it to be less {more likely to be true} than we would for a generic positing by a generic person, down to the point of no suspicion one way or the other, but this positing is not in fact reversed into a positive impression that something is false.

If one takes two otherwise-identical worlds (unlikely, I grant), one in which a large body of people X posit Y for (patently?) fallacious reasons and one in which that large body of people posit the completely-unrelated Z instead, then it seems that a rational (?) individual in both worlds should have roughly the same impression on whether Y is true or false, rather than in the one world believing Y to be very likely to be false.

One may not give the stupid significant credence when they claim it to be day, but one still doesn't believe it any more likely to be night (than one did before).
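(A numerical restatement of the above, with invented numbers: if the fallacious assertion is about equally likely whether the thing is true or false, the likelihood ratio is 1, and the posterior stays at the prior rather than dropping below it.)

```python
def posterior(prior: float, p_assert_if_true: float, p_assert_if_false: float) -> float:
    """Bayes' rule for P(claim is true | the assertion was made)."""
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

prior = 0.5
print(posterior(prior, 0.3, 0.3))   # 0.5   -- uninformative assertion: no update either way
print(posterior(prior, 0.6, 0.3))   # 0.667 -- only an informative assertion moves the estimate
```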

((As likely noted elsewhere, the bias-acknowledgement situation results in humans being variably treated as more stupid and less stupid depending on local topic of conversation, due to blind spot specificity.))

Comment by Multipartite on Two Cult Koans · 2011-09-26T16:46:41.655Z · LW · GW

Thank you!

If I may ask, is there a location in wiki.lesswrong.com or elsewhere which describes how to use quote-bars (of the type used in your comment for my paragraph) and similar?

Comment by Multipartite on Circular Altruism · 2011-09-26T16:38:10.752Z · LW · GW

Indeed. nods

If sacrifice of myself was necessary to (hope to?) save the person mentioned, I hope that I would {be consistent with my current perception of my likely actions} and go through with it, though I do not claim complete certainty of my actions.

If those that would die from the hypothetical disease were the soon-to-die-anyway (very elderly/infirm), I would likely choose to spend my time on more significant areas of research (life extension, more-fatal/-painful diseases).

If all other significant areas had been dealt with or were being adequately dealt with, perhaps rendering the disease the only remaining ailment that humanity suffered from, I might carry out the research for the sake of completeness. I might also wait a few decades depending on whether or not it would be fixed even without doing so.

A problem here is that the more inconvenient I make one decision, the more convenient I make the other. If I jump ahead to a hypothetical cases where the choices were completely balanced either way, I might just flip a coin, since I presumably wouldn't care which one I took.

Then again, the stacking could be chosen such that no matter which I took it would be emotionally devastating... that though conveniently (hah) comprises such a slim fraction of all possibilities that I gain by assuming there to always be some aspect that would make a difference or which could be exploited in some way, given that if there were then I could find it and make a sound decision, and that if there weren't my position would not in fact change (by the nature of the balanced setup).

Stepping back and considering the least-convenient consideration argument, I notice that its primary function may be to get people to accept two options as both conceivable in different circumstances, rather than rejecting one on technicalities. If I already acknowledge that I would be inclined to make a different choice depending on different circumstances, am I freed from that application I wonder?

Comment by Multipartite on Circular Altruism · 2011-09-26T16:21:53.711Z · LW · GW

I note the answer to this seems particularly straightforward if the few dozen who would probably die would also have been young and healthy at the time. Even more convenient if the subject is a volunteer, and/or if the experimenter (possibly with a staff of non-sentient robot record-keepers and essay-compilers, rather than humans?) did them on himself/herself/themself(?).

(I personally have an extremely strong desire to survive eternally, but I understand there are (/have historically been) people who would willingly risk death or even die for certain in order to save others. Perhaps if sacrificing myself was the only way to save my sister, say, though that's a somewhat unfair situation to suggest as relevant. Again, tempting to just use a less-egocentric volunteer instead if available.) (Results-based reasoning, rather than idealistic/cautious action-based reasoning. Particularly given public backlash, I can understand why a governmental body would choose to keep its hands as clean as possible instead and allow a massive tragedy rather than staining their hands with a sin. Hmm.)

Comment by Multipartite on Stanislav Petrov Day · 2011-09-26T16:13:02.875Z · LW · GW

Humanity can likely be assumed to, under those conditions, balloon out to its former proportions (following the pattern of population increase to the point that available resources can no longer support further increase).

One possibility is that this would represent a delay in the current path and not much else, though depending on the length of time needed to rebuild our infrastructure it could make a large difference in efforts to establish humanity as safely redundant (outside Earth, that is).

Another possibility (the Oryx and Crake concept) is that due to the exhaustion of shallow metal deposits, oil depletion et cetera the current level of infrastructure would in fact not be regained, in which case the approximately-dark-ages humans would exist across the planet until it became uninhabitable and they all died (even if six billion years from now).

Another (granted, comparatively unlikely) is that the various fallout from the war would prevent humanity from bouncing back in any form, in which case even the survivors would disappear relatively quickly.

One can hope that any long-term prevention for the overpopulation consequences that might have been used in that future could also be used in our future. (Personally, hoping for the Singularity approach, which seems much harder to achieve without an Internet and with a much smaller population slowly spreading out and starting to rebuild the ruins of empire.)

Comment by Multipartite on Eight Short Studies On Excuses · 2011-09-24T19:13:48.103Z · LW · GW

I note that while most of the examples seem reasonable, the Dictator instance seems to stand out: by accepting the trumped-up prospector excuse as admissible, the organisation is agreeing to any similarly flimsy excuse that a country could make (e.g. the route not taken in The Sports Fan). The Lazy Student also comes to mind in terms of being an organisation that would accept such an argument, thus others also making it.

(Hm... I wonder if a valid equivalent of the Grieving case would be if the other country had in fact launched an easily-verifiable full-scale offensive large enough to necessitate occupation in order to stop it and protect the attacked country.)

While in the other cases, the decision-makers seem to maintain consistency regarding their possessed priorities, in the Dictator case it looks as though the only way the decision-maker can be said to have made the right decision is if one assumes the organisation only cares about public approval (a spin favourable to them) and does not in fact care whether or not countries invade each other based on transparent lies.

On a slightly related note, if a country/entity were so stupid/paranoid that it in good faith invaded countries left and right for flimsy reasons, it would be in appropriate keeping with such a decision-making organisation's stated goals to take it down as a mad dog (-equivalent) before it did any more harm to law-abiding countries/entities.

A different interpretation might require the prospector case to in fact be very unlikely, a near-miracle that the dictator leaped on as an unhoped-for opportunity rather than as one of any number of possible {grounds for unjustified action} that would certainly have come up sooner or later if one waited long enough.

Thinking intuitively, though, even if every country freezes its borders in terror to prevent any movements being declared as casus belli, there are all sorts of as-easy-to-see-through things that could be done instead... (Mostly involving forgery/bribery/baseless lying, granted.)

Comment by Multipartite on Welcome to Less Wrong! (2010-2011) · 2011-09-23T23:09:57.925Z · LW · GW

Greetings. I apologise for possible oversecretiveness, but for the moment I prefer to remain in relative anonymity; this is a moniker used online and mostly kept from overlapping with my legal identity.

Though in a sense there is consistency of identity, for fairness I should likely note that my use of first-person pronouns may not always be entirely appropriate.

Personal interest in the Singularity can probably be ultimately traced back to the fiction Deus Ex, though I hope it would have reached it eventually even without it as a starting point; my experience with this location, including a number of the sequences, comes from having encountered Yudkowsky-related things when reading Singularity-related information online (following a large web of references all at once, more or less) some time ago.

Depending on my future activity in this location, I may reveal more details about my current or future state of existence, but for the moment I plan to take advantage of the new existence of this account to lightly (?) engage in discussion when there is something I find I want to say.

Please go easy on me; I look forward to getting to know you all.

Comment by Multipartite on Two Cult Koans · 2011-09-23T22:33:41.737Z · LW · GW

(Unlurking and creating an account for use from this point onwards; please go easy on me.)

Something I found curious in the reading of the comments for this article is the perception that Bouzo took away the conclusion that clothing was in fact important for probability.

Airing my initial impression for possible contrast (/as an indication of my uncertainty): When I read the last sentence, I imagined an unwritten 'And in that moment the novice was enlightened', mirroring the structure of certain koans I once glanced through.

My interpretation is/was that those words of Ougi's were what caused the novice to realise his error (in focusing on the clothing rather than the teachings when considering likelihood of cultishness), the absurdity of a worn hat affecting one's understanding revealing the absurdity of the clothing worn by those in the dojo inherently mattering for their rationality (though arguments could be made about indirect advantages and/or disadvantages in both directions?).

From that, the clown suit could be taken as a result of him being humbled by this lesson, it making such a deep impression on him that, taking it to heart, he established something to remind him (and others?) of it as part of his daily behaviour.

(This would mean that, rather than thinking a clown suit really could make him more rational, he wears a clown suit when discussing rationality to dramatically demonstrate the lack of importance of what he wears for what he says. I'm also reminded of the concept of abnormality being going to school in a clown suit rather than in all black, though this is arguably not directly relevant.)

I am as yet unfamiliar with the architecture of this location, and so do not know if anyone will automatically know about it nor in fact if it will ever be read, but it nevertheless is a significant weight off my mind to speak about my impressions alongside those of others. I thank you all.