Comments

Comment by DSherron on I notice that I am confused about Identity and Resurrection · 2013-11-15T05:23:32.393Z · LW · GW

After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of:

  • In short form, a "future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience"
  • In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
  • It also must have memory, and that memory must include a stream of consciousness which leads back to the stream of consciousness I am experiencing right now, to approximately the same fidelity as I currently have memory of a continuous stream of consciousness going back to approximately adolescence.

Essentially, the idea is that in order for something to count as being me, it must be the sort of thing which I can imagine becoming in the future (future relative to my conscious experience; I feel like I am progressing through time), while still believing myself to be me the whole time. For example, imagine that, through some freak accident, there existed a human living in the year 1050 AD who passed out and experienced an extremely vivid dream which just so happens to be identical to my life up until the present moment. I can imagine waking up and discovering that to be the case; I would still feel like me, even as I incorporated whatever memories and knowledge he had so that I would also feel like I was him. That situation contains a "future evolution" of me in the present, which just means "a thing which I can become in the future without breaking my stream of consciousness, at least not any more than normal sleep does today".

This also implies that anything which diverged from me at some point in the past does not count as "me", unless it is close enough that it eventually converges back (this should happen within hours or days for minor divergences, like placing a pen in a drawer rather than on a desk, and will never happen for divergences with cascading effects (particularly those which significantly alter the world around me, in addition to me)).

Obviously I'm still confused too. But I'm less confused than I used to be, and hopefully after reading this you're a little less confused too. Or at least, hopefully you will be after reflecting a bit, if anything resonated at all.

Comment by DSherron on Change the labels, undo infinitely good improvements · 2013-11-02T02:15:34.068Z · LW · GW

No, because "we live in an infinite universe and you can have this chocolate bar" is trivially better. And "We live in an infinite universe and everyone not on earth is in hell" isn't really good news.

Comment by DSherron on Why officers vs. enlisted? · 2013-11-02T01:42:21.725Z · LW · GW

You're conflating responsibility/accountability with things from which they don't naturally follow. And I think you know that last line was clearly B.S. (given that the entire original post was about something which is not identical to accountability - you should have known that the most reasonable answer to that question is "agentiness"). Considering their work more important, or considering them to be smarter, is alleged by the post to not be the entirety of the distinction between the hierarchies; after all, if the only difference were brains or status, then there would be no need for TWO distinct hierarchies. There is a continuous distribution of status and brains throughout both hierarchies (as opposed to a sharp break where even the lowest officer is significantly smarter or higher status than the highest soldier), so if brains or status were the whole story it would seem more reasonable to just merge the two into one.

One thing which might help to explain the difference is the concept of "agentiness", not linked to certain levels of difficulty of roles, but rather to the type of actions performed by those roles. If true, then the distinguishing feature between an officer and a soldier is that officers have to be prepared to solve new problems which they may not be familiar with, while soldiers are only expected to perform actions they have been trained on. For example, an officer may have the task of "deal with those machine gunners", while a soldier would be told "sweep and clear these houses". The officer has to creatively devise a solution to a new problem, while the soldier merely has to execute a known decision tree. Note that this has nothing to do with the difficulty of the problem. There may be an easy solution to the first problem, while the second may be complex and require very fast decision making on the local scale (in addition to the physical challenge). But given the full scope of the situation, it is easy to look at the officer and say "I think you would have been better off going further around and choosing a different flank, to reduce your squad's casualties; but apparently you just don't have that level of tactical insight. No promotion for you, maybe next time". To the soldier, it would be more along the lines of "You failed to properly check a room before calling it clear, and missed an enemy combatant hiding behind a desk. This resulted in several casualties as he ambushed your squadmates. You're grounded for a week." The difference is that an officer is understood to need special insight to do his job well, while a soldier is understood to just need to follow orders without making mistakes. It's much easier to punish someone for failing to fulfill the basic requirements of their job than it is to punish them for failing to achieve an optimal result given vague instructions.

EDIT: You've provided good reason to expect that officers should get harsher punishments than soldiers, given the dual hierarchy. I claim that the theory of "agentiness" as the distinguishing feature between these hierarchies predicts that officers will receive punishments much less severe than your model would suggest, while soldiers will be more harshly punished. In reality, it seems that officers don't get held accountable to the degree your model predicts they would, based on their status, while soldiers get held more accountable. This is evidence in favor of the "agentiness" model, not against it as you originally suggested. The core steps of my logic are: the "agentiness" model predicts that officers are not punished as severely as you'd otherwise expect, while soldiers are punished more severely; therefore, the fact that in the real world officers are not punished as harshly as you'd otherwise expect is evidence for the "agentiness" model at the expense of any models which don't predict that. If you disagree with those steps, please specify where/how. If you disagree with an unstated/implied assumption outside of these steps, please specify which. If I'm not making sense, or if I seem exceedingly stupid, there's probably been a miscommunication; try to point at the parts that don't make sense so I can try again.

Comment by DSherron on Why officers vs. enlisted? · 2013-10-31T23:41:59.525Z · LW · GW

Doesn't that follow from the agenty/non-agenty distinction? An agenty actor is understood to make choices where there is no clear right answer. It makes sense that mistakes within that scope would be punished much less severely; it's hard to formally punish someone if you can't point to a strictly superior decision they should have made but didn't. Especially considering that even if you can think of a better way to handle that situation, you still have to not only show that they had enough information at the time to know that that decision would have been better ("hindsight is 20/20"), but also that such an insight would have been strictly within the requirements of their duties (rather than it requiring an abnormally high degree of intelligence/foresight/clarity/etc.).

Meanwhile, a non-agenty actor is merely expected to carry out a clear set of actions. If a non-agenty actor makes a mistake, it is easy to point to the exact action they should have taken instead. When a cog in the machine doesn't work right, it's simple to point it out and punish it. Therefore it makes a lot of sense that they get harsher punishments, because their job is supposed to be easier. Anyone can imagine a "perfect" non-agenty actor doing their job, as a point of comparison, while imagining a perfect "agenty" actor requires that you be as good at performing that exact role, including all the relevant knowledge and intelligence, as such a perfect actor.

Ultimately, it seems like observing that agenty actors suffer less severe punishments ought to support the notion that agentiness is at the least believed to be a cluster in thingspace. Of course, this will result in some unfair situations; "agenty" actors do sometimes get off the hook easy in situations where there was actually a very clear right decision and they chose wrong, while "non-agenty" actors will sometimes be held to an impossible standard when presented with situations where they have to make meaningful choices between unclear outcomes. This serves as evidence that "agentiness" is not really a binary switch, thus marking this theory as an approximation, although not necessarily a bad approximation even in practice.

Comment by DSherron on [deleted post] 2013-10-27T20:16:55.334Z

...Or you could notice that requiring that order be preserved when you add another member is outright assuming that you care about the total and not about the average. You assume the conclusion as one of your premises, making the argument trivial.

Comment by DSherron on What should normal people do? · 2013-10-26T21:08:26.364Z · LW · GW

Better: randomly select a group of users (within some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then after a certain amount of time give another public list of those who did or didn't take it, along with the results (although don't associate results with names). That will get you better participation, and the fact that you have taken a group of known size makes it much easier to give outer bounds on the size of the selection effect caused by people not participating.
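
To make the "outer bounds" point concrete, here's a minimal sketch (my own illustration, with made-up numbers, not part of the original proposal): because the invited group has a known size, non-response can only shift a measured proportion by a bounded amount.

```python
# Sketch of the "outer bounds" point: with an invited group of known size,
# non-response can only move a measured proportion so far. All numbers here
# are made up for illustration.

def proportion_bounds(n_invited: int, n_responded: int, n_positive: int):
    """Worst-case bounds on the true positive rate among everyone invited."""
    n_missing = n_invited - n_responded
    low = n_positive / n_invited                 # every non-respondent is negative
    high = (n_positive + n_missing) / n_invited  # every non-respondent is positive
    return low, high

# 20 users invited, 14 took the test, 9 of them scored "positive" on whatever
# is being measured:
print(proportion_bounds(20, 14, 9))  # (0.45, 0.75)
```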

You can also improve participation by giving those users an easily accessible icon on Less Wrong itself which takes them directly to the test, and maybe a popup reminder once a day or so when they log on to the site if they've been selected but haven't done it yet. Requires moderate coding.

Comment by DSherron on The Ultimate Sleeping Beauty Problem · 2013-09-30T04:00:34.297Z · LW · GW

She responds "I'm sorry, but while I am a highly skilled mathematician, I'm actually from an alternate universe which is identical to yours except that in mine 'subjective probability' is the name of a particularly delicious ice cream flavor. Please precisely define what you mean by 'subjective probability', preferably by describing in detail a payoff structure such that my winnings will be maximized by selecting the correct answer to your query."

Comment by DSherron on The Ultimate Newcomb's Problem · 2013-09-10T03:28:17.987Z · LW · GW

Written before reading the comments; the answer was decided within or close to the two-minute window.

I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I prefer the number to be composite, therefore I take both boxes in the anticipation that when I do so I will (correctly) be able to update to 99.9% probability that the number is composite.

Thinking this through actually led me to a bit of insight on the original Newcomb's problem, namely that last part about updating my beliefs based on which action I choose to take, even when that action has no causal effects on the subject of my beliefs. Taking an action allows you to strongly update your belief about which action you would take in that situation; in cases where that fact is causally connected to others (in this case Omega's prediction), you can then update through those connections.
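
A minimal sketch of that update, under assumptions that are mine rather than the original post's (Omega is 99.9% accurate, and the number is composite exactly when Omega predicted two-boxing):

```python
# Sketch of the update described above. Assumptions (mine, not the post's):
# Omega predicts with 99.9% accuracy, and the number is composite exactly
# when Omega predicted two-boxing.

ACCURACY = 0.999

def p_composite_given_action(two_box: bool) -> float:
    # Conditioning on my own choice tells me what kind of agent I am, and
    # hence what Omega most likely predicted.
    p_predicted_two_box = ACCURACY if two_box else 1 - ACCURACY
    return p_predicted_two_box  # composite iff Omega predicted two-boxing

print(p_composite_given_action(two_box=True))   # 0.999
print(p_composite_given_action(two_box=False))  # 0.001
```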

Comment by DSherron on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T06:19:11.257Z · LW · GW

It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats which might be discussed on LW (and thus get the gatekeeper to probably break some immersion) are going to be generally bad ideas. But I could see some sort of (possibly veiled/implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react in some respects as though the threat was real. This would definitely not apply to most people though, and I would not be shocked to discover that getting to the required level of immersion isn't humanly feasible except in very rare edge cases.

Comment by DSherron on [SEQ RERUN] "I don't know." · 2013-08-13T01:17:38.791Z · LW · GW

His answer isn't random. It's based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or knew different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, and believed that he was a rationalist and at the least not worse at coming to the right answer than you, and you had a different estimate than he did, then you still ought to update towards his estimate (Aumann's Agreement Theorem).

This does illustrate the point that simply stating your final probability distribution isn't really sufficient to tell everything you know. Not surprisingly, you can't compress much past the actual original evidence without suffering at least some amount of information loss. How important this loss is depends on the domain in question. It is difficult to come up with a general algorithm for useful information transfer even just between rationalists, and you cannot really do it at all with someone who doesn't know probability theory.

Comment by DSherron on What Bayesianism taught me · 2013-08-12T20:32:48.587Z · LW · GW

Does locking doors generally lead to preventing break-ins? I mean, certainly in some cases (cars most notably) it does, but in general, if someone has gone up to your back door with the intent to break in, how likely are they to give up and leave upon finding it locked?

Comment by DSherron on What Bayesianism taught me · 2013-08-12T20:26:57.689Z · LW · GW

Nitpicking is absolutely critical in any public forum. Maybe in private, with only people who you know well and have very strong reason to believe are very much more likely to misspeak than to misunderstand, nitpicking can be overlooked. Certainly, I don't nitpick every misspoken statement in private. But when those conditions do not hold, when someone is speaking on a subject I am not certain they know well, or when I do not trust that everyone in the audience is going to correctly parse the statement as misspoken and then correctly reinterpret the correct version, nitpicking is the only way to ensure that everyone involved hears the correct message.

Charitably I'll guess that you dislike nitpicking because you already knew all those minor points, they were obvious to anyone reading after all, and they don't have any major impact on the post as a whole. The problem with that is that not everyone who reads Less Wrong has a fully correct understanding of everything that goes into every post. They don't spot the small mistakes, whether those be inconsequential math errors or a misapplication of some minor rule or whatever. And the problem is that just because the error was small in this particular context, it may be a large error in another context. If you mess up your math when doing Bayes' Theorem, you may thoroughly confuse someone who is weak at math and trying to follow how it is applied in real life. In the particular context of this post, getting the direction of a piece of evidence wrong is inconsequential if the magnitude of that evidence is tiny. But if you are making a systematic error which causes you to get the direction wrong for certain types of evidence that are usually small in magnitude, then you will eventually make a large error. And unless you are allowed to call out errors dealing with small-magnitude pieces of evidence, you won't ever discover it.

I'd also like to say that just because a piece of evidence is "barely worth mentioning" when listing out evidence for and against a claim, does not mean that that evidence should be immediately thrown aside when found. The rules which govern evidence strong enough to convince me that 2+2=3 are the same rules that govern the evidence gained from the fact that when I drop an apple, it falls. You can't just pretend the rules stop applying and expect to come out ok in every situation. In part you can gain practice from applying the rules to those situations, and in part it's important to remember that they do still apply, even if in the end you decide that their outcome is inconsequential.
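
As a rough illustration of how systematically flipping the direction of small pieces of evidence compounds (the numbers are mine, purely for illustration):

```python
# Many individually tiny pieces of evidence, each counted in the wrong
# direction, add up to a large error in log-odds space. Numbers are
# illustrative only.
import math

likelihood_ratio = 1.05   # each piece is "barely worth mentioning"
n_pieces = 50

correct_shift = n_pieces * math.log(likelihood_ratio)       # about +2.44 nats
flipped_shift = n_pieces * math.log(1 / likelihood_ratio)   # about -2.44 nats

print(correct_shift, flipped_shift)
# The gap is roughly 4.9 nats -- the difference between ending up at about
# 11:1 odds in favor of a claim and about 11:1 odds against it.
```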

Comment by DSherron on Rationality Quotes August 2013 · 2013-08-04T22:49:14.255Z · LW · GW

While you can be definitively wrong, you cannot be definitely right.

Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes' Theorem.

Note: This means that you cannot be definitively wrong, not that you can be definitively right.
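
Restating that point in probability notation (just my restatement, not an addition to the argument):

\[
P(\neg A) = 1 - P(A), \qquad \text{so } P(A) = 0 \iff P(\neg A) = 1 .
\]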

Comment by DSherron on The Fermi paradox as evidence against the likelyhood of unfriendly AI · 2013-08-02T23:11:44.548Z · LW · GW

They should be very, very slightly less visible (they will have slightly fewer resources to use due to expending some on keeping their parent species happy, and FAI is more likely to have a utility function that intentionally keeps itself invisible to intelligent life than UFAI, even though that probability is still very small), but this difference is negligible. Their apparent absence is not significantly more remarkable, in comparison to the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.

Comment by DSherron on Rationality Quotes August 2013 · 2013-08-02T19:29:38.189Z · LW · GW

Alternatively, if you define solution such that any two given solutions are equally acceptable with respect to the original problem.

Comment by DSherron on low stress employment/ munchkin income thread · 2013-07-30T06:25:37.821Z · LW · GW

WOW. I predicted that I would have a high tolerance for variance, given that I was relatively unfazed by things that I understand most people would be extremely distressed by (failing out of college and getting fired). I was mostly right in that I'm not feeling stress, exactly, but what I did not predict was a literal physical feeling of sickness after losing around $20 to a series of bad plays (and one really bad beat, although I definitely felt less bad about that one after realizing that I really did play the hand correctly). It wasn't even originally money from my wallet; it came from one of the free offers linked elsewhere in this thread. But, wow, this advice is really really good. I can only imagine what it's like with even worse variance or for someone more inclined to stress about this sort of thing.

Comment by DSherron on low stress employment/ munchkin income thread · 2013-07-28T20:48:16.983Z · LW · GW

If I can do something fun, from my house, on my own hours, without any long-term commitment, and make as much money as a decent paying job, then that sounds incredible. Even if it turns out I can't play at high levels, I don't mind playing poker for hours a day and making a modest living from it. I don't really need much more than basic rent/food/utilities in any case.

Comment by DSherron on low stress employment/ munchkin income thread · 2013-07-24T00:30:22.193Z · LW · GW

Online poker (but it seems kinda hard)

Actually, does anyone know any good resources for getting up to speed on poker strategies? I'm smart, I'm good at math, I'm good at doing quick statistical math, and I've got a lot of experience at avoiding bias in the context of games. Plus I'm a decent programmer, so I should be able to take even more of an advantage by writing a helper bot to run the math faster and more accurately than I otherwise could. It seems to me that I should be able to do well at online poker, and this would be the sort of thing that I could likely actually get motivated to do to make money (which I unfortunately need to do).

Anyway, if anyone has any recommendations for how to go about the learning process and getting into playing, I'd love to hear them. I'll try to comment back here after doing some independent research as well.
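
For what it's worth, here's a minimal sketch of the sort of arithmetic such a helper could automate (pot odds versus required equity); the function names and numbers are my own illustration, not any real poker tool or library:

```python
# Minimal pot-odds helper sketch. Names and numbers are illustrative only.

def pot_odds(pot: float, to_call: float) -> float:
    """Fraction of the final pot you would be contributing by calling."""
    return to_call / (pot + to_call)

def call_is_profitable(win_probability: float, pot: float, to_call: float) -> bool:
    """A call is +EV (ignoring future betting) when equity exceeds pot odds."""
    return win_probability > pot_odds(pot, to_call)

# Example: $10 pot, $5 to call -> you need more than 1/3 equity to call.
print(pot_odds(10, 5))                 # 0.333...
print(call_is_profitable(0.40, 10, 5)) # True
```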

Comment by DSherron on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-07T05:05:53.348Z · LW · GW

You see an animal at a distance. It looks like a duck, walks like a duck, and quacks like a duck. You start to get offended by the duck. Then, you get closer and realize the duck was a platypus and not a duck at all. At this point, you realize that you were wrong, in a point of fact, to be offended. You can't claim that anything that looks like a duck, but which later turns out not to be, is offensive. If it later turns out not to be a duck then it was never a duck, and if you haven't been able to tell for sure yet (but will be able to in the future) then you need to suspend judgement until you can. Particularly since there is no possible defense that the thing is not a duck except to show you that it is not a duck, which will happen in time.

Comment by DSherron on [HPMOR][Possible Spoilers] Gedankenexperiment: Time Turner Meta-Informational Relativity · 2013-07-05T20:20:10.534Z · LW · GW

Given no other information to strictly verify, any supposed time-traveled conversation is indistinguishable from someone not having time-traveled at all and making the information up. The true rule must depend on the actual truth of information acquired, and the actual time such information came from. Otherwise, the rule is inconsistent. It also looks at whether your use of time travel actually involves conveying the information you gained; whether such information is actually transferred to the past, not merely whether it could be. Knowing that Amelia Bones has some information about 4 hours in the future will only restrict your time travel if you would transmit that information to the past - if you would act significantly differently knowing that than you would have otherwise. If you act the same either way, then you are not conveying information.

In short, the rule is that you cannot convey information more than 6 hours into the information's relative past, but that does not necessarily mean that you cannot go to a forbidden part of the past after learning it. It merely means that you cannot change your mind about doing so after learning it. Worth noting: if you plan on going to the past, and then receive some information from 6 hours in the future that changes your mind, you have conveyed information to the past. I'm not sure how that is handled, other than that the laws of the universe are structured as to never allow it to happen.

Comment by DSherron on Rationality Quotes July 2013 · 2013-07-02T14:02:23.748Z · LW · GW

The ideal attitude for humans with our peculiar mental architecture probably is one of "everything is amazing, also let's make it better" just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to say "this state is awesome;" you have to say "more awesome" or "less awesome" compared to something else.

Also, such behavior is not compatible with the quote. The quote advocates ignoring real suboptimal sections of the world and instead basking in how much better the world is than it used to be. How are you supposed to make the drinks better if you're not even allowed to admit they're not perfect? I could, with minor caveats, get behind "things are great, let's make them better" but that's not what the quote said. The quote advocates pretending that we've already achieved perfection.

Comment by DSherron on Rationality Quotes July 2013 · 2013-07-02T13:52:00.185Z · LW · GW

Sure, I agree with that. But you see, that's not what the quote said. It's actually not even related to what the quote said, except in very tenuous ways. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.

Comment by DSherron on Rationality Quotes July 2013 · 2013-07-01T23:58:16.526Z · LW · GW

That honestly seems like some kind of fallacy, although I can't name it. I mean, sure, take joy in the merely real, that's a good outlook to have; but it's highly analogous to saying something like "Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life - eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about [starving kid in Africa|environmental pollution|dying peacefully of old age|generally any way in which the world is suboptimal]."

That kind of outlook not only doesn't support any kind of progress, or even just utility maximization, it actively paints the very idea of making things even better as presumptuous and evil. It is not enough for something to be merely awe-inspiring; I want more. I want to not just watch a space shuttle launch (which is pretty cool on its own), but also have a drink that tastes better than any other in the world, with all of my best friends around me, while engaged in a thrilling intellectual conversation about strategy or tactics in the best game ever created. While a wizard turns us all into whales for a day. On a spaceship. A really cool spaceship. I don't just want good; I want the best. And I resent the implication that I'm just ungrateful for what I have. Hell, what would all those people who invested the blood, sweat, and tears to make modern flight possible say if they heard someone suggesting that we should just stick to the status quo because "it's already pretty good, why try to make it better?" I can guarantee they wouldn't agree.

Comment by DSherron on Open Thread, July 1-15, 2013 · 2013-07-01T20:51:00.817Z · LW · GW

If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until someone gets to this point, there is no point in trying to engage them in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.

For people having an otherwise rational debate, they need to at this point drop the Green and Blue labels (any rationalist should be happy to do so, since they're just a shorthand for the full belief system) and start specifying their actual beliefs. The fact that one identifies as a Green or a Blue is a red flag of glaring irrationality, confirmed if they refuse to drop the label to talk about individual beliefs, in which case do the above. Sticking with the labels is a way to make your beliefs feel stronger, via something like a halo effect where every good thing about Green or Greens gets attributed to every one of your beliefs.

Comment by DSherron on An attempt at a short no-prerequisite test for programming inclination · 2013-07-01T20:08:20.254Z · LW · GW

Answered "moderate programmer, incorrect". I got the correct final answer but had 2 boxes incorrect. Haven't checked where I went wrong, although I was very surprised I had as back in grade school I got these things correct with near perfection. I learned programming very easily and have traditionally rapidly outpaced my peers, but I'm only just starting professionally and don't feel like an "experienced" programmer. As for the test, I suspect it will show some distinction but with very many false positives and negatives. There are too many uncovered aspects of what seems to make up a natural programmer. Also, it is tedious as hell, and I suspect that boredom will lead to recklessness will lead to false negatives, which aren't terrible but are still not good. May also lead to some selection effect.

Comment by DSherron on Newbomb's parabox · 2013-07-01T19:35:43.612Z · LW · GW

God f*ing damn it. Again? He has 99.9% accuracy, problem resolved. Every decision remains identical unless a change of 1/1000 in your calculations causes a different action, which it never should in Newcomboid problems.

Note to anyone and everyone who encounters any sort of hypothetical with a "perfect" predictor: if you write it, always state an error rate, and if you read it then assume one (but not one higher than whatever error rate would make a TDT agent choose to two-box).
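
As a sketch of where that threshold sits under the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 predicted into the opaque box - my assumption; the original post may use different numbers), using a naive evidential expected-value comparison rather than a full TDT calculation:

```python
# At what predictor error rate does one-boxing stop looking better?
# Assumes standard Newcomb payoffs and a naive evidential EV comparison.

def one_box_ev(error_rate: float) -> float:
    # Predictor correct -> opaque box is full; wrong -> it's empty.
    return (1 - error_rate) * 1_000_000

def two_box_ev(error_rate: float) -> float:
    # Predictor correct -> opaque box is empty; wrong -> it's full.
    return 1_000 + error_rate * 1_000_000

for e in (0.001, 0.1, 0.4995, 0.5):
    print(e, one_box_ev(e) > two_box_ev(e))
# Indifference where (1 - e) * 1e6 = 1e3 + e * 1e6, i.e. e = 999/2000 = 0.4995,
# so a 0.1% error rate changes nothing about the usual analysis.
```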

Comment by DSherron on Infinite Certainty · 2013-06-29T17:29:38.970Z · LW · GW

Right, I didn't quite work all the math out precisely, but at least the conclusion was correct. This model is, as you say, exclusively for fatal logic errors; the sorts where the law of non-contradiction doesn't hold, or something equally unthinkable, such that everything you thought you knew is invalidated. It does not apply in the case of normal math errors for less obvious conclusions (well, it does, but your expected utility given no errors of this class still has to account for errors of other classes, where you can still make other predictions).

Comment by DSherron on Infinite Certainty · 2013-06-29T07:26:39.626Z · LW · GW

That's not how decision theory works. The bounds on my probabilities don't actually apply quite like that. When I'm making a decision, I can usefully talk about the expected utility of taking the bet under the assumption that I have not made an error, multiply that by the odds of me not making an error, and then add the result to the (similarly weighted) expected utility of taking the bet given that I have made an error. This will give me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I've made a logic error; after all, given that my entire reasoning is wrong, I shouldn't expect taking the bet to be any better or worse than not taking it. In shorter terms: EU(action) = EU(action & ¬error) + EU(action & error); also EU(action & error) = EU(anyOtherAction & error), meaning that when I compare any 2 actions I get EU(action) - EU(otherAction) = EU(action & ¬error) - EU(otherAction & ¬error). Even though my probability estimates are affected by the presence of an error factor, my decisions are not. On the surface this seems like an argument that the distinction is somehow trivial or pointless; however, the critical difference comes in the fact that while I cannot predict the nature of such an error ahead of time, I can potentially recover from it iff I assign >0 probability to it occurring. Otherwise I will never ever assign it anything other than 0, no matter how much evidence I see. In the incredibly improbable event that I am wrong, given extraordinary amounts of evidence I can be convinced of that fact. And that will cause all of my other probabilities to update, which will cause my decisions to change.
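
The same decomposition with the probability weights written out explicitly, where \(E\) is the event that my reasoning contains a fatal error (just a restatement of the notation above):

\[
\mathrm{EU}(a) = P(\neg E)\,\mathrm{EU}(a \mid \neg E) + P(E)\,\mathrm{EU}(a \mid E),
\]
and since \(\mathrm{EU}(a \mid E) = \mathrm{EU}(b \mid E)\) for any two actions \(a\) and \(b\),
\[
\mathrm{EU}(a) - \mathrm{EU}(b) = P(\neg E)\,\bigl[\mathrm{EU}(a \mid \neg E) - \mathrm{EU}(b \mid \neg E)\bigr].
\]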

Comment by DSherron on Explain/Worship/Ignore? · 2013-06-29T05:19:45.893Z · LW · GW

Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.

Comment by DSherron on Emotional Basilisks · 2013-06-28T23:44:48.765Z · LW · GW

This comment fails to address the post in any way whatsoever. No claim is made of the "right" thing to do; a hypothetical is offered, and the question asked is "what do you do?" It is not even the case that the hypothetical rests on an idea of an intrinsic "right thing" to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It's not an especially interesting or original question, but it does not make any claims which are relevant to your comment.

EDIT: That does make more sense, although I'd never seen that particular example used as "fighting the hypothetical", more just that "the right thing" is insufficiently defined for that sort of thing. Downvote revoked, but it's still not exactly on point to me. I also don't agree that you need to fight the hypothetical this time, other than to get rid of the particular example.

Comment by DSherron on Emotional Basilisks · 2013-06-28T23:38:32.055Z · LW · GW

While I don't entirely think this article was brilliant, it seems to be getting downvoted in excess of what seems appropriate. Not entirely sure why that is, although a bad choice of example probably helped push it along.

To answer the main question: need more information. I mean, it depends on the degree to which the negative effects happen, and the degree to which it seems this new belief will be likely to have major positive impacts on decision-making in various situations. I would, assuming I'm competent and motivated enough, create a secret society which generally kept the secret but spread it to all of the world's best and brightest, particularly in fields where knowing the secret would be vital to real success. I would also potentially offer a public face of the organization, where the secret is openly offered to any willing to take on the observed penalties in exchange for the observed gains. It could only be given out to those trusted not to tell, of course, but it should still be publicly offered; science needs to know, even if not every scientist needs to know.

Comment by DSherron on Infinite Certainty · 2013-06-28T22:53:07.442Z · LW · GW

Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form though, that it is impossible - I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.

As to what probability you assign; I do not find it in the slightest improbable that you claim 100% certainty in full honesty. I do question, though, whether you would make literally any bet offered to you. Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain - you'd be indifferent on the bet, and you get free signaling from it.

Comment by DSherron on Infinite Certainty · 2013-06-28T18:48:15.555Z · LW · GW

Sure, it sounds pretty reasonable. I mean, it's an elementary facet of logic, and there's no way it's wrong. But, are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering into any given state for no good reason at all due to quantum effects. Ridiculously unlikely, but not literally 0. Unless you believe with absolute certainty that it is impossible to have the subjective experience of believing that A implies not A in the same way you currently believe that A implies A, then you can't say that you are literally 100% certain. You will feel 100% certain, but this is a very different thing than actually literally possessing 100% certainty. Are you certain, 100%, that you're not brain damaged and wildly misinterpreting the entire field of logic? When you posit certainty, there can be literally no way that you could ever be wrong. Literally none. That's an insanely hard thing to prove, and subjective experience cannot possibly get you there. You can't be certain about what experiences are possible, and that puts some amount of uncertainty into literally everything else.

Comment by DSherron on Infinite Certainty · 2013-06-28T16:39:49.842Z · LW · GW

"Exist" is meaningful in the sense that "true" is meaningful, as described in EY's The Simple Truth. I'm not really sure why anyone cares about saying something with probability 1 though; no matter how carefully you think about it, there's always the chance that in a few seconds you'll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

Comment by DSherron on Bad Concepts Repository · 2013-06-28T13:44:51.552Z · LW · GW

I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about "moral" situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a "moral" preference (excluding whatever membership defines the category)? Is there actually a cluster in thingspace around moral preferences, which is distinctly separate from the "preferences" cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences ("I want Greta's house" vs "Greta is morally obligated to give me her house"). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they'd better all work the same way or we're gonna be in a heap of trouble.

Comment by DSherron on Bad Concepts Repository · 2013-06-28T01:58:17.464Z · LW · GW

Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. "Should" is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of "should" at least. Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive. If they want to do it then they don't care what they "should" or "shouldn't" do unless you can explain to them why they in fact do or don't want to do that thing. In the sense that "should do x" means "on reflection would prefer to do x" it is useful. The farther you move from that, the less useful it becomes.

Comment by DSherron on Explain/Worship/Ignore? · 2013-06-27T19:50:25.846Z · LW · GW

I'm not a physicist, and I couldn't give a technical explanation of why that won't work (although I feel like I can grasp an intuitive idea based on how the Uncertainty Principle works to begin with). However, remember the Litany of a Bright Dilettante. You're not going to spot a trivial means of bypassing a fundamental theory in a field like physics after thinking for five minutes on a blog.

Incidentally, the Uncertainty Principle doesn't talk about the precision of our possible measurements, per se, but about the actual amplitude distribution for the observable. As you get arbitrarily precise along one of the pair you get arbitrarily spread out along the other, so that the second value is indeterminate even in principle.
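
For reference, the standard position-momentum form of the relation (a textbook statement, not something established in this thread):

\[
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2},
\]
so as \(\sigma_x \to 0\) the bound forces \(\sigma_p \to \infty\), regardless of how the measurement is performed.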

Comment by DSherron on Bad Concepts Repository · 2013-06-27T19:32:54.565Z · LW · GW

The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn't protect you from being wrong; you can talk all day about "is it ethical to steal this cookie" but you are wasting your time. Either you're actually referring to specific concepts that have names (will other people perceive this as ethically justified?) or you're babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing "is this moral", unless what you're really asking is "What are the social consequences" or "will person x think this is immoral" or whatever. It's a dangerous habit epistemically and serves no instrumental purpose.

Comment by DSherron on Bad Concepts Repository · 2013-06-27T17:22:13.384Z · LW · GW

This is because people are bad at making decisions, and have not gotten rid of the harmful concept of "should". The original comment on this topic was claiming that "should" is a bad concept; instead of thinking "I should x" or "I shouldn't do x", on top of considering "I want to/don't want to x", just look at want/do not want. "I should x" doesn't help you resolve "do I want to x", and the second question is the only one that counts.

I think that your idea about morality is simply expressing a part of a framework of many moral systems. That is not a complete view of what morality means to people; it's simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the nature "I should x but I want to y", stemming from the idea (perhaps subconscious) that they would tell someone else to x, instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner's dilemma, etc.) and so people feel the pressure to not cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of "should", you will be free from that type of trap unless it is in your best interests to remain there.

Our moral intuitions do not exist for good reasons. "Fairness" and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week while forcing yourself to taboo "morality", "should", and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying "you should x because it's right", and only listen to your preferences (note: you can have preferences that favor other people!). You should find that your decisions become easier and that you prefer those decisions to any you might have otherwise made. It also helps you to understand that you're allowed to like yourself more than you like other people.

Comment by DSherron on Bad Concepts Repository · 2013-06-27T16:54:48.555Z · LW · GW

You're sneaking in connotations. "Morality" has a much stronger connotation than "things that other people think are bad for me to do." You can't simply define the word to mean something convenient, because the connotations won't go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and if society said something different then morality wouldn't change.

Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say "x is immoral" then I haven't actually told you anything about x. In normal usage I've told you that I think people in general shouldn't do x, but you don't know why I think that unless you know my value system; you shouldn't draw any conclusions about whether you think people should or shouldn't x, other than due to the threat of my retaliation.

"Morality" in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying "x is morally wrong" or "x is morally right" doesn't have any additional effect on our actions, once we've run the best preference algorithms we have over them. Every single bit of information contained in "morally right/wrong" is also contained in our other decision algorithms, often in a more accurate form. It's not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.

Comment by DSherron on Bad Concepts Repository · 2013-06-27T13:45:20.681Z · LW · GW

"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It's a distinctly human invention, and it's meaning shifts as the user desires. Moral obligations are great for social interactions, but they don't reflect anything deeper than an extension of tribal politics. Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure.

Just because it is sometimes effective to try and impose a moral obligation does not mean that it is always, or even usually, the case that doing so is the most effective method available. Thinking about the actual cause of the behavior, and responding to that, will be far, far more effective.

Next time you meet a child murderer, you just go and keep on telling him he shouldn't do that. I, on the other hand, will actually do things that might prevent him from killing children. This includes physical restraint, murder, and, perhaps most importantly, asking why he kills children. If he responds "I have to sacrifice them to the magical alien unicorns or they'll kill my family" then I can explain to him that the magical alien unicorns don't exist and solve the problem. Or I can threaten his family myself, which might for many reasons be more reliable than physical solutions. If he has empathy I can talk about how the parents must feel, or the kids themselves. If he has self-preservation instincts then I can point out the risks of getting caught. In the end, maybe he just values dead children in the same way I value children continuing to live, and my only choice is to fight him. But probably that's not the case, and if I don't ask/observe to figure out what his motivations are I'll never know how to stop him when physical force is not an option.

Comment by DSherron on Living in the shadow of superintelligence · 2013-06-26T00:54:16.050Z · LW · GW

Hell, it's definitely worth us thinking about it for at least half a second. Probably a lot more than that. It could have huge implications if we discovered that there was evidence of any kind of powerful agent affecting the world, Matrix-esque or not. Maybe we could get into heaven by praying to it, or maybe it would reward us based on the number of paperclips we created per day. Maybe it wouldn't care about us, maybe it would actively want to cause us pain. Maybe we could use it, maybe it poses an existential risk. All sorts of possible scenarios there, and the only way to tell what actions are appropriate is to examine... the... evidence... oh right. There is none, because in reality we don't live in the Matrix and there isn't any superintelligence out there in our universe. So we file away the thought, with a note that if we ever do run into evidence of such a thing (improbable events with no apparent likely cause) that we should pull it back out and check. But that's not the same as thinking about it. In reality, we don't live in that world, and to the extent that is true then the answer to "what do we do about it" is "exactly what we've always done."

Comment by DSherron on Living in the shadow of superintelligence · 2013-06-24T12:47:23.158Z · LW · GW

Endless, negligible, and not at all. Reference every atheism argument ever.

Comment by DSherron on How would not having free will feel to you? · 2013-06-23T01:01:37.581Z · LW · GW

I don't think it's particularly meaningful to use "free will" for that instead of "difficult to predict." I mean, you don't say that weather has free will, even though you can't model it accurately. Applying the label only to humans seems a lot like trying to sneak in a connotation that wasn't part of the technical definition. I think that your concept captures some of the real-world uses of the term "free will" but that it doesn't capture enough of the usage to help deal with the confusion around it. In particular, your definition would mean that weather has free will, which is a phrase I wouldn't be surprised to hear in colloquial English but doesn't seem to be talking about the same thing that philosophers want to debate.

Comment by DSherron on How would not having free will feel to you? · 2013-06-21T21:59:22.805Z · LW · GW

Taboo "free will" and then defend that the simplest answer is that we have it. X being true is weakly correlated to us believing X, where belief in X is an intuition rather than a conclusion from strong evidence.

Comment by DSherron on How would not having free will feel to you? · 2013-06-21T21:47:17.930Z · LW · GW

It's explicitly opposed to my response here. I feel like if I couldn't predict my own actions with certainty then I wouldn't have free will (more that I wouldn't have a will than that it wouldn't be free, although I tend to think that the "free" component of free will is nonsense in any case). Incidentally, how do you imagine free will working, even just in some arbitrary logically possible world? It sounds a lot like you want to posit a magical decision making component of your brain that is not fully determined by the prior state of the universe, but which also always does what "you" want it to. Non-determinism is fine, but I can't imagine how you could have the feeling of free will without making consistent choices. Wouldn't you feel weird if your decisions happened at random?

Comment by DSherron on How would not having free will feel to you? · 2013-06-21T17:38:38.820Z · LW · GW

I suspect that a quick summary of people's viewpoints on free will itself would help in interpreting at least some answers. In my case, I believe that we don't have "free will" in the naive sense that our intuitions tend to imply (the concept is incoherent). However, I do believe that we feel like we have free will for specific reasons, such that I can identify some situations that would make me feel as though I didn't have it. So, not actually having free will doesn't constrain experience, but feeling like I don't does.

Epistemically:

If I discovered that I was unpredictable even in principle; if randomness played a large role in my thought process, and I sometimes gave different outputs for the same inputs, then I would feel like I did not have free will.

Psychologically:

I have no consistent internal narrative to my actions. On reflection I discover that I could not predict my actions in advance, and merely rationalized them later. I notice that my actions do not tend to fulfill my preferences (this one happens in real life to varying degrees). I notice that I act in ways that go against what I wanted at the time.

Physically:

None. I am tempted to say that losing complete control of my body constitutes a loss of free will, but in reality it seems to closer reflect simply that my will cannot be executed, not that I don't have it (or feel like I have it).

Note: much of this is also heavily tied into my identity. It would be interesting to examine how interlinked the feelings of identity and free will really are.

Comment by DSherron on Some reservations about Singer's child-in-the-pond argument · 2013-06-20T23:36:35.317Z · LW · GW

It is tautological, but it's something you're ignoring in both this post and the linked reply. If you care about saving children as a part of a complex preference structure, then saving children, all other things being equal, fulfills your preferences more than not saving those children does. Thus, you want to do it. I'm trying to avoid saying you should do it, because I think you'll read that in the traditional moral framework sense of "you must do this or you are a bad person" or something like that. In reality, there is no such thing as "being a bad person" or "being a good person", except as individuals or society construct the concepts. Moral obligations don't exist, period. You don't have an obligation to save children, but if you prefer children being saved more than you prefer not paying the costs to do so then you don't need a moral obligation to do it any more than you need a moral obligation to get you to eat lunch at (great and cheap restaurant A) instead of (expensive and bad restaurant B).

Taboo "moral obligation". No one (important) is telling you that you're a bad person for not saving the children, or a good person for doing so. You can't just talk about how you refuse to adopt a rule about always saving children; I agree that would be stupid. No one asked you to do so. If you reach a point (and that can be now) where you care more about the money it would take to save a life than you do about the life you could save, don't spend the money. Any other response will not fulfill your preferences as well (and yours are the only ones that matter). Save a few kids, if you want, but don't sell everything to save them. And sure, if you have a better idea to save more kids with less money then do it. If you don't, don't complain that no one has an even better solution than the one you're offered.

I suspect that part of the problem is that you don't have a mental self-image as a person who cares about money more than children, and admitting that there are situations where you do makes you feel bad because of that mental image. If this is the case, and it may not be, then you should try to change your mental image.

Note: just because I used the term preferences does not equate what I'm saying to any philosophical or moral position about what we really value or anything like that. I'm using it to denote "those things that you, on reflection, really actually want", whatever that means.

Comment by DSherron on Some reservations about Singer's child-in-the-pond argument · 2013-06-20T20:47:12.262Z · LW · GW

Right, that's the thought that motivated the "probably" at the end. Although it feels pretty strongly like motivated cognition to actually propose such an argument.

Comment by DSherron on Some reservations about Singer's child-in-the-pond argument · 2013-06-20T20:16:56.944Z · LW · GW

You're not "on the hook" or anything of the sort. You're not morally obligated to save the kids, any more than you're morally obligated to care about people you care about, or buy lunch from the place you like that's also cheaper than the other option which you don't like. But, if you do happen to care about saving children, then you should want to do it. If you don't, that's fine; it's a conditional for a reason. Consequentialism wins the day; take the action that leads most to the world you, personally, want to see. If you really do value the kids more than your clothes though, you should save them, up until the point where you value your clothing more (say it's your last piece), and then you stop. If you have a better solution to save the kids, then do it. But saying "it's not my obligation" doesn't get you to the world you most desire, probably.