Posts

Imposing FAI 2012-05-17T21:24:01.165Z

Comments

Comment by asparisi on Signaling of what, precisely? · 2013-09-17T18:51:01.257Z · LW · GW

One could judge the strength of these with a few empirical tests. For (2): compare industries where the skills learned in college (or in a particular major) are clearly relevant against industries where they are not, and compare hiring rates for college grads with the relevant skill-signals, college grads without them, and non-college grads. For (3): look to industries where signals of pre-existing ability do not depend on having been to college and compare their rates of hiring grads vs. non-grads. (These would presumably be jobs in sectors where some sort of loosely defined intellectual ability is less important. Such jobs are becoming scarcer due to automation, particularly in First World countries, but the tests should still be possible.) (1) is harder to test, as it is agnostic, but seeing how these intuitions match those of people in hiring positions could be informative. Other signals, as mentioned in the comments, probably have their own tests which can be run on them.

Comment by asparisi on The Ultimate Newcomb's Problem · 2013-09-10T14:19:53.710Z · LW · GW

I don't get paid on the basis of Omega's prediction given my action. I get paid on the basis of my action given Omega's prediction. I at least need to know the base-rate probability with which I actually one-box (or two-box), although with only two minutes, I would probably need to know the base rate at which Omega predicts that I will one-box. Actually, just getting the probability for each of P(Ix|Ox) and P(Ix|O~x) would be great.

I also don't have a mechanism to determine whether 1033 is prime that is readily available to me without getting hit by a trolley (with what probability do I get hit by the trolley, incidentally?), nor do I know off-hand the ratio of odd-numbered primes to odd-numbered composites.
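For reference, the check itself is mechanically trivial once a computer is allowed; a minimal trial-division sketch (not part of the original problem, just an illustration):

```python
def is_prime(n: int) -> bool:
    """Trial division: test odd divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(1033))  # True: no divisor up to sqrt(1033) ~ 32.1 divides it
```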

I don't quite have enough information to solve the problem in any sort of respectable fashion. So what the heck, I two-box and hope that Omega is right and that the number is composite. But if it isn't, then I cry into my million dollars. (With P(.1): I don't expect to actually be sad winning $1M, especially after having played several thousand times and presumably having won at least some money in that period.)

Comment by asparisi on How sure are you that brain emulations would be conscious? · 2013-08-26T14:06:09.319Z · LW · GW

Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.

Qualia can perhaps best be described, briefly, as "subjective experience." So what do we mean by 'subjective' and 'experience'?

If by 'subjective' we mean 'unique to the individual position' and by 'experience' we mean 'alters its internal state on the basis of some perception' then qualia aren't that mysterious: a video camera can be described as having qualia if that's what we are talking about. Of course, many philosophers won't be happy with that sort of breakdown. But it isn't clear that they will be happy with any definition of qualia that allows for it to be distinguished.

If you want it to be something mysterious, then you aren't even defining it. You are just being unhelpful: like if I tell you that you owe me X dollars, without giving you any way of defining X. If you want to break it down into non-mysterious components or conditions, great. What are they? Let me know what you are talking about, and why it should be considered important.

At this point, it's not a matter of ruling anything out as incoherent. It's a matter of trying to figure out what sort of thing we are talking about when we talk about consciousness and seeing how far that label applies. There doesn't appear to be anything inherently biological about what we are talking about when we are talking about consciousness. This could be a mistake, of course: but if so, you have to show it is a mistake and why.

Comment by asparisi on How sure are you that brain emulations would be conscious? · 2013-08-26T11:52:45.355Z · LW · GW
You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia.

I cover this a bit when I talk about awareness, but I find that qualia are often used in such a way as to obscure what consciousness is rather than explicate it. (If I tell you that consciousness requires qualia, but can't tell you how to distinguish things which have qualia from things which do not, along with good reason to believe that this way of distinguishing is legitimate, then for all we know rocks could have qualia.)

  1. The "necessarily biological" could be aposteriori nomic necessity, not apriori conceptual necessity, which is the only kind you knock down in your comment.

If the defenders of a biological theory of consciousness want to introduce an empirically testable law to show that consciousness requires biology then I am more than happy to let them test it and get back to us. I don't feel the need to knock it down, since when it comes to a posteriori nomic necessity, we use science to tell whether it is legitimate or not.

Comment by asparisi on How sure are you that brain emulations would be conscious? · 2013-08-24T14:45:48.677Z · LW · GW

I find it helps to break down the category of 'consciousness.' What is it that one is saying when one says that "Consciousness is essentially biological"? Here it's important to be careful: there are philosophers who gerrymander categories. We can start by pointing to human beings, as we take human beings to be conscious, but obviously we aren't pointing at every human attribute. (For instance, having 23 pairs of chromosomes isn't a characteristic we are pointing at.) We have to be careful that when we point at an attribute, we are actually trying to solve the problem and not just obscure it: if I tell you that Consciousness is only explainable by Woogles, that's just unhelpful. The term we use needs to break down into something that allows us to (at least in principle) tell whether or not a given thing is conscious. If it can't do THAT, we are better off using our own biased heuristics and forgoing definitions: at least with the heuristics, I can tell you that my neighbor is conscious and a rock isn't. Without some way of actually telling what is conscious and what is not, we have no basis to say when we've found a conscious thing.

It seems like with consciousness, we are primarily interested in something like "has the capacity to be aware of its own existence." Now, this probably needs to be further explicated. "Awareness" here is probably a trouble word. What do I mean when I say that it is "aware"? Well, it seems like I mean some combination of being able to perceive a given phenomenon, being able to distinguish degrees of the phenomenon when present, and being able to distinguish the phenomenon from other phenomena. When I say that my sight makes me aware of light, I mean that it allows me to both distinguish different sorts of light and light from non-light: I don't mistake my sight for hearing, after all. So if I am "aware of my own existence" then I have the capacity to distinguish my existence from things that are not my existence, and the ability to think about degrees to which I exist. (In this case, my intuition says that this cashes out in questions like "how much can I change and still be me?")

Now, there isn't anything about this that looks like it is biological. I suppose if we came at it another way and said that to be conscious is to "have neural activity" or something, it would be inherently biological since that's a biological system. But while having neural activity may be necessary for consciousness in humans, it doesn't quite feel like that's what we are pointing to when we say "conscious." If somehow I met a human being and was shown a brain scan showing that there was no neural activity, but it was apparently aware of itself and was able to talk about how it's changed over time and such, and I was convinced I wasn't being fooled, I would call that conscious. Similarly, if I was shown a human being with neural activity but which didn't seem capable of distinguishing itself from other objects or able to consider how it might change, I would say that human being was not conscious.

Comment by asparisi on Greatest Philosopher in History · 2013-08-12T23:52:25.128Z · LW · GW

On those criteria, I would say Plato. Because Plato came up with a whole mess of ideas that were... well, compelling but obviously mistaken. Much of Western Philosophy can be put in terms of people wrestling with Plato and trying to show just why he is wrong. (Much of the rest is wrestling with Aristotle and trying to show why HE is wrong... but then, one can put Aristotle into the camp of "people trying to show why Plato is wrong.")

There's a certain sort of person who is most easily aroused from inertia when someone else says something so blatantly, utterly false that they want to pull their hair out. Plato helped motivate these people a lot.

Comment by asparisi on Greatest Philosopher in History · 2013-08-12T23:46:10.779Z · LW · GW

The New Organon, particularly Aphorisms 31-46, shows not only an early attempt to diagnose human biases (what Bacon referred to as "The Idols of the Mind") but also some of the reasons why he rejected Aristotelian thought, common at the time, in favor of experimental practice.

Comment by asparisi on The Fermi paradox as evidence against the likelyhood of unfriendly AI · 2013-08-03T15:06:42.595Z · LW · GW

Maybe there are better ways to expand than through spacetime, better ways to make yourself into this sort of maximizing agent, and we are just completely unaware of them because we are comparatively dull next to the sort of AGI that has a brain the size of a planet? Some way to beat out entropy, perhaps. That would make a sky empty of any visible UFAI or FAI exactly what we should expect.

I can somewhat imagine what these sorts of ways would be, but I have no idea if they are likely or even feasible, since I am not a world-devouring AGI and can only speculate wildly about what's beyond our current understanding of physics.

A simpler explanation could be that AGIs use stealth in pursuing their goals: the ability to camouflage oneself has always been of evolutionary import, and AGIs may find it useful to create a sky which looks like "nothing to see here" to other AGIs (as they will likely be unfriendly toward each other). Camouflage, if good enough, would allow one to hide from predators (bigger AGIs) and sneak up on prey (smaller AGIs). Since we would likely be orders of magnitude worse at detecting an AGI's camouflage, we see a sky that looks like there is nothing wrong. This doesn't explain why we haven't been devoured, of course, which is the weakness of the argument.

Or maybe something like acausal trade limits the expansion of AGI. If AGIs realize that fighting over resources is likely to hinder their goals more than help them in the long run, they might limit their expansion on the theory that there are other AGIs out there. If I think I am 1 out of a population of a billion, and I don't want to be a target for a billion enemies at once, I might decide that taking over the entire galaxy/universe/beyond isn't worth it. In fact, if these sorts of stand-offs become more common as the scale becomes grander, that might be motivation not to pursue such scales. The problem with this is that you would expect earlier AGIs to defect on this particular dilemma, taking advantage before future ones can get to the point of being near-equals. (A billion planet-eating AGIs are probably not a match for one galaxy-eating AGI. So if you see a way to become the galaxy-eater before enough planet-eaters can come to the party, you go for it.)

I don't find any of these satisfying, as one seems to require a sub-set of possibilities for unknown physics and the others seem to lean pretty heavily on the anthropic principle to explain why we, personally, are not dead yet. I see possibilities here, but none of them jump out at me as being exceptionally likely.

Comment by asparisi on Leveling up... · 2013-07-30T05:10:41.382Z · LW · GW

I get that feeling whenever I hit a milestone in something: if I run a couple miles further than I had previously, if I understand something that was opaque before, if I am able to do something that I couldn't before, I get this "woo hoo!" feeling that I associate with levelling up.

Comment by asparisi on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T19:28:15.215Z · LW · GW

Even if they are sapient, killing them might not have the same psychological effect.

The effect of killing a large, snarling, distinctly-not-human-thing on one's mental faculties and the effect of killing a human being are going to be very different, even if one recognizes that thing to be sapient.

If they are, Harry would assign moral weight to the act after the fact: but the natural sympathy that is described as eroding in the above quote doesn't seem as likely to be affected given a human being's psychology.

Comment by asparisi on Normativity and Meta-Philosophy · 2013-04-24T11:58:01.086Z · LW · GW

since I don't know what "philosophy" really is (and I'm not even sure it really is a thing).

I find it's best to treat philosophy as simply a field of study, albeit one that is odd in that most of the questions asked within the field are loosely tied together at best. (There could be a connection between normative bioethics and ontological questions regarding the nature of nothingness, I suppose, but you wouldn't expect a strong connection from the outset) To do otherwise invites counter-example too easily and I don't think there is much (if anything) to gain in asking what philosophy really is.

Comment by asparisi on Boring Advice Repository · 2013-04-16T16:46:03.828Z · LW · GW

Technical note: some of these are Torts, not Crimes. (Singing Happy Birthday, Watching a Movie, or making an Off-Color Joke are not crimes, barring special circumstances, but they may well be Torts.)

Comment by asparisi on Open Thread, April 15-30, 2013 · 2013-04-15T23:46:33.931Z · LW · GW

Is there anyone going to the April CFAR Workshop that could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask that separately.)

Comment by asparisi on Realism : Direct or Indirect? · 2013-02-14T17:47:00.378Z · LW · GW

I tend to think this is the wrong question.

Here's roughly what happens: there are various signals (light, air waves, particulates in the air) that humans have the capacity to detect and translate into neural states which can then be acted on. This is useful because the generation, presence, and redirection of these signals are affected by other objects in the world. So a human can not only detect objects that generate these signals but also detect how surrounding objects are affected by them, granting information that the human brain can then act upon.

All of this is occurring in reality: the brain's neural firings, the signals and their detection, the objects that generate and are affected by the signals. There is no "outside reality" that the human is looking in from.

If you break it down into other questions, you get sensible answers:

"Does the human brain have the capacity to gain information from the ball without a medium?" No.

"Is the human brain's information about the ball physically co-located with some area of the brain itself?" Sure.

"Is the signal detected by the sense-organs co-located with some area of the brain itself?" Potentially at certain points of interaction, but not for its entire history, no.

"What about the neural activity?" That's co-located with the brain.

"So are you trying to say you are only 'Directly acquainted' with the signal at the point where it interacts with your sense-organ?" I don't think calling it 'directly acquainted' picks out any particular property. If you are asking if it is co-located with some portion of my brain, the answer is no. If you are asking if it is causing a physical reaction in some sensory organ, the answer is yes.

Comment by asparisi on The Singularity Wars · 2013-02-14T17:25:59.388Z · LW · GW

I just hope that the newly-dubbed Machine Intelligence Research Institute doesn't put too much focus on advertising for donations.

That would create a MIRI-ad of issues.

Sorry, if I don't let the pun out it has to live inside my head.

Comment by asparisi on A brief history of ethically concerned scientists · 2013-02-10T21:48:35.417Z · LW · GW

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Qualitatively, I'd say it has something to do with the ratio of the expected harm of immediate discovery to the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment and research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about risks. If the investment is low, it is less likely that anyone will come up with the same discovery, and so you may want to keep it a secret. This probably also varies by field, depending on how many competing paradigms are available and how incremental the research is. Psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so it is less likely that a particular piece of research will be duplicated; biologists tend to have broader agreement and their work tends to be more incremental, making it more likely that a particular piece of research will be duplicated.
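A minimal sketch of that heuristic as code; the categories, thresholds, and advice strings are purely illustrative, not a worked-out policy:

```python
def disclosure_advice(expected_risk: str, field_investment: str) -> str:
    """Toy encoding of the risk-vs-investment heuristic above."""
    if expected_risk == "low":
        return "publish, so that any remaining risks get found"
    if field_investment == "high":
        return "reveal (at least partially) to start a dialogue about risks"
    return "keep it secret; independent rediscovery is unlikely"

print(disclosure_advice("high", "low"))  # -> "keep it secret; ..."
```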

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading can actually map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.

Comment by asparisi on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-06T15:53:46.858Z · LW · GW

I usually turn to the Principle of Explosion to explain why one should have core axioms in their ethics (specifically, non-contradictory axioms). If some principle you use in deciding what is or is not ethical creates a contradiction, you can justify any action on the basis of that contradiction. If the axioms aren't explicit, the chance of a hidden contradiction is higher. The idea that every action could be ethically justified is something that very few people will accept, so explaining this usually helps.
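For reference, the derivation behind the Principle of Explosion takes only a few standard natural-deduction steps; from any contradiction, an arbitrary Q follows:

```latex
\begin{align*}
1.\;& P \land \lnot P  && \text{(the hidden contradiction)}\\
2.\;& P                && \text{from 1, conjunction elimination}\\
3.\;& \lnot P          && \text{from 1, conjunction elimination}\\
4.\;& P \lor Q         && \text{from 2, disjunction introduction ($Q$ arbitrary)}\\
5.\;& Q                && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```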

I try to understand that thinking this way is odd to a lot of people and that they may not have explicit axioms, and present the idea as "something to think about." I think this also helps me to deal with people not having explicit rules that they follow, since it A) helps me cut off the rhetorical track of "Well, I don't need principles" by extending the olive branch to the other person; and B) reminds me that many people haven't even tried to think about what grounds their ethics, much less what grounds what grounds their ethics.

I usually use the term "rule" or "principle" as opposed to "axiom," merely for the purpose of communication: most people will accept that there are core ethical rules or core ethical principles, but they may have never even used the word "axiom" before and be hesitant on that basis alone.

Comment by asparisi on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-24T18:11:00.339Z · LW · GW

"[10065] No route to host Error"

I figure the easiest way to delay a human on the other end of a computer is to simulate an error as best I can. For a GAI, this time is probably invaluable.

Comment by asparisi on How to Teach Students to Not Guess the Teacher’s Password? · 2013-01-05T14:44:57.999Z · LW · GW

That depends on the level of explanation the teacher requires and the level of the material. I'd say that at least until you get into calculus, you can work off of memorizing answers. I'd even go so far as to say that most students do, and succeed to greater or lesser degrees, based on my tutoring experiences. I am not sure to what degree you can "force" understanding: you can require answers that demand understanding, but it helps to guide that process.

I went to a lot of schools, so I can contrast here.

I had more than one teacher that taught me multiplication. One taught it as "memorize multiplication tables 1x1 through 9x9. Then you use these tables, ones place by ones place, ones place by tens place, etc." One problem with this approach is that while it does act as an algorithm and does get you the right answer, you have no idea what you are trying to accomplish. If you screw up part of the process, there's no way to check your answer: to a student in that state, multiplication just is "look up the table, apply the answer, add one zero to the end for every place higher than one that the number occupied."

Whereas I had another teacher, who explained it in terms of groups: you are trying to figure out how many total objects you would have if you had this many groups of that, or that many groups of this. 25 is the right answer because if you have 5 groups of 5 things, you generally have 25 things in total. This is a relatively simple way of trying to explain the concept in terms of what you are trying to track, rather than just rote memorization. Fortunately, I had this teacher earlier.

The point being that you can usually teach things either way: actually, I think some combination of both is helpful. Teach the rote memorization but explain why it is true in terms of some understanding. Some memorization is useful: I don't want to actually visualize groups of objects when I do 41x38. But knowing that is what I am trying to track (at least at the basic level of mathematical understanding I acquired in the 2nd grade) is useful.
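The contrast between the two lessons can be made concrete; a toy sketch (both functions compute the same product, but only the second mirrors the "groups of things" explanation):

```python
# Rote approach: look up single-digit products in a memorized table.
TABLE = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}

def multiply_by_table(a: int, b: int) -> int:
    return TABLE[(a, b)]  # only covers the memorized 1x1..9x9 range

# Conceptual approach: "a groups of b things" is repeated addition.
def multiply_by_grouping(a: int, b: int) -> int:
    total = 0
    for _ in range(a):  # a groups...
        total += b      # ...of b things each
    return total

assert multiply_by_table(5, 5) == multiply_by_grouping(5, 5) == 25
```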

Comment by asparisi on How to Teach Students to Not Guess the Teacher’s Password? · 2013-01-05T14:09:54.252Z · LW · GW

I think the worry is that they are only concerned about getting the answer that gets them the good grade, rather than understanding why the answer they get is the right answer.

So you end up learning the symbols "5x5=25," but you don't know what that means. You may not even have an idea that corresponds to multiplication. You just know that when you see "5x5=" you write "25." If I ask you what multiplication is, you can't tell me: you don't actually know. You are disconnected from what the process you are learning is supposed to be tracking, because all you have learned is to put in symbols where you see other symbols.

Comment by asparisi on Politics Discussion Thread January 2013 · 2013-01-04T22:48:44.185Z · LW · GW

I think you are discounting effects such as confirmation bias, which lead us to notice what we expect and can easily label while leading us to ignore information that contradicts our beliefs. If 99 out of 100 women don't nag and 95 out of 100 men don't nag, given a stereotype that women nag, I would expect people to think of the one woman they know that nags, rather than the 5 men they know that do the same.

Frankly, without data to support the claim that:

There is a lot of truth in stereotypes

I would find the claim highly suspect, given even a rudimentary understanding of our psychological framework.

Comment by asparisi on Politics Discussion Thread January 2013 · 2013-01-04T05:42:08.900Z · LW · GW

I seriously doubt that most people who make up jokes or stereotypes truly have enough data on hand to reasonably support even a generalization of this nature.

Comment by asparisi on [Link] Hey Extraverts: Enough is Enough · 2013-01-03T16:26:10.945Z · LW · GW

Groupthink is as powerful as ever. Why is that? I'll tell you. It's because the world is run by extraverts.

The problem with extraverts... is a lack of imagination.

pretty much everything that is organized is organized by extraverts, which in turn is their justification for ruling the world.

This seems to be largely an article about how we Greens are so much better than those Blues rather than offering much that is useful.

Comment by asparisi on My Best Case vs Your Worst Case · 2013-01-03T16:11:30.125Z · LW · GW

I don't have the answer but would be extremely interested in knowing it.

(Sorry this comment isn't more helpful. I am trying to get better at publicly acknowledging when I don't know an answer to a useful question in the hopes that this will reduce the sting of it.)

Comment by asparisi on Some scary life extension dilemmas · 2013-01-02T03:23:17.223Z · LW · GW

A potential practical worry for this argument: it is unlikely that any such technology will grant just enough for one dose for each person and no more, ever. Most resources are better collected, refined, processed, and utilized when you have groups. Moreover, existential risks tend to increase as the population decreases: a species with only 10 members is more likely to die out than a species with 10 million, ceteris paribus. The pill might extend your life, but if you have an accident, you probably need other people around.

There might be some ideal number here, but offhand I have no way of calculating it. Might be 30 people, might be 30 billion. But it seems like risk issues alone would make you not want to be the only person: we're social apes, after all. We get along better when there are others.

Comment by asparisi on Is ruthlessness in business executives ever useful? · 2012-12-29T00:18:33.441Z · LW · GW

Where is the incentive for them to consider the public interest, save for insofar as it is the same as the company interest?

It sounds like you think there is a problem: that executives being ruthless is not necessarily beneficial for society as a whole. But I don't think that's the root problem. Even if you got rid of all of the ruthless executives and replaced them with competitive-yet-conscientious executives, the pressures that create and nurture ruthless executives would still be in place. There are ruthless executives because the environment favors them in many circumstances.

Comment by asparisi on Is ruthlessness in business executives ever useful? · 2012-12-29T00:12:14.769Z · LW · GW

Edited. Thanks.

Comment by asparisi on Is ruthlessness in business executives ever useful? · 2012-12-28T20:31:59.620Z · LW · GW

Your title asks a different question than your post: "useful" vs. being a "social virtue."

Consider two companies: A and B. Each has the option to pursue some plan X, or its alternative Y. X is more ruthless than Y (X may involve laying off a large portion of their workforce, a misinformation campaign, or using aggressive and unethical sales tactics) but X also stands to be more profitable than Y.

If the decision of which plan to pursue falls to a ruthless individual in company A, company A will likely pursue X. If the decision falls to a "highly competitive, compassionate, with restrictive sense of fair play" individual in company B, B may perform Y instead. If B does not perform Y, it is likely because they noted the comparative advantage A would have, being likely to pursue X. In this case, it is still in B's interest to act ruthlessly, making ruthlessness useful.

Now, is it a virtue? Well, for a particular company it is useful: it allows the pursuit of plans that would otherwise not be followed. Does the greater society benefit from it? Well, society gains whatever benefit comes from businesses pursuing such plans, at the cost of whatever those plans cost. But it is a useful enough character trait for one company's executives that it grants a competitive advantage over other companies where that trait is absent. Thus, it is an advantage (and perhaps a virtue; I am not sure how that word cashes out here) for each company. Companies without ruthless executives may fail to act, or fail to act quickly, where a ruthless executive wouldn't hesitate. So in situations where ruthless tactics allow one to win, ruthless individuals are an asset.
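One way to make the dynamic explicit is as a payoff table; the numbers below are entirely hypothetical, chosen only so that the ruthless plan X dominates:

```python
# (A's plan, B's plan) -> (A's profit, B's profit), in made-up units.
payoffs = {
    ("X", "X"): (6, 6),
    ("X", "Y"): (10, 2),
    ("Y", "X"): (2, 10),
    ("Y", "Y"): (8, 8),
}

# Whatever B does, A earns more by choosing X (10 > 8 and 6 > 2), so a
# ruthless decision-maker, or one merely anticipating a ruthless rival,
# ends up at (X, X) even though (Y, Y) would be better for both firms.
```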

I'm not sure what more can be said on this, as I don't have a good way of cashing out the word 'social virtue' here or what practical question you are asking.

Comment by asparisi on Intelligence explosion in organizations, or why I'm not worried about the singularity · 2012-12-27T15:46:08.699Z · LW · GW

You say this is why you are not worried about the singularity, because organizations are supra-human intelligences that seek to self-modify and become smarter.

So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.

Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardware and when they attempt to rely heavily on the algorithms we do have it doesn't always work out well for them. This seems more a statement about our current algorithms than the potential for such algorithms, however.

However, there is a lot of energy on various fronts to hinder organizations whose motivations are such that they lead to threats, and because these organizations are reliant on humans for hardware, only a small number of existential threats have been produced by such organizations. It can be argued that one of the best reasons to develop FAI is to undo these threats and to stop organizations from creating new threats of the like in the future. So I am not sure that it follows from your position that we should not be worried about the singularity.

Comment by asparisi on What if "status" IS a terminal value for most people? · 2012-12-26T20:14:56.866Z · LW · GW

Definitely. These are the sorts of things that would need to be evaluated if my very rough sketch were to be turned into an actual theory of values.

Comment by asparisi on What if "status" IS a terminal value for most people? · 2012-12-26T20:12:38.022Z · LW · GW

Well, effectiveness and desire are two different things.

That aside, you could be posting for desires that are non-status related and still desire status. Human beings are certainly capable of wanting more than one thing at a time. So even if this post was motivated by some non-status related desire, that fact would not, in and of itself, be evidence that you don't desire status.

I'm not actually suggesting you update for you: you have a great deal more access to the information present inside your head than I do. I don't even have an evidence-based argument: merely a parsimony-based one, which is weak at best. I wouldn't think of suggesting it unless I had some broader evidence that people who claim "I don't desire status" really do. I have no such evidence.

The original post was why the argument "This post is evidence that I do not seek status" is unconvincing. I was merely pointing out that even if we use your version of E, it isn't very good evidence for H. (Barring some data to change that, of course.)

Comment by asparisi on What if "status" IS a terminal value for most people? · 2012-12-26T05:16:48.203Z · LW · GW

Eh... but people like rock stars even though most people are NOT rock stars. People like people with really good looks even though most people don't have good looks. And most people do have some sort of halo effect on wealthy people they actually meet, if not "the 1%" as a class.

I am not sure that a person who has no desire for status will write a post about how they have no desire for status that much more often than someone who does desire status. Particularly if this "desire" can be stronger or weaker. So it could be:

A - The person really doesn't seek status and wants to express this fact for a non-status reason.
B - The person does seek status but doesn't self-identify as someone who seeks status, and wants to express this fact for a non-status reason.
C - The person does seek status but doesn't self-identify as someone who seeks status, and wants to express that they do not seek status on the gamble that being seen as a person who does not want status will heighten their status.
D - The person does seek status and is gambling that being seen as a person who does not want status will heighten their status.

A has the advantage of simplicity, but its advantage is roughly on par with that of D. B is more complicated and C is more complicated, but not that much more as far as human ideas seem to run. And the set of all "status seekers" who would write such a post is {B, C, D}, and I'd say that the probability of that set is higher than the probability of A.

So all things being equal, I'd say that P(E|~H)>P(E|H). Which may still not lead to the right answer here. Now, if saying "I don't seek status" was definitely a status losing behavior, I'd say that would shift things drastically as it would render {B, C, D} as improbable on more than bare simplicity. But I really don't have a good evaluation for that, so I'd have to run on just the simplicity alone.
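Made concrete with invented numbers (every prior and likelihood here is hypothetical; the point is only the direction of the update):

```python
# H = "the poster genuinely doesn't seek status"; E = "they wrote the post".
p_H = 0.3               # hypothetical prior on genuine non-seekers
p_E_given_H = 0.05      # a genuine non-seeker writing such a post (case A)
p_E_given_not_H = 0.08  # a status-seeker writing it (cases B, C, D pooled)

posterior = (p_E_given_H * p_H) / (
    p_E_given_H * p_H + p_E_given_not_H * (1 - p_H))
print(round(posterior, 3))  # 0.211 < 0.3: the post is weak evidence against H
```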

Comment by asparisi on What if "status" IS a terminal value for most people? · 2012-12-26T05:02:00.961Z · LW · GW

I upvoted it because the minimum we'd get without running a study would be anecdotal evidence.

I'm not sure that there is a close link between "status" and "behaving." Most of the kids I knew who I would call "status-seeking" were not particularly well behaved: often the opposite. Most of the things you are talking about seem to fall into "good behavior" rather than "status."

Additionally... well, we'd probably need to track a whole lot of factors to figure out which ones, based on your environment, would be selected for. And currently, I have no theory as to which timeframes would be the most important to look at, which would make such a search more difficult.

Comment by asparisi on More Cryonics Probability Estimates · 2012-12-25T06:59:48.810Z · LW · GW

I wouldn't say it has no bearing. If C. elegans could NOT be uploaded in a way that preserved behaviors/memories, you would assign a high probability to human brains not being able to be uploaded. So:

If P(C. elegans & ~Uploadable) goes up, then P(Human & ~Uploadable) goes WAY up.

Of course, this commits us to the converse. And since the converse is what happened, we would say that it does raise the Human & Uploadable probabilities. Maybe not by MUCH. You rightly point out the dissimilarities that would make it a relatively small increase. But it certainly has some bearing, and in the absence of better evidence it is at least encouraging.
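The asymmetry can be put in rough Bayesian terms; all the numbers below are hypothetical:

```python
# H = "human brains are uploadable"; E = "C. elegans uploads preserve behavior".
p_H = 0.5
p_E_given_H = 0.9      # if humans are uploadable, the worm almost surely is
p_E_given_not_H = 0.6  # even if humans aren't, the worm still might be

p_E = p_E_given_H * p_H + p_E_given_not_H * (1 - p_H)      # 0.75
posterior_if_E = p_E_given_H * p_H / p_E                   # ~0.6
posterior_if_not_E = (1 - p_E_given_H) * p_H / (1 - p_E)   # ~0.2

# Success nudges H up modestly (0.5 -> 0.6); failure would have dragged
# it down much harder (0.5 -> 0.2), matching the asymmetry above.
```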

Comment by asparisi on New censorship: against hypothetical violence against identifiable people · 2012-12-25T06:21:06.157Z · LW · GW

Yeesh. Step out for a couple days to work on your bodyhacking and there's a trench war going on when you get back...

In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.

This looks like a pretty simple situation to run a cost/benefit analysis on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community?

Benefits:
- May help public image. (Sub-benefits: make LW more friendly to new persons; advance SIAI-related PR.)
- May reduce brain-eating discussions. (If I advocate violence against group X, even as a hypothetical, and you are a member of said group, then you have a vested political interest whether or not my initial idea was good, which leads to worse discussion.)
- May preserve what is essentially a community norm now (as many have noted) in the face of future change.
- Will remove one particularly noxious and bad-PR-generating avenue for trolling. (Which won't remove trolling, of course. In fact, fighting trolls gives them attention, which they like: see Costs.)

Costs:
- May increase bad PR for censoring. (Rare in my experience, provided that the rules are sensibly enforced.)
- May lead to people not posting important ideas for fear of violating rules. (Corollary: may help lead to an environment where people post less.)
- May create "silly" attempts to get around the rule by gray-areaing it (where people say things like "I won't say which country, but it starts with United States and rhymes with Bymerica"), which is a headache.
- May increase trolling. (Trolls love it when there are rules to break, as these violations give them attention.)
- May increase the odds of LW community members actually committing violence.

Those are all the ones I could come up with in a few minutes after reading many posts. I am not sure what weights or probabilities to assign. Probabilities could be determined by looking at other communities and incidents of media exposure, perhaps comparing community size to exposure and total harm done against a sample of similarly-sized communities, maybe with a focus on communities about the size LW is now to cut down on the paperwork. Weights are trickier, but should probably be assigned in terms of expected harm to the community and its goals and the types of harm that could be done.
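A minimal sketch of the resulting tally; every probability and weight below is a placeholder for the numbers such a comparison study would produce:

```python
# Each entry: (probability of the outcome, weight in units of harm/benefit
# to the community's goals). All values are made up for illustration.
benefits = [(0.6, 2.0),  # improved public image, friendlier to newcomers
            (0.5, 1.5),  # fewer brain-eating political discussions
            (0.7, 1.0)]  # preserves the existing community norm
costs =    [(0.2, 1.0),  # bad PR from the censoring itself
            (0.3, 2.5),  # chilling effect on posting important ideas
            (0.4, 0.5)]  # gray-area rule-gaming headaches

net = sum(p * w for p, w in benefits) - sum(p * w for p, w in costs)
print(net)  # a positive total favors the policy, under these made-up inputs
```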

Comment by asparisi on What if "status" IS a terminal value for most people? · 2012-12-25T05:48:58.512Z · LW · GW

Hm. I know that the biological term may not be quite right here (although the brain is biological, scaling this idea up may be problematic) but I have wondered if certain psychological traits are not epigenetic: that is, it isn't that you are some strange mutant if you express terminal value X strongly and someone else expresses it weakly. Rather, that our brain structures lead to a certain common set of shared values but that different environmental conditions lead to those values being expressed in a stronger or weaker sense.

So, for instance, if "status" (however that cashes out here) is highly important instrumentally in one's younger years, the brain develops that into a terminal value. If "intelligence" (again, cashing that out will be important) is highly important instrumentally in younger years, then it develops into a terminal value. It isn't that anyone else is a horrible mutant; we probably all share values, but those values may conflict, and so it may matter which traits we express more strongly. Of course, if it is anything like an epigenetic phenomenon then there may be some very complicated factors to consider.

Possible falsifiers for this:
- If environment in formative years, particularly social environment (although evolution is dumb and it could be some mechanism that just correlates highly), does not correlate highly with terminal values later in life.
- If people actually do seem to share a set of values with relatively equal strength.
- If terminal values are often modified strongly after the majority of brain development has ceased.
- If some terminal values do not correlate with some instrumental value, but nevertheless vary strongly between individuals.

Comment by asparisi on Caring about what happens after you die · 2012-12-18T16:54:01.009Z · LW · GW

The fact that I won't be able to care about it once I am dead doesn't mean that I don't value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don't want future sapient life to be wiped out, and that is a statement about my current preferences, not my 'after death' preferences. (Which, as noted, do not exist.)

Comment by asparisi on Caring about what happens after you die · 2012-12-18T15:49:17.678Z · LW · GW

The difference is whether you care about sapience as an instrumental or a terminal value.

If I only instrumentally value other sapient beings existing, then of course, I don't care whether or not they exist after I die. (They will cease to add to my utility function, through no fault of their own.)

But if I value the existence of sapient beings as a terminal value, then why would it matter if I am dead or alive?

So, if I only value sapience because, say, other sapient beings existing makes life easier than it would be if I was the only one, then of course I don't care whether or not they exist after I die. But if I just think that a universe with sapient beings is better than one without because I value the existence of sapience, then that's that.

Which is not to deny the instrumental value of other sapient beings existing. Something can have instrumental value and also be a terminal value.

Comment by asparisi on By Which It May Be Judged · 2012-12-12T17:32:54.269Z · LW · GW

I think I have a different introspection here.

When I have a feeling such as 'doing-whats-right' there is a positive emotional response associated with it. Immediately I attach semantic content to that emotion: I identify it as being produced by the 'doing-whats-right' emotion. How do I do this? I suspect that my brain has done the work to figure out that emotional response X is associated with behavior Y, and just does the work quickly.

But this is malleable. Over time, the emotional response associated with an act can change, and this does not necessarily indicate a change in semantic content. I can, for example, give to a charity that I am not convinced is good, and I will still often get the 'doing-whats-right' emotion even though the semantic content isn't really there. I can also find new things I value, and occasionally I will acknowledge that I value something before I get positive emotional reinforcement. So in my experience, they aren't identical.

I strongly suspect that if you reprogrammed my brain to value counting paperclips, it would feel the same as doing what is right. At the very least, this would not be inconsistent. I might learn to attach 'paperclippy' instead of 'good' to that emotional state, but it would feel the same.

Comment by asparisi on [LINK] Irrational Robot Billionaire Freedom Fighters · 2012-12-06T22:26:31.566Z · LW · GW

I am not sure that all humans have the empathy toward humanity on the whole that is assumed by Adams here.

Comment by asparisi on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-05T00:53:01.408Z · LW · GW

students might learn about the debate between Realism and Nominalism, and then be expected to write a paper about which one they think is correct (or neither). Sure, we could just tell them the entire debate was confused...

This would require a larger proportion of philosophy professors to admit that the debate is confused.

Comment by asparisi on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-05T00:48:41.127Z · LW · GW

Working in philosophy, I see some movement toward this, but it is slow and scattered. The problem is probably partially historical: philosophy PhDs trained in older methods train their students, who become philosophy PhDs trained in their professor's methods plus anything they could weasel into the system that they thought important (which may not always be good modifications, of course).

It probably doesn't help that your average philosophy grad student starts off by TAing a bunch of courses with a professor who sets up the lecture, the material, and the grading standards. Or that a young professor needs to get courses cleared through the academic structure. It definitely doesn't help that philosophy has a huge bias toward historical works, as you point out.

None of these are excuses, of course. Just factors that slow down innovation in teaching philosophy. (which, of course, slows down the production of better philosophical works)

(2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy.

This made me chuckle. Truth is often funny.

Comment by asparisi on Credence calibration game FAQ · 2012-11-27T12:49:18.800Z · LW · GW

Well, you can get up to 99 points for being 99 percent confident and getting the right answer, or minus several hundred (I have yet to fail at a 99 so I don't know how many) for failing at that same interval.

Wrong answers are, for the same confidence interval, more effective at bringing down your score than right answers are at bringing it up, so in some sense, as long as you are staying positive, you're doing well.

But if you want to compare further, you'd have to take into account how many questions you've answered, as your lifetime total will be different depending on the questions you answer. (990 after 10 questions would be exceptional: best possible score. 990 after 1,000 questions means you are getting a little less than a point per question, overall)
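Those numbers are consistent with a logarithmic scoring rule of the form score = 100 * log2(2q), where q is the probability assigned to the actual outcome; I can't confirm this is the game's exact formula, but it reproduces the figures above:

```python
from math import log2

def credence_score(p: float, correct: bool) -> float:
    """Assumed log score: 100*log2(2p) on a hit, 100*log2(2(1-p)) on a miss."""
    q = p if correct else 1 - p
    return 100 * log2(2 * q)

print(round(credence_score(0.99, True), 1))   # 98.6: the "up to 99 points"
print(round(credence_score(0.99, False), 1))  # -564.4: "minus several hundred"
print(round(credence_score(0.50, True), 1))   # 0.0: coin-flip answers score nothing
```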

Comment by asparisi on Credence calibration game FAQ · 2012-11-26T21:16:45.314Z · LW · GW

High score seems to be good in terms of "My confident beliefs tend to be right."

Having your bars on the graph line up with the diagonal line would be an "ideal" graph (neither over- nor under-confident).

Comment by asparisi on Credence calibration game FAQ · 2012-11-26T13:38:31.508Z · LW · GW

To clarify: wrong in any of my answers at the 99% level. I have been wrong at other levels (including, surprisingly, hovering within around 1% of 90% accuracy at the 90% level).

Comment by asparisi on Credence calibration game FAQ · 2012-11-26T13:37:28.133Z · LW · GW

Well, in 11 out of 145 answers (7.5%) so far, I have answered at the 99% level, and I have yet to be wrong on any of them.

If I continue at this rate, then in approximately 1,174 more answers I'll be able to tell you if I am well calibrated (fewer, if I fail at more than one answer in the intervening time).
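The arithmetic behind that estimate, assuming roughly 100 answers at the 99% level is the target sample:

```python
rate_99 = 11 / 145                # fraction of answers made at the 99% level
remaining = (100 - 11) / rate_99  # answers needed to reach ~100 such samples
print(round(remaining))           # ~1173, i.e. "approximately 1,174" more
```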

Comment by asparisi on Cryonic resurrection - an ethical hypothetical · 2012-11-26T09:24:27.893Z · LW · GW

Question 1: This depends on the technical details of what has been lost. If it is merely an access problem, i.e., if there are good reasons to believe that the current or future technologies of this resurrection society will be able to restore my faculties post-resurrection, I would be willing to go for as low as 0.5 for the sake of advancing the technology. If we are talking about permanent loss, but with potential repair (so memories are just gone, but I could repair my ability to remember in the future), probably 0.9. If the difficulties would literally be permanent, 1.0, but that seems unlikely.

Question 2: Outside of asking me or my friends/family (assume none are alive or know the answer) the best they could do is construct a model based on records of my life, including any surviving digital records. It wouldn't be perfect, but any port in a storm...

Question 3: Hm. Well, if it was possible to revive someone who was already in the equivalent state before cryonics, it would probably be ethical provided that it didn't make them WORSE. Assuming it did... draw lots. It isn't pretty, but unless you privilege certain individuals, you end up in a stalemate. (This is assuming it is a legitimate requirement: all other options have been effectively utilized to their maximum benefit, and .50 is the best we're gonna get without a human trial.) A model of the expected damage, the anticipated recovery period, and what sorts of changes will likely need to be made over time could make some subjects more viable for this than others, in which case it would be in everyone's interest if the most viable subjects for good improvements were the ones thrown into the lots. (Quality of life concerns might factor in too: if Person A is 80% likely to come out a .7 and 20% likely to come out a .5, and Person B is 20% likely to come out a .7 and 80% likely to come out a .5, then ceteris paribus you go for A and hope you were right. It is unlikely that all cases will be equal.)

Comment by asparisi on LW Women- Minimizing the Inferential Distance · 2012-11-26T07:28:38.972Z · LW · GW

I had an interesting experience with this, and I am wondering if others on the male side had the same.

I tried to imagine myself in these situations. When a situation did not seem to have any personal impact from the first person or at best a very mild discomfort, I tried to rearrange the scenario with social penalties that I would find distressing. (Social penalties do differ based on gender roles)

I found this provoked a fear response. If I give it voice, it sounds like "This isn't relevant/I won't be in this scenario/You would just.../Why are you doing this?" Which is interesting: my brain doesn't want to process these stories as first-person accounts. Some sort of analysis would be easier and more comfortable, but I am pretty sure it would miss the damn point.

I don't have any further thoughts, other than this was useful in understanding things that may inhibit me from understanding. (and trying to get past them)

Comment by asparisi on Credence calibration game FAQ · 2012-11-26T05:50:26.081Z · LW · GW

Another thought: once you have a large bank of questions, consider "theme questions" as something people can buy with coins. Yes, that becomes a matter of showing off rather than the main point, but people LIKE to show off.

Comment by asparisi on Credence calibration game FAQ · 2012-11-26T05:47:15.423Z · LW · GW

Suggestions (for general audience outside of LW/Rationalist circles)

I like the name "Confidence Game": it reminds people of a con game while informing them of the point of the game.

See if you can focus on a positive-point scale. Try to make it so that winning nets you a lot of points but "losing" costs only a couple. (Same effect on scores, either way.) This won't seem as odd if you set it up as one long scale rather than two shorter ones: so 99-90-80-60-50-60-80-90-99.

Setting it to a timer will make it ADDICTIVE. Set it up in quick rounds. Make it like a quiz show. No question limit, or a bonus if you hit the limit for being "Quick on your feet." Make it hard but not impossible to do.

Set up a leaderboard where you can post to FB, show friends, and possibly compare your score to virtual "opponents" (which are really just scoring metrics). Possibly make those metrics con-man themed, in keeping with the game's name.

Graphics will help a lot. Consider running with the con-game theme.

Label people: maybe something like "Underconfident" "Unsure" "Confident" "AMAZING" "Confident" "Overconfident" "Cocksure" (Test labels to see what works well!) rather than using graphs. Graphs and percentages? Turn-off. Drop the % sign and just show two numbers with a label. Make this separate from points but related. (High points=greater chance of falling toward the center, but in theory not necessarily the same.) Yes, I know the point is to get people to think in percentages, but if you want to do that you have to get them there without actually showing them math, which many find off-putting.

Set up a coin system that earns you benefits for putting coins into the game: extended rounds, "confidence streak" bonuses, hints, or skips might be good rewards here. Test and see what works. Allow people to pay for coins, but also reward coins for play, or for another mini-game related to play, or both. (Investment = more play.)