Comments

Comment by GloriaSidorum on Rationality Quotes May 2013 · 2013-05-02T00:53:59.314Z · LW · GW

The distinguishing feature of one's boss is that this person has certain kinds of (formally recognized) power over you within your organization's hierarchy

You're considering just the word "boss". Consider the phrase "real boss". Regardless of the meanings of the constituent words, the phrase itself can often be replaced with "the one with the real power" or "the one who actually makes the decisions". For example: "The king may have nominal power, but he's really only a figurehead; his vizier is the real boss."

Now, we still find something lacking in that the mice don't actually make decisions; the people observing the mice do. However, if the people observing the mice care about doing good research, then decisions about what course of action to take in the future must take into account what happens with the mice. What happens with the mice provides evidence; the researchers must update their models on it, possibly changing the optimal course of action, or else fail at their job. The literal meaning "The mice provide evidence, forcing us to update our models, making us, in order to do our job correctly, change our decisions" may be expressed metaphorically as "The mice make decisions on how to do our job correctly" or "The mice are the real boss."

From the context of the article, in which he uses this as an argument for not coming up with certain specific goals before beginning research, this is likely what the author meant.

Comment by GloriaSidorum on Wrong Questions · 2013-04-24T01:13:51.526Z · LW · GW

If propositional calculus (simpler than it sounds) is a good way of describing causality in the territory, I very much doubt there is a fourth option. If I'm doing logic right:

1. ¬(A is A's cause(1)) ∨ (A is A's cause(1)) (By NOT-3)

2. (A has a cause) → (¬(A is A's cause(1)) ∨ (A is A's cause(1))) (By THEN-1)

3. (A has a cause) → ((¬(A is A's cause(1)) ∨ (A is A's cause(1))) → ((A has a cause) ∧ (¬(A is A's cause(1)) ∨ (A is A's cause(1))))) (By AND-3)

4. (A has a cause) → ((A has a cause) ∧ (¬(A is A's cause(1)) ∨ (A is A's cause(1)))) (From 1 and 3)

5. ¬(A has a cause) ∨ (A has a cause) (By NOT-3)

6. ¬(A has a cause) ∨ ((A has a cause) ∧ (¬(A is A's cause(1)) ∨ (A is A's cause(1)))) (From 4 and 5)

Which, translated back into English, means that something either has a cause apart from itself, is its own cause(1), or has no cause. If you apply "has a cause apart from itself" recursively, you end up with an infinite chain of causes. Otherwise, you have to go with "is its own cause(1)", which means the causal chain loops back on itself, or "has no cause", which means the causal chain ends.
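A quick sanity check of step 6: since it should be a tautology of PC, it must hold under every truth assignment. Below is a minimal brute-force sketch; the variable names has_cause and self_cause are hypothetical stand-ins for "A has a cause" and "A is A's cause(1)".

```python
from itertools import product

def step_6(has_cause: bool, self_cause: bool) -> bool:
    # ¬(A has a cause) ∨ ((A has a cause) ∧ (¬(A is A's cause) ∨ (A is A's cause)))
    return (not has_cause) or (has_cause and ((not self_cause) or self_cause))

# A tautology holds under all four truth assignments.
assert all(step_6(h, s) for h, s in product([True, False], repeat=2))

# Reading off the three exhaustive cases named in the English translation:
for h, s in product([True, False], repeat=2):
    if not h:
        print(h, s, "-> no cause: the causal chain ends")
    elif s:
        print(h, s, "-> is its own cause: the chain loops back on itself")
    else:
        print(h, s, "-> has a cause apart from itself: the chain continues")
```

All four assignments satisfy the formula, and the three labels are exactly the exhaustive options the translation above relies on.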

Nothing thus far, to my knowledge, has been found to defy the axioms of PC, and thus a universe in which PC were wrong would seem not only unsatisfying but downright crazy. I believe that I could make at least a thousand claims which I believe as strongly as "If the Universe defied the principles of logic, it would seem crazy to me" and be wrong at most once, so I assign at least a 99.9% probability to the claim that "Why is everything?" has no satisfying answer if "It spontaneously sprang into being", "Causality is cyclical", and "There is an infinite chain of causes" are all unsatisfying.

(1) Directly or indirectly.

Comment by GloriaSidorum on Fallacies of Compression · 2013-04-10T18:59:48.388Z · LW · GW

If I recall correctly, they actually do. It falls under their incest taboo. So "bad" in any culture could probably be defined by a list of generalised principles which don't necessarily share any characteristics other than being labelled as "bad".

Comment by GloriaSidorum on Fallacies of Compression · 2013-04-10T17:35:59.073Z · LW · GW

That works a bit better, at least for the art example. A better example of where you'd best "define" a set by memorising all of its members might be the morality of a particular culture. For instance, some African tribes consider it evil to marry someone whose sibling has the same first name as oneself. Not only is it hard to put into words, in English or Ju|'hoan, a definition of "bad" (or |kàù) which would encompass this, but one couldn't look at a bunch of other things that these tribes consider bad and infer that one shouldn't marry someone who has a sibling who shares one's first name. Better to just know that that's one of the things that are said to be |kàù in that culture.

Comment by GloriaSidorum on Fallacies of Compression · 2013-04-10T16:16:11.404Z · LW · GW

What about words that "can't be defined"? (e.g. "art")

If you can't think of any unifying features of a category, but you still want to use it, you could go about listing members. "Art" includes:

  • For all known English-speaking humans: intentional paintings from before 1900, statues, stained-glass windows, &c.
  • For many: abstract art, modern art, cubism, photography, &c.
  • For a few: man-made objects not usually labelled as art, &c.
  • For no known English-speaking human: non-man-made objects, the Holocaust, &c.

If the effect of knowing what "art" is (even though that particular word's common-usage definition can be articulated in terms of features) is understanding what English-speakers mean when they say it, then a list-based definition is as effective, though not as efficient, as a feature-based one. (You can make up for not knowing what criterion someone uses with a bit of Bayesian updating: the probability that Alice will call a Jackson Pollock piece "art" is greater if she called Léger's "Railway Crossing" "art" than if she did not.)
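To make that parenthetical concrete, here is a minimal Bayesian-updating sketch. The numbers are invented purely for illustration: suppose Alice uses either a "broad" or a "narrow" criterion for "art", and each criterion assigns the stated chances of her calling the Léger and the Pollock "art".

```python
# Hypothetical numbers: two candidate criteria Alice might be using.
p_broad = 0.5                     # prior that Alice's "art" criterion is broad
likelihood = {
    # P(calls the piece "art" | criterion)
    "leger":   {"broad": 0.9, "narrow": 0.2},
    "pollock": {"broad": 0.9, "narrow": 0.1},
}

def posterior_broad(called_leger_art: bool) -> float:
    """P(broad criterion | observation about the Léger), by Bayes' theorem."""
    l_b = likelihood["leger"]["broad"] if called_leger_art else 1 - likelihood["leger"]["broad"]
    l_n = likelihood["leger"]["narrow"] if called_leger_art else 1 - likelihood["leger"]["narrow"]
    return l_b * p_broad / (l_b * p_broad + l_n * (1 - p_broad))

def p_calls_pollock_art(called_leger_art: bool) -> float:
    """P(calls the Pollock "art" | observation), assuming independence given the criterion."""
    pb = posterior_broad(called_leger_art)
    return pb * likelihood["pollock"]["broad"] + (1 - pb) * likelihood["pollock"]["narrow"]

print(p_calls_pollock_art(True))   # ≈ 0.75 — higher after she called the Léger "art"
print(p_calls_pollock_art(False))  # ≈ 0.19 — lower after she did not
```

Under these made-up numbers, the prior probability that Alice calls the Pollock "art" is 0.5; observing her verdict on the Léger moves it up to about 0.75 or down to about 0.19.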

Comment by GloriaSidorum on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2013-03-31T18:16:07.436Z · LW · GW

It seems as though Pascal's mugging may be vulnerable to the same "professor god" problem as Pascal's wager. With probabilities that low, the difference between P(3^^^^3 people being tortured|you give the mugger $5) and P(3^^^^3 people being tortured| you spend $5 on a sandwich) may not even be calculable. It's also possible that the guy is trying to deprive the sandwich maker of the money he would otherwise spend on the Simulated People Protection Fund. If you're going to say that P(X is true|someone says X is true)>P(X is true|~someone says X is true) in all cases, then that should apply to Pascal's wager as well; P(Any given untestable god is real|there are several churches devoted to it)>P(Any given untestable god is real|it was only ever proposed hypothetically, tongue-in-cheek) and thus P(Pascal's God)>P(professor god). In this respect, I'm not sure how the two problems are different.
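For illustration, here is a minimal sketch of the expected-utility comparison the mugging relies on. Every number is invented, and the stand-in for 3^^^^3 is absurdly too small; the point is only that the comparison turns entirely on a probability difference that may not be calculable.

```python
from fractions import Fraction

# Made-up numbers, purely to illustrate the scale mismatch described above.
lives_at_stake = 10 ** 100        # stand-in for 3^^^^3, which is unimaginably larger
p_pay  = Fraction(1, 10 ** 40)    # invented: P(3^^^^3 people tortured | give the mugger $5)
p_keep = Fraction(1, 10 ** 40)    # invented: P(3^^^^3 people tortured | buy the sandwich)

# The expected-utility comparison turns entirely on (p_pay - p_keep); if that
# difference is not even calculable, neither is the sign of the comparison.
expected_difference = (p_pay - p_keep) * lives_at_stake
print(expected_difference)        # 0 under these symmetric assumptions
```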

Comment by GloriaSidorum on Don't Get Offended · 2013-03-19T04:53:17.116Z · LW · GW

Many thanks!

Comment by GloriaSidorum on Don't Get Offended · 2013-03-19T04:52:26.340Z · LW · GW

Or one where the differences are small, or trivial. I don't think this is "miraculous" or "implausible". Before the invention of agriculture, about seven to twelve thousand years ago, I'm not sure what pressures there could have been on Europeans to develop higher intelligence than Africans. So, in contrast to physical differences, many of which have well-established links to specific climates, intellectual genetic differences would probably be attributable to genetic drift and at most roughly 10,000 years of natural selection. To be clear, my position isn't that I have good evidence for this, merely that I don't know, and I don't assign this scenario as low a prior probability as you seem to.

Comment by GloriaSidorum on Don't Get Offended · 2013-03-19T01:23:08.400Z · LW · GW

Really? I've seen twin studies that purport a genetic explanation for IQ differences between individuals, but never between racial groups. If you've saved a link to a study of the latter type, I'd be really interested to read it.

Comment by GloriaSidorum on Don't Get Offended · 2013-03-18T03:25:50.258Z · LW · GW

Not a priori, but there has been at least one study performed on black children adopted by white families, this one, which comes to the conclusion that environment plays a key role. In all honesty, I haven't even read the study, because I can't find the full text online, but if more studies like it are performed and come to similar conclusions, then that could be taken as evidence of a largely environmental explanation.

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T18:52:07.375Z · LW · GW

Could you define "better"? Remember, until clippy actually rewrites its utility function, it defines "better" as "producing more paperclips". And what goal could produce more paperclips than the goal of producing the most paperclips possible?

(davidpearce, I'm not ignoring your response, I'm just a bit of a slow reader, and so I haven't gotten around to reading the eighteen page paper you linked. If that's necessary context for my discussion with whowhowho as well, then I should wait to reply to any comments in this thread until I've read it, but for now I'm operating under the assumption that it is not)

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T18:35:03.477Z · LW · GW

I'm sorry for misinterpreting. What evidence is there (from the clippy SI's perspective) that maximizing happiness would produce more paperclips?

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T17:55:21.012Z · LW · GW

Thanks! Eventually I'll figure out the formatting on this site.

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T17:52:04.936Z · LW · GW

That's a guess

As opposed to all of those empirically-testable statements about idealized superintelligences

Knowing why some entity avoids some thing has more predictive power.

In what way?

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T17:21:29.108Z · LW · GW

Would it then need to acquire the knowledge that post-utopians experience colonial alienation? That heaps of 91 pebbles are incorrect? I think not. At most it would need to understand that "When pebbles are sorted into heaps of 91, pebble-sorters scatter those heaps", or "When I say that colonial alienation is caused by being a post-utopian, my professor reacts as though I had made a true statement", or "When a human experiences certain phenomena, they try to avoid their continued experience". These statements have predictive power. The reason that an instrumentally rational agent tries to acquire new information is to increase its predictive power. If human behavior can be modeled without empathy, then this agent can maximize its instrumental rationality while ignoring empathy. As to your last bullet point, if I may be so bold, I doubt you actually believe it. Having a rule like "Modify your utility function every time it might be useful" seems rather irrational. Most possible modifications to a clipper's utility function will not have a positive effect, because most possible states of the world do not have maximal paperclips.

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T15:27:22.096Z · LW · GW

The two cases presented are not entirely comparable. If Jane's utility function is "Maximize Jane's pleasure", then she will choose not to drink in the first problem, the pleasure of non-hangover-having [FOR JANE] exceeding that of [JANE'S] intoxication. Whereas in the second problem Jane is choosing between the absence of a painful death [FOR A COW] and [JANE'S] delicious, juicy hamburger. Since she is not selecting for the strongest preference of every being in the Universe, but rather for herself, she will choose the burger. In terms of which utility function is more instrumentally rational, I'd say that "Maximize Jane's Pleasure" is easier to fulfill than "Maximize Pleasure", and is thus better at fulfilling itself. However, instrumentally rational beings, by my definition, are merely better at fulfilling whatever utility function they are given, not at choosing a useful one.

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-17T04:09:46.208Z · LW · GW

Perhaps its paperclipping machine is slowed down by suffering. But it doesn't have to be reducing suffering; it could be sorting pebbles into correct heaps, or spreading Communism, or whatever. What I was trying to ask was, "In what way is the instrumental rationality of a being who empathizes with suffering better, or more maximal, than that of a being who does not?" The way I've seen it used, "instrumental rationality" refers to the ability to evaluate evidence to make predictions, and to choose optimal decisions, however they may be defined, based on those predictions. If my definition is sufficiently close to your own, then how does "understanding", which I have taken, based on your previous posts, to mean "empathetic understanding", maximize this? To put it yet another way, if we imagine two beings, M and N, such that M has "maximal instrumental rationality" and N has "maximal instrumental rationality minus empathetic understanding", why does M have more instrumental rationality than N?

Comment by GloriaSidorum on Decision Theory FAQ · 2013-03-16T21:08:14.377Z · LW · GW

To have maximal instrumental rationality, an entity would have to understand everything... Why? In what situation is someone who empathetically understands, say, suffering better at minimizing it (or, indeed, maximizing paperclips) than an entity who can merely measure it and work out on a sheet of paper what would reduce the size of the measurements?

Comment by GloriaSidorum on Frequentist Magic vs. Bayesian Magic · 2013-03-11T02:32:32.531Z · LW · GW

Fixed. Thanks.

Comment by GloriaSidorum on Frequentist Magic vs. Bayesian Magic · 2013-03-10T18:53:32.496Z · LW · GW

Orthography is not intuitive. To test my native speaker instinct, I'll pick a case that is. Imagine a user whose name was "Praise_Him". To me, it would be more natural to say "Praise_Him's post" than "Praise_His post"; the former might give me a second's pause, but the latter would make me reread the sentence. Thus, at least the way I use the language, a proper name which incorporates a pronoun is possessivized as a whole, and cousin_it's is correct. But "Its" and "It's" are homophonous, so it wouldn't matter to me much.

Comment by GloriaSidorum on Welcome to Less Wrong! · 2013-03-06T23:24:39.746Z · LW · GW

Hello. My name is not, in fact, Gloria. My username is merely (what I thought was) a pretty-sounding Latin translation of the phrase "the Glory of the Stars", though it would actually be "Gloria Siderum" and I was mixing up declensions.

I read Three Worlds Collide more than a year ago, and recently re-stumbled upon this site via a link from another forum. Reading some of Eliezer's series, I realized that most of my conceptions about the world were extremely fuzzy, and that they could be better said to bleed into each other than to tie together. I realized that a large amount of what I thought of as my "knowledge" is just a set of passwords, and that I needed to work on fixing that. And I figured that a good way to practice forming coherent, predictive models, and being aware of what mental processes may affect those models, would be to join an online community in which a majority of posters would have read a good number of articles on biases, heuristics, and becoming more rational, and would thus be equipped to some degree to call out flaws in my thinking.

Comment by GloriaSidorum on Newcomb's Problem and Regret of Rationality · 2013-03-06T05:20:16.024Z · LW · GW

I've been fiddling around with this in my head, and I arrived at this argument for one-boxing. Let us suppose a Rule, which we shall call W: FAITHFULLY FOLLOW THE RULE THAT, IF FOLLOWED FAITHFULLY, WILL ALWAYS OFFER THE GREATEST CHANCE OF THE GREATEST UTILITY. To show that W one-boxes, let us list all logical possibilities, which we'll call W1, W2, and W3: W1, always one-boxing; W2, always two-boxing; and W3, sometimes one-boxing and sometimes two-boxing. Otherwise, all of these rules are identical in every way, and identical to W in every way.

Imagining that we're Omega, we'd obviously place nothing in the box of the agent which follows W2, since we know that agent would two-box. Since this limits the utility gained, W2 is not W. W3 is a bit trickier, but a variant of W3 which two-boxes most of the time will probably not be favoured by Omega, since this would reduce his chance of being correct in his prediction. This reduces the chance of getting the greatest utility by however much, and thus disqualifies all close-to-W2 variants of W3. A perfect W1 would guarantee that the box would contain 1,000,000 dollars, since Omega would get its prediction wrong in not rewarding an agent who one-boxes. However, this rule GUARANTEES not getting the 1,001,000 dollars, and is therefore sub-optimal in that respect. Because of Omega's optimization, there is no rule for which that is the most likely outcome, but if there is a rule for which it is second-most-likely, that would probably be W. In any case, W favours B over A.

I was going to argue that W is more rational than a hypothetical rule Z, which I think is what makes most two-boxers two-box, but maybe I'll do that later, when I'm more sure I have time.
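The payoff comparison behind this can be written out directly. A minimal sketch, under the simplifying assumption that Omega predicts the agent's actual choice with some fixed accuracy (the 0.99 below is an arbitrary stand-in); it ignores the point above that Omega's accuracy may itself depend on which rule is being followed.

```python
def expected_payoff(p_one_box: float, omega_accuracy: float = 0.99) -> float:
    """Expected dollars for an agent that one-boxes with probability p_one_box.

    Omega fills box B with $1,000,000 iff it predicts one-boxing; box A always
    holds $1,000. omega_accuracy is how often Omega predicts the agent's actual
    choice correctly (0.99 is an arbitrary stand-in).
    """
    # If the agent one-boxes: gets $1M only when Omega correctly predicted one-boxing.
    one_box_value = omega_accuracy * 1_000_000
    # If the agent two-boxes: gets $1,000 plus $1M only when Omega was wrong.
    two_box_value = 1_000 + (1 - omega_accuracy) * 1_000_000
    return p_one_box * one_box_value + (1 - p_one_box) * two_box_value

print(expected_payoff(1.0))   # W1, always one-box:  990,000
print(expected_payoff(0.0))   # W2, always two-box:   11,000
print(expected_payoff(0.5))   # a W3 mixture lands in between
```

Under this simple model, one-boxing comes out ahead for any accuracy above about 50.05%, which is the dominance of W1 that the argument needs.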