Comments

Comment by Cyan2 on Mind Projection Fallacy · 2009-04-27T03:53:49.000Z · LW · GW

I don't know what document that link originally pointed to, but this document contains one of Jaynes's earliest (if not the earliest) descriptions of the idea.

Comment by Cyan2 on Probability is in the Mind · 2009-03-12T20:38:00.000Z · LW · GW

Stephen R. Diamond, there are two distinct things in play here: (i) an assessment of the plausibility of certain statements conditional on some background knowledge; and (ii) the relative frequency of outcomes of trials in a counterfactual world in which the number of trials is very large. You've declared that probability can't be (i) because it's (ii) -- actually, the Kolmogorov axioms apply to both. Justification for using the word "probability" to refer to things of type (i) can be found in the first two chapters of this book. I personally call things of type (i) "probabilities" and things of type (ii) "relative frequencies"; the key is to recognize that they need different names.
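A minimal illustrative sketch of the distinction, with a simulated die standing in for the counterfactual trials (the code is mine, purely for illustration):

```python
import random

# (i) A plausibility assignment over a die roll, conditional on the
# background knowledge "the die is fair". This is a state of knowledge.
plausibility = {face: 1 / 6 for face in range(1, 7)}

# (ii) A relative frequency over many (here simulated) trials -- a
# property of a sequence of outcomes, not of anyone's knowledge.
random.seed(0)
n = 100_000
counts = {face: 0 for face in range(1, 7)}
for _ in range(n):
    counts[random.randint(1, 6)] += 1
rel_freq = {face: c / n for face, c in counts.items()}

# The Kolmogorov axioms hold for both: non-negativity, normalization,
# and additivity, e.g. P(even) = P(2) + P(4) + P(6).
print(sum(plausibility[f] for f in (2, 4, 6)))  # exactly 0.5
print(sum(rel_freq[f] for f in (2, 4, 6)))      # near 0.5, not exact
```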

On your further critiques:
(1) Eliezer is a determinist; see the quantum physics sequence.
(2) True. A logical argument is only as reliable as its premises, and every method for learning from empirical information is only as reliable as its inductive bias. Unfortunately, every extant practical method of learning has an inductive bias, and the no free lunch theorems give reason to believe that this is a permanent state of affairs.

I'm not sure what you mean in your last sentence...

Comment by Cyan2 on Markets are Anti-Inductive · 2009-02-26T03:49:11.000Z · LW · GW

Vilhelm S., companies and people who lose money on CDOs have mortgages to pay and employ people who have mortgages to pay... Once the system gets coupled like that, one unlucky bet can start the cascade. I'm not saying this actually happened, but it's a mechanism which could falsify the assertion "the lack of correlation doesn't stop being real just because people believe in it".

Comment by Cyan2 on An Especially Elegant Evpsych Experiment · 2009-02-13T16:36:43.000Z · LW · GW

David, the inelegance is that the study asked adults in general to imagine parental grief rather than asking parents in particular. (Your correct observations about imagined versus actual grief were already set forth in the post.)

Comment by Cyan2 on The Evolutionary-Cognitive Boundary · 2009-02-12T17:38:57.000Z · LW · GW

This post helps to ease much of what I have found frustrating in the task of understanding the implications of EP.

Comment by Cyan2 on Failed Utopia #4-2 · 2009-01-23T00:09:00.000Z · LW · GW

Huh. I guess I just don't see Angel (the TV character, not the commenter) as the equivalent of the verthandi. (Also, naming the idea after the actor instead of the character led me somewhat astray.)

Comment by Cyan2 on Failed Utopia #4-2 · 2009-01-22T14:27:00.000Z · LW · GW
If you google boreana you should get an idea of where that term comes from, same as verthandi.

Still need a little help. Top hits appear to be David Boreanaz, a plant in the Rue family, and a moth.

Comment by Cyan2 on Justified Expectation of Pleasant Surprises · 2009-01-19T19:18:47.000Z · LW · GW
No. I asserted that...

Fair enough.

This might be a good idea... At this point, the "hedonic impact" of this mechanic will almost disappear.

I don't disagree with this. My scenario is premised on the reward being a surprise, so it implicitly assumes one-time use, or at least no overuse.

Comment by Cyan2 on Justified Expectation of Pleasant Surprises · 2009-01-17T02:01:25.000Z · LW · GW
Well, that is even worse, because essentially, you just took the choice away from the player.

I can't help but feel that you didn't really bother to think this response through. Taken literally, you've just asserted that a surprising reward with character synergy is worse than a surprising rigid reward that makes the player feel regret. You assert that this is so because choice was taken away from the player even though neither situation involves player choice.

I get that your design principle is to give the player choice and the ability to plan. So what is the right way to give "good news" to the player with the most hedonic impact?

Comment by Cyan2 on She has joined the Conspiracy · 2009-01-15T23:05:38.000Z · LW · GW

CannibalSmith, JacobLyles,

The emphasis on Bayesian probability is because it is the simplest way to extend classical logic to propositions with varying degrees of plausibility. Just as all classical logic can be reduced to repeated applications of modus ponens, all manipulations of plausibility can be reduced to applications of Bayes' Theorem (assuming you want results that will line up with classical logic as the plausibilities approach TRUE and FALSE).
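A minimal sketch of that reduction (my illustration, not part of the comment): Bayes' Theorem applied to propositions A and B reproduces the classical syllogisms in the limit of certainty, and degrades gracefully in between.

```python
def posterior(p_A, p_B_given_A, p_B_given_not_A):
    """Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B),
    with P(B) computed by total probability."""
    p_B = p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)
    return p_B_given_A * p_A / p_B

# Weak syllogism: if A implies B (P(B|A) = 1), then learning B makes
# A more plausible than it was a priori...
print(posterior(0.3, 1.0, 0.5))  # 0.4615... > 0.3

# ...and in the classical limit, where B is impossible without A,
# learning B makes A certain (TRUE).
print(posterior(0.3, 1.0, 0.0))  # 1.0
```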

Comment by Cyan2 on Justified Expectation of Pleasant Surprises · 2009-01-15T15:40:10.000Z · LW · GW
If some or all abilities are hidden at the beginning, that forces the player to choose based on incomplete knowledge, and more often than not, leads to regrets: "I wish I'd purchased that ability which turned out to work in nice synergy with others, and not this one which turned out to be useless...". Especially if there's some finite pool of resources used to purchase these abilities. And that is not fun, even if surprising.

This seems to miss the point -- you're talking about a surprise that isn't a pleasant surprise. Suppose the game were designed so that, after achieving a goal, you got an unexpected bonus ability with awesome synergy with the character, no matter how the character had been developed up to that point. As a game designer, ignoring the difficulty of realizing such a design, how would you say the Fun-theoretic potential of this scenario stacks up?

A rule of thumb in game design is to never make players make uninformed choices, as that only leads to frustration. This beats any possible pleasant surprise that might be there.

This rule of thumb is overly broad as stated. It would rule out poker, "fog of war" in RTS games, etc.

Comment by Cyan2 on Building Weirdtopia · 2009-01-13T22:14:42.000Z · LW · GW
Utopia originally meant no-place; I have a hard time forgetting that meaning when people talk about them.

The term "utopia" was a deliberate pun on "outopia" meaning "no place" and "eutopia" meaning "good place". It seems doubtful that Thomas More actually intended to depict his personal ideal society, so one might say that Utopia is the original Weirdtopia.

I think we're looking at premature search-halts here.

I plead no contest.

Comment by Cyan2 on Building Weirdtopia · 2009-01-13T00:53:42.000Z · LW · GW

Economic Weirdtopia: FAIth determines that the love of money actually is the root of ~75% of evil, so it's back to the barter system for us.

Sexual Weirdtopia: FAIth determines that the separatist feminists were right -- CEV requires segregation by sex. Homosexual men and lesbians laugh and laugh. Research on immersive VR becomes a preoccupation among the heterosexual majority in both segregated camps.

Not very plausible, but... "That's the thing about FAIth. If you don't have it, you can't understand it. And if you do, no explanation is necessary."

Comment by Cyan2 on Changing Emotions · 2009-01-05T19:15:02.000Z · LW · GW
I don't yet see quantifiable arguments why from-scratch AI is easier [than human augmentation].

From-scratch AI could also be justified as yielding greater benefits even if it is as difficult as (or more difficult than) human augmentation.

Comment by Cyan2 on Growing Up is Hard · 2009-01-04T17:34:01.000Z · LW · GW
Cyan, is that a standard hypothesis? I'm not sure how "practice" would account for a very gregarious child lacking an ordinary fear of strangers.

I don't know if it's a standard hypothesis -- it's just floating there in my brain as background knowledge sans citation. It's possible that I read it in a popular science book on neuroplasticity. I'd agree that "practice" doesn't plausibly account for the lack of ordinary fear; it's intended as an explanation for the augmentations, not the deficits.

Comment by Cyan2 on Growing Up is Hard · 2009-01-04T04:50:28.000Z · LW · GW

Nitpick for Doug S.: that's actually two coupled evolutionary limits. Babies' heads need to fit through women's pelvises, which also have to be narrow enough for useful locomotion.

Comment by Cyan2 on Growing Up is Hard · 2009-01-04T04:44:06.000Z · LW · GW
Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals - including certain social emotions - dominate more than they should.

Having not read the book, I don't know if Deacon deals with any alternative hypotheses, but one alternative I know of is the idea that WSers have augmented verbal and social skills because those are the only cognitive skills they are able to practice. In short, WSers are (postulated to be) geniuses at social interaction because of practice, not because of brain signal imbalance. This is analogous to the augmented leg and foot dexterity of people lacking arms.

How could we test these alternatives? I seem to recall that research has been done in the temporary suppression of brain activity using EM fields (carefully, one would hope). If I haven't misremembered, then effects of the brain signal imbalance might be subject to experimental investigation.

Comment by Cyan2 on Rationality Quotes 20 · 2008-12-23T04:55:22.000Z · LW · GW

TGGP, I think it's supposed to. The General is quoted in the linked article.

Comment by Cyan2 on Prolegomena to a Theory of Fun · 2008-12-18T02:40:36.000Z · LW · GW
Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization...?

False dichotomy.

Comment by Cyan2 on Not Taking Over the World · 2008-12-16T00:22:21.000Z · LW · GW
If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?

One of these things is much like the other...

Comment by Cyan2 on Artificial Mysterious Intelligence · 2008-12-08T15:42:04.000Z · LW · GW

Jeff, if you search for my pseudonym in the comments of the "Natural Selection's Speed Limit and Complexity Bound" post, you will see that I have already brought MacKay's work to Eliezer's attention. Whatever conclusions he's come to have already factored MacKay in.

Comment by Cyan2 on Selling Nonapples · 2008-11-14T21:24:17.000Z · LW · GW

Eliezer, you'd have done better to ignore ReadABook's trash. Hir ignorance of your arguments and expertise was obvious.

Comment by Cyan2 on Lawful Uncertainty · 2008-11-11T00:37:44.000Z · LW · GW

Peter de Blanc, I don't have an example, just a vague memory of reading about minimax-optimal decision rules in J. O. Berger's Statistical Decision Theory and Bayesian Analysis. (That same text notes that minimax rules are Bayes rules under the assumption that your opponent is out to get you.)

Comment by Cyan2 on Lawful Uncertainty · 2008-11-10T21:27:52.000Z · LW · GW

IIRC, there exist minimax strategies in some games that are stochastic. There are some games in which it is in fact best to fight randomness with randomness.
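A minimal sketch with matching pennies, the standard toy example (my illustration): every deterministic strategy is exploitable, and the minimax strategy is to randomize.

```python
def payoff(p, q):
    """Row player's expected payoff in matching pennies: +1 on a
    match, -1 on a mismatch; p and q are the two players'
    probabilities of playing heads."""
    match = p * q + (1 - p) * (1 - q)
    return 2 * match - 1

# The payoff is linear in q, so the opponent's best response lies at
# an extreme; the worst case over q in {0, 1} is maximized at p = 1/2.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, min(payoff(p, q) for q in (0.0, 1.0)))
# p = 0.5 guarantees 0; either deterministic choice guarantees only -1.
```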

Comment by Cyan2 on Building Something Smarter · 2008-11-04T23:37:00.000Z · LW · GW

For what it's worth, Tim Tyler, I'm with you. Utility scripts count as programs in my books.

Comment by Cyan2 on Measuring Optimization Power · 2008-10-28T13:53:45.000Z · LW · GW
I mean, we weren't even designed by a mind, we sprung from simple selection!

This is backwards, isn't it? Reverse engineering a system designed by a (human?) intelligence is a lot easier than reverse engineering an evolved system.

Comment by Cyan2 on Aiming at the Target · 2008-10-27T13:25:36.000Z · LW · GW

Emile, you've mixed up "optimization process" and "intelligence". According to your post, Eliezer wouldn't consider evolution an optimization process. He does; he doesn't consider it intelligent.

Comment by Cyan2 on Which Parts Are "Me"? · 2008-10-23T02:43:00.000Z · LW · GW
...it seems to me much of the beautiful LaTeX equations and formulas are only to give the impression of rigor.

I didn't suggest equations to enforce some false notion of rigor -- I suggested them as an aid to clear communication.

Comment by Cyan2 on Which Parts Are "Me"? · 2008-10-23T00:09:42.000Z · LW · GW

Jef Allbright, it seems to me that if you want Eliezer to take your criticisms seriously, you're going to need more equations and fewer words. (It would be nice if Eliezer produced some equations too.)

Comment by Cyan2 on Which Parts Are "Me"? · 2008-10-22T19:20:05.000Z · LW · GW

"But I still suspect that there's a little distance there, that wouldn't be there otherwise, and I wish my brain would stop doing that."

A finely crafted recursion. I salute you.

Comment by Cyan2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T19:45:20.000Z · LW · GW

So in my posts on this topic, I proceeded to (attempt to) convey a larger and more coherent context making sense of the ostensible issue.

Right! Now we're communicating. My point is that the context you want to add is tangential (or parallel...? pick your preferred geometric metaphor) to Eliezer's point. That doesn't mean it's without value, but it does mean that it fails to engage Eliezer's argument.

But it seems to me that I addressed this head-on at the beginning of my initial post, saying "Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends."

Eliezer's point is that humans can't fully specify the ends due to "hostile hardware" issues if for no other reason. The hostile hardware part is key, but you never mention it or anything like it in your original comment. So, no, in my judgment you don't address it head-on. In contrast, consider Phil Goetz's first comment (the second of this thread), which attacks the hostile hardware question directly.

Comment by Cyan2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T17:19:48.000Z · LW · GW

Since you said you didn't know what to do with my statement, I'll add, just replace the phrase "limit the universe of discourse to" with "consider only" and see if that helps. But I think we're using the same words to talk about different things, so your original comment may not mean what I think it means, and that's why my criticism looks wrong-headed to you.

Comment by Cyan2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T17:13:34.000Z · LW · GW

Jef Allbright,

By subsequent discussion, I meant Phil Goetz's comment about Eliezer "neglecting that part accounted for by the unpredictability of the outcome". I'm with him on not understanding what "a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences" means; I also found your reply to me utterly incomprehensible. In fact, it's incredible to me that the same mind that could formulate that reply to me would come shuddering to a halt upon encountering the unexceptionable phrase "universe of discourse".

Comment by Cyan2 on Ends Don't Justify Means (Among Humans) · 2008-10-15T03:21:11.000Z · LW · GW
But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences.

This point and the subsequent discussion are tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions. To see this, limit the universe of discourse to actions which have predictable effects and note that Eliezer's argument still makes strong claims about how humans should act.

Comment by Cyan2 on Crisis of Faith · 2008-10-12T16:40:00.000Z · LW · GW

1) Do you believe this is true for you, or only other people?

I don't fit the premise of the statement -- my cherished spouse is not yet late, so it's hard to say.

2) If you know that someone's cherished late spouse cheated on them, are you justified in keeping silent about the fact?

Mostly yes.

3) Are you justified in lying to prevent the other person from realizing?

Mostly no.

4) If you suspect for yourself (but are not sure) that the cherished late spouse might have been unfaithful, do you think that you will be better off, both for the single deed, and as a matter of your whole life, if you refuse to engage in any investigation that might resolve your doubts one way or the other?

Depends on the person. Some people would be able to leave their doubts unresolved and get on with their life -- others would find their quality of life affected by their persistent doubts.

If there is no resolving investigation, do you think that exerting some kind of effort to "persuade yourself", will leave you better off?

No. You can count that as a win if you like -- "deluding myself" is too strong. "I am better off remaining deluded ..." is more likely to be true for some people.

5) Would you rather associate with friends who would (a) tell you if they discovered previously unsuspected evidence that your cherished late spouse had been unfaithful, or who would (b) remain silent about it?

Supposing I am emotionally fragile and might harm myself if I discovered that my spouse had been unfaithful, (b). Supposing that I am emotionally stable and that I place great weight on having an accurate view of the circumstances of my life, (a). Other situations, other judgment calls.

Which would be a better human being in your eyes, and which would be a better friend to you?

Depends on how I can reasonably be expected to react.

Comment by Cyan2 on Crisis of Faith · 2008-10-11T02:05:53.000Z · LW · GW

Fact check: MDL is not Bayesian. Done properly, it doesn't even necessarily obey the likelihood principle. Key term: normalized maximum likelihood distribution.
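A toy sketch of the normalized maximum likelihood distribution for a Bernoulli model (my illustration, not part of the comment): the normalizer sums over the whole sample space, which is how NML can depend on data that were never observed and thus fail the likelihood principle.

```python
from math import comb

def max_lik(k, n):
    """sup over theta of theta**k * (1 - theta)**(n - k), attained
    at the maximum-likelihood estimate theta = k / n."""
    if k in (0, n):
        return 1.0
    t = k / n
    return t**k * (1 - t)**(n - k)

n = 10
# p_NML(k) = C(n,k) * max_lik(k, n) / sum over all possible outcomes.
denom = sum(comb(n, k) * max_lik(k, n) for k in range(n + 1))
for k in range(n + 1):
    print(k, comb(n, k) * max_lik(k, n) / denom)
```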

Comment by Cyan2 on Make an Extraordinary Effort · 2008-10-08T13:55:10.000Z · LW · GW
...the language and the arts with my comparison...

I thought about going this way, but I decided to stick with what I know.

Since sarcasm seems to have failed, let me just state flatly that all of the cultures we've mentioned have enough members and enough diversity that blanket assertions such as, "Japanese martial arts are worse than Chinese ones," or "American football is a cheap knockoff of rugby" are reductive and parochial to the point of not-even-wrongness.

Comment by Cyan2 on Make an Extraordinary Effort · 2008-10-08T02:03:28.000Z · LW · GW

Most American culture seems like a reinvention of British culture demanded by national pride. My impression is that their versions are like cheap knock-offs of the originals. Their beers are worse. They even managed to mess up the game of football. America faces limitations due to their vast tracts of underpopulated flyover country. That's a problem Britain doesn't have.

Comment by Cyan2 on My Bayesian Enlightenment · 2008-10-06T02:48:07.000Z · LW · GW

For those who are interested, a fellow named Kevin Van Horn has compiled a nice unofficial errata page for PT:LOS here. (Check the acknowledgments for a familiar name.)

Comment by Cyan2 on Trying to Try · 2008-10-02T02:56:18.000Z · LW · GW
This is a false dichotomization. Everything is reality!

"Quotation mode" is analogous to an escape character. There's no dualism here.

Comment by Cyan2 on The Magnitude of His Own Folly · 2008-10-01T17:12:00.000Z · LW · GW

"Consider the horror of America in 1800, faced with America in 2000. The abolitionists might be glad that slavery had been abolished. Others might be horrified, seeing federal law forcing upon all states a few whites' personal opinions on the philosophical question of whether blacks were people, rather than the whites in each state voting for themselves. Even most abolitionists would recoil from in disgust from interracial marriages - questioning, perhaps, if the abolition of slavery were a good idea, if this were where it led. Imagine someone from 1800 viewing The Matrix, or watching scantily clad dancers on MTV. I've seen movies made in the 1950s, and I've been struck at how the characters are different - stranger than most of the extraterrestrials, and AIs, I've seen in the movies of our own age. Aliens from the past.

Something about humanity's post-Singularity future will horrify us...

Let it stand that the thought has occurred to me, and that I don't plan on blindly trusting anything...

This problem deserves a page in itself, which I may or may not have time to write."

- Eliezer S. Yudkowsky, Coherent Extrapolated Volition

Comment by Cyan2 on Awww, a Zebra · 2008-10-01T13:24:29.000Z · LW · GW
Science: Best Sans Booty.

Schrödinger disagreed. (So did Einstein... and Feynman... I could mention Kinsey, but that would be cheating, I suppose.)

Comment by Cyan2 on The Level Above Mine · 2008-09-26T13:55:12.000Z · LW · GW

Jaynes was a really smart guy, but no one can be a genius all the time. He did make at least one notable blunder in Bayesian probability theory -- a blunder he could have avoided if only he'd followed his own rules for careful probability analysis.

Comment by Cyan2 on The Sheer Folly of Callow Youth · 2008-09-19T14:18:43.000Z · LW · GW
Eliezer, I think you have dissolved one of the most persistent and venerable mysteries: "How is it that even the smartest people can make such stupid mistakes?"

Michael Shermer wrote about that in "Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time". On the question of smart people believing weird things, he essentially describes the same process that Eliezer experienced: once smart people decide to believe a weird thing for whatever reason, it's much harder to convince them that their beliefs are flawed, because they are that much better at poking holes in counterarguments.

Comment by Cyan2 on Raised in Technophilia · 2008-09-17T15:49:21.000Z · LW · GW
One disturbing thing about the Petrov issue that I don't think anyone mentioned last time, is that by praising nuclear non-retaliators we could be making future nuclear attacks more likely by undermining MAD.

Petrov isn't praised for being a non-retaliator. He's praised for doing good probable inference -- specifically, for recognizing that the detection of only 5 missiles pointed to malfunction, not to a U.S. first strike, and that a "retaliatory" strike would initiate a nuclear war. I'd bet that, counterfactually, Petrov would have retaliated if the malfunction had caused the spurious detection of a U.S. first strike with the expected hundreds of missiles.
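A hedged sketch of that inference, with every number invented purely for illustration:

```python
# All numbers below are assumptions made up for this sketch.
p_attack = 1e-4             # assumed prior that a first strike is underway
p_five_given_attack = 1e-4  # a real strike involves hundreds of missiles,
                            # so detecting exactly five is very unlikely
p_five_given_fault = 1e-2   # a malfunction plausibly shows a few blips

p_five = (p_five_given_attack * p_attack
          + p_five_given_fault * (1 - p_attack))
print(p_five_given_attack * p_attack / p_five)  # ~1e-6: malfunction wins
```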

Comment by Cyan2 on Mirrors and Paintings · 2008-08-23T05:57:35.000Z · LW · GW
You've got to be almost as smart as a human to recognize yourself in a mirror...

Quite recently, research has shown that the above statement may not actually be true.

Comment by Cyan2 on Is Fairness Arbitrary? · 2008-08-14T17:59:12.000Z · LW · GW
Eliezer, I think you meant to say that "19 * 103 might not be 1957" instead of 1947. Either that or I'm misunderstanding that entire paragraph.

The setup's a little opaque, but I believe the correct reading is that the other person (characterized as honest) is correcting the faulty multiplication of the notional reader ("you").

Comment by Cyan2 on Probability is Subjectively Objective · 2008-07-17T17:50:42.000Z · LW · GW

Barkley Rosser, there definitely is something a little hinky going on in those infinite dimensional model spaces. I don't have the background in measure theory to really grok that stuff, so I just thank my lucky stars that other people have proven the consistency of Dirichlet process mixture models and Gaussian process models.

Comment by Cyan2 on Probability is Subjectively Objective · 2008-07-17T03:17:09.000Z · LW · GW

Barkley Rosser, what I have in mind is a reality which is, in principle, predictable given enough information. So there is a "true" distribution -- it's conditional on information which specifies the state of the world exactly, so it's a delta function at whatever the observables actually turn out to be. Now, there exist unbounded sequences of bits which don't settle down to any particular relative frequency over the long run, and likewise, there is no guarantee that any particular sequence of observed data will lead to my posterior distribution getting closer and closer to one particular point in parameter space -- if my model doesn't at least partially account for the information which determines what values the observables take. Then I wave my hands and say, "That doesn't seem to happen a lot in practical applications, or at least, when it does happen, we humans don't publish until we've improved the model to the point of usefulness."
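A minimal sketch of the convergence in question, using a conjugate Beta-Bernoulli model (my illustration):

```python
import random

random.seed(1)
true_theta = 0.7   # the point the data actually point to
a, b = 1.0, 1.0    # Beta(1, 1) prior over the Bernoulli parameter

for trial in range(1, 10_001):
    if random.random() < true_theta:
        a += 1
    else:
        b += 1
    if trial in (10, 100, 1000, 10_000):
        mean = a / (a + b)
        sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
        print(trial, round(mean, 4), round(sd, 4))
# The posterior mean heads toward 0.7 and the posterior sd toward 0,
# i.e. the posterior concentrates toward a point in parameter space.
```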

I didn't follow your point about a distribution for which Bayes' Theorem doesn't hold. Are you describing a joint probability distribution for which Bayes' Theorem doesn't hold, or are you talking about a Bayesian modeling problem in which Bayes estimators are inconsistent a la Diaconis and Freedman, or do you mean something else again?

Comment by Cyan2 on Probability is Subjectively Objective · 2008-07-16T22:22:05.000Z · LW · GW

Barkley Rosser, it's a strong assumption in principle, but in practice, humans seem to be pretty good at obtaining enough information to put in the model such that the posterior does in fact converge to some point in the parameter space.