Comments

Comment by michael_vassar3 on Formative Youth · 2009-02-25T07:24:25.000Z · LW · GW

I don't disagree, but this sure seems to me like something Freud and Jung would say and that Judith Rich Harris would say was nonsense, possibly invoking evolutionary psychology and saying that it would be unfit to optimize for chaotically determined attractors.

Honestly, it seems to me that nerds are far more influenced by childhood and by constructed experiences such as movies, books, TV, religion, and yes, to a more limited extent even classroom experiences than non-nerds are. Partly this is because they consciously choose to try to hold onto their values while non-nerds are content to let their values drift (and "hold onto your values" is an explicit value learned from constructed experiences). Partly this is because nerds tend to simply miss out on many of the less constructed experiences that young people create spontaneously for one another or to get the bad side of those experiences but not the good side. Partly nerds may just be less aware of less processed data and not notice or respond to instincts, impulses, imitative opportunities and assorted influences that would tend to jostle their behavioral patterns into a new equilibrium. They perceive abstractions handed to them explicitly by other people more easily than patterns that show up around them. Oddly, this seems to me to be a respect in which nerds are more feminine rather than being more masculine as they are in many other ways.

Comment by michael_vassar3 on On Not Having an Advance Abyssal Plan · 2009-02-24T08:05:10.000Z · LW · GW

Hmm. It seems to me that Eliezer had a plan for the economic crisis, namely to keep his mouth shut about it, and when the crisis actually happened the pressure to talk about it apparently exceeded his expectations and he didn't follow through. And that was with the level of agency and unitary identity of a practiced human rationalist. Imagine how much less able to follow through he would be if he fell entirely under the sway of an arbitrary coalition of sub-motivations every 4 years (hmm... he does deny identity with the authors of his earlier papers). Oh, and I have to ditto "Katrina".

Comment by michael_vassar3 on Wise Pretensions v.0 · 2009-02-20T20:21:50.000Z · LW · GW

I'm with Sigivald here: I think that the archetype is about serenity, not impartiality, though the selfishly impartial may at times misuse the archetype by claiming it as a justification for their actions. Personally, I thought that my comment yesterday about the question of whether you should criticize those like or unlike you was more interesting/important.

Answering the question the post posed, I think that the less rhetorical style is superior for everyday use, but it doesn't hurt to whip out a more intense article from time to time, preferably to make a better point than this one.

Comment by michael_vassar3 on Pretending to be Wise · 2009-02-20T06:12:04.000Z · LW · GW

Taking a hint from the babyeaters, I can say that the pro-life people are doing what's right' and what's right''', the pro-choice people are doing what's right'' and what's right'''', Israel what's right''''' and right''''''', and Hamas what's right'''''' and right''''''''. Then I can also say that if they wanted to switch to doing what was right, all groups would turn their efforts to FAI plus sustaining their actual existence. However, it seems that there may be many situations where fairly intelligent and well intentioned SL1 political commentators in the US, who may identify as broadly conservative or liberal, agree about what is right^, where right^ is closer to right than most actions or preferences are. In this case it often seems to me that the conservative political commentators emphasize opposition to those who are right''' and right''''' because those perspectives are farther from their own and thus more wrong^, while liberal commentators emphasize the imperfections of those who are right'' and right'''' because they treat those who are right'' and right'''' as subject to reasons, as moral agents, and thus as subject to correction/criticism, while those who are trying to do what's right''' and right''''' are merely seen as moral subjects beneath reproach. Victims. Children deserving sympathy and protection but not blame.

Of course, this was more the case back when there were fairly intelligent and well intentioned conservative SL1 political commentators in the US, e.g. before about 2004.

Comment by michael_vassar3 on Against Maturity · 2009-02-19T09:05:43.000Z · LW · GW

Infotropism: Michael (dot) Vassar at gmail

Eliezer: You really should tell people how to reach me as well as telling them that they should do so, either with my email or a link to SIAI's "about us" page, which now has my contact info.

Comment by michael_vassar3 on Against Maturity · 2009-02-19T00:50:16.000Z · LW · GW

Psy-Kosh: My guess is that most learning is deeper and more authentic if it is from one's own experience. Eliezer seems to particularly prize personal learning, favoring secrecy in science in his idea of paradise.

Comment by michael_vassar3 on Good Idealistic Books are Rare · 2009-02-17T20:54:59.000Z · LW · GW

Self-help? Maybe some. I think that most of the most popular self-help can seem DEEPLY cynical to someone of nerdy disposition like myself. The essential message of "How to Win Friends and Influence People" is "don't try to reason with people; instead flatter them and otherwise manipulate their emotions to create immediate pleasure that they will associate with you". OTOH, the message is also that if you do this you can have SUCCESS!!!, so whether it's cynical or idealistic depends on how much you value SUCCESS!!!. When I first read it I didn't have Something To Protect (TM), which works like soap for dissolving otherwise analyticophobic cynicism into the nerdy soul. Robert Greene is famous for being even more cynical than Carnegie. The Secret, more recently successful, tells people that the world is arbitrary, both simple and inscrutable. IIIck! Worse than saying "Look, Cthulhu is standing over there and is about to eat you". I'd consider suicide if its worldview were true and if I could even entertain the impossible possibility. Many members of the PUA community sell books that are cynical about women being people but optimistic about sex being widely available. I'd call that VERY cynical on net.

Comment by michael_vassar3 on An Especially Elegant Evpsych Experiment · 2009-02-14T19:10:22.000Z · LW · GW

Weber's law applies to perceptions. You can't really perceive time on a span of years. I'm pretty certain that humans can't intuitively distinguish 18 years from 20. My post asserted that people should use rough concepts of age categories, but that those categories shouldn't involve representing age. Those categories should also not correspond precisely to our categories, due to improvements in nutrition and reductions in disease burden; e.g., we go through puberty earlier, grow taller, etc.
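
As a minimal illustration of the Weber's-law point: the just-noticeable difference is roughly proportional to the baseline magnitude, and 18 vs. 20 years is only about an 11% difference. The Weber fraction used below is my assumption, purely for illustration; the true fraction for multi-year spans is unknown (and arguably undefined, since such spans aren't directly perceived at all).

```python
# Illustrative sketch of Weber's law applied to age discrimination.
# ASSUMPTION: a Weber fraction k of ~0.15 for intuited durations
# (invented for illustration; not an empirical value for year-long spans).

def distinguishable(a: float, b: float, k: float = 0.15) -> bool:
    """True if two magnitudes differ by more than the Weber fraction."""
    smaller, larger = sorted((a, b))
    return (larger - smaller) / smaller > k

print(distinguishable(18, 20))  # (20 - 18) / 18 ~= 0.11 < 0.15 -> False
print(distinguishable(8, 12))   # (12 - 8) / 8 = 0.50 > 0.15 -> True
```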

Comment by michael_vassar3 on An Especially Elegant Evpsych Experiment · 2009-02-14T07:55:59.000Z · LW · GW

The need for paternal resources for boys seems likely to be a motive. So is the greater ease of recognizing paternal resemblance among boys. Finally, producing boys is a weak signal of fitness by the mother.

Comment by michael_vassar3 on An Especially Elegant Evpsych Experiment · 2009-02-13T18:01:39.000Z · LW · GW

My mistake. It should only set marginal costs equal to marginal benefits for each stage in development. Cost and benefit should only be about equal at birth.

Comment by michael_vassar3 on An Especially Elegant Evpsych Experiment · 2009-02-13T15:23:59.000Z · LW · GW

I question the evidential value of the statement below. It seems to me that it argues against evolutionary fine tuning.

" Similarly, the graph that correlates to parental grief is for the future reproductive potential of a child that has survived to a given age, and not the sunk cost of raising the child which has survived to that age. (Could we get an even higher correlation if we tried to take into account the reproductive opportunity cost of raising a child of age X to independent maturity, while discarding all sunk costs to raise a child to age X?)"

Evolution should have set the cost to a given age as approximately equal to the expected benefit. It manifestly failed to do so in establishing an approximately equal gender ratio despite the larger cost of boys than girls... unless hunter-gatherers had/have very unequal gender ratios (inversely proportional to the cost of children) but modern environments lead to FAR less selective abortion of boys.
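
For reference, the standard Fisherian argument is about equalizing total investment rather than numbers: at equilibrium, the sex ratio should be inversely proportional to per-offspring cost. A minimal sketch, with the relative cost of sons invented for illustration:

```python
# Fisher's principle: at equilibrium, total investment in sons equals total
# investment in daughters, i.e. n_sons * c_son = n_daughters * c_daughter.
# ASSUMPTION: sons cost 1.25x as much as daughters (illustrative number).

def fisherian_ratio(cost_son: float, cost_daughter: float) -> float:
    """Equilibrium sons-per-daughter ratio under equal total investment."""
    return cost_daughter / cost_son

print(fisherian_ratio(1.25, 1.0))  # 0.8 sons per daughter, not 1.0
```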

When I proposed a study like this a few years ago, as the sort of thing that evolutionary psychologists should do if they were to be taken seriously, I pointed out that hypothetical grief over girls should show much lower variance than that over boys, to reflect varied reproductive expectations, which should be predictable by fairly early childhood.

I'm also bothered by the idea that our ancestors even had a concept of "3 years from now" distinct from "5 years from now". If they didn't, shouldn't their estimates be based on environmentally impacted physiological factors like age of puberty or height, which would vary between Canada and the ancestral environment?

That said, this was my exemplar when I was looking for an example of an experiment that should be done in evolutionary psychology that could boost its credibility, so I'm updating on both its credibility and on the ability of the scientific community to integrate data. Common sense does NOT, IMHO, say that parents would be more unhappy about the death of a 12-year-old than that of an 8-year-old.

Comment by michael_vassar3 on The Evolutionary-Cognitive Boundary · 2009-02-12T20:11:59.000Z · LW · GW

A few years ago, conscious and subconscious computations could gloom up my day a lot more than they can now. Subsequently I believe I came to understand people a lot better, and now I am a lot more aware of personal confusion on this subject, but in general, at the very least, I can say that conscious and subconscious ulterior motivations also only remind me more of what humans are. Broadly, they seem likely to fall under "something to protect".

Anyway, I'm really glad to see what seems to me like uncommonly effective communication between Eliezer and Robin on this point.

Comment by michael_vassar3 on (Moral) Truth in Fiction? · 2009-02-10T06:49:34.000Z · LW · GW

Eliezer: It may be worth noting that SIAI just hired a new president FROM a branch of the film industry who has some familiarity with the sort of tax laws that can make indie movies a good investment even when expected value appears negative, and that SIAI's largest donor is the producer of an excellent movie about the marketing of cigarettes.

Other than that:

I agree with Kaj. I really like Hugh's point. I don't think 3WC or Dragon Tyrant work as movies. I don't know what Eliezer's got, however, WRT stories.

Comment by michael_vassar3 on Normal Ending: Last Tears (6/8) · 2009-02-04T15:21:27.000Z · LW · GW

The trouble is that some years later Akon is not a super-happy baby-eating human but rather a hodge-podge of zillions of values. The super-happy population or resources can double in 35 hrs at current tech. Their tech advances much faster than human tech does at current population. This is their first encounter at current tech and population, but in a year they will probably encounter and mix with over 2^240 new species!
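
The arithmetic behind that figure, as a sketch (the 35-hour doubling time is from the story; everything else is straight calculation):

```python
# Number of 35-hour doublings in one year.
hours_per_year = 365.25 * 24       # ~8766 hours
doublings = hours_per_year / 35    # ~250.5 doublings
print(doublings)
print(2 ** int(doublings))         # growth factor of roughly 2^250 > 2^240
```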

More practically, severing the human starline system, in addition to being a cliche, seems very positive-values-utilitarian and very anti-CEV, in that it imposes a decision to maintain disunion, and thus the continued existence of true humans, upon all future human generations. I see the appeal, but it doesn't seem to minimize the ratio of bad human worlds to good human worlds in a big universe. Really, I can't seem to look away from the instant-doom implications of a big universe with superluminal travel and exponentially growing populations of finite starting density.

Comment by michael_vassar3 on The Super Happy People (3/8) · 2009-02-02T08:30:34.000Z · LW · GW

You know, they aren't the "Trade Federation", but I come out of this post with a distinctly East Asian impression of the Super Happy Fun People, which I think probably shouldn't happen for a truly alien race, since I would expect its variance from humanity to be orthogonal to ethnic and cultural differences. It may just be the names and superlatives, but I think that the shadows of Buddhism are having some of the effect. OTOH, that really might be a fairly strong universal attractor in which case I'm being unfair.

Also, it seems to me that part of the intention of the story is to put us in the middle of a situation where motivations are symmetric in both directions, but that doesn't really happen. The SHFP values, and generally their existence, call out to humans as plausibly a more proper expression of our values than our own existence is, though we are told that physical ugliness tends to drive us away. The human values do not have the same effect on the baby-eaters, thus the humans don't face a threat to their values analogous to that faced by the baby-eaters.

Also, a very important question regards the nature of baby-eater children. I'm not sure in what sense they can be a lot like human children but not value "good"; yet if they do value "good", where in their evolution does that value come from?

Comment by michael_vassar3 on The Baby-Eating Aliens (1/8) · 2009-01-31T09:20:19.000Z · LW · GW

Given that it's Carl, and that the nits sound pretty plausible, I'm guessing the latter. Personally though, given the LARGE number of fantasy assumptions in this story, most importantly FTL and an ascent to sentience so perfectly synchronized that neither humans nor baby-eaters expanded to fill one another's space first even given FTL, I think we have to assume the MST3K mantra is in fairly full effect.

Comment by michael_vassar3 on Value is Fragile · 2009-01-30T11:12:44.000Z · LW · GW
  • likely values for all intelligent beings and optimization processes (power, resources)

Agree.

  • likely values for creatures with roughly human-level brain power (boredom, knowledge)

Disagree. Maybe we don't mean the same thing by boredom?

  • likely values for all creatures under evolutionary competition (reproduction, survival, family/clan/tribe)

Mostly agree. Depends somewhat on definition of evolution. Some evolved organisms pursue only 1 or 2 of these but all pursue at least one.

  • likely values for creatures under evolutionary competition who cannot copy their minds (individual identity, fear of personal death)

Disagree. Genome equivalents which don't generate terminally valued individual identity in the minds they describe should outperform those that do.

  • likely values for creatures under evolutionary competition who cannot wirehead (pain, pleasure)

Disagree. Why not just direct expected utility? Pain and pleasure are easy to find but don't work nearly as well.

  • likely values for creatures with sexual reproduction (beauty, status, sex)

Define sexual. Most sexual creatures are too simple to value the first two. Most plausible posthumans aren't sexual in a traditional sense.

  • likely values for intelligent creatures with sexual reproduction (music, art, literature, humor)

Disagree.

  • likely values for intelligent creatures who cannot directly prove their beliefs (honesty, reputation, piety)

Agree, assuming that they aren't singletons. Even then it holds for sub-components.

  • values caused by idiosyncratic environmental characteristics (salt, sugar)

Agree.

  • values caused by random genetic/memetic drift and co-evolution (Mozart, Britney Spears, female breasts, devotion to specific religions)

Agree. Some caveats about Mozart.

Comment by michael_vassar3 on Investing for the Long Slump · 2009-01-24T07:55:22.000Z · LW · GW

Dr. Commonsense: I have always been highly interested in the possibility of economic collapse and have spent substantial effort to plan for it while ignoring most futuristic disasters, most of which can't practically be planned for.

Comment by michael_vassar3 on Interpersonal Entanglement · 2009-01-23T19:32:02.000Z · LW · GW

Wow is that NOT how I would characterize my side of the position that I have discussed with Frelkins. Just...WOW!

Comment by michael_vassar3 on Sympathetic Minds · 2009-01-23T19:13:17.000Z · LW · GW

Oh, and it also probably models the minds of onlookers by reference to its own mind when deciding on a shape and color for camouflage, which sounds like empathy.

Comment by michael_vassar3 on Sympathetic Minds · 2009-01-23T19:11:51.000Z · LW · GW

But if human sociopaths lack sympathy, that doesn't prevent US from having sympathy for THEM at all. Likewise, it's not at all obvious that we CAN have sympathy for aliens with completely different cognitive architecture even if they have sympathy for one another. An octopus is intelligent, but if I worry about its pain I think that I am probably purely anthropomorphizing.

Comment by michael_vassar3 on Dunbar's Function · 2009-01-01T05:54:33.000Z · LW · GW

"Even if 200 folks do the same sort of work in the same office, they don't do the exact same work, and usually that person wouldn't be there or be paid if no one thought their work made any difference."

Obviously Robin has never worked in a typical office environment. This is a GREAT example of the theoretical framework he uses to model the world being grossly wrong, and honestly of why no one should be allowed a PhD in ANY social science without having spent at least 5 years in at least 3 different communities, jobs, and industries.

"I'd like to see a study confirming that. The Internet is more addictive than television and I highly suspect it drains more life-force."

On average, Americans spend far more hours watching TV than using the Internet. Obviously Eliezer's sample set is severely biased when he makes causal statements about what's addictive. People who basically live on the internet will find that the people THEY know are more addicted to the internet than to TV.

Comment by michael_vassar3 on Dunbar's Function · 2009-01-01T05:32:12.000Z · LW · GW

"Though it's a side issue, what's even more... interesting.... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things."

Actually, Eliezer, this is national indoctrination. In Costa Rica people spend MUCH less time discussing better ways to run the world. In Kazakhstan they would look at you like you were crazy if you spent ANY time doing so. Things just are the way they are, and no one can know what that way is. People aren't even interested in knowing what is legal or illegal, etc.

Comment by michael_vassar3 on Can't Unbirth a Child · 2008-12-30T01:13:15.000Z · LW · GW

Not telling people about harmful side effects that they don't ask about wasn't considered fraud when all the food companies failed to inform the public about trans fats, as far as I can tell. At the least, their management doesn't seem to be going to jail over it. Not even the cigarette executives are generally concerned about prison time.

Comment by michael_vassar3 on Nonsentient Optimizers · 2008-12-28T15:21:25.000Z · LW · GW

Eliezer: There we totally agree, though I fear that many sub-fields of science are like philosophy in this regard. I think that these include some usual suspects like parapsychology, but many others like the examples I gave, such as the standard social science model, or other examples like the efficient market hypothesis. Sadly, I suspect that much of medicine, including some of the most important fields like cancer and AIDS research and nutrition, also falls in this category.

Robin: I'm interested in why you think we should believe that sociologists know something but not that parapsychologists know something. What is your standard? Where do efficient marketers fit in? Elliott Wave theorists?

Comment by michael_vassar3 on Nonsentient Optimizers · 2008-12-28T08:05:53.000Z · LW · GW

Eliezer: I'm profoundly unimpressed by most recent philosophy, but really, why is it that when we are talking about science you say "nobody knows what science knows", while in the analogous situation with philosophy you say "the mountains of philosophy are the foothills of AI"? If scientists debate group vs. individual selection, or the SSSM, or collapse for ten times a hundred years, that doesn't mean that the answers haven't been discovered. How does this differ from free will?

Comment by michael_vassar3 on Nonperson Predicates · 2008-12-27T05:58:15.000Z · LW · GW

Yes, thanks Psy. That makes much more sense.

Comment by michael_vassar3 on Nonperson Predicates · 2008-12-27T03:17:10.000Z · LW · GW

"With a good toolbox of nonperson predicates in hand, we could exclude all "model citizens" - all beliefs that are themselves people - from the set of hypotheses our Bayesian AI may invent to try to model its person-containing environment." After you excise a part of its hypothesis space is your AI still Bayesian?

Comment by michael_vassar3 on Devil's Offers · 2008-12-25T22:41:20.000Z · LW · GW

ShardPhoenix: Yes. This is the same principle that says that credible confidentiality within a group can sometimes improve aggregate information flow and collective epistemology.

Tim Tyler: Human goals. I definitely do NOT want alien rationalists to be able to lie, but I doubt I have much choice regarding that. Also not transhuman children. There I might have some limited choice.

Eliezer: I certainly think that rationalists should practice telling the truth more effectively as well as lying, and you admit that not lying enough makes people gullible, so it's mostly a matter of estimates of the magnitude of the relevant trade-offs here. I think that our disagreements are based on radically different models of social psychology. We disagree a great deal about the degree to which being known to sometimes lie reduces future credibility in the eyes of actual existent humans, relative to being known to sometimes mislead without lying. I believe that being known to lie increases credibility somewhat relative to a "wizard's oath", while you think it greatly decreases it. I think that I know your reasons for your belief and that you don't know mine. I'm not sure whether you think that I know your reasons, and I'm not sure whether this difference in social psychological theory is the specific belief we disagree about. I'd like confirmation on whether you agree that this is our main point of disagreement. Also possibly a poll of the audience on the social psychology fact.

Comment by michael_vassar3 on Devil's Offers · 2008-12-25T15:56:37.000Z · LW · GW

"I flinched away from that thought's implications, not so much because I feared superintelligent paternalism myself, but because I feared what other people would say of that position."

This is basically THE reason I always advocate increased comfort with lying. It seems to me that this fear, of coming to believe things they wouldn't want to say if they believed only truth, is the single largest seemingly removable barrier to people becoming rationalists at all, or, past that barrier, to becoming the best rationalists they can be.

Comment by michael_vassar3 on Imaginary Positions · 2008-12-24T04:14:24.000Z · LW · GW

Eliezer: The distinction between direct observation and deduction is pretty ambiguous for a Bayesian, is it not? Also, MANY rationalists advocate "giving people the benefit of the doubt", which for them implies "behaving as if all people are reasonable and fair". Furthermore, almost all rationalists, you for instance, advocate stating literally true beliefs to people rather than stating the beliefs that you have most reason to expect to be most informative or to produce the best results. MANY people refrain from becoming more rational out of fear that they would have to do the same, and out of justified belief that doing so would cripple their efficacy in life.

James Miller: Good call! That point about non-lawyers deserves a post of its own somewhere. I seriously wonder where they got that idea. Strangest of all, they seem to have generalized that misconception to invent the "laws of nature", which really are literal.

Paul Crowley: Both my wife and I have had brief phases when we were atheists of the type you question exists.

Comment by michael_vassar3 on Rationality Quotes 20 · 2008-12-23T17:52:21.000Z · LW · GW

I'd like to believe General Kurt, but I'm pretty sure he's a fictional character and that the line was invented for the pleasure of clever lazy people. DAMN! He's real!?! Where can I find such an employer in a position of great power today?

Comment by michael_vassar3 on Living By Your Own Strength · 2008-12-22T08:13:56.000Z · LW · GW

"Borrowing someone else's knowledge really doesn't give you anything remotely like the same power level required to discover that knowledge for yourself." Hmmm. This doesn't seem to me to be the way it works in domains of cumulatively developed competitive expertise such as chess, go, gymnastics and the like. In those domains the depth with which a technique penetrates you when you invent it is far less than that with which it penetrates your students to whom you teach it when they are children, or at least, that's my impression. Of course, if we could alternatively raise and lower our neoteny, gaining adult insights and then returning to childhood to truly learn them our minds might grow beyond what humans have yet experienced.

Comment by michael_vassar3 on Prolegomena to a Theory of Fun · 2008-12-18T09:39:31.000Z · LW · GW

Eliezer: Isn't your possible future self's disapproval one highly plausible reason for not spending lots of resources developing slowly?

Honestly, the long-recognized awfulness of classic descriptions of heaven seems like counter-evidence to the thesis of "Stumbling on Happiness". I can't be confident regarding how good I am at knowing what would make me happy, so if the evidence says that people in general are bad at knowing what will make them happy, I should expect to be bad at it; but if I know that people in general are comically awful at knowing what will make them happy, compared to myself and to most people whose judgment I respect, then that fact basically screens off the standard empirical evidence of bad judgment as it applies to me.

Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?

Komponisto: "Moral progress takes work, just like technological and intellectual progress. Indeed we should expect some correlation among these modes of progress, should we not?" Honestly, this seemed obvious before the 20th century when the Germans showed that it was possible to be plausibly the world's most scientifically advanced culture but morally backward. Our civilization still doesn't know what to make of that. We obviously see correlation, but also outliers.

Comment by michael_vassar3 on For The People Who Are Still Alive · 2008-12-15T02:19:40.000Z · LW · GW

I'm just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It's nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases towards deciding that "it all adds up to normality" involved here, even when it's not clear what 'normality' means. When one doesn't decide that, it seems that the tendency is to decide that it all adds up to some cliche, which seems VERY unlikely. I'm also not at all sure how certain we should be of a big universe; personally I don't feel very confident of it. I'd say it's the way to bet, but I don't know at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would be different if I had some particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.

Comment by michael_vassar3 on You Only Live Twice · 2008-12-13T01:26:08.000Z · LW · GW

I would really like a full poll of this blog listing how many people are signed up for cryonics. Personally, I'm not, but I would consider it if existential risk was significantly lower OR my income was >$70K and would definitely do it if both were the case AND SIAI had $15M of pledged endowment.

Comment by michael_vassar3 on Artificial Mysterious Intelligence · 2008-12-07T21:21:30.000Z · LW · GW

4 seems important to me. I wouldn't expect intelligence to come via that route, but that route does seem to put a fairly credible (e.g. I would bet 4:1 on claims that credible and expect to win in the long term), though high, soft upper bound on how long we can go on with roughly the current rate of scientific progress without achieving AI. I'd say that it suggests such a soft upper bound in the 2070s. That said, I wouldn't be at all surprised by science ceasing to advance at something like the current rate long before then, accelerating or decelerating a lot even without a singularity.

Comment by michael_vassar3 on Is That Your True Rejection? · 2008-12-06T17:38:46.000Z · LW · GW

Can't do basic derivatives? Seriously?!? I'm for kicking the troll out. His bragging about mediocre mathematical accomplishments isn't informative or entertaining to us readers.

Comment by michael_vassar3 on Hard Takeoff · 2008-12-03T05:52:23.000Z · LW · GW

Phil: It seems to me that the above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for time required for take-off, but if take-off took six months I still wouldn't expect that humans would be able to react. The AGI would probably be able to remain hidden until it was in a position to create a singleton extremely suddenly.

Aron: It's rational to plan for the most dangerous survivable situations. However, it doesn't really make sense to claim that we can build computers that are superior to ourselves but that they can't improve themselves, since making them superior to us blatantly involves improving them. That said, yes, it is possible that some other path to the singularity could produce transhuman minds that can't quickly self-improve and which we can't quickly improve, for instance drug-enhanced humans, in which case hopefully those transhumans would share our values well enough that they could solve Friendliness for us.

Comment by michael_vassar3 on Recursive Self-Improvement · 2008-12-02T06:16:35.000Z · LW · GW

Phil: Anthropic pressures should by default be expected to be spread uniformly through our evolutionary history accelerating the evolutionary and pre-evolutionary record of events leading to us rather than merely accelerating the last stretch.

Exponential inputs into computer chip manufacture seem to produce exponential returns with a doubling time significantly less than that for the inputs, implying increasing returns per unit input, at least if one measures in terms of feature number. Obviously returns are exponentially diminishing if one measures in time to finish some particular calculation. Returns will, more interestingly, be diminishing per unit labor in terms of hardware design effort per unit of depth to which NP and exponential complexity class problems can be calculated, e.g. the number of moves ahead a chess program can look. OTOH, it bizarrely appears to be the case that over a large range of chess ranks, human players seem to gain effective chess skill, as measured by chess rank, roughly linearly with training, while chess programs gain it via exponential speed-up.
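
A sketch of that asymmetry; the Elo-per-doubling and Elo-per-year figures below are my assumptions, chosen only to be in the commonly quoted ballpark for that era:

```python
import math

# ASSUMPTIONS (illustrative): an engine gains ~70 Elo per doubling of search
# speed; a human gains ~50 Elo per year of steady training over a wide range.
ELO_PER_DOUBLING = 70
ELO_PER_TRAINING_YEAR = 50

def engine_gain(speedup: float) -> float:
    """Elo gain from a multiplicative speedup: logarithmic in compute."""
    return ELO_PER_DOUBLING * math.log2(speedup)

def human_gain(years: float) -> float:
    """Elo gain from training: roughly linear over a large range of ranks."""
    return ELO_PER_TRAINING_YEAR * years

print(engine_gain(1024))  # 1024x the compute -> only ~700 Elo
print(human_gain(14))     # ~700 Elo from 14 years of training
```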

Society seems in aggregate to get constant zero returns on efforts to cure cancer, though one can't rule out exponential returns starting from zero. OTOH, this seems consistent with the general inefficacy of medicine in aggregate as shown by the RAND study, which doesn't overturn the individual impacts, as shown by FDA testing, of many individual medical procedures. Life expectancy in the US has grown linearly while GDP per capita has grown exponentially, but among nations in the modern world life expectancy clearly has a different relationship to income: not linear, not logarithmic, more plausibly asymptotic, moving towards something in the early 80s.

I'm glad that you consider the claim about turning object-level knowledge metacognitive to be the most important and controversial claim. This seems like a much more substantial and precise criticism of Eliezer's position than anything Robin has made so far. It would be very interesting to see you and Eliezer discuss the evidence for or against sufficient negative feedback mechanisms, i.e. Eliezer's "just the right law of diminishing returns" existing.

Comment by michael_vassar3 on Disappointment in the Future · 2008-12-01T23:18:45.000Z · LW · GW

As far as I can tell, Kurzweil's methodology explicitly predicts a vaguely defined "rate of change" that is currently roughly doubling every fifteen years. Within that methodology, he extracts fairly straightforward predictions, assumes instant adoption of technologies once they are feasible, ignores black swans, and largely ignores basic science in general, focusing on technology. In addition he adds a few idiosyncratic predictions that match his interests and which are not calibrated to the general rate of change being predicted. In particular, these predictions tend to accurately estimate near-future computer pattern recognition capabilities in natural domains but grossly underestimate human pattern recognition in such domains, and to be grossly overoptimistic about market enthusiasm for computer systems which underperform relative to humans in pattern recognition functions. The predictions on the above list generally seem to fit this methodology.

Comment by michael_vassar3 on Engelbart: Insufficiently Recursive · 2008-11-27T05:13:43.000Z · LW · GW

Phil: It seems clear to me that Newton and Einstein were not universally brilliant relative to ordinary smart people like you in the same sense that ordinary smart people like you are universally brilliant relative to genuinely average people, but it seems equally clear that it was not a coincidence that the same person invented calculus, optics AND universal gravitation, or general relativity, special relativity, the photoelectric effect, Brownian motion, etc. Newton and Einstein were obviously great scientists in a sense that very few other people have been. It likewise isn't chance that Tiger Woods or Michael Jordan or Kasparov dominated game after game, or that Picasso and Beethoven created many artistic styles.

That said, Eliezer doesn't have any accomplishments that strongly suggest that his abilities at your tasks 1-3 are comparable to the domain-specific abilities of the people mentioned above, and in the absence of actual accomplishments of a world-historical magnitude the odds against any one person accomplishing goals of that magnitude seem to be hundreds to one (though uncertainty regarding the difficulty of the goals and the argument itself justify a slightly higher estimate of the probabilities in question). In addition, we don't have strong arguments that tasks 1-3 are related enough to expect solutions to be highly correlated, furthering the argument that building a community is a better idea than trying to be a lone genius.

Comment by michael_vassar3 on Cascades, Cycles, Insight... · 2008-11-25T22:13:39.000Z · LW · GW

I already included a factor for 33% to 80% waste.

Comment by michael_vassar3 on ...Recursion, Magic · 2008-11-25T18:09:35.000Z · LW · GW

I have been an office worker using easily documented approximate solutions generated via algorithms to do what my grandparents' generation would have done with provably correct logical solutions to the same problems. They would have taken less time in some respects and more in others. On net, I'd guess that we weren't even 10% more productive. We generated many so-called "solutions" in the time they would have taken to generate one solution, but their solution would have been the correct solution, while our procedure would be to then choose one of our many solutions for political reasons. We didn't take less time per project. We had more secretarial staff. We accomplished the same sort of work, and we did a worse job. Maybe 1/3 as productive over all?

Obviously, the tools we used could have been used to increase productivity, but could have != did. This phenomenon, as well as very rough measures, may explain the supposed uniformity of growth rates. An AGI with shared goal content and closed loops of self-improvement would not have the same difficulties as an economy in this respect and might plausibly be expected to show significant growth rate increases from insights on the level of the mouse.

Comment by michael_vassar3 on Cascades, Cycles, Insight... · 2008-11-25T07:32:48.000Z · LW · GW

Richard Hollerith: Herds of bison on the American plains numbered around 50 million in the mid-19th century, with estimates running 60 to 100 million earlier (http://en.wikipedia.org/wiki/American_Bison). With a conservative 800 lbs/bison of meat and an 8-year replacement time, that suggests a mostly-bison diet could feed 15M-25M hunters on just the US great plains if herds were used efficiently, and more plausibly 5M-10M with more likely levels of efficiency. A single river with a salmon run can support a primitive fishing town with a few tens of thousands of people. Throw in Africa and Asia and I would expect a few tens of millions of human hunter-gatherers. This is still consistent with a 3 to 5 fold increase in population due to the shift to agriculture, which could easily be enough to cause the replacement of hunter-gatherers (though I have alternative speculations as to what may have happened in this case), and another 3 to 5 fold increase in population subsequent to the development of agriculture but prior to industrialization, due to several thousand years of improved crops, irrigation, fertilization, pest control, and organizational institutions (broadly, capital improvements). I don't know of ANY model for hunter-gatherer populations which seriously suggests that they would have grown approximately smoothly and exponentially rather than expanding to carrying capacity with some overshoot like other animals. Intuitive models for agriculture suggest a roughly linear or geometric trend ultimately leading to diminishing returns as good farmland is depleted by the dissemination of agriculture, followed by an initially slower exponential trend from the multiplicative impact of innovations such as better tilling methods or hybrid grains.
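
The implied arithmetic, as a sketch; the per-hunter meat consumption is my assumption, chosen to reproduce the 15M-25M range:

```python
# Sustainable-harvest arithmetic for a mostly-bison diet.
MEAT_PER_BISON_LBS = 800          # from the comment
REPLACEMENT_YEARS = 8             # from the comment
MEAT_PER_HUNTER_YEAR_LBS = 400    # ASSUMPTION: ~1.1 lbs/day per person

def hunters_supported(herd_size: float) -> float:
    kills_per_year = herd_size / REPLACEMENT_YEARS       # sustainable harvest
    meat_per_year = kills_per_year * MEAT_PER_BISON_LBS
    return meat_per_year / MEAT_PER_HUNTER_YEAR_LBS

print(hunters_supported(60e6))   # 15,000,000 hunters
print(hunters_supported(100e6))  # 25,000,000 hunters
```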

Comment by michael_vassar3 on Cascades, Cycles, Insight... · 2008-11-25T02:13:11.000Z · LW · GW

For native American populations, standard estimates 50 years ago, and common estimates among experts 20 years ago, were for a few million. Now the consensus is 10M-100M, probably closer to 40M. Pretty much ALL ancient world populations seem to have been traditionally underestimated by at least a factor of 2, leading to serious uncertainty as to whether total world population was ANY higher in 1650 than in 200. Some nomads subsist by raiding, others largely by herding sheep, horses, or cattle. In either case they are heavily occupied with defending their herds from other nomads, making them very martial in focus, but somewhat less so than knights are. Hunter-gatherers are also often specialized professional warriors to a significant extent, especially where nature is particularly bounteous, as on many tropical islands. They usually do some fishing, nomads some animal husbandry.

Comment by michael_vassar3 on Cascades, Cycles, Insight... · 2008-11-25T01:41:07.000Z · LW · GW

Oddly, the comparatively tiny numbers of unspecialized nomads seem to have continued to make very significant contributions (iron, riding, high quality bows, specialized arrows, saddles and stirrups, firearms and cannon, imperialism/logistics/mercantilism, maybe monotheism, lots of elements of ethics, aesthetics, and music) to human culture. This doesn't fit easily with Eliezer's focus on specialization OR Robin's focus on numbers.

Also, as far as I can tell, the population numbers that Robin is using when he says that growth rates suddenly increased by two orders of magnitude with farming are basically worthless. We don't even have order-of-magnitude agreement on the population of the Americas in 1491, which is MUCH more recent than the origin of agriculture, and we DO have large disagreement between currently credible estimates and the estimates considered credible a few decades ago. Hell, we don't know the population of Afghanistan to within 40% (credible standard sources disagree) or the GDP of Cuba to within a factor of three (ditto). I would take his estimates at least a little more seriously, though only a little, if he could convince me that even economists' estimates of GDP growth in the 20th century were credible, as I don't believe them to be. One piece of information that would make me take these estimates more seriously would be a careful study of the ways in which Amish lifestyles have changed over the last century, and data on what per-capita Amish GDP and the Amish birth rate were over the last century. Can anyone point me to some such studies? If the Amish were living very similarly 100 years ago and today, but today they supposedly have a per-capita GDP of $20K, half that of the US, while 100 years ago they had a per-capita GDP of $1K, equal to that of the US, this suggests that in at least one real and relevant sense US per-capita GDP has only doubled in a century (tripled, possibly, when one factors in some improvements in their standard of living).
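
The arithmetic behind that suggestion, as a sketch using the hypothetical numbers from the comment (not actual Amish data):

```python
# Hypothetical per-capita GDP figures from the comment, NOT real data.
us_1900, us_2000 = 1_000, 40_000        # US (today inferred from "half of US")
amish_1900, amish_2000 = 1_000, 20_000  # Amish

# If Amish lifestyles were unchanged, their nominal growth measures pure
# price-level change (an implicit deflator) rather than real growth.
implicit_deflator = amish_2000 / amish_1900   # 20x
nominal_growth = us_2000 / us_1900            # 40x
real_growth = nominal_growth / implicit_deflator
print(real_growth)  # 2.0 -> US per-capita GDP "only doubled" in a century
```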

Comment by michael_vassar3 on Life's Story Continues · 2008-11-22T03:59:23.000Z · LW · GW

Sexual selection would seem to me to be meta-level and brain related, though possibly brainless organisms have clever ways to do sexual selection. Any botanists here?

Comment by michael_vassar3 on San Jose Meetup, Sat 10/25 @ 7:30pm · 2008-10-23T23:01:03.000Z · LW · GW

I will be there.
