Comments

Comment by Lara_Foster2 on Awww, a Zebra · 2008-10-01T02:30:48.000Z · LW · GW

Ohhhh... oh so many things I could substitute for the word 'Zebra'....

Comment by Lara_Foster2 on The Magnitude of His Own Folly · 2008-09-30T22:32:42.000Z · LW · GW

Eliezer,

How are you going to be 'sure' that there is no landmine when you decide to step?

Are you going to have many 'experts' check your work before you'll trust it? Who are these experts if you are occupying the highest intellectual orbital? How will you know they're not YesMen?

Even if you can predict the full effects of your code mathematically (something I find somewhat doubtful, given that you will be creating something more intelligent than we are, and thus its actions will be by nature unpredictable to man), how can you be certain that the hardware it will run on will perform with the integrity you need it to?

If you have something that is changing itself towards 'improvement,' then won't the dynamic nature of the program leave it open to errors that might have fatal consequences? I'm thinking of a digital version of genetic mutation in which your code is the DNA...

Like, let's say the superintelligence invents some sort of "code shuffling" mechanism for itself whereby it can generate many new useful functions in an expedited evolutionary manner (like we generate antibodies), but in the process accidentally does something disastrous.

The argument 'it would be too intelligent and well-intentioned to do that' doesn't seem to cut it, because the machine will be evolving from something of below-human intelligence into something above, and it is not certain which types of intelligence it will evolve faster, or what trajectory this 'general' intelligence will take. If we knew that, then we could program the intelligence directly and would not need to make it recursively self-improving.

Comment by Lara_Foster2 on Above-Average AI Scientists · 2008-09-28T16:17:24.000Z · LW · GW

Eliezer, How do you envision the realistic consequences of mob-created AGI? Do you see it creeping up piece by piece with successive improvements until it reaches a level beyond our control,

Or do you see it as something that will explosively take over once one essential algorithm has been put into place, and that could happen any day?

If a recursively self-improving AGI were created today, using technology with the current memory storage and speed, and it had access to the internet, how much damage do you suppose it could do?

Comment by Lara_Foster2 on That Tiny Note of Discord · 2008-09-23T22:48:28.000Z · LW · GW

A far more likely scenario than missing out on the mysterious essence of rightness by indulging the collective human id, I think, is that what 'humans' want as a compiled whole is not what we'll want as individuals. Phil might be aesthetically pleased by a coherent metamorality, and distressed if the CEV determines that what most people want is puppies, sex, and crack. Remember that the percentage of the population that actually engages in debates over moral philosophy is vanishingly small, and everyone else just acts, frequently incoherently.

Comment by Lara_Foster2 on That Tiny Note of Discord · 2008-09-23T17:51:13.000Z · LW · GW

Actually, I CANNOT grasp what life being 'meaningful' well... means. Meaningful to what? To the universe? That only makes sense if you believe there is some objective judge of what state of the universe is best. And then, why should we care? Cuz we should? HUH? Meaningful to us? Well yes- we want things...Did you think that there was one thing all people wanted? Why would you think that necessary to evolution? What on earth did you think 'meaning' could be?

Comment by Lara_Foster2 on That Tiny Note of Discord · 2008-09-23T17:34:26.000Z · LW · GW

I second Valter and Ben. It's hard for me to grasp that you actually believed there was any meaning to life at all, let alone with high confidence. Any ideas on where that came from? The thought "But what if life is meaningless?" seems less like a "Tiny Note of Discord" than like a huge epiphany, in my book. I was not raised with any religion (well, some atheist-communism, but still), and so never thought there was any meaning to life to begin with. I don't think this ever bothered me 'til I was 13 and recognized the concept of determinism, but that's another issue. Still, why would someone who believed that we're all just information-copying-optimization matter think there was any meaning to begin with?

Comment by Lara_Foster2 on Say It Loud · 2008-09-20T09:28:10.000Z · LW · GW

Greindl, Ah, but could not one be overconfident in their ability to handle uncertainties? People might interpret your well-reasoned arguments about uncertain things as arrogant if you do not acknowledge the existence of unknown variables. Thus, you might say, "If there's a 70% probability of X, and a 50% probability of Y, then there's a clear 35% probability of Z," while another is thinking, "That arrogant fool hasn't thought about A, B, C, D, and E!" In truth, those factors may have been irrelevant, or so obvious that you didn't mention their impact, but all the audience heard was your definitive statement. I'm not arguing that there is a better style (you might confuse people, which would be far worse), but I do think there are ways people can be offended by it without being irrational. Claiming that they must be seems very akin to Freud claiming his opponents had Oedipal complexes.
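(For concreteness, that "clear 35%" only works on one reading, which the speaker never spells out: that Z requires both X and Y, and that the two are independent. A minimal sketch of the arithmetic under those assumptions:)

```latex
P(Z) \;=\; P(X \cap Y) \;=\; P(X)\,P(Y) \;=\; 0.70 \times 0.50 \;=\; 0.35
```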

There are also many factors that contribute to an assessment that someone is 'overconfident,' aside from their main writings.

Cool name, by the way. What are its origins?

Comment by Lara_Foster2 on The Sheer Folly of Callow Youth · 2008-09-19T03:59:13.000Z · LW · GW

Nice.

Comment by Lara_Foster2 on A Prodigy of Refutation · 2008-09-18T10:15:36.000Z · LW · GW

By George! You all need to make a Hollywood blockbuster about the Singularity and get all these national-security soccer moms screaming hellfire about regulating nanotechnology... "THE END IS NEAR!" I mean, with 'Left Behind' being so popular and all, your cause should fit right into the current milieu of paranoia in America.

I can see the preview now: children are quietly singing "My Country 'Tis of Thee" in an old-fashioned classroom; a shot zooms from out the window to show suburban homes, a man taking out the trash with a dog, a woman gardening; a newscast can be overheard intermingling with the singing, "Ha ha Mark, well, today's been a big day for science! Japanese physicist Uki Murakazi has unveiled his new, very tiny, and CUTE I might add, hydrogen-fuel-creating nanobots..." The woman looks up as the sky starts to darken. Silence. 'What if all that ever mattered to you...' A lone voice: "Mommy?" Screaming chaos, school buses get sucked into some pit in the earth, up-close shots of a hot half-naked woman running away in a towel with a bruise, crying, firemen running pell-mell, buildings collapsing, the works... "What if all of it..." Dramatic "EUNK!" sound upon a black screen... Voices fade in: "God, where are you?" "I don't think we can stop it..." "Mommy? Where are we?" "Be prepared, because this September," violins making that very high-pitched mournful noise, the words "The Singularity is Near" appear on the screen.

It practically writes itself... Then, at the high point of the movie's popularity, you begin making press releases, giving interviews, etc., declaring that you find such doomsday scenarios (though not exactly as depicted) possible and an important security risk. Could backfire and make you look insane, I suppose... But even so, there's a lot of money in Hollywood. Think about the Scientologists.

Comment by Lara_Foster2 on Raised in Technophilia · 2008-09-17T16:45:14.000Z · LW · GW

I understand that there are many ways in which nanotechnology could be dangerous, even to the point of posing extinction risks, but I do not understand why these risks seem inevitable. I would find it much more likely that humanity will invent some nanotech device that gets out of hand, poisons a water supply, kills several thousand people, and needs to be contained/quarantined, leading to massive regulation of nanotech development, than that we will make a nanotech mistake that immediately depressurizes the whole space suit, is impossible to contain, and kills us all.

A recursively improving, superintelligent AI, on the other hand, seems much more likely to fuck us over, especially if we're convinced it's acting in our best interest for the beginning of its 'life,' and problems only become obvious after it's already become far more 'intelligent' than we are.

Comment by Lara_Foster2 on My Childhood Death Spiral · 2008-09-16T17:17:01.000Z · LW · GW

Sorry, I realize this is entirely off-topic. Where should I move the discussion to? People can take it to email with me if they like (cingulate2000@gmail.com).

Hmm... musing again on the psycho-social development of children and the role of adult approval. Scott suggested that being rewarded by adults for academic development may have impeded his social development.

I wonder if there are any social psychology studies in which a child is chosen at random to be favored by an adult authority figure, and what happens to that child's interactions with peers and self-perception. I wonder if gender has been used as a variable. Anyone have any references?

Personally, I have long asserted that the main reason I put any effort into school was to gain the approval and attention of my male teachers. My mother pointed out that I loved all my male teachers and usually despised the female ones, and thus did much better under male tutelage; she even switched me into a male teacher's classroom in 4th grade after a 'personality conflict' with a female one. Now, for a woman, learning how to gain the approval of male authority figures is a skill that carries over from childhood to adulthood... The girls at the lab I worked at in Germany joked that I was 'Herr Doktors kleine Freundin' ('the Herr Doktor's little girlfriend'), because he showed a disproportionately great interest in my relatively unremarkable project and would always pop into my room to chat (an apparently aberrant behavior for this very serious man).

Now, for boys, learning how best to get the approval of female authority figures doesn't seem to translate into adulthood. Maybe there is a subtle sexual tension between young female students and their male teachers (hence crushes and the like), but not between boys and their female teachers, whom they might view more as mommies than as girlfriends. Thus, at some point boys are going to need to break away from the adult-approval schema if they are to be romantically successful and not turn into man-children. The psycho-social-sexual development of children seems very interesting to me, and I would be very grateful to be directed to some thoughtful literature and/or studies on the topic.

Comment by Lara_Foster2 on My Childhood Death Spiral · 2008-09-15T20:54:11.000Z · LW · GW

Interesting, Scott. What priorities do the intelligence-centric type have that make you unhappy? Though I might not necessarily fit into this group, I am confident that I am of above-average intelligence, and I do not believe my litany of worldly woes is attributable to that so much as to specific personality traits independent of intelligence.

Comment by Lara_Foster2 on My Childhood Death Spiral · 2008-09-15T20:31:04.000Z · LW · GW

Michael, Your question is very ill-defined. I regularly partake in a drug that lowers my IQ in exchange for other utility... It's called alcohol. If you are talking about permanent IQ reductions, I would need to have some sense of what losing one IQ point felt like before I could evaluate a trade. Is it like taking one shot? Would I even notice it missing?

Many psychotropic drugs, especially antipsychotics, 'slow' down the people that take them and thus could be associated with lowering IQ, yet many people choose to take them and lower their IQ for the utility gained by not hearing demonic voices or being allowed to leave a mental institution.

Comment by Lara_Foster2 on My Childhood Death Spiral · 2008-09-15T18:52:28.000Z · LW · GW

As long as you are sharing your development with us, I'd be curious to know why the young Eliezer valued intelligence so highly as to make it a terminal value. He must have enjoyed what he thought was 'intelligence' tremendously, seen that people who did not share in this intelligence did not share in his enjoyment, and felt sorry for them. Moreover, he must not have been jealous of any enjoyments his less intelligent brethren seemed to partake in that he did not. He probably also did some sort of correlative analysis, observing people he considered to have more and less intelligence, and determined the 'mores' were better off than the morons. What traits would he have used to establish this correlation?

Heck- not having experienced qualitatively what young Eliezer did, I can't be certain he's not right about how great it is to be that smart. But that argument can go in any direction. I was quite a busy teen myself, and I'm not so sure I'd trade my ups for a few more IQ points.

Comment by Lara_Foster2 on Optimization · 2008-09-14T19:42:40.000Z · LW · GW

It's not about resisting temptation to meddle, but about what will, in fact, maximize human utility. The AI will not care whether utility is maximized by us or by it, as long as it is maximized (unless you want to program in 'autonomy' as an axiom, but I'm sure there are other problems with that). I think there is a high probability that, given its power, the fAI will determine that it can best maximize human utility by taking away human autonomy. It might give humans the illusion of autonomy in some circumstances, and lo and behold, these people will be 'happier' than non-delusional people would be. Heck, what's to keep it from putting everyone in their own individual simulation? I was assuming some axiom that stated 'no wire-heading,' but it's very hard for me to even know what that means in a post-singularity context. I'm very skeptical of handing over control of my life to any dictatorial source of power, no matter how 'friendly' it's programmed to be. Now, if Eliezer is convinced it's a choice between his creation as dictator and someone else's destroying the universe, then it is understandable why he is working towards the best dictator he can surmise... But I would rather not have either.

Comment by Lara_Foster2 on Optimization · 2008-09-14T18:55:36.000Z · LW · GW

John Maxwell, I thought the security/adventure example was good, but the way I portrayed it might make it seem that ever-alternating IS the answer. Here goes: A man lives as a bohemian out on the street, nomadically solving his day-to-day problems of how to get food and shelter. It seems to him that he would be better off looking for a secure life, so he gets a job to make money. Working for money for a secure life is difficult and tiring, and it seems to him that he will be better off once he has the money and is secure. Now he's worked a long time, has the money, and is secure, which he finds boring in comparison both to working and to living a bohemian life with uncertainty in it. People do value uncertainty and 'authenticity' to a very high degree. Thus: being secure > working to be secure > not being secure > being secure.

Now, Eliezer would appropriately point out that the man only got trapped in this loop because he didn't actually know what would make him happiest, but assumed without having the experience. But, that being said, do we think this fellow would have been satisfied being told at the start, 'Don't bother working, son, this is better for you, trust me!'? There's no obvious reason to me why the fAI will allow people the autonomy they so desire to pursue their own mistakes, unless the final calculation of human utility determines that it wins out, and this is dubious... I'm saying that I don't care whether what in truth maximizes utility is for everyone to believe they're 19th-century god-fearing farmers, or to be on a circular magic quest whose earliest day's memory disappears each day so that it replays for eternity, or whatever simulation the fAI decides on for post-singularity humanity; I think I'd rather be free of it to fuck up my own life. Me and many others.

I guess this goes to another more important problem than human nonlinear preference- Why should we trust an AI that maximizes human utility, even if it understands what that means? Why should we, from where we sit now, like what human volition (a collection of non-linear preferences) extrapolates to, and what value do we place on our own autonomy?

Comment by Lara_Foster2 on Optimization · 2008-09-13T22:07:14.000Z · LW · GW

Eliezer, this particular point you made is of concern to me: "* When an optimization process seems to have an inconsistent preference ranking - for example, it's quite possible in evolutionary biology for allele A to beat out allele B, which beats allele C, which beats allele A - then you can't interpret the system as performing optimization as it churns through its cycles. Intelligence is efficient optimization; churning through preference cycles is stupid, unless the interim states of churning have high terminal utility."

You see, it seems quite likely to me that humans evaluate utility in such a circular way under many circumstances, and therefore aren't performing any optimizations. Ask middle school girls to rank boyfriend preference and you find Billy beats Joey, who beats Micky, who beats Billy... Now, when you ask an AI to carry out an optimization of human utility based on observing how people optimize their own utility as evidence, what do you suppose will happen? Certainly humans optimize some things, sometimes, but optimizations of some things are at odds with others. Think how some people want both security and adventure. A man might have one (say security), be happy for a time, get bored, then move on to the other and repeat the cycle. Is optimization a flux between the two states? Or the one that gives the most utility over the other? I suppose you could take an integral of utility over time and find which set of states gives maximum utility over time. How are we going to begin to define utility? "We like it! But it has to be real, no wire-heading." Now throw in the complication of different people having utility functions at odds with each other. Not everyone can be king of the world, no matter how much utility they would derive from this position. Now ask the machine to be efficient, to do it as easily as possible, so that easier solutions are favored over more difficult, 'expensive' ones.
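To make the cycle concrete (the names are just the made-up ones above, and the code is only a sketch): if the observed pairwise choices loop, then no assignment of utilities, and hence no ranking for an optimizer to maximize, can reproduce them.

```python
# Toy illustration (hypothetical names, not real data): a cyclic preference
# relation admits no consistent ranking, so no utility function reproduces it.
from itertools import permutations

preferences = [("Billy", "Joey"), ("Joey", "Micky"), ("Micky", "Billy")]

def has_consistent_ranking(prefs):
    """True if some total order puts every preferred option above the one it beats."""
    options = {name for pair in prefs for name in pair}
    for order in permutations(options):
        rank = {name: i for i, name in enumerate(order)}  # lower index = more preferred
        if all(rank[winner] < rank[loser] for winner, loser in prefs):
            return True
    return False

print(has_consistent_ranking(preferences))  # False: the cycle defeats any single ranking
```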

Even if we avoid all the pitfalls of 'misunderstanding' the initial command to 'optimize utility', what gives you reason to assume you or I or any of the small, small subsegment of the population that reads this blog is going to like what the vector sum of all human preferences, utilities, etc. coughs up?

Comment by Lara_Foster2 on Anthropomorphic Optimism · 2008-08-05T17:24:22.000Z · LW · GW

Actually, if you want a more serious answer to your question, you should contact Sydney Brenner or Marty Chalfie, who actually worked on the C. elegans projects. Brenner is very odd and very busy, but Chalfie might give you the time of day if you make him feel important and buy him lunch.... Marty is an arrogant sonuvabitch. Wouldn't give me a med school rec, because he claimed not to know anything about me other than that I was the top score in his genetics class. I was all like, "Dude! I was the one who was always asking questions!" And he said, "Yes, and then class would go overtime." Lazy-Ass-Sonuvabitch... But still a genius.

Comment by Lara_Foster2 on Anthropomorphic Optimism · 2008-08-04T23:26:10.000Z · LW · GW

Eliezer... This post terrifies me. How on earth can humans overcome this problem? Everyone is tainted. Every group is tainted. It seems almost fundamentally insurmountable... What are your reasons for working on fAI yourself rather than trying to prevent all others working on gAI from succeeding? Why could you succeed? Life extension technologies are progressing fairly well without help from anything as dangerous as an AI.

Regarding anthropomorphism of non-human creatures, I was thoroughly fascinated this morning by a fuzzy yellow caterpillar in Central Park that was progressing rapidly (2 cm/s) across a field, over, under, and around obstacles, in a seemingly straight line. After watching its pseudo-sinusoidal body undulations and the circular twisting of its moist, pink head with two tiny black-and-white mouthparts for 20 minutes, I moved it to another location, after which it changed its direction to crawl in another straight line. After projecting forward where the two lines would intersect, I determined the caterpillar was heading directly towards a large tree with multi-rounded-point leaves about 15 feet in the distance. I moved the caterpillar on a leaf (not easy, the thing moved very quickly, and I had to keep rotating the leaf) to behind the tree, and sure enough, it turned around, crawled up the tree, into a crevice, and burrowed in with the full drilling force of its furry little body.

Now, from a human point of view, words like 'determined,' 'deliberate,' and 'goal-seeking' might creep in, especially when it would rear its bulbous head in a circle and change directions, yet I doubt the caterpillar had any of these mental constructs. It was, like the moth it must turn into, probably sensing some chemoattractant from the tree... Maybe it's time for it to make a chrysalis inside the tree and become a moth or butterfly, and some program just kicks in once it's gotten strong and fat enough, as this thing clearly was. But 'OF COURSE,' I thought. C. elegans, a much simpler creature, will change its direction and navigate simple obstacles when an edible, proteinaceous chemoattractant is put in its proximity. The caterpillar is just more advanced at what it does. We know the location and connections of every one of C. elegans' 302 neurons... Why can't we make a device that will do the same thing yet? Too much anthropomorphism?
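To be clear about the kind of 'program' I have in mind, here is only a toy sketch, nothing like the worm's or the caterpillar's actual circuitry, with every parameter invented: an agent that samples a chemoattractant slightly to its left and right and keeps turning toward the stronger smell will trace exactly this kind of 'determined'-looking beeline toward the source.

```python
# Toy chemotaxis-style agent (invented parameters; not a model of C. elegans or
# the caterpillar): it compares the attractant concentration slightly to its left
# and right and keeps turning toward the stronger smell, plus a little wobble.
import math
import random

SOURCE = (15.0, 0.0)  # the hypothetical "tree"

def attractant(x, y):
    """Concentration falls off smoothly with distance from the source."""
    return 1.0 / (1.0 + math.hypot(x - SOURCE[0], y - SOURCE[1]))

def step(x, y, heading, step_len=0.5, probe=0.3, wobble=0.2):
    left = attractant(x + step_len * math.cos(heading + probe),
                      y + step_len * math.sin(heading + probe))
    right = attractant(x + step_len * math.cos(heading - probe),
                       y + step_len * math.sin(heading - probe))
    heading += probe if left > right else -probe    # turn toward the stronger smell
    heading += random.uniform(-wobble, wobble)      # pseudo-sinusoidal wandering
    return x + step_len * math.cos(heading), y + step_len * math.sin(heading), heading

x, y, heading = 0.0, 5.0, 0.0
for _ in range(300):
    x, y, heading = step(x, y, heading)
print("distance to source:", round(math.hypot(x - SOURCE[0], y - SOURCE[1]), 2))
```

The gap the question points at is everything this sketch leaves out: real sensors, real muscles, and real obstacles.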