We recently gave our 9-year-old a 'kid license' attached to a Tile to take around when he runs errands. He's had no trouble with anything other than one store refusing to sell him cookies on the basis that they didn't think his mother would approve. He really loves the independence. I've given him a cell phone in a fanny pack to take when going someplace new, but he doesn't want to take the cell phone most of the time. Of course we can't do this with our 6-year-old, both because he is clearly not mature enough, and even if he were, I'm pretty sure strangers would object.
To what extent are people burning themselves out, vs. using what they're doing as an excuse not to perform effortful and sometimes unpleasant mental and physical hygiene? My understanding is that this is a crowd prone to depression anyway, and failing to engage in self-care is pretty common. I.e., if these people were working on something else, would we expect them to burn long-term resources anyway?
I notice that, as someone without domain-specific knowledge of this area, Paul's article seems to fill my model of a reality-shaped hole better than Eliezer's. This may just be an artifact of the specific use of language and detail that Paul provides and Eliezer does not, and Eliezer may have specific things he could say about all of these points and is choosing not to. Paul's response at least makes it clear to me that people like me, without domain-specific knowledge, are prone to being pulled psychologically in various directions by the use of language, and should be very careful about making important life decisions based on concerns of AI safety without first educating themselves much further on the topic, especially since giving attention and funding to the issue has at least the capacity to cause harm.
It was interesting to re-read this article 2 years later. It reminds me that I am generally working with a unique subset of the population, which is not fully representative of human psychology. That being said, I believe this article is misleading in important ways, which should be clarified. The article focused too much on class, and it is hard to see it as anything but classist. While I wrote an addendum at the end, this really should have been incorporated into the entire article and not tacked on, as the conclusions one would reach without it are quite different. I believe I didn’t incorporate it largely because I am not a strong writer and didn’t know how to do this in an elegant way without losing my other points.
This article needed some discussion of internal vs. external locus of control. My current clients have a strong sense that they have control over their lives. This leads to attempting more actions to change their situations, but also to internalizing their failures. The population at the Medicaid clinic feel that they have, and in fact do have, less control over their lives. This is an important thing to point out. I had one older minority client who described a lifetime of being buffeted about by government policy changes and oppressive interference in her life over the last several decades. She suffered many tragedies that were not within her control.
I believe I also over-simplified the psychology of both my current and former patients for clarity. The majority of clients who come to me do want medication. I do see people struggling with past traumas and current situations which are out of their control. I definitely saw people at the Medicaid clinic suffering from identity crises. Money was not absent from the concerns of people on government assistance. I still think the spirit of my comparisons is accurate, but the oversimplifications are dangerous, and I’m not certain that the greater point was worth the confusion.
I still believe the conclusion that struggling to hold onto identity leads to great human suffering. It is not a simple problem to solve, nor necessarily one that should be fully solved. I will leave that to others to debate. I do spend a lot of time working with people on examining their expectations of themselves, where they come from, and whether holding themselves to these standards leads to anything positive in anyone’s life.
"That was when close friends started delicately checking to see if I was “okay.” "
I would suggest seeing some of your close friends in person and asking whether they think you are OK, instead of 'reassuring' them that you are fine. Your 'explanation' is not at all reassuring on this front. The whole incident seems out of character from what I remember of you, and I'm guessing your friends are right to be concerned. I recommend not writing more in public forums for the time being.
There is a lot of arguing in the comments about what the 'tradeoffs' are for individuals in the scientific community and whether making those tradeoffs is reasonable. I think what's key in the quoted article is that fraudsters are trading so much for so little. They are actively obscuring and destroying scientific progress while contributing to the norm of obscuring and destroying scientific progress, potentially preventing cures for diseases, time- and life-saving technology, etc. This is REALLY BAD. And for what? A few dollars and an ego trip? An 11% instead of a 7% chance at a few dollars and an ego trip? I do not think it is unreasonable to judge this behavior as reprehensible, regardless of whether it is the 'norm'.
Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value. If 100% of scam artists steal people's money, I don't forgive a scam artist for stealing less money than the average scam artist. They are not 'making things better' by in theory reducing the average amount of money stolen per scam artist. They are still stealing money. DO NOT BECOME A SCAM ARTIST IN THE FIRST PLACE. If academia is all a scam, then that is very sad, but it does not make it ok for people to join in the scam and shrug it off as a norm.
And being fraudulent in science is SO MUCH WORSE than just being an ordinary scam artist who steals money. It's more like being a scam artist who takes money in exchange for poisoning the water and land with radioactive waste. No, it's not ok because other people are doing it.
That might be true, but it is very hard for people to make reasoned changes when they are deeply depressed. Depression has real cognitive effects, which people frequently complain about. Depressive reasoning often looks like I SUCK I SUCK I SUCK I SUCK Everything is my fault and I screwed it all up, and I can't fix it, and I can't fix it because I SUCK I SUCK I SUCK. Ok - let me focus for a second - If I change this... I SUCK I SUCK ... Ok, if I change this thing then... why can't I think? Oh right I SUCK I SUCK I SUCK... etc, etc. Getting people out of that pattern is very helpful.
Number 1 is the politically correct thing to say, but it was not what I actually observed when working with Medicaid patients. People complained far less about poverty than I (who come from a middle-class upbringing) would have anticipated. People adjust to what they are used to. It's the middle class, with its constant fear of downward mobility, that really suffers from monetary issues. There were some interesting interactions between race and class, which are hard to express without the internet eating me. Being Hispanic and poor is very different from being black and poor, which is different still from being white and poor. And I'll leave it at that.
2 just sounds correct. America has reached the apotheosis of individualism. We can't all be the star of the show, and it hurts when you find out you are not.
The latter. Most 'analysts' today do not consider themselves primarily Freudian.
My new practice is only 3 months old, so no one with full-on DID yet, though some people have these rando dissociations (which are likely trauma-related). I had one patient in my former position with DID. Very interesting case, but HIPAA, lol.
I just read this article from The Atlantic (I wrote the comment first), but I think it eloquently highlights most of these points.
https://www.theatlantic.com/amp/article/379329/
A few thoughts on why people dislike the idea of greatly extending human life:
1) The most obvious reason: people don't understand the difference between lifespan and healthspan. They see many old, enfeebled, miserable people in old folks homes and conclude, 'My God, what has science wrought!' They are at present not wrong.
2) They don't believe it could work. People as they get older start recognizing and coming to terms with mortality. It suffuses everything about their lives, preparations, the way they talk. The second half of a modern human life is mostly shoring things up for the next generation. Death is horrible. It needs to be made ok one way or another. If you dangle transhumanism in front of them, but they don't believe it has any possibility of happening, then you are undoing years of mental preparation for the inevitable for no reason. People have mental protections against this kind of thing.
3) On some level people don't want their parents to live forever. Modernly extended lifespans have already greatly extended the time parents exert influence over their children. Our childhoods essentially never end.
4) On some level people don't want to live. That might be hard for you to understand, but many people are very miserable, even if they are not explicitly suicidal. The idea of a complete life, when they can say their work is done, can be very appealing. The idea of it never ending can sound like hell.
Sick babies are often too weak to suck much, and this is true even if the baby isn't sick enough to require a NICU stay. If a baby has to be in the hospital, it can be difficult logistically to breastfeed them, and of course if women aren't dedicated to it, they won't maintain milk. My son was required to stay in the NICU for 4 days (for ridiculous reasons; he was fine). I was only allowed to stay in the hospital 2 nights, and I was exhausted and needed to sleep.
I ended up allowing them to feed him formula since my milk was slow to come in - no one strongly encouraged me to stay there and breastfeed in the night. I got a 5 minute tutorial on how to use a pump, which was briefly suggested. It's great that some hospitals are encouraging breastfeeding and providing donor milk to premature babies. I don't know how universal this is. I know other women who have complained of similar problems I faced.
Ozy - sibling studies have a major problem - they don't take into account the reasons why a mother would breast-feed one child but not the other. If you ask moms about this, they always have an answer, and it is usually something like, 'Josh was very sleepy and just wouldn't suck. We had to give him a bottle to get him to eat at all.' My mother basically gives this exact story for why I was breast-fed and my brother was not.
And my brother had developmental problems and I did not. I don't think this is because he was fed formula.
Remember, weaker/sicker babies are more likely to get formula, and sicker/older/more tired/more depressed mothers are more likely to formula-feed. In order to breastfeed, everything has to go right. One thing goes wrong, and it's on to formula.
It's a mess. In general, poor people are more likely to use formula, since they have to go back to work and don't have the same level of indoctrination (oops, education) about the benefits of breastfeeding, and breastfeeding is a lot of work. Then there's the issue that sicker babies often have to be formula-fed, because they have weaker sucking reflexes and/or require special high-calorie formula. Multiples are more likely to be formula-fed, for obvious reasons. Babies of older mothers are more likely to be formula-fed, since older moms produce less milk, etc., etc. More obsessive and more highly educated mothers are more likely to breastfeed, for obvious reasons.

In general, my conclusion from the (noncomprehensive) reading I've done is that breastfeeding clearly reduces early respiratory and GI infections, as well as colic and GI distress (while breastfeeding), but has unclear impact on long-term psychological, physical, and cognitive health. Overall those things look better in breastfed babies, but attempts to control for other factors often negate the effects, leading to yo-yoing articles about the supremacy of breast milk depending on the fashion of the day. However, going back to theory, it would be very strange if breast milk weren't better, given humanity's past experience with making food substitutes. That being said, the healthiest baby is a fed baby, and the impact of formula vs. breastfeeding is unlikely to outweigh many other factors in a person's life, such as milk production, the need to earn money to support the family, and the mental health of the mother (depression in mothers is very highly correlated with poor long-term outcomes).
There is another interpretation, which is that strong property rights *are* moral. I am currently 80% through Atlas Shrugged, which is a very strong thesis for this interpretation. Basically, when you take away property rights, whether to the material kind, the product of one's labor, or the spiritual kind, you give power to those who are best at taking. Ayn Rand presents the results of this kind of thinking, the actions that follow, and the society it creates. I strongly recommend you read it.
Excellent post with good food for thought. I'm interested to hear more about how people on this blog avoid superstitions.
I agree with Ray: the chapter was too long and spent too many words saying what it was trying to say. I read it in several sittings for lack of an adequate time block, couldn't find my place, and ended up losing time rereading portions and feeling generally frustrated. I think the impact would be improved by cutting it down by a considerable margin.
I agree that this is an important issue we may have to deal with. I think it will be important to separate doing things for the community from doing things for individual members of the community. For example, encouraging people to bring food to a potluck or volunteer at Solstice is different from setting expectations that you help someone with their webpage for work or help out members of the community who are facing financial difficulties. I've been surprised by how many times I've had to explain that expecting the community to financially support people is terrible on every level and should be actively discouraged as a community activity. This is not an organized enough community, with high enough bars to membership, to do things like collections. I do worry that people will hear a vague 'Hufflepuff!' call to arms and assume it means doing stuff for everyone else whenever you feasibly can. It shouldn't. It should be a message about what you do in the context of the public community space. What you choose to do for individuals is your own affair.
Eliezer, Komponisto,
I understand the anxiety issues of, 'Do I have what it takes to accomplish this...'
I don't understand why the existence of someone else who can would damage Eliezer's ego. I can observe that many other people's sense of self is violated if they find out that someone else is better at something they thought they were the best at-- the football champion at HS losing their position at college, etc. However, in order for this to occur, the person needs to 1) in fact misjudge their relative superiority to others, and 2) value the superiority for its own sake.
Now, Eliezer might take the discovery of a better rationalist/fAI designer as proof that he misjudged his relative superiority, but unless he thinks his superiority is itself valuable, he should not be bothered by it. His own actual intelligence, after all, will not have changed; only the state of his knowledge of others' intelligence relative to his own.
Eliezer must enjoy thinking he is superior for the loss of this status to bother his 'ego'.
Though I suppose one could argue that this is a natural human quality, and Eliezer would need to be superhuman or lying to say otherwise.
Again, I have difficulty understanding why so many people place such a high value on 'intelligence' for its own sake, as opposed to as a means to an end. If Eliezer is worried that he does not have enough mathematical intelligence to save the universe from someone else's misdesigned AI, then this is indeed a problem for him, but only because the universe will not be saved. If someone else saves the universe instead, Eliezer should not mind, and should go back to writing sci-fi novels. Why should Eliezer's ego cry at the thought of being upstaged? He should want that to happen if he's such an altruist.
I don't really give a damn where my 'intelligence' falls on some scale, so long as I have enough of it to accomplish those things I find satisfying and important TO ME. And if I don't, well, hopefully I have enough savvy to get others who do to help me out of a difficult situation. Hopefully Eliezer can get the help he needs with fAI (if such help even exists and such a problem is solvable).
Also, to those who care about intelligence for its own sake: does the absolute horsepower matter to you, or only your abilities relative to others? I.e., would you be satisfied if you were considered the smartest person in the world by whatever scale, or would that still not be enough because you were not omniscient?
Scott: "You have a separate source of self-worth, and it may be too late that you realize that source isn't enough."
Interesting theory of why intelligence might have a negative correlation with interpersonal skills, though it seems like a 'just so story' to me, and I would want more evidence. Here are some alternatives: 'Intelligent children find the games and small-talk of others their own age boring and thus do not engage with them.' 'Stupid children do not understand what intelligent children are trying to tell them or play with them, and thus ignore or shun them.' In both of these circumstances, the solution is to socialize intelligent children with each other or with an older group in general. I had a horrible time in grade school, but I socialized with older children and adults and I turned out alright (well, I think so). I suppose without any socialization, a child will not learn how to interpret facial expressions, intonations, and general emotional posturing of others. I'm not certain that this can't be learned with some effort later in life, though it might not come as naturally. Still, it would seem worth the effort.
I'm uncertain whether Eliezer-1995 was equating intelligence with the ability to self-optimize for utility (i.e., intelligence = optimization power) or equating intelligence with utility (intelligence is great in and of itself). I would agree with Crowly that intelligence is just one of many factors influencing the utility an individual gets from his/her existence. There are also multiple kinds of intelligence. Someone with very high interpersonal intelligence and many deep relationships but abysmal math skills may not want to trade places with the 200-IQ math whiz who's never had a girlfriend and is still trying to compute the ultimate 'girlfriend-maximizing utility equation'. Just saying...
Anyone want to provide links to studies correlating IQ, ability, and intelligences in various areas with life-satisfaction? I'd hypothesize that people with slightly above average math/verbal IQs and very above average interpersonal skills probably rank highest on life-satisfaction scales.
Unless, of course, Eliezer-1995 didn't think utility could really be measured by life satisfaction, and by his methods of utility calculation, intelligence beats out all else. I'd be interested in knowing what utility meant to him in this circumstance.
Oh, come on, Eliezer, of course you thought of it. ;) However, it might not have been something that bothered you, as in: A) You didn't believe actually having autonomy mattered as long as people feel like they do (i.e., a Matrix/Nexus situation). I have heard this argued. Would it matter to you if you found out your whole life was a simulation? Some say no. I say yes. A matter of taste, perhaps?
B) OR you find it self-evident that 'real' autonomy would be extrapolated by the AI as something essential to human happiness, such that an intelligence observing people and maximizing our utility wouldn't need to be told 'allow autonomy.' This I would disagree with.
C) OR you recognize that this is a problem with a non-obvious solution for an AI, and thus intend to deal with it somehow in code ahead of time, before starting the volition-extrapolating AI. Your response indicates you feel this way. However, I am concerned even beyond setting an axiomatic function for 'allow autonomy' in a program. There are probably an infinite number of ways that an AI can carry out its stated function while somehow 'gaming' our own system, leading to suboptimal or outright repugnant results (e.g., everyone being trapped in a permanent quest; maybe the AI avoids the problem of 'it has to be real' by actually creating a magic ring that needs to be thrown into a volcano every 6 years or so). You don't need me telling you that! Maximizing utility while deluding us about reality is only one. It seems impossible that we could axiomatically safeguard against all possibilities. Asimov was a pretty smart cookie, and his 'Three Laws' are certainly not sufficient. 'Eliezer's million lines of code' might cover a much larger range of AI failures, but how could you ever be sure? The whole project just seems insanely dangerous. Or are you going to address safety concerns in another post in this series?
Ah! I just thought of a great scenario! The Real God Delusion. Talk about wireheading...
So the fAI has succeeded: it actually understands human psychology and our deepest desires, and it actually wants to maximize our positive feelings in a balanced way, etc. It has studied humans intently and determines that the best way to make all humans feel best is to create a system of God and heaven: humans are prone to religiosity, it gives them a deep sense of meaning, etc. So our friendly neighborhood AI reads all religious texts, observes all rituals, and determines the best type of god(s) and heaven(s) (it might make more than one for different people)... So the fAI creates God, gives us divine tasks that we feel very proud to accomplish when we can (religiosity), gives us rules to balance our conflicting internal biological desires, and uploads us after death into some fashion of paradise where we can feel eternal love...
Hey, just saying that even IF the fAI really understood human psychology, that doesn't mean we will like its answer... We might NOT like what most other people do.
Cocaine-
I was completely awed by how just totally-mind-blowing-amazing this stuff was the one and only time I tried it. Now, I knew the euphoric-orgasmic state I was in had been induced by a drug, and this knowledge made me classify it as 'not real happiness,' but if someone had secretly dosed me after saving a life or having sex, I probably would have interpreted it as happiness proper. Sex and love make people happy in a very similar way to cocaine, and don't seem to have the same negative effects, but this is probably a dosage issue. There are sex/porn addicts whose metabolism or brain chemistry might be off. I'm sure that if you carefully monitored the pharmacokinetics of cocaine in a system, you could maximize cocaine utility by optimizing dosage and frequency such that you didn't sensitize to it or burn out endogenous serotonin.
Would it be wrong for humans to maximize drug-induced euphoria? Then why not for an AI?
What about rewarding with cocaine after accomplishing desired goals? Another million in the fAI fund... AHHH... Maybe Eliezer should become a sugar-daddy to his cronies to get more funds out of them. (Do this secretly so they think the high is natural and not that they can buy it on the street for $30)
The main problem as I see it is that humans DON'T KNOW what they want. How can you ask a superintelligence to help you accomplish something if you don't know what it is? The programmers want it to tell them what they want. And then they get mad when it turns up the morphine drip...
Maybe another way to think about it is we want the superintelligence to think like a human and share human goals, but be smarter and take them to the next level through extrapolation.
But how do we even know that human goals are indefinitely extrapolatable? Maybe taking human algorithms to an extreme DOES lead to everyone being wire-headed in one way or another. If you say, 'I can't just feel good without doing anything... here are the goals that make me feel good, and it CAN'T be a simulation,' then maybe the superintelligence will just set up a series of scenarios in which people can live out their fantasies for real... but they will still all be staged fantasies.
Eliezer,
Excuse my entrance into this discussion so late (I have been away), but I am wondering if you have answered the following questions in previous posts, and if so, which ones.
1) Why do you believe a superintelligence will be necessary for uploading?
2) Why do you believe there could ever possibly be a safe superintelligence of any sort? The more I read about the difficulties of friendly AI, the more hopeless the problem seems, especially considering the large amount of human thought and collaboration that will be necessary. You yourself said there are no non-technical solutions, but I can't imagine you could possibly believe in a magic bullet that some individual super-genius will have a eureka epiphany about by himself in his basement. And this won't be like the cosmology conference to determine how the universe began, where everyone's testosterone-riddled egos battled for a victory of no consequence. It won't even be a Manhattan Project, with nuclear weapons tests in barren wastelands... Basically, if we're not right the first time, we're fucked. And how do you expect you'll get that many minds to be that certain, such that they'll agree it's worth making and starting the... the... whateverthefuck it ends up being? Or do you think it'll just take one maverick with a cult of loving followers to get it right?
3) But really, why don't you just focus all your efforts on preventing any superintelligence from being created? Do you really believe it'll come down to us (the righteously unbiased) versus them (the thoughtlessly fame-hungry computer scientists)? If so, who are they? Who are we for that matter?
4) If fAI will be that great, why should this problem be dealt with immediately by flesh-and-blood, flawed humans instead of by improved, uploaded copies in the future?
Ok, Eliezer, you are just a human and therefore prone to anger and reaction to said anger, but you, in particular, have a professional responsibility not to come across as excluding people who disagree with you from the discussion and presenting yourself as the final destination of the proverbial buck. We are all in this together. I have only met you in person once, have only had a handful of conversations about you with people who actually know you, and have only been reading this blog for a few months, and yet I get a distinct impression that you have some sort of narcissistic Hero-God complex. I mean, what's with dressing up in a robe and presenting yourself as the keeper of clandestine knowledge? Now, whether or not you actually feel this way, it is something you project and should endeavor not to, so that people (like sophiesdad) take your work more seriously. "Pyramid Head," "Pirate King," and "Emperor with no clothes" are NOT terms of endearment, and this might seem like a ridiculous admonition coming from a person who has self-presented as a 'pretentious slut,' but I'm trying to be provocative, not leaderly. YOU are asking all of these people to trust YOUR MIND with the dangers of fAI and the fate of the world, and to give you money for it! Sorry to hold you to such high standards, but if you present with a personality disorder any competent psychologist can identify, then this will be very hard for you... unless of course you want to go the "I'm the Messiah, abandon all and follow me!" route, set up the Church of Eliezer, and start a religious movement with which to get funding... Might work, but it will be hard to recruit serious scientists to work with you under those circumstances...
Oh... I should have read these comments to the end, somehow missed what you said to sophiesdad.
Eliezer... I am very disappointed. This is quite sad.
I should also add:
6) Where do you place the odds of you/your institute creating an unfriendly AI in an attempt to create a friendly one?
7) Do you have any external validation (ie, unassociated with your institute and not currently worshiping you) for this estimate, or does it come exclusively from calculations you made?
Eliezer, I have a few practical questions for you. If you don't want to answer them in this thread, that's fine, but I am curious:
1) Do you believe humans have a chance of achieving uploading without the use of a strong AI? If so, where do you place the odds?
2) Do you believe that uploaded human minds might be capable of improving themselves/increasing their own intelligence within the framework of human preference? If so, where do you place the odds?
3) Do you believe that increased-intelligence-uploaded humans might be able to create an fAI with more success than us meat-men? If so, where do you place the odds?
4) Where do you place the odds of you/your institute creating an fAI faster than 1-3 occurring?
5) Where do you place the odds of someone else creating an unfriendly AI faster than 1-3 occurring?
Thank you!!!
Eliezer- Have you written anything fictional or otherwise about how you envision an ideal post-fAI or post-singularity world? Care to share?
Michael, ah yes, that makes a lot of sense. Of course, if the worm's only got 302 neurons, it's not going to have hundreds of neurotransmitters. That being said, it might have quite a few different receptor subtypes and synaptic modification mechanisms. Even so... it would seem theoretically feasible to me for someone to hook up electrodes to one neuron at a time and catalog not only the location and connections of each neuron, but also what the output of each synapse is and what the resulting PSPs are during normal C. elegans behaviors... Now that's something I should tell Brenner about, given his penchant for megalomaniacal information-gathering projects (he did the C. elegans genome, a catalog of each cell in its body throughout its embryonic development, and its neural connections).
Doug, too much stuff to put into an actual calculation, but I doubt we have complete knowledge, given how little we understand epigenetics (RNAi, 22U-RNAs, and other microRNAs), synaptic transcription, cytoskeletal transport, microglial roles, the enteric nervous system, native neuro-regeneration, and, lo and behold, neurotransmitters themselves. The 3rd edition of Kandel I was taught out of as an undergrad said nothing of orexins, histamine, the other roles of melatonin beyond the pineal gland, or the functions of the multifarious set of cannabinoid receptors, yet we now know (a short 2 years later) that all of these transmitters seem to play critical roles. Now, not being an elegans gal, I don't know if it has much simpler neurotransmission than we do. I would venture to guess it is simpler, but not extraordinarily so, and probably much more affected by simple epigenetic mechanisms like RNA interference. In C. elegans, interfering RNA messages are rapidly amplified, quickly shutting off the target gene in all of its cells (mammals have not been observed to amplify). Now, here's the kicker: it gets into the germ cells too! So offspring will also produce interfering RNAs and shut off genes! Now, due to amplification error, the interfering RNAs are eventually lost if the worms are bred long enough, but Craig Mello is now exploring the hypothesis that amplified interfering RNAs can eventually permanently disable (viral) DNA that's been incorporated into the C. elegans genome, either by more permanent epigenetic modification (methylation or coiling) or by splicing it out... Sorry for the MoBio lecture, but DUDE! This stuff is supercool!!!
Kennaway, I meant: why can't we make something that does what C. elegans does, in the same way that C. elegans does it, using its neural information? Clearly our knowledge must be incomplete in some respect. If we could do that, then imitating not only the size but the programming of the caterpillar would be much more feasible. At least three complex programs are obvious: 1) Crawl: coordinated and changeable sinusoidal motion seems a great way to move, yet the MIT 'caterpillar' is quite laughable in comparison to the dexterity of the real thing. 2) Seek: this involves a circular motion of the head, sensing some chemical signal, and changing direction accordingly. 3) Navigate: the caterpillar is skillfully able to go over, under, and around objects, correcting its path to the original without doing the weird head-twirling thing, indicating that aside from chemoattraction it has some sense of directional orientation, which it must have or else its motion would be a random walk with correction and not a direct march. I wonder how much of these behaviors operate independently of the brain.
All of this reminds me of something I read in Robert Sapolsky's book "Monkeyluv" (a really fluffy pop-sci book about baboon society, though Sapolsky himself in person is quite amazing), about how human populations under different living conditions developed almost predictable (at least in hindsight) explicative religions. People living in rainforests, with many different creatures struggling at cross-purposes to survive, developed polytheistic religions in which gods routinely fought and destroyed each other for their own goals. Desert dwellers (Semites) saw only one great expanse of land, one horizon, one sky, one ecosystem, and so invented monotheism.
I wonder what god(s) we 21st century American rationalists will invent...
I am pleased that you mention that (at present) the human brain is still the best predictor of other humans' behavior, even if we don't understand why (yet). I've always known my intuitions to be very good predictors of what people will do and feel, though it's always been a struggle trying to formalize what I already know into some useful model that could be applied by anyone...
However, I was once told my greatest strength in understanding human behavior was not my intuitions, but my ability to evaluate intuitions as one piece of evidence among others, not assuming they are tyrannically correct (which they are certainly not), and thus improving accuracy... Maybe instead of throwing the baby out with the bathwater on human intuitions of empathy, we should practice some sort of semi-statistical evaluation of how certain we feel about a conclusion and update it for other factors. Do you do this already, Eliezer? How?
You can also replace, "Because you enjoy looking" with "Because you have to look" for many high-power jobs and positions. Dominance-Submission relationships in business and politics are very important to outcome. I would guess that a lot of bad decisions are made because of the necessity of this dance at high levels... how to crush it out? Not easy. Human nature, it would seem.
Why not, I can't help myself: Caledonian = Thersites, Eliezer = Agamemnon
Thersites only clamour'd in the throng,
Loquacious, loud, and turbulent of tongue:
Awed by no shame, by no respect controll'd,
In scandal busy, in reproaches bold:
With witty malice studious to defame,
Scorn all his joy, and laughter all his aim:
But chief he gloried with licentious style
To lash the great, and monarchs to revile.
...
Sharp was his voice; which in the shrillest tone,
Thus with injurious taunts attack'd the throne.
"Whate'er our master craves submit we must,
Plagued with his pride, or punish'd for his lust.
Oh women of Achaia; men no more!
Hence let us fly, and let him waste his store
In loves and pleasures on the Phrygian shore.
We may be wanted on some busy day,
When Hector comes: so great Achilles may:
From him he forced the prize we jointly gave,
From him, the fierce, the fearless, and the brave:
And durst he, as he ought, resent that wrong,
This mighty tyrant were no tyrant long."
...
"Peace, factious monster, born to vex the state,
With wrangling talents form'd for foul debate:
Curb that impetuous tongue, nor rashly vain,
And singly mad, asperse the sovereign reign.
Have we not known thee, slave! of all our host,
The man who acts the least, upbraids the most?
...
Expel the council where our princes meet,
And send thee scourged and howling through the fleet."
TGGP:
I have great sympathy with this position. An incorrectly formatted AI is one of the biggest fears of the Singularity Institute, mainly because there are so many more ways to be way wrong than even slightly right about it... It might be that the task of making an actually friendly AI is just too difficult for anyone, and our efforts should be spent in preventing anyone from creating a generally intelligent AI, in the meantime trying to figure out, with our imperfect human brains and the crude tools at our disposal, how to make uploads ourselves or create other physical means of life extension... No idea. The particulars are out of my area of expertise. I might keep your brain from dying a little longer, though... (stroke research)
Just another point as to why important, megalomaniacal types like Eliezer need to have their motives checked:
Frank Vertosick, in his book "When the Air Hits Your Brain: Tales from Neurosurgery," about a profession I am seriously considering, describes what becomes of nearly all people taking such power over life and death:
"He was the master... the 'ptototypical surgical psychopath' - someone who could render a patient quadriplegic in the morning, play golf in the afternoon, and spend the evening fretting about that terrible slice off the seventh tee. At the time this seemed terrible, but I soon learned he was no different than any other experierienced neurosurgeon in this regard... I would need to learn not to cry at funerals."
I had an interesting conversation with a fellow traveler about morality in which he pointed out that 'upright' citizens will commit the worst atrocities in the name of a greater good that they think they understand... Maybe some absolute checks are required on actions, especially those of people who might actually have a lot of power over the outcome of the future. What becomes of the group led by the man who is simultaneously Achilles and Agamemnon?
Unknown: "But it is quite impossible that the complicated calculation in Eliezer's brain should be exactly the same as the one in any of us: and so by our standards, Eliezer's morality is immoral. And this opinion is subjectively objective, i.e. his morality is immoral and would be even if all of us disagreed. So we are all morally obliged to prevent him from inflicting his immoral AI on us"
Well, I would agree with this point if I thought what Eliezer was going to inflict upon us was so out of line with what I want that we would be better off without it. Since, you know, NOT dying doesn't seem like such a bad thing to me, I'm not going to complain, when he's one of the only people on Earth actually trying to make that happen...
On the other hand, Eliezer, you are going to have to answer to millions if not billions of people protesting your view of morality, especially this facet of it (the not-dying thing), so yeah, learn to be diplomatic. You are NOT allowed to fuck this up for the rest of us!
Caledonian: [THIS WOULD GET DELETED]The reason you are unable to make such arguments is that you're unwilling to do any of the rudimentary tasks necessary to do so. You've accomplished nothing but making up names for ill-defined ideas and then acting as though you'd made a breakthrough. On the off-chance that you actually want to contribute something meaningful to the future of humanity, I suggest you take a good, hard look at your other motivations - and the gap between what you've actually accomplished and your espoused goals.[/THIS WOULD GET DELETED]
This is NOT that bad a point! Don't delete that! If we're considering cognitive biases, then it makes sense to consider the biases of our beloved leader, who might be so clever as to convince all of us to walk directly off a cliff... Who is the pirate king at the helm of our ship? 'What are your motivations?' is a good question indeed, though not one I expect answered in one post or right away.
Also, I found reading this post very satisfying, but that might just be because it's brain candy confirming the justness of what I already believed... It's good to be skeptical, especially of things that say, 'You can feel it's right! And it's OK that there's no external validation...' Tell that to the Nazis who thought Jews were not part of the human species...
Just because I can't resist, a poem about human failing, the judgment of others we deem weaker than ourselves, and the desire to 'do better.' Can we?
"No Second Troy" WB Yeats, 1916 WHY should I blame her that she filled my days With misery, or that she would of late Have taught to ignorant men most violent ways, Or hurled the little streets upon the great, Had they but courage equal to desire? 5 What could have made her peaceful with a mind That nobleness made simple as a fire, With beauty like a tightened bow, a kind That is not natural in an age like this, Being high and solitary and most stern? 10 Why, what could she have done being what she is? Was there another Troy for her to burn?
I second Behemouth and Nick: what do we do in the mindspace where individuals' feelings of right and wrong disagree? What if some people think retarded children absolutely should NOT be pulled off the track? Also, what about the pastrami-sandwich dilemma? What of those who would kill 1 million unknown people, with no consequence to themselves, for a delicious sandwich?
But generally, I loved the post. You should write another post on 'Adding Up to Normality.'
Oh, back on topic: I think the exploration of metamorality will need to include people who are only softly sociopathic but not 'brain damaged'. Here is an example: an ex-boyfriend of mine claimed to have an 'empathy switch,' by which he had complete and total empathy with the few chosen people he cared about, and zero empathy with everyone else. To him, killing millions of people halfway around the world in order to get a super-tasty toasted pastrami and cheese sandwich would be a no-brainer. Kill the motherfuckers! He didn't know them before, he won't know them afterwards, what difference does it make? The sandwich, on the other hand... well, that will be a fond memory indeed! I think many people actually live by this moral code, but are simply too dishonest with themselves to admit it. What says metamorality to that???
I think Caledonian should stay. Even if he does misrepresent Eliezer, he offers an opportunity to correct misconceptions that others might have regarding what Eliezer was trying to say... And on some rare occasions, he is right...
Oh yay! Do tell! I'm very interested to hear your metamoral philosophy... Before you started posting on morality, I thought the topic a general waste of time, since people would always be arguing at cross-purposes, and in the end it was all just atoms anyway... Your explanation of metamorality helps to explain why all these moral philosophies are in disagreement, yet converge on many of the same conclusions, like 'killing for its own sake is wrong' (which people do decide to do: two students from my high school riddled a pizza delivery boy with bullets just to watch him die). I am wondering what universals can be pulled out of this...
When I first started reading the post, I had Keith's reaction, 'Get down to the point!', but I'm now very interested to see where Eliezer is going with this...
Obert: "I rather expect so. I don't think we're all entirely past our childhoods. In some ways the human species itself strikes me as being a sort of toddler in the 'No!' stage."
This in a way explains some of my own questions about my behavior... The first and only time I tried cocaine, I was shocked by just how much I loved it (I had thought it would be like smoking a joint and drinking three cups of coffee; fuck, was I wrong)... And I thought to myself, "This is way too much fun. I don't care if you didn't crash, DON'T do it again." I think I realized that reactions that are beyond my control really are beyond my control, and thus should not be tampered with in my 'sophomoric' state.
TGGP: While JFK's assassination may or may not (LBJ???) have been good for progressivism, RFK's was certainly NOT. Nixon won, and then we had drug schedules, and Watergate, and all that bullshit...
Here's a counterfactual to consider: What would the world have been like if Bobby Kennedy had been president instead of Nixon?
Still think it would be a good thing for progressivism if Obama is shot and McCain becomes prez?