Comments
Concerning the many comments already here, since I am not sure which one I should reply to:
Never an argument to warrant violence? Or OK against superintelligences but not against humans? Do not suppose there's a sharp line between the human and superintelligence cases. To me, some of you may well be akin to superintelligences that I cannot outwit. Nor is there an absolute line between argument and verbal abuse, when I think about it. Also, I think I have some examples of dangerous/disgusting arguments - nothing exists, you should die, your consciousness doesn't exist ...
As for whether the rightness of answering an argument with violence has to do with the physical power of the opponent -
Should you let the moral value of initiating violence depend on whether or not you win?
I say yes, but my idea of moral value is more self-centered. My morals consider others, but I think it's moral to prefer to survive - not least because if your morals don't prescribe survival, you will not be here. It's not as if we help others out of morals and survive out of baser urges. That dichotomy is common in present morals (think bioethics - if you don't accept death, you refuse to "open up to higher goals"/live for others), but it's nonetheless sick. It's right and moral to want to survive! And thus I decide that while arguments should be free when you are concerned only with truth and rationality, in many real situations more than truth is at stake, and you worry for your well-being. Even if you want to keep things at the rational, intellectual level, your opponent may not oblige. Then it would be moral to use violence - though not to risk your own life over small arguments - and not because of the value of truth or the laws of rationality at all.
Though even then, I wish to become more intelligent beforehand, in preparation for such a sad event, so that I may be strong and whole enough to hear the offending argument without being hurt and need not use violence - or can at least safely ponder their point after the violence.
I wonder. I grew up with experience in multiple systems of meditation, and found a way that works for me. Without electrodes or drugs or Nobel Prizes, I can choose to feel happy and relaxed and whatever else. When I think about it, meditation can feel more pleasing and satisfying than any other experience in my life. Yet (luckily?) I feel no compulsion to do it in place of other things, or to advocate it. This is not because of willpower. While it lasts I like and want it, as if some purpose is being fulfilled, and when it's over I cannot recall the feeling faithfully enough to desire it more than I desire chocolate. Also, I cannot reproduce the feeling very reliably - it occurs only some of the time I try, cannot be had too frequently (no idea why), and cannot be consciously prolonged. So I consider it a positive addition to my life, especially helpful in yanking me out of episodes of gloom.
This of course raises multiple questions. There's such a thing as ambient mood, as opposed to momentary pleasure, and if a person is pissed off too often to concentrate productively, would improving the mood be the right choice, especially if it has an upper bound and doesn't lead to the person madly pressing the button indefinitely? Hell, if there's any way to make people happier with no other change and without causing crippling obsession - maybe there's such a quirk in the brain (with wanting and pleasure detached from each other) that meditation exploits safely, maybe the button is in responsible hands - would it be acceptable? Though the meditation sometimes makes me wonder whether the mind can directly change the world (I changed my emotional reality, and it felt real). Is impaired rationality an acceptable price then?
I already believe this. And I feel the closest thing I have to a "meaning/purpose" is the very drive to live, which would be pointless in the eyes of an unsympathetic alien. But I don't feel depressed, just not too happy about this. And the pointlessness and horror of my existence and experience is itself interesting, the realization fun, just as those who love math for its own sake, as opposed to other concerns, can be darkly intrigued by Gödel's incompleteness proof instead of losing heart. Frustrated, yes. But I would not commit suicide or wirehead myself before I understand the correct basis and full implications of this futility, especially this fear of futility. And that understanding may well be impossible, so my curiosity circuit will always fire and defend me from any anti-life proof indefinitely. Could this line of reasoning help someone with depression? It's how I fought mine off.
If the above is nonsense to you, I admit I am just double-feeling. The drive, the fun and the futility are all real to me, corresponding to the wanting, liking and learning aspects of human motivation, and who am I to decide which is humanity's real purpose? I do not think my opinion is the truth, or that it should be adopted. But in case there's danger of suicide from lack of a point, let it be remembered that two of the three aspects can support living, whereas if you forget that the apparent futility is itself deep and worthy of interest, you easily end up one against two for survival. Or is it that I am less smart and much more introspective than the average rationalist here, and thus put too little weight on the logical recursive futility and too much on the introspective curiosity, ending up with this attitude, while others survive by being truly blind/dismissive about the end of recursive justification and believing in a real and absolute boundary between motivational and evolutionary justifications, as Eliezer seems to do?
May I suggest that Plato's words carry a different and non-obvious sensibility, one that has little to do with outside vs. inside, if we take the original text and its circumstances into account? In that age, people had fewer reasons to believe in the physicality of the individual. They saw dead people remain dead, but that's pretty much all of it. And they had more motives to believe in the soul, because there was no scientific transhumanism, and religion was their only hope of personal immortality. So the introspecting self may feel that just as it has slept and awakened and remained itself, so it is possible to survive an absence of consciousness, and death must be temporary. Not because the two superficially looked and sounded alike, but because of the common factor of lacking consciousness, which seemed much harder to distinguish then than it does now in light of neuroscience. It's not outside vs. inside; it might as well have been Phaecrinon thinking "the two pairs are structurally different" and Plato thinking "they are equivalent and symmetrical".
Plato dismissed his share of strawman opponents, and I have no problem with adapting his words, but I feel confused by this post's focus on the principles of thinking when this more obvious reaction to the analogy comes to mind. How about choosing a purer example next time?
IMO a fun project (for those like me who enjoy this but are clearly not smart enough to be part of a Singularity development team): create an object-based environment with rule-based reproductive agents and customizable, explicit world rules (as in a computer game, not as in physics) and let them evolve - something like the sketch below. Maybe users across the world could add new magical artifacts and watch the creatures fail hilariously...
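To make that concrete, here is a minimal toy sketch in Python of the kind of thing I mean: agents with a mutable rule "genome" acting in a world whose rules are plain, editable data. Every name and number in it (the action set, the energy bookkeeping, the mutation scheme, the mid-run "artifact") is an illustrative assumption, not a worked-out design.

```python
import random

ACTIONS = ["eat", "move", "reproduce"]

class Agent:
    def __init__(self, rules=None):
        # Genome: probability weights over actions, mutated on reproduction.
        self.rules = rules or {a: random.random() for a in ACTIONS}
        self.energy = 10.0

    def act(self):
        actions, weights = zip(*self.rules.items())
        return random.choices(actions, weights=weights)[0]

    def spawn(self):
        # Child inherits slightly mutated weights; parent pays an energy cost.
        child = {a: max(0.01, w + random.gauss(0, 0.1))
                 for a, w in self.rules.items()}
        self.energy /= 2
        return Agent(child)

def step(agents, world_rules):
    survivors = []
    for agent in agents:
        action = agent.act()
        # World rules are explicit data: each action maps to an energy effect.
        agent.energy += world_rules.get(action, -1.0)
        if action == "reproduce" and agent.energy > 15:
            survivors.append(agent.spawn())
        if agent.energy > 0:
            survivors.append(agent)
    return survivors

if __name__ == "__main__":
    # Game-like world rules, editable while the simulation runs.
    world_rules = {"eat": 2.0, "move": -0.5, "reproduce": -5.0}
    agents = [Agent() for _ in range(20)]
    for generation in range(50):
        agents = step(agents, world_rules)
        if generation == 25:
            # A user-added "magical artifact": eating suddenly poisons,
            # and we watch the eat-heavy lineages fail hilariously.
            world_rules["eat"] = -3.0
    print(len(agents), "agents survived")
```

Because the world rules are just a dictionary, exposing them to users (the "artifacts") is trivial, and the interesting part becomes watching which rule-genomes happen to survive each arbitrary change.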
On a more related note, the post sounds ominous for any hope of general AI. There may be no clear distinction between a protein computer and mere protein, between learning and blindly acting. If we and our desires are in such a position, isn't any AI we make also indirectly blind? As I understand it, Eliezer seems to think that we can (or had better) bootstrap, both for intelligence/computation and for morality. For him, this bootstrapping - this understanding/theory of the generality of our own intelligence (as step one of the bootstrapping) - seems to be the Central Truth of Life (tm). Maybe he's right, but for me, with less insight into intelligence, that's not self-evident. And he hasn't explained this crucial point clearly anywhere, only advocated it. Then again, it's not as if many people could be seed AI programmers even armed with that Central Truth. (But who knows.)
self-control is inherently aggravating
Thanks for pointing it out (by science!) - a lot of people who wish to perfect their personality should know this. I didn't consciously know it, but developed a mental discipline of acknowledging anger in my tormented teens anyway. People who hold intuitive ideals about the "perfection of humanity/personality" should learn neuroscience, lest they suppose that the things they ought to do (control themselves) must bring happiness. Otherwise they may be confused when they experience that anger, and conclude either that they are born sinful/defective, or that selfish/negative emotions must be done away with to achieve perfection.
I really want to say: it's OK to feel hurt when you don't get what you want, even if that's because you did what you should/must. Those who try to make humans completely ethical/self-controlled are turning us into something not human.
But what did I just say? Surely that's an excuse for being impulsive? I want what I want, and I don't want to be called unethical for that. And that humanness part - even if doing whatever you end up deciding by weighing "liking, wanting and learning" was functional in the past, in meatspace, couldn't it be utterly disastrous once we have access to Singularity-level power? Shouldn't we sever the lower impulses and go with ethics instead? (But is it, um, fun?) I don't know what I should feel... I hope whoever comes up with FAI first is not going to program it to value ethics strictly above fun...
Ideas similar to Eliezer's can occur to people without proper physics, experimental spirit, or understanding of the brain (though I am not sure I can say "without rationality", as the Art may not be what I think it is). I mean, some Indian spiritual traditions have explicitly stated that although you feel and believe that you have a real self, although you feel your existence as an entity strongly, this is not acceptable evidence for the existence of your "self". This is their key to selflessness. In other words, you may feel your existence outside of physics or whatever reality you believe in, and yet you should not trust this feeling. That sounds rational to me, but it is complicated by the fact that their tenets call for the abandonment of self, so the conclusion was not drawn on fair ground. Also, the follow-up question of life choices and meaning is dissolved by obligations that mainly consist of living an intellectual life as prescribed. I do not recommend reading this kind of material; it can hurt. I'm just making the point that even without a scientific method, even while believing your attitudes control your afterlife, you can start having these meta-thoughts and actually be somewhat right. Maybe this fact is relevant to, um, AI theory?