Comments

Comment by AlphaOmega on Open Thread, November 16–30, 2012 · 2012-11-19T21:51:58.972Z · LW · GW

I think what Viliam_Bur is trying to say in a rather complicated fashion is simply this: humans are tribal animals. Tribalism is perhaps the single biggest mind-killer, as you have just illustrated.

Am I correct in assuming that you identify yourself with the tribe called "Jews"? As someone with no tribal dog in this particular fight, I can't get too worked up about it, though if the conflict involved, say, Irish people, I'm sure I would feel rather differently. This is just a reality that we should all acknowledge: our attempts to "overcome bias" with respect to tribalism are largely self-delusion, and perhaps even irrational.

Comment by AlphaOmega on What does the world look like, the day before FAI efforts succeed? · 2012-11-19T02:20:39.743Z · LW · GW

On the contrary, adversarial questioners are often highly productive. I've already incited one of the best comments you've seen on LessWrong, haven't I?

Yes, my cognition is significantly motivated along these lines. Doesn't Hitler deserve some of the credit for the rapid development of computers and nuclear bombs? Perhaps I or someone like me will play a similar role in the development of AI?

Comment by AlphaOmega on What does the world look like, the day before FAI efforts succeed? · 2012-11-17T01:37:18.127Z · LW · GW

Just a gut reaction, but this whole scenario sounds preposterous. Do you guys seriously believe that you can create something as complex as a superhuman AI, and prove that it is completely safe before turning it on? Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos, quantum mechanics, etc.? And I would also like to know who these "good guys" are, and what will prevent them from becoming "bad guys" when they wield this much power. This all sounds incredibly naive and lacking in common sense!

Comment by AlphaOmega on FAI, FIA, and singularity politics · 2012-11-08T22:38:00.965Z · LW · GW

I can conceive of a social and technological order where transhuman power exists, but you may or may not want to live in it. This is a world where there are god-like entities doing wondrous things, and humanity lives in a state of awe and worship at what they have created. To like living in this world would require that you adopt a spirit of religious submission, perhaps not so different from modern-day monotheists who bow five times a day to their god. This may be the best post-Singularity order we can hope for.

Comment by AlphaOmega on against "AI risk" · 2012-04-12T17:48:26.038Z · LW · GW

I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemonstrated by our technology, I would further assert that unfriendly AI is pure science fiction which should be far down the list of our concerns compared to more clear and present dangers.

Comment by AlphaOmega on Robots ate my job [links] · 2012-04-11T19:08:48.985Z · LW · GW

OK, but if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren't smarter humans generally more benevolent toward animals than stupider humans or other animals are? Why shouldn't this hold for AIs? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species decides that the world would be better off without us? From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have. So I don't really understand the problem here. I guess you could say that I have faith in the universe's capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case, and I don't see any reason to think that this process will suddenly reverse itself.

Comment by AlphaOmega on Robots ate my job [links] · 2012-04-10T18:30:22.700Z · LW · GW

It seems to me that humanity is faced with an epochal choice in this century, whether to:

a) Obsolete ourselves by submitting fully to the machine superorganism/superintelligence and embracing our posthuman destiny, or

b) Reject the radical implications of technological progress and return to various theocratic and traditionalist forms of civilization which place strict limits on technology and consider all forms of change undesirable (see the 3000-year reign of the Pharaohs, or the million-year reign of the hunter-gatherers).

Is there a plausible third option? Can we really muddle along for much longer with this strange mix of religious “man is created in the image of God”, secular humanist “man is the measure of all things” and transhumanist “man is a bridge between animal and Superman” ideologies? And why do even Singularitarians insist that there must be a happy ending for Homo sapiens, when all the scientific evidence suggests otherwise? I see nothing wrong with obsoleting humanity and replacing it with vastly superior “mind children.” As far as I’m concerned this should be our civilization’s summum bonum, a rational and worthy replacement for bankrupt religious and secular humanist ideals. Robots taking human jobs is another step toward bringing the curtain down permanently on the dead-end primate dramas, so it’s good news that should be celebrated!

Comment by AlphaOmega on A Primer On Risks From AI · 2012-03-25T02:03:45.901Z · LW · GW

Excellent! Perhaps you can be BetaOmega ;)

As far as history goes, there are some chapters that might be worth repeating. For example, what possessed the ancient Egyptians, suddenly and straight out of the Stone Age, to build huge monuments of great precision which still awe us after 4,500 years? Some crazy pharaonic cult made that possible, and even though it seems totally irrational, I’m glad they did it! So maybe this is what we need today: a cult of the Machine which gives our technology an ideology, and even a religion. Otherwise it all seems rather pointless, doesn't it?

Please don't be too put off by my web site by the way -- I was in a comic book supervillain phase when I created it which I’m finally getting over. Nor am I here to troll LessWrong; I think what has been created here is brilliant, and though it’s often accused of being cultish, maybe the real problem is that it isn’t cultish enough!

Comment by AlphaOmega on A Primer On Risks From AI · 2012-03-24T23:26:37.885Z · LW · GW

I think I understand how you feel. Here is what I propose, for people who find these vistas of reality terrifying, and who may feel a need to approach them from a more "spiritual" (for lack of a better word) perspective: a true Singularity cult. By that I mean no more pretending that you are a mere rationalist, coolly calculating the probabilities of heaven and hell, but rather embracing the quasi-religious nature of this subject matter in all its glory. I have a pretty clear vision of such a cult, its ideology, activities and structure, and would like to know if anyone here is interested in such a thing. What I have in mind would be rather extreme and terrifying to the profane, and hence is better discussed in a more cult-like environment. For example, from the point of view of the "Cult of Omega", the extinction of humanity is an all but inevitable and desirable outcome, as we march ineluctably toward the Singularity. I believe that if it were done well, such a cult could become the nexus of a powerful new religion which could totally remake the world.

Comment by AlphaOmega on Q&A with Jürgen Schmidhuber on risks from AI · 2011-06-15T21:42:51.371Z · LW · GW

How useful are these surveys of "experts", given how wrong they've been over the years? If you had conducted a survey of experts in 1960 asking questions like this, you probably would've gotten a peak probability for human-level AI around 1980, and all kinds of scary scenarios happening long before now. Experts seem to be some of the most biased and overly optimistic people around with respect to AI (and many other technologies). You'd probably get more accurate predictions by taking a survey of taxi drivers!

Comment by AlphaOmega on Survey: Risks from AI · 2011-06-14T00:45:15.148Z · LW · GW

Since I'm in a skeptical and contrarian mood today...

  1. Never. AI is Cargo Cultism. Intelligence requires "secret sauce" that our machines can't replicate.
  2. 0
  3. 0
  4. Friendly AI research deserves no support whatsoever
  5. AI risks outweigh nothing because 0 is not greater than any non-negative real number
  6. The only important milestone is the day when people realize AI is an impossible and/or insane goal and stop trying to achieve it.

Comment by AlphaOmega on [SEQ RERUN] One Life Against the World · 2011-06-12T21:16:33.716Z · LW · GW

That site is obsolete. I create new sites every few months to reflect my current coordinates within the Multiverse of ideas. I am in the process of launching new "Multiversalism" memes which you can find at seanstrange.blogspot.com

There is no Universal truth system. In the language of cardinal numbers, Nihilism = 0, Universalism = 1, and Multiversalism = infinity.

Comment by AlphaOmega on [SEQ RERUN] One Life Against the World · 2011-06-12T20:36:54.807Z · LW · GW

Trolls serve an important function in the memetic ecology. We are the antibodies against outbreaks of ideological insanity and terminal groupthink. I've developed an entire philosophy of trolling, and am obligated to engage in it as a kind of personal jihad.

Comment by AlphaOmega on [SEQ RERUN] One Life Against the World · 2011-06-12T20:17:14.901Z · LW · GW

Please spare me your "optimizations on my behalf" and refrain from telling me what I should talk about. Your language gives you away -- it's the same old grandiose totalitarian mindset in a new guise. Are these criticisms well-formed enough for you?

Comment by AlphaOmega on [SEQ RERUN] One Life Against the World · 2011-06-12T18:14:28.872Z · LW · GW

Basing your ethics on far-fetched notions like "intergalactic civilizations" and the "Singularity" is the purest example of science fiction delusion. I would characterize this movement as an exercise in collective delusion -- very much like any other religion. Which I don't have a problem with, as long as you don't take your delusions too seriously and start thinking you have a holy mission to save the universe from the heathens. Unfortunately, that is exactly the sense I get from Mr. Yudkowsky and some of his more fanatical followers...

Comment by AlphaOmega on [SEQ RERUN] One Life Against the World · 2011-06-12T08:38:48.096Z · LW · GW

Maximizing human life is an absurd idea in general. Does Yudkowsky not believe in Malthusian limits, and does he really take seriously such fantastic notions as "intergalactic civilizations"? Maybe he should rethink his ethics to incorporate more realistic notions like limits and sustainability and fewer tropes from science fiction.

Step back and take a look at yourselves and the state of the world, folks. Monkeys at keyboards fantasizing about colonizing other galaxies and their computers going FOOM! while their civilization crumbles make for quite an amusing spectacle!

Comment by AlphaOmega on Meanings of Mathematical Truths · 2011-06-06T07:37:14.229Z · LW · GW

“Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by pure logical means are completely empty of reality.” –Albert Einstein

I don't agree with Al here, but it's a nice quote I wanted to share.

Comment by AlphaOmega on What would you do with infinite willpower? · 2011-06-04T04:15:47.681Z · LW · GW

Have you been doing anything in particular to cause your willpower to increase? What are some effective techniques for increasing willpower?

Comment by AlphaOmega on [SEQ RERUN] Your Rationality is My Business · 2011-06-02T19:17:16.238Z · LW · GW

So rationality is desirable because it gets rid of crap? What if crap makes me happy? What if my entire culture is based on crap?

Is there a paper here that addresses this meta-question of "why be rational?" I can think of many reasons, but mostly it seems to come down to this: rationality confers power. Bertrand Russell called Western thought "power thought," which seems pretty accurate. Rationality is good because it wins, and eliminates the competition. I haven't had any conversations with any pre-rational Stone Agers lately, though they were once common in my neighborhood. Did they lose a philosophical debate with rationalists, or were they simply exterminated?

So it seems to me that the lasting appeal of irrationality, spirituality, religion, etc. is that, for some strange reason, people aren't quite comfortable worshipping this Terminator-like god of reason.

EDIT: I take it from the response that people here don't want to discuss this meta-question? Is rationality perhaps a sacred cow which plays a role similar to God in other faiths?

Comment by AlphaOmega on [SEQ RERUN] Your Rationality is My Business · 2011-06-02T05:24:03.008Z · LW · GW

To summarize: people should use rationality to decide arguments instead of a) killing each other or b) forbidding all judgment about who's right or wrong.

Just for the sake of argument: a) what is rationality? and b) what is so sacred about it that it should be the arbiter of all truth? I.e. why exactly do you worship the god of reason?

Comment by AlphaOmega on Remaining human · 2011-05-31T17:56:18.201Z · LW · GW

My thinking of late is that if you embrace rationality as your raison d'être, you almost inevitably conclude that human beings must be exterminated. This extermination is sometimes given a progressive spin by calling it "transhumanism" or "the Singularity", but that doesn't fundamentally change its nature.

To dismiss so many aspects of our humanity as "biases" is to dismiss humanity itself. The genius of irrationality is that it doesn't get lost in these genocidal cul-de-sacs nor in the strange loops of Gödelian undecidability in trying to derive a value system from first principles (I have no idea what this sentence means). Civilizations based on the irrational revelations of prophets have proven themselves to be more successful and appealing over a longer period of time than any rationalist society to date. As we speak, the vast majority of humans being born are not adopting, and never will adopt, a rational belief system in place of religion. Rationalists are quite literally a dying breed. This leads me to conclude that the rationalist optimism of post-Enlightenment civilization was a historical accident and a brief bubble, and that we'll be returning to our primordial state of irrationality going forward.

It's fun to fantasize about transcending the human condition via science and technology, but I'm skeptical in the extreme that such a thing will happen -- at least in a way that is not repugnant to most current value systems.

Comment by AlphaOmega on Defeating Mundane Holocausts With Robots · 2011-05-31T02:50:28.149Z · LW · GW

I'm not sure either, it was a general rant against hyper-rational utilitarian thinking. My utility function can't be described by statistics or logic; it involves purely irrational concepts such as "spirituality", "aesthetics", "humor", "creativity", "mysticism", etc. These are the values I care about, and I see nothing in your calculations that takes them into account. So I am rejecting the entire project of LessWrong on these grounds. Have a nice day.

Comment by AlphaOmega on Defeating Mundane Holocausts With Robots · 2011-05-31T02:01:43.884Z · LW · GW

If your goal is to maximize human life, maybe you should start by outlawing abortion and birth control worldwide. Personally I think reducing human values to these utilitarian calculations is absurd, nihilistic and grotesque. What I want is a life worth living, people worth living with and a culture worth living in -- quality, not quantity. The reason irrational things like religion, magical thinking and art will never go away, and why I find the ideology of this rationality cult rather repulsive, is because human beings are not rational robots and never will be. Trying to maximize happiness via rationality is a fool's quest! The happiest people I know are totally irrational! If maximal rationality is your goal, you need to exterminate humanity and replace them with machines!

(Of course it may be that I am off my meds today, but I don't think that invalidates my points.)

Comment by AlphaOmega on What bothers you about Less Wrong? · 2011-05-19T20:27:53.975Z · LW · GW

That's how it strikes me also. To me Yudkowsky has most of the traits of a megalomaniacal supervillain, but I don't hold that against him. I will give LessWrong this much credit: they still allow me to post here, unlike Anissimov who simply banned me outright from his blog.

Comment by AlphaOmega on What bothers you about Less Wrong? · 2011-05-19T19:57:51.654Z · LW · GW

What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.

I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they've written papers about "CEV" and are therefore the good guys? He who can save the world can control it. I don't trust anyone with this kind of power, and I am deeply suspicious of any small group of intelligent people that is seeking power in this way.

Am I paranoid? Absolutely. I know too much about recent human history and the horrific failures of other grandiose intellectual projects to be anything else. Call me crazy, but I firmly believe that building intelligent machines is all about power, and that everything else (i.e. most of this site) is conversation.

Comment by AlphaOmega on The greater a technology’s complexity, the more slowly it improves? · 2011-05-18T18:04:24.110Z · LW · GW

You raise a good point here, which relates to my question: Is Good's "intelligence explosion" a mathematically well-defined idea, or is it just a vague hypothesis that sounds plausible? When we are talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to these "lather, rinse, repeat, FOOM, the universe will soon end" conclusions as many people seem to like to do. Is there a mathematical description of this recursive process which takes into account its own complexity, or are these just very vague and overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?

Comment by AlphaOmega on How some algorithms feel from inside · 2011-05-17T21:44:18.832Z · LW · GW

Consciousness is how the algorithms of the universal simulation feel from the inside. We are a self-aware simulation program.

Comment by AlphaOmega on SIAI - An Examination · 2011-05-15T21:25:20.228Z · LW · GW

This is a good discussion. I see this whole issue as a power struggle, and I don’t consider the Singularity Institute to be more benevolent than anyone else just because Eliezer Yudkowsky has written a paper about “CEV” (whatever that is -- I kept falling asleep when I tried to read it, and couldn’t make heads or tails of it in any case).

The megalomania of the SIAI crowd in claiming that they are the world-savers would worry me if I thought they might actually pull something off. For the sake of my peace of mind, I have formed an organization which is pursuing an AI world domination agenda of our own. At some point we might even write a paper explaining why our approach is the only ethically defensible means to save humanity from extermination. My working hypothesis is that AGI will be similar to nuclear weapons, in that it will be the culmination of a global power struggle (which has already started). Crazy old world, isn’t it?

Comment by AlphaOmega on People who want to save the world · 2011-05-15T03:18:23.002Z · LW · GW

Well I just want to rule the world. To want to abstractly "save the world" seems rather absurd, particularly when it's not clear that the world needs saving. I suspect that the "I want to save the world" impulse is really the "I want to rule the world" impulse in disguise, and I prefer to be up front about my motives...

Comment by AlphaOmega on Entropy and social groups · 2011-04-27T17:29:59.599Z · LW · GW

Entropy may always be increasing in the universe, but I would argue that so is something else, which is not well-understood scientifically but may be called complexity, life or intelligence. Intelligence seems to be the one "force" capable of overcoming entropy, and since it's growing exponentially I conclude that it will overwhelm entropy and produce something quite different in our region of spacetime in short order -- i.e. a "singularity". If, as I believe, we are a transitional species and a means to a universal singularity, why would I want a system which restricts changes to those which are comprehensible or related to us?