Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book

post by Kaj_Sotala · 2010-10-07T06:55:56.543Z · LW · GW · Legacy · 42 comments

From a review of Greg Egan's new book, Zendegi:

Egan has always had difficulty in portraying characters whose views he disagrees with. They always end up seeming like puppets or strawmen, pure mouthpieces for a viewpoint. And this causes trouble in another strand of Zendegi, which is a mildly satirical look at transhumanism. Now you can satirize by nastiness, or by mockery, but Egan is too nice for the former, and not accurate enough at mimicry for the latter. It ends up being a bit feeble, and the targets are not likely to be much hurt.

Who are the targets of Egan’s satire? Well, here’s one of them, appealing to Nasim to upload him:

“I’m Nate Caplan.” He offered her his hand, and she shook it. In response to her sustained look of puzzlement he added, “My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it. [...] when you’ve got the bugs ironed out, I want to be the first. When you start recording full synaptic details and scanning whole brains in high resolution—” [...] “You can always reach me through my blog,” he panted. “Overpowering Falsehood dot com, the number one site for rational thinking about the future—”

(We’re supposed, I think, to contrast Caplan’s goal of personal survival with Martin’s goal of bringing up his son.)

“Overpowering Falsehood dot com” is transparently overcomingbias.com, a blog set up by Robin Hanson of the Future of Humanity Institute and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence. Which is ironic, because Yudkowsky is Egan’s biggest fan: “Permutation City [...] is simply the best science-fiction book ever written” and his thoughts on transhumanism were strongly influenced by Egan: “Diaspora [...] affected my entire train of thought about the Singularity.”

Another transhumanist group is the “Benign Superintelligence Bootstrap Project”—the name references Yudkowsky’s idea of “Friendly AI” and the description references Yudkowsky’s argument that recursive self-optimization could rapidly propel an AI to superintelligence. From Zendegi:

“Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor, which will be armed with superior versions of all the skills the original possessed. The successor will produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. Once this process is set in motion, within weeks—perhaps within hours—a being of truly God-like powers will emerge.”

Egan portrays the Bootstrap Project as a (possibly self-deluding, it’s not clear) confidence trick. The Project persuades a billionaire to donate his fortune to them in the hope that the “being of truly God-like powers” will grant him immortality come the Singularity. He dies disappointed and the Project “turn[s] five billion dollars into nothing but padded salaries and empty verbiage”.

(Original pointer via Kobayashi; Risto Saarelma found the review. I thought this was worthy of a separate thread.)

42 comments

Comments sorted by top scores.

comment by Risto_Saarelma · 2010-10-07T10:37:30.661Z · LW(p) · GW(p)

I find it amusing that SIAI has now ended up as the bad guys in Egan's new novel, given that Egan's earlier novels like Permutation City and Diaspora were probably a not entirely insignificant inspiration for the founders when SIAI was being set up ten years ago.

So Egan ended up (partially) making his own monster.

Replies from: XiXiDu
comment by XiXiDu · 2010-10-07T12:13:37.957Z · LW(p) · GW(p)

I don't see how those novels could have been an inspiration. I read them when I was just awakening (~2005), and even then I noticed the sharp absence of any artificial intelligence. I believe Greg Egan's idea of the future is still a serious possibility. After all, as with aliens, the only example we have of generally intelligent, aware, and goal-oriented agents is ourselves.

If there was an inspiration, then I would suspect other works to be a more likely source.

I haven't read the book, but it rather looks like he portrays this movement as a conspiracy to live off the money of nonconformists, hidden under a massive amount of writing about rationality and topped by the cherry on the cake that is AI going FOOM (the rapture of the nerds).

Replies from: Risto_Saarelma, garethrees
comment by Risto_Saarelma · 2010-10-07T12:43:57.713Z · LW(p) · GW(p)

I don't see how those novels could have been an inspiration.

They present a seriously posthuman future, with a populace consisting mostly of human uploads and digital-substrate-native people, as a normal setting. Basically: hardcore computationalist cogsci, a computer-science-mediated total upheaval of the human condition, and observing how life goes on nevertheless, instead of bemoaning the awfulness of losing some wetware substrate. Several short stories of non-shallow thinking about the issues of human uploads and human cognitive modification. Pretty much the same cultural milieu that the SIAI writings are based on.

The ideas about the singularity and AI come from Vinge, but I have a hard time coming up with other writers before 2000 who take the same unflinching materialistic stance toward human cognition that Egan does and aren't saddled with blatantly obvious breaks from reality. Ken MacLeod's Fall Revolution series, maybe.

Basically, Egan showed that the place where SIAI wants to go could be habitable.

Replies from: XiXiDu, mjankovic
comment by XiXiDu · 2010-10-07T17:27:59.507Z · LW(p) · GW(p)

Interesting; it worked pretty much the opposite way for me. At the time I read those novels, the particular idea of substrate independence seemed natural to me. Only now am I beginning to doubt its effectiveness. Not that I doubt substrate independence in general, but uploading might not amplify us: the bloated nature of our brains may be an actual feature. Further, we might need a quantum computer the size of a human head to run a simulated brain. The brain's chaotic nature might also not permit much tinkering once you've reached a certain state space.

By the way, isn't there another movement that was influenced by science fiction?

JK :-)

Replies from: JenniferRM
comment by JenniferRM · 2010-10-10T08:22:25.327Z · LW(p) · GW(p)

So I assume that's a crack in the direction of Objectivism, but I think your insight actually applies to a large number of semi-political movements, especially if you see "interesting science fiction", "utopian/dystopian literature", and "political philosophy" as repackagings of basically the same thing.

Part of the political backstory of Plato's Republic is that it documents utopian political theorizing in the presence of Athenian youths. In reality, some of Socrates's students were part of the Thirty Tyrants, the group responsible for a political purge in Athens. In Plato's Apology, there's a bit about how Socrates didn't get his hands dirty when ordered to participate in the actual killing, but if you want to read critically between the lines, you can imagine that his being ordered to drink hemlock was payback for inflicting bad philosophy on the eventual evil leaders of Athens.

It's one of those meta-observations where I'm not sure whether there's real meat there or not, but the pattern of philosophers inspiring politics, and of significant political leaders operating according to some crazy theory, seems to exist. Maybe the politicians would have grabbed any old philosophy for cover? Or maybe the philosophy actually determines some of the outcome? I have no solid opinion on that right now, preferring so far to have worked on data accumulation...

Aristotle was Alexander the Great's tutor. Ayn Rand's coterie included Alan Greenspan. Nietzsche and the Nazis sort of have to be mentioned, even if it triggers Godwin. Marx seems to have had something to do with Stalin. Some philosophers at the University of Chicago might be seen as the intellectual grandparents of Cheney et al.

In trying to find data here, the best example of something that didn't end up being famously objectionable to someone may be Saint Thomas More's book "A Fruitful and Pleasant Work of the Best State of a Public Weal, and of the New Isle Called Utopia", which served as an inspiration for Vasco de Quiroga's colonial administration of Michoacán.

One of the important themes in all this seems to be that philosophy is connected to bad politics with alarming frequency, in cases where the philosophers are not, or would not be, fans of the political movements and outcomes that claim to take inspiration from their thoughts.

Having read Zendegi, I get the impression, from the portrayal of the character Caplan with its explicit reference to Overcoming Bias, and from the parody of the "Benign Superintelligence Bootstrap Project", that Greg Egan is not happy with the actions of those he may have inspired, and is already trying to distance himself from Singularitarianism in the expectation that things will not work out.

The weird thing is that he says "those people are crazy" and at the same time he says "this neuromorphic AI stuff is morally dangerous and could go horribly wrong". Which, from conversations with LW people, I mean...

...the warnings Egan seems to be trying to raise with this book are a small part of the reason this issue is attracting smart people to online political organization in an effort to take the problems seriously and deal with the issue in an intellectually and morally responsible fashion. But then Egan implicitly bashes the group of people who are already worrying about the fears he addresses in his book...

...which makes me think he just has something like a very "far mode" view of OB (or at least he had such a view in July of 2009, when he stopped adjusting Zendegi's content)?

A far-mode view of Overcoming Bias could make us appear homogeneous, selfish, and highly competent. The character "Caplan" is basically a sociopathic millionaire with a cryonics policy, a very high risk tolerance, and absolutely no akrasia to speak of... he's some sort of "rationalist übermensch" twisted into a corporate bad guy. He's not someone struggling to write several meaningful pages every day on an important topic before the time for planning runs out.

I suspect that if Greg got on the phone with some of us, or attended a LW meetup to talk with people, he would find that mostly we just tend to agree on a lot of basic issues. But the phone call didn't happen and now the book is published.

One of the reasons I love science fiction is that it says so much about the time and mindset it was written in. I can read scifi from the 1960s and recognize the themes and preoccupations, understand how they grew out of 1950s science fiction, see how what I'm reading fell out of fashion in favor of 1970s stuff, and so on. Some of it is pretentious, some childish, some beautiful, some is outright political ranting, some is just plain fun. Usually it's a mixture. I wouldn't be surprised if Zendegi is interesting in 2012 for how much it reveals about the preoccupations of people in early 2009.

And the fact that science fiction is working on shorter timescales like this is also something I think is interesting. Shorter science fiction feedback cycles are (weakly) part of what I would expect if concerns about the singularity were signal rather than noise...

Replies from: Broggly, Document, Lanius
comment by Broggly · 2011-02-23T11:56:14.912Z · LW(p) · GW(p)

Surely Mill and the like can be seen as having some influence on liberalism? I certainly don't think our current society is so bad as to be comparable to the Nazis or USSR.

I'm also a little unhappy with your characterisation of both Nietzsche and Alexander. For one, Nietzsche's link to the Nazis was more due to his proto-Nazi sister and brother-in-law, who edited and published The Will to Power, using his name and extracts from his notes to support their political ideology. I also think Alexander wasn't so bad for his time. True, imperialism isn't a good thing, but as I've been told, for the short span of his rule Alexander was a fairly good king who allowed his subjects to follow their own customs and treated them fairly regardless of nationality. I may be mistaken, though; there might be a bit too much hagiography in the history of Alexander.

comment by Document · 2010-11-06T20:47:28.539Z · LW(p) · GW(p)

Tangent: I'm not sure if the comment referred to Objectivism or Scientology.

Replies from: jhuffman
comment by jhuffman · 2011-04-28T21:06:51.356Z · LW(p) · GW(p)

Scientology was my first thought. But Scientology mostly reminds me of another sort of movement altogether.

comment by Lanius · 2011-05-08T17:46:11.765Z · LW(p) · GW(p)

So, who's worse? Greenspan or Rand? I'm pretty sure that very few people heading the F.R. would have done better than Greenspan... but Rand wrote a few novels that perfectly reflect her pathological psyche. She was a clear case of NPD and abused amphetamines for decades.

I've only read Atlas Shrugged... and that only because I couldn't put it down. Had to see whether it could get any worse...

comment by mjankovic · 2014-11-12T20:59:29.887Z · LW(p) · GW(p)

The ideas about the singularity and AI come from Vinge, but I have a hard time coming up with other writers before 2000 who take the same unflinching materialistic stance toward human cognition that Egan does and aren't saddled with blatantly obvious breaks from reality.

Egan's stance is not materialistic in the least. It can best be described as a "what if" of extreme idealism. His fiction has computers without any substrate, as well as universes operating on pure mathematics. You can hardly find a way of being less materialistic than that.

The idea of singularity and AI originates with Stanislaw Lem. Vinge was following his lead.

Egan's novels do have plenty of themes relevant to transhumanism, though their underlying philosophical suppositions are somewhat dubious at best, as they negate the notion of material reality.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-11-13T05:15:46.624Z · LW(p) · GW(p)

Yeah, 'materialism' perhaps isn't the best word, since the being-made-of-atoms part is often irrelevant in Egan's work. The relevant connotation of materialism is being made of the math that the atoms obey, without any rule-excepting magic. Egan has that in spades, whereas cogsci is usually the part where, even in otherwise hard SF, whatever magical asspull the author needs to move the plot happens.

The idea of singularity and AI originates with Stanislaw Lem. Vinge was following his lead.

I guess you're talking about Golem XIV? I was talking about what early MIRI was inspired by, and they talked a bunch about Vinge and pretty much nothing about Lem. And I. J. Good's 1965 Ultraintelligent Machine paper predates Golem.

comment by garethrees · 2011-01-10T19:32:56.233Z · LW(p) · GW(p)

I don't see how those novels could have been an inspiration.

Yudkowsky describes Egan's work as an important influence in Creating Friendly AI, where he comments that a quote from Diaspora "affected my entire train of thought about the Singularity".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-12T21:39:02.896Z · LW(p) · GW(p)

Though it's worth noting that this was a "That shouldn't happen" quote and not a "What a good idea" quote.

comment by XiXiDu · 2010-10-07T09:10:05.282Z · LW(p) · GW(p)

I've already got that book; I have to read it soon :-)

Here is more from Greg Egan:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.

What's really cool about all this is that I just have to wait and see.
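
To make Egan's "basic set of operations" concrete: Turing completeness means that a machine which can read a symbol, write a symbol, and branch on what it read can, given enough time and tape, simulate any other computer. Here is a minimal sketch of such a simulator (Python, with illustrative names; nothing from Egan or the novel):

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right). The machine stops in state "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A toy machine: flip every bit, halting at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine(flip_bits, "1011"))  # prints 0100_
```

Speed and memory aside, anything one such machine can compute, any other can too; that equivalence is the footing Egan is appealing to.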

Replies from: magfrump, blogospheroid, Nebu, Document
comment by magfrump · 2010-10-07T17:35:49.528Z · LW(p) · GW(p)

apart from responding to external events in real time

The concept of "real time" seems like a BIG DEAL in terms of intelligence, at least to me.

If aliens come into contact with us, it seems unlikely that they'll give us a billion years and a giant notebook to come to grips before they try to trade with/invade/exterminate/impregnate/seed with nanotechnology/etc.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-08T14:23:16.927Z · LW(p) · GW(p)

Can you come up with problem scenarios that don't involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?

Sure, you can eat someone's lunch if you're faster than them, but I'm not sure what this is supposed to tell me about the nature of intelligence.

Replies from: magfrump, gwern
comment by magfrump · 2010-10-08T17:20:04.908Z · LW(p) · GW(p)

When I said that "real time" seems like a big deal, I didn't mean in terms of the fundamental nature of intelligence; I'm not sure that I even disagree with the whole notebook statement. But given minds of almost exactly the same speed, there is a huge advantage in things like answering a question first in class, bidding first on a contract, designing and carrying out an experiment fast, etc.

To the point where computation, the one place where we can speed up our thinking, is a gigantic industry that keeps expanding despite paradigm failures and quantum phenomena. People who do things faster are better off in a trade situation, so creating an intelligence that thinks faster would be a huge economic boon.

As for scenarios where speed is necessary that aren't interactive: if a meteor is heading toward your planet, the faster the timescale of your species' mind the more "time" you have to prepare for it. That's the least contrived scenario that I can think of, and it isn't of huge importance, but that was sort of tangential to my point regardless.

comment by gwern · 2011-01-08T17:53:05.700Z · LW(p) · GW(p)

Can you come up with problem scenarios that don't involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?

Existential risks come to mind - even if you ignore the issue of astronomical waste - as setting a lower bound on how stupid lifeforms like us can afford to be.

(If we were some sort of interstellar gas cloud or something which could only be killed by a nearby supernova or collapse of the vacuum or other really rare phenomena, then maybe it wouldn't be so bad to take billions of years to develop in the absence of other optimizers.)

comment by blogospheroid · 2010-10-08T10:33:45.996Z · LW(p) · GW(p)

So, what Greg Egan is saying is that the methods of epistemic rationality and creativity are all mostly known by humans; all we lack is memory space.

I sincerely doubt it. I genuinely believe Anna Salamon's statement, that humans are only on the cusp of general intelligence, is closer to the truth.

EDITED: to add hyperlink to Anna Salamon's article

Replies from: XiXiDu
comment by XiXiDu · 2010-10-08T12:39:06.528Z · LW(p) · GW(p)

What is productive cannot be judged in advance if you are facing unknown unknowns. And the very nature of scientific advance is an evolutionary process, not one of deliberate design but of discovery. We may very well be able to speed up certain weak computational problems by sheer brute force, but not solve problems by creating an advanced problem-solving machine.

That we do not ask ourselves what we are trying to achieve is an outstanding feature of our ability to learn and change our habits. If we were all the productive machines that you might have in mind, then religious people would stay religious, and the bushman would keep striving for a successful chase even after a supermarket was built in front of his village. Our noisy and bloated nature is fundamental to our diversity and to our ability to discover the unknown unknowns that the highly focused, productive, and change-averse autistic mind would never come across or care about. Pigeons outperform humans at the Monty Hall Dilemma because they are less methodical.
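
(The underlying Monty Hall arithmetic is easy to check empirically. A quick simulation sketch, in Python with made-up names, shows that always switching wins about two thirds of the time, which is the strategy the pigeons converge on:)

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; True means the player wins the car."""
    car = random.randrange(3)
    choice = random.randrange(3)
    # The host opens a door that hides a goat and is not the player's pick.
    opened = next(d for d in range(3) if d != choice and d != car)
    if switch:
        # Switch to the one remaining closed door.
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == car

trials = 100_000
for strategy in (False, True):
    wins = sum(play(strategy) for _ in range(trials))
    print(f"switch={strategy}: win rate {wins / trials:.3f}")
# Roughly 0.333 when staying, 0.667 when switching.
```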

What I'm trying to say is that the idea of superhuman intelligence is not as clear as it is portrayed here. Greg Egan may very well be right. That is not to say that, once we have learned about a certain problem, there isn't a more effective way to solve it than using the human mind. But I wouldn't bet my money on the kind of god-like intelligence that is somehow about to bootstrap itself out of the anthropocentric coding it emerged from.

comment by Nebu · 2016-01-24T22:28:00.540Z · LW(p) · GW(p)

I suspect that if we're willing to say human minds are Turing Complete[1], then we should also be willing to say that an ant's mind is Turing Complete. So when imagining a human with a lot of patience and a very large notebook interacting with a billion year old alien, consider an ant with a lot of patience and a very large surface area to record ant-pheromones upon, interacting with a human. Consider how likely it is that human would be interested in telling the ant things it didn't yet know. Consider what topics the human would focus on telling the ant, and whether it might decide to hold back on some topics because it figures the ant isn't ready to understand those concepts yet. Consider whether it's more important for the patience to lie within the ant or within the human.

1: I generally consider human minds to NOT be Turing Complete, because Turing Machines have infinite memory (via their infinite tape), whereas human minds have finite memory (being composed of a finite amount of matter). I guess Egan is working around this via the "very large notebook", which is why I'll let this particular nitpick slide for now.

comment by Document · 2010-11-06T21:21:04.643Z · LW(p) · GW(p)

Random thought about Schild's Ladder (which assumes the same "equal footing" idea), related to the advantages of a constant utility function: Would Tchicaya have destroyed the alien microbes if he'd first simulated how he'd feel about it afterward?

comment by XiXiDu · 2010-10-07T10:22:36.624Z · LW(p) · GW(p)

By the way, a quote from Greg Egan is one of the highest-voted comments on LW:

You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. -- Julie from Crystal Nights by Greg Egan

I wonder if Greg Egan actually knows how seriously the people here take this stuff. You could actually write a non-fiction book about the incidents and beliefs within this community and tell people it is science fiction and they would review it as exaggerated fantasy. I guess that means that the Singularity already happened when it comes to the boundaries between what is factual, realistic, fiction, science fiction, fantasy, not even wrong, or plain bullshit.

Replies from: thomblake, nick012000
comment by thomblake · 2010-10-07T17:46:15.109Z · LW(p) · GW(p)

You could actually write a non-fiction book about the incidents and beliefs within this community and tell people it is science fiction and they would review it as exaggerated fantasy.

I recently was considering writing a post-apocalyptic science fiction story where people are on a quest to find Roko's deleted post, believing it to contain the key to defeating the tyrannical superintelligence.

Replies from: NihilCredo, XiXiDu
comment by NihilCredo · 2010-11-04T14:35:44.336Z · LW(p) · GW(p)

Given what the post was alleged to do to its readers, it would be the most downer of all endings.

comment by XiXiDu · 2010-10-08T11:58:46.314Z · LW(p) · GW(p)

How about one where people destroyed the Internet, burned all books, and killed all academics to impede the dangerous knowledge cut loose by Roko? In the preface, the downfall of the modern world would be explained by this. The actual story would then be set in the year 4110, when the world has not just recovered but invented advanced AI and many of the other technologies we dream about today. The plot would be about a team of AI-supported cyborg archaeologists on Mars discovering an old human artifact from the 2020s, some kind of primitive machine that could be controlled from afar to move over the surface of Mars.

When tapping its internal storage, they are shocked. It looks like the last upload from Earth was all the information associated with the infamous Roko incident that led to the self-inflicted destruction of the first technological civilisation over 2000 years ago. Sure, the archaeologists only know the name of the incident that led a group of people to destroy civilized society. But there's a video too! A Chinese-looking guy can be seen, panic in his eyes and loud explosions in the background. Apparently some SIAI assault team is trying to take out his facility, as you can hear a repeated message coming in from a receiver: "We are the SIAI. Resistance is futile..." He explains how he's going to upload all the relevant data to let the future know that it was all for nothing... then the video suddenly ends. Long-established measures are instantly taken to sandbox the data for further analysis.

In the epilogue, it is told how people are aghast that the ancients destroyed their civilisation over such blatant nonsense. How could anyone have taken those ideas seriously, when every kid knows that a hard takeoff isn't possible, as there can only be a gradual development of artificial intelligence, and that any technological civilization merges with its machines rather than being ruled by them? Even worse, the ancients had absolutely no reason to believe that creating intelligences with incentives broad enough to allow for the urge to evolve is something that can easily happen by accident; now people know that such a thing has to grow, and requires the cooperation of the world beyond itself. And the moral of the story would be that the real risk is taking mere ideas too seriously!

Replies from: jimrandomh
comment by jimrandomh · 2010-10-09T16:27:29.706Z · LW(p) · GW(p)

Maybe I'm generalizing from one example here, but every time I've imagined a fictional scenario where something I felt strongly about escalated implausibly into warfare, I've later realized that it was a symptom of an affective death spiral, and the whole thing was extremely silly.

That's not to say a short story about a war triggered by supposedly-but-not-actually dangerous knowledge couldn't work. But it would work better if the details of the knowledge in question were optimized for the needs of the story, which would mean it'd have to be fictional.

Replies from: Document
comment by Document · 2010-10-20T02:03:29.912Z · LW(p) · GW(p)

There are stories about dangerous knowledge and stories about censorship gone mad, but I can't think of one where the reader themselves isn't sure which it is.

Replies from: khafra, HonoreDB
comment by khafra · 2010-11-08T18:17:44.339Z · LW(p) · GW(p)

There's a related concept in the stage production Urinetown, where the draconian controls of the police state turn out to have been necessary all along; and the Philip K. Dick short story The Golden Man, where the government's brutal crackdown on mutants and sadistic experimentation are defied by a lone researcher, directly leading to implied cosmic waste.

But the closest story I can think of to ambiguous censorship is Scissors Cut Paper Wrap Stone, where the protagonist controls some Langford Basilisks; and censorship per se doesn't play a big part in the plot.

Replies from: Alicorn
comment by Alicorn · 2010-11-08T18:52:50.323Z · LW(p) · GW(p)

There's a related concept in the stage production Urinetown, where the draconian controls of the police state turn out to have been necessary all along

As a musical, Urinetown is okay, but its premise does not make sense. They have somehow managed, in spite of the water shortage and the wherewithal to institute massive societal change to manage it, to continue using restroom facilities that cost water, and they only don't all die because they charge money to use those facilities, as though this will affect how much waste a person produces. This is all instead of a water-free facility, or better yet, reclamation.

Replies from: Broggly
comment by Broggly · 2011-02-23T14:06:29.219Z · LW(p) · GW(p)

And given that the Haber-Bosch process requires water (to produce the hydrogen gas), it seems a little stupid to ban public urination rather than simply insisting they urinate on trees or into buckets for their farmers to use.

comment by HonoreDB · 2011-02-18T00:43:27.492Z · LW(p) · GW(p)

The Pillowman and The Metal Children, both recent plays, come to mind.

comment by nick012000 · 2010-10-22T18:53:06.581Z · LW(p) · GW(p)

What the heck was up with that, anyway? I'm still confused about Yudkowsky's reaction to it; from what I've pieced together from other posts about it, if anything, attracting the attention of an alien AI so it'll upload you into an infinite hell-simulation/use nanobots to turn the Earth into Hell would be a Good Thing, since at least you don't have to worry about dying and ceasing to exist.

Even if posting it openly would just get deleted, could someone PM me or something? EDIT: Someone PMed me; I get it now. It seems like Eliezer's biggest fear could be averted simply by making a firm precommitment not to respond to such blackmail, thereby giving it no reason to commit such blackmail upon you.
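
The logic of that precommitment can be put in toy form (a sketch with made-up numbers and names, assuming a blackmailer who is a straightforward expected-value maximizer):

```python
def blackmail_ev(p_comply: float, gain: float, cost: float) -> float:
    """Expected value, to the blackmailer, of issuing a threat that costs
    `cost` to make and yields `gain` only if the victim complies."""
    return p_comply * gain - cost

# If the victim might cave, threatening pays; if the victim has credibly
# precommitted never to comply, the threat has negative expected value,
# so a rational blackmailer never makes it.
print(blackmail_ev(p_comply=0.5, gain=100.0, cost=1.0))  # 49.0
print(blackmail_ev(p_comply=0.0, gain=100.0, cost=1.0))  # -1.0
```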

Replies from: billswift
comment by billswift · 2010-10-30T18:23:21.587Z · LW(p) · GW(p)

Simply? Making firm commitments at all, especially commitments believable by random others, is a hard problem. I just finished reading Schelling's Strategies of Commitment so the issue is at the top of my mind right now.

comment by xamdam · 2010-11-28T15:16:04.753Z · LW(p) · GW(p)

So I actually read the book; while there is a little "dis" in there, the portrait is very partial: "Nate Caplan, my IQ is 160" of "OverpoweringFalsehood.com" is actually pictured as the rival of the "Benign Superintelligence Bootstrap Project" (a stand-in for SIAI, I presume, which is dissed in its own right, of course). I think it's funny and flattering and wouldn't take it personally at all; I doubt Eliezer would in any case.

BTW, the book is OK; I prefer Egan in far-future mode to near-future mode.

comment by randallsquared · 2011-01-28T03:04:04.631Z · LW(p) · GW(p)

From your description, it seems clear that the book's Caplan is supposed to mock Robin Hanson, or possibly a combination of various GMU economists, since Bryan Caplan is one of Hanson's colleagues there. This also fits with an exaggerated view of Hanson and Yudkowsky disagreeing about hard takeoff and the need for Friendliness.

comment by Jonathan_Graehl · 2010-10-09T01:49:36.370Z · LW(p) · GW(p)

It's hardly fair to call EY Egan's 'biggest fan', but this is nonetheless amusing. The actual disrecommendation of the book was a little hard to find, buried in all the clever analysis:

the novel never delivers the emotional impact that it promises. Ned Beauman in SFX calls it a “tepid meditation on fatherhood and Middle Eastern democracy,” which is a fair summary. Egan’s characterisation is simply not good enough to support the story he wants to tell.

Replies from: garethrees
comment by garethrees · 2011-01-10T19:36:37.100Z · LW(p) · GW(p)

It's hardly fair to call EY Egan's 'biggest fan'

I based this description on Yudkowsky's comments here, where he says of Permutation City, "This is simply the best science-fiction book ever written [...] It is, in short, my all-time favorite."

Replies from: rkyeun
comment by rkyeun · 2013-09-04T20:26:02.303Z · LW(p) · GW(p)

That makes Egan the thing Yudkowsky is the biggest fan of. It does not make Yudkowsky Egan's biggest fan.

Replies from: garethrees, MakoYass
comment by garethrees · 2013-09-14T18:52:38.478Z · LW(p) · GW(p)

"Biggest fan" here is hyperbole for "a very big fan".

comment by mako yass (MakoYass) · 2020-02-25T01:12:50.124Z · LW(p) · GW(p)

I'm not sure what use "biggest fan" would have as a term if it meant that. We would rarely ever want to look at or talk about the biggest fans of almost anything. To like something more than anyone else, you have to be weird. Per the winner's curse, to get to the top, you'll usually need to have made a mistake somewhere in your estimation of it, to like it a bit more than anyone should.

Perhaps "fandom" should come to mean "understanding". You do have to like something quite a bit to come to understand it very well (and though many will claim to understand a thing they dislike better than the people who like it do, they are generally recognisably wrong).

comment by DilGreen · 2010-10-07T09:01:17.751Z · LW(p) · GW(p)

This is interesting.

I'd read all of Egan before finding LW or encountering serious singularity/AI thinkers. (I'm a generalist.) I read Zendegi recently but didn't immediately connect it with here; I may go and re-read it now.

For the record, though, I would have to say that Egan's characterisation of all his protagonists is weak, a tendency that is, I find, widespread among hard SF writers. Not surprisingly: they are interested in the interactions of imaginably real science with the history and future of humanity. Significant emphasis on the characteristics of particular individuals (making them seem real by letting us understand their particular identity as distinctive) is likely to undermine the writers' purpose in examining these interactions. It takes a great artist to unite disparate angles on a topic into a whole (I hesitate to use my usual word for this achievement here: I call it "transcendent").