Superintelligence fiction - "Understand", by Ted Chiang

post by D_Alex · 2013-10-07T03:03:20.587Z · LW · GW · Legacy · 25 comments

http://www.infinityplus.co.uk/stories/under.htm?2. 15-30 min read time, rated "pretty good" by me.

 

There are a couple of interesting features of this story that I would like to discuss - but I don't want to introduce any spoilers, so I'll just leave this here for now.

25 comments

Comments sorted by top scores.

comment by lfghjkl · 2013-10-07T14:36:43.103Z · LW(p) · GW(p)

It is clear that the ending would have been very different had the author heard about TDT.

Replies from: Ishaan
comment by Ishaan · 2013-10-10T03:32:11.871Z · LW(p) · GW(p)

Check out "Story of your life" by the same author.

Ur'f znqr nyvraf jub jbhyq cebonoyl bcrengr ol GQG, zl cuvybfbcuvpny dhvooyrf jvgu GQG abgjvgufgnaqvat.

Replies from: lfghjkl
comment by lfghjkl · 2013-10-10T22:46:54.553Z · LW(p) · GW(p)

Hmm, I just read that story before checking your spoiler, and it was interesting, despite the author's poor grasp of the physics he tried to explain. A light ray going from point A to point B is not taking the shortest path (measured in time) because it wants to reach B; point B is merely a point on the geodesic curve the light ray is currently travelling along.

In other words, these light rays take the least time to reach the points they pass without intending to reach them; the points are just in the way.
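For reference, the variational statement at issue here is Fermat's principle of stationary time. A minimal sketch, with $n(\mathbf{r})$ the refractive index, $c$ the vacuum speed of light, and $ds$ arc length along a candidate path from A to B:

$$\delta \int_A^B \frac{n(\mathbf{r})}{c}\, ds = 0$$

The realized ray is the one whose travel time is stationary under small deformations of the path with the endpoints held fixed; no intention to reach B is needed for this to hold.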

That said, thanks for the recommendation! This story was still pretty good.

V qvfnterr gung gurfr nyvraf ner sbyybjvat GQG (be nal bgure qrpvfvba gurbel sbe gung znggre), fvapr gurl ner nyjnlf npgvat va n cerqrgrezvarq znaare naq arire npghnyyl znxr nal qrpvfvbaf. Gur jubyr pbaprcg bs n qrpvfvba gurbel jbhyq zrnavatyrff gb gurz.

What are your philosophical quibbles with TDT, if I may ask?

Replies from: Ishaan
comment by Ishaan · 2013-10-10T23:42:43.073Z · LW(p) · GW(p)

I agree with your rot13. I guess it mostly just seemed related enough to be worth mentioning.

What are your philosophical quibbles with TDT, if I may ask?

A bunch of inferences which arise from the following statement: "The supposition that an idealized rational agent's mind interacts with the universe in any way other than via the actions it chooses to carry out contains logical paradoxes."

I'm not confident in this opinion; it just represents my current state of understanding. When I've fleshed it out better in my head I will write it up and display it for criticism, unless I realize it is wrong in the intervening time (which is quite likely). One potential consequence is that TDT might ultimately be impossible to fully formalize without paradox via self-reference. The conclusion is that CDT is correct, as long as you follow the no-mind-reading rule. I reconstruct Newcomb's problem and similar problems in such a way that the problem is similar but we aren't reading the agent's mind, and seem to always arrive at winning answers.
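For reference, the CDT/TDT disagreement being gestured at above can be put in a stylized expected-value form for the standard Newcomb setup, assuming the usual $1,000 and $1,000,000 payoffs and a predictor with accuracy $p$:

$$\mathbb{E}[\text{one-box}] = 1{,}000{,}000\,p, \qquad \mathbb{E}[\text{two-box}] = 1{,}000 + 1{,}000{,}000\,(1-p)$$

One-boxing comes out ahead whenever $p > 0.5005$, while a causal analysis that holds the opaque box's contents fixed recommends two-boxing regardless; that gap is what TDT is meant to close.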

comment by stripey7 · 2013-10-08T02:03:53.878Z · LW(p) · GW(p)

I'll have to reread before I can make a comment specific to this story. But I found the collection as a whole (Stories of Your Life and Others) incredibly stimulating. I don't think I've ever seen so many really original ideas between two covers.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-10-09T07:17:18.532Z · LW(p) · GW(p)

Man, the Babylon story and the Arab world story were both incredible. Excellent worldbuilding that takes complex ideas and renders them in understandable Buffy-speak, with scattered crunchy genius bonuses.

comment by Bayeslisk · 2013-10-07T05:03:09.811Z · LW(p) · GW(p)

I liked this; it was excellent. It even conveys the idea of a sufficiently intelligent entity deriving complicated and useful results from little information, implementing superior evidence-gathering and processing to win, and possibly having sapient emotions.

comment by Kaj_Sotala · 2013-10-08T18:21:16.161Z · LW(p) · GW(p)

Thanks, nice story.

My reaction was roughly the opposite of what the others described: I thought the beginning was a somewhat generic and implausible brand of superintelligence porn, but the end was cool. Mostly I enjoyed the way a "conversation" and battle between superintelligences was depicted; the attacks and countermeasures were rather clever.

comment by David_Gerard · 2013-10-09T11:19:49.378Z · LW(p) · GW(p)

This did provoke me to reread Jeeves and the Singularity, by Andrew Hickey.

comment by palladias · 2013-10-07T03:10:51.304Z · LW(p) · GW(p)

Oooh, I read this and...

Nf hfhny, V ybir ernqvat Puvnat. (Vagebqhprq gb uvz guebhtu "Uryy vf gur Nofrapr bs Tbq" juvpu vf rkpryyrag). Ohg rira gubhtu V jnf snfpvangrq ol guvf fgbel nf vg hasbyqrq, V sryg purngrq ol gur pyvznk. Vg jnf whfg fb sehfgengvat gb unir gur pbaarpgvba orgjrra gjb pyrire crbcyr (znavchyngvat gur znexrgf gb fraq n zrffntr! fdhrr!) or fb crggl naq fznyy. Creuncf gur cbvag jnf gung vagryyvtrapr nhtzragngvba vf begubtbany gb rguvpny nqinapr, ohg V jnfa'g pbaivaprq gung gurfr gjb crbcyr jrer fb hacyrnfnag gb ortva jvgu, fb gur jnfgr enaxyrq.

Replies from: NancyLebovitz, Bayeslisk, shminux
comment by NancyLebovitz · 2013-10-07T14:15:54.800Z · LW(p) · GW(p)

I see it as an example of the kind of story where the author has a really cool idea, but forces a pointless conflict onto it so that there will be a plot.

Replies from: Transfuturist
comment by Transfuturist · 2013-10-07T21:01:04.021Z · LW(p) · GW(p)

I would have liked to see the story's end without the second AI (augmented individual). However, I did like the story as it was. The issue I found with it was that their conflict of values was artificial. Human value is more complex than what was depicted (aesthetic hedonism(?) vs. utilitarianism), and unless the author had some thesis that such an augmented human would simplify their values, I would have enjoyed seeing them cooperate to a better end, for Earth and for the protagonist. Their goals did not conflict in any way (unless the protagonist was a paperclipper for intelligence), and they could have achieved a result that had greater value through cooperation, with a faster utopia for Reynolds and an isolated echo chamber for the protagonist, as well as a possible form of society of superintelligences.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-10-08T18:26:03.178Z · LW(p) · GW(p)

I agree that the conflict was implausible, but then the magnitude and speed of the main character's growth in intelligence was already magical enough that I'd put the whole thing into the "stories that should be judged based on the aesthetic, not anything remotely resembling plausibility" category.

comment by Bayeslisk · 2013-10-07T05:01:49.176Z · LW(p) · GW(p)

I quite like Chiang myself. There is a quality to a few authors like him, Mieville, and Egan that I can't quite pin down but really like. Possibly it's the linguistics, the good worldbuilding, and the fact that their characters are rarely inexplicable idiots.

comment by Shmi (shminux) · 2013-10-07T05:50:57.393Z · LW(p) · GW(p)

Fictional evidence for the orthogonality thesis :)

comment by Manfred · 2013-10-07T04:02:24.656Z · LW(p) · GW(p)

They should have understood the concept of love.

The superpowers are fun, but pretty implausible. I think a lot of the fun is because it's out of the mainstream. We have so many stories where being crazy-superpower-smart just means making fancy gadgets, or developing an inflated ego and pulling off one successful scheme before perishing at the hands of the hero.

comment by danlucraft · 2013-10-12T11:21:29.838Z · LW(p) · GW(p)

After reading this story I spent about 30 seconds worrying that my iPad was broken because the display was now tinted pink. Even a restart didn't fix it. Then I realized.

comment by [deleted] · 2013-10-09T05:34:39.529Z · LW(p) · GW(p)

I'd never seen this before; I enjoyed it.

Tangent: I've always had a strong appreciation for stories with smart protagonists - even if the actual actions taken by the protagonists are, on second thought, not quite the genius strategic moves that the author intended them to be. I think that at a fundamental level this is because reading stories like this requires more or less putting yourself into the shoes of a superintelligence just to determine whether what they are doing is optimal. After the story finishes, when you go back to coding, reading, or playing music, you still have the lingering thought patterns of a being which is, in some stories, many times more intelligent than you. It's a very useful state to be in, but for some reason it becomes harder and harder to sustain once I stop reading books like that for a while. Does anyone else experience something like this?

Replies from: cousin_it
comment by cousin_it · 2013-10-09T11:08:36.265Z · LW(p) · GW(p)

Yeah. That's one nice thing about Eliezer's fiction: when he writes a smart character, he actually tries to come up with smart decisions for them to make. Though I guess it's easier to have the character pull the solution out of a hat if you designed the puzzle yourself.

comment by FiftyTwo · 2013-10-07T05:51:37.860Z · LW(p) · GW(p)

I read it recently. I liked it overall but found the ending a bit strange/unsatisfying. Did anyone else have that experience?

Replies from: shminux
comment by Shmi (shminux) · 2013-10-07T06:01:30.666Z · LW(p) · GW(p)

What's unsatisfying to me is when I can find holes in a strategy presumably concocted by a superintelligence.

Replies from: FiftyTwo