A few analogies to illustrate key rationality points
post by kilobug · 2011-10-09T13:00:00.084Z · LW · GW · Legacy · 52 comments
Introduction
Due to long inferential distances, it's often very difficult to use knowledge or understanding given by rationality in a discussion with someone who isn't versed in the Art (someone who hasn't read the Sequences, or maybe not even Gödel, Escher, Bach!). So I often find myself forced to use analogies, which will necessarily be more-or-less surface analogies; they don't prove anything, nor give any technical understanding, but they let someone grasp a complicated issue in a few minutes.
A tale of chess and politics
Once upon a time, a boat sank and a group of people found themselves isolated on an island. None of them knew the rules of the game of "chess", but there was a solar-powered portable chess computer on the boat. A very simple one, with no AI, but one which would enforce the rules. Quickly, the survivors discovered the joy of chess, deducing the rules by trying moves and seeing the computer say "illegal move" or "legal move", and seeing it proclaim victory, defeat or a draw.
So they learned the rules of chess: the movement of the pieces, what "chess" and "chessmate" are, how you can promote pawns, and so on. And they understood the planning and strategy skills required to win the game. So chess became linked to politics; it was the Game, with a capital letter, and every year they would organize a chess tournament, and the winner, the smartest of the community, would become the leader for one year.
One sunny day, a young fellow named Hari, playing with his brother Salvor (yes, I'm an Asimov fan), discovered a new chess move: he discovered he could castle. In one move, he could free his rook and protect his king. They kept the discovery secret and used it in the tournament. Winning his games, Hari became the leader.
Soon after, people started to use the power of castling as much as they could. They even sacrificed pieces, even their queen, just to be able to castle fast. Everyone was trying to castle as quickly as possible, losing sight of the final goal (winning) for the sake of the intermediate goal (castling).
After a few years, another young fellow named Wienis, who had always hated Hari and Salvor, realized how mad people had become about castling. So he decided never to castle again, and managed to win the tournament.
Starting from that day, the community split in two: the Castlers and the anti-Castlers. The first would always try to castle, the others never would. And if you advised a Castler that in a specific situation he shouldn't castle, he would label you an "anti-Castler" and stop listening to you. And if you advised an anti-Castler to castle in a specific situation, he would label you a "Castler" and stop listening to you.
That tale illustrates a very frequent situation in politics: something is discovered which leads to great results, but it is then mistaken for a final goal instead of an intermediate goal, and used even when it doesn't serve the final goal. Then some people, in reaction, oppose the whole thing, and the world is split between the "pro" and the "anti". I used this tale to argue with someone who told me "but you're a productivist", and it worked quite well to get my point across: productivism can lead to huge increases in quality of life, but if it gets mistaken for a final goal (as many people do now, using GDP and economic growth as the ultimate measures of success or failure), it leads to disasters (ecological destruction, dangerous or very painful working conditions, neglect of fundamental research in favor of short-term research, ...). And people are categorized as either "productivists" or "anti-productivists". But it could apply to many other things, like the free market or free trade.
The North Pole analogy
Well, that one isn't any new, I'm using since like a decade, and I'm probably not the only one to use it, but it does work relatively well. It's an analogy used to answer the "But what was before the Big Bang?" question. When I'm asked that, I can't just start explaining the mathematical concept of a limit, the Planck time, or theories like timeless physics or quantum vacuum fluctuations, so I just answer: "What's north of the North Pole?". That usually works quite well to make people understand that asking what came before the start of time just doesn't have any meaning.
The alphabet and language analogy
That's an analogy that I found very useful for making people understand reductionism, single-level reality and multi-level maps, and the fact that you can understand (more or less completely) one level without understanding another. It also works very well for brain scanning and mind uploading.
Take a piece of paper with writing on it. Do words exist, I mean, really exist? They are just made of letters. There is nothing more than letters, arranged in a specific way, to make words. And letters are nothing more than ink. How can consciousness arise from mere neurons? The same way that the meaning of a text can arise from mere letters. There is only one level of reality: the ink and the paper. And the ink and paper are made of molecules, themselves made of atoms. And we can descend all the way down to quantum mechanics.
Now, can we understand one level without understanding another? Definitely. We can recognize the letters as belonging to the Roman alphabet without understanding the language. We know them, since we use that same alphabet daily. But if the text is in German and we don't speak German, we won't understand the next level up, that of words, nor the one of meaning.
And can we understand a higher level without understanding a lower one? If we speak Spanish and the text is in Portuguese, we may understand most of the highest level, the level of the text, without understanding every single word and grammatical rule of Portuguese. So an incomplete understanding of a lower level can give us an almost complete understanding of a higher level. Or even more obviously: even if we know nothing about the chemistry of ink and paper, we can still understand the letters and the higher levels.
But what about mind uploading? "We don't understand the human brain, it's too complicated, so we'll never be able to upload minds." Well... there are levels in the human brain, just as in a text on paper. Given a text in ancient Egyptian hieroglyphs, you won't understand anything of the text, and you won't even know the letters. But you can still duplicate it with pen and paper, reproducing the exact drawing by hand, if you're skilled enough with a pen. Or you can scan it, store it on a USB key, and give it to an archaeologist. In both cases, you will have duplicated the meaning without even understanding it. And if you know the alphabet but not the language, as with German for me, you can recopy it much faster, or type it instead of scanning it, leading to a much smaller file that you can send by email instead of on a USB key.
In the same way, we don't need to understand the human brain at all levels to be able to duplicate it, or to scan it and have it digitized. If we only know its chemistry, we can scan it at the molecular level; it'll take a long time and require a lot of storage, like scanning the Egyptian text to a bitmap. If we know the workings of neurons, and can duplicate it at the level of individual neurons instead of individual molecules, it'll be much easier to duplicate and require much less storage, like typing out the German text.
(There is a variant of this analogy for geeks, about hard disks, file systems and file formats. You can understand a file system without really knowing how bits are stored on the magnetic platter, and you can duplicate a hard disk by doing a block copy even if you don't understand the file system.)
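That geek variant can be sketched in a few lines of Python. This is a toy illustration, not real disk tooling: the copier treats the "disk" as opaque blocks of bytes and duplicates it perfectly while having no model at all of the file system stored on it.

```python
# Toy illustration: block-copying a "disk" duplicates everything on it,
# even though the copier knows nothing about the file system's structure.

def block_copy(disk: bytes, block_size: int = 512) -> bytes:
    """Copy a disk image block by block, treating it as opaque bytes."""
    blocks = [disk[i:i + block_size] for i in range(0, len(disk), block_size)]
    return b"".join(blocks)

# Pretend this is a tiny disk image holding a file system we don't understand.
disk_image = bytes(range(256)) * 8  # 2048 bytes of opaque data

copy = block_copy(disk_image)
assert copy == disk_image  # a perfect duplicate, with zero understanding of the contents
```

The analogy to brain scanning is direct: copying at a low enough level preserves every higher level for free, at the cost of copying more raw data than a higher-level understanding would require.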
The Lascaux paintings and trans-humanism
Speaking about trans-humanism with a coworker, I ran into the usual objection: "but it's death that gives meaning to our life; just look at all the beautiful poetry that was written because of death and the feeling of urgency it gives". I tried the "baseball bat on the head once per week" objection, but it didn't really work well. So I let the issue of transhumanism go, drifted the topic to art in general, and then asked: "Do you think we appreciate the Lascaux paintings more or less than the people who painted them did, 30,000 years ago?" He said "More". And then I said: "For the same reasons, in 3,000 years, when the average life span is counted in thousands of years (or more), they'll appreciate even more what we wrote in a time when death was always near." Which partially worked, but only partially, because he admitted we would appreciate existing poetry as much, if not more, than we do now, but he still claimed that we wouldn't be able to write it anymore, and I didn't find anything as simple/strong to answer to that.
Conclusion
Arguing by analogy is very error-prone, but it's the most efficient way I've found to cross inferential distances. I would like to hear your opinions and comments, both about the principle of using analogies to break through long inferential distances and about these specific ones, and to hear your own analogies, if you have some to share.
(PS: I'm still new to Less Wrong, and I'm not sure about the exact customs for making top-level posts; if you think this didn't deserve one, please tell me, and accept my apologies.)
52 comments
Comments sorted by top scores.
comment by CronoDAS · 2011-10-09T21:38:11.760Z · LW(p) · GW(p)
Imagine the chaos that ensues when somebody discovers that you can castle queenside. ;)
Replies from: DSimon↑ comment by DSimon · 2011-10-10T05:34:38.258Z · LW(p) · GW(p)
And don't even get me started on en passant...
Replies from: DavidPlumpton↑ comment by DavidPlumpton · 2011-10-10T06:59:15.905Z · LW(p) · GW(p)
Perhaps underpromotion is the rarest move of all.
Replies from: DSimon↑ comment by DSimon · 2011-10-13T13:49:53.853Z · LW(p) · GW(p)
Yeah, but that one's probably going to be a lot easier to stumble into on the chess computer than en passant. After a pawn reaches the back row, the computer has to wait for some additional input on what piece to promote it to before it can do anything; it's an obvious point for experimentation.
comment by Alicorn · 2011-10-09T17:23:40.317Z · LW(p) · GW(p)
I like this post and have upvoted it. But,
What's north of the North Pole?
strikes me as a fairly cheap linguistic trick - you could deploy it with equal efficacy regardless of whether there can meaningfully be said to be "time" before the Big Bang. Now, the fact that it is a linguistic trick itself illustrates a sort of "ways words can be wrong" meta-point, which could be valuable in unrelated circumstances, but your other analogies are much better.
Replies from: kpreid, Sniffnoy↑ comment by kpreid · 2011-10-10T10:49:47.342Z · LW(p) · GW(p)
It isn't just a linguistic trick: it is pointing out that there exist ways in which a dimension can end, even if the particular way is nothing like the other.
Replies from: torekp, Bound_up, MarkusRamikin↑ comment by Bound_up · 2015-05-20T23:03:57.436Z · LW(p) · GW(p)
I thought the inquiry referred more to what was the cause of the Big Bang than to what was happening across the y-axis of the timeline.
If I'm not mistaken, we don't know yet?
Or have we concluded that the Big Bang is some kind of uncaused event?
↑ comment by MarkusRamikin · 2011-10-10T20:33:54.894Z · LW(p) · GW(p)
Well put. As was the analogy itself; I found it quite helpful personally and it's my favorite part of this article.
comment by DSimon · 2011-10-09T17:09:27.975Z · LW(p) · GW(p)
Which partially worked, but only partially, because he admitted we would appreciate existing poetry as much, if not more, than we do now, but he still claimed that we wouldn't be able to write it anymore, and I didn't find anything as simple/strong to answer to that.
Possible counter-example: Even as really well designed transhuman entities, we will eventually start "forgetting" our past experiences, given light-speed limits and some other physical upper bounds on practical storage densities and retrieval rates. At the very least, it would take long enough to retrieve enough detail about ourselves far enough in the past that it would be more like looking up someone else's life on Wikipedia than actually remembering the past. With clever indexing and compression, as well as advanced storage hardware, this might take many millions of years or longer, but it will still eventually happen as long as we continue having new experiences.
Therefore, if your friend insists that we must have something to mourn in order to create and appreciate poetry, consider the unavoidable slow loss of our past selves.
Replies from: pjeby, kilobug↑ comment by pjeby · 2011-10-10T03:52:40.118Z · LW(p) · GW(p)
Therefore, if your friend insists that we must have something to mourn in order to create and appreciate poetry, consider the unavoidable slow loss of our past selves.
This happens to me now. I have more than once noticed and mourned gaps in my memory, where a day of bliss interrupted by some silly bit of emotional drama is retained only as a memory of a ruined day.
And on a more professional level, I routinely notice the annoying difficulty of recalling how a "past self" as little as one day old would think about a problem (when I've effectively deleted that self due to a recent mind hack).
(It's not that I personally care about how the deleted self thought or felt, it's just that I'm usually trying to write accounts of the before-and-after of my mindhacks, and it's bloody difficult to write the "before" part, after, because I just don't think the same way any more.)
↑ comment by kilobug · 2011-10-09T18:01:06.168Z · LW(p) · GW(p)
True. But I wasn't arguing for "absolute immortality" (that's quite fuzzy for me; "eternity" is not something I can apprehend, and I'm not sure it's really possible, given the Second Law of Thermodynamics), but for a more "moderate" amount of trans-humanism, with no more old age but occasional accidents, so people living on average a few million years. But what you said holds true for "absolute immortality", if it's possible.
Replies from: CronoDAS, DSimon↑ comment by CronoDAS · 2011-10-09T21:53:30.906Z · LW(p) · GW(p)
so people living in average like a few millions of years.
"Curing aging" isn't enough on its own to get a million-year lifespan; an 18-year-old male in the United States has about a 1 in 1000 chance of dying before reaching his 19th birthday*, which would imply an average lifespan of about 1,000 years. Of course, by the time we do cure aging, we'll probably have solved a lot of other problems, too...
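The arithmetic behind that figure can be checked with a short sketch: under a constant annual death risk p, the number of years lived is geometrically distributed with mean 1/p, and the chance of surviving N consecutive years is (1-p)^N.

```python
# With a constant annual death risk p, years lived follow a geometric
# distribution, so the mean lifespan is 1/p.
def expected_lifespan_years(annual_death_risk: float) -> float:
    return 1.0 / annual_death_risk

def survival_probability(annual_death_risk: float, years: int) -> float:
    """Chance of surviving `years` consecutive years at a constant annual risk."""
    return (1.0 - annual_death_risk) ** years

# A 1-in-1000 annual death risk, held constant forever:
print(expected_lifespan_years(0.001))      # about 1000 years on average
print(survival_probability(0.001, 10**6))  # essentially zero: why curing aging
                                           # alone doesn't give million-year lives
```

This is only the constant-risk idealization, of course; real accident rates vary with age and circumstance.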
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-09T13:23:58.567Z · LW(p) · GW(p)
Upvoted for suggesting concrete examples.
comment by Brickman · 2011-10-10T01:19:17.691Z · LW(p) · GW(p)
I like the first two, and the chess one's pretty interesting though I can't imagine I'd have an easy time getting someone to stand still long enough to hear the whole thing as an argument. But I don't really like the last one. You've been tricked into accepting his premise, that death lets you create more meaningful art, and trying to regain ground from there. It's that premise itself that you should be arguing against--point out all the great literature and art that isn't about death, and that you could still have all of that once death was gone. Also point out that to someone with cancer today the availability of art is probably less valuable than the availability of a cure would be, and there's no reason to assume that'll change if you double his age, even if you double it several times.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2011-10-10T17:43:51.908Z · LW(p) · GW(p)
Also point out that to someone with cancer today the availability of art is probably less valuable than the availability of a cure would be.
Or to approach the same point from a slightly different direction - Elie Wiesel wrote some pretty awesome stuff, but that doesn't mean we should have more Holocausts.
Replies from: kilobug↑ comment by kilobug · 2011-10-10T18:50:15.678Z · LW(p) · GW(p)
That's an important point. He's a fan of Jacques Prévert, and I'll try to point out to him that without WW2, "Barbara" (and many others of his poems) wouldn't have been written, but that this still doesn't make war in general, nor WW2 in particular, a good thing.
comment by Vladimir_Nesov · 2011-10-09T13:16:26.192Z · LW(p) · GW(p)
rationalism
http://lesswrong.com/lw/3rd/note_on_terminology_rationality_not_rationalism/
Replies from: kilobug
comment by CronoDAS · 2011-10-09T21:35:33.562Z · LW(p) · GW(p)
So they learned the rules of chess, movement of the pieces, what "chess" and "chessmate" is
I think you mean "check" and "checkmate".
Replies from: florian↑ comment by florian · 2011-10-09T22:05:27.140Z · LW(p) · GW(p)
I assumed that was intentional, as the players would not know the terminology of chess if they had to deduce the rules.
Replies from: kilobug, MarkusRamikin↑ comment by kilobug · 2011-10-10T16:31:03.722Z · LW(p) · GW(p)
Thanks for the excuse, but I have to admit... it was really a mistake :/ In French we say "échec" and "échec et mat", and the name of the game is "les Échecs". So... I confused the English terms.
But well, your excuse is cute, so I'll leave the mistake in the text ;)
↑ comment by MarkusRamikin · 2011-10-10T12:44:21.923Z · LW(p) · GW(p)
That doesn't seem to make sense. What are the odds they would have invented the word "chessmate" if the computer never used it?
Replies from: florian↑ comment by florian · 2011-10-10T16:09:06.597Z · LW(p) · GW(p)
It's a fictional example and it's not that uncommon in fiction to have terminology that's almost, but not entirely like the equivalent in the real world. I find that kind of thing amusing, so I thought the author might have a similar sense of humour, so it could be intentional. But I admit that Occam's razor supports the theory that it's simply a mistake.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-10T16:24:05.250Z · LW(p) · GW(p)
it's not that uncommon in fiction to have terminology that's almost, but not entirely like the equivalent in the real world.
Hm... That doesn't strike me as true, actually. Can you think of an example or three?
EDIT: Actually, now that I think of it, this can be true when an author is creating an alternative reality but wants things to be familiar to the audience. Like in Dragon Age, all the titles of the nobility and such are similar to real-world terms for no special reason. But then compare Warhammer 40k, which is not an alternative world but rather a far future of ours: a computer is a "cogitator" and a camera is a "picter". Clearly different words, but rather than complete inventions, they're obviously based on somewhat plausible alternative etymologies.
However, that doesn't apply to a story like this, which is trying to follow logically from its starting premise, and in which the formation of the terms happens within the story itself. It's that plausible etymology that is lacking.
comment by JoshuaZ · 2011-10-09T19:34:08.940Z · LW(p) · GW(p)
Most of these are quite good. The castling one is a novel one that I like a lot.
If you had two chess players of about equal ability, one of whom would castle whenever he could and work towards it, while the other would never castle, I strongly suspect that the anti-castler would be more likely to win. A moderate castler, who castled when he could but didn't take steps towards castling when it wasn't heavily in his interest, would probably be the sort of player who rises to the top.
comment by NancyLebovitz · 2011-10-09T15:45:36.295Z · LW(p) · GW(p)
I'm very fond of the castling analogy-- people do lose track of context, and tying group identity into choices which are useful some of the time makes it worse.
I'm not so sure about the Lascaux cave paintings. On one hand, we get to think about how wonderful their age is, but the people who made them would have had much less visual art competing for their attention, so the painting might have been much more special for them.
comment by spriteless · 2011-10-19T02:51:12.327Z · LW(p) · GW(p)
"What's further north of the north pole" is easier to say than "perception of time is a function of entropy, we just have more memories in the direction of greater entropy, the big bang is like how a sphere gets taller quickest on the leftmost point going right."
comment by falenas108 · 2011-10-09T15:16:50.438Z · LW(p) · GW(p)
The content is good for a top level post. The only thing you might want to change is the grammar/spacing issues (Using « instead of quotes, putting spaces between several punctuation marks, the sentence "Well, that one isn't any new, I'm using since like a decade" in the second paragraph, and a few other minor things.)
Edit for clarification: don't put the spacings between the punctuation marks that are there now.
Replies from: kilobug, ciphergoth↑ comment by kilobug · 2011-10-09T15:26:20.969Z · LW(p) · GW(p)
Ok, thanks for your remarks. I'm not a native English speaker (I'm French) so it's not as easy to write "perfect English" for me, but I'll try to be even more careful next time.
Replies from: Emile↑ comment by Paul Crowley (ciphergoth) · 2011-10-09T15:30:15.259Z · LW(p) · GW(p)
A substitute would be "Well, this one isn't new, I've been using it for something like a decade" - and I've used the quote marks which are standard for English writing. Is your native language German? Hope this helps!
comment by selylindi · 2011-10-09T22:51:24.452Z · LW(p) · GW(p)
It's an analogy used to answer to the "But, what's before the Big Bang ?" question. ... so I just answer "What's north of the North Pole ?".
As an analogy for one view of the how the Big Bang might have worked, that's quite a good analogy. It isn't totally fair, though, since that exact question is an active area of research by theoretical physicists, and there has even already been one observational test of a particular version of multiverse cosmology that would imply other universes existing before our Big Bang.
comment by MarkusRamikin · 2011-10-09T21:17:45.214Z · LW(p) · GW(p)
How can consciousness arise from mere neurons?
I thought the answer to that one was "we don't know yet". And it strikes me as a little dangerous to draw analogies about a process which you haven't understood yet... and especially to try and sell such an analogy as a "key rationality point".
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-10-09T21:48:34.179Z · LW(p) · GW(p)
I think it's a valid explanation of how something can arise from something lower-level.
Replies from: DSimon↑ comment by DSimon · 2011-10-10T05:33:05.314Z · LW(p) · GW(p)
Yeah, but we have so many other examples of reductionistic layers in stuff that we do understand really thoroughly that it seems a little off to go with one that we're still fuzzy about.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-10-10T09:12:51.442Z · LW(p) · GW(p)
I see it as an applause light. On Less Wrong, talking about consciousness arising from physical structures is a way to show your allegiance to reductionism and physicalism, and you can hardly go wrong doing that.
The reason I'm being a pain about this sort of stuff is that I have an allergy to signs that our tribal beliefs - atheism, physicalism, MWI, memetics, reductionism etc - are being used as synonyms for rationality. They're just ideas, theories. Even if they're all correct (I'm not sure on some of them), if reality were different, they'd be wrong. So they're hardly the essence, or key ideas, of rationality, the way map/territory, evidence, Bayes Theorem etc can be said to be key concepts of rationality.
"The whole idea of a unified universe with mathematically regular laws, that was what had been flushed down the toilet; the whole notion of physics. Three thousand years of resolving big complicated things into smaller pieces, discovering that the music of the planets was the same tune as a falling apple, finding that the true laws were perfectly universal and had no exceptions anywhere and took the form of simple math governing the smallest parts, not to mention that the mind was the brain and the brain was made of neurons, a brain was what a person was -
And then a woman turned into a cat, so much for all that.
"Right," Harry said, somewhat dazed. He pulled his thoughts together. The March of Reason would just have to start over, that was all; they still had the experimental method and that was the important thing."
I quoted that because it illustrates my point: reality is "somehow separate from my very best hypotheses", all those specific "rational" beliefs might need to be abandoned at some point, and the methods of rationality would still apply.
So when merely professing these beliefs is somehow taken as imparting a lesson on rationality (like when a quote about memetics gets upvoted as a rationality quote, even though it carries no rationality lesson in itself), something feels wrong to me.
Replies from: soreff↑ comment by soreff · 2011-10-14T16:47:16.097Z · LW(p) · GW(p)
The reason I'm being a pain about this sort of stuff is that I have an allergy to signs that our tribal beliefs - atheism, physicalism, MWI, memetics, reductionism etc - are being used as synonyms for rationality.
Agreed. To pick a more extreme example: tribal beliefs here seem to also include cryonics, life-extensionism, and immortalism. While I agree that these are desirable goals, if feasible, the frequent assumption here of their practical feasibility seems to rest on assumptions of continued economic, technical, and medical progress, which are actually contrary to recent evidence. (To put my own cards on the table - I'm an Alcor member, but more out of status quo bias from the 1990s, when it looked like Drexler/Merkle nanotech was going to be funded and succeed and be applied to medicine on a timescale of a decade or so. Didn't happen.)
An alternative to the consensus view on LW, equally physicalist, is to see significant further healthy life span gains as unlikely. Under the practical technological and medical constraints that we face, it might be more helpful to look towards easier assisted suicide than towards cryonics and similar options that rely heavily on progress that has in many ways stalled.
Replies from: MarkusRamikin, lessdazed↑ comment by MarkusRamikin · 2011-10-14T18:26:59.065Z · LW(p) · GW(p)
I don't know if it's just me, but I have to say that I don't get the impression that cryonics and those other topics are tribal beliefs here.
They are popular, but tribal beliefs aren't the same as merely popular topics. Rather, it's the stuff which gets taken for granted by the supermajority of those who bother or dare to speak up, and which makes for easy applause lights.
The feasibility of cryonics and the rationality of the choice to get frozen have been points of very real debate, and if the author of this post had chosen to say something in favor of cryonics as an example of a "key rationality point", I bet that would get challenged quite readily.
Replies from: MarkusRamikin, soreff↑ comment by MarkusRamikin · 2011-10-14T19:10:46.631Z · LW(p) · GW(p)
Also, at the risk of testing everyone's tolerance for the density of MarkusRamikin posts on a page (sorry!) I'd like to make something clear. I fear I might sound like I think:
scarcity of debate -> tribal belief -> bad.
That is not so. I've no love for fake debate for the sake of debate, and I don't think the fact that some beliefs are shared so widely here that there is virtually no debate is in itself wrong. Even in the best rationalist community you could imagine, this would happen - precisely because rationality is supposed to help us narrow down on true beliefs, which necessarily means that if our rationality is on the whole greater than the wider society's, our beliefs should show convergence.
That this community consensus leads to some tribalism is probably an unavoidable side effect. But it's the sort of entropy we need to remain vigilant for and pump out.
Replies from: lessdazed↑ comment by soreff · 2011-10-14T20:05:56.381Z · LW(p) · GW(p)
Fair enough. I'm not claiming that there is a supermajority solidly convinced of the practical feasibility of cryonics, and significant life extension, and immortalism. The impression that I get is more nearly that most of the hypotheticals that I see here posit more medical and technical progress than is supported by observation. Now, these are hypotheticals - for instance the discussion of consequences of various (large!) degrees of life extension (starting at 1000-year lifespans) in the responses to the original post on this page. It is perfectly valid to discuss improbable hypotheticals. Nonetheless, I get the impression that very few of the hypotheticals explored on LW posit something close to the stagnation that we've actually seen in many fields. Perhaps it doesn't count as a tribal belief, but it does seem to set a tone of the discussion here - and not in the direction of making the discussion less wrong :-)
↑ comment by lessdazed · 2011-10-14T20:23:41.294Z · LW(p) · GW(p)
more helpful to look towards easier assisted suicide than towards cryonics
As false choices ignoring third alternatives go...this is an interesting one to set up.
comment by saph · 2011-10-12T11:43:21.789Z · LW(p) · GW(p)
And letters are nothing more than ink. How can consciousness arise from mere neurons ? The same way that the meaning of a text can arise from mere letters.
I am not sure if this is a good analogy. The meaning of text is usually not hidden somewhere in the letters. Most of it is in the brain of the writer/reader. (But I agree that some meaning can be read out from a text without much previous knowledge.)
Replies from: Cthulhoo↑ comment by Cthulhoo · 2011-10-27T15:33:28.951Z · LW(p) · GW(p)
Quoting kilobug:
it's often very difficult to use knowledge or understanding given by rationality in a discussion with someone who isn't versed in the Art
From what I understood these analogies are meant to better explain some basic point to people with little to no previous background. I assume that the average target of these analogies has never quite considered your objection, and is sitting on a lower level, so the analogy should be good enough to deliver the point. Once you've made yourself clear, you can then explain to your interlocutor that the analogy is not an isomorphism.
This is at least how I usually proceed when trying to explain new and complicated concepts to people that encounter them for the first time.