The allegory of the hospital

post by Sunny from QAD (Evan Rysdam) · 2020-07-02T21:46:09.269Z

This is a link post for http://questionsanddaylight.com/?p=537


Sylvanus, a professional programmer, has gotten into a car crash, with disastrous results: he’s paralyzed from the wrists down, and won’t be able to use a keyboard or mouse for six weeks while his hands heal. He has to take time off from his job, and worse, he has to take a break from his obsessive hobby of reading dozens of online news articles a day. He loves to stay up-to-date about current events, but no keyboard and no mouse means no surfing the internet, and the staff at the hospital are too busy to keep him in the loop.

Luckily, Sylvanus has a brilliant imagination, and since he’s on a break from life he has plenty of time to think. He lies in the hospital bed all day, relaxing his mind and sending probing thoughts out into the world, like a golfer holding up a finger to check the direction of the wind. It isn’t long before he starts getting inklings about the outside world. They’re weak and vague at first, but they get stronger and clearer with each passing day of practice. He asks a nurse for a tape recorder, which he operates by pressing the buttons with his limp knuckles, and which he uses to make spoken notes about the messages he receives.

After a few weeks of thinking and recording, he decides it’s time to review his notes. The process is challenging, since the messages are often incoherent or contradictory, but he feels up to the task, and before long he’s pretty sure he knows what he’s missed in world news since the accident.

The next day he gets a visit from his co-worker Daniella. She knows he’s a news junkie, and she offers to fill him in, but he decides to try to impress her. He explains his listening process, tells her what tweaks it needed to work more reliably, lets her listen to some of his notes, and then says:

“So, I think I already know what the news is. I’ll bet you five bucks I’ve got it right.”

“Huh,” she says. “Okay, deal. Start talking.”

He tells her what he came up with. Their country’s top official died of the flu three days after the accident, and was replaced by their second-in-command, as per the country’s founding document. But the second-in-command was incompetent and hugely unpopular, and was assassinated six days later. Everyone expected another high-ranking official to take their place, but a loophole was discovered stipulating that, in light of some specific details of the assassination, a general election should be held instead. All the major political parties raced to find suitable candidates, and…

“Okay, stop,” says Daniella. “Not sure where this is coming from, but it’s way off the mark. Nothing like that happened. You owe me five bucks.”

Sylvanus slumps in his bed, obviously disappointed.

“Why did you even offer that bet?” she asks. “Surely you know telepathy doesn’t work?”

But Sylvanus glares at her and snaps, “Well, I had nothing else to go on!”

20 comments


comment by Dagon · 2020-07-02T22:02:05.409Z

This allegory misses me by far enough that I have no clue what it's intended to demonstrate.

Replies from: Evan Rysdam, areiamus
comment by Sunny from QAD (Evan Rysdam) · 2020-07-03T03:19:54.543Z

Thanks for the feedback. It seems at least a handful of people are in the same boat, so I might try to re-work this in the future.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-07-03T16:29:46.417Z

Even if my guess is wrong (see other comment), I think this story works well as it is. It has something of the spirit of Mullah Nasreddin.

comment by areiamus · 2020-07-03T00:56:54.531Z

Agree

comment by Richard_Kennaway · 2020-07-03T15:41:50.750Z

Is this about recent demos of Hollywood-level image enhancement, and how they're not discovering what's in the image, but making stuff up that's consistent with it? And similar demos with GPT-3, that one might call "text enhancement"?

Replies from: Richard_Kennaway, Evan Rysdam
comment by Richard_Kennaway · 2020-07-03T16:13:26.184Z

I wonder when someone investigating a crime will try feeding all the evidence to something like GPT-3 and asking it to continue the sentence "Therefore the guilty person is..." Then they present this as evidence in court.

comment by Sunny from QAD (Evan Rysdam) · 2020-07-03T16:17:03.216Z

I wasn't aware of the image enhancement stuff, but it sounds different from what I'm getting at. If I had to write a moral for this story, I would say "Just because X is all the information you have to go on doesn't mean that X is enough information to give a high-quality answer to the problem at hand".

One place where I feel like I see people making this mistake is with the problem of induction. There are those who say "well, if you take the problem of induction too seriously, you can't know for sure that the sun will rise tomorrow!" and conclude that there must be an issue with the problem of induction, rather than wondering whether they really might not know for sure that the sun will rise tomorrow.

I (believe that I) saw Scott Alexander make this sort of mistake in one of his recent posts, but I can't go check because... well, the blog doesn't exist at the moment. Actually, I heard it through the podcast, which is still available, so I might just listen back to the recent episodes and see if I (1) find the snippet I'm thinking of and (2) still think it's an instance of this mistake. If condition 1 is met, I'll come back and edit in a report of condition 2.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-07-03T16:45:36.850Z

I think that's what I had in mind. One of the "image enhancement" demos takes a heavily pixelated face and gives a high quality image of a face — which may look little like the real face. Another takes the top half of a picture and fills in the bottom half. In both cases it's just making up something which may be plausible given the input, but no more plausible than any of countless possible extrapolations.

comment by waveBidder · 2020-07-15T07:56:46.437Z

Often the issue is that what you're trying to predict is sufficiently important that you need to assume *something*, even if the tools you have available are insufficient. Existential risks generally fall into this category. Replace the news with an upcoming cancer diagnosis, and telepathy with paying very careful attention to the affected organ, and whether Sylvanus is being an idiot becomes much less clear.


On the other hand, if someone is taking even odds on an extremely specific series of events, yeah, they're kind of dumb. And I wouldn't be surprised to find pundits doing this.

Replies from: Evan Rysdam, Evan Rysdam
comment by Sunny from QAD (Evan Rysdam) · 2020-07-15T20:03:31.234Z

As a side note, I wonder if I should have had him bet on a less specific series of events. The way the story currently reads, it almost sounds like I'm just rehashing the "burdensome details" sequence, but what I was really trying to call out was the fairly specific fallacy of "X is all the information I have access to, therefore X is enough information to make the decision".

Overall I wish I had put more thought into this story. I did let it simmer in my mind for a few days after writing it but before posting it, but the decision to finally publish was kind of impulsive, and I didn't try very hard to determine whether it was comprehensible before doing so. Oops! I've updated towards "I need to do more work to make my writing clear".

Replies from: waveBidder
comment by waveBidder · 2020-07-16T06:24:39.590Z

Writing well is really hard. Thanks for sharing.

comment by Sunny from QAD (Evan Rysdam) · 2020-07-15T19:56:52.247Z

In the cancer diagnosis example, part of the reason that I would think it's less clear that Sylvanus is being an idiot is that you really might be able to get some evidence about the presence of cancer by paying close attention to the affected organ.

I think I see where you're coming from, though. The importance of a cancer diagnosis (compared to a news addiction) does mean that trying out various apparently dumb ways of getting at the truth becomes a lot more sane. But I don't think I understand what you're saying in the first sentence. What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?

(For context, my knowledge about the various existential risks humanity might face is pretty shallow, but I'm on board with the idea that they can exist and are important to think about and act upon.)

Replies from: waveBidder
comment by waveBidder · 2020-07-16T06:38:03.053Z

> What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?

I guess I opted for too much brevity. By their very nature, we don't* have any examples of existential threats actually happening, so we have to rely very heavily on counterfactuals, which aren't the most reliable kind of reasoning. How can we reason about what conditions lead up to a nuclear war, for example? We have no data about what led up to one in the past, so we have to rely on abstractions like game theory and reasoning about how close to nuclear war we were in the past. But we need to develop some sort of policy to make sure it doesn't kill us all either way.

*at a global scale at least. There are civilizations which completely died off (Rapa Nui is an example), but we have few of these, and they're only vaguely relevant, even as far as climate change goes.

Replies from: Evan Rysdam
comment by Sunny from QAD (Evan Rysdam) · 2020-07-16T11:36:52.775Z

Ah, I see what you're saying now. So it is analogous to the cancer example: higher stakes make less-likely-to-succeed-efforts more worth doing. (When compared with lower stakes, not when compared with efforts more likely to succeed, of course.) That makes sense.

comment by [deleted] · 2020-07-03T02:43:14.386Z

The certificate for questionsanddaylight is invalid (NET::ERR_CERT_AUTHORITY_INVALID). I don't know enough about net security to diagnose the problem, but I thought you should know.

Replies from: Evan Rysdam
comment by Sunny from QAD (Evan Rysdam) · 2020-07-03T03:18:28.449Z

Thank you, that's a consequence of me using a self-signed certificate. I've changed the link from https to http, which sidesteps the issue.
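
For the curious: a self-signed certificate is one whose issuer and subject are identical, which is one reason browsers flag it with NET::ERR_CERT_AUTHORITY_INVALID. Below is a minimal sketch, in Python and assuming the third-party `cryptography` package is installed, of one way to fetch a server's certificate and check whether it looks self-signed:

```python
import socket
import ssl

from cryptography import x509  # third-party: pip install cryptography


def looks_self_signed(host: str, port: int = 443) -> bool:
    """Fetch the server's TLS certificate and report whether its issuer
    equals its subject, the hallmark of a self-signed certificate."""
    # Disable verification so we can inspect the certificate even though
    # the local trust store doesn't accept it.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes
    cert = x509.load_der_x509_certificate(der)
    return cert.issuer == cert.subject


# Example: looks_self_signed("questionsanddaylight.com") would return True
# while the site is serving a self-signed certificate.
```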

Replies from: Richard_Kennaway, None
comment by Richard_Kennaway · 2020-07-03T16:15:03.882Z

The internal links on your web site are having the same problem.

Replies from: Evan Rysdam
comment by Sunny from QAD (Evan Rysdam) · 2020-07-03T16:19:24.374Z

Yeah, that's almost certainly because they are all https links as well. In another branch of this comment thread Raven has pointed me to a place where I can get an https certificate for free, so I should be able to fix this soonish. Thanks!

comment by [deleted] · 2020-07-03T14:00:31.430Z

That works, but in case you aren't aware, you don't have to pay for a certificate: LetsEncrypt offers them for free.

Replies from: Evan Rysdam
comment by Sunny from QAD (Evan Rysdam) · 2020-07-03T16:00:16.724Z

Ooh, I'll check that out. Thanks for the tip!