Inverse Problems In Everyday Life

post by silentbob · 2024-10-15T11:42:30.276Z

Contents

  Bayesian Reasoning
  Human Communication
  Dating Profiles
  Being Asked for Your ID When Buying Alcohol
  “You Will Recognize It When You See It”
  AI Image Generation
  Signs and Interfaces
  Being Suspicious of Strangers
  Other Examples
  Closing Thoughts

There’s a class of problems broadly known as inverse problems. Wikipedia explains them as follows:

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them. [...] It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.

This post is about the many examples we run into in everyday life where there's a difference between forward reasoning and inverse reasoning, and where neglecting that difference can lead to problems. My aim here is mostly to provide enough examples to make the distinction more intuitive and recognizable. Ideally, we should be able to quickly notice situations where inverse reasoning is the suitable tool, so that we can use it and come to better conclusions, or at the very least apply an appropriate degree of uncertainty to our expectations and interpretations.

Bayesian Reasoning

Bayesian reasoning is one typical example of an inverse problem. While causal effects point in the "forward" direction, Bayesian reasoning works in the opposite, diagnostic direction: we look at "evidence" (the outcomes of some causal process) and try to infer the possible causes.
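Written out, the forward direction is the likelihood of the evidence given a cause, while the inverse, diagnostic direction is the posterior that Bayes' theorem gives us:

$$P(\text{cause} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{cause}) \, P(\text{cause})}{P(\text{evidence})}$$

The term on the right, $P(\text{evidence} \mid \text{cause})$, is the forward quantity that is usually easy to reason about; the left-hand side is the inverse quantity we actually want.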

If you’re unaware that Bayesian reasoning is an “inverse” process, you might get the idea that you can also try Bayesian updating on forecasting questions – but eventually you realize that this gets quite awkward. For example: how likely are we to observe the release of GPT-O1, in a world where artificial superintelligence eventually ends up disempowering humanity? This is not a natural fit at all, and understanding the inverse nature of Bayesian reasoning should make that clear.

Human Communication

It is no secret that communication between humans can be tricky: you have some thought or feeling or idea in your mind, and you try to encode it into words, which then make their way to your communication partner, who tries to decode these words back into the underlying idea. This is often a very lossy process. And while the general problem of communication being difficult goes way beyond the scope of this post, there is one particularly relevant layer to it: if both communication partners naively treat their communication as a forward problem, this can lead to avoidable misunderstandings. If it's important to them to understand each other precisely, it might make sense for both sides to invest some mental effort and take the inverse direction into account.

To name one trivial yet real example of the above that I recently ran into: I was planning a day trip with a friend, and the evening before, I was checking the weather report. I made a comment along the lines of "oh, the weather will be great tomorrow!" As it turned out, my friend interpreted this as "summer will return and I can wear shorts and a t-shirt". Actually though, the report predicted a sunny day at 14 °C (57 °F), which of course is a bit too chilly for that kind of clothing.

My mistake was not realizing that "the weather will be great", while an accurate summary of my feeling about the forecast, could be interpreted in all kinds of ways and might affect my friend's clothing decisions. My friend's mistake was not realizing that the range of forecasts that would cause me to exclaim "the weather will be great" was much larger than just "summer weather".

Dating Profiles

When you get involved in online dating, you are exposed to endless profiles of people who may or may not be interesting to you. When looking at any given profile, you basically have to construct an internal image of that other person in your head in order to predict whether you want to meet them or not. You make that judgement based on the profile pictures as well as any writing and data they share about themselves. The forward way of approaching this is to take everything at face value and basically assume that the person looks like they do in the pictures and is as friendly/funny/thoughtful/whatever as the profile suggests. With this approach, however, many dates will be disappointing – people tend to choose their very best pictures, meaning they will look less attractive in real life than in their profile, and they have probably tried several iterations of their self-description, asked friends for feedback, and so on. A more accurate approach would be to invert the process and think about the whole, wide set of potential people who could end up creating a dating profile that looks like the one you're seeing. In particular, this means that most of the positive attributes on display will be somewhat dialed down in reality.

This is kind of obvious, and I'm sure nobody reading this right now will consider it a surprising new insight. Yet I have a strong suspicion that it's still very common for people on first dates to be negatively surprised by the realization that the other person is not quite what they expected.
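To make the selection effect concrete, here is a minimal simulation sketch with entirely made-up numbers (nothing here is from real dating data): if everyone posts the best of several photos, the posted photo systematically overstates what the person looks like on an average day, and inverse reasoning is just the act of correcting for that.

```python
# Toy sketch: each person "posts" the best of several photos of themselves.
# Forward reasoning takes the posted photo at face value; inverse reasoning asks
# what typical-day appearance would produce such a best-of-N photo. All numbers are made up.

import random
random.seed(0)

N_PHOTOS = 8          # assumed number of photos each person picks their best from
people = 10_000

gap = []
for _ in range(people):
    typical = random.gauss(5.0, 1.0)                        # "true" average-day attractiveness
    photos = [typical + random.gauss(0, 1.0) for _ in range(N_PHOTOS)]
    posted = max(photos)                                     # they post their best photo
    gap.append(posted - typical)

print(f"posted photo overstates the average day by ~{sum(gap)/len(gap):.2f} points")
# With these made-up numbers, the best of 8 photos is roughly 1.4 points "better"
# than the person on an average day – which is exactly the correction inverse reasoning makes.
```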

Being Asked for Your ID When Buying Alcohol

When I was in my early 20s, it happened quite a lot that friends would complain about something along the lines of "I went to buy some beer yesterday and the clerk asked me for my ID! How stupid, I really don't look like I'm younger than 16, do I?" (Where I grew up, you can buy beer at 16, and IDs tend to be checked only for young people who are not obviously old enough.)

I acknowledge that such complaints may not meet the highest epistemic standards to begin with, but this still serves as an example of failing to treat the situation as an inverse problem: the question is not "do you look as if you were 15?" The question is "are there plausibly any 15-year-olds who look as old as you do?" And the second question is a rather different one – one that can well be answered "yes" even when you very much look like a regular 20-year-old.
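To see why, here is a toy Bayes calculation with entirely made-up numbers (they are not meant to reflect real demographics). Even if a customer looks like a typical 20-year-old, the clerk's question is how likely such a customer is to be underage, given that some 15-year-olds can look that old too:

```python
# Toy Bayes calculation for the ID check: P(underage | looks about 20) with made-up numbers.
# "Looks about 20" is treated as a single observable; the point is only that the answer
# depends on how many 15-year-olds can look that old, not on whether *you* look 15.

p_underage = 0.15                 # assumed share of beer-buying customers who are under 16
p_looks_20_if_underage = 0.10     # assumed: 1 in 10 underage customers can pass for ~20
p_looks_20_if_of_age = 0.60       # assumed: most of-age customers look about 20 or older

p_looks_20 = (p_looks_20_if_underage * p_underage
              + p_looks_20_if_of_age * (1 - p_underage))

p_underage_given_looks_20 = p_looks_20_if_underage * p_underage / p_looks_20
print(f"P(underage | looks about 20) ≈ {p_underage_given_looks_20:.0%}")
# ≈ 3% with these assumptions – small, but possibly enough for a clerk who risks a fine to just ask.
```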

“You Will Recognize It When You See It”

This is probably mostly a movie trope, where the phrase is occasionally used to set up an entertaining scene transition that introduces some really fancy/funny/unusual-looking person or place. And it always annoys me, because it's a classic forward/inverse confusion. Let's say you go to some party, looking to meet a specific person you haven't seen before. So you ask a friend, "how will I recognize her?" And your friend responds, "Oh, don't worry – you will know when you see her!" Even if this is indeed true, and the effect of seeing that person would be that you get a feeling of "knowing", until that happens you live through confusion and uncertainty, with each person you encounter making you wonder whether there's anything about them that would make you "know" it's them. And who knows, maybe there's some other person running around who triggers a similar reaction, making you mistake them for the one you're looking for.

I guess the main issue here is that implication is not symmetric: "you're looking at person X ⇒ you know it's person X" does not equal (or imply) the converse, "you think you know you're looking at person X ⇒ you're actually looking at person X". What it comes down to is that "Don't worry, you'll know!" is a stupid thing to say in almost all cases, because it answers the wrong question.

AI Image Generation

AI image generators such as DALLE or Midjourney are usually trained on labeled images. That is, there was some image (or billions of them), and that image was at some point labeled by a human. The "forward" problem here is the original labeling. So what image generators do is solve an inverse problem: given some prompt by a human, they try to generate an image that, if it were labeled, would match that prompt. However, as a human using an image generator, you're effectively solving a double inverse problem: you turn things around again and try to come up with the kind of prompt that makes the image generator produce an image close to the one in your head. This approach, once you have some experience with a given image generator, can be much more fruitful than simply describing the image in your head in forward fashion[1].
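In code, that double inverse problem amounts to a search: try candidate prompts, run them forward through the generator, and keep the one whose output lands closest to the image in your head. The sketch below is purely illustrative – `generate_image` and `similarity_to_target` are hypothetical placeholders, not any real API:

```python
# Treating prompting as an inverse problem: search over candidate prompts for the one
# whose *forward* result (the generated image) best matches the target in your head.
# Both callables are hypothetical stand-ins, not part of any real image-generation library.

def best_prompt(candidate_prompts, generate_image, similarity_to_target):
    """Return the candidate prompt whose generated image scores highest against the target."""
    return max(candidate_prompts,
               key=lambda prompt: similarity_to_target(generate_image(prompt)))
```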

[Image: How DALLE interprets GPT-4's vision of what DALLE looks like.]

Signs and Interfaces

There's a whole field of study around interface design and design thinking. The reason this field exists probably mostly comes down to the fact that the "forward" way of designing a user interface – the developer starting from their own understanding of the system and mapping it onto a bunch of UI elements – tends not to work very well and often leaves users confused. Instead, it's important to consider: what does the user know and expect? And how can the interface not only convey all the necessary information, but do so in a way that leaves the user feeling certain they understand the state and meaning of things correctly?

Imagine a simple sign on a train or airplane showing whether the toilet is currently occupied. It shows a symbol of a man and a woman next to an arrow pointing to where you find it. When occupied, the sign glows red; otherwise it glows white. Knowing all this, it seems perfectly reasonable and obvious. Consider, however, the situation of a traveler who hasn't paid much attention to the sign until they realize they need a toilet – they start looking around and don't have all that context about how the sign works. They just see the sign in its current state. Let's say it's glowing white. What does that mean? The person now doesn't know whether the toilet is occupied, or whether the sign even has the capability of reflecting that information. Which is rather unfortunate. So ideally, you would design the sign with this scenario in mind: how can a person seeing that sign for the first time immediately know (and feel certain about knowing) whether the toilet is occupied or not?
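As a minimal sketch of the difference (purely illustrative, not any real signage or UI code): the forward design encodes the state in a convention the traveler would first have to learn, while the inverse-aware design states its meaning explicitly.

```python
# The same sign designed "forward" vs. with the first-time viewer in mind.
# The forward version encodes state only as a color, which is ambiguous without prior context;
# the inverse-aware version states its meaning explicitly.

def render_sign_forward(occupied: bool) -> str:
    # Meaningful only if you already know the color convention.
    return "red light" if occupied else "white light"

def render_sign_for_first_time_viewer(occupied: bool) -> str:
    # Self-explanatory even if this is the first time you see the sign.
    return "TOILET OCCUPIED" if occupied else "TOILET VACANT"

print(render_sign_forward(False))                # "white light" – occupied? broken? decorative?
print(render_sign_for_first_time_viewer(False))  # "TOILET VACANT" – unambiguous at a glance
```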

Being Suspicious of Strangers

It occasionally happens in interactions between two strangers that one is suspicious of the other, and the second person then takes offense. Imagine Alice sitting on a park bench, browsing on her phone. Bob, a stranger, approaches and kindly asks whether he could make a call on her phone, as his battery has just died. Alice reacts uneasily to the request and hesitates. Bob then exclaims, "Oh, no! You think I want to steal your phone? Nooo, do I really look like I would do such a thing?" Bob here reasons in a forward-only way: looking at himself and his own character, he is offended that somebody would suspect him of stealing a phone. From Alice's perspective, however, it's perfectly reasonable to wonder whether the kind of person who wants to steal her phone might look and act exactly the way Bob does. Bob being offended in such a situation is not very productive, as Alice is not actually accusing the actual Bob of anything – she doesn't know him, after all – but is simply unsure who the real person behind the facade she's seeing might be.

[Image: a woman with a phone sitting on a bench, approached by a slightly shady-looking man.]
(I am not sure why DALLE decided to add that little heart. Maybe Bob wants to call his mother.)

Other Examples

I could go on and on and on about this, but this post is long enough and I'm sure you got the point three examples ago. Still, I'll at least mention one more area where this can be applied: interpreting online ratings and reviews[2].

Closing Thoughts

As with many of my posts, my aim here is not to argue that this particular concept is so important that it should always be at the forefront of our minds, dominating our thinking and decision-making. But I do think it is important to maintain some level of awareness about it. As I hopefully was able to show, situations where pure forward thinking is insufficient are quite common. And when they do occur, we're almost certainly better off recognizing that, so that we can make an informed decision about whether or not to put in that extra bit of effort to get a more accurate understanding of what's actually going on. Especially given that the "extra bit of effort" can mean as little as a 5-second check of which possible causes map to the observation in front of you. And if this awareness enables one to occasionally avoid false expectations, hurt feelings or general misunderstandings, then it's probably worth it.

  1. ^

    Even though in theory, for a "perfect" AI image generator, just providing the plain forward description of your desired image would probably indeed be the best strategy.

  2. ^

    I once noticed that a Burger King in a city I was traveling to had a Google rating of 1.2 stars out of 5. Forward reasoning would have made me expect a truly horrific restaurant, but inverse reasoning quickly led to the conclusion that it surely couldn't be remotely that bad (it wasn't – although I agree the fries were a little below average. 3 stars!).

Comments

comment by silentbob · 2024-10-15T11:46:14.054Z

Bonus Example: The Game Codenames

There's a nice board/online game called Codenames. The basic idea: there are two teams, each split into two roles, the spymaster and the operatives. All players see an array of 25 cards with a single word on each of them. Everybody sees the words, but only the spymasters see the color of these cards – blue or red for the two teams, or white for neutral. The teams then take turns. Each turn, the spymaster tries to come up with a single, freely chosen word that allows their operatives to select, by association with that word, as many cards of their team's color as possible. The spymaster communicates that word, along with the number of cards it is meant to point at, to the rest of their team. The operatives then discuss amongst each other which cards are most likely to fit the provided word[1].

I've played this game with a number of people and noticed that many seem to play it in "forward" mode: spymasters often just try to find some plausible word that matches some of their team's cards, almost as if the problem they were solving was "if somebody saw what I see, they should agree this word makes sense". The better question would be: which word, if my team heard it, would make them choose exactly the cards of our team's color? The operatives, on the other hand, usually just ask which cards fit the given word best. Almost nobody asks themselves: if the selection of cards I'm about to pick really were the one the spymaster had in mind, would they have picked the word that they did?

To name a concrete example of the latter point, let's say the spymaster said the word "transportation" and the number 2, so you know you're looking for exactly two cards whose words relate to transportation. After looking at all available cards, there are three candidates: "wheel", "windshield" and "boat". Forward reasoning would allow basically any 2 out of these 3 cards, so you would basically have to guess. With inverse reasoning, however, you would at least notice that if "wheel" and "windshield" were the two words the spymaster was hinting at, they would almost certainly have said "car" rather than "transportation". But as they did not, in fact, choose "car", you can be fairly confident that "boat" should be among your selection.
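Here is a toy sketch of that inverse step in code – the association scores and the spymaster model are made-up illustrative assumptions, not part of the actual game:

```python
# Toy sketch of inverse reasoning in Codenames: instead of asking "which cards fit the clue?",
# ask "for which pair of cards would the spymaster most likely have chosen the clue we heard?".
# Association strengths and the clue-choice model are invented for illustration.

from itertools import combinations

# Hypothetical strength of association clue -> card, as the spymaster might judge it.
association = {
    ("car", "wheel"): 0.9, ("car", "windshield"): 0.9, ("car", "boat"): 0.1,
    ("transportation", "wheel"): 0.5, ("transportation", "windshield"): 0.4,
    ("transportation", "boat"): 0.8,
}

candidate_clues = ["car", "transportation"]
team_cards = ["wheel", "windshield", "boat"]

def best_clue_for(pair):
    """Which clue would the spymaster pick if this pair were the intended target?"""
    return max(candidate_clues, key=lambda clue: min(association[(clue, c)] for c in pair))

heard_clue = "transportation"

# Inverse reasoning: keep only the pairs for which the heard clue is the clue
# the spymaster would actually have chosen.
consistent_pairs = [pair for pair in combinations(team_cards, 2)
                    if best_clue_for(pair) == heard_clue]

print(consistent_pairs)  # [('wheel', 'boat'), ('windshield', 'boat')] – "boat" is always included
```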

Of course, one explanation for all of this may be that Codenames is, after all, just a game, and doing all this inverse reasoning is a bit effortful. Still, it made me realize how rarely people naturally go into "inverse mode", even in such a toy setting where it would be comparatively clean and easy to apply.

  1. ^

    I guess explaining the rules of a game is another problem that can be approached in forward or inverse ways. The forward way would just be to explain the rules in whichever way seems reasonable to you as someone familiar with the game. Whereas the inverse way would be to think about how you can best explain things in a way such that somebody who has no clue about the game will quickly get an idea of what’s going on. I certainly tried to do the latter, but, ehhh, who knows if I succeeded.