On continuous decision boundaries
post by eeegnu · 2022-03-11T03:36:01.860Z · 2 comments
I've always had a fascination with decisions that hinge on accepting or declining some continuous value. For instance, you might receive an unexpected email offering to purchase your domain name, e.g. eeegnu.com, and depending on a huge number of factors (how attached am I to the domain name, is the offer enticing enough, do I want to go through the hassle, etc.) you'll arrive at a decision. Intuitively, we'd think there's some hypothetical dollar value X where every offer below X yields a No and every offer above X yields a Yes, or, framing this probabilistically, we'd expect a plot of the offer amount from 0 to infinity against the probability of choosing yes to be non-decreasing. This might look something like a sigmoid, centered at the point where you're 50/50 on whether to accept. Instead, I argue that the actual plot would look a lot more chaotic, with various regions and points that are wildly different from neighboring points.
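As a rough illustration of that intuitive picture, here's a minimal sketch in Python of the naive monotone model: a logistic curve in log-dollars. The 50/50 point of $10k and the width parameter are made up purely for illustration.

```python
import numpy as np

def p_accept_naive(x, x_50=10_000, s=1.0):
    """Naive monotone model: probability of accepting an offer of $x,
    modeled as a logistic curve in log-dollars centered at the 50/50
    point x_50. Both x_50 and s are made-up illustrative parameters."""
    return 1.0 / (1.0 + np.exp(-(np.log10(x) - np.log10(x_50)) / s))

offers = np.logspace(2, 9, 8)  # $100 up to $1B
for x, p in zip(offers, p_accept_naive(offers)):
    print(f"${x:>15,.0f}: p(accept) = {p:.2f}")
```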
To really discuss this, it'll help to focus on a particular example that controls for some unknown factors on the part of the person making the decision. Assume they've received the offer for their domain name from a futuristic blockchain trading post as a pre-signed contract: by clicking Yes, they immediately receive $X worth of crypto in their wallet (a currency they're happy to keep their money in), and the domain name transfers ownership to some unknown figure. Assume also that they were not planning on selling the domain name, and that the offer appeared rather mysteriously.
Now we can think about different regions on the graph. The most natural place to start is around the neighborhood of our decision maker's internal valuation of their domain name. This might be ~$100, given the amount they've already spent on yearly renewal fees, but for someone who received an offer out of the blue, the probability of accepting an offer at this level will be quite low, and offers below it can be expected to have an acceptance probability bounded above by it. Another natural region is where the offer becomes enticing (assuming it exists): the smallest X where the probability reaches 0.5. This might be ~$10k, a couple of orders of magnitude above the internal valuation, and not something too unusual in the space of domain resale. Ignoring human psychology's preference for some numbers over others, I expect we'd see a fairly steady increase in the probability of acceptance up to another order of magnitude (~$100k), where things then start to fall.
At a certain point, the most natural explanation for why someone wants your domain name begins to change. For reasonable sums, you might assume it's some startup that really likes the name. For significant amounts, it might be something like a large company with an unlimited budget rebranding and buying up everything related to its new domain name (like a .so domain selling for $149k). At this point your uncertainty is high: it's very possible that an even higher, life-changing offer is within reach. And so we start to see a decline in the probability of acceptance.
Going another two orders of magnitude up, an offer of $10M is enough for lifelong financial independence. This is near the upper bound of the highest domain sales ever. The offer might seem very strange and even suspicious, but the crypto trading post is tried and tested; there's no risk that you won't receive the stated amount. The probability goes back up, as the marginal lifestyle gains from holding out for a better offer diminish.
Now we go another couple of orders of magnitude higher, to offers in the $1B range. Things here are too good to possibly be true. All sorts of potential explanations for why this offer emerged begin to compete for being most likely. It's possible some super-rich entity made a typo. It's possible there's some elaborate trick going on. It's even possible that the crypto trading post isn't 100% guaranteed to work (here it is, but they don't know that). Surely these are more likely than someone genuinely valuing your site so highly. The probability of acceptance goes down from here. As the numbers reach values beyond global GDP, you likely just conclude it's a glitch akin to an int overflow, leading to a further sharp drop in the probability of acceptance.
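To make the shape of this reasoning a bit more concrete, here's a minimal sketch where the acceptance probability is a base willingness-to-sell curve scaled by a mixture over competing explanations for why the offer exists. All the hypotheses, weights, and numbers below are made up for illustration; the point is only that the competition between explanations makes the resulting curve non-monotonic.

```python
import numpy as np

# Made-up explanations for why an offer of $x might appear, with toy
# weights as a function of log10(x). Everything here is hypothetical,
# chosen only to show how inference about the offer's origin can bend
# the acceptance curve.
def hypothesis_weights(x):
    lx = np.log10(x)
    raw = {
        "startup that likes the name": np.exp(-(lx - 4.0) ** 2),  # plausible near $10k
        "big company rebranding":      np.exp(-(lx - 5.5) ** 2),  # plausible near ~$300k
        "typo / glitch / scam":        np.exp(2.0 * (lx - 7.0)),  # dominates for huge offers
    }
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}

# Toy probability of accepting, conditional on each explanation being true.
P_ACCEPT_GIVEN = {
    "startup that likes the name": 0.6,   # reasonable buyer: take the money
    "big company rebranding":      0.4,   # deep pockets: maybe hold out for more
    "typo / glitch / scam":        0.05,  # too good to be true: mostly decline
}

def p_accept(x):
    # Base willingness to sell at price x (a rising logistic in log-dollars,
    # 50/50 around $10k), scaled by the mixture over explanations.
    base = 1.0 / (1.0 + np.exp(-(np.log10(x) - 4.0)))
    return base * sum(w * P_ACCEPT_GIVEN[h] for h, w in hypothesis_weights(x).items())

for x in [1e2, 1e4, 1e5, 1e7, 1e9]:
    print(f"${x:>15,.0f}: p(accept) = {p_accept(x):.2f}")
```

A fuller model would need more hypotheses (e.g. a "rich buyer, financial independence on the table" explanation) to reproduce the bump around $10M, and person-specific quirks to capture the sharp discontinuities discussed next.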
Another interesting case is offers that just happen to hit numbers with special meaning to you. For instance, an offer of $9,534,954.21 that just happens to match the passphrase for the private key of the crypto wallet the offer is being sent to, leaving you worried that this could not possibly be a coincidence. So for any particular person's graph, there may be sharp discontinuities (or the discrete equivalent).
My takeaway from this is that, counterintuitively, if you had infinite money and wanted something from someone, you may be best off making an offer at the upper bound of what's plausibly reasonable, rather than something absurdly high.
I'm new to the ideas of rationality, so I'm not sure how consistent this thought experiment is with them, or whether it's just obvious (or wrong) that this graph would behave so erratically.
2 comments
comment by Alex Vermillion (tomcatfish) · 2022-03-12T23:48:27.343Z
I enjoyed this. I would have liked (or will like? up to you!) an attempt to take this idea and make it into an overarching abstract concept. Something like "Your internal preference curve may rise over the whole range of payoffs, but the chances of some caveat applying increase too, which may make your observed preferences rather different".
I always think about this with jobs. See a job paying $X/hour, and you think it's fine. Now, if you see that job paying $30*X/hour, you think "What is going on here!??"
In this case, I would prefer more money for the same work, but I suspect that I'm being lied to: that I will have to do more work or otherwise go against my values. In this way, we show that I don't have a "work-vs-reward" graph in my head, but a "costs-vs-reward" graph, where a high enough reward-per-work makes me suspect I'm incurring additional costs.
comment by eeegnu · 2022-03-13T01:11:41.642Z
I'll probably end up thinking about this in the background for a while, and jotting down any interesting cases in case they can accumulate into a nice generalizable thing (or maybe I'll stumble upon someone else who's made such an analysis before).
Where I see your example sharing a common idea is that one party makes what appears to be a suboptimal decision (e.g. if they just wanted to attract top talent, a salary at the top of the usual range would suffice), which leaves the other party to infer what true reasoning would make that decision optimal (i.e. it assumes the other party is rational).
Another case I've seen recently was in a thread discussing the non-shopper problem, where one comment noted:
Another possible factor is that when people are unable to evaluate quality directly, they use price as a proxy.
You don't want a crappy realtor who gives you bad information. But you can't tell which realtors are good and which aren't, because you don't know enough about real estate. So, you think, "You get what you pay for" and go with one that charges more, figuring that the higher price corresponds with higher quality.
Here, too, an exchange happens, except this time it's your money for something you have little domain knowledge about. I imagine the peak probability for someone concerned about quality, but without a good way to assess it, would fall somewhere around a standard deviation above the mean price.