Adjectives from the Future: The Dangers of Result-based Descriptions
post by Pradeep_Kumar · 2019-08-11
Less Skeptical
Suppose your friend tells you he's on a weight-loss program. What do you think will happen in three months if he stays on it? Will he lose weight?
If you're like me, you're thinking, "Of course. He is on a weight-loss program, isn't he? So, ipso facto, he is likely to lose weight."
Does there seem to be anything fishy about that chain of reasoning?
We usually describe the current features of a thing and predict something about the future. For example, we might say "I'm running for half an hour each day" and predict that we will lose a certain number of pounds by the end of the month. But your friend above skipped the description and talked about the prediction as if it were visible right now: "I'm on a weight-loss program".
You weren't told the features of the activity (running for half an hour) or even a name (CrossFit program). If you had been told either, you could have judged it based on your past knowledge of those features or names. Running regularly does help you lose weight and so does CrossFit. But, here, you were told just the prediction itself. This means you can't predict anything for sure. If his program involves running, he will lose weight; if it involves eating large cheese pizzas, he won't. You don't know which it is.
Yet, it sounded quite convincing! Even if you objected that your friend probably won't stick to the exercise regimen, you, like me, probably bought into the premise that the program really was a weight-loss program.
Hypothesis: If you are given an adjective that describes a future event and are not given any currently-visible features, then you're more likely to accept that that future event will occur than when you can see some features.
In other words, result-based descriptions make you less skeptical.
A more serious example is when someone mentions a drug-prevention program. We might assume that it will prevent illegal drugs from being bought and sold. After all, it must have been designed for that purpose. But the result depends on what the program actually does. Running ads saying "Don't do drugs!" may not achieve much, whereas inspecting trucks at border checkpoints may. To judge whether the program will be successful, you have to inspect its actual features. But "drug-prevention program" sounded convincing, right? Notice how the adjective "drug-prevention" describes a future event - it says that drugs will be prevented in the future. Now, since you can't look into the future and tell whether drugs were in fact prevented, you shouldn't accept such an adjective. And since you're not told anything else about the program, you really can't say anything either way. And yet it sounds so convincing!
Similarly, take environment-protection laws. Again, surely they must have been designed for the purpose of protecting the environment. Don't you feel like they will protect the environment? Contrast that to saying "a law that raises the tax rate on fossil fuels". Now this may or may not protect the environment in terms of air pollution, but at least you don't jump to that conclusion right away.
If this hypothesis is true, it means that the person who chooses the adjective can mislead you (and himself) in the direction he desires by describing the thing in terms of the result and by omitting any features.[1] Suppose someone tells you this is an earthquake-resistant building. Do you believe that it will withstand earthquakes better than ordinary buildings? I do. He may have described the thing solely in terms of the result, but it still sounded convincing, right? Contrast that to "this building is made out of steel-reinforced concrete". Now, you have one feature of the building. If you had to predict whether it would withstand earthquakes better than ordinary buildings, you would lean towards yes because reinforced concrete has worked in the past. But you wouldn't always jump to the conclusion that it was "earthquake-resistant". If I said "this building is made out of green-colored brick", you would be skeptical about its ability to withstand earthquakes better because you haven't heard anything about brick color being relevant.
The above illusion is compounded by the fact that you won't get feedback from others about your mistaken ideas if you use result-based descriptions[2]. Suppose your weight-loss friend assumed that using a telemarketed ab machine would help him get abs (it's right there in the name, I tell you!). Even then, he wouldn't have been lost if he had told you his concrete plan. You would have corrected his belief as soon as you stopped laughing at him. But since he told you that he's using a weight-loss program, you couldn't really correct him. He might go on behaving as if that silly "ab machine" is going to get him six-pack abs by summer.
Why do we even accept descriptions that contain nothing but a claim about the future?
For one, it matters that no features are described. If I said that I was drinking lemonade, you wouldn't really predict that I would lose weight. You would ask me what evidence I have for lemonade causing weight-loss. But what if I said I was having a weight-loss drink? You might be less skeptical as long as you didn't look at my glass. Who knows; maybe there are drinks out there that cause weight-loss.
Another relevant factor is the speaker's credibility: how often we think the speaker sees the underlying features along with the eventual result. We accept an expert's result-based description because we trust that he knows the features that lead to the result and is just omitting them when talking to a layman. When a doctor says that these are "sleeping pills", we are more likely to accept it than when a schoolboy does - the doctor knows that the pills contain benzodiazepine, which usually works. When a politician calls something a "drug-prevention program", we are more likely to accept it than when a housewife says it - the politician knows that border-checks (or whatever) have worked in the past. However, this might be misleading when the expert is dealing with something novel, such as a brand-new pill formula or a brand-new approach to drug regulations, since he is unlikely to have seen the result of those features (or may not care very much about deceiving the voters).
Finally, such descriptions might be fine when talking about the past. Saying "I went on a weight-loss program and lost 50 pounds" is a bit redundant, but harmless. You actually observe the result there, so you can decide based on the result how skeptical to be. You won't blindly jump to the conclusion that it will work, the way you do when someone says "I'm on a weight-loss program right now".
So, we should avoid describing something only in terms of the result and should describe it using features instead. And if anyone tries to bias our prediction by sneaking in an adjective from the future, we should stop and ask for the features.
Examples of Adjectives from the Future
Here are some result-based descriptions that I collected from news reports and books as I was testing the above hypothesis. All of them talk about future results, completely omit current features, and seem to make us less skeptical about the plan's success. Did you fall for any of them?
Rehabilitation program -- Don't you feel like the drug addict is likely to get better after going to the rehab program? It's right there in the name! Notice that there are no features mentioned, just a description of the future as though it were the present. Contrast that to "not having access to drugs for 30 days, listening to lectures, and talking about your experiences". This doesn't make us jump to the conclusion that the addict will get better. We might even be skeptical about the power of lectures to fight off the temptation of drugs. For a real-world contrast, think of "the 12-step program". It too tries to overcome addiction, but it is described in terms of the features (12 steps), not the desired result (overcoming addiction). In fact, it sounds like work, which it probably is. A rehabilitation program doesn't quite sound like that.
Peace process -- Feels like it is likely to lead to peace. No features; only desired results. Contrast that to "shaking hands and signing agreements in front of the world press". We may be more skeptical that that will prevent future wars. But in the former case, we would be insulated from feedback because we keep talking about the "peace process" instead of the "hand-shaking and agreement-signing".
Wait. Aren't there people who distrust the peace process and talk about its possible failure? I suspect that they do so after mentioning features of the process. They might say that this dictator has reneged on his promises in the past and thus should not be trusted right now. It would sound ludicrous if they expressed skepticism without any features. People would ask, "What do you mean this peace process may not bring about peace? It's a peace process."
Dangerous driving -- Doesn't it seem likely that the driver is going to get into trouble? No features; no feedback; only the future result - danger. Contrast that to: one-handed driving, texting while driving, or overtaking cars by switching lanes. We are a bit more skeptical that it will cause danger.
Cost-cutting measures -- Need I say anything? Of course the cost-cutting measure is going to cut costs. Why else would they have called it a cost-cutting measure? Contrast that to "switching to online advertising" or "encouraging working from home a few days a week", which we are more skeptical about, since they may or may not bring down ultimate costs.
Healthy morning drink -- No features, but it sounds like it will lead to health. Even the "morning" part is not a description of a feature of the object. It just talks about the time when people will drink it. Contrast that to a drink containing 15g of protein and other stuff, which may or may not lead to more "health".
Recidivism-reduction classes for ex-convicts, i.e., making sure they don't go back to jail after getting out -- Again, we feel like these classes will make them less likely to go back in. The classes reduce recidivism, after all. No features mentioned; description in terms of the future result (recidivism-reduction); insulated from feedback. Contrast that to "lectures and reading books and stuff". We might be much more skeptical.
You can find any number of examples like these: national security bill vs a bill that increases the number of fighter jets; sufficiently well-funded program vs same budget as last year (which may not be enough this year); a Sudoku-solving program vs program that solved a set of easy and medium Sudoku puzzles.
How does this apply to LessWrong?
Now, let's look at some descriptions that may be important to us as LessWrong readers.
Effective altruism -- Doesn't effective altruism feel like it will be effective? And altruistic? I feel inclined to believe so. But the name talks about the future results and doesn't mention any current features. Contrast that to "cash transfers" or even "evidence-based donations" and "evidence-based job changes", which talk about currently-available evidence, not future results. We may be more skeptical that such cash transfers or donations will be effective or even altruistic. "Cause prioritization" talks about a feature of the process right now. We can see a clear gap between the causes we prioritize and their eventual effectiveness. That gap doesn't even seem to exist when we talk about effective altruism.
When I hear "Against Malaria Foundation", I feel like it is likely to strike a blow against malaria. All it needs is the money. But if I were to hear "Mosquito Net Distributors", I would ask quite a few questions about the effectiveness of mosquito nets. I may indeed get convinced that a dollar spent on nets will go farther than on other methods to fight malaria, but I won't jump to that conclusion. I may even think of how it might backfire or how mosquitoes might adapt. Not so with "Against Malaria Foundation".
Notice how future-based adjectives could make a cause immune to feedback. If you were to mention that you won't donate to, say, AMF, people could raise their eyebrows and ask, "Are you seriously against fighting malaria?". But if you mention the means, you can safely say that you are in favour of fighting malaria, but against focusing on mosquito nets.
Finally, if "Mosquito Net Distributors" sounds a bit too sober because it doesn't mention its purpose, perhaps we could combine the two as "Mosquito Nets to Fight Malaria". [3]
Rationality techniques -- When I see the term "rationality technique" or "rationality training" or "methods of rationality", I feel like the technique will lead to good, if not optimal, results. It doesn't describe any features after all; it just promises that good things will happen in the future. Contrast that to experimentation techniques or logical deduction. These talk about the features of the process and I don't assume that these will always get me the best results, since I know I might miss a confounding variable or apply rules incorrectly. I'm not quite as skeptical when I hear about the "methods of rationality".
Even when I look at concrete technique names, hearing about the CFAR technique of "Comfort Zone Expansion (CoZE)" makes me feel like it will actually expand my "comfort zone". But it doesn't mention any features; just the desired future result. Contrast that to "doing for an hour, in public, a few things you avoided doing in the past". Now, I pause when I ask myself if it will help me do what you or I may actually care about: ask a boss for a raise, tell an annoying colleague to shove it, or ask out a crush. I can tell that there is quite a gap between lying down on the pavement for 30 seconds and doing something that might jeopardize my work life. But when I hear "Comfort Zone Expansion", I really do feel like my "comfort zone" will be expanded, meaning that I will do those kinds of things more frequently. Why not call it "uncomfortable-action practice" or the original "exposure therapy"?
Brain emulation or brain-emulating software -- "How sure are you that brain emulations would be conscious?" (source)
My immediate response is that, of course, brain emulations would be conscious. If human brains are conscious (whatever that means) and if human brain emulations emulate human brains, then those would also be conscious. The very term seems to dispose me to a particular answer. It doesn't describe any present features, just the desired future results - that the program will behave like a human brain in most respects.
Imagine if we used a term that talked only about whichever observable tests you want: "How sure are you that, say, a DARPA Grand Challenge-winning program would be conscious?" Suddenly, we are given two separate variables and asked to bridge the gap between them. That gives us a lot more room for skepticism. We can see that there could be many a slip between its present features and its future results.
I reason just as naively about claims that whole brain emulation can be "an easy way to create intelligent computers" or will acquire the "information contained within a brain", since human brains are already intelligent and already contain information. Given that this is a field where no one has succeeded, i.e., no one has emulated a human brain, we should take pains to avoid terms that make us less skeptical.
Optimization power -- Lastly, take this description of a car design: "To hit such a tiny target in configuration space requires a powerful optimization process. The better the car you want, the more optimization pressure you have to exert - though you need a huge optimization pressure just to get a car at all."
I find myself agreeing with that. A car that travels fast is highly-optimized, so of course it would need a powerful optimization process.
Unfortunately, "optimization process" does not describe any present features of the process itself. It simply says that the future result will be optimized. So, if you want something highly-optimized, you'd better find a powerful optimizer. Seems to make sense even though it's a null statement! But if you describe any features, as in "the design of a car requires 1 teraflop of computing power for simulation", I immediately ask, is that too little computing power? Too much? I become a lot more skeptical.
Again, this suggests that, in such a novel domain, we should be more careful about avoiding result-based descriptions like "optimization power", "superintelligence", and "self-improving AI".
Should we always avoid Result-Based Descriptions?
No. I don't think it's possible and I don't think people would want it. Like I said above, when I go to the doctor, I may just want "sleeping pills", not "benzodiazepine". Speaking about the latter would be a waste of time for the doctor and for me, provided I trust him. But what if I don't trust the person or if he's deluded himself?
I would reserve this technique for occasions when you're accepting an important pitch: a pitch that asks for a big investment, either in business or politics or social circumstances. People may try to convince us to accept a "career-defining opportunity" (instead of a shift to another department, which may not define your career) or a "jobs-for-the-poor program" (instead of a law that reserves X% of infrastructure jobs, which may not be filled and may not employ all the poor) or a "life-changing experience" (instead of skydiving for six minutes, which may or may not change your life much).
When it comes to our own usage, as people who want to portray an accurate map of reality, we should avoid using such result-based descriptions that might mislead others and, most importantly, ourselves. Marketing may demand a title that sounds catchy, but you have to decide whether you want to risk deceiving others, especially when you're pitching an idea that will ask them to invest a lot.
What's in a name? Isn't it ok to have the name based on the result as long as the contents tell you the features? Well, that would be ok if people always mentioned the contents. But we usually omit the contents when referring to something and someone who is new or busy may not look at the contents. Thus they (and we) might get misled into predicting the result based on the title. A person donating to an organization or paying for a workshop may see only the title, perhaps a few testimonials from friends, and maybe some headings on the website. If all of these descriptions are result-based, he might think that the organization or workshop does, in fact, have a good chance of delivering those results. If he had been given the features, maybe he would have been much more skeptical.
Let me know your thoughts below. Does the basic hypothesis seem valid? What about some of its implications?
Edit: Made it clearer that I'm claiming result-based descriptions make you less skeptical, not that they convince you absolutely.
[1] There is a similar phenomenon in goal-setting where they distinguish between outcome goals (such as losing 10 pounds) and process goals (such as going to the gym four times a week). However, the focus there is on which goal-setting style is more effective in getting results. My focus here is on which type of description makes you more gullible. The two may be related.
[2] Isn't "result-based description" itself a result-based description, an adjective from the future? I don't think so. It's something you can observe right now. Specifically, if the description isn't fully determined by past features, then it's a result-based description. (Contrast that to "misleading description".)
[3] And, yes, "mosquito net" is itself a result-based description, since you expect it to keep out mosquitoes, but at least it mentions one feature - the net.
8 comments
comment by Gordon Seidoh Worley (gworley) · 2019-08-12
Maybe it's because we live in a world full of these "adjectives from the future", but when I think of, for example, a "weight-loss program" I don't think the program will result in weight loss, but rather a program whose purpose is weight loss, whether or not it achieves it. Similarly with the other examples: the adjective is not describing what it will do, but what the intended purpose is.
comment by Pradeep_Kumar · 2019-08-12
I guess you're saying we allow for the possibility of failure when somebody says "I'm on a weight-loss program". I agree. We are not completely gullible in the face of such descriptions.
I'm claiming that we seem to be visibly more skeptical when we see the features than when we see just the intended result. For example, "weight-loss program" vs using the telemarketed ab machine for 15 minutes. Similarly with "clean air law" vs raising the fuel tax rate, or "cost-cutting measure" vs switching to online advertising.
Would you agree with that claim? Thanks for your feedback.
comment by Dagon · 2019-08-13
I'd characterize these as "intent-description" as opposed to "activity-description". And I think the underlying problem is the compression inherent in short, catchy phrases to describe a complex endeavor that includes thousands or more people working on it. Famously and only somewhat tongue-in-cheek, one of the two unsolved computer science problems is "naming things".
Failure to look into the model and activity behind any intended consequence will leave you open to manipulation and incorrect expectations. Failure to look at the intent can lead you to ignore the possibility that tactics and methods might need to change, and how aware the org is of that.
comment by Pradeep_Kumar · 2019-08-14
I agree that it can be hard to describe a detailed activity in a short phrase, especially to a layman who might care more that it is a weight-loss program than that it involves kettlebell swings. I don't have a great solution for that.
Why not minimize the manipulation by describing both the intent and the means, as in "Mosquito Nets to Fight Malaria" instead of "Against Malaria" (pure intent) or "Mosquito Net Distribution" (pure means)? As you say, we might lead people astray if we don't check the means against the intent, so I think we should avert that by specifying the means and letting the listener check it for us.
Thanks for the comment.
comment by Dagon · 2019-08-14
Why not minimize the manipulation by describing both the intent and the means
I believe, in most cases, this actually happens when you read/discuss beyond the headline. Use more words, actually put effort into understanding rather than just assuming the 2-4 word description is all there is.
In the examples you give, it would be somewhat misleading to describe both motive and method - "weight-loss program" doesn't specify mechanism because it applies to a lot of different mechanisms. The person describing it wants to convey the intent, not the mechanism - that detail is important for some things, and not for others, so it's left to you to decide if you want it. "Against Malaria" likewise. They believe that the right tactic is mosquito nets, but if things change and that stops being the case, they don't intend to change their mission or identity in order to use laser-guided toads or whatever.
comment by Pradeep_Kumar · 2019-08-14
Yeah, that was a good point about changing the means but not the mission. It would be costly to change the name of the entire foundation every time you changed your tactic.
In the examples you give, it would be somewhat misleading to describe both motive and method - "weight-loss program" doesn't specify mechanism because it applies to a lot of different mechanisms.
We should probably do that when we are not experts. A doctor may safely call something a sleeping pill, but a novice at the gym should probably say "I'm doing crunches for weight-loss" and not "I'm on a weight-loss program".
Use more words, actually put effort into understanding rather than just assuming the 2-4 word description is all there is.
We both agree that if people went into the features, they wouldn't be misled as often. I was hoping to make it easier to not be misled even when people didn't spend time reading beyond the headline. That is why it would be crucial to mention features in the name and not just the intended result.
Thanks for the feedback.
comment by Donald Hobson (donald-hobson) · 2020-12-29
Environmental protection legislation is a category that covers taxes on fossil fuels, bans on CFCs, and subsidies for solar panels, amongst many other policies.
This is a predictively useful category: politicians who support one of these measures are probably more likely to support others. It would be more technically accurate, but more long-winded, to describe these as "policies that politicians believe will help the environment".
Unfortunately, "optimization process" does not describe any present features of the process itself. It simply says that the future result will be optimized. So, if you want something highly-optimized, you'd better find a powerful optimizer. Seems to make sense even though it's a null statement!
Suppose we have a black box. We put the word "aeroplane" into the box, and out comes a well-designed and efficient aeroplane. We put the word "wind turbine" in and get out a highly efficient wind turbine. We expect that if we entered the word "car", this box would output a well-designed car.
In other words, seeing one result that is highly optimized tells you that other results from the same process are likely to be optimized.
Unfortunately "fitness" doesn't describe any feature of the person themself, it simply says they can run fast. So if you want someone who can run fast, you better find someone fit. Seems to make sense even though its a null statement.
To the extent that running speed and jumping height and weightlifting weight etc. are strongly correlated, we can approximately encode all these traits into a single parameter, and call that fitness. This comes with some loss of accuracy, but is still useful.
Imagine that you have to send a list of running speeds, jump heights etc. to someone. Unfortunately, this is too much data, so you need to compress it. Fortunately, the data is strongly correlated. Let's say that all the data has been normalized to the same scale.
If you can send only a single number and are trying to minimize the L1 loss, you should send the median value for each person. If you are trying to minimize the L2 loss, send the mean. If you can send only a single bit, you should make that bit be whether or not the person's total score is above the median.
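A quick numerical check of that claim - a minimal sketch in Python, where the scores array is purely illustrative:

```python
import numpy as np

# Made-up normalized scores for one person across events
# (running, jumping, weightlifting, ...).
scores = np.array([0.1, 0.2, 0.4, 0.8, 0.9])

# Brute-force search: which single summary number minimizes each loss?
candidates = np.linspace(0.0, 1.0, 1001)
l1_losses = [np.abs(scores - c).sum() for c in candidates]   # sum of absolute errors
l2_losses = [((scores - c) ** 2).sum() for c in candidates]  # sum of squared errors

print(candidates[np.argmin(l1_losses)], np.median(scores))  # 0.4 and 0.4: the median minimizes L1
print(candidates[np.argmin(l2_losses)], np.mean(scores))    # ~0.48 and ~0.48: the mean minimizes L2
```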
Consider the reasoning that goes "Bob jumped really far on the long jump => Bob is fit => Bob can weightlift". There we are using the word "fit" as a hidden inference. Hidden in how we use the word is implicit information regarding the correlation between athletic abilities.
comment by Pradeep_Kumar · 2021-05-29
All three of your examples involve using a phrase as a shorthand for a track record. You call something a pollution-reducing law, a vehicle-producer, or a fit athlete after observing consistent pollution reduction, vehicles, or field records. That's like the doctor calling something a "sleeping pill", which is ok because he's doing that after observing its track record.
The problem is when there is no track record. For example, when someone proposes a new "environmental protection" law that has not really been tested, others who hear that name may be less skeptical than if they hear "subsidies for Teslas". In the latter case, they may ask whether this would really help the environment and whether there might be unintended consequences.
Suppose we have a black box. We put the word "aeroplane" into the box, and out comes a well-designed and efficient aeroplane. We put the word "wind turbine" in and get out a highly efficient wind turbine. We expect that if we entered the word "car", this box would output a well-designed car. In other words, seeing one result that is highly optimized tells you that other results from the same process are likely to be optimized.
The term "optimization power" doesn't seem to add much here. Any prediction I make would be based on the track record you mentioned (using some model that "fits" that training data). For example, maybe we would predict it producing a good car, but not necessarily a movie or a laptop. Even for the examples of "optimization processes" mentioned in the article [LW · GW], such as humans and natural selection, I predict using the observed track record. If we say a chess player has reached a higher Elo than another, we can use that to predict that he'll beat the other one. That will invite justified questions about the chess variant, their past matches, and recent forms. Why bring in the claim that he has more "optimization power", which provokes fewer such questions?
Thanks for the thoughtful comment.