“How should we operationalize the concept ‘sex worker who is actively being sex trafficked’ for the purpose of determining how many such people there are, taking care that our operationalization successfully captures our moral intuitions related to this subject” is certainly an interesting question, and your answer is not prima facie unreasonable (although we would surely want to consider it carefully and with a critical eye, before applying it).
But I want to note that it is an almost wholly irrelevant question unless we intend to do our own data collection from scratch. For the purpose of determining the answer to our question by looking at existing data, what use is an operationalization like this if the data we would need to answer that operationalized question is not in our data sets?
In other words—it’s not like the existing studies that Aella cites involved the subjects being asked the question you propose, or anything like it. Or maybe some of them did! But we don’t know, because in many cases the data is from sources like this:
They cite a 2006 GAO report which says depressing things like “the U.S. government’s estimate was developed by one person who did not document all his work” and “There is also a considerable discrepancy between the numbers of observed and estimated victims of human trafficking”
… probably not, though.
Aella estimates (in rather noisy fashion) that 3.2% of active, in-person sex workers in America are actively being sex trafficked.
From the linked post (emphasis in original):
So: given my estimated sex trafficking prevalence, I estimate about 3.2% of active, in-person sex workers in the US are currently being sex trafficked. This is higher than I anticipated. But I didn’t thoroughly check definitions of trafficking, and criticisms of sex trafficking concern often point out that ‘trafficking’ is a loose term with some reports including “anyone who crosses a border in order to do sex work, even if voluntary,” so who actually knows.
And indeed there is, in the sources cited in the post, very little in the way of consistent (or any!) clarification of just what the heck “sex trafficked” (or, for that matter, “sex workers”) is supposed to mean, thus no basis for assuming that the sources mean the same thing by the term, or even mean anything coherent by it…
In short, the only conclusion that can be drawn from Aella’s post is “how many sex workers are actively being sex trafficked? idk, what even are words and what do they mean”.
If an EA could go back in time to 1939, it is obvious that the ethically correct strategy would be to…?
… stuff about perverse utility functions …
Well, there’s a couple of things to say in response to this… one is that wanting to get the girl / dowry / happiness / love / whatever tangible or intangible goals as such, and also wanting to be virtuous, doesn’t seem to me to be a weird or perverse set of values. In a sense, isn’t this sort of thing the core of the project of living a human life, when you put it like this? “I want to embody all the true virtues, and also I want to have all the good things.” Seems pretty natural to me! Of course, it’s also a rather tall order (uh, to put it mildly…), but that just means that it provides a challenge worthy of one who does not fear setting high goals for himself.
Somewhat orthogonally to this, there is also the fact that—well, I wrote the footnote about the utility function being metaphorical for a reason. I don’t actually think that humans (with perhaps very rare exceptions) have utility functions; that is, I don’t think that our preferences satisfy the VNM axioms—and nor should they. (And indeed I am aware of so-called “coherence theorems” and I don’t believe in them.)
With that constraint (which I consider an artificial and misguided one) out of the way, I think that we can reason about things like this in ways that make more sense. For instance, trying to fit truth and honesty into a utility framework makes for some rather unnatural formulations and approaches, like talking about buying more of it, or buying it more cheaply, etc. I just don’t think that this makes sense. If the question is “is this person honest, trustworthy, does he have integrity, is he committed to truth”, then the answer can be “yes”, and it can be “no”, and it could perhaps be some version of “ehhh”, but if it’s already “yes” then you basically can’t buy any more of it than that. And if it’s not “yes” and you’re talking about how cheaply you can buy more of it, then it’s still not “yes” even after you complete your purchase.
(This is related to the notion that while consequentialism may be the proper philosophical grounding for morality, and deontology the proper way to formulate and implement your morality so that it’s tractable for a finite mind, nevertheless virtue ethics is “descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind, once you’ve decided on your object-level moral views”. Thus you can embody the virtue of honesty, or fail to do so. You can’t buy more of embodying some virtue by trading away some other virtue; that’s just not how it works.)
I think you understand that if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you’d lose the ability to get other people to know things, which is a useful ability to have.
Yes, of course; but…
Whether the specific person you address is better off in each specific case isn’t material because you aren’t trying to always make them better off, you’re just trying to avoid being seen as someone who predictably doesn’t make them better off.
… but the preceding fact just doesn’t really have much to do with this business of “do you make people better off by what you say”.
My claim is that people (other than “rationalists”, and not even all or maybe even most “rationalists” but only some) just do not think of things in this way. They don’t think of whether their words will make their audience better off when they speak, and they don’t think of whether the words of other people are making them better off when they listen. This entire framing is just alien to how most people do, and should, think about communication in most circumstances. Yeah, if you lie all the time, people will stop believing you. That’s just directly the causation here, it doesn’t go through another node where people compute the expected value of your words and find it to be negative.
(Maybe this point isn’t particularly important to the main discussion. I can’t tell, honestly!)
I took great effort to try to write down my policy as something explicit in terms a person could try to do (even though I am willing to admit it is not really correct, mostly because of finite agent problems), because a person can’t be a real Rule Consequentialist without actually having a Rule. What is the rule for “Only lie when doing so is the right thing to do”? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator, and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule “only lie when doing so is the right thing to do” into that as a backup I’m just right back at doing act consequentialism.
If you can write down a better rule for when to lie than what I’ve put above (that is also better than the “never” or “only by coming up with galaxy-brained ways it technically isn’t lying” or Eliezer’s meta-honesty idea that I’ve read before) I’d consider you to have (possibly) won this issue, but that’s the real price of entry. It’s not enough to point out the flaws where all my rules don’t work, you have to produce rules that work better.
Well… let’s start with the last bit, actually. No, it totally is enough to point out the flaws. I mean, we should do better if we can, of course; if we can think of a working solution, great. But no, pointing out the flaws in a proffered solution is valuable and good all by itself. (“What should we do?” “Well, not that.” “How come?” “Because it fails to solve the problem we’re trying to solve.” “Ok, yeah, that’s a good reason.”) In other words: “any solution that solves the problem is acceptable; any solution that does not solve the problem is not acceptable”. Act consequentialism does not solve the problem.
But as far as my own actual solution goes… I consider Robin Hanson’s curve-fitting approach (outlined in sections II and III of his paper “Why Health is Not Special: Errors in Evolved Bioethics Intuitions”) to be the most obviously correct approach to (meta)ethics. In brief: sometimes we have very strong moral intuitions (when people speak of listening to their conscience, this is essentially what they are referring to), and as those intuitions are the ultimate grounding for any morality we might construct, if the intuitions are sufficiently strong and consistent, we can refer to them directly. Sometimes we are more uncertain. But we also value consistency in our moral judgments (for various good reasons). So we try to “fit a curve” to our moral intuitions—that is, we construct a moral system that tries to capture those intuitions. Sometimes the intuitions are quite strong, and we adjust the curve to fit them; sometimes we find weak intuitions which are “outliers”, and we judge them to be “errors”; sometimes we have no data points at all for some region of the graph, and we just take the output of the system we’ve constructed. This is necessarily an iterative process.
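To make the metaphor a bit more concrete, here is a minimal sketch in Python, with entirely made-up numbers, of what “fitting a curve to intuitions” could look like: each case gets an intuitive judgment and a weight standing in for the strength of the intuition; strong points anchor the fit, weak outliers get flagged as probable errors, and strong points that the curve misses badly tell you to revise the system. (This is only an illustration of the iterative process, not something I am proposing as an actual moral calculator.)

```python
# Loose numerical illustration of the "curve-fitting" metaphor (not a moral
# algorithm). All numbers are hypothetical.
import numpy as np

# x = arbitrary coordinates for cases, y = intuitive judgment
# (-1 = clearly wrong, +1 = clearly right), w = strength of the intuition.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 0.9, 0.2, -0.8, 0.7, -1.0])   # the 0.7 is a weak outlier
w = np.array([5.0, 4.0, 1.0, 3.0, 0.5, 5.0])      # strong intuitions get big weights

# Weighted least-squares fit of a low-degree polynomial ("the moral system").
coeffs = np.polyfit(x, y, deg=2, w=w)
fitted = np.polyval(coeffs, x)

# Weak data points that the fitted curve misses badly are candidate "errors";
# strong data points the curve misses badly mean the system needs revising.
residuals = np.abs(y - fitted)
for xi, yi, wi, ri in zip(x, y, w, residuals):
    status = ("revise system" if (ri > 0.5 and wi >= 3)
              else "likely intuition error" if ri > 0.5
              else "ok")
    print(f"case x={xi}: intuition={yi:+.1f}, weight={wi}, miss={ri:.2f} -> {status}")
```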
If the police arrest your best friend for murder, but you know that said friend spent the whole night of the alleged crime with you (i.e. you’re his only alibi and your testimony would completely clear him of suspicion), should you tell the truth to the police when they question you, or should you betray your friend and lie, for no reason at all other than that it would mildly inconvenience you to have to go down to the police station and give a statement? Pretty much nobody needs any kind of moral system to answer this question. It’s extremely obvious what you should do. What does act and/or rule consequentialism tell us about this? What about deontology, etc.? Doesn’t matter, who cares, anyone who isn’t a sociopath (and probably even most sociopaths who aren’t also very stupid) can see the answer here, it’s absurdly easy and requires no thought at all.
What if you’re in Germany in 1938 and the Gestapo show up at your door to ask whether you’re hiding any Jews in your attic (which you totally are)—what should you do? Once again the answer is easy, pretty much any normal person gets this one right without hesitation (in order to get it wrong, you need to be smart enough to confuse yourself with weird philosophy).
So here we’ve got two situations where you can ask “is it right to lie here, or to tell the truth?” and the answer is just obvious. Well, we start with cases like this, we think about other cases where the answer is obvious, and yet other cases where the answer is less obvious, and still other cases where the answer is not obvious at all, and we iteratively build a curve that fits them as well as possible. This curve should pass right through the obvious-answer points, and the other data points should be captured with an accuracy that befits their certainty (so to speak). The resulting curve will necessarily have at least a few terms, possibly many, definitely not just one or two. In other words, there will be many Rules.
(How to evaluate these rules? With great care and attention. We must be on the lookout for complexity, we must continually question whether we are in fact satisfying our values / embodying our chosen virtues, etc.)
Here’s an example rule, which concerns situations of a sort of which I have written before: if you voluntarily agree to keep a secret, then, when someone who isn’t in on the secret asks you about the secret, you should behave as you would if you didn’t know the secret. If this involves lying (that is, saying things which you know to be false, but which you would believe to be true if you were not in possession of this secret which you have agreed, of your own free will, to keep), then you should lie. Lying in this case is right. Telling the truth in this case is wrong. (And, yes, trying to tell some technical truth that technically doesn’t reveal anything is also wrong.)
Is that an obvious rule? Certainly not as obvious as the rules you’d formulate to cover the two previous example scenarios. Is it correct? Well, I’m certainly prepared to defend it (indeed, I have done so, though I can’t find the link right now; it’s somewhere in my comment history). Is a person who follows a rule like this an honest and trustworthy person, or a dishonest and untrustworthy liar? (Assuming, naturally, that they also follow all the other rules about when it is right to tell the truth.) I say it’s the former, and I am very confident about this.
I’m not going to even try to enumerate all the rules that apply to when lying is wrong and when it’s right. Frankly, I think that it’s not as hard as some people make it out to be, to tell when it is necessary to tell the truth and when one should instead lie. Mostly, the right answer is obvious to everyone, and the debates, such as they are, mostly boil down to people trying to justify things that they know perfectly well cannot be justified.
Indeed, there is a useful heuristic that comes out of that. In these discussions, I have often made this point (as I did in my top-level comment) that it is sometimes obligatory to lie, and wrong to tell the truth. The reason I keep emphasizing this is that there’s a pattern one sees: the arguments most often concern whether it’s permissible to lie. Note: not, “is it obligatory to tell the truth, or is it obligatory to lie”—but “is it obligatory to tell the truth, or do I have no obligation here and can I just lie”.
I think that this is very telling. And what it tells us (with imperfect but nevertheless non-trivial certainty) is that the person asking the question, or making the argument against the obligation, knows perfectly well what the real—which is to say, moral—answer is. Yes, the right thing to do is to tell the truth. Yes, you already know this. You have reasons for not wanting to tell the truth. Well, nobody promised you that doing the right thing will always be personally convenient! Nevertheless, very often, there is no actual moral uncertainty in anyone’s mind, it’s just “… ok, but do I really have to do the right thing, though”.
This heuristic is not infallible. For example, it does not apply to the case of “lying to someone who has no right to ask the question that they’re asking”: there, it is indeed permissible to lie[1], but there is no particular obligation either to lie or to tell the truth. (Although one can make the case for the obligation to lie even in some subset of such cases, having to do with the establishment and maintenance of certain communicative norms.) But it applies to all of these, for instance.
The bottom line is that if you want to be honest, to be trustworthy, to have integrity, you will end up constructing a bunch of rules to aid you in epitomizing these virtues. If you want to try to put together a complete list of such rules, that’s certainly a project, and I may even contribute to it, but there’s not much point in expecting this to be a definitively completable task. We’re fitting a curve to the data provided by our values, which cannot be losslessly compressed.
Assuming that certain conditions are met—but they usually are. ↩︎
Detailed commentary, as promised:
Rationality can be about Winning, or it can be about The Truth, but it can’t be about both. Sooner or later, your The Truth will demand you shoot yourself in the foot, while Winning will offer you a pretty girl with a country-sized dowry. The only price will be presenting various facts about yourself in the most seductive order instead of the most informative one.
If your highest value isn’t Winning, you do not get to be surprised when you lose. You do not even get to be disappointed. By revealed preference, you have to have a mad grin across your face, that you were able to hold fast to your highest-value-that-isn’t-winning all the way to the bitter end.
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth?
Then, if I only get one of those things, it’s worse than getting both of those things (and possibly so much worse that I don’t even consider them worthwhile to pursue individually; but this part is optional). Is there a law against having values like this? There is not. Do you get to insist that nevertheless I have to choose, and can’t get both? Nope, because the utility function is not up for grabs[1]. Now, maybe I will get both and maybe I won’t, but there’s no reason I can’t have “actually, both, please” as my highest value.
The fallacy here is simply that you want to force me to accept your definition of Winning, which you construct so as not to include The Truth. But why should I do that? The only person who gets to define what counts as Winning for me is me.
In short, no, Rationality absolutely can be about both Winning and about The Truth. This is no more paradoxical than the fact that Rationality can be about saving Alice’s life and also about saving Bob’s life. You may at some point end up having to choose between saving Alice and saving Bob, and that would be sad; and you would end up making some choice, in some manner, as fits the circumstance. The existence of this possibility is not particularly interesting, and has no deep implications. The goal is “both”.
(That said, the bit about the two armies was A++, and I strong-upvoted the post just for that.)
Technical Truth is as Bad as Lying
I wholly agree with this section…
… except for the last paragraph—specifically, this:
The purpose of a word is to carve reality at a joint useful for the discussion taking place, and we should pause here to note that the joint in question isn’t “emits true statements”, it’s “emits statements that the other party is better off for listening to”.
No, it’s not.
I’m not sure where this meme comes from[2], but it’s just wrong. Unless you are, like, literally my mother, “is the other party, specifically, better off for listening to this thing that I am saying” constitutes part of my motivation for saying things approximately zero percent of the time. It’s just not a relevant consideration at all—and I don’t think I’m even slightly unusual in this.
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
Sometimes I am additionally motivated by some specific usefulness of some specific utterance, but even in the edge case where the expected usefulness is exclusive to the single person to whom the utterance is addressed, I don’t consider whether that person will be better off for having listened to the thing in question. Maybe they won’t be! Maybe they will somehow be harmed. That’s not my business; they have the relevant information, which is true (and which I violated no moral precepts by conveying to them)—the rest is up to them.
Therefore if someone says “but if you lie, they’ll be better off”, my response is “weird thing to bring up; what’s the relevance?”.
Biting the Bullet
Basically correct. I will add that not only is it not always morally obligatory to tell the truth, but in fact it is sometimes morally obligatory to lie. Sometimes, telling the truth is wrong, and doing so makes you a bad person. Therefore the one who resolves to always tell the truth, no matter what, can in fact end up predictably doing evil as a direct result of that resolution.
There is no royal road to moral perfection. There is no way to get around the fact that you will always need to apply all of your faculties, the entirety of your reason and your conscience and everything else that is part of you, in order to be maximally sure (but never perfectly sure!) that you are doing the right thing. The moment you replace your brain with an algorithm, you’ve gone wrong. This fact does not become any less true even if the algorithm is “always tell the truth”. You can and should make rules, and you can and should follow them (rule consequentialism is superior to act consequentialism for all finite agents), and yet even this offers no escape from that final responsibility, which is always yours and cannot be offloaded to anyone or anything, ever.
Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. It doesn’t work no matter how much we say “yeah but just make up numbers” or “yeah you can’t actually do the calculation, but let’s pretend we can”. The numbers are fake and meaningless and we can’t do the calculation.
It’s still a better policy than just trusting people.
Definitely don’t just trust people. Trust, but verify. (See also.)
When your friends ask you about how trustworthy you are, make no implications that you are abnormally honest. Tell them truthfully (if it is safe to do so) about all the various bad incentives, broken social systems, and ordinary praxis that compel dishonesty from you and any other person, even among friends, and give them sincere advice about how to navigate these issues.
This, I agree with.
Cooperative Epistemics
I agree with all of this section. What I’ll note here is that there are people who will campaign very hard against the sort of thing you are advocating for here (“Do them the charity of not pretending they wouldn’t be making a terrible mistake by imagining they can take you or anyone else at their word. Build your Cooperative Epistemics on distrust instead.”), and for the whole “trying to start a communist utopia on the expectation that everybody just” thing. I agree that this actually has the opposite result to what anyone sensible would want.
Saying words is just an action, like any other action. You judge actions by their consequences. Are people made worse off or not? Most of the time, you’re not poisoning a shared epistemic well. The well was already poisoned when you got here. It’s more of a communal dumping ground at this point. Mostly you’d just be doing the sensible thing like everybody else does, except that you lack the instinct and intuition and need to learn to do it by rote.
When it makes sense to do so, when the consequences are beneficial, when society is such that you have to, when nobody wants the truth, when nobody is expecting the truth, when nobody is incentivising the truth: just lie to people.
This, on the other hand, is once again a terrible idea.
Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
“When the consequences are beneficial”—no, you can’t tell when the consequences will be beneficial, and anyhow act consequentialism does not and cannot work, so instead you should be a rule consequentialist and adopt rules about when lying is right, and when lying is wrong, and only lie in the first case and not the second case. (And you should have meta-rules about when you make your rules known to other people—hint, the answer is “almost always”—because much of the value of rules like this comes from them being public knowledge. And so on, applying all the usual ethical and meta-ethical considerations.)
“When society is such that you have to”—too general; furthermore, people, and by “people” I mean “dirty stinking liars who lie all the time, the bastards”, use this sort of excuse habitually, so you should be extremely wary of it. However, sometimes it actually is true. Once again you cannot avoid having to actually think about this sort of thing in much more detail than the OP.
“When nobody wants the truth”—situations like this are often the ones where telling the truth is exceptionally important and the right thing to do. But sometimes, the opposite of that.
“When nobody expects the truth”—ditto.
“When nobody is incentivizing the truth”—ditto.
The well was already poisoned when you got here. It’s more of a communal dumping ground at this point.
Wells can be cleaned, and new wells can be dug. (The latter is often a prerequisite for the former.)
The metaphorical utility function, that is. ↩︎
Although I have certain suspicions. ↩︎
That is, things which should be construed as in some sense possibly being true or false. In other words, I do not include here things like jokes, roleplaying, congratulatory remarks, flirting, requests, exclamations, etc. ↩︎
The hard cases are much more interesting. What about lying to my landlord about renting a room on airbnb? What about saying your class will make people millionaires for the low low price of $1,000 (hey, it could happen)? What about hiding the rats from the health inspector?
None of these seem like hard cases to me. Lying is wrong (and pretty obviously so) in all three of these cases.
I don’t see any mention of rule consequentialism in this post.
Is your idea just rule consequentialism?
Ok, thanks. (You omit from your enumeration rule consequentialists who are not utilitarians, but I infer that you have a similar attitude toward these as you do towards rule utilitarians.)
Well, as I am most partial to rule consequentialism, I have to agree that “this issue is much more thorny”. On the one hand, I agree with you that “never lie” is not a good rule to endorse (if only for the very straightforward reason that lying is sometimes not only permissible, but in fact is morally obligatory, so if you adopted a “never lie” rule then this would obligate you to predictably behave in an immoral way). On the other hand, I consider act consequentialism[1] to be obviously foolish and doomed (for boring, yet completely non-dismissable and unavoidable, reasons of bounded rationality etc.), so your proposed solution where you simply “do the Expected Utility Calculation” is a non-starter. (You even admit that this calculation cannot be done, but then say to pretend to do it anyway; this looks to me like saying “the solution I propose can’t actually work, but do it anyway”. Well, no, if it can’t work, then obviously I shouldn’t do it, duh.)
(More commentary to come later.)
Utilitarianism, specifically (of any stripe whatsoever, and as distinct from non-utilitarian consequentialist frameworks) seems to me to be rejectable in a thoroughly overdetermined manner. ↩︎
I agree with a lot of things in this post and disagree with a lot of things in this post, but before I comment in more detail, I would like to clarify one thing, please:
Are you aware that there exist moral frameworks that aren’t act consequentialism? And if so, are you aware that some people adhere to these other moral frameworks? And if so, do you think that those people are all idiots, crazy, or crazy idiots?
(These questions are not rhetorical. Especially the last one, despite it obviously sounding like the most rhetorical of the set. But it’s not!)
The obvious response, which I thought of as soon as I saw this, is indeed contained in multiple reply tweets:
How about bank teller employment divided by total population size?
Ok but USA population went up +20% from 1990 to 2010 so tellers per capita did decrease over this period.
(McKenzie did not reply to either of these, for some reason.)
If you don’t normalize for population, graphs and claims like this are profoundly misleading. (Similarly to normalizing geographic data for population density, correcting for inflation, etc.)
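For concreteness, here’s the arithmetic, with the teller counts as purely hypothetical placeholders (only the +20% population growth figure comes from the tweet above): if the absolute number of tellers stays roughly flat while the population grows by 20%, then tellers per capita fall by about 17%.

```python
# Hypothetical illustration of why per-capita normalization matters.
# The +20% population growth (1990 -> 2010) is taken from the tweet above;
# the teller counts are made-up placeholders for "roughly flat".
tellers_1990, tellers_2010 = 500_000, 500_000    # hypothetical absolute counts
pop_1990 = 250_000_000                           # rough 1990 US population
pop_2010 = pop_1990 * 1.20                       # +20% per the tweet

per_capita_1990 = tellers_1990 / pop_1990
per_capita_2010 = tellers_2010 / pop_2010
change = per_capita_2010 / per_capita_1990 - 1

print(f"Per-capita change: {change:.1%}")        # about -16.7%
```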
Why would the right tail of Debbie’s distribution be longer than it seems?
The left tail—sure, that makes sense. Perhaps Debbie’s a sociopath (in a strong or weak sense of the term), inventing good or acceptable actions that don’t exist, hiding bad actions that do exist. But why would she be hiding good actions that exist? What sorts of actions might these be?
Or perhaps Debbie’s just a weird person with a very different outlook (maybe there’s even some mental illness involved, maybe something else). In this case it would make sense if Debbie sometimes acted in ways outside the norm, but didn’t particularly advertise these things (she may actively hide them, but might not need to, if they’re sufficiently surprising that they wouldn’t be revealed unless someone said something). But this doesn’t square with inventing middle-of-the-distribution actions that never happened—off-kilter weirdoes aren’t like this.
So what explains that symmetrical dissimulation and mid-curve confabulation? It doesn’t map even approximately to any personality type, or set of motivations, that I can think of.
I do not believe that any such frog-boiling has ever happened to me.
Nor to me. I can’t map the described scenario to anything in my experience.
Noticing a basketball is forming an accurate mental representation of the basketball. This mental representation is not the basketball. The map is not the territory, the quotation is not the referent. They’re fundamentally different things, no matter how good you are at recognizing basketballs.
Yes, of course.
Noticing that you’ve noticed the basketball is noticing this mental representation—which again is not the basketball. Noticing that you’ve noticed the basketball is when you form a mental representation of the fact that you’ve formed a mental representation of the basketball. This representation of your mental construct is as different from the mental construct it represents as your mental construct of a basketball is different from the basketball it represents. They’re fundamentally different things, regardless of how good you are at recognizing when you’ve noticed a thing.
On the contrary: these are not fundamentally different things, but rather, the same kind of thing—namely, they are both mental representations. (We might say that they are different instances, but not different classes.) And it is entirely possible that they simply co-occur basically always, as @Richard_Kennaway describes.
But especially if Bob were preoccupied he might not have taken note of his obstacle avoidance, might fail to find in his memory a mental representation of this object he did indeed represent at the time, and say—incorrectly—“No, I didn’t notice it”. The fact that he stepped around the basketball is proof that he noticed it. The fact that he said “I didn’t notice it”, doesn’t negate the fact that he noticed it, it shows that he didn’t notice that he noticed the basketball.
On the contrary again: what you are describing here is simply Bob not having noticed the basketball, and then truthfully reporting this fact.
(Note that this is different from the scenario where Bob is not preoccupied, notices the basketball, steps around, but then forgets that this happened; and, when later asked, falsely reports that he did not notice the basketball. In other words, these are what Dennett colorfully described in Consciousness Explained as the “Stalinesque” and “Orwellian” scenarios, respectively.)
This is reminiscent of the famous hypnosis experiment where people were hypnotized and told that a chair placed in their path was invisible. The people instructed to fool the researchers into believing they had been hypnotized all walked into the chair, as one would. The people who were genuinely hypnotized walked around the chair, and when asked why they took the path they did, showed that they had no idea why they did what they did. They had noticed the chair, and not that they had noticed it.
Here you are again describing these people not having noticed the chair.
If you pick up the basketball, and tell the kid you left it on the court, you’ve shown that you’ve both noticed the basketball and also the fact that you noticed the basketball.
If you walk around the basketball, and tell the kid you haven’t seen it, you’ve shown that you’ve noticed the basketball but not the fact that you’ve noticed the basketball.
If you walk right past the basketball wishing you had one to play with, you’ve shown that you didn’t notice the basketball—and so you can’t have noticed that you noticed
And if you walk around the basketball, wishing you had one to play with, you’ve shown that…?
The weird part about this is that it can be hard to imagine not noticing. It’s hard to miss a big orange ball on black asphalt, so it’s hard to imagine there being a basketball there and not noticing it. It can seem like the distinction between the ball being there and noticing the ball being there isn’t worth tracking because you’ll never fail to notice it—except in the obvious cases like if it’s a moonless night and you’re blind but that doesn’t count, right?
I have not had this experience (of it being hard to imagine this distinction); it has always been clear to me that it’s important and worth tracking. But of course I am aware that some people do think thus.
Similarly, if the only time you’re looking at your own mental representations, they’re metaphorically big and bright orange against a black background, it’s going to be hard to imagine not noticing them—except for the things that are “unconscious” and therefore “impossible to notice” but you can insist that those “don’t count” either.
Sure. Have you ever noticed individual hydrogen atoms? No? Well, why doesn’t that serve as an example of a thing that you didn’t notice? Because you can’t notice them, of course. (Unless you have one of them fancy quantum microscopes, anyway.)
And similarly, once you start looking for representations that are hard to find, and start finding them where you hadn’t seen them immediately, it gets a lot more obvious that there’s *a lot* of things represented in your mind which you haven’t yet noticed. And that there’s simply too much external reality represented in your head for you to represent all of the object level representations you have.
There are plenty of “low-level” representations in our brains (and auxiliary organs) which are inaccessible to conscious awareness. These are of a different kind than mental representations as we ordinarily think of them. For example, take color vision: would you say that we “notice” the individual lightness values of the three color channels formed by signals from the three types of cone cells in our retina? I think that it would be very silly to say that yes, we notice this information, but we do not notice that we notice it, etc. No, we are simply unaware of it, because our visual cortex begins to combine and transform the color channel information long before it gets anywhere near the processes that we could reasonably describe as constituting any “noticing” of anything whatsoever.
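To make that concrete with a standard textbook simplification (the weights below are illustrative, not physiological constants): the three cone signals are recombined into opponent channels long before anything reaches the level of conscious access, so the raw per-channel values are simply never available to be “noticed”.

```python
# Standard-textbook-style simplification of early color processing, purely to
# illustrate the point: the three cone signals (L, M, S) are recombined into
# opponent channels before anything like conscious access is reached, so the
# raw channel values are never available to be noticed. Weights are illustrative.
def opponent_channels(L, M, S):
    luminance   = L + M            # achromatic channel
    red_green   = L - M            # red/green opponent channel
    blue_yellow = S - (L + M) / 2  # blue/yellow opponent channel
    return luminance, red_green, blue_yellow

# Hypothetical cone responses to some surface under some illuminant:
print(opponent_channels(L=0.62, M=0.55, S=0.20))
```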
Noticing the basketball and noticing that you have noticed the basketball are fundamentally different things, because basketballs and noticing basketballs are fundamentally different things.
Consequent does not follow from antecedent. A mental representation is the same kind of thing as another mental representation.
(Computer science analogy: NSString is a different kind of thing from NSString*. But NSString* and NSString** are the same kind of thing, and so are NSString***, NSString****, etc.)
(Of course, you need to know about NSString** to understand the pattern, otherwise you might think that there are two kinds of things: objects, and pointers to objects. It’s once you go to that second level of indirection that you realize that actually, there are two kinds of things: objects, and pointers to [objects, and pointers to [objects, and pointers to [objects, and …]]]… or, in other words, there are two kinds of things: objects, and pointers to things. Thus also with mental representations: to establish the recursion, you must have mental representations of mental representations… at which point you realize that there are two kinds of things: stuff out there in reality, and mental representations of things.)
The same applies to anyone who thinks they see everything their mind is representing and responding to.
“Blindsight, but for social cues” is the grand revelation that all of this has been leading up to…?
I was more or less with you for most of this comment, but it seems to me that you go astray in the end, in two ways.
First:
The thing is, when you look at the people who best approximate this ideal, they’re by definition the ones that are very skilled in self awareness.
This doesn’t work at all. Now, if we remove the words “by definition”, then we’d be left with an empirical claim, which could be true or false (I think it’s false, but more on that later), which is fine. But “by definition”? No, absolutely not.
The reason it’s wrong is that you’re equivocating between two concepts: someone who (as @Richard_Kennaway put it) “is always aware of their own existence, whose own presence is as ineluctable to them as their awareness of the sun when out of doors on a bright sunny day: for such a person, to be aware is always to be aware of themselves”, and someone who is “very skilled in self awareness”. You could claim that those are, empirically, the same thing! But they are not the same thing by definition. One is defined in one way, and the other is defined in another way. To claim that these two concepts have the same referent is an empirical claim.
Second:
“Different minds may operate differently” is definitely true in a sense, but the distinction I’m drawing is fundamental, and the minds that are the least aware of it are those for whom it is *most* important—because there’s fruit there, and you can’t start picking it until you see it. “I’m just so skilled in self awareness that I literally have never noticed myself making this mistake—and have never noticed all the other people making it either” is a self disproving statement.
Perhaps it is, but that’s because it is a mischaracterization of the concept described in the grandparent comment. Instead we can imagine a person saying: “I have never made this mistake myself (so naturally I have never noticed myself making this mistake, there having never been anything to notice in that regard); unsurprisingly, it thus also never occurred to me that other people might make such a mistake”.
Such a statement may be true or false, and we may draw various conclusions about its author in either case; but “self-disproving” it certainly is not.
For my part, while I won’t claim to embody the ultimate extreme on the spectrum described in the grandparent, I’m certainly quite a bit closer to the latter end of it than to the former end. Likewise, I have read, and participated in, enough conversations like this one to be aware that other people’s minds work differently; but I certainly find it strange. And yet, am I “skilled in self awareness”? You could claim that, or conversely disclaim it, but either way it seems to me to be more like a wrong question than anything else. As far as I can tell, there is no distinction between noticing a thing and noticing when I’ve noticed a thing. (Unless you mean something banal like “thinking about the fact that I’ve noticed a thing”? But that is not “noticing”, of course; one may notice all sorts of things, and then proceed to not think about them.)
- I told you that I addressed this failure mode in another comment. Why did you ignore when I told you this instead of reading that comment and responding to what I said over there instead? Isn’t that the *only* thing that makes sense, if that’s all you want to talk about?
I read the comment in question (this one, yes? if that’s not the comment you meant then please link the correct one) and it did not seem to me to have addressed this. (I did not particularly find anything specific to respond in it, either, although I certainly can’t say that I agree with the model you give therein.)
- Why are you talking about whether men pick up on these things in general?
Because, as I have said, I agree with johnswentworth when he wrote:
Sometimes this is due to the woman in question not recognizing how subtle she’s being, and losing out on a date with a man she’s still interested in.
I would guess that this is approximately 100% of the time in practice, excluding cases where the man doesn’t pick up on the cues but happens to ask her out anyway. Approximately nobody accurately picks up on womens’ subtle cues, including other women (at least that would be my strong guess, and is very cruxy for me here). If the woman just wants a guy who will ask her out, that’s still a perfectly fine utility function, but the cues serve approximately-zero role outside of the woman’s own imagination.
EDIT: In fact, let me expand on this.
Your linked comment answers the question “why don’t women just ask, if they really want the guy”. (I find the answer unconvincing, as I said, but that’s actually beside the point here.) But the reason I brought up the scenario in question wasn’t to pose the question “why don’t women just ask”, but rather to point out that in said scenario:
- The woman is definitely sending cues as hard as she can.
- The man does not pick up on those cues.
- If asked later, the man will say that he did not perceive any cues. (Indeed, he’d be surprised by the question—“Cues, what cues? From whom…? That one woman? Flirting? With whom? With me?! No, you’ve got something mixed up, surely…”)
- If asked later, the woman will say that the man did not pick up on any cues. (And will be very frustrated by this; she was being so obvious, how could this absolute dunce of a man not have noticed?! Ridiculous! And she was really into the guy, too…)
So even in this case where one person is trying as hard as they can to send cues, nevertheless the other person is totally oblivious. (Does this happen all the time? Yes it does.)
Given this, the suggestion that actually, everyone is perceiving all the cues all the time, is obviously silly. If even trying this hard, “sending” this “loudly”—at such extreme “transmission intensity”—can fail to be enough to get the signal through, at all, then how could it be true that the signal is actually successfully getting through pretty much all the time? The answer is obvious: it can’t be true.
(Now, you might say: “but I am making no such suggestion, because you have misunderstood my…”—sure, fair enough. But this should, at least, clearly and definitively answer the question of why I brought up that scenario and what relevance I think it has to the general question.)
(Edit ends.)
So, to again try to summarize: you claim that people (and in particular, men) “sense the cues” that other people (and in particular, women) send. What exactly it means to “sense the cues”, in your usage, remains unclear. It likewise remains unclear how (or if) that claim is responsive to johnswentworth’s claim quoted above (with which I agree). It certainly seemed like you were disagreeing with him. But then, based on your later explanation of how you are using the relevant words, it seemed like you were not disagreeing with him but instead were saying… something… unrelated…?
Could you, perhaps, express your position on this matter using the same terminology as is being used by your interlocutors? (I understand if you prefer an idiosyncratic usage, and it’s fine if you return to that usage afterwards, but it would help if you could “translate” your point into the normal usage at least temporarily, just so that we could at least get clear on what it is that you’re actually saying.)
Even though it’s normally rude to point out so bluntly like this, I certainly prefer the respect of “What you’re saying sounds obviously dumb. What am I missing?” than the polite fictions that condescend and presuppose that you’re not only in error but also too emotionally immature to admit it.
Yes, of course, likewise.
EDIT: By the way, this is false in my experience:
You’re still misunderstanding what I’m saying though. Again, you can’t judge truth of a statement until you know what the statement means.
Well, I’m definitely confused about at least one thing. Namely, I am confused about whether you claim that you have already explained what you mean by “sense the cue”.
If you haven’t already explained it… well, the obvious question is “why not”, but never mind that, we can move past it, and instead proceed to: please do go ahead and explain it.
If you have already explained it (as seems to me to be the case), then what makes you think that, despite your explanation, I do not understand what you meant? (As far as I can tell, you’re not saying “aha, I conclude from this here statement of yours that you didn’t understand my explanation”, but rather “I know in advance that you didn’t understand my explanation, and on the basis of that fact, I conclude that this here statement of yours is false”. But how do you know it in advance…?)
flinching from admitting what you sense
Needless to say, I can’t decipher this comment, because I am once again (or still?) not sure how you’re using the word “sense” here…
Perhaps you could go ahead and explain what you think is wrong with the billboard analogy I offered?
As a meta-level comment, I’d like to note that I’ve asked quite a few questions to try to understand your points, and you’ve ignored almost all of them, whereas I have (as you see) tried to respond directly to your questions in my replies. (Perhaps it seemed to you like the questions were rhetorical? But no, I actually did want answers to them!) For example, in the grandparent I asked:
What do you mean by “matter” here?
That wasn’t a rhetorical question; I really would like to know what you meant when you disputed my alleged assumption that you have to be aware of things for them to “matter”. In what sense do things you’re not aware of “matter”? (I can think of some obvious cases where this is true—one need not be aware of electromagnetism for it to affect you, for instance—but presumably you don’t suspect me of being a solipsist with regard to physics, so this can’t be what you meant. On the other hand, if someone is trying to communicate something to you—“sending cues”—but you are not aware of this, then, clearly, you cannot be receiving the message that is being transmitted. Is that something you dispute? Or does “matter” mean something else entirely here?)
(And likewise with the other questions I asked; they were meant to help me understand your points, not as some sort of rhetorical ploys. But let’s start with that last question, at least; it’ll do for now.)
It all hinges on the fact that your assumption that you have to be aware of things for them to matter is unjustified—and, it turns out, very very false.
What do you mean by “matter” here?
Remember, we’re talking about the following situation:
- Woman attempts to “send cues”, with the intended result being that a certain man will perceive these cues and react in a certain desired way.
- Man has no idea that this is happening.
- Man does not react in the desired way to the supposed cues that the woman is supposedly sending (and how could he, not being aware of any such things?).
- Woman is annoyed, frustrated, etc., that she is not getting what she wants.
Now you’re claiming that, somehow, these cues are nevertheless being “sensed” (ok, sure), and also that they nevertheless “matter” in some way.
What is the meaning of “matter” in this context that makes your claim true?
EDIT: By the way, this is false in my experience:
But it is also true that people will call what I refer to as “sensing the cue” “sensing the cue”.
I’ve never encountered this usage from anyone other than you.
I think perhaps you have missed the point I was making, which is that what you call “being aware that you have sensed a cue” is just what everyone else calls “sensing the cue” (perhaps “perceiving the cue” might be a better phrase, by the way; that does seem to me to be more consonant with how the concepts of perception and sensation are used elsewhere…). Whatever we call it, the interesting and important thing is the part where the intended cue-recipient ends up having any idea whatsoever that a cue is being sent (or, more likely, instead fails to end up with any such idea).
Thus we had the following exchange:
Approximately nobody accurately picks up on womens’ subtle cues, including other women (at least that would be my strong guess, and is very cruxy for me here). … (Of course there’s an obvious alternative hypothesis: most men do pick up on such cues, and I’m overindexing on myself or my friends or something. I am certainly tracking that hypothesis, I am well aware that my brain is not a good model of other humans’ brains, but man it sure sounds like “not noticing womens’ subtle cues” is the near-universal experience, even among other women when people actually try to test that.)
In short, approximately everybody senses women’s cues whether they recognize it or not, whether they know what to do with it or not, and they’re only subtle and ambiguous to the extent that their purpose is served by being subtle and ambiguous.
Now, given the way you are using the phrase “sensing the cue”, we can now see that the second quote is totally non-responsive to the first. Like, it’s literally just a non sequitur.
An analogy: suppose that certain billboards, signs, etc., were designed in such a way that they secretly also worked like those Magic Eye pictures, and if you squinted at them just right, you could see a hidden image. Suppose that such special double-duty displays weren’t marked in any obvious way.
Now, suppose I said: “You know how some billboards and signs and such are secretly also magic eye images? I have no idea how to spot when a sign is one of those! Much less how to squint at them the right way, even if I did spot them…” And suppose you replied: “Well, you know, the light reflected from those signs is hitting your retina, so you can totally tell that the signs exist.”
Would that be an even remotely useful reply?
Does the reply point out any errors in the complaint, or contradict the complaint in any way at all? No, of course not. Does the reply talk about anything whatsoever to do with the reason why the problem exists? Not in the least. Can it possibly point the way to a solution? Not a chance.
If you follow this up with “ah, but there is a distinction between noticing ‘the billboard exists’ and noticing ‘I have noticed that the billboard exists’”, is that relevant or useful? It is not. Yes, I am indeed aware of billboards, and aware of my awareness of billboards, and aware of that… is this fact even slightly relevant to my (and most people’s) hypothetical inability to spot which billboards are, hypothetically, also secretly Magic Eye images? Alas, no.
Specifically, I assume you’re objecting to the first part “approximately everybody senses women’s cues whether they recognize it or not”, but you don’t seem to be noticing the distinction between “sensing the cue” and “being aware that you have sensed a cue, and having concluded that it counts as a cue”. These are easy to conflate, but they’re wildly different things.
What is the distinction between “sensing the cue” and “being aware that you have sensed a cue”?
If, in “approximately everybody senses women’s cues whether they recognize it or not”, you mean by the phrase “sense women’s cues” something which can be done without even being aware that you have sensed anything, then what exactly are you saying? Is this claim falsifiable at all? How would you falsify it?
How would the world look different in the following two cases:
- Everyone senses women’s cues, although some (many?) people do this without having any awareness at all that they have sensed anything whatsoever.
- Some people do not sense women’s cues.
What observations would you expect to make in one of those scenarios but not in the other?
Heck, in high school a girl told my mom that she wanted to have my baby, and I was still telling my friends “You’re wrong, she’s not into me”. I get that.
Which is perfectly sensible, because I have in fact encountered cases where women would say things like that to men, but give no other indication of being interested in said men, and would react with bafflement to suggestions that they were interested in dating said men, etc. (Being a third party in these cases, I could observe in a disinterested way, and found these observations quite instructive.)
The same, by the way, can be said of this:
But if she holds unbroken eye contact, flat out doesn’t respond to anything he says in attempt to distract her or test her resolve, and leans all the way in until her lips are mere millimeters from his, waiting for minutes until he responds.… that’s yin, but not something that can be missed, you know?
I have absolutely known women who have done this with guys whom they had no intention whatsoever to kiss or do anything else with.
At the same time, I couldn’t help but respond to what she was actually doing. When she’d jump in front of me to try to stop me and get my attention, I could change direction and walk around her, but I had to sense her presence in order to do that. Sure, I had alternate explanations for why she was doing what she was doing, but that shows recognition of the thing to be explained.
As far as I can tell, you are using the phrase “sense a cue” in a way that I can only describe as completely useless. Obviously it is impossible to literally not perceive the physical presence of a woman in front of you (assuming that you aren’t blind, it’s not dark, your eyes aren’t closed, etc.), but that is not what anyone means by “sense a cue”. What the phrase means is “recognize a specific action or behavior as a cue”.
“Recognition of the thing to be explained” is worth nothing.
For example, suppose I am walking down the street and I see a woman walking in the opposite direction toward me. Is this a “thing to be explained”? Taking the broad view, sure. What might the explanation be? Obviously it is “this woman happens to have some reason to be walking in that direction, just like any number of people who walk down streets every day”. What is the correct reaction? Nothing; no reaction is required or appropriate.
Or: I’m riding the subway and a woman brushes my arm as she walks past. Is this a “thing to be explained”? Sure. What might the explanation be? Obviously it is “it’s a crowded subway car and there’s no particular reason to take extreme care not to make physical contact with anyone, and this sort of accidental casual contact happens all the time”. What is the correct reaction? Ignore it; no reaction is required or appropriate.
Or: the woman in the cafe is smiling when she hands me my order. Is this a “thing to be explained”? Sure. What might the explanation be? Obviously it is “she smiles at everyone; she just smiled at the old lady before me, and now she’s smiling at the couple after me; being friendly is good for business, probably”. What is the correct reaction? Smile back, be polite, otherwise nothing; no special reaction is required or appropriate.
In each of these cases, I “recognized the thing to be explained”; and the explanation for the thing to be explained was that it was a totally mundane behavior that had nothing to do with anyone sending any cues.
In the cases of failing to notice cues, what happens is that a man “recognizes the thing to be explained” in the same way that I “recognized the thing to be explained” in my three examples above; he identifies the explanation for the thing as something mundane; he then does not react in any particular way, because generally in such mundane cases no special reaction is required or appropriate. But whoops! Actually the woman was strenuously sending all sorts of cues! But the man completely failed to perceive any of them, because the idea that these cues were somehow extremely obvious was just a fantasy in the woman’s head. All the man perceived was various behaviors which have various mundane explanations, just like the overwhelming majority of behaviors in which we engage every day.
So, back to the actually useful question: is it possible to not recognize a behavior or action as a deliberate cue that is being sent? Yes, absolutely it’s possible, it happens all the time and is definitely the explanation for approximately 100% of cases of “woman sends cues, guy doesn’t respond, woman does nothing more and doesn’t get date”, exactly as @johnswentworth wrote.
Approximately nobody accurately picks up on women’s subtle cues, including other women (at least that would be my strong guess, and is very cruxy for me here).
Indeed, this is well supported by innumerable examples from various social media of discussions where a man asks “the following situation happened involving a woman, please enlighten me as to its meaning”, and gets the most wildly divergent array of answers from women (most of whom, of course, are completely confident in their answer’s correctness, and many of whom express bafflement, or even indignation, that anyone could possibly think that the truth might be otherwise).
The other issue with your choice of denominator is that if the woman definitely wants the date she likely won’t be subtle.
You mean, like the woman in your anecdote about your friend tutoring in college…?
The problem with your argument is that it doesn’t at all explain all the cases where the woman definitely wants a date, is definitely interested in the guy, is very frustrated by (what she would characterize as) the guy’s obliviousness (and quite likely complains about this to her friends), and yet still won’t say anything.
When you have a situation where the woman knows the man is going to be interested in her
But of course this is an absurd requirement. If she knows he’s going to be interested, of course that makes it vastly easier!
… and yet, according to your own account, women still won’t say anything in that situation, despite having a guarantee of a positive response. What does that tell you?
really really obvious clues like leaning in and forcing the man to contend with the fact that she’s there waiting to be kissed
In fact there is no such thing as “forcing the man to contend with” anything. People (not just men) are, as it turns out, perfectly capable of totally ignoring a cue like this, and indeed of not even noticing it in the first place. A woman who thinks that leaning in and waiting to be kissed is somehow a guarantee that a man will correctly perceive the cue is sadly, sadly mistaken.
In short, approximately everybody senses women’s cues whether they recognize it or not, whether they know what to do with it or not, and they’re only subtle and ambiguous to the extent that their purpose is served by being subtle and ambiguous.
Sorry, but this is empirically false.
So you can only imagine the excitement when, a few years after LessWrong was established, we found that *literally all every part of your brain does is make Bayesian updates to minimize prediction errors.* Scott Alexander wrote several excited articles about it. I wrote several excited articles about it. And most everyone else said ok cool story we’re gonna ignore it and just do AI posting now.
Did we “find” this? That’s not what seems to me to have happened.
As far as I can tell, this is a speculative theory that some people found to be very interesting. Others said “well, perhaps, this is certainly an interesting way of thinking about some things, although there seem to be some hiccups, and maybe it has some correspondence to actual reality, maybe not—in any case do you have a more concrete model and stronger evidence?” and got the answer “more research is needed”. Alright, cool, by all means, proceed, research more.
And then nothing much came of it, and so of course it was thereafter mostly ignored. What is there to say about it or do with it?
Conservation of expected evidence, if I see an Amazon review page with reviews saying “instead of office chair, package contained bobcat” my odds they’re sending bobcats did go up at least somewhat.
Yes, of course. But your odds that it’s a competitor’s plot should also go up—and will end up higher by far. (This is one of the myriad examples of what Jaynes called the “resurrection of dead hypotheses”.)
If I see an Amazon / eBay / etc. review page with a bunch of reviews that say “instead of office chair, package contained bobcat”, my immediate assumption will be that a competitor of the seller has paid for a bunch of bad reviews.
This sort of thing happens regularly.
More generally, given a heuristic that you would apply to a situation, ask: who can profit by exploiting that heuristic? If such people exist, then assume that they are already exploiting your heuristic.
The task, then, is to determine whether bobcatters are more or less common than unscrupulous sellers who would pay for fake reviews that accuse their competitors of being bobcatters.
Thus also in social situations.
Determining and comparing respective base rates is not trivial, but without assuming that adversarial optimization exists, you will predictably fail to get the right answer, quite often. (Increasingly often, in fact, due to incentives.)
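(To make that concrete, here is a toy version of the odds calculation. Every number in it, priors and likelihoods alike, is a made-up assumption for illustration only, not an empirical claim about Amazon sellers:)

```typescript
// Toy posterior calculation for the bobcat-review example.
// All numbers are illustrative assumptions, not data.

// Assumed priors: sellers who actually ship bobcats are far rarer than
// sellers targeted by competitor-purchased fake reviews.
const priorBobcatSeller = 1e-7;
const priorFakeReviewTarget = 1e-4;
const priorOrdinarySeller = 1 - priorBobcatSeller - priorFakeReviewTarget;

// Assumed likelihood of seeing a page full of "package contained bobcat"
// reviews under each hypothesis.
const likelihoods = {
  bobcatSeller: 0.9,      // real bobcat shipments would surely produce such reviews
  fakeReviewTarget: 0.5,  // paid fake reviews might well take this form
  ordinarySeller: 1e-6,   // essentially never happens otherwise
};

// Unnormalized posteriors = prior * likelihood (Bayes' rule).
const postBobcat = priorBobcatSeller * likelihoods.bobcatSeller;
const postFake = priorFakeReviewTarget * likelihoods.fakeReviewTarget;
const postOrdinary = priorOrdinarySeller * likelihoods.ordinarySeller;
const total = postBobcat + postFake + postOrdinary;

console.log("P(bobcat seller | reviews) ≈", (postBobcat / total).toFixed(4));
console.log("P(competitor's fake reviews | reviews) ≈", (postFake / total).toFixed(4));
// Both posteriors rise relative to their priors, but under these assumed
// base rates the "fake reviews" hypothesis ends up far ahead.
```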
But is this because you didn’t know what cream cheese was?
In my experience, everyone already understands that cream cheese frosting is (a) sweet, and (b) delicious. I have never met anyone, nor heard of anyone, who was somehow under the impression that cream cheese frosting is in any way incongruous or weird.
In other words, as far as I can tell, the problem you are describing is completely nonexistent.
cured fish.
Why would I do that to myself? I don’t feel my sins deserve that level of punishment.
Perhaps you are not aware of the lox & cream cheese bagel sandwich, a venerable and beloved item of New York City cuisine. If you have not had this food, then you are missing out on a singular life experience, and you are spiritually impoverished by this lack. I suggest rectifying this omission forthwith.
Humans display a bias called scope neglect. Because we can’t intuitively grok how much larger some big numbers are than others, we have a tendency to treat big numbers all the same. People will pay as much to save 2,000 birds as 20,000 and 200,000 birds.
This is a deeply misleading characterization of that study.
If I got suddenly teleported to the court of Genghis Khan and proposed we vote on who’s in charge, this obviously doesn’t work.
Genghis Khan was, in fact, elected:
All Great Khans of the Mongol Empire, for example Genghis Khan and Ögedei Khan, were formally elected in a Kurultai; khans of subordinate Mongol states, such as the Golden Horde, were elected by a similar regional Kurultai.
I think you’ve had her buy an extra pair of boots.
Ah, true. So, $239.41, at the end.
(Of course, this all assumes that the cheap boots don’t get more expensive over the course of 14 years. Siderea does say that she spends $20 each year on boots, but that’s hard to take seriously over a decade-plus period…)
We might be talking about poverty at different orders of magnitude, and you might be writing off a lot of failures to purchase efficiently as “skill issue”… but being poor in skills and the capacity to hone them is, itself, a form of poverty.
Once we’ve redefined “poverty” to mean something other than poverty, we can obviously make all sorts of claims about it. Being “poor in skills and the capacity to hone them” can be the cause of poverty. Notice how this is a different cause from the one that “boots theory” posits.
As I’ve written, I have personally experienced my family being quite poor. Buying a roll of toilet paper instead of a whole package, or buying just one meal’s worth of food instead of a week’s worth, is definitely a “skill issue”.
Being poor is unpleasant in many ways. It being “expensive” is not one of them.
In another world, Siderea buys $20 boots and invests $260. …
I think that your calculation is a bit off. After a year, she’ll have $258.20 (i.e., ($260 * 1.07) − $20). After two years, $256.27 (i.e., ($258.20 * 1.07) − $20). And so on. After 14 years, she’ll have $219.41.
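(A minimal sketch of that recurrence, using the flat 7% return and the $20-per-year boot purchases from the example; the $239.41 figure from the exchange above corresponds to skipping that extra final pair:)

```typescript
// Start with $260, earn an assumed flat 7% per year, and buy a $20 pair
// of boots at the end of each year, for 14 years.
let balance = 260;
for (let year = 1; year <= 14; year++) {
  balance = balance * 1.07 - 20;
}
console.log(balance.toFixed(2));        // ≈ 219.41 (14 pairs of boots bought)
console.log((balance + 20).toFixed(2)); // ≈ 239.41 (without the extra final pair)
```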
Still better than buying the expensive boots—in purely financial terms.
(Inflation-adjustment is another important point, of course. That $200 in 2005 would be $265 in 2018 dollars.)
In short, yes, this is indeed a very poor example—ironic, as it’s a real-life version of the original example!
(This is also often missing when people talk about buying versus renting. Yes, the mortgage is often lower than rent, and house value is likely higher at the end, but you gave up investing your deposit. How do those effects compare? Probably depends on time and place.)
This is precisely what made renting come out far ahead, in my aforementioned calculation. (And this is without even considering the time value of money.)
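(To illustrate how large that foregone-investment effect can be, here is a toy sketch of just that term; the deposit amount and both growth rates are assumptions chosen purely for illustration, not figures from my actual calculation:)

```typescript
// The opportunity-cost term in question: a deposit tied up in a home grows
// (roughly) with home prices, while the same money invested grows with
// market returns. Both rates below are assumptions.
const deposit = 100_000;
const years = 30;
const homeAppreciation = 0.03; // assumed annual home-price growth
const marketReturn = 0.07;     // assumed annual investment return

const asEquity = deposit * Math.pow(1 + homeAppreciation, years);
const asInvestment = deposit * Math.pow(1 + marketReturn, years);

console.log("Deposit as home equity: ", Math.round(asEquity));     // ≈ 242,726
console.log("Deposit as investments: ", Math.round(asInvestment)); // ≈ 761,226
// A few points' difference in annual rates, compounded over decades, is
// exactly the kind of effect that can dominate the rent-vs-mortgage gap.
```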
buying poor quality food and then having to pay for medical care
I have seen this sort of thing mentioned, but I don’t think that it works.
Let’s set aside for the moment the somewhat tenuous and indirect connection between the food you eat today, and the medical care you will require, some years down the line. (If you end up with heart disease in ten years because you’ve been eating poorly, surely this can’t be any part of the reason why you’re poor today—that would require some sort of anti-temporal causation!)
And let’s also set aside this business of “having to pay for medical care”. (Even in the United States—famously the land of medical bills—the poorest people are also the ones who are eligible for Medicaid. You’re more likely to have to pay for medical care if you’re sufficiently well-off to eat well than if you’re very poor!)
Let’s instead consider just this notion that there’s a causal connection between being poor and eating poor quality food—and specifically, the sort of food that contributes to poor health outcomes—because you can’t afford healthy food.
There was a time when my own family was very poor. (We’d just arrived in the United States as brand-new immigrants, with little more than the proverbial clothes on our backs; my mother had to work two, or sometimes three, jobs just to pay the rent; even my grandfather, then already of retirement age, got a job delivering newspapers.) As I would routinely help my grandmother with grocery shopping, I was very well acquainted with how much it cost to feed a family on a very tight budget, what sorts of purchasing decisions needed to be made, etc.
And what I can tell you is that the sort of food we ate was not cheap-but-unhealthy. Rather, it was cheap-but-healthy. What was missing was luxuries and variety—not nutrition! You can, in fact, have a healthy (and even tasty) diet on a tight budget. I have extensive personal experience with this.
The difference between my family and the poor people who buy junk food is much more cultural than anything else. Specifically, the missing ingredient is cultural transmission of knowledge of, and expertise in, preparation of nutritious, satisfying food under severe financial constraints. My family’s cultural background contains a tremendous amount of accumulated wisdom on this topic. Someone who, for whatever reason, lacks access to such cultural metis, will be severely disadvantaged in this regard. But this has nothing to do with poverty as such.
buying a cheap car that costs more in repairs
This, too, is a dubious example. The key word here is “more”. More than what?
Do you mean:
(1) “Car A (which costs less), over a period of N years, requires repairs of cost totaling X; car B (which costs more), over the same period of N years, requires repairs of cost totaling Y; X > Y”
If so, then notice that this does not support “boots theory”! But perhaps you instead mean:
(2) “Car A (which costs less), over a period of N years, requires repairs of cost totaling X; car B (which costs more), over the same period of N years, requires repairs of cost totaling Y; (cost[A] + X) > (cost[B] + Y)”
This might support “boots theory”. But is it plausible?
The car I currently drive was bought used, at a cost of ~$13,000, in 2016. Since then I have spent ~$2,000 on repairs (not counting inspections and oil changes, which will apply to everyone equally), for a total of ~$15,000.
In 2016, I could instead have purchased a much cheaper used car, for ~$3,000. Suppose that this had been all I could afford. For this to have turned out to be a “boots theory”-supporting example, I’d then have to spend ~$12,000, in the 9-year period since, on repairs.
Seems improbable. Indeed, at that point, I could just buy another car. I could buy a new car four times! (And notice that this scenario would then still not support “boots theory”.)
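(For concreteness, the arithmetic behind that “~$12,000” figure, using only the numbers already given above:)

```typescript
// Condition (2) with the figures from my own example.
const costA = 3_000;    // the much cheaper used car
const costB = 13_000;   // the car actually bought
const repairsB = 2_000; // repairs on car B over ~9 years

// For "boots theory" to hold: cost[A] + repairsA > cost[B] + repairsB,
// i.e. the cheap car's repairs would have to exceed:
const breakEvenRepairsA = costB + repairsB - costA;
console.log(breakEvenRepairsA); // 12,000 — i.e. ~$12,000 of repairs on a $3,000 car
```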
payday loans
This is a complicated topic, and I am not qualified to opine confidently on it. I will say only that it seems like a highly non-central example of the alleged phenomenon.
paying rent instead of buying and building equity
It’s interesting that you should mention this. I currently rent an apartment. At one point, some years ago, I realized that my financial situation was such that, if I wanted to, I could buy a condo or co-op. I recalled the received wisdom that owning is better than renting, and looked into my options. After doing the math, I concluded that, on a time horizon of 10, 20, 30, or 40 years[1], renting unambiguously came out ahead—and not just slightly ahead, but way ahead. Buying an apartment would have amounted to setting a big pile of money on fire for no reason whatsoever. The received wisdom was diametrically wrong.
(One can make all sorts of objections to this, along the lines of “but isn’t this just because of the crazy real estate market in NYC”, or “but isn’t this just because of historically-unusual economic situation X which currently obtains”, etc. But what is the use of that? The reality is what it is.)
buying consumable goods instead of investments
Investing is certainly useful, if you can do it. But what in the world does that have to do with “boots theory”, or with the “phenomenon where people spend a lot to buy things that are poor quality instead of longer lasting higher quality things”?
Your “instead of” in the “consumable goods instead of investments” clause is fallacious, I think. Poor people spend some amount of money on consumable goods, have no money left over, and thus do not invest. Rich people spend more money on consumable goods than poor people, still have money left over after spending more, and invest the remainder. This is unambiguously not consistent with “boots theory”.
working jobs instead of building passive income
This, too, seems like a non sequitur. You can only be “building passive income” if you already have a lot of money. You can’t get there by spending less money. What’s more, if you do have a lot of money (which you can invest, thus securing a source of passive income), nothing prevents you from also working a job, and earning additional money. Indeed, plenty of people do just that. Many don’t, of course—but this is not because it would somehow end up costing them more money than not working! Rather, it’s because they (quite reasonably) don’t want to work if they don’t have to.
There is still a real phenomenon where people spend a lot to buy things that are poor quality instead of longer lasting higher quality things.
Well, I should like to see some examples. So far, our tally of actual examples of this alleged phenomenon seems to still be zero. All the examples proffered thus far… aren’t.
Or longer, really—but “longer” is likely to be a moot point, for various reasons. ↩︎
Boots theory captures the “being poor is expensive” element that’s true in Ankh-Morpork and also true on Earth
It certainly is not true on Earth.
As I have written:
It sounds so wise and worldly! And it’s also complete bullshit.
Because let’s say that I want a pair of sneakers (i.e., shoes that are comfortable and won’t hurt my feet) that won’t wear out in a couple of years, so that I can buy them once and wear them for ten years. Why, I’d have to pay five times the price of an ordinary pair of sneakers! But then I’d have my ten-year shoes. Right?
WRONG.
Oh, sure, I can buy a pair of sneakers that costs five times what normal sneakers cost. And they’ll last for just about as long—at best!
And it’s the same with cookware, it’s the same with computer hardware, it’s the same with a whole lot of things.
The reason why people mistakenly come to believe this “boots theory” is not that more expensive stuff lasts longer, out of proportion to the price difference—but rather, that stuff purchased in the past, whose inflation-adjusted price was substantially higher than the current price of the current cheap goods, lasts longer. But this is not because of the price difference.
It’s because they don’t make ’em like they used to.
I’ve become so reliant on a GPS that using maps to direct myself feels like a foreign concept. Google Maps, Waze, whatever, if it’s outside of my neighbourhood, I’m punching in the address before I head out. Sometimes I notice the GPS taking slower routes or sending me the wrong way as I get out of a parking lot, but regardless, I just follow its directions, because I don’t have to think. Though I know, without this convenient tool, I’d be lost (literally).
If you recognize this problem, why not stop using a GPS? Navigating without a GPS is not difficult. You could regain this skill easily. What’s stopping you?
Given how spectacularly harmful psychedelic drugs can often be, I think we’d better hope that there isn’t any such “sensory-input-only” method of inducing psychedelic states.
your proposal would have the displayed image revert back to the first frame on `mouseleave`, IIUC
Yes, correct.
I was hoping to have the hover-mode animation seamlessly pause and unpause
This SO question has several answers, all of which seem like reasonable solutions to me (if I were doing this, I’d try them all and pick the most performant one, most likely).
The “auto” icon is the sun if auto says light mode, and the moon if it says dark mode. Though ideally it’d be self-explanatory.
Hmm, I see. Alright, that’s not too bad, given the labels.
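(For reference, the standard way an “auto” setting resolves to light or dark, and hence which icon it would show, is the prefers-color-scheme media query; the sketch below illustrates that generic technique and is not a claim about how your site actually implements it:)

```typescript
// How an "auto" mode typically resolves to light or dark (and hence which
// icon it would display). A generic sketch, not this site's actual code.
type Mode = "auto" | "light" | "dark";

function resolvedMode(selected: Mode): "light" | "dark" {
  if (selected !== "auto") return selected;
  return window.matchMedia("(prefers-color-scheme: dark)").matches
    ? "dark"
    : "light";
}

console.log(resolvedMode("auto") === "dark" ? "🌙" : "☀️");
```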
I found setting it in smallcaps to be quite distracting, so I settled for italics. What do you think?
Seems reasonable. (I agree that underlining the links is no good.)
Auto-dark mode!
Good; however:
- “Auto” has the same icon as light—confusing!
- “Auto” has a label, while the others do not—likewise confusing
- The “Auto” label is styled just like the sidebar links, but of course it’s not a link at all (indeed, it’s not clickable or interactable in any way)
For #1, I suggest the “black & white cookie” (a.k.a. “contrast”) icon, as seen on gwern.net (this trio of “B&W cookie” / “sun” / “crescent moon” for “auto” / “light” / “dark” is becoming increasingly common for tri-state mode selectors, in my experience).
For #2, you should label all or none. My advice would be “all”, with a compromise: show labels on hover only (and make them a bit less obtrusive, and style them differently: smaller text, put the label off to the left—there’s room, given the shape of the logo—and perhaps set them in smallcaps? or even a different font, perhaps a sans; this would also solve problem #3).
I can’t play/pause the GIF on hover because GIFs don’t allow that (AFAIK).
- Make a static version of the image (the first frame of the animation, perhaps?). Set that image to load by default.
- At the end of page load, in the background, load the animated version.
- On hover (by adding a listener to the `mouseenter` event), rewrite the `src` attribute of the image element to point to the animated image.
- On un-hover (`mouseleave` event), rewrite the `src` back to the static one. (See the sketch below.)
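(A minimal sketch of that swap in plain DOM code; the file names and element id are hypothetical placeholders:)

```typescript
// Swap a static first-frame image for the animated GIF on hover.
// "logo", "logo-static.png", and "logo-animated.gif" are placeholders.
const img = document.getElementById("logo") as HTMLImageElement;
const staticSrc = "logo-static.png";     // first frame, loaded by default
const animatedSrc = "logo-animated.gif";

// Preload the animated version in the background once the page has loaded.
window.addEventListener("load", () => {
  new Image().src = animatedSrc;
});

// Animated version on hover, static version on un-hover.
img.addEventListener("mouseenter", () => {
  img.src = animatedSrc;
});
img.addEventListener("mouseleave", () => {
  img.src = staticSrc;
});
```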
I don’t really mind the zeros. If I hear from more people that the slashed zeros bother them, I’ll reconsider.
Slashed zeroes are Correct™.
Just the benefits gained by the small minority of kids actually being taught something?
Certainly we lose that, yes.
Whether we lose other things is beyond the scope of the main point that I am making here. That point is: if we switch from teachers teaching kids to parents teaching kids, we cannot assume that we thereby go from kids not being taught effectively, to kids being taught effectively. That is because most parents are not competent to effectively teach their kids most (or, often, all) academic subjects.
I would guess (but haven’t checked) that most of the teachers qualified to teach are at private schools anyway.
I, for one, did not attend private schools.[1] My comments upthread, about my own teachers, referred to a public school. (The junior high school I attended, where the teachers were also substantially more competent than the “average teacher” described in this discussion, was also a public school.)
Your guess may nonetheless be correct in the statistical aggregate; I don’t know enough to comment on that.
Except for a ~2 month period in 3rd grade; the school in question was substantially worse than all public schools I have attended. ↩︎
I think much of the discussion of homeschooling is focused on elementary school.
Unfortunately, this is not the case. There is a motte-and-bailey situation here, where the motte is “some kids can be homeschooled at the elementary school grade level by some exceptional parents” and the bailey is “abolish schools and homeschool everyone for everything at all grade levels”.
I can provide you cited quotes if you like; or you can take my word that I’ve seen many homeschooling advocates quite unambiguously arguing for homeschooling beyond the elementary-school level.
But in any case, very few high school students are taught chemistry by a Ph.D in chemistry with 30 years work experience as a chemist.
Yes, of course.
If most of your teachers had Ph.D or other degrees in the subjects they taught, then you were very fortunate.
Absolutely. No argument there.
My point, however, is that what it takes to teach children a subject is both skill at teaching, in general (which most people, parents included, do not have), and substantial domain training/expertise (whether that comes from a degree, preferably an advanced degree, in the subject, or from extensive professional experience, or both—and which most people, including most parents, likewise do not have, for most or even all the subjects which are commonly part of a school curriculum).
You might object: doesn’t this imply that most kids in the country are not being taught, and cannot be taught, most of their subjects by anyone who is qualified to teach them those things (as neither their parents nor any of their teachers at school meet those qualifications)?
I answer: correct.
And if we’re going to discuss atypical situations, I do in fact think that I would be competent to teach all those subjects at a high school level.
Well, I am not personally acquainted with you and am not familiar with your academic and professional background, so of course I can’t confidently agree or disagree. However, I hope you’ll forgive me for being very skeptical about your claim.
My mathematics teachers in high school were qualified to teach me mathematics because they had degrees (mostly doctorates, but a couple did have lesser degrees) in mathematics.
My chemistry teachers in high school were qualified to teach me chemistry because they had (respectively) a Ph.D. in chemistry and three decades of experience as a working chemist in industry.
My computer science teachers in high school were qualified to teach me computer science because they had degrees in computer science (and were working programmers / engineers).
My biology teachers in high school were qualified to teach me biology because they had degrees (one had a doctorate, another a lesser degree) in biology.
My physics teacher in high school was qualified to teach me physics because he had a degree in physics.
My drafting / technical drawing / computer networking / other “technology” teachers were qualified to teach me those things because they had extensive professional experience in those fields.
(I am not sure what degrees my humanities teachers had, but those subjects aren’t important, so who cares, really. Also, some of them were not qualified to teach anything whatsoever.)
Someone who has a degree in education only is, indeed, not qualified to teach mathematics / chemistry / physics / biology / computer science / any other STEM field at the high school or even middle school level. (Even the latter grades of elementary school are a stretch.)
I want to follow up on this a bit more, because this is a point which I’ve discussed with homeschooling advocates before, and it’s one which seems just wildly underappreciated in these sorts of discussions.
I will mention one anecdotal example, which is, I think, very generous to the pro-homeschooling side: namely, my own case. Now, my mother is a professional educator (now retired). She has a doctorate in education. She taught (English—specifically ESL / “Business Communication”) for several decades. She worked with school age kids and with adults in continuing-education programs. She did curriculum design, consulted for leading publishers of educational materials, won teaching awards, etc.
Do I think that my mother was qualified to homeschool me, and to teach me all the subjects that I in fact learned in school?
Hell no. No way, no how, not a chance, not even close, not in a million years. Absolutely, unquestionably, categorically not.
And if that’s so, then what possible hope does the average parent have?
Ugh, this is totally my fault, but I did mean “first paragraph”. (Second paragraph of the comment, of which the first paragraph is the quote… yeah, I know; I wouldn’t have figured it out either…)
What I am saying is: yeah, your second paragraph makes sense. But… aren’t you just describing exactly the same thing that you, in your first paragraph, said would be bad?
Like… you say that “if no home schoolers are allowed to be as bad as the worst public schools”, this would be bad, would put an undue cost on homeschooling, etc. But then you say that “a right to an education at least as good as the worst public school education” would be fine, but meaningless in practice.
But…
… aren’t those the same thing??
if no home schoolers are allowed to be as bad as the worst public schools
a right to an education at least as good as the worst public school education
What’s the difference? You can’t just be leaning on the “greater than” vs. “greater than or equal to” distinction here… right? (Because that’s obviously a trivial difference!) Other than that, are these two scenarios not literally, exactly the same? What am I not seeing here?
Instead, the actual thesis of many against homeschooling, when they’re not making up things like the claims earlier in this section, is flat out that parents are not qualified to teach their children.
That’s just true, though. Most parents aren’t even slightly qualified to teach their children. Is this not obvious…?
EDIT:
It is beyond absurd to think that an average teacher, with a class of 24 kids, couldn’t be outperformed by a competent parent focusing purely on their own child.
But most parents aren’t competent, at all, in any way whatsoever, especially at teaching anyone anything (but also at most other things).
The idea that if you don’t specifically have an ‘education degree’ that you can’t teach things is to defy all of human experience and existence. Completely crazy. And yet.
Oh, no, no, that’s not the idea at all!
The idea, rather, is that you probably can’t teach things, regardless of whether you have an “education degree”.
And that idea is completely consistent with all of human experience and existence.
Most people are stupid, incompetent, and very bad at teaching (and at most other things).
Er… I think you maybe got the adjectives mixed up a bit? As written, your second paragraph doesn’t make any sense.
Did you perhaps mean… “good” / “as bad as the best”…? But that is also weird… yeah, I don’t understand what you had in mind there. Clarify, please?
When a magnificent body is just one more of the things that is yours for the asking, what will you do with it in paradise?
See also Stanislaw Lem on this subject:
“The freedom I speak of, it is not that modest state desired by certain people when others oppress them. For then man becomes for man—a set of bars, a wall, a snare, a pit. The freedom I have in mind lies farther out, extends beyond that societal zone of reciprocal throat-throttling, for that zone may be passed through safely, and then, in the search for new constraints—since people no longer impose these on each other—one finds them in the world and in oneself, and takes up arms against the world and against oneself, to contend with both and make both subject to one’s will. And when this too is done, a precipice of freedom opens up, for now the more one has the power to accomplish, the less one knows what ought to be accomplished.”
I’d bet that I’m still on the side where I can safely navigate and pick up the utility, and I median-expect to be for the next couple months ish.
With respect, I suggest to you that this sort of thinking is a failure of security mindset. (However, I am content to leave the matter un-argued at this time.)
… if you’re going to be that paranoid about LLM interference (as is very reasonable to do), it makes sense to try and eliminate second order effects and never talk to people who talk to LLMs, for they too might be meaningfully harmful e.g. be under the influence of particularly powerful LLM-generated memes.
Yes… this is true in a personal-protection sense, I agree. And I do already try to stay away from people who talk to LLMs a lot, or who don’t seem to be showing any caution about it, or who don’t take concerns like this seriously, etc. (I have never needed any special reason to avoid Twitter, but if one does—well, here’s yet another reason for the list.)
However, taking a pure personal-protection stance on this matter does not seem to me to be sensible even from a selfish perspective. It seems to me that there is no choice but to try to convince others, insofar as it is possible to do this in accordance with my principles. In other words, if I take on some second-order effect risk, but in exchange I get some chance of several other people considering what I say and deciding to do as I am doing, then this seems to me to be a positive trade-off—especially since, if one takes the danger seriously, it is hard to avoid the conclusion that choosing to say nothing results in a bad end, regardless of how paranoid one has been.