X and Y are cooperating to contain people who object-level care about A and B, and recruit them into the dialectic drama. X is getting A wrong on purpose, and Y is getting B wrong on purpose, as a loyalty test. Trying to join the big visible org doing something about A leads to accepting escalating conditioning to develop the blind spot around B, and vice versa.
X and Y use the conflict as a pretext to expropriate resources from the relatively uncommitted. For instance, one way to interpret political polarization in the US is as a scam for the benefit of people who profit from campaign spending. War can be an excuse to subsidize armies. Etc.
I wrote about this here: http://benjaminrosshoffman.com/discursive-warfare-and-faction-formation/
I can’t tell quite what you think you’re saying because “worse” and “morality” are such overloaded terms that the context doesn’t disambiguate well.
Seems to me like people calling it “evil” or “awful” are taking an adversarial frame where good vs evil is roughly orthogonal to strong vs weak, and classifying the crime as an impressive evil-aligned act that increases the prestige of evil, while people calling it disgusting are taking a mental-health frame where the crime is disordered behavior that doesn’t help the criminal. Which one is a more helpful or true perspective depends on what the crime is! I expect people who are disgusted to be less tempted to cooperate with the criminal or scapegoat a rando than people who are awed.
Possessing a home also imposes costs on everyone else - it costs scarce materials and labor to build, equip, and electrify/warm/cool/water a home, and it uses up scarce space in a way that excludes others. It’s not obvious that a homeless person who works & is taxed, and is thus contributing to collective capacity to build and maintain the amenities they take advantage of, is a free rider; you’d need to actually do the math to demonstrate that.
Reality is sufficiently high-dimensional and heterogeneous that if it doesn’t seem like there’s a meaningful “explore/investigate” option with unbounded potential upside, you’re applying a VERY lossy dimensionality reduction to your perception.
There’s a common fear response, as though disapproval = death or exile, not a mild diminution in opportunities for advancement. Fear is the body’s stereotyped configuration optimized to prevent or mitigate imminent bodily damage. Most such social threats do not correspond to a danger that is either imminent or severe, but are instead more like moves in a dance that trigger the same interpretive response.
It's true that people who ask for "collaborative truth-seeking" are lying, but false that no one does it. Some things someone might do to try to collaborate on seeking the truth instead of pushing a thesis are:
- Active listening (e.g. trying to restate someone's claims and arguments in one's own words, especially where they seem most unclear or surprising.)
- Extending interpretive labor to try to infer the cause of a disagreement.
- Offering various considerations for how to think about a question instead of pushing a party line - and clarifying the underlying model in general terms even when one does have a clear thesis.
IME people are perfectly able to distinguish this from less collaborative behavior, though some are more likely to respond strongly positively, and others are more likely to complain that the first two are "judgmental," "accusatory," or "mind-reading," and that the third is "unclear" because it doesn't include a command to endorse some particular conclusion. The second group seems like it overlaps a lot with the sorts of people who ask for the sort of "epistemic charity" you're complaining about.
People who are engaged in collaborative truth-seeking are more likely to talk about or simply demonstrate specific ways to accomplish particular component truth-seeking tasks better together, which is collaborative, and less likely to complain vaguely about how you should be more "collaborative," which is not.
https://www.theonion.com/why-do-all-these-homosexuals-keep-sucking-my-cock-1819583529 https://www.theonion.com/why-cant-anyone-tell-im-wearing-this-business-suit-iron-1819584239
I’m complying with Sinclair’s explicit preference to be treated as someone who might possibly do crimes, by not censoring the flow of credence from “people who don’t expect me to do crimes to them are making a mistake” to “I have done crimes to such people.” You are asking me to do exactly what Sinclair complained about and assume that they’re necessarily harmless, or to pretend to do this.
Wouldn’t that imply more upside than downside in staying over?
Huh, I notice I casually used male pronouns here when I previously wasn’t especially inclined to. I guess this happened because I dropped politeness constraints to free up working memory for modeling the causal structure of the problem.
If this had been a lower-latency conversation with the implied greater capacity to make it awkward to ignore a legitimate question, my first reply would have been something like, “well, did you actually assault them? Seems like an important bit of information when assessing whether they made a mistake.” And instead of the most recent comment I’d have asked, “You identify as a woman. Do you think you are being naïve, or devaluing your sexualness or cleverness or agency? If so, why? If not, why?”
Examples of info she might have had:
- She was hoping to have sex with Sinclair, so their sexual advances would not have been unwelcome.
- Harassment from acquaintances of her social class is more common than stranger assault but much less likely to be severely bad - acquaintance assault is socially constrained and thin-tailed, stranger assault is deviant and fat-tailed - which is not adequately captured by the statistics.
- She’s not the sort of person who can be easily traumatized by, or would have a hard time rejecting, unwanted advances.
- Sinclair is in fact discernibly unlikely to assault her because they’re obviously nonaggressive, sex-repulsed, or something else one can pick up from a vibe.
- Sinclair’s very small and she could just break Sinclair if she needed to.
Yes. It seems like RobertM is trying to appeal to some idea about fair play, by saying that people shouldn’t make even disjunctive hypothetical accusations because they wouldn’t like it if someone did that to them. But it seems relevant to evaluating that fairness claim that some accusations are discernibly more justified than others, and in this case RobertM seems not to have been able to think of any plausible crimes to disjunctively accuse me of. I am perplexed as to how “true accusations are better than false ones and you can discover by thinking and investigating which statements are more likely to be true and which are more likely to be wrong” seems to have almost fallen out of the Overton window for some important subset of cases on less wrong dot com, but that seems to be where we are.
Which unspecified but grossly immoral act did the plain text of my comment seem like it implied a confession of?
They imply irrationality via failure to investigate a confusion, so I thought it was within scope on a rationality improvement forum to point that out. Since there exists an alternative coherent construal I thought it was good practice to acknowledge that as well.
The comment reported a trend of accurate appraisals characterized as mistakes, with an illustrative anecdote, not an isolated event. Other parts of the comment, like the bit about how not treating them as a likely assailant is "devaluing my sexualness or cleverness or agency", imply an identification of agency with unprovoked assault. This is not ambiguous at all. It seems like on balance people think that politeness calls for pretending not to understand when someone says very overtly that they mean people ill, want to be perceived as violent and aggressive, etc., up until it's time to scapegoat them.
If someone keeps asking "why aren't these women scared of me as a potential rapist?", but isn't actually raping any of them, well, there's an obvious answer there - they're using some information you're not tracking - & it makes no sense not to propagate the confusion upstream to the ideology that causes you to make wrong statistical predictions about yourself that the people around you aren't fooled by.
Not saying the obvious answer is sufficient on its own, but "what are they tracking that I'm not?" would be a reasonable epistemic response, and "people keep being wrong by accurately predicting my behavior when that goes against my ideology" is not.
Not very, but it's the only coherent construal.
This seems like you are either confessing on the public internet to committing assault, or treating a correct prediction as discrediting because of the strength of your prior against the idea that a woman might accurately evaluate a man as unlikely to assault her.
I agree that streetlamp effects are a problem. I think you are imagining a different usage than I am. I was imagining deferring to LockPickingLawyer about locks so that I could only spend about 5 minutes on that part of the problem, and spend whatever time I saved on other problems, including other aspects of securing an enclosure. If I had $100M and didn't have friends I already trusted to do this sort of thing, offering them $20k to red-team a building might be worth it if I were worried about perimeter security; the same mindset that notices you can defeat some locks by banging them hard seems like it would have a good chance at noticing other simple ways to defeat barriers e.g. "this window can be opened trivially from the outside without triggering any alarms".
Holding math duels to the standard of finding high-value problems to work on just seems nuts; I meant them as an existence proof of ways to measure possession of highly abstract theoretical secrets. If someone wanted to learn deep math, and barely knew what math was, they could do a lot worse than hiring someone who won a lot of math duels (or ideally whoever taught that person) as their first tutor, and then using that knowledge to find someone better. If you wanted to subsidize work on deep math, you might do well to ask a tournament-winner (or Fields medalist) whose non-tournament work they respect.
I went through a similar process in learning how to use my own body better: qigong seemed in principle worth learning, but I didn't have a good way to distinguish real knowledge (if any) from bullshit. When I read Joshua Waitzkin's The Art of Learning, I discovered that the closely related "Martial" Art of Tai Chi had a tournament that revealed relative levels of skill - and that William C. C. Chen, the man who'd taught tournament-winner Waitzkin, was still teaching in NYC. So I started learning from him, and very slowly improved my posture and balance. Eventually one of the other students invited me to a group that practiced on Sunday mornings in Chinatown's Columbus Park, and when I went, I had just enough skill to immediately recognize the man teaching there as someone who had deep knowledge and was very good at teaching it, and I started learning much faster, in ways that generalized much better to other domains of life. This isn't the only search method I used - recommendations from high-discernment friends also led to people who were able to help me - but it's one that's relatively easy to reproduce.
One more thing: the protagonists of The Matrix and Terry Gilliam’s Brazil (1985) are relatively similar to EAs and Rationalists so you might want to start there, especially if you’ve seen either movie.
Hmm... firstly, I hope they do not think and act like that.
Maybe this was unclear, but I meant to distinguish two questions so that you could try to answer one somewhat independently of the other:
1. What determines various authorities' actions?
2. How should a certain sort of person, with less or different information than you, model the authorities' actions?
Specifically, I was asking you to consider one hypothesis as the answer to question 2 - that for a lot of people who aren't skilled social scientists, the behavior of various authorities can look capricious or malicious even if other people have privileged information that allows them to predict those authorities' behavior better and navigate interactions with them relatively freely and safely.
To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation. For example, if you're wrong about how much violence will be applied and by whom if you stop conforming, you might mistakenly physically attack someone who was never going to hurt you, under the impression that it is a justified act of preemption.
On this model, the way in which the behavior of people who've decided to stop conforming seems bizarre and erratic to you implies that you have a lot of implicit knowledge of how the world works that they do not. Another piece of fiction worth looking at in this context is Burroughs's Naked Lunch. I've only seen the movie version, but I would guess the book covers the same basic content - the disordered and paranoid perspective of someone who has a vague sense that they're "under cover" vs society, but no clear mechanistic model of the relevant systems of surveillance or deception.
Ascorbic acid seems to be involved in carbohydrate metabolism, or at least in glucose metabolism, which may be why the small amounts of vitamin C in an all meat diet seem to be sufficient to avoid scurvy - negligible carbohydrate intake means a reduced requirement for vitamin C. Both raw unfiltered honey and fruits seem like they don't cause the kind of metabolic derangement attributed to foods high in refined carbohydrates like refined grains and sugar. Empirically, high-carbohydrate foods in the ancestral diet are usually high in vitamin C. Honey seems like an exception, but there might be other poorly understood micronutrients in it that help as well. So it seems probable but not certain that taking in a lot of carbohydrates without a corresponding increase in vitamin C (and/or possibly other micronutrients they tend to come with in fresh fruit) could lead to problems.
Seeds (including grains) also tend to have high concentrations of antinutrients, plant defense chemicals, and hard to digest or allergenic proteins (these are not mutually exclusive categories), so it might be problematic in the long run to get a large percentage of your calories from cake for that reason. Additionally, some B vitamins like thiamine are important for carbohydrate metabolism, so if your sponge cake is not made from a fortified flour, you may want to take a B vitamin supplement.
Finally, sponge cake can be made with or without a variety of adulterants and preservatives, and with higher-quality or lower-quality fats. There is some reason to believe that seed and vegetable oils are particularly prone to oxidation and may activate torporific pathways causing lower energy and favoring accumulation of body fat over other uses for your caloric intake, but I haven't investigated enough to be confident that this is true.
I wouldn't recommend worrying about glycemic index, as it's not clear high glycemic index causes problems. If your metabolism isn't disordered, your pancreas should be able to release an appropriate amount of insulin, causing the excess blood sugar to be stored in fat or muscle cells. If it is disordered, I'd prioritize fixing that over whatever you're trying to do with a "bulk." Seems worth reflecting on the theory behind a "bulk," though, as if you're trying to increase muscle mass, I think the current research suggests that you want to:
- Take in enough protein
- Take in enough leucine at one time to trigger muscle protein synthesis
- Take in enough calories to sustain your activity level
I think part of what happens in these events is that they reveal how much disorganized or paranoid thought went into someone's normal persona. You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets - and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.
A lot of people seem to navigate life as though constantly under acute threat and surveillance (without a clear causal theory of how the threat and surveillance are paid for), expecting to be acutely punished the moment they fail to pass as normal - so things they report believing are experienced as part of the act, not the base reality informing their true sense of threat and opportunity. So it's no wonder that if such people get suddenly jailbroken without adequate guidance or space for reflection, they might behave like a cornered animal and suddenly turn on their captors seemingly at random.
For a compelling depiction of how this might feel from the inside, I strongly recommend John Carpenter's movie They Live (1988), which tells the story of a vagrant construction worker who finds an enchanted pair of sunglasses that translate advertisements into inaccurate summaries of the commands embedded in them, and make some people look like creepy aliens. So without any apparent explanation, provocation, or warning, he starts shooting "aliens" on the street and in places of business like grocery stores and banks, and eventually blows up a TV transmission station to stop the evil aliens from broadcasting their mind-control waves. The movie is from his perspective and unambiguously casts him as the hero.

More recently, the climax of The Matrix (1999), a movie about a hacker waking up to systems of malevolent authoritarian control under which he lives, strikingly resembles the Columbine massacre (1999), which actually happened. See also Fight Club (1999).

Office Space (1999) provides a more optimistic take: A wizard casts a magic spell on the protagonist to relax his body, which causes him to become unresponsive to the social threats he was previously controlled by. This causes his employer to perceive him as too powerful for his assigned level in the pecking order, and he is promoted to rectify the situation. He learns his friends are going to be laid off, is indignant at the unfairness of this, and gets his friends together to try to steal a lot of money from their employer. This doesn't go very well, and he eventually decides to trade down to a lower social class instead and join a friend's construction crew, while his friends remain controlled by social threat.
I've noticed that on phone calls with people serving as members of a big bureaucratic organization like a bank or hospital, I can't get them to do anything by appealing to policies they're officially required to follow, but talking like I expect them to be afraid of displeasing me sometimes makes things happen. On the positive side, they also seem more compliant if they hear my baby babbling in the background, possibly because it switches them into a state of "here is another human who might have real constraints and want good things, and therefore I sympathize with them" - which implies that their normal on-call state is something quite different.
I'm not sure whether you were intentionally alluding to cops and psychiatrists here, but lots of people effectively experience them as having something like this attitude:
It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.
How should someone behave if they're within one or two standard deviations of average smarts, and think that the authorities think and act like that? I think that's a legit question and one I've done a lot of thinking about, since as someone who's better-oriented in some ways, I want to be able to advise such people well. You might want to go through the thought experiment of trying to persuade the protagonist of one of the movies I mentioned above to try seasteading, prediction markets, or an online community, instead of the course of action they take in the movie. If it goes well, you have written a fan fic of significant social value. If it goes poorly, you understand why people don't do that.
I agree that stealing billions while endorsing high-trust behavior might superficially seem like a more reasonable thing to do if you don't have a good moral theory for why you shouldn't, and you think effective charities can do an exceptional amount of good with a lot more money. But if you think you live in a society where you can get away with that, then you should expect that wherever you aren't doing more due diligence than the people you stole from, you're the victim of a scam. So I don't think it really adds up, any more than the other sorts of behaviors you described.
Two years ago, I took a high dose of psychedelic mushrooms and was able to notice the sort of immanent-threat model I described above in myself. It felt as though there was an implied threat to cast me out alone in the cold if I didn't channel all my interactions with others through an "adult" persona. Since I was in a relatively safe quiet environment with friends in the next room, I was able to notice that this didn't seem mechanistically plausible, and call the bluff of the internalized threat: I walked into the next room, asked my friends for cuddles, and talked through some of my confusion about the extent to which my social interface with others justified the expense of maintaining an episodic memory. But this took a significant amount of courage and temporarily compromised my balance - my ability to stand up or even feel good sitting on a couch elevated above the ground. Likely most people don't have the kinds of friends, courage, patience, rational skepticism, theoretical grounding in computer science, evolution, and decision theory, or living situation for that sort of refactoring to go well.
The first one seems like it would describe most people, e.g. many, many people repeatedly drink enough alcohol to predictably acutely regret it later.
The second would seem to exclude incurable cases, and I don’t see how to repair that defect without including ordinary religious people.
The third would also seem to include ordinary religious people.
I think these problems are also problems with the OP’s frame. If taken literally, the OP is asking about a currently ubiquitous or at least very common aspect of the human condition, while assuming that it is rare, intersubjectively verified by most, and pathological.
My steelman of the OP’s concern would be something like “why do people sometimes suddenly, maladaptively, and incoherently deviate from the norm?”, and I think a good answer would take into account ways in which the norm is already maladaptive and incoherent, such that people might legitimately be sufficiently desperate to accept that sort of deviance as better for them in expectation than whatever else was happening, instead of starting from the assumption that the deviance itself is a mistake.
If it’s hard to see how apparently maladaptive deviance might not be a mistake, consider a North Korean Communist asking about attempted defectors - who observably often fail, end up much worse off, and express regret afterwards - “why do our people sometimes turn crazy?”. From our perspective out here it’s easy to see what the people asking this question are missing.
Douglas Hubbard’s book How to Measure Anything provides good examples of what it looks like for a target to be expensive to measure - frequently what it looks like is for the measurement to feel sloppy or unrigorous (because precision is expensive) - so it’s a common mistake to avoid trying to measure what we care about directly but sloppily, in order to work with nice clean quantitatively objective hard-to-criticize but not very informative data instead.
Experts should be able to regularly win bets against nonexperts, and generally valued, generally scarce commodities like money should be able to incentivize people to make bets. If you don't know how to construct an experiment to distinguish experts from nonexperts, you probably do not have a clear idea what it is you want an expert on, and if you don't have any idea what you are trying to buy, it's unclear what it would even mean to intend to buy it.
Abstract example: 16th Century "math duels," in which expert algebraists competed to solve quantitative problems.
Concrete example: LockPickingLawyer, who demonstrates on YouTube how to easily defeat many commercially popular locks.
If I needed to physically secure an enclosure against potentially expert adversaries, I'd defer to LockPickingLawyer, and if I had the need and budget to do so at scale, I'd try to hire them. Obviously I'd be vulnerable to principal-agent problems, and if the marginal value of resolving those were good enough I'd look for an expert on that as well. However, it would be cheaper and probably almost as effective to simply avoid trying to squeeze LockPickingLawyer financially, and instead pay something they're happy with.
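The claim above, that experts should regularly win bets against nonexperts, can be sketched as a toy simulation. Everything here is an illustrative assumption: the expert and novice are modeled as estimating the same true probability with different amounts of noise, and betting against each other at the midpoint of their stated odds.

```python
import random

def simulate(n_events=100_000, seed=0):
    """Average per-bet profit of the better-calibrated bettor."""
    rng = random.Random(seed)
    expert_profit = 0.0
    for _ in range(n_events):
        p = rng.random()  # true probability of the event
        # Both observe p with noise; the expert's estimate is much less noisy.
        expert_est = min(max(p + rng.gauss(0, 0.02), 0.0), 1.0)
        novice_est = min(max(p + rng.gauss(0, 0.20), 0.0), 1.0)
        # They trade a $1 "yes" contract at the midpoint of their estimates;
        # whichever party's estimate is higher holds the "yes" side.
        q = (expert_est + novice_est) / 2
        outcome = 1.0 if rng.random() < p else 0.0
        if expert_est > novice_est:
            expert_profit += outcome - q  # expert holds the yes side
        else:
            expert_profit += q - outcome  # expert holds the no side
    return expert_profit / n_events

print(simulate())  # positive: the lower-noise estimator profits on average
```

The point isn't the specific market mechanism, just that as long as bets settle against base reality, whoever tracks base reality better extracts money from whoever tracks it worse, which is what makes bets usable as an expertise detector.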
In hindsight, a norm against criticizing during a fundraiser, when there is always a fundraiser, leads to a community getting scammed by people telling an incoherent story about an all-powerful imaginary guy, just like they did in the synagogue example.
This article seems to model rational discourse as a cybernetic system made of two opposite actions that need to be balanced:
- Agreement / support of shared actions
- Disagreement / criticism
Agreement and disagreement are not basic elements of a statement about base reality, they're contextual facts about the relation of your belief to others' beliefs. Is "the sky is blue" agreement or dissent? Depends on what other people are saying. If they're saying it's blue, it's agreement. If they're saying it's green, it's dissent. Someone might disagree with someone by supporting an action, or agree with a criticism of what was previously a shared story. When you have a specific belief about the world, that belief is not made of disagreement or agreement with others, it's made of constrained conditional anticipations about your observations.
This error seems likely related to using a synagogue fundraiser as the central case of a shared commitment of resources, rather than something like an assurance contract! There's a very obvious antirational motive for synagogue fundraisers not to welcome criticism - God is made up, and a community organized around the things its members would genuinely like to do together wouldn’t need to invoke fictitious justifications. Rational coordination should be structurally superior, not just the same old methods but for a better cause.
Insofar as there's something to be rescued from this post, it's that establishing common knowledge of well-known facts is underrated, because it helps with coordination to turn mutual knowledge into common knowledge so everyone can rely on everyone else in the community acting on that info. But that also recommends blurting out, "The emperor's naked!".
There's also the problem that sometimes people say stuff that's off-topic and not helpful enough to be worth it - but compressing the complexity of that problem down to managing the level of agreement vs criticism is substituting an easier but unhelpful task in place of a more difficult but important one.
It’s not clear to me what “crazy” means in this post & how it relates to something like raising the sanity waterline. A clearer idea of what you mean by crazy would, I think, dissolve the question.
Would a good experimental test of the transposon hypothesis be gene editing a simple model organism to remove transposons? Is CRISPR/Cas9 precise enough to do something like that?
Bureaucracies create jobs. For mechanistic details see Parkinson's Law.
Thanks, the Hadza study looks interesting. I'd have to read carefully at length to have a strong opinion on it but it seems like a good way to estimate the long-run target. I agree 16,000 is probably too much to take chronically, I've been staying below the TUL of 10,000, and expect to reduce the dosage significantly now that it's been a few years and COVID case rates are waning.
This just seems like a much vaguer way to say the same thing I did. Is there a specific claim I made that you disagree with?
As far as I can tell, the function of this kind of vagueness is to avoid weakening the official narrative. Necessarily this also involves being unhelpful to anyone trying to make sense of state-published data, Fauci's public statements, and other official and unofficial propaganda. If we have an implied disagreement, it's about whether one ought to participate in a coverup to support the dominant regime, or try to inform people about how the system works.
This generalizes to actions.
Lots of people I know, including me, take way fewer actions than is optimal because we are trying to avoid making mistakes instead of trying to get what we want. (In some cases that's just a cover story and we're actually trying to avoid revealing our location or preferences, so that we're not a target.) But in lots of contexts if you just do things, instead of trying to supervise your intentions so you only do things you've preapproved, you can get lots more done that you want to do.
If you're worried that this is risky, you can think about what's likely to go wrong and how important that is in the specific context where you're considering just doing stuff, and plan ahead to mitigate the risks worth mitigating.
I'm gonna treat this as a serious question, since most of the value of engagement comes from that scenario, and ignore the vibe of "why are you so negative?".
The gains from reason, discourse, and trade are so huge that they can produce positive returns for many people even in the presence of adversarial action. If you don't see this, some suggested reading at a few levels:
- I, Pencil
- Economics comic books published by the Federal Reserve
- An Inquiry into the Nature and Causes of the Wealth of Nations, by Adam Smith
- Ethics, by Spinoza
Human animals want to live, it takes a lot of optimization to pervert that even imperfectly, and that perversion reduces the capacities of the thing being perverted, which limits the damage.
So there's a lot of perverse activity - which really is bad - but the good news is that if we were previously misattributing production to perverse activity, that means that the value per unit of nonperverse activity is much higher than we thought. Plus there are lots of people who are oppressed, and they have to be relatively nonperverse to survive since they don't get taken care of for being bad.
I'd expect trying-to-live behavior to be trying to cooperate with other instances of itself, sharing and investigating what seems like relevant info. In the ideal case info being shared would be strong evidence of its relevance and importance, and info not being shared would be evidence of its unimportance.
"Intellectually incurious and un-agent-y" about info strongly relevant to mortality risk isn't consistent with a rational-agent model of someone trying to live, and I don't see what "trying to" could mean without at least implicit reference to a rational-agent model.
I don't conclude that people are trying to hurt themselves to signal loyalty simply from the fact that they don't seem to be trying very hard to survive. I conclude that from the relative popularity of injunctions to impose or endure harms for the collective good, vs info that doesn't involve sacrifice. Many famous religious and philosophical writers have praised sacrifice for its own sake, which is strong evidence that some strong coordination mechanism is promoting such messages. Given that, it would be surprising - and require an explanation - if I didn't know people who participated in that coordination mechanism.
But again you don't have to go outside of mainstream microeconomics to find explanations for this. Liquidity premium is one (relatively mundane) explanation, and I suspect the principal-agent problem again applies here, perhaps because it's easier (less costly) for a shareholder to monitor one big company for misalignment, and for the company to institute governance measures to try to ensure alignment, than the equivalent for n smaller companies.
I would expect a liquidity premium to exist but I'd expect it to be much smaller than the size of opportunity I'm seeing - why do you expect to see one so large?
I don't think the principal-agent problem explains anything here, because large publicly traded corporations frequently have governance, financial structures, and operations that aren't intelligible at all to casual investors. For example, I've taken courses in finance, accounting, and economics, and worked in financial services, and I have no idea how to evaluate Markopolos's criticisms of GE and compare them with GE's financial statements, because the latter are so vague. (Do you?) Nor was there a trusted intermediary whose evaluation methods I understood. (Can you recommend one?) In practice, when I did own stocks I was relying on correlation with other investors - the government would try not to let us all fail at once - rather than any ability to meaningfully exercise oversight over centralized management.
The parenthetical questions are meant seriously, they're not just rhetorical flourishes.
A standard principal-agent problem in corporate governance is managers who prefer to hoard cash instead of returning profits to investors, with the investors not trusting the managers to use that cash in their interests (instead of the managers' own interests) in the future. (That's probably why the shares of such a company are trading at such a low multiple to its earnings that you can afford to buy them by issuing debt.) A leveraged buyout can be viewed as a solution to this problem.
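The arithmetic behind that parenthetical can be sketched with hypothetical numbers (not drawn from any real company): a debt-financed buyout is cash-flow positive when the target's earnings yield exceeds the interest rate on the debt.

```python
# Toy illustration with made-up numbers: buying out a company whose
# shares trade at a low multiple of earnings, financed entirely by debt.
market_cap = 1_000_000_000     # price to buy all outstanding shares
annual_earnings = 150_000_000  # 15% earnings yield, i.e. P/E of about 6.7
interest_rate = 0.08           # assumed rate on the buyout debt

annual_interest = market_cap * interest_rate  # 80,000,000 per year

# The buyout services its own debt out of earnings whenever the
# earnings yield (annual_earnings / market_cap) beats interest_rate.
assert annual_earnings > annual_interest
```

The point is only that a low enough price-to-earnings multiple makes the debt self-financing; the comments that follow dispute who bears the residual risk, not this arithmetic.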
I understand this argument; it would be a perfectly logical reason for some leveraged buyouts to happen in some circumstances. But in practice many leveraged buyouts are a way to offload risk onto counterparties with less legible high-trust relationships with management, such as employees (e.g. in the airline bankruptcies) or consumers, who can't rely on brand quality as much as they used to, because corporate decisions are made based on short-term numbers, and turnover means that the cost of eroded brand loyalty is correlated across many companies and distributed across many people. And since the state won't let large corporations in general fail all at once, we end up with bailouts. I recommended a book on the subject because I really can't cover everything in the blog post; it's long enough already, and this sort of thing is very well documented elsewhere.
If you were an owner or investor of a bank, should you really prefer that lending decisions be based on the subjective judgement of loan officers? What if they decide to base their decisions in part on what maximizes their values instead of yours? E.g., demand kickbacks, make loans to their friends or allies, bias their decisions with political ideology, or just slack off when they're supposed to be interviewing the farmer's friends and neighbors.
I'd rather people investing on my behalf use objective profit-maximizing criteria, ideally with skin in the game, and that's why it's surprising that access to capital depends so much on the kinds of subjective factors you mention (checking whether someone has already been extended credit, whether they're vibing the right way with VCs or bankers, whether they look like a normal borrower, and in the case I described, whether they took a class prescribed by the credit union) relative to economic considerations.
I have a close friend who had a business bank account closed for avowedly discretionary reasons after a conversation with a banker where, as far as I can tell, the banker got spooked because my friend seemed to have specific, creative plans that didn't look normal. (Nothing illegal was discussed; they were thinking about something that might have attracted regulatory scrutiny, but they'd have been happy to negotiate or just look for a different counterparty for those transactions.)
An important case study here is Abacus Bank, the only bank to be prosecuted in relation to the 2008 financial crisis, as far as I can tell simply because they're culturally decorrelated from other banks (small, ethnically Chinese, privately held). The prosecution didn't work out, because Abacus hadn't done any crimes.
If a physicist were to spend two hours trying to explain to me how they knew that the earth was flat, I'd expect to come away from that conversation with a better understanding of the physical world or the social construction of physics knowledge, which would better help me navigate my life, even if I ended up wronger on the bottom-line answer. That's how epistemically persuasive explanations work: they have to show an ability to win bets either more often or at lower computational cost than alternative hypotheses.
Some optimizer computed by a human brain is doing it on purpose. I agree that it seems desirable to be able to coherently and nonvacuously say that this is generally not something the person wants. I tried to lay out a principled model that distinguishes between perverse and humane optimization in Civil Law and Political Drama.
Hard to predict. In my case helping organize an early distribution of masks to prisons gave me cred with a formidable formerly-incarcerated mental health care activist who's now running for NY state assembly with a credible shot of winning, and who's already helping draft state-level legislation on multiple topics of interest to me. Not how I'd pictured that working out, and not every such venture bears fruit, but I tried fewer than 10 things like that before one did.
Currently hoping to help an African immigrant I know raise funds for a cheap-remittances-and-banking business; it's focused on Africa rather than South America, so there's plenty of room for the latter.
More generally the way to find such opportunities is cognitively loaded - first we need to be able to include them explicitly in our perceptions and true decision calculus as politically relevant persons, with interests that bear some relation to ours, who might have something to offer us in trade.
I guess people are mean because it moves them up in the pecking order, or prevents them from moving down, and they think it's safer to be an aggressor than a victim. Since scolding people for maybe not wearing masks is a protected behavior, they can get away with more meanness, with less discernment, than in other contexts. I don't fully understand why this gives people cover for being mean to mask-wearers in the name of pro-mask propaganda, but it seems to be the case. This seems like part of the same phenomenon: https://reductress.com/post/how-governor-cuomo-once-a-soft-sidepiece-snapped-into-a-dom-daddy-i-would-let-choke-me/
Even a similar focus on allowing the dominator to constrict your breathing!
My best guess is that being mean to people is considered part of the process by which you take care of them, since it's part of the process of giving orders, as I sketched out in Civil Law and Political Drama, much like frame-controlling them is, as Vaniver pointed out here and here.
Playing to your outs seems potentially VNM-consistent in ways the log-odds approach isn't: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=cWaoFBM79atwnCadX
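To make the contrast concrete, here's a toy sketch with hypothetical numbers (mine, not from the linked post). With a binary utility (survive or not), VNM says to rank actions by expected survival probability, which can favor a long-shot "out"; ranking by expected log-odds instead is risk-averse near p = 0, so it prefers a sure small probability over the gamble.

```python
import math

def logit(p):
    """Log-odds of probability p."""
    return math.log(p / (1 - p))

# Each action is a list of (branch probability, survival probability)
# pairs. Numbers are illustrative only.
safe = [(1.0, 0.04)]                   # a sure 4% chance of survival
outs = [(0.5, 0.10), (0.5, 0.001)]     # "play to your outs": 50/50 between
                                       # a decent out and near-certain loss

def expected(branches, f=lambda p: p):
    return sum(w * f(p) for w, p in branches)

# VNM with binary utility (1 if we survive, 0 otherwise) ranks by
# expected survival probability: the gamble wins, 0.0505 > 0.04.
assert expected(outs) > expected(safe)

# Ranking by expected log-odds reverses the preference, because logit
# is concave for small p, penalizing the near-certain-loss branch hard.
assert expected(safe, logit) > expected(outs, logit)
```

So the two criteria can disagree about the same pair of actions, which is the sense in which "play to your outs" can be the VNM-consistent choice while the log-odds criterion isn't.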
I don't think it's necessary to believe that "the MTA really is full of moustache-twirling villains" in order to believe that sometimes they're mean to people on purpose. This is a normal thing that normal people do and doesn't require someone to be totally committed at all times to evil. The interesting problem is not that someone was mean, but that the factional imperative to be pro-mask and anti-anti-mask effectively functions to provide cover for this, so that as part of their display of factional loyalty people refuse to recognize that someone did something mean.
Another instance of approximately the same error: "They are not trying to live. They are not trying to save their friends' lives. If they were, they would have picked up my message about high-dose Vitamin D". You are firmly convinced that high-dose vitamin D is plainly very helpful, but they may not be, and the reason need not be stupidity or dishonesty. E.g., Scott Alexander, at least as of December 2021, doesn't think it likely that vitamin D is useful against Covid-19; he may be right or wrong, but to me this is already conclusive evidence that it isn't obvious to any smart person who's paying attention that vitamin D is useful against Covid-19. Which means that "if these people were really trying to save their own and their friends' lives, they would agree with me about vitamin D" is just plain wrong.
I didn't mean to claim that everyone would agree with me if they were trying to live. I meant that the vitamin D result would be interesting, and I'd see some combination of people being persuaded that vitamin D was helpful (and acting accordingly to pass along the info) or people making specific arguments against that proposition.
Instead what I saw among the kinds of people excited about vaccines was basically no intellectual initiative, combined with defensive nonsense like sneering at the idea of comparing costs and benefits as somehow unrelated to "reality," when someone tries to make sense of things in a decision-relevant way that doesn't seem committed to getting the "right" answer for their faction.
An older cousin I talked with about this more recently openly admitted to me that she'd been anxiously trying to follow orders, that she'd been aware of this, and so when she felt like she was probably missing out on net by being too paranoid about COVID, arranged to get orders from a proper authority (someone who'd worked on the COVID vaccine studies) that it was correct to stop being so scared. The initiative to solve this problem seems like it reflects a desire to live, but it would have been a mistake to take anything she'd said about vaccine efficacy or COVID risk as a literal attempt to inform or part of a process of trying to figure out how to minimize nonpolitical harms from COVID.
It is extremely common for university students to become "artisans" in the sense that seems relevant here. (That is: people doing a skilled job that it is possible to do well or badly and in which it is possible to do better by trying harder.) And it is extremely common for university students to intend to become "artisans" in the same sense.
Maybe Caplan is right that in fact it turns out that university education is, on balance, not helpful to people in doing their jobs. That can hardly be relevant to Lewis and his audience, back in 1944, decades before Bryan Caplan was even born.
I'd expect it to be significantly more applicable to the kinds of students Lewis was speaking to, than the average college student now, because college used to be more of an extreme elite endeavor, and fewer jobs used to require a nonspecific college degree. And in fact the aggregate behavior of university graduates between Lewis's time and now caused us to live in a more credentialist society, with weaker property rights, where normal college students feel like they don't have freedom of speech because it's so important to be liked, and a large share of "business" is governed by the culture of Moral Mazes.
I also remark that AIUI what Caplan looks at is not education's effectiveness in helping people do better jobs, but education's effectiveness in helping people get paid more. One would hope that the two are closely related, and perhaps they are, but it seems relevant that one of Lewis's key points in the talk we are discussing is that getting paid more is often determined by things that have little to do with how good a job you do.
I don't understand what alternative hypothesis you're advancing here. How can people afford to spend increasingly large amounts of time and money on schooling, which is mainly about teaching you to do a better job, if doing a better job doesn't get you paid correspondingly more in expectation? If an alchemist's wages aren't sufficient to compensate for the training costs, shouldn't we expect fewer alchemists in the next generation?
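The reasoning in that last question can be sketched as a simple break-even calculation (all numbers hypothetical): training pays for itself, and we'd expect people to keep entering the trade, only if the present value of the resulting wage premium covers the up-front cost.

```python
# Illustrative break-even check with made-up numbers: does schooling's
# wage premium repay its cost over a working life?
training_cost = 50_000   # up-front cost of schooling (tuition + forgone wages)
wage_premium = 4_000     # assumed extra pay per year from being trained
working_years = 30
discount_rate = 0.05     # annual discount rate

# Present value of the premium stream, discounted year by year.
pv_premium = sum(wage_premium / (1 + discount_rate) ** t
                 for t in range(1, working_years + 1))

# With these numbers the premium is worth about 61,500 today, so
# training pays; if pv_premium fell below training_cost, we'd expect
# fewer entrants to the trade in the next generation.
assert pv_premium > training_cost
```

The argument in the comment is that sustained, growing spending on schooling is hard to square with the premise that this inequality generally fails.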
Fixed the lecture link to refer to Lewis's speech instead of a Google search for it.
It seems to me like either Bryan Caplan's made some significant error, or your learning useful mathematics was pretty much independent of your schooling, or you're in the minority, offset by cases where school caused someone to become less useful.
It seems like better discursive practice for this kind of objection to lead to a full blog post criticizing The Case Against Education, than for it to just show up ad hoc when people try to take already-established claims for granted. If there's an existing critique you think I should examine, let me know.