Posts
Comments
Good points. This may be another case where we evolved to have probability-weighted-by-utility intuitions, and where we work backwards from these intuitions when asked for a model of raw probability.
Can we finance cryogenics by revival awards?
Create a market for frozen humans. The reward is for the agent who performs the revival. Investors can either search for revival technology and patent it, or they can invest in frozen humans, which they can sell to agents who wish to attempt revival.
Maybe the head is the most vulnerable region to injury, and the locating of the self in the head reflects the need to protect the brain and other inputs (mouth, eyes, ears).
I hypothesise a lower proportion of drinkers than in the rest of the population (subject, of course, to cultural norms where you come from).
Curiously, high SES in the United States is correlated with more frequent alcohol consumption.
The discussion itself is a good case study in complex communication. Look at the levels of indirection:
- A: What is true about growth, effort, ability, etc?
- B: What do people believe about A?
- C: What is true about people who hold the different beliefs in B?
- D: What does Dweck believe about C (and/or interventions to change B)?
- E: What does Scott believe about C (by way of discussing D, and also C, and B, and A)?
Yikes! Naturally, it's hard to keep these separate. From what I can tell, the conversation is mostly derailing because people don't understand the differences between levels at all, or because they aren't taking pains to clarify what level they are currently talking about. So everyone gets that E is the "perspective" level, and that D is the contrasting perspective, but you have plenty of people confusing (at least in discussion) levels A, B, and C, or A and BC, which makes progress on D and E impossible.
(Not the OP, but musing on part of this)
I've never been in therapy, but I find it almost impossible to map certain psychological concepts and questions to coherent internal things. It's like when someone describes political liberalism as "the belief that government should be bigger": It's not total nonsense, but it doesn't connect with anything solid, and it's probably a sign of confusion if you feel that you can give a categorical answer.
Or another way: Trying to apply these concepts to myself feels like asking whether some Canadian guy is more culturally Japanese or Spanish (extroversion/introversion, high/low self-esteem, inner/outer locus of control, masculine/feminine). I can see that a certain percentage of the world population is really clearly Japanese or Spanish, but what's the meaning of saying this Canadian guy is more Japanese, or even that he's more Japanese in contexts X, Y, and Z, and more Spanish in environments P, Q, and R?
Well put.
Furthermore, is there any great mystery about the possible scope of these hidden opinions? I suspect (though how can I verify?) that most of these "too controversial to mention" opinions can be enumerated by simple inversion of common beliefs.
Blue is right -> Blue is wrong
Green is good -> Green is bad
If we're talking about things you can't say because of moral outrage, then there aren't that many beliefs that are common enough to provoke widespread outrage by publicly challenging them. Maybe you can't guess exactly why Blue is Actually Bad, but you know the general forms of how it could be so.
Certainly there are other, more exotic things you shouldn't say in public ("How to build a super laser weapon from pocket change", etc), but I doubt this problem is the driving force here.
Even a dog knows the difference between being kicked and being stumbled over.
-- Oliver Wendell Holmes Jr.
I agree that this isn't happening to LW. (To avoid repetition, I say a bit more about motivation in this comment)
I'm a bit curious what prompted you to post this?
Well, I think it's true, interesting, and useful :)
The argument is a specific case of a more general form (explaining changing group dynamics by selection into the group, driven by the norms of the group, but without the norms necessarily causing a direct change to any individual's behavior) which I think is a powerful pattern to understand. But like a lot of social dynamics, explicitly pointing it out can be tricky, because it can make the speaker seem snooty or Machiavellian or tactless, and because it can insult large classes of people, possibly including current group members. I felt that LW is one of the few places where I could voice this type of argument and get a charitable reception (after all, I'm indirectly insulting everyone who likes to talk politics, which is most people, including me :P)
To be clear: I don't think lesswrong is currently being hurt by this dynamic. But I do see periodic comments criticizing the use of only internal risks (mind-killing ourselves) as the justification for avoiding political topics. I'm sympathetic to some of these critiques, and I wanted to promote a reason for avoiding political topics that didn't imply that mind-killing susceptibility was somehow an insurmountable problem for individuals.
On outguessing the market: With only public information, can someone (expect to) determine better times to invest into diversified funds? Specifically, is it a good idea to use the "being greedy when others are fearful and fearful when others are greedy" heuristic?
Seems to be a working memory aid for me.
If I have to manipulate equations mentally, I'll (sort of) explain the equation sub-vocally and assign chunks of it to different fingers/regions of space, and then move my fingers around or reassign to regions, as if I'm "dragging and dropping" (e.g. multiplying by a denominator means dragging a finger pointing at the denominator over and up). Even if I'm working on paper, this helps me see one or two steps further ahead than I could using internal mental imagery alone. I don't remember explicitly learning this.
If you would like to be horrified, represent the number of deaths from WWII in unary in a text document and scroll through it (by copy pasting larger and larger chunks, or by some other method).
There are about 4000 "1" characters in a page in MS Word, so at 20 million battle deaths, you'll get about 5000 pages.
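If you'd rather not copy-paste by hand, a small script can do it. This is a sketch: the 20-million figure and the 4000-characters-per-page estimate are the rough numbers from above, and `write_unary` is a name made up for illustration.

```python
# Write one "1" character per battle death to a text file, then
# estimate how many Word-style pages that is at ~4000 chars per page.
# Figures are the rough estimates from the comment, not precise stats.

BATTLE_DEATHS = 20_000_000
CHARS_PER_PAGE = 4_000

def write_unary(path: str, count: int = BATTLE_DEATHS) -> None:
    """Write `count` '1' characters to `path`, chunked to limit memory."""
    chunk = "1" * 1_000_000
    with open(path, "w") as f:
        for _ in range(count // len(chunk)):
            f.write(chunk)
        f.write("1" * (count % len(chunk)))

print(BATTLE_DEATHS // CHARS_PER_PAGE, "pages")  # 5000 pages
```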
I agree that the relationship is a separate question. I did find some links, though:
Here is a Swedish conscripts study, finding that pre-morbid IQ was negatively associated with later adult depression, anxiety, and schizophrenia, but positively related to mania, as measured by hospital admittance. This New Zealand study replicates this: low childhood IQ predicts depression and anxiety, while higher IQ predicts bipolar disorder.
These are about the best "large homogeneous" population studies I could find, in two more-or-less standard Western cultures. There is one study that tracked some particularly high-performing children through adulthood, but the results regarding mental illness weren't much different from those of a normal high-intelligence sample. Needless to say, it gets complicated when you look at populations that are preselected (college students, etc.) or more diverse. Most popular articles that claim a uniform association are looking at some narrow populations (e.g. famous artists), or reporting how intelligence relates to different presentations of a given mental illness (e.g. intelligence seems to shape the presentation of anxiety).
Even assuming genetic risk for a mental illness was unrelated to education or intelligence, you'd expect something like this given the environmental correlates: Better family conditions early on, better social status later. While there are some environmental stressors that are probably associated with higher intelligence (graduate/medical/law school, perhaps more status anxiety?), these are probably not severe enough to outweigh the stressors in the opposite direction.
Just leaving the phone across the room didn't work for me, but the lock did.
There are all sorts of possible schemes: I also thought about putting the clock up in an inaccessible location (a high shelf in my closet). Then turning it off would require physically dragging a stepladder or chair from some other room, bringing it in, being awake enough not to fall off it, etc.
My sleep tends to be delayed and irregular. I put my alarm clock in a locked box. In the morning, it takes ~45 seconds to get out of bed, walk across the room, and open the combination lock. Since doing so, my waking time has smoothed out considerably.
There are roughly four prototypical white American regions/cultures, which correspond to fairly clear demographic events. Two of these are distinct white "rural" cultures (crudely: the western cowboy and the southern redneck) but these are often misleadingly combined into a unified "rural" stereotype that doesn't really describe many actual people. This makes about as much sense as combining New York and San Francisco to create the archetypal "urban" American. Alas, the media is based in big coastal cities, and so even many Americans conflate the two.
So I think what you've noticed is that the cowboy culture has this individualist current, which leads to fewer public displays of religion, even though the people tended to be privately religious. Whereas the redneck culture has a more group-based history, with a theological approach (Evangelicalism) that requires more public displays of faith.
For the immigration element, look at this map of self-reported ancestry.
The huge region of self-identified German ancestry is centered on historically cowboy culture areas, and the Grey region labeled "American" is redneck culture. The "American" self-identification usually means Northern England / Southern Scotland / Northern Ireland, but far in the past.
The Grey region is the so-called Bible Belt, sometimes just referred to as "the South", or as Appalachia. The lower-class whites in this area are the basis for the redneck stereotype (see Google images for pictures), but the area really doesn't have the cowboy flavor. The cowboy or frontier rural culture historically spread out over the modern-day-German-ancestry areas in waves. The modern impact of this is complicated, but it's sufficient to say that the rural cultures of the West are rather different from the rural culture of the South.
So I'm not too surprised if aspects of cowboy culture appeal more to Europeans today than redneck culture, because the modern areas where cowboy culture flourished were inhabited by the descendants of immigrants who were closer to modern Europe (culturally and temporally) than the people who founded redneck culture.
I can't comment on alcohol use, but on recuperative activity:
Different types of "burnt out" suggest different remedies.
If you just spent 8 hours sitting at a desk, you might get a bump from a game of tennis, or a long walk. If you just spent 8 hours on your feet, that game of tennis might not help.
If you just spent 8 hours alone, then socialize. If you were dealing with customers and coworkers and crowds nonstop, maybe do something alone.
Anecdote: When I lived in college dorms (4 people in 2 bunk beds in a unit), my idea of heaven was sitting alone in a quiet empty room. The desire evaporated as soon as I moved out.
Sometimes people match to the wrong class of remedies: If you're angry (a negative, high-arousal state), you might not want to go out with friends (social activity = further arousal). If you're lethargic and depressed (negative, low-arousal), the long hot bath might make things worse (hot bath = low arousal).
Yes! I think this is it. The wikipedia article links to these ray diagrams, which I found helpful (particularly the fourth picture).
I suspected it had to do with an overlap in the penumbra, or the "fuzzy edges", of the shadow, but I kept getting confused because the observation isn't what you would expect, if you think of the penumbra as two separate pictures that you're simply "adding together" as they overlap.
Note: This post raises a concern about the treatment of depression.
If we treat depression with something like medication, should we be worried about people getting stuck in bad local optima, because they no longer feel bad enough that the pain of changing environments seems small by comparison? For example, consider someone in a bad relationship, or an unsuitable job, or with a flawed philosophic outlook, or whatever. The risk is that you alleviate some of the pain signal stemming from the lover/job/ideology, and so the patient never feels enough pressure to fix the lover/job/ideology.
Also, I'm pretty confident that the medical profession has thought about this in detail, but I've been spinning my wheels trying to find the right search terms. Does anyone know where to look, or have other recommendations?
Why does the edge of a shadow sometimes appear to shift when another shadow gets close to it?
Details: I was in front of a window. The edge of a chair cast a shadow on the floor from the window light. When I moved such that the shadow of my arm got very close to the shadow of the chair, part of the edge of the chair's shadow was "pulled towards" the shadow being cast by my arm. The shadow of my arm didn't appear to move. My arm was closer to the sun than the chair.
Tip for research: In personality psychology, the tendency to experience negative emotions is usually called neuroticism.
Woody Allen on time discounting and path-dependent preferences:
In my next life I want to live my life backwards. You start out dead and get that out of the way. Then you wake up in an old people's home feeling better every day. You get kicked out for being too healthy, go collect your pension, and then when you start work, you get a gold watch and a party on your first day. You work for 40 years until you're young enough to enjoy your retirement. You party, drink alcohol, and are generally promiscuous, then you are ready for high school. You then go to primary school, you become a kid, you play. You have no responsibilities, you become a baby until you are born. And then you spend your last 9 months floating in luxurious spa-like conditions with central heating and room service on tap, larger quarters every day and then Voila! You finish off as an orgasm!
The rationality gloss is that a naive model of discounting future events implies a preference for ordering experiences by decreasing utility. But often this ordering is quite unappealing!
A related example (attributed to Gregory Bateson):
If the hangover preceded the binge, drunkenness would be considered a virtue and not a vice.
"Context-free abstract pattern recognition" can be partially resolved into more legible subcomponents, some of which can be learned, and some of which can't.
So working memory is one such component, and is often theorized as a big pathway for (intuitively defined) general human intelligence. It doesn't look like you can train working memory in a way that generalizes to increased performance on all tasks that involve working memory (although there's some controversy about this). And as with other traits, increased performance on formal measurements of working memory might not translate to the real-world outcomes associated with higher untrained working memory.
At the same time, it seems that the universe must come packaged with a distribution over patterns, and so learning a few common patterns might transfer fairly well. The Raven pattern is XOR, a basic boolean function. The continued fraction is self-similarity, which is an interesting pattern (meta-pattern?), because while people already recognize trivial self-similarity (invariance, repetition), it looks like people can be successfully taught to look for more complicated recurrences in math and CS classes.
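To make the XOR claim concrete, here's a toy sketch (the feature names are invented): in Raven-style items following this rule, each row's third figure contains the features appearing in exactly one of the first two figures, i.e. their symmetric difference.

```python
# Toy model of the "Raven pattern is XOR" claim: represent each figure
# as a set of visual features; the third figure in a row is the
# symmetric difference (set XOR) of the first two. Feature names are
# made up for illustration.

def third_figure(a: set[str], b: set[str]) -> set[str]:
    """Features present in exactly one of the two figures."""
    return a ^ b  # set XOR: in a or b, but not both

row = ({"dot", "bar"}, {"bar", "cross"})
print(third_figure(*row))  # {'dot', 'cross'}
```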
Not because we have a specific "heretic burning" sense receptor, but because the parts of the brain containing the idea of burning the heretics were connected by neural pathways to the pleasure centers, just like all associations are created.
There is almost certainly hardware support for punishment behavior, albeit support that can be executed with very little high-level conceptual understanding, as you note. Even more, it doesn't always require a "belief that X is right": It can simply happen, when everyone else is throwing stones, that a person throws stones too, and the person's high-level belief that they are "trying to do the right thing" is formed after the behavior has already happened, or in (hardware-embedded) anticipation of a hypothetical future demand to justify their behavior.
Outliers are interesting, but I'm not sure they are often useful examples. I suspect the focus on outliers is more due to a certain insecurity among specialists, which is exactly the last thing 99.9% of the people struggling to understand or enjoy mathematics need further exposure to.
Perhaps within mathematics, progress really is so dominated by the elite that it seems natural to worry so much about elites. I don't know either way. But in most other fields, and in the everyday strength of society, there seems to be a decent potential from moving everyone else just a few notches in mathematical comfort.
Naturally, people rise to the level of their ability for any given level of pedagogic incompetence, and so it would be equally useless to blame people for not figuring out on their own how to maximize their own ability (whatever that may be), unless we can provide reasonably concrete advice.
Is he bullying or insulting people? Does he lack the machinery to detect social disapproval? Either situation would require specialized advice.
There is an option to display only comments above a certain threshold. I tried to use a positive threshold (5 votes), but it doesn't seem to work.
As an aside, I still find it much easier to sift through LW for good content, relative to other broad-domain sites. While I'm glad the ecosystem has diversified, it has become harder to find e.g. the good comments on a SSC piece, or to separate the wheat from the chaff on social media or single-author blogs.
Do you mean in casual social situations? Or is this people doing stupid things that directly harm you (e.g an incompetent coworker you still need to rely on; a roommate that keeps destroying stuff)?
This raises an interesting question: What is a population measure of sanity?
As you point out, stated beliefs might not be a great measurement. And even if a less-than-sane belief is genuine, the belief may be so compartmentalized that it isn't a leading cause of irrational behavior.
A while back, I found this study (pdf), which tried to correlate performance on a test of cognitive biases with the likelihood of reporting a bunch of different real-world "bad decisions", like having been in jail or defaulted on a loan. They found some modest correlations after adjusting for SES and an IQ proxy.
Many ideological problems boil down to an error of expansive domain:
So a Marxist can talk intelligently about certain large-scale economic patterns. But there's no reason to expect good career advice from a Marxist. Despite this, some Marxists are perfectly happy to reason "having a career is related to economics, and my theory of proletarian revolution is related to economics, and so clearly my theory of the proletarian revolution is related to giving good career advice!". And then the critics of Marxism are happy to attack Marxism as a whole, but only by pointing out that the theory fails when applied to the problem of giving good career advice.
I think this maps directly to certain controversies over feminism. Feminism is about patterns X, Y, and Z in gender relations. But you shouldn't expect a particular feminist framework to apply to literally every problem involving gender, despite the willingness of many proponents and critics to treat these misapplications as if they were meaningful. In particular, I would map "Marxist giving career advice" to "Feminist giving dating advice".
Note that this position is consistent with supporting the underlying ideological framework: I could be a fervent Marxist, while still accepting that Marxism might have limited, or at least very complicated, relevance to your current job search.
There's a content-neutral signaling dynamic too: Some BDSM fans (for lack of a better term?) are signaling sophistication by loudly complaining that the recent "pop music hit" is crap. So there's an opportunity for hipster counter-signaling if anyone with in-group credibility defends some aspect of the book.
Thanks for clarifying. I saw a few definitions that were less precise: wikipedia describes negative feedback as "...when some function of the output of a system...is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances." I think I was confused by skipping the tends part, and applying the resulting definition to the shower example.
You're right on the explosion.
So "negative feedback" does not imply "stable point". Although "stable point" presumably implies "negative feedback" somewhere?
That an overexuberant negative feedback controller can still lead to explosions is one of the interesting results of control theory...
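A minimal toy illustration of that result (my own sketch, with made-up numbers): a discrete-time controller whose correction always opposes the error, yet which diverges when the gain is too high.

```python
# Discrete-time proportional controller: x_next = x - k * x.
# The correction term -k*x always opposes the sign of the error x
# (i.e. this is negative feedback), yet for gain k > 2 each correction
# overshoots by more than the original error and |x| grows without
# bound. For 0 < k < 2 the state converges to 0. Numbers are invented
# purely to illustrate the control-theory point above.

def simulate(gain: float, x0: float = 1.0, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x -= gain * x  # correction always pushes against the error
    return x

print(abs(simulate(0.5)))  # near zero: stable
print(abs(simulate(2.5)))  # enormous: "explosion" despite negative feedback
```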
Terminology question: Does "negative feedback" have a precise definition? So if I point at something and say "this is a negative feedback loop", is that exactly the same as saying "the current state of this thing is stable, or the state is known to be in the neighborhood of an implicitly communicated stable point"? (And conversely for "positive feedback" = "unstable") I'm considering that a physical explosion will reliably reach a stable state. Or something that pushes a real value in [0,1] towards the nearest bound, but then stops.
The paper, or my comment? I interpreted the paper as an attack on (explanatory) models of risk aversion that are based on this (quite general) type of utility curve, with the conclusion that observed behavior can't be motivated by such a curve.
Why do people find it emotionally difficult to keep secrets?
The dynamic shows up very early in childhood (google "Louis CK secret"), it can involve self-sacrifice (confessing to a crime), and people find it relieving to share secrets even in completely anonymous and impersonal ways (google "confession bear meme").
Also consider risks from future technology. For example, we might be able to "deanonymize" various public data records.
Conversely, if you'd pay much more than this, you are absurdly risk averse. Here's a pdf of a classic paper by Rabin: Risk Aversion and Expected-Utility Theory: A Calibration Theorem
Abstract:
Within the expected-utility framework, the only explanation for risk aversion is that the utility function for wealth is concave: A person has lower marginal utility for additional wealth when she is wealthy than when she is poor. This paper provides a theorem showing that expected-utility theory is an utterly implausible explanation for appreciable risk aversion over modest stakes: Within expected-utility theory, for any concave utility function, even very little risk aversion over modest stakes implies an absurd degree of risk aversion over large stakes. Illustrative calibrations are provided.
Ah! I may have a meta-contrarian position to contribute:
This is not useful -> This is useful for having fun -> Fun is a valid goal, but this is a fairly ineffective way to have fun.
In the same way that people are routinely in error about how to improve everything else, they are routinely in error about what things are good at actually providing fun. And there is a familiar resistance to the direct application of thought to the problem, which relies on the normal excuses ("Isn't it all subjective?", "But thinking is incompatible with feeling! Haven't you seen Spock?").
Playing sports looks really good from an "effective hedonism" standpoint, even up to several hours a week. But for most people, I'm skeptical that regularly watching sports provides a decent long-term return, when done for more than a few hours every month or year.
Tangentially related: My local baseball team is far more fun to watch than the top teams, because they make more mistakes, which leads to more unpredictable and exciting plays, but at the same time they're still athletic enough that you're not just watching children flounder around. In the same way, I really enjoyed the last Super Bowl.
We should probably concern ourselves with fatality rates (serious disability rates probably track this). Because of differences in average speed, I expect the typical rural accident to be much more severe.
Political beliefs can cluster with more consequential behaviors than voting. For example, consider the relationship between views on economic policy and the appeal of different careers (or fields of academic study). Or political views and religious behaviors. Or the subjective appeal of living in Texas vs San Francisco. Knowing humans, there probably isn't a clear direction of cause-and-effect.
Anecdotally, I've changed my political views recently, and I'm surprised by the breadth of the associated cluster of beliefs (some of which are non-socially consequential) that shifted at the same time.
Maps and territories. A noisy signal can still be understood, and the marginal cost of suppressing noise can become steep. Even mathematical proofs are often first communicated in a logically correct but "noisy" form, and simplified later.
I struggle with over-qualifying, to the point where my writing takes too long or is too hard for other people to understand. I actually wonder if prolific writers are selected for a certain lack of guilt, whereas I often feel like a scrupulous person, almost guilty for not addressing every little subtlety.
The collapse into the trivial is usually good news! The trivial is just the accurately concise, which depends on the power of your background knowledge. I'm a huge fan of SlateStarCodex, but sometimes I reach the end of a 10,000 word essay and wonder "Why did he just say APPLY META-LEVEL RATIONALITY CONCEPT TO TOPIC X?", but that's only trivial if you share the right background, while his audience is very broad.
While we're here: How do real-world incentive structures interact with the EMH?
In the same way that "No one was ever fired for buying IBM", is it true that "No one was ever fired for selling when everyone else was"? And would that mean someone without these external social incentives will have an edge on the market? For example, what about a rule like "put money into an index fund whenever the market went down for X consecutive days and everyone is sufficiently gloomy"?
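As a sketch of what the hypothetical rule would look like (everything here, including the price series and the function name, is invented for illustration; this says nothing about whether such a rule actually has an edge):

```python
# Hypothetical rule from above: "buy the index after X consecutive
# down days." This only detects where the rule would trigger; it does
# not evaluate profitability, and the prices are made up.

def buy_signals(prices: list[float], x: int) -> list[int]:
    """Indices into `prices` where the last `x` moves were all down."""
    signals = []
    down_streak = 0
    for i in range(1, len(prices)):
        down_streak = down_streak + 1 if prices[i] < prices[i - 1] else 0
        if down_streak >= x:
            signals.append(i)
    return signals

prices = [100, 99, 98, 97, 101, 100, 99, 98]
print(buy_signals(prices, 3))  # [3, 7]
```

The gloomy-sentiment half of the rule is the hard part to formalize, which is arguably the point: if the signal were mechanical, the EMH says it would already be priced in.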
How can you learn to calibrate long term predictions, when it takes so long to get feedback?
A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.
-- Emerson, Self-Reliance
Perhaps these are two prescriptions for two different patients: The fox and the hedgehog!
You can model uncertain parameters within a model as random variables, and then run a large number of simulations to get a distribution of outcomes.
Modeling uncertainty between models (of which guessing the distribution of an uncertain parameter is an example) is harder to handle formally. But overall, it's not difficult to improve on the naive guess-the-exact-values-and-predict method.
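A minimal sketch of the within-model approach, using a made-up compound-growth model whose annual rate is the uncertain parameter:

```python
# Treat an uncertain model parameter as a random variable, run many
# simulations, and look at the distribution of outcomes. The model
# (10 periods of compound growth, rate ~ Normal(0.05, 0.02)) is an
# invented illustration, not a recommendation.

import random
import statistics

def simulate_outcome(rng: random.Random) -> float:
    rate = rng.gauss(0.05, 0.02)  # uncertain parameter drawn at random
    value = 1.0
    for _ in range(10):           # 10 periods of compound growth
        value *= 1 + rate
    return value

rng = random.Random(0)            # seeded for reproducibility
outcomes = [simulate_outcome(rng) for _ in range(10_000)]

qs = statistics.quantiles(outcomes, n=20)
print(statistics.mean(outcomes))  # central tendency, roughly 1.6 here
print(qs[0], qs[-1])              # ~5th and ~95th percentiles
```

Instead of a single point prediction, you get a spread that reflects how sensitive the outcome is to the parameter you're unsure about.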
A thought about heritability and malleability:
The heritability of height has increased, because the nutritional environment has become more uniform. To be very specific, "more uniform" means both that people have more similar sets of options, and that they exercise similar preferences among these options.
This is interesting, because the increased heritability has coincided exactly with an increased importance of environmental factors from a decision-making standpoint. In other words, a contemporary parent picking from {underfeed kids, don't underfeed kids} can exert more influence over the absolute height of their children than a parent with only the option to underfeed. Of course, modern parents overwhelmingly opt for the same choice. At the same time, these parents don't have much influence on the relative height advantage of their child, given the uniformity of options and preferences in the population.
This can happen whenever options and preferences are aligned in a population. For example, no matter how heritable a positive trait is, it will usually be trivial to influence it ... in a downward direction. So if you're looking at a twin study on something like subjective well-being, I've found it clarifying to explicitly note the options and preference available to the population. I'm currently reading up on positive psychology, and I keep seeing, even from domain experts, statements like, "X percentage of your happiness is genetically determined", as if the population they studied were picking actions at random.
Yes. Richer states can afford to transfer more wealth. We see this in the size of modern (domestic) welfare states, which could not have been shouldered even a century ago.
All the advice on resisting video games and the like (internet blockers, social support) has been on using tricks of one sort or another to restrict the act, not the desire.
Some advice is about substitution, i.e. you identify the emotional need driving a stubborn behavior, and find a more approved behavior that satisfies the same need.
On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn't become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.
Historically, isn't that exactly how the world became a better place? Better technology and better institutions are the ingredients of reduced suffering, and both of these seem to have developed by people pursuing solutions to their own (very local) problems, like how to make money and how to stop the government from abusing you. Even scientists who work far upstream of any application seem to be more motivated by curiosity and fame than by a desire to reduce global suffering.
Of course, modern wealth disparities may have changed the situation. But we should be clear, if we think that we've entered a new historical phase in which the largest future reductions in suffering are going to come from globally-altruistic motivations.