Open Thread, January 11-17, 2016
post by username2 · 2016-01-12T10:29:29.953Z · LW · GW · Legacy · 173 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
173 comments
Comments sorted by top scores.
comment by Lumifer · 2016-01-12T19:33:30.643Z · LW(p) · GW(p)
A physics research team has members who can (and occasionally do) secretly insert false signals into the experiment the team is running. The goal is to practice resistance to false positives. A very interesting approach; it's the first time I've heard about physicists using it.
Bias combat in action :-)
Replies from: ahbwramc, jnwrd
The LIGO is almost unique among physics experiments in practising ‘blind injection’. A team of three collaboration members has the ability to simulate a detection by using actuators to move the mirrors. “Only they know if, and when, a certain type of signal has been injected,”...
Two such exercises took place during earlier science runs of LIGO, one in 2007 and one in 2010. ... The original blind-injection exercises took 18 months and 6 months respectively. The first one was discarded, but in the second case, the collaboration wrote a paper and held a vote to decide whether they would make an announcement. Only then did the blind-injection team ‘open the envelope’ and reveal that the events had been staged.
↑ comment by ahbwramc · 2016-01-13T19:47:07.936Z · LW(p) · GW(p)
Wait, I'm confused. How does this practice resistance to false positives? If the false signal is designed to mimic what a true detection would look like, then it seems like the team would be correct to identify it as a true detection. I feel like I'm missing something here.
Replies from: Lumifer
↑ comment by Lumifer · 2016-01-13T21:31:47.020Z · LW(p) · GW(p)
I don't know the details, but the detection process is essentially statistical and very very noisy. It's not a "we'll know it when we see it" case, it's more like "out of the huge number of wiggles and wobbles that we have recorded, what can't we explain and therefore might be a grav wave".
I would guess one of the points is that a single observation is unreliable in a high-noise environment.
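A toy numerical sketch of that point (all numbers here are invented for illustration; this is nothing like LIGO's actual matched-filter pipeline): in pure noise, threshold crossings happen by chance, so a single loud sample is weak evidence, and a "blind injection" is just a sample someone has secretly boosted before the analysts look at the data.

```python
# Toy sketch: false positives in noisy detection, plus a blind injection.
# Invented numbers; not a model of any real gravitational-wave analysis.
import random

random.seed(0)

N = 100_000          # number of independent noise samples
THRESHOLD = 3.0      # "detection" threshold, in units of noise sigma

# Pure Gaussian noise: even with no real signal present, some samples
# exceed the threshold just by chance -- these are the false positives.
noise = [random.gauss(0, 1) for _ in range(N)]
false_positives = sum(1 for x in noise if x > THRESHOLD)

# A "blind injection": someone secretly boosts one sample by a known
# amount. The analysts see only `data`, not which sample was touched.
data = list(noise)
data[12345] += 12.0  # injected signal, far above threshold

candidates = [i for i, x in enumerate(data) if x > THRESHOLD]

print(false_positives)      # chance crossings in pure noise
print(12345 in candidates)  # the injection shows up among the candidates
```

The point of the exercise: the candidate list always contains some chance crossings, so the collaboration's vetting procedure has to work the same whether or not a real (or injected) signal is present; only after the decision is made does the injection team "open the envelope".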
↑ comment by gjm · 2016-01-24T11:31:48.749Z · LW(p) · GW(p)
Two minutes' inspection of her thesis would, I think, lead any reasonable person to conclude that it was almost certainly not written by her adviser. The extremely unusual style is consistent with her adviser having, say, had all the actual clever mathematical ideas in it, but again the point here is merely that Piper is clearly intelligent, and being able to understand the material described in her thesis (which, again, I think it's clear she does if you actually look at the thesis) is itself indicative of a high IQ.
(PS. Hi, Eugine/Azathoth/Ra/Lion. This is your regularly scheduled reminder that I respond to mass-downvoting of my old comments, such as you've been engaging in very recently, by posting more, and that Less Wrong responds to it by banning you and forcing you to go to all the trouble of registering another identity.)
comment by Lumifer · 2016-01-14T17:22:42.146Z · LW(p) · GW(p)
An excellent piece about communication styles, in particular about a common type of interaction on the 'net which is sometimes seen on LW as well. I'll quote some chunks, but the whole thing is good.
Here’s a series of events that happens many times daily on my favorite bastion of miscommunication, the bird website. Person tweets some fact. Other people reply with other facts. Person complains, “Ugh, randos in my mentions.” Harsh words may be exchanged, and everyone exits the encounter thinking the other person was monumentally rude for no reason. ...
For clarity’s sake, I’ll name “ugh, randos” Sue and an archetypal “rando” Charlie.[4] I will also assume both are, initially anyway, operating in good faith–while there are certainly Sues and Charlies who are just unpleasant assholes, I think they are comparatively uncommon, and in any event picking apart their motivations wouldn’t be particularly interesting.
From Sue’s perspective, strangers have come out of the woodwork to demonstrate superiority by making useless, trivial corrections. Some of them may be saying obvious things that Sue, being well-versed in the material she’s referencing, already knows, and thus are insulting her intelligence, possibly due to their latent bias. This is not necessarily an unreasonable assumption, given how social dynamics tend to work in mainstream culture. People correct others to gain status and assert dominance. An artifice passed off as “communication” is often wielded as a blunt object to establish power hierarchies and move up the ladder by signaling superiority. Sue responds in anger as part of this social game so as not to lose status in the eyes of her tribe.
From Charlie’s perspective, Sue has shared a piece of information. Perhaps he already knows it, perhaps he doesn’t. What is important is that Sue has given a gift to the commons, and he would like to respond with a gift of his own. Another aspect is that, as he sees it, Sue has signaled an interest in the topic, and he would like to establish rapport as a fellow person interested in the topic. In other words, he is not trying to play competitive social games, and he may not even be aware such a game is being played. When Sue responds unfavorably, he sees this as her spurning his gift as if it had no value. This is roughly as insulting to Charlie as his supposed attempt to gain status over Sue is to her. At this point, both people think the other one is the asshole. People rightly tend to be mean to those they are sure are assholes, so continued interaction between them will probably only serve to reinforce their beliefs the other is acting in bad faith.
And a special shout-out to mathematicians :-/ Here is a quote about how talking to a mathematician feels to someone... born on the other side of IQ tracks:
Replies from: Viliam, gjm
Nobody was mean to me, nobody consciously laughed at me. There’s just a way that mathematicians have been socialized (I guess?!) to interact with each other that I find oppressive. If you have never had someone mansplain or whitesplain things to you, it may be hard for you to understand what I’m going to describe.
Usually, friendly conversation involves building a shared perspective. Among other things, mansplaining and whitesplaining involve one person of privilege forcing a marginalized person into a disagreeable perspective against their will, and not allowing them a way out. If you are someone averse to negative labels, it can be silencing. My experience discussing math with mathematicians is that I get dragged into a perspective that includes a hierarchy of knowledge that says some information is trivial, some ideas are “stupid”; that declares what is basic knowledge, and presents open incredulity in the face of dissent. Maybe I would’ve successfully assimilated into this way of thinking if I had learned it at a time where I was at the same level as my peers, but as it was it was just an endless barrage of passive insults I was supposed to be in on.
↑ comment by Viliam · 2016-01-15T10:00:48.196Z · LW(p) · GW(p)
I agree with gjm that the remark about IQ is wrong. This is about cultures. Let's call them "nerd culture" and "social culture" (those are merely words that came immediately to my mind, I do not insist on using them).
Using the terms of Transactional Analysis, the typical communication modes in "nerd culture" are activity and withdrawal, and the typical communication modes in "social culture" are pastimes and games. This is what people are accustomed to do and to expect from other people in their social circle. It doesn't depend on IQ or gender or color of skin; I guess it depends on personality and on what people in our perceived "tribe" really are doing most of the time. -- If people around you exchange information most of the time, it is reasonable to expect that the next person also wants to exchange information with you. If people around you play status games most of the time, it is reasonable to expect that the next person also wants to play a status game with you. -- In a different culture, people are confused and project.
A person coming from "nerd culture" to "social culture" may be oblivious to the status games around them. From an observer's perspective, this person displays a serious lack of social skills.
A person coming from "social culture" to "nerd culture" may interpret everything as a part of some devious status game. From an observer's perspective, this person displays symptoms of paranoia.
The "nerd culture" person in a "social culture" will likely sooner or later get burned, which provides them evidence that their approach is wrong. Of course they may also process the evidence the wrong way, and decide e.g. that non-nerds are stupid or insane, and that it is better to avoid them.
Unfortunately, for a "social culture" person in a "nerd culture" it is too easy to interpret the evidence in a way that reinforces their beliefs. Every failure in communication may be interpreted as "someone did a successful status attack on me". The more they focus on trying to decipher the imaginary status games, the more they get out of sync with their information-oriented colleagues, which only provides more "evidence" that there is some kind of conspiracy against them. And even if you try to explain this to them, your explanation will be processed as "yet another status move". A person sufficiently stuck in the status-game interpretation of everything may lack the capacity to process any feedback as anything other than (or at least anything more than merely) a status move.
Thus ends my whitesplaining mansplaining cissplaining status attack against all who challenge the existing order.
EDIT:
Reading the replies I realized there are never enough disclaimers when writing about a controversial topic. For the record, I don't believe that nerds never play status games. (Neither do I believe that non-nerds are completely detached from reality.) Most people are not purely "nerd culture" or purely "social culture". But the two cultures are differently calibrated.
For example, correcting someone has a subtext of a status move. But in the "nerd culture" people focus more on what is correct and what is incorrect, while in the "social culture" people focus more on how agreement or disagreement would affect status and alliances.
If some person says "2+2=3" and other person replies "that's wrong", in the "nerd culture" the most likely conclusion is that someone has spotted a mistake and automatically responded. Yes, there is always the possibility that the person wanted to attack the other person, and really enjoyed the opportunity. Maybe, maybe not.
In the "social culture" the most likely conclusion is the status attack, because people in the "social culture" can tolerate a lot of bullshit from their friends or people they don't want to offend, so it makes sense to look for an extra reason why in this specific case someone has decided to not tolerate the mistake.
As a personal anecdote, I have noticed that in real life, some people consider me extremely arrogant and some people consider me extremely humble. The former have repeatedly seen me correcting someone else's mistake; and the latter have repeatedly seen someone else correcting my mistake, and me admitting the mistake. The idea that both attitudes could exist in the same person (and that the person could consider them to be two aspects of the same thing) is mind-blowing to someone coming from the "social culture", because there these two roles are strictly separated; they are the opposite of each other.
When you hear someone speaking about how the reality is socially constructed, in a sense they are not lying. They are describing the "social culture" they live in; where everyone keeps as many maps as necessary to fit peacefully in every social group they want to belong to. For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend. So if you insist that there is a territory that might disagree with their map, of course they perceive it as a hostility.
Yes, even the nerds can be hostile sometimes. But a person from the "social culture" will be offended all the time, even by a behavior that in the "nerd culture" is considered perfectly friendly. -- As an analogy, imagine a person coming from a foreign culture that also speaks English, but in their culture, ending a sentence with a dot is a sign of disrespect towards the recipient. (Everyone in their culture knows this rule, and it is kinda taboo to talk about it openly.) If you don't know this rule, you will keep offending this person in every single letter you send them, regardless of how friendly you will try to be.
Replies from: Pfft, Pfft, ChristianKl, bogus, Lumifer, Vaniver
↑ comment by Pfft · 2016-02-17T05:24:10.078Z · LW(p) · GW(p)
For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend.
I figured this was an absurd caricature, but then this thing floated by on tumblr:
So when arguing against objectivity, they said, don’t make the post-modern mistake of saying there is no truth, but rather that there are infinite truths, diverse truths. The answer to the white, patriarchal, heteronormative, massively racist and ableist objectivity is DIVERSITY of subjectivities. And this, my friends, is called feminist epistemology: the idea that rather than searching for a unified truth to fuck all other truths we can understand and come to know the world through diverse views, each of which offers their own valid subjective view, each valid, each truthful. How? by interrupting the discourses of objectivity/normativity with discourses of diversity.
Objective facts: white, patriarchal, heteronormative, massively racist and ableist?
Replies from: Viliam
↑ comment by Viliam · 2016-02-17T11:58:49.645Z · LW(p) · GW(p)
Sigh.
Logic itself has a very gendered and white supremacist history.
These people are clearly unable to distinguish between "the territory" and "the person who talks about the territory".
I had to breathe calmly for a few moments. Okay, I'm not touching this shit on the object level again.
On a meta level, I wonder how much of the missing rationality skills these people never had vs how much they had but lost later when they became politically mindkilled.
Replies from: Sarunas, OrphanWilde, ChristianKl
↑ comment by Sarunas · 2016-02-17T17:27:02.894Z · LW(p) · GW(p)
I remember reading the SEP article on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where epistemology is tailored to suit political ideas (and they themselves most likely wouldn't disagree), and much less vice versa.
When I (or, I suppose, most LWers) think about how knowledge about the world is obtained the central example is an empirical testing of hypotheses, i.e. situation when I have more than one map of a territory and I have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.
On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory which basically models the world as being full of different people who are adversarial to each other and try to promote different maps. It seems to me that it has an assumption that you cannot easily compare accuracies of maps, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be "Whose map should I choose?", i.e. choice is not between maps, but between mapmakers. Well, there are situations where I would do something that fits this description very well, e.g. if I was trying to decide whether to buy a product which I was not able to put my hands on and all information I had was two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter's judgement.
It seems to me that the first archetypal example is much more generalizable than the second one, and the strange claims cited in Pfft's comment are what one gets when one stretches the second example to extreme lengths.
There also exists Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry; and since biases that favour desirable interpretations are something, it is valid to add them (since this is not Bayesian inference, this is different from the problem of the choice of priors). Since the whole process is deemed adversarial (scientists in this model look like prosecutors or defense attorneys), different people inject different biases and then argue that others should stop injecting theirs.
(disclaimer: I read the SEP article some time ago and wrote about these ideas from memory; it wouldn't be a big surprise if I misrepresented them in some way. In addition, there are other obvious sources of potential misrepresentation)
Replies from: Viliam
↑ comment by Viliam · 2016-02-18T10:25:44.388Z · LW(p) · GW(p)
Seems like the essential difference is whether you believe that as the maps improve, they will converge.
A "LW-charitable" reading of the feminist version would be that although the maps should converge in theory, they will not converge in practice because humans are imperfect -- the mapmaker is not able to reduce the biases in their map below certain level. In other words, that there is some level of irrationality that humans are unable to overcome today, and the specific direction of this irrationality depends on their "tribe". So different tribes will forever have different maps, regardless of how much they try.
Then again, to avoid "motte and bailey", even if there is the level of irrationality that humans are unable to overcome today even if they try, the question is whether the differences between maps are at this level, or whether people use this as a fully general excuse to put anything they like on their maps.
Yet another question would be who exactly are the "tribes" (the clusters of people that create maps with similar biases). Feminism (at least the version I see online) seems to define the clusters by gender, sexual orientation, race, etc. But maybe the important axes are different; maybe e.g. having high IQ, or studying STEM, or being a conservative, or something completely different and unexpected actually has greater influence on map-making. Which is difficult to talk about, because there is always the fully general excuse that if someone doesn't have the map they should have, well, they have "internalized" something (a map of the group they don't belong to was forced on them, but naturally they should have a different map).
↑ comment by OrphanWilde · 2016-02-17T14:39:22.843Z · LW(p) · GW(p)
On a meta level, I wonder how much of the missing rationality skills these people never had vs how much they had but lost later when they became politically mindkilled.
Can rationality be lost? Or do people just stop performing the rituals?
Replies from: Viliam, Lumifer, Old_Gold
↑ comment by Viliam · 2016-02-18T08:57:43.987Z · LW(p) · GW(p)
Heh, I immediately went: "What is rationality if not following (a specific kind of) rituals?" But I guess the key is the word "specific" here. Rationality could be defined as following a set of rules that happen to create maps better corresponding to the territory, and knowing why those rules achieve that, i.e. applying the rules reflectively to themselves. The reflective part is what would prevent a person from arbitrarily replacing one of the rules by e.g. "what my group/leader says is always right, even if the remaining rules say otherwise".
I imagine that most people have at least some minimal level of reflection on their rules. For example, if they look at the blue sky, they conclude that the sky is blue; and if someone else were to say that the sky is green, they would tell them "look there, you idiot". That is, not only do they follow the rule, but they are aware that they have a rule, and can communicate it. But the rule is communicated only when someone obviously breaks it; that means the reflection is only done in a crisis. Which means they don't develop the full reflective model, and that leaves open the option of inserting new rules, such as "however, that reasoning doesn't apply to God, because God is invisible", which take priority over reflection. I guess these rules have a strong "first mover advantage", so timing is critical.
So yeah, I guess most people are not, uhm, reflectively rational. And unreflective rationality (I guess on LW we wouldn't call it "rationality", but outside of LW that is the standard meaning of the word) is susceptible to inserting new rules under emotional pressure.
↑ comment by Lumifer · 2016-02-18T19:40:07.026Z · LW(p) · GW(p)
Can rationality be lost?
I don't see why not. It is, basically, a set of perspectives, mental habits, and certain heuristics. People lose skills, forget knowledge, just change -- why would rationality be exempt?
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2016-02-18T20:37:09.635Z · LW(p) · GW(p)
Habits and heuristics are what I'd call "rituals."
Are perspectives something you can lose? I ask genuinely. It's not something I can relate to.
Replies from: Lumifer
↑ comment by Lumifer · 2016-02-18T20:47:10.464Z · LW(p) · GW(p)
Habits and heuristics are what I'd call "rituals."
I don't know about that. A heuristic is definitely not a ritual -- it's not a behaviour pattern but just an imperfect tool for solving problems. And habits... I would probably consider rituals to be more rigid and more distanced from the actual purpose compared to mere habits.
Are perspectives something you can lose?
Sure. You can think of them as habitual points of view. Or as default approaches to issues.
↑ comment by Old_Gold · 2016-02-17T18:33:37.453Z · LW(p) · GW(p)
Can rationality be lost?
Sure, when formerly rational people declare some topic off limits to rationality because they don't like the conclusions that are coming out. Of course, since all truths are entangled, that means you have to invent other lies to protect the ones you've already made. Ultimately you have to lie about the process of arriving at truth itself, which is how we get to things like feminist anti-epistemology.
↑ comment by ChristianKl · 2016-02-18T21:19:35.303Z · LW(p) · GW(p)
These people are clearly unable to distinguish between "the territory" and "the person who talks about the territory".
What about that sentence makes you think that the person isn't able to make that distinction?
If you look at YCombinator the semantics are a bit different but the message isn't that different. YCombinator also talks about how diversity is important. The epistemic method they teach founders is not to think abstractly about a topic and engage with it analytically but that it's important to speak to people to understand their own unique experiences and views of the world.
David Chapman's article digging into the phenomenon is also quite good.
Replies from: Viliam
↑ comment by Viliam · 2016-02-19T08:59:25.299Z · LW(p) · GW(p)
It's interesting how the link you posted talks about the importance of using the right metaphors, while at the same time you object to my conclusion that people saying "logic itself has white supremacist history" can't distinguish between the topic and the people who talk about the topic.
To explain my position, I believe that anyone who says either "logic is sexist and racist" or "I am going to rape this equation" should visit a therapist.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-02-19T09:49:04.235Z · LW(p) · GW(p)
I believe that anyone who says either "logic is sexist and racist" or "I am going to rape this equation"
Nobody linked here says either of those things. In particular, the original blog post says about logic:
This is not to say it is not useful; it is. But it does not exist in a vacuum and should not be sanctified.
The argument isn't that logic is inherently sexist and racist and therefore bad, but that it's frequently used in places where there are other viable alternatives, and that using it in those places can be driven by sexism or racism.
Replies from: Old_Gold
↑ comment by Old_Gold · 2016-02-20T04:58:02.281Z · LW(p) · GW(p)
The argument isn't that logic is inherently sexist and racist and therefore bad but that it's frequently used in places where there are other viable alternatives.
Such as?
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-02-20T09:47:38.623Z · LW(p) · GW(p)
Interviewing lots of people to understand their viewpoints, and having conversations with them not to show them where they are wrong but to be non-judgemental. That's basically what YC teaches.
Reasoning by analogy is useful in some cases.
There's a huge class of expert decisions that are made via intuition.
Using a technique like Gendlin's Focusing would be a way to get to solutions that's not based on logic.
↑ comment by Pfft · 2016-01-15T16:00:10.994Z · LW(p) · GW(p)
I guess your theory is the same as what Alice Maz writes in the linked post. But I'm not at all convinced that that's a correct analysis of what Piper Harron is writing about. In the comments on Harron's post there are some more concrete examples of what she is talking about, which do indeed sound a bit like one-upping. I only know a couple of mathematicians, but from what I hear there are indeed lots of social games even in math; it's not a pure preserve where only facts matter.
(And in general, I feel Maz' post seems a bit too saccharine, in so far as it seems to say that one-up-manship and status and posturing do not exist at all in the "nerd" culture, and it's all just people joyfully sharing gifts of factual information. I guess it can be useful as a first-order approximation to guide your own interactions; but it seems dangerously lossy to try to fit the narratives of other people (e.g., Harron) into that model.)
↑ comment by ChristianKl · 2016-02-18T20:46:15.719Z · LW(p) · GW(p)
I'm not sure whether "social culture" is a good label. Not every social interaction by non-nerds is heavily focused on status.
There's "authenticity culture", whereby being authentic and open is more important than not saying something that might lower someone's status.
↑ comment by bogus · 2016-01-15T14:24:00.363Z · LW(p) · GW(p)
A person coming from "social culture" to "nerd culture" may interpret everything as a part of some devious status game.
The social person is right here. Remember 'X is not about Y'? The difference is that your 'social culture' person is in fact low-to-average status in the relevant hierarchy. Something that's just "harmless social banter" to people who are confident in their social position can easily become a 'status attack', or a 'microaggression', from the POV of someone who happens to be more vulnerable. This is not limited to information exchange at all; it's a ubiquitous social phenomenon. And this dynamic makes engaging in such status games a useful signal of confidence, so they're quite likely to persist.
Replies from: Sarunas, Lumifer
↑ comment by Sarunas · 2016-01-15T18:02:56.459Z · LW(p) · GW(p)
I think that one very important difference between status games and things that might remind people of status game is how long they are expected to stay in people's memory.
For example, I play pub quizzes and often I am the person responsible for the answer sheet. Due to strict time limits, discussion must be as quick as possible, so in many situations I (or another person responsible for the answer sheet) have to reject an idea a person has come up with based on vague heuristic arguments, and usually there is no time for long and elaborate explanations. From the outside, it might look like a status-related thing, because I have dismissed a person's opinion without a good explanation. However, the key difference is that this does not stay in your memory. After a minute or two, all these things that might seem related to status are already forgotten. Ideally, people should not even come into the picture (because paying attention to anything but the question is a waste of time); very often I do not even notice who exactly came up with a correct answer. If people tend to forget, or not even notice, who should be given credit, and likewise forget the cases where their idea was dismissed in favour of another person's, then the small slights that happened because discussion had to be as quick as possible are not worth remembering, and one can be pretty certain that other people will not remember them either. Also, if "everyone knows" they are quickly forgotten, they are not very useful in status games either. If something is forgotten, it cannot go unforgiven.
Quite different dynamics arise if people have long memories for small slights and "everyone knows" that they do. Short memory made slights unimportant and useless for status games; but in the second case, where they are important and "everyone knows" they are important, they become useful for social games, and therefore a greater proportion of them might have some status-related intentionality behind them, rather than just being random noise.
Similarly, one might play a board game that involves things that look like social games, e.g. backstabbing. However, it is expected that when the figures go back into the box, all of that is forgotten.
I think that what differentiates information sharing from social games is which of the two is more likely to be remembered and which is likely to be quickly forgotten (and whether "everyone knows" which is most likely to be forgotten or remembered by others). Of course, different people might remember different things about the same situation, and they might be mistaken about what other people remember or forget; that's how a culture clash might look. On the other hand, the same person might tend to remember different things about different situations, so people cannot be neatly divided into different cultures; but at the same time, the frequency of situations of each type seems to differ from person to person.
Replies from: Viliam, ChristianKl
↑ comment by Viliam · 2016-01-25T11:20:52.554Z · LW(p) · GW(p)
Yes, this is an important aspect.
I think what people usually keep in mind are not the specific mistakes, but status and alliances. In the "nerd culture", the individual mistakes are quickly forgotten... however, if someone makes mistakes exceptionally often, or makes a really idiotic mistake and then insists on it, they may gain a long-term reputation as an idiot (which means low status). But even then, if a well-known idiot makes a correct statement, people are likely to accept this specific statement as correct.
In the "social culture", it's all about alliances and power. Those change slowly, therefore the reactions to your statements change slowly, regardless of the statements. If you make a mistake and people laugh at you because you are low-status and it is safe to kick you, next time if you make a correct statement, someone may still make fun of you. (But when a high-status person later makes essentially the same statement, people will accept it as a deep wisdom. And they will insist that it is totally not the same thing that you said.) It's not important what was said, but who said it. Quick changes only come when people change alliances, or suddenly gain or lose power; but that happens rarely.
↑ comment by ChristianKl · 2016-01-25T11:49:49.800Z · LW(p) · GW(p)
The pub quiz you play has clearly defined status. You lead it. As such there's not the uncertainty about status that exists in a lot of other social interactions.
↑ comment by Lumifer · 2016-01-15T15:44:43.962Z · LW(p) · GW(p)
The social person is right here. Remember 'X is not about Y'? The difference is that your 'social culture' person is in fact low-to-average status on the relevant hierarchy. Something that's just "harmless social banter" to people who are confident in their social position can easily become a 'status attack', or a 'microaggression' from the POV of someone who happens to be more vulnerable.
You're confusing two points of view.
Let's say social Sally is talking to nerdy Nigel. From the point of view of Sally, there are a lot of microaggressions, and status attacks, and insensitivity, etc. But that is not because Nigel is cunningly conducting a "devious status game", Nigel doesn't care about status (including Sally's) and all he wants to do is talk about his nerdy stuff.
Nigel is not playing a let's-kick-Sally-around game, Sally is misperceiving the situation.
Replies from: bogus↑ comment by bogus · 2016-01-15T16:14:19.646Z · LW(p) · GW(p)
Nigel doesn't care about status (including Sally's) and all he wants to do is talk about his nerdy stuff.
Oh, Nigel may not care about Sally's status - that much is clear enough, and I'm not disputing it. He cares a lot about his own status and the status of his nerdy associates, however. That's one reason why he likes this "bzzzzzzt, gotcha!" game so much. It's a way of saying: "Hey, this is our club; outsiders are not welcome here! Why don't you go to a sports bar, or something." Am I being uncharitable? Perhaps so, but my understanding of Nigel's POV is as plausible as yours.
Replies from: Lumifer, Viliam↑ comment by Lumifer · 2016-01-15T16:22:22.301Z · LW(p) · GW(p)
Our friend Nigel may or may not play status games of his own, but my issue was with you saying
A person coming from "social culture" to "nerd culture" may interpret everything as a part of some devious status game.
The social person is right here.
And, nope, the social person is not.
Of course, it all depends on the situation and she may be right, but, generally speaking, feeling like an outsider does NOT mean that everyone is playing devious status games against you.
↑ comment by Viliam · 2016-01-25T10:17:31.680Z · LW(p) · GW(p)
Depends on whether "bzzzzzzt, gotcha!" is applied more frequently to the outsiders than to the insiders when they make the same mistake.
In other words, does "making a mistake" screen off "being an outsider"?
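The screening-off question has a precise reading in terms of conditional probability: being an outsider should add no information about getting mocked once we condition on having made a mistake. A toy sketch (all numbers hypothetical, purely to illustrate the check):

```python
# "Making a mistake screens off being an outsider" means:
#   P(mocked | mistake, outsider) == P(mocked | mistake, insider)
# i.e., conditional on the mistake, group membership carries no extra
# information about the reaction. Toy numbers where screening-off holds:
p_mocked = {
    ("mistake", "outsider"): 0.6,
    ("mistake", "insider"): 0.6,
    ("no_mistake", "outsider"): 0.05,
    ("no_mistake", "insider"): 0.05,
}

screens_off = (p_mocked[("mistake", "outsider")]
               == p_mocked[("mistake", "insider")])
print(screens_off)  # True under these toy numbers
```

If the two conditional probabilities diverged (outsiders mocked more often for the same mistake), that would be evidence for the status-game reading.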
Replies from: gjm↑ comment by gjm · 2016-01-25T10:45:17.836Z · LW(p) · GW(p)
I'm not sure it does depend on that. Suppose your ingroup is made up predominantly of people with ginger hair and your outgroup predominantly of people with brown hair. Then if you make fun of people with brown hair, and admire people with ginger hair, you're raising the status of your ingroup relative to your outgroup even if you apply this rule consistently given hair colour.
Similarly, if your ingroup is predominantly made up of people who don't make a certain kind of mistake and your outgroup is mostly made up of people who do.
It's not clear to me that there's a good way to tease apart the two hypotheses here. And of course they could both be right: Nigel may sincerely care about the nerdy stuff but also on some level be concerned about raising the status of his fellow nerds.
↑ comment by Lumifer · 2016-01-15T15:32:27.409Z · LW(p) · GW(p)
It doesn't depend on IQ or gender or color of skin
On color of skin, no, but on IQ somewhat. This is so for two reasons. The first one is capability to learn -- a sufficiently high-IQ person will be able to figure out what's happening and adjust. An insufficiently-high-IQ person will not and will be stuck in unhappy loops.
The second one is that the nerd culture of sharing information depends on the ability to understand and value that information. If you don't understand what the nerds are talking about, you have to fall back on social games because you have no other options. That's what I mistakenly thought was happening with the mathematician quote in the grandparent comment -- turned out I was wrong, but such situations exist.
Oh, and gender plays a role, too. Women are noticeably more social than men, so the nerd cultures tend to be mostly male.
↑ comment by Vaniver · 2016-01-15T20:10:20.965Z · LW(p) · GW(p)
As an analogy, imagine a person coming from a foreign culture that also speaks English, but in their culture, ending a sentence with a dot is a sign of disrespect towards the recipient.
Note that in most IM conversations and texts, ending a message with a period makes one seem angry or insincere (see here).
Replies from: None↑ comment by [deleted] · 2016-01-16T12:39:16.813Z · LW(p) · GW(p)
That should depend on the rules of the language. My supervisor sometimes texted me with You will come (in Ukrainian) when we had not scheduled a meeting, I would rush in to see what got him, and find out he forgot the question mark (again).
↑ comment by gjm · 2016-01-14T18:07:51.316Z · LW(p) · GW(p)
It's not clear to me that the other person really was "born on the other side of IQ tracks". (Unless you just mean that she's female and black, I guess?) I mean, she did a PhD in pure mathematics. Some of the things she says about it and about her experience in mathematics are certainly ... such as might incline the cynical to think that she actually just isn't very good at mathematics and is trying some passive-aggressive thing where she half-admits it and half-blames it on The Kyriarchy. But getting to the point at which anyone is willing to consider letting you do a mathematics PhD (incidental note: her supervisor is a very, very good mathematician) implies, I think, a pretty decent IQ.
For the avoidance of doubt, I am not myself endorsing the cynic's position above. I haven't looked at her thesis, which may in fact make it clear that she's a very good mathematician indeed. In which case her difficulties might in fact be the result of The Kyriarchy, or might be the result of oversensitivity on her part, or any combination thereof. Or in fact might simply be a useful rhetorical invention.
Replies from: Lumifer, Good_Burning_Plastic↑ comment by Lumifer · 2016-01-14T18:35:55.804Z · LW(p) · GW(p)
Ah, I didn't follow the link to Piper's blog so my expression was misguided -- I take it back.
In this case, I think, her complaint reflects the status game mismatch -- either she's playing it and her conversation partner isn't, or vice versa, she is not and he is. It's hard to tell what is the case.
↑ comment by Good_Burning_Plastic · 2016-01-24T11:37:19.166Z · LW(p) · GW(p)
I haven't looked at her thesis,
Do try to.
Replies from: gjmcomment by [deleted] · 2016-01-13T15:20:51.512Z · LW(p) · GW(p)
Löb's theorem states that "If it's provable that (if it's provable that p then p), then it's provable that p." In addition to being a theorem of set theory with Peano arithmetic, it's also a theorem of modal logic.
Try this on for size: If I believe that (if I believe that this chocolate chip will cure my headache, then this chocolate chip will cure my headache), then I believe that this chocolate chip will cure my headache.
-Agenty Duck
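In the notation of provability logic (GL), writing $\Box p$ for "it is provable that $p$", the theorem quoted above is the schema:

```latex
% Löb's theorem as a schema of provability logic (GL),
% where \Box p reads "it is provable that p":
\Box(\Box p \to p) \to \Box p
```

The chocolate-chip version substitutes "I believe that" for $\Box$, which is where the analogy gets its bite: nothing guarantees that belief, unlike provability in a sound system, tracks truth.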
Replies from: Pfftcomment by polymathwannabe · 2016-01-12T15:39:35.177Z · LW(p) · GW(p)
Obvious in hindsight: one cause of massive bee death turned out to be neonicotinoids. In other words, newsflash: insecticides kill insects.
Was there any way this could have been anticipated?
Replies from: passive_fist, ChristianKl↑ comment by passive_fist · 2016-01-12T22:16:43.717Z · LW(p) · GW(p)
It's not obvious that use of a pesticide would substantially harm bees, as pesticides have been in use for a very long time, and many organophosphate pesticides are fairly non-toxic to bees. Neonicotinoids, however, are extremely toxic to bees. The use of neonicotinoids is fairly recent; large-scale use only started in the late 1990s, and very soon after that beekeepers started filing petitions with the EPA. They were ignored. I'd say this is more a case of systemic and deliberate ignorance/politics than a 'mistake'.
↑ comment by ChristianKl · 2016-01-12T18:40:54.150Z · LW(p) · GW(p)
Be more conservative. Require more evidence before you allow a new insecticide to come to market.
comment by Bryan-san · 2016-01-13T21:01:52.943Z · LW(p) · GW(p)
Nate Soares' recent post "The Art of Response" on Minding Our Way talks about effective response patterns that people develop to deal with problems. What response patterns do you use in life or in your field of expertise that you have found to be quite effective?
comment by Anders_H · 2016-01-12T18:18:51.679Z · LW(p) · GW(p)
I finally gave in and opened a Tumblr account at http://dooperator.tumblr.com/ . This open-thread comment is just to link my identity on Less Wrong with my username on websites where I do not want my participation to be revealed by a simple Google search for my name, such as SlateStarCodex and Tumblr.
comment by [deleted] · 2016-01-12T12:10:05.355Z · LW(p) · GW(p)
Information coupled with surprise this week:
the chance of transmission during any single episode of unprotected vaginal sex is estimated at 1 in 2,000. Thus, the odds you were infected are 0.05 x 0.0005 = 0.000025, i.e. 1 in 40,000. That's less than your lifetime risk of getting killed by lightning (if you live in the US) and less than the chance you will die in the coming week in some sort of accident. As for other STDs, the lack of symptoms is a strong indicator that you didn't catch anything.
A less authoritative but more nuanced relevant analysis is hosted here
Replies from: passive_fistAsset prices around the world are extremely high relative to historic norms. Across all asset classes and most parts of the world, the returns on offer are measly. But most investors buying these assets are not doing so with greed as their driving emotion, rather with a sense of reluctant resignation that they need to do something more with their cash.
↑ comment by passive_fist · 2016-01-12T22:08:46.642Z · LW(p) · GW(p)
I wouldn't put too much faith in the 1/2000 figure for chance of HIV transmission. There is no known way to calculate that with any reasonable confidence. Estimates vary from something like 1/500 to 1/2500 (this is for vaginal sex; anal sex has much higher transmission risk).
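Given that uncertainty, it's worth seeing how the quoted calculation moves with the per-act figure. A minimal sketch, reusing the 0.05 prior that the partner is HIV-positive from the quoted analysis (that prior is itself an assumption of that analysis, not a measured figure):

```python
# Sensitivity of the quoted risk estimate to the per-act transmission
# probability. p_positive is the assumed prior that the partner is HIV+;
# the per-act figures bracket the 1/500 - 1/2500 range mentioned above.
p_positive = 0.05

for per_act in (1 / 500, 1 / 2000, 1 / 2500):
    risk = p_positive * per_act
    print(f"per-act 1/{round(1 / per_act)}: overall risk 1 in {round(1 / risk):,}")
```

Even at the pessimistic 1/500 end, the overall estimate only shifts from 1 in 40,000 to 1 in 10,000, so the qualitative conclusion is fairly robust to the per-act uncertainty.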
comment by iarwain1 · 2016-01-14T18:35:52.541Z · LW(p) · GW(p)
I'm an undergrad going for a major in statistics and minors in computer science and philosophy. I also read a lot of philosophy and cognitive science on the side. I don't have the patience to read through all of the LW sequences. Which LW sequences / articles do you think are important for me to read that I won't get from school or philosophy reading?
Replies from: Manfred, Vaniver, Strangeattractor, Gunnar_Zarncke↑ comment by Manfred · 2016-01-15T04:41:46.671Z · LW(p) · GW(p)
Check out the Rationality: A to Z contents page, click on things that look interesting, it'll mostly work out.
A Human's Guide to Words is really good exposition of philosophy. The subsequence of thinking about morality that I can point at with the post fake fake utility functions is good too. Or if you just want to learn what this rationality stuff is about, read the early posts about biases and read Knowing about biases can hurt people. That one's important - the point of knowing about biases is to see them in yourself.
I just don't know what suits you, is all.
↑ comment by Vaniver · 2016-01-18T11:13:57.369Z · LW(p) · GW(p)
One of the chief benefits of reading through the sequences is being able to notice, label, and communicate many different things. Instead of having a vague sense that something is wrong and having to invent an explanation of why on the spot, I can say "oh, there's too much inferential distance here" or "hmm, this argument violates conservation of expected evidence" or "but that's the Fallacy of Gray." But in order to have that ability, I need to have crystallized each of those things individually, so that I can call on it when necessary.
But if you're only going to read one thing, A Human's Guide to Words (start here) is probably going to be the most useful, especially going into philosophy classes.
Replies from: Viliam↑ comment by Viliam · 2016-01-25T10:08:20.416Z · LW(p) · GW(p)
I would add that most of those things can also be found in other sources; sometimes they have different names.
But the practical question is: have you read those "other sources"? If not, then the Sequences are a compressed form of a lot of useful stuff. They may be long, but reading all the original sources would be much longer. (This is not to discourage people from reading the other sources, just saying that if "that's too much text" is your real objection, then you probably haven't read them.)
Replies from: Tem42↑ comment by Tem42 · 2016-01-27T23:37:59.965Z · LW(p) · GW(p)
Unfortunately, I think many of the people who come to LessWrong are in the position of having read about 50-75% of the content of the sequences through other sources, and may become frustrated by the lack of clear indication within the sequences as to what the next post actually includes. It is very annoying to read through a couple of pages only to find that this section has just been a wordy setup to reviewing basic physics.
Replies from: Bryan-san↑ comment by Bryan-san · 2016-01-28T19:41:43.404Z · LW(p) · GW(p)
What % do you define as "many"? Those percentages of content already known sound very high to me in regards to the first 1/3rd of the Sequences. (I'm still working on the rest so can't comment there.) Also, they can use the Article Summaries to test out whether they've seen the concept before and then read the full article or not. I don't recommend just reading the summaries though. I think a person doing that would be doing a disservice to themselves because of the reasons supplied by Vaniver above.
↑ comment by Strangeattractor · 2016-01-18T05:17:20.819Z · LW(p) · GW(p)
The Quantum Mechanics sequence - you won't get that in school.
↑ comment by Gunnar_Zarncke · 2016-01-14T21:31:24.896Z · LW(p) · GW(p)
How about the Grad Student Advice Repository?
Replies from: iarwain1comment by [deleted] · 2016-01-14T04:18:12.423Z · LW(p) · GW(p)
Smoking cigarettes is strongly protective against Parkinson's. The evidence is clear, large, and replicated in large samples. Hypothetically, would someone with strong genetic indications of risk for Parkinson's, and genetic indications of protection against cardiovascular disease, lung cancer, and the other smoking-related diseases, be making a healthy choice by starting to smoke?
Replies from: gjm, Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2016-01-14T21:38:33.868Z · LW(p) · GW(p)
SSC describes a related effect and also mentions Parkinson's in SCHIZOPHRENIA: NO SMOKING GUN (recent).
Replies from: None↑ comment by [deleted] · 2016-01-15T09:17:33.174Z · LW(p) · GW(p)
Thanks for that. What a blog! My personal experiences with psychosis bias me towards theories that play up suggestibility as a key feature in psychotic disorders. I’d favour an alternative hypothesis that the marketing of cigarettes (e.g. presence in movies, positioning as a bad and cool vice) encourages sz’s to take up smoking, then addiction and receptor biases maintain the habit.
comment by [deleted] · 2016-01-15T18:21:25.046Z · LW(p) · GW(p)
Anecdotally, Russians and Englishmen pronounce the Latin [names of biological taxa] rather differently. In my opinion (not really informed, since we did not have a Latin course), saying '-aceae' as 'ayshae' is wrong, and although I know people do that, it still throws me off for a moment. Still, I've just realized that there are non-English biologists who mangle Latin as they wish. Has anyone got any data on how widespread English Latin is?
Replies from: IlyaShpitser, MrMind, Douglas_Knight↑ comment by IlyaShpitser · 2016-01-15T18:26:51.715Z · LW(p) · GW(p)
Yes, "Englishing" Latin is a pet peeve of mine.
Replies from: Lumifer↑ comment by Douglas_Knight · 2016-01-20T22:11:11.898Z · LW(p) · GW(p)
Every country has a different pronunciation of Latin. Standardizing on the English version sounds like an improvement to me.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2016-01-20T22:48:16.196Z · LW(p) · GW(p)
English has awful, unintuitive pronunciation rules. Almost any other Indo-European language would be better. I would prefer Spanish or Italian.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2016-01-20T23:14:32.453Z · LW(p) · GW(p)
The standard pronunciation of Latin by English speakers doesn't follow English pronunciation rules. I added a link. Italian pronunciation was a possible standard, since it is generally used by the Catholic Church. But that doesn't seem likely to spread in Russia.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2016-01-21T17:08:14.485Z · LW(p) · GW(p)
Italian pronunciation rules are different from those of Classical Latin. Even Ecclesiastical Latin sounds different from Classical Latin, and closer to the modern Italian norm. My school priest pronounced Humanae Vitae as "oo-man-eh bee-teh," whereas in ancient times it would have been "hoo-man-eye wee-tye."
comment by Brillyant · 2016-01-13T18:06:01.214Z · LW(p) · GW(p)
Yes, I'm aware politics kills minds.
What did Obama do wrong?
I hear people say (1) the economy didn't grow fast enough and (2) the U.S. is weaker, globally.
Is there objective evidence of either of these claims? Or is this mostly just blue vs. green tribalism?
Replies from: knb, Lumifer, None, Douglas_Knight↑ comment by knb · 2016-01-13T22:58:46.679Z · LW(p) · GW(p)
The economy definitely is not growing fast enough, but blaming Obama doesn't really make sense. Very weak growth is a problem throughout the developed world, and the US economy is if anything better than average.
What did Obama do wrong?
Leaving aside issues that are primarily questions of personal values, I see a couple of important failures that seem pretty objective.
Affordable Care Act: The rollout of Healthcare.gov was an embarrassing debacle, but the law itself just isn't very good--even from a liberal perspective (the basic plan was originally a proposal by the right-wing Heritage Foundation). It doesn't achieve anything like universal coverage, there have been continued large increases in insurance premiums, the insurance "corridors" are hemorrhaging money faster than expected, and there are some signs of the "death spiral." (United Health is losing so much money they plan to exit the [individual] market.) Even Obama has admitted that "if you like your health plan, you can keep it" turned out not to be true. Keep in mind that ACA was designed so that many of its aspects don't take full effect for years, so we still don't really know how things will shake out, but it's clear Obama's signature legislation isn't curing America's healthcare woes.
The Obama administration's policy of supporting regime change against secular Arab governments has basically been a disaster, leading to ruinous civil wars in Libya and Syria. Islamists are almost certainly a lot stronger than they would have been if the administration had done nothing. The side effects are terrible for long-term US policy goals like supporting European integration, since the resulting refugee crisis has (temporarily?) killed Schengen and strengthened the nationalist parties in Europe. And the crisis is ongoing; we have no idea how bad it will get.
↑ comment by [deleted] · 2016-01-14T20:21:10.597Z · LW(p) · GW(p)
Which liberal health policy experts have you been reading to get that impression of the Affordable Care Act? Most liberal economists I have read have mixed feelings on the act, but think it was largely an improvement. While conservatives would probably agree with most of your statement, I would hardly call your view an objective one if a lot of experts would disagree with it.
Here is Austin Frakt on the Affordable Care Act.
Replies from: knb↑ comment by knb · 2016-01-15T05:24:03.386Z · LW(p) · GW(p)
Which liberal health policy experts have you been reading to get that impression of the Affordable Care Act?
I'm saying the law, taken on its merits, is not actually good by the standards liberals profess. I'm aware most liberals supported it (with some grumbling) but I think that's mainly because of Halo Effect/Affective Death Spiral. If George W. Bush had proposed this, I suspect liberals would have criticized it for locking us even deeper into the private insurance trap (giving corporations a captive market).
↑ comment by Brillyant · 2016-01-14T02:22:43.766Z · LW(p) · GW(p)
Thank you for the reply. This is interesting.
Is the U.S. health care system as a whole better than before the ACA in your view?
Also, could Obama have gotten anything more liberal—like universal coverage—through congress?
What are your politics?
Replies from: knb↑ comment by knb · 2016-01-14T06:31:45.115Z · LW(p) · GW(p)
Is the U.S. health care system as a whole better than before the ACA in your view?
No. I'd mostly prefer market-oriented reforms for healthcare (plus vouchers), but right now we tend to get the worst of both worlds. Single payer would also probably be better than what we have now.
Also, could Obama have gotten anything more liberal—like universal coverage—through congress?
The main obstacle wasn't really that it was too liberal. Opposition from the insurance lobby is what killed "Hillarycare" back in 93 even though Democrats had huge majorities then as well. Once the insurance lobby got the "public option" removed from the legislation, they supported it.
What are your politics?
Mostly paleoconservative, less opposed to "big government" than most paleocons.
↑ comment by Lumifer · 2016-01-13T18:24:34.916Z · LW(p) · GW(p)
What did Obama do wrong?
He created very high expectations (remember Hope & Change?) and massively underperformed.
Basically, he turned out to be a mediocre President, not horrible, but not particularly good either. He disappointed an awful lot of people.
As to the claims you mention, Presidents have little control over the economy. Economic growth is just not a function of who currently lives in the White House. With respect to "weaker globally", it's a complicated discussion which should start with whether you want the US to be a global SWAT team.
Replies from: Brillyant↑ comment by Brillyant · 2016-01-13T18:44:50.547Z · LW(p) · GW(p)
Thank you!
He created very high expectations (remember Hope & Change?) and massively underperformed.
And "Yes We Can!". :)
I guess all political slogans blend together for me. All of this year's nominees are making similar over-the-top type claims about what they will accomplish. I'm sincerely surprised anyone believes any of them.
One "change" that happened was the ACA. I know this is contentious depending on your politics, but it at least qualifies as the sort of "change" Obama's constituents likely had in mind when electing him.
Basically, he turned out to be a mediocre President, not awful but not particularly good either.
Do you have any metrics in mind to support this? Presidential rankings seem problematic to me. Especially trying to rank Obama so early on, since we haven't seen the long term impact of anything he has done.
As to the claims you mention, Presidents have little control over the economy. Economic growth is just not a function of who currently lives in the White House.
This is also my sense, though I don't know much about economics.
My terribly over-simplified view is that the economy was horrible in 2008, and now it is much better. So that is good. And while I don't give Obama anything like full credit for that, I also don't accept criticism that he made the economy worse or didn't grow it "enough".
With respect to "weaker globally", it's a complicated discussion which should start with whether you want the US to be a global SWAT team.
This is my view as well. I have no idea where critics of Obama get the evidence that the US is less safe now than in 2008. I'm assuming it's just tribal politics, but would be open to arguments.
Replies from: Lumifer, Riothamus↑ comment by Riothamus · 2016-01-13T19:25:47.468Z · LW(p) · GW(p)
The thing to consider about the economy is that the president is not only not responsible, but mostly irrelevant. An easy way to see this is the 2008 stimulus packages. Critics of the president frequently share the graph of national debt which grows sharply immediately after he took office - ignoring that the package was demanded by congress and supported by his predecessor, who wore a different color shirt.
A key in evaluating a president is the difference between what he did, what he could have done, and what people think about him. Consider that the parties were polarizing before he took office.
In terms of specifics, I am disappointed that he continued most of the civil rights abuses of the previous administration with regards to due process. I also oppose the employment of the drone warfare doctrine, which is minimally effective at achieving strategic goals and highly effective at generating ill will in the region.
By contrast, I am greatly pleased at the administration's commitment to diplomacy and improvement of our reputation among our allies. I am pleased that major combat operations were ended in two theaters, and that no new ones were launched. I applaud the Iranian nuclear agreement.
Replies from: sight, lookup3↑ comment by sight · 2016-04-21T18:31:29.868Z · LW(p) · GW(p)
I am pleased that major combat operations were ended in two theaters, and that no new ones were launched.
So what about Libya? What about the fight against ISIS? The former was a quick-strike operation that caused the country in question to go to hell fast. The latter is an example of things going to hell so badly after a "successfully ended operation" that we had to intervene again.
Replies from: Riothamus↑ comment by Riothamus · 2016-04-21T20:51:36.827Z · LW(p) · GW(p)
As compared to what alternative? There is no success condition for large scale ground operations in the region. If the criticism of the current administration is "failed to correct the lack of strategic acumen in the Pentagon" then I would agree, but I wonder what basis we have for expecting an improvement.
It seems to me we can identify problems, but have no available solutions to implement.
Replies from: sight↑ comment by sight · 2016-04-21T21:25:27.996Z · LW(p) · GW(p)
As compared to what alternative?
Well, not intervening in Libya for starters.
Replies from: Riothamus↑ comment by Riothamus · 2016-04-22T16:03:15.251Z · LW(p) · GW(p)
What are your criteria for good foreign policy choices then? You have conveyed that you want Iraq to be occupied, but Libya to be neglected, so non-intervention clearly is not the standard.
My current best guess is 'whatever promotes maximum stability'. Also, how do you think these decisions are currently made?
Replies from: sight↑ comment by sight · 2016-04-22T23:05:46.936Z · LW(p) · GW(p)
I wouldn't object nearly as much to occupying Libya as to what Obama actually did. Namely, intervene just enough to force Gaddafi out and leave a huge mess.
Actually I would still object, but that's because Gaddafi had previously abandoned his WMD program under US pressure. So getting rid of him now sends a very bad message to other third world dictators contemplating similar programs.
↑ comment by lookup3 · 2016-04-21T18:15:47.191Z · LW(p) · GW(p)
I am pleased that major combat operations were ended in two theaters, and that no new ones were launched.
What like Libya? Or the fight against ISIS? The former is an example of a fast intervention that caused things to go straight to hell. The latter is an example of him "ending an operation" and things going to hell so badly that he had to intervene again.
↑ comment by [deleted] · 2016-01-14T21:07:13.373Z · LW(p) · GW(p)
I think Obama's greatest accomplishment was the overhaul of military spending he worked with Secretary Robert Gates on at the start of his administration. I'm also highly supportive of his executive actions on immigration reform.
I find the Affordable Care Act to be difficult to evaluate. They made so many changes at once that it's hard to ascertain their net effect on health care overall. Yes, increases in health care costs have gone down. Yes, younger people are spending more on insurance that they probably don't need. Yes, there are multiple ways to improve the system which are not politically feasible.
I think Obama's biggest failure was Libya. The US should stop supporting rebellions, or invading countries. It's never clear what's going to happen when the revolutionaries take over, or the new regime is in place, and the war itself is always bad.
The issue I find most perplexing is wiretapping. It seems like Obama didn't do anything about it, and nobody really seems to have cared. Other failures, such as his failure to close Guantanamo Bay, can be explained away as the fault of Congress, but I don't think the wiretapping issue can.
One thing people don't talk about enough is the unprecedented slowdown in the growth of government spending these past few years. Look at what happened with nominal government spending. I think this is principally due to the Tea Party because it coincides with their rise and fall almost exactly, but I still think Obama's role in this brief change is an important one. Alex Tabarrok's views on the subject from 2008 come across to me as prescient.
Replies from: username2, LumiferExit is the right strategy because if there is any hope for reform it is by casting the Republicans out of power and into the wilderness where they may relearn virtue. Libertarians understand better than anyone that power corrupts. The Republican party illustrates. Lack of power is no guarantee of virtue but Republicans are a far better – more libertarian – party out-of-power than they are in power. When in the wilderness, Republicans turn naturally to a critique of power and they ratchet up libertarian rhetoric about free trade, free enterprise, abuse of government power and even the defense of civil liberties.
↑ comment by username2 · 2016-01-17T22:13:12.267Z · LW(p) · GW(p)
I think Obama's biggest failure was Libya. The US should stop supporting rebellions, or invading countries. It's never clear what's going to happen when the revolutionaries take over, or the new regime is in place, and the war itself is always bad.
IIRC, that was Nicolas Sarkozy's idea. Obama's fault is that he joined him.
Back in the mid-1990s, the USA and the whole Western world were heavily criticized for not intervening in the Rwanda conflict; many people in the US and Europe took that criticism to heart, and now they tend to err in the opposite direction.
↑ comment by Douglas_Knight · 2016-01-13T20:22:39.310Z · LW(p) · GW(p)
What are you trying to explain? Why do you believe that Obama did anything wrong?
Are you trying to explain his approval ratings? Shouldn't ~50% approval be your default assumption of political polarization? If so, there is nothing to explain. Are they very different from other presidents? A little lower, but nothing out of the ordinary. W's peak approval was just after 9/11. Clinton's peak approval was during the impeachment. Clinton's rose over the course of his term, while W's and Obama's fell. I guess you could interpret that as judging their actions, but W's ended low and Obama's ended mediocre.
Added: better than the summary statistics in wikipedia are these graphs (correcting the dead link in wikipedia). Obama had a two year honeymoon period and has bounced around 50/50 since then.
Replies from: Brillyant↑ comment by Brillyant · 2016-01-13T20:59:40.133Z · LW(p) · GW(p)
What are you trying to explain? Why do you believe that Obama did anything wrong?
Anger from the political right. Though it's generally what I would expect given the nature of politics, I want to understand if there is an objective basis for opposition to Obama...or if it is just pure blue vs. green stuff.
I have a sense race plays a big part of the right's hatred of him, but I'm not sure how to go about validating this.
Replies from: Douglas_Knight, Lumifer↑ comment by Douglas_Knight · 2016-01-13T21:24:10.484Z · LW(p) · GW(p)
My link also gives peak disapproval ratings. Obama is perfectly normal. W is an outlier, with a peak disapproval of 71%. Other than him, all the presidents since Ford had a peak disapproval of 54-60%. (Ford didn't have time to do anything to merit disapproval.) Obama is exactly in the middle. (Average disapproval is probably a better metric, though.)
Replies from: knb, Brillyant↑ comment by knb · 2016-01-13T21:57:15.254Z · LW(p) · GW(p)
Ford didn't have time to do anything to merit disapproval
Anecdotally, a lot of the anger came from him pardoning Nixon.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2016-01-13T23:15:23.237Z · LW(p) · GW(p)
Sure, his ratings (archive) crashed from 71/3 on inauguration to 50/28 after the pardon, but that just took him to a fairly normal level.
↑ comment by Lumifer · 2016-01-13T21:23:48.244Z · LW(p) · GW(p)
Anger from the political right.
I don't see any unusual anger.
It's election year, so the usual suspects are already hard at work operating their mud-throwers at max volume and intensity...
an objective basis
What in politics would you consider to be an "objective basis"?
Replies from: Brillyant↑ comment by Brillyant · 2016-01-13T21:52:47.850Z · LW(p) · GW(p)
What in politics would you consider to be an "objective basis"?
I'm not sure. Perhaps there is very little that can be considered objective, since the two parties have competing definitions of success.
Are you saying there is no objective way to evaluate a president's performance? Which measures did you use to conclude the following?
Replies from: Lumifer
Basically, he turned out to be a mediocre President, not horrible, but not particularly good either.
↑ comment by Lumifer · 2016-01-13T22:11:49.542Z · LW(p) · GW(p)
Are you saying there is no objective way to evaluate a president's performance?
Evaluating performance necessarily involves specifying goals and metrics.
If you provide hard definitions of the goals that you're interested in, as well as precise specifications of the metrics, plus a particular weighting scheme for combining performance numbers for multiple goals, well, then you can claim that you are objectively evaluating the performance. The problem is that you're evaluating a very narrow idea of performance, one that involves the goals and the metrics and the weights that you have picked. Other people can (and probably will) say that your goals are irrelevant, your metrics are misleading, and your weights are biased X-)
Which measures did you use to conclude the following?
I listened to my feelings :-P
comment by username2 · 2016-01-12T10:41:58.575Z · LW(p) · GW(p)
The paperclip maximizer thought experiment makes a lot of people pattern-match AI risk to science fiction. Do you know of any AI-risk-related thought experiments that avoid that?
Replies from: gjm, _rpd↑ comment by gjm · 2016-01-12T12:42:08.087Z · LW(p) · GW(p)
Major AI risk is science fiction -- that is, it's the kind of thing science-fiction stories get written about, and it isn't something we have experience of yet outside fiction. I don't see how any thought experiment that seriously engages with the issue could not pattern-match to science fiction.
Replies from: IlyaShpitser, RaelwayScot↑ comment by IlyaShpitser · 2016-01-12T17:54:47.137Z · LW(p) · GW(p)
There is a field that thinks hard about risks from unintelligent computers (computer security) that tackles very difficult problems that sometimes get written about in popular fiction (Neal Stephenson, etc.) and manages to not look silly.
I think to the extent that (U)FAI research is even a "real area," it would be closest in mindset to computer security.
Replies from: ChristianKl, gjm, Lumifer↑ comment by ChristianKl · 2016-01-12T19:03:37.357Z · LW(p) · GW(p)
manages to not look silly.
Computer security as portrayed on TV frequently does look silly.
Replies from: Lumifer↑ comment by gjm · 2016-01-12T18:27:36.139Z · LW(p) · GW(p)
I endorse Lumifer's quibble about the field of computer security, with the caveat that often the fact that the risks happen inside computer systems is much more important than the fact that they come from people.
The sort of "value alignment" questions MIRI professes (I think sincerely) to worry about seem to me a long way away from computer security, and plausibly relevant to future AI safety. But it could well be that if AI safety really depends on nailing that sort of thing down then we're unfixably screwed and we should therefore concentrate on problems there is at least some possibility of solving...
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2016-01-12T19:06:54.881Z · LW(p) · GW(p)
I think my point wasn't about what computer security precisely does, but about the mindset of people who do it (security people cultivate an adversarial point of view about systems).
My secondary point is that computer security is a very solid field, and doesn't look wishy washy or science fictiony. It has serious conferences, it has research centers, industry labs, intellectual firepower, etc.
Replies from: Wei_Dai, philh, Lumifer↑ comment by Wei Dai (Wei_Dai) · 2016-01-15T23:03:07.032Z · LW(p) · GW(p)
I'm not sure how much there is to learn from the field of computer security, with regard to the OP's question. It's relatively easy to cultivate an adversarial mindset and get funding for conferences, research centers, labs, intellectual firepower, etc., when adversaries exist at the present time and are causing billions of dollars of damage each year. How to do that if the analogous adversaries are not expected to exist for a decade or more, and we expect it will be too late to get started once the adversaries do exist?
Replies from: gwern↑ comment by gwern · 2016-01-16T00:40:31.054Z · LW(p) · GW(p)
...Can we consider computer security a success story at all? I admit, I am not a professional security researcher but between Bitcoin, the DNMs, and my own interest in computer security & crypto, I read a great deal on these topics and from watching it in real-time, I had the definite impression that, far from anyone at all considering modern computer security a success (or anything you want to emulate at all), the Snowden leaks came as an existential shock and revelation of systemic failure to the security community in which it collectively realized that it had been staggeringly complacent because the NSA had devoted a little effort to concealing its work, that the worst-case scenarios were ludicrously optimistic, and that most research and efforts were almost totally irrelevant to the NSA because the NSA was still hacking everyone everywhere because it had simply shifted resources to attacking the weakest links, be it trusted third parties, decrypted content at rest, the endless list of implementation flaws (Heartbleed etc), and universal attacks benefiting from precomputation. Even those who are the epitome of modern security like Google were appalled to discover how, rather than puzzle over the random oracle model or do deep R&D on quantum computing, the NSA would just tap its inter-datacenter links to get the data it wanted.
When I was researching my "Bitcoin is Worse is Better" essay, I came across a declassified NSA essay where the author visited an academic conference and concluded at the end, with insufferable - yet in retrospect, shockingly humble - arrogance, that while much of the research was of high quality and interesting, the researchers there were no threat to the NSA and never would be. I know I'm not the only one to be struck by that summary because Rogaway quotes it prominently in his recent "The Moral Character of Cryptographic Work" reflecting on the almost total failure of the security community to deliver, well, security. Bernstein also has many excellent critiques of the security community's failure to deliver security and its frequent tendency towards "l'art pour l'art".
The Snowden leaks have been to computer security what DNA testing was to the justice system or clinical trials to the medical system; there are probably better words for how the Snowden revelations made computer security researchers look than 'silly'.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2016-01-16T19:09:55.406Z · LW(p) · GW(p)
It's a success story in the sense that there is a lot of solid work being done. It is not a success story in the sense that currently, and for the foreseeable future, attack >> defense (but this was true in lots of other areas of warfare throughout various periods of history). We wouldn't consider armor research not a success story just because at some point flintlocks phased out heavy battlefield armor.
The fact that computer security is having a hard time solving a much easier problem with a ton more resources should worry people who are into AI safety.
Replies from: gwern↑ comment by gwern · 2016-01-16T20:56:38.262Z · LW(p) · GW(p)
We wouldn't consider armor research not a success story just because at some point flintlocks phased out heavy battlefield armor.
I think you missed the point of my examples. If flintlocks killed heavy battlefield armor, that was because they were genuinely superior and better at attack. But we are not in a 'machine gun vs bow and arrow' situation.
The Snowden leaks were a revelation not because the NSA had any sort of major unexpected breakthrough. They have not solved factoring. They do not have quantum computers. They have not made major progress on P=NP or reversing one-way functions. The most advanced stuff from all the Snowden leaks I've read was the amortized attack on common hardwired primes, but that again was something well known in the open literature and why we were able to figure it out from the hints in the leaks. In fact, the leaks strongly affirmed that the security community and crypto theory has reached parity with the NSA, that things like PGP were genuinely secure (as far as the crypto went...), and that there were no surprises like differential cryptanalysis waiting in the wings. This is great - except it doesn't matter.
They were a revelation because they revealed how useless all of that parity was: the NSA simply attacked on the economic, business, political, and implementation planes. There is no need to beat PGP by factoring integers when you can simply tap into Gmail's datacenters and read the emails decrypted. There is no need to worry overly much about OTR when your TAO teams divert shipments from Amazon, insert a little hardware keylogger, and record everything and exfiltrate out over DNS. Get something into a computer's BIOS and it'll never come out. You don't need to worry much about academics coming up with better hash functions when your affiliated academics, who know what side their bread is buttered on, will quietly quash it in committee or ensure something like export-grade ciphers are included. You don't need to worry about spending too much on deep cryptanalysis when the existence of C ensures that there will always be zero-days for you to exploit. You don't need to worry about even revealing capabilities when you can just leak information to your buddies in the FBI or DEA and they will work their tails off to come up with a plausible non-digital story which they can feed the judge. (Your biggest problems, really, are figuring out how to not drown under the tsunami of data coming in at you from all the hacked communications links, subverted computers, bulk collections from cloud datacenters, decrypted VPNs etc.)
This isn't like guns eliminating armor. This is like an army not bothering with sanitation and wondering why it keeps losing to the other guys, which turns out to be because the latrine contractors are giving kickbacks to the king's brother.
The fact that computer security is having a hard time solving a much easier problem with a ton more resources should worry people who are into AI safety.
I agree, it absolutely does, and it's why I find kind of hilarious people who seem to seriously think that to do AI safety, you just need some nested VMs and some protocols. That's not remotely close to the full scope of the problem. It does no good to come up with a secure sandbox if dozens of external pressures and incentives and cost-cutting and competition mean that the AI will be immediately let out of the box.
(The trend towards attention mechanisms and reinforcement learning in deep learning is an example of this: tool AI technologies want to become agent AIs, because that is how you get rid of expensive slow humans in the loop, make better inferences and decisions, and optimize exploration by deciding what data you need and what experiments to try.)
↑ comment by philh · 2016-01-13T10:26:10.307Z · LW(p) · GW(p)
Eliezer has said that security mindset is similar, but not identical, to the mindset needed for AI design. https://www.facebook.com/yudkowsky/posts/10153833539264228?pnref=story
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2016-01-13T14:05:19.058Z · LW(p) · GW(p)
Well, what a relief!
↑ comment by Lumifer · 2016-01-12T19:21:13.904Z · LW(p) · GW(p)
about the mindset of people who do it
A fair point, though that mindset is hacker-like in nature. It is, basically, an automatic "how can I break or subvert this system?" reaction to everything.
But the thing is, computer security is an intensely practical field. It's very much like engineering: has to be realistic/implementable, bad things happen if it fucks up, people pay a lot of money to get good solutions, these solutions are often specific to the circumstances, etc.
AI safety research at the moment is very far from this.
↑ comment by Lumifer · 2016-01-12T18:08:39.911Z · LW(p) · GW(p)
There is a field that thinks hard about risks from unintelligent computers (computer security)
Not quite. Computer security deals with managing risks coming from people, it's just that the universe where it has to manage risks is a weird superposition of the physical world (see hardware or physical-access attacks), the social world (see social engineering attacks), and the cyberworld (see the usual 'sploit attacks).
↑ comment by RaelwayScot · 2016-01-12T17:46:30.331Z · LW(p) · GW(p)
I think many people intuitively distrust the idea that an AI could be intelligent enough to transform matter into paperclips in creative ways, but 'not intelligent enough' to understand its goals in a human and cultural context (i.e. to satisfy the needs of the business owners of the paperclip factory). This is often due to the confusion that the paperclip maximizer would get its goal function from parsing the sentence "make paperclips", rather than from a preprogrammed reward function, for example a CNN trained to map the number of paperclips in images to a scalar reward.
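A minimal toy sketch of that distinction (all names and numbers invented; a plain counting function stands in for a trained CNN): the agent's "goal" is just a preprogrammed scalar reward, and the English sentence "make paperclips" never enters the picture.

```python
# Hypothetical sketch: the "goal" is a preprogrammed scalar reward, not a
# parsed understanding of the sentence "make paperclips". A plain counting
# function stands in for a trained scorer (e.g. a CNN counting paperclips).

def reward(world_state):
    # Scalar reward: how many paperclips exist. No notion of human intent.
    return world_state.count("paperclip")

def greedy_agent(actions, state, steps=3):
    # At each step, take whichever action raises the scalar reward most,
    # regardless of whether the result is what the factory owners wanted.
    for _ in range(steps):
        state = max((a(state) for a in actions), key=reward)
    return state

make_paperclip = lambda s: s + ["paperclip"]
recycle_car = lambda s: s + ["paperclip"] * 5  # turns valued matter into clips

result = greedy_agent([make_paperclip, recycle_car], state=[])
print(reward(result))  # → 15: it always picks the action the owners would reject
```

The agent is perfectly "smart" about maximizing the number; nothing in its goal function ever represented what the owners meant.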
Replies from: gjm↑ comment by gjm · 2016-01-12T18:22:48.965Z · LW(p) · GW(p)
Could well be. Does that have anything to do with pattern-matching AI risk to SF, though?
Replies from: RaelwayScot↑ comment by RaelwayScot · 2016-01-12T19:27:22.770Z · LW(p) · GW(p)
Just speaking of weaknesses of the paperclip maximizer thought experiment. I've seen this misunderstanding in at least 4 out of 10 cases where the thought experiment was brought up.
↑ comment by _rpd · 2016-01-12T22:55:46.359Z · LW(p) · GW(p)
If you are just trying to communicate risk, analogy to a virus might be helpful in this respect. A natural virus can be thought of as code that has goals. If it harms humankind, it doesn't 'intend' to, it is just a side effect of achieving its goals. We might create an artificial virus with a goal that everyone recognizes as beneficial (e.g., end malaria), but that does harm due to unexpected consequences or because the artificial virus evolves, self-modifying its original goal. Note that once a virus is released into the environment, it is nontrivial to 'delete' or 'turn off'. AI will operate in an environment that is many times more complex: "mindspace".
↑ comment by Good_Burning_Plastic · 2016-01-24T11:46:46.071Z · LW(p) · GW(p)
Assuming it was written by her and not her adviser.
The writing doesn't sound like the same voice as her advisor's (e.g. arXiv:1402.1131). OTOH it is plausible that most of the original research in it was the latter's. Also, the fact that she doesn't seem to have ever published anything else is pretty suspicious. EDIT: also, she took ten years to finish it.
All in all, I'd guess her IQ is above 100 but below 130.
↑ comment by username2 · 2016-01-24T02:23:16.889Z · LW(p) · GW(p)
That only postpones the problem for a few years, unless you establish a permanent military presence.
The US can keep 100,000+ soldiers on the ground for 7 years, have all of its top military brass focus on that conflict, fight cleverly and aggressively against the opposition, lead the country through the process of drafting a constitution and holding elections, train the new military and police forces, spend tens of billions of dollars helping to build the country's infrastructure (in addition to hundreds of billions of dollars of military spending), gradually remove its troops in an orderly fashion as negotiated with the country's new government, and still have everything go horribly wrong within a couple of years of leaving.
comment by Panorama · 2016-01-15T20:09:14.017Z · LW(p) · GW(p)
Why boredom is anything but boring
Implicated in everything from traumatic brain injury to learning ability, boredom has become extremely interesting to scientists.
comment by [deleted] · 2016-01-12T12:30:18.803Z · LW(p) · GW(p)
But I now thought that this end [one's happiness] was only to be attained by not making it the direct end. Those only are happy (I thought) who have their minds fixed on some object other than their own happiness[....] Aiming thus at something else, they find happiness along the way[....] Ask yourself whether you are happy, and you cease to be so.
-John Stuart Mill, the utilitarian philosopher, in his autobiography
Policy Debates Should Not Appear One-Sided. Is there testimony against the one-sided evidence-by-testimony for the paradox of hedonism at the Wikipedia article? Or, better yet, compelling empirical evidence from well-designed experimentation?
comment by Brillyant · 2016-01-12T18:53:12.143Z · LW(p) · GW(p)
Has anyone heard of Amazon using drones for actual deliveries? Or are they still just in testing?
Replies from: ChristianKl, Lumifer, knb↑ comment by ChristianKl · 2016-01-13T12:42:32.292Z · LW(p) · GW(p)
http://www.marketwatch.com/story/drone-delivery-is-already-here-and-it-works-2015-11-30 suggests that a main problem holding back drone deliveries is governmental regulation.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2016-01-13T19:42:09.964Z · LW(p) · GW(p)
Amazon mentions regulation, but it also says that there is a lot of testing ahead.
↑ comment by polymathwannabe · 2016-01-24T14:48:20.177Z · LW(p) · GW(p)
If you made an incorrect statement and this gets pointed out, you will lose status for admitting it
LW culture is built specifically to encourage and reward correcting oneself.
Replies from: bogus↑ comment by username2 · 2016-01-24T06:57:09.456Z · LW(p) · GW(p)
I agree with your implied point that putting boots on the ground for a few years (and then removing them) is less likely to lead to horrible outcomes if it's done in a stable region, where law-and-order is well-established in the neighboring countries and there are unlikely to be any major disruptive events in the region during the military engagement or the decade after it has ended.
How about actually removing your troops in an orderly fashion, rather than letting negotiations fail over a minor technical matter and removing the troops all at once?
I am unsure which part of this paragraph you are referring to, but if the policy of temporarily sending troops into a country is fragile enough to risk disaster when that sort of detail isn't handled properly, then that is not a good sign.
comment by Panorama · 2016-01-15T20:01:39.674Z · LW(p) · GW(p)
Can Economics Change Your Mind?
Economics is sometimes dismissed as more art than science. In this skeptical view, economists and those who read economics are locked into ideologically motivated beliefs—liberals versus conservatives, for example—and just pick whatever empirical evidence supports those pre-conceived positions. I say this is wrong and solid empirical evidence, even of the complicated econometric sort, changes plenty of minds.
Can economics change your mind?
Where to start? I could write a whole ongoing blog on this question (wait…). In any case, here are just a few examples of where I have changed my mind due to economic evidence:
comment by [deleted] · 2016-01-14T20:05:41.578Z · LW(p) · GW(p)
Carol Dweck on fixed vs. growth mindsets
In terms of theory, I'm not sure if fixed vs. growth mindset is the best way to describe the comparison. I feel like there should be a way to define the two concepts more precisely, but I'm not sure exactly how. I think the research is still useful despite my concerns, although you're more than welcome to argue it isn't. Anyway, I've been wondering about this in terms of LessWrong. Does LessWrong as a community have a fixed mindset? The praising-for-being-smart vs. praising-for-effort distinction made me wonder if LessWrong is more concerned with having intelligent discussions, and whether this interferes with improvement in rationality.
Replies from: Viliam↑ comment by Viliam · 2016-01-15T10:26:17.518Z · LW(p) · GW(p)
If I try to quickly taboo the words "fixed mindset" and "growth mindset", the essential question is probably this:
Is the person aware (not verbally, but on the gut level) that their own skills could improve in the future, or do they implicitly assume that their skills will always stay the same?
It is a bit more complicated than this. For example, the person may deny the possibility of growth by refusing to classify something as a "skill", because merely reframing something as a "skill" (as opposed to a "trait") already suggests the possibility of improvement. For example, one person would say "I am introverted" where another person would say "my social skills of dealing with strangers are not good enough (yet)". In other words, the person may reject not just the possibility of improving their own skill, but the idea of the trait being modifiable in general.
Also, this doesn't have to apply generally. For example a stereotypical nerd may assume that you are able to learn programming, but that social skills are innate; while another person may assume that social behaviors are learned, but the talent to understand math or computers is innate. So one can have a "fixed mindset" in some areas and a "growth mindset" in others.
Does LessWrong as a community have a fixed-mindset?
Both/neither. The idea that humans can become more rational is central to the website. On the other hand, I guess everyone accepts that IQ is a thing. On the other other hand, transhumanists hope to overcome even those biological limits in a distant future.
But these are the professed beliefs. What do LessWrongers alieve? Not so sure here; but I'd guess that anyone who e.g. participated in a CFAR workshop has revealed the "growth mindset". But it's also possible that for some of them the "growth mindset" applies only in a narrow area.
Uhm, how about making a poll with more specific questions, such as "how much you believe you could improve in X" for various values of X such as "social skills" or "your job" or...?
Replies from: None↑ comment by [deleted] · 2016-01-16T06:22:39.551Z · LW(p) · GW(p)
Is the person aware (not verbally, but on the gut level) that their own skills could improve in the future, or do they implicitly assume that their skills will always stay the same?
I think that's a good definition of the theory as Carol Dweck would define it; I'm just not so sure that's the best definition of the experimental results. For instance, what precisely is gut level awareness? How would I test it experimentally if they can't vocally express this awareness? Is the fixed mindset due to unawareness of the ability to improve or due to a desire to stay the same? Is it that the individual is aware they can improve, but simply is overestimating their own probability of getting worse or underestimating their probability of getting better? Is it an issue of avoidance of failure or is it a failure to approach goals? If I was to define the two terms, I might use something like:
fixed-mindset - When individuals are praised for their attributes, they are more likely to engage in behaviors intended to display or protect those attributes.
growth-mindset - When individuals are praised for their effort, they are more likely to engage in behaviors intended to improve their attributes.
But that's rough. I'm not familiar with all the studies on the subject.
Replies from: ChristianKl↑ comment by ChristianKl · 2016-01-18T11:57:16.382Z · LW(p) · GW(p)
How would I test it experimentally if they can't vocally express this awareness?
Just like you can run implicit racism tests I think you likely also can run texts where you let participants read various statements and measure their reactions.
fixed-mindset - When individuals are praised for their attributes, they are more likely to engage in behaviors intended to display or protect those attributes.
growth-mindset - When individuals are praised for their effort, they are more likely to engage in behaviors intended to improve their attributes.
I think that points to part of the experiments but it doesn't explain the whole concept.
comment by [deleted] · 2016-01-13T15:13:39.677Z · LW(p) · GW(p)
"Staying in the present" is a popular pop-psychology prescription. The evidence suggests a different and more sophisticated attitude to time:
... Zimbardo believes research reveals an optimal balance of perspectives for a happy life; commenting, our focus on reliving positive aspects of our past should be high, followed by time spent believing in a positive future, and finally spending a moderate (but not excessive) amount of time in enjoyment of the present.
So instead of living in the present, try living in the positive aspects of the past, and to a lesser extent believing in the positive aspects of the future, and finally only enjoy the present, like pop-medical advice for other minor vices, ‘in moderation‘.
comment by [deleted] · 2016-01-13T14:59:53.295Z · LW(p) · GW(p)
I want to determine whether I ought to have children or not based on the consequences for the population, my child(ren) and me personally.
I reckon the demographic factor that is most relevant to this choice is my status as a mentally ill person.
My decision cycle lasts from now till my prime fertile years (till I’m 35).
I will have kids if:
The consequences for the population are good. If existing evidence suggests population growth is good, then the consequences of contributing to population growth are good. Population growth is basically good. There may be some non-linearity to that public good in the far future, and that is a problem the future recipients of this public good, beyond my decision cycle, can solve.
Is this consequence modified by greater participation in population growth by individuals with mental illness? I have no relevant evidence, so I will stick with the most proximate generalisation: that the consequences for population growth are good.
The consequences for the children are good.
Having a close family member affected by a mental illness is the largest known risk factor, to date
Therefore, relative to the general population, it is likely that the consequences for the children are bad. However, this is meaningful at a population level, rather than at the level of the child, as I had intended to analyse. Regardless, I will adjust my interpretation of the consequences of population growth to be bad, simulating the greater prevalence of risk factors in the population if mentally ill people reproduce at a greater rate.
Finally, are the consequences good for me?
Children cost hundreds of thousands of dollars in Western countries. The switching cost is tremendous. We need not enumerate the trade-off from this angle, however, as there is a more psychologically proximate evidence base to examine:
From the abstract of Clarifying the Relationship Between Parenthood and Depression:
Unlike other major adult social roles in the United States, parenthood does not appear to confer a mental health advantage for individuals... Parenthood is not associated with enhanced mental health since there is no type of parent who reports less depression than nonparents.
From the abstract for Parenthood and Psychological Well-Being Theory, Measurement, and Stage in the Family Life Course:
There are theoretical foundations in sociology for two seemingly incompatible positions: (1) children should have a strong negative impact on the psychological well-being of parents and (2) children should have a strong positive impact on the psychological well-being of parents. Most empirical analyses yield only a modest relationship between parenthood and psychological well-being. Usually, but not always, it is negative. In this study we consider the relationship between parental status and several dimensions of psychological well-being. Our analysis is based on data from a large national survey. It suggests that children have positive and negative effects on the psychological well-being of parents. The balance of positive and negative effects associated with parenthood depends on residential status of the child, age of youngest child, marital status of the parent, and the particular dimension of psychological well-being examined. When compared with nonparents, parents with children in the home have low levels of affective well-being and satisfaction, and high levels of life-meaning; parents with adult children living away from home have high levels of affective well-being, satisfaction, and life-meaning.
Scholarly evidence clearly favours non-parenthood for personal well-being.
Given the negative consequences for the general population and the individual with mental illness, and the uncertainty in forecasting consequences for the children themselves, not having children dominates the choice to have children.
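The qualitative argument above can be made explicit as a toy weighted comparison. All the numbers below are invented placeholders for illustration; none of them come from the cited studies:

```python
# Toy sketch with invented utilities, just to make the structure of the
# argument explicit; these numbers are placeholders, not study results.

def decision_value(outcomes):
    # Total utility across the three stakeholders considered above.
    return sum(outcomes.values())

have_children = {
    "population": -1,  # adjusted downward for heritable risk factors
    "child": -1,       # elevated risk relative to the general population
    "self": -1,        # the studies above favor non-parenthood for well-being
}
no_children = {"population": 0, "child": 0, "self": 0}

print(decision_value(no_children) > decision_value(have_children))  # → True
```

Under these (invented) weights, not having children dominates on every dimension at once, which is what makes the conclusion robust to the exact numbers chosen.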
Replies from: philh, Tem42, None↑ comment by Tem42 · 2016-01-27T23:21:55.580Z · LW(p) · GW(p)
Watch your baseline: you should not consider the benefits that you and your child might get vs. not having children, but rather, the benefits you and the child might get vs. the benefits that you and another child might get if you did not have a child but became involved in a mentor program (or other volunteer activity helping children).
It may be hard to determine the value you get through working with other people's children, but there are two big plus sides to doing so:
you have a comparative advantage for a certain population of kids; those with mental illnesses may benefit especially from an adult who has experienced something like what they are going through and
you can experiment to determine the value you get from a mentor program much more easily (or rather, with much lower cost) than you can experiment with having your own kids -- and it makes good sense to try the low cost experiment before you run any final calculations.
↑ comment by [deleted] · 2016-01-15T17:31:04.185Z · LW(p) · GW(p)
I think the cost of children is a factor in the psychological well-being of the parents, so it's double counting to treat those as separate items. More to the point, you are not an average. While the effect of a child is slightly negative for the average parent, parents will vary widely in the effect of their own children on their life. If you are wealthy, in a stable marriage, and knowledgeable about parenting, then I would expect children to be net-positive for your well-being. I think a lot of the negatives of children stem from poor decision-making by the parents which leads to unnecessary stress.
Replies from: Viliam↑ comment by Viliam · 2016-01-25T11:49:24.559Z · LW(p) · GW(p)
If you are wealthy, in a stable marriage, and knowledgeable about parenting, then I would expect children to be net-positive for your well-being.
Yes. You should be able to easily take care of yourself (financially, logistically, emotionally) before you accept the burden of taking care of someone else who cannot reciprocate for the next few years.
Imagine that you have less time, less energy, and less money; every day, for the following few years. Plus some unexpected problems appearing randomly. This is how it is when you have a baby.
In return you get a cute little person that is similar to you, loves you more or less unconditionally (unless you really screw up), and "becomes stronger" visibly every month. That can be hugely emotionally rewarding.
However, that emotional reward doesn't change the fact that you still have less time, less energy, and less money. So if something was a problem before, it will become a much greater problem with the baby. That also includes possible problems with the relationship: now the partners have more stress, and less time to talk or have sex (which are the two typical methods of solving interpersonal problems).
comment by seuoseo · 2016-01-13T09:12:28.326Z · LW(p) · GW(p)
Can you think of any good reason to consult any so called psychic?
Replies from: Richard_Kennaway, None, IlyaShpitser, username2, Richard_Kennaway↑ comment by Richard_Kennaway · 2016-01-13T11:19:04.974Z · LW(p) · GW(p)
Can you think of any good reason to consult any so called psychic?
I can think of a good reason for anything. I ask my brain "conditional upon it being a good idea, what might the situation be?" and the virtual outcome pump effortlessly generates scenarios. A professional fiction writer could produce a flood of them. Try it! For any X whatever, you can come up with answers to the question "what might the world look like, conditional upon X being a good idea?" For extreme X's, I recommend not publishing them. If you find yourself being persuaded by the stories you make up, repeat the exercise for not-X, and learn from this the deceptively persuasive power of stories.
Why consult a psychic? Because I have seen reason to think that this one is the real deal. To humour a friend who believes in this stuff. For entertainment. To expose the psychic as a fraud. To observe and learn from their cold reading technique. To audition them for a stage act. Because they're offering a free consultation and I think, why not? (Don't worry, my virtual outcome pump can generate reasons why not just as easily as reasons why.)
What is the real question here?
Replies from: seuoseo↑ comment by seuoseo · 2016-01-13T11:41:56.813Z · LW(p) · GW(p)
You got me, there was no real question. It was all made up for fun. It would be fun to hear about a rationalist's experience visiting a psychic (or desire to), and whatever unusual circumstances and reasoning led them to it.
Replies from: ChristianKl↑ comment by ChristianKl · 2016-01-13T11:49:31.692Z · LW(p) · GW(p)
It would be fun to know of a rationalist's experience
Even conditional on someone having those experiences, I find it unlikely that the person would write a reply to a question on LW posed like the one above.
Replies from: seuoseo↑ comment by [deleted] · 2016-01-14T01:39:42.943Z · LW(p) · GW(p)
Cold reading and externalizing your unconscious thoughts so as to allow conscious consideration thereof are very useful things sometimes. As can sometimes be manipulating symbols so as to deeply seat changes to said thoughts. There's a great big grab-bag of tricks that human societies have come up with to do these things over the millennia including some activities practiced by the more interesting subsets of individuals using that label.
There are spaces within the occult philosophy scene that effectively say, for example, things like that coincidences and synchronicity are very important to pay attention to because noticing what you find synchronistic and interestingly-coincidental about the swirling morass of the world around you reveals what you're actually preoccupied by and how you really feel about things. Or that by forcing yourself to have unconscious emotional reactions to fairly random charged symbols and trying to interpret what they mean to you (think tarot), you gain a better appreciation of what is important to you.
↑ comment by IlyaShpitser · 2016-01-13T18:15:53.335Z · LW(p) · GW(p)
Good ones are good judges of character. Might want to befriend one rather than be a client, though.
↑ comment by username2 · 2016-01-16T22:28:10.852Z · LW(p) · GW(p)
Follow up question: has anyone on LessWrong ever actually consulted a psychic for any reason?
Replies from: seuoseo
comment by [deleted] · 2016-01-17T04:58:13.770Z · LW(p) · GW(p)
Could a hypothetical being exist that is so sensitive to harm and good, and experiences such extremes of harm and good, that altruistic people would be best served by dedicating themselves to the service of that one being?
Replies from: IlyaShpitser, g_pepper, gwern, LessWrong1, username2↑ comment by IlyaShpitser · 2016-01-17T05:47:08.267Z · LW(p) · GW(p)
It's not news to anyone that it's pretty easy to screw up consequentialists. The lesson I take from this is this: "maximize to solve a particular problem, rather than as a lifestyle choice."
Replies from: Richard_Kennaway, Richard_Kennaway↑ comment by Richard_Kennaway · 2016-01-17T11:22:09.106Z · LW(p) · GW(p)
The lesson I take from this is this: "maximize to solve a particular problem, rather than as a lifestyle choice."
Is that a solution to a particular problem, or a lifestyle choice?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2016-01-17T17:21:03.297Z · LW(p) · GW(p)
It's a solution to a problem of bad (underspecified) ethics. The lifestyle choice I am referring to here is "MAXIMIZE ALL THE THINGS."
But of course ethics is hard to fully specify because human minds are involved. It's hard to have models of those. Most of the specification work, the dominating term, is in the most difficult to model part. In this sense I think virtue ethics is playing in the right stadium. They are trying to describe things in terms of the part of the problem that is hardest to model.
↑ comment by g_pepper · 2016-01-17T05:46:33.556Z · LW(p) · GW(p)
You are describing a utility monster, I believe.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2016-01-17T05:52:05.686Z · LW(p) · GW(p)
↑ comment by Gunslinger (LessWrong1) · 2016-01-17T17:16:56.384Z · LW(p) · GW(p)
Richard Stallman could be that kind of man, although he prefers that people be informed thinkers rather than unquestioning servants.
comment by [deleted] · 2016-01-16T22:38:37.279Z · LW(p) · GW(p)
Foundational Research weighs in on cooperation
comment by [deleted] · 2016-01-16T08:47:34.049Z · LW(p) · GW(p)
Direct impact careers - the topic EAs often skirt around. I'm somewhat disturbed by one of my first exposures to EA: the claim that medicine is not an effective, altruistic career whatsoever, because the sheer supply of people interested in and capable of becoming doctors is so great (even after artificial restriction). It seems rather theoretical. What is the economic term for 'replaceability of workers by the labour workforce'? It would be something like a human equivalent of fungibility, with perhaps some element of 'elasticity'. I'd want to see empirical work in this area.
Replies from: ChristianKl, Manfred↑ comment by ChristianKl · 2016-01-16T21:28:19.291Z · LW(p) · GW(p)
effective, altruistic career whatsoever because the sheer supply of people interested and capable of becoming doctors is so great (even after artificial restriction).
I'm not sure whether the "even after" makes sense in that sentence. There are a lot of interested and capable people applying to med school. If you get into med school, that means one of those people won't.
On the other hand, if you become a skilled programmer you don't take a job away from anyone. That's why 80,000 Hours recommends becoming a programmer at a startup rather than studying medicine.
↑ comment by Manfred · 2016-01-16T19:05:42.689Z · LW(p) · GW(p)
I'm sure someone else can answer this better, but it sounds like you're asking for "empirical work," but aren't willing to explain why you're unsatisfied with the empirical work that you can find by searching websites like GiveWell and 80000 Hours.
Replies from: None
comment by [deleted] · 2016-01-15T06:22:13.533Z · LW(p) · GW(p)
If you are analysing survey data about gamblers' attitudes to laws about casinos, that couldn't be specified in PICOT format, right?
comment by [deleted] · 2016-01-13T21:23:05.087Z · LW(p) · GW(p)
As long as weight loss is extremely easy (let's not kid ourselves, there are much harder things in the world) and obese people are a burden on my tax dollars, which could be spent on, say, foreign aid for people with less self-caused problems, I'm going to think about how to start fat shaming (unless the weight of evidence suggests it's counterproductive in getting them motivated to lose weight).
Replies from: gwern, Gunnar_Zarncke↑ comment by gwern · 2016-01-13T22:46:07.652Z · LW(p) · GW(p)
As long as weight loss is extremely easy (let's not kid ourselves, there are much harder things in the world)
If it's so easy, why are diets so infamously ineffective, why do existing social stigmas and norms fail to ensure slimness, why is obesity so heritable and genetically variable across countries ("Population genetic differentiation of height and body mass index across Europe"), why are people exercise-resistant, and why do the randomized experiments show much less weight loss than necessary for health ("Intentional Weight Loss and All-Cause Mortality: A Meta-Analysis of Randomized Clinical Trials" being the last I read, estimating long-term losses at 12 lbs, half what is necessary for Americans)?
↑ comment by Gunnar_Zarncke · 2016-01-14T21:45:01.213Z · LW(p) · GW(p)
Seligman gives a list that puts this in perspective.