A social science without sacred values

post by ChristianKl · 2017-05-16T12:26:38.737Z · score: 1 (2 votes) · LW · GW · Legacy · 2 comments

This is a link post for https://www.researchgate.net/profile/Bo_Winegard/publication/282819379_A_social_science_without_sacred_values/links/561da3cd08ae50795afd823e.pdf


comment by ethereally · 2017-05-27T22:13:14.747Z · score: 0 (0 votes) · LW · GW

As a social psychology student, I found this an interesting read. It has long baffled me why political issues come up so often in educational books. I've noticed, for example, a tendency to link the political right to authoritarian personalities (with little said about leftists' personalities). Even though it has seemed odd, I've mostly brushed it off because politics doesn't really interest me.

Some comments on the text:

They are paranoid optimists because their brains were designed to manage inevitable errors in the least costly (and most advantageous) way possible. In the case of environmental threats, error management generally leads to paranoia because it is often less costly to mistake an innocuous stimuli for a threat than to mistake a threat for an innocuous stimuli. This is probably best illustrated by considering a smoke detector (Nesse, 2001).

I do not think this illustration was on point. Why demonstrate something they claim is biological ("because their brains...") with a man-made object (a smoke detector)? It seems very likely that if there is such a paranoid tendency in human brains, people will build that same structure into the smoke detector as well. However, the design of the smoke detector is in people's own hands; it may only tell us how people would ideally want their environment to work if they had total control over it. In everyday life we don't have that much control over our environment, so I think the biological tendency would be better demonstrated by something that isn't so directly controlled by people themselves. For example, how would this paranoid tendency show up in a "bare wilderness" type of scenario?

For comparison, people probably enjoy the accuracy of a pocket calculator. But the calculator is more a crystallization of something people strive for, not something many people could actually carry out in their biological bodies in everyday life.
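Setting the smoke-detector dispute aside, the cost asymmetry the quoted passage relies on is easy to make concrete. A toy sketch (my own illustration, not from the paper; the cost numbers are made up) of why an expected-cost-minimizing detector ends up "paranoid" when misses cost far more than false alarms:

```python
# Error-management cost asymmetry, minimal sketch.
# Assumed costs (hypothetical): a false alarm is mildly annoying,
# while missing a real threat is catastrophic.
COST_FALSE_ALARM = 1    # treating an innocuous stimulus as a threat
COST_MISS = 100         # treating a real threat as innocuous

def expected_cost(p_threat: float, alarm: bool) -> float:
    """Expected cost of alarming (or not), given the probability
    that a threat is actually present."""
    if alarm:
        return (1 - p_threat) * COST_FALSE_ALARM  # risk: false alarm
    return p_threat * COST_MISS                   # risk: miss

def should_alarm(p_threat: float) -> bool:
    """Alarm whenever alarming is the cheaper bet in expectation."""
    return expected_cost(p_threat, True) < expected_cost(p_threat, False)

# Even at only a 5% chance of a real threat, alarming is cheaper:
print(should_alarm(0.05))   # → True
```

With these costs the break-even point is roughly a 1% threat probability, so the optimal detector fires on almost any suspicious stimulus — "paranoia" falls out of the arithmetic rather than out of any defect in the detector.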

The first is that PEMs honestly believe that the researchers who propound such theories or data are morally reprehensible people. This might sound extreme -- do scientists really believe that other scientists are bad people simply for espousing unpopular theories? -- but it is completely rational if our model is correct. [...] (Also, many PEMs may believe that the hypotheses that are forwarded are more extreme than they really are; in other words, the threat such hypotheses present is often grossly exaggerated; see table 1).

I'm only a student, but I can confirm that students are sometimes encouraged to read texts as if they were more extreme than they are, at least when interpreted strictly logically. This is a made-up example from memory, but it's considered perfectly fine to read the sentence "Sally is intelligent" as implying that people who are not Sally are non-intelligent. That is a logical fallacy (denying the antecedent), but logic isn't seen as applying to the social realm. And they have a point: if social scientists study the social realm, it's likely that not many people apply logic to such sentences, and most would read it as implying that non-Sallys are unintelligent. If that is how people in the social realm actually behave, then for a social scientist the logically incorrect reading is, in a sense, the more accurate one. However, such fallacies shouldn't be endorsed unless we are specifically studying the social realm, and research hypotheses in particular shouldn't be read that way.
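The invalidity of that reading can be checked mechanically. A tiny brute-force sketch (my own illustration; `is_sally` and `intelligent` are made-up variable names for the example above) that searches for a counterexample to "Sally is intelligent, X is not Sally, therefore X is not intelligent":

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p -> q."""
    return (not p) or q

# Premise 1: "if x is Sally, x is intelligent".
# Premise 2: "x is not Sally".
# Alleged conclusion: "x is not intelligent" (denying the antecedent).
# A counterexample is any case where both premises hold
# yet the conclusion fails:
counterexamples = [
    (is_sally, intelligent)
    for is_sally, intelligent in product([True, False], repeat=2)
    if implies(is_sally, intelligent)   # premise 1 holds
    and not is_sally                    # premise 2 holds
    and intelligent                     # ...yet the conclusion fails
]
print(counterexamples)  # → [(False, True)]
```

The single counterexample is exactly the case the fallacious reading forbids: someone who is not Sally and is intelligent anyway.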

I must say that in my field of study it's a popular idea that "we create the shared world in our interactions". I see a possible connection between that idea and the phenomenon described in the article. Even if the idea is somewhat true (e.g. kids can learn aggressive behavior from their parents), taken as a general rule it carries the danger that people will be afraid to speak their minds, because they fear they'll somehow end up "creating the world" with their words. Perhaps the "unwanted" ideas are then read not as scientific inquiry but as an attempt to "create a certain kind of world". That is worrying, because it verges on reducing a scientific discipline to normative ethics or politics. Not to mention that many phenomena would be left unstudied. And the idea that our words directly "create worlds" is unwieldy to begin with: many important things would go unsaid if we always had to tailor our speech to fit some moral schema. If we followed that "rule" in every interaction of ours, not much could be said at all.

Overall, I find it somewhat disappointing that political issues seem to be such a big thing in academia. It's good to be aware of the political influences at play in science, but it should be no bigger a thing than that (unless political influences are themselves the object of study, for example). I agree with the last paragraph of the article: as scientists (and as science-minded people) we should be happy to explore different ideas, even those opposed to our "own favorite ideas", not avoid them. We can't be afraid of the world, and exposing oneself to different ideas shouldn't be seen as "co-creating immoral worlds".
