Are the social sciences challenging because of fundamental difficulties or because of imposed ones?
post by ozziegooen · 2020-11-10
I'm not an expert in the social sciences. I've taken a few in-person and online courses and read several books, but I’m very much speculating here. I also haven’t done much investigation for this specific post. Take this post to be something like, "my quick thoughts of the week, based on existing world knowledge" rather than "the results of an extensive research effort." I would be very curious to get the take of people with more experience in these areas.
The social sciences, in particular psychology, sociology, and anthropology, are often considered relatively ineffective compared to technical fields like math, engineering, and computer science. Contemporary progress in the social sciences seems to be less impactful, and college graduates in these fields face more challenging job prospects.
One reason for this discrepancy could be that the social sciences are fundamentally less tractable. An argument would go something like, “computers and machines can be deeply understood, so we can make a lot of progress around them, but humans and groups of humans are really messy and near impossible to make useful models of.”
I think one interesting take is that the social sciences aren't fundamentally more difficult than technical fields, but rather that they operate under substantial limitations. Many of these limitations are intentional, and these might be the majority. By this I'm mainly referring to certain expectations of privacy and other ethical expectations in research, coupled with discomfort around producing social science insights that are "too powerful."
If this take is true, then it could change conversations around progress in the social sciences to focus on possibly uncomfortable trade-offs between research progress, experimentation ethics, and long-term risks.
This is important to me because I could easily imagine advances in the social sciences being much more net-positive than advances in technical fields, outside of AI. I'd like flying cars and better self-driving vehicles a great deal, but I'd like to live in a kind, coordinated, and intelligent world a whole lot more.
When I went to college (2008-2012), it was accepted as common wisdom that human decision making was dramatically more complicated than machine behavior. The math, science, and engineering classes used highly specified and deterministic models. The psychology, anthropology, and marketing courses used what seemed like highly sketchy heuristics, big conclusions drawn from narrow experiments, and subjective ethnographic interpretations. Our reductionist and predictable models of machines allowed for the engineering of technical systems, but our vague intuitions of humans didn’t allow us to do much to influence them.
Perhaps it's telling that engineers have largely declined to enter the domains of human and social factors; we don't yet really have a field of "cultural engineering" or "personality engineering," for instance. Arguably cyberneticians and technocrats made attempts in the twentieth century but fell out of favor.
Up until recently, I assumed that this discrepancy was due to fundamental difficulties around humans. Even the most complex software setups were rather straightforward compared to humans. After all, human brains are monumentally powerful compared to software systems, so they must be correspondingly challenging to deal with.
But recently I've been intrigued by a second hypothesis: that many aspects of the social sciences aren't fundamentally more difficult to understand than technical systems, but rather that progress is deeply bottlenecked by ethical dilemmas and potentially dangerous truths.
1. Political Agendas
Political agendas are probably the most obvious intentional challenge to the social sciences. Generally, no one gets upset about which conclusions nuclear physicists arrive at, but people complain on Twitter when new research is posted on sexual orientation. There's already a fair bit of discussion of a left-leaning bias in the social sciences, and it's something I hear many academics complain about. My impression is that this is a limitation, but that the issue is a lot more complicated than a simple "we just need more conservative scientists" answer. Conservatives on Twitter get very upset about things too, and having two sides complain about opposite things doesn't cancel the corresponding problems out.
So one challenge with agendas is that they preclude certain kinds of research. But I think a deeper challenge is that they change the incentives of researchers, pushing them to focus on providing evidence for existing assumptions rather than actively searching for big truths. Think tanks are known for this; for a certain amount of money you can generally get macroeconomic work that supports seemingly any side of an economic argument. My read is that many anthropologists and sociologists do their work as part of a project to properly appreciate the diversity of cultures and lifestyles. There's a fair amount of work on understanding oppression; typically this seems focused on defending the hypothesis that oppression has existed.
There's a lot of value in agenda-driven work, where agenda-driven work is defined as, "there's a small group of people who know something, and they need a lot of evidence to prove it to many more people." I partake in work like this myself; any writing to promote already-discussed research around the validity of forecasting fits this description. However, this work seems very different from work finding totally new insights. Science done for the sake of convincing people of known things can be looked at as essentially a highly respectable kind of marketing. Agenda-driven science uses the same tools as "innovation-driven" science, but the change in goals seems likely to produce correspondingly different outcomes.
2. Subject Privacy Concerns
I've worked a fair bit with web application architectures. They can be a total mess. Client layers on top of client APIs, followed by the entire HTTP system, load balancers, tangled spaghetti of microservices, several different databases. And compared to the big players (Google, Facebook, Uber), this was all nice and simple.
One of the key things that makes it work is introspectability. If something is confusing, you can typically either SSH into it and mess around, or try it out in a different environment. There are hundreds of organized sets of logs for all of the interactions in and out. There's an entire industry of companies that do nothing but help other tech companies set up sophisticated logging infrastructures. Splunk and Sumo Logic are both public, with a combined market cap of around $35 Billion.
Managing all the required complexity would be basically impossible without all of this.
Now, in human land, we don't have any of this, mostly because it would invade privacy. Psychological experiments typically consist of tiny snapshots of highly homogeneous clusters (college students, for example). Surveys can be given to more people, but the main ones are highly limited in scope and are often quite narrow. There's typically little room to "debug," which here would mean calling up particular participants and getting a lot more information from them.
What Facebook can do now is far more expansive and sophisticated than anything I know of in social science and survey design. They just have dramatically more and better data. However, they don't seem to have a particularly sophisticated team for generating academic insights from this data, and their attempts at actual experimentation haven't gone very well. My guess is that the story "Facebook hires a large team of psychologists to do investigation and testing on users" wouldn't be received nicely, even if they hired the absolute most prestigious and qualified psychologists.
As artificial intelligence improves, it becomes possible to infer important information from seemingly minor data. For example, it's possible to infer much of a person's Big Five personality traits from their Facebook profile, and there has been discussion of inferring sexuality from profile photos. Over time we could expect both that more data will be collected about people and that this data will go much further for analysis, because we can make superior inferences from it. So the Facebook of tomorrow won't just have more data; it might be able to infer a whole lot about each user.
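As a toy illustration of what this kind of trait inference looks like mechanically, here's a minimal logistic-regression sketch in plain Python. Everything here — the features, labels, and data — is invented for illustration; the studies alluded to above used far richer data and models.

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model by stochastic gradient descent."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b


# Hypothetical behavioral features per user: [party-page likes, reading-page likes]
xs = [[5, 0], [4, 1], [0, 5], [1, 4], [6, 1], [0, 6]]
ys = [1, 1, 0, 0, 1, 0]  # 1 = self-reported "extravert" (made-up labels)

w, b = train_logistic(xs, ys)
# Predicted probability that a new user with profile [5, 1] is an extravert
p_extravert = sigmoid(sum(wi * xi for wi, xi in zip(w, [5, 1])) + b)
```

The unsettling part isn't the model, which is decades old, but the feature side: as data gets richer, even a sketch like this starts recovering traits people never chose to disclose.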
I know that if I were in social science, I would expect to be able to discover dramatically more by working with the internal data of Facebook, the NSA, or the Chinese government, especially if I had a team of engineers to help prune and run advanced inference on the data.
This would be creepy by basically all current social standards. It could get very, very creepy.
There could be ways of accepting reasonable trade-offs somewhere. Perhaps we could have better systems of differential privacy, so scientists could get valuable insights from large data sets without exposing any personal information. Maybe select groups of people could purposely opt in to extended study, ideally being paid substantially for the privilege. Something like We Live in Public, but more ordinary and on a larger scale. We may want intensive monitoring and regulation of any groups handling this kind of information. Those doing the monitoring should probably themselves be the most monitored.
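To make the differential-privacy idea concrete, here's a minimal sketch of the classic Laplace mechanism: a count query gets noise scaled to 1/ε, so no single participant's presence changes the published answer by much. The records and query below are made up; real deployments need far more care, e.g. tracking a privacy budget across many queries.

```python
import math
import random


def dp_count(records, predicate, epsilon):
    """Count matching records, plus Laplace(1/epsilon) noise.

    A count query has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) via the inverse CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise


# Hypothetical survey records, made up for illustration.
records = [{"age": a} for a in [23, 31, 45, 52, 29, 38, 61, 27]]
noisy = dp_count(records, lambda r: r["age"] < 40, epsilon=1.0)
```

Smaller ε means more noise and stronger privacy; the trade-off the section gestures at is exactly the choice of ε, made legible.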
On this note, I'd mention that I could imagine the Chinese government being in a position to spearhead social science research in ways not at all accepted in the United States. Arguably their propaganda apparatus is already quite sophisticated. I'm not sure whether their improvements in propaganda have led to fundamental insights about human behavior in general, but I would expect things to go in that direction, especially if they made a significant effort.
I imagine that this section on "privacy" could really be reframed as "ethical procedures and limitations." Psychology has a long history of highly malicious experiments, and this has led to a heap of enforced policies and procedures. I imagine that the main restrictions, though, are still the ones that were almost too obvious to be listed as rules. Culture often produces more expansive and powerful rules than bureaucracy does (thinking about ideas from Critical Theory and Slavoj Žižek).
3. Power
Let's step back a bit. It's difficult to put a measure on progress in the social sciences, but I think one decent aggregate metric would be the ability to make predictions about individuals, organizations, and large societies. If we could predict them well, we could do huge amounts of good. We could recommend very specific therapeutic treatments to specific subpopulations and expect them to work. We could adjust the education system to promote compassion, creativity, and flourishing in ways that would have tangible benefits later on. We could quickly home in on the most effective interventions against global paranoia, jingoism, and racism.
But you can bet that if we got far better at "predicting human behavior", the proverbial "bad guys" would be paying close attention.
In Spent, Geoffrey Miller wrote about attempts to investigate and promote the use of evolutionary psychology to understand modern decision making. Originally he and a few others tried to discuss the ideas with academic economists. They quickly realized that the people paying the closest attention were really the people in marketing research.
Research into cognitive biases and the power of nudges was turned into "nudge units" in the UK and US, but I'm sure it was more frequently used by growth hackers. I'm not quite sure how advancements in neuroscience over the last few years have so far helped me or my circle, but I do know they have been studied for neuromarketing.
So I think that if we're nervous about physical technological advances being used for destructive ends (environmental degradation and warfare come to mind), we should be doubly so about social ones.
Militarily, it's interesting that there's public support for bomber planes and "enhanced interrogation," but information warfare and propaganda get a bad rap. There seems to be a deeper stigma against propaganda than there is against nuclear weapons. Relatedly, the totalitarian environment of Nineteen Eighty-Four focused on social technologies (an engineered language and cultural control) rather than the advanced use of machines.
Sigmund Freud was the uncle of Edward Bernays, an Austrian-American pioneer of "public relations" who wrote the books Crystallizing Public Opinion (1923) and Propaganda (1928). Edward extended ideas from psychology in his work. In general, the public seems to have had a positive association with propaganda at that time, as a tool for good. That association changed with the visible and evident use of propaganda during WWII shortly afterwards. This seems to have been about as big a reversal as the one in the public stance on eugenics.
If the standard critiques of psychological study are that it's not effective or useful, the critique of propaganda would be anything but. It's too powerful. Perhaps it can be used for massive amounts of good, but it can clearly be and historically was used for catastrophic evil.
Unintentional Limitations
There are, of course, many unintentional problems with the social sciences too. There's the whole replication crisis, for one. I imagine that better tooling could make a huge difference; Positly comes to mind. The social sciences could also use more of the basics: money, encouragement, and talent. I'm really not sure how these compare to the above challenges.
What are the Social Sciences supposed to do?
I think this all raises a question. What are the social sciences really for? More specifically, what outputs are people in and around these fields hoping to accomplish in the next 10 to 100 years? What would radical success look like? It's assumed that fundamental breakthroughs in chemistry will lead to improvements in consumer materials and that advances in engineering will lead to the once-promised flying cars. With psychology, anthropology, and sociology, I'm really not sure what successes would be both highly impactful and also socially tolerated.
If the path to impact is "improve the abilities of therapists and human resources professionals", I imagine gains will be limited. I think that these professions are important, but don't see changes in them improving the world by over 20% in the next 20 to 50 years. If the path is something like, "interesting knowledge that will be transmitted by books and educational materials directly to its users", then the outputs would need to be highly condensed. Most people don't have much extra time to study material directly.
If the path is, "help prove out cases against racial and gender inequality" (and similar), I could see this working to an extent, but this would seem like a fairly limited agenda. Agenda driven scientific work is often too scientific to be convincing to most people (who prefer short opinion pieces and fiction), and too narrow to be groundbreaking. It serves a useful function, but generally not a function of radical scientific progress.
There are some research direction proposals that I could see being highly impactful, but these correlate strongly with being dangerous. This is especially the case because many of them may require coordinated action, and it's not clear which modern authorities are trusted enough to carry out large coordinated action.
Possible research directions:
- Systems to predict and signal which people and activities will lead to good or bad things.
- Support for aligning human intuitions with arbitrary traits (e.g., making children particularly patriotic or empathetic).
- Cultural engineering to optimize cultures for arbitrary characteristics.
- Sophisticated chat bots that would outperform current humans at delivering psychological help, friendship, and possibly romance.
- Systems that would make near-optimal recommendations on what humans should do in nearly all aspects of their lives.
Here's another way of looking at it. Instead of asking what people in the fields are expecting, ask what regular people think will happen. What do most people expect of the social sciences in 30, 60, 200 years? I'd guess that most people would assume that psychology will make minor advances for the future of therapy, and that maybe sociology and anthropology will provide more evidence in favor of contemporary liberal values. Maybe they'll make a bunch of interesting National Geographic articles too.
If you ask people what they expect from engineering I imagine they’d start chatting about exciting science fiction scenarios. They might not know what a transistor is, but they could understand that cars could drive themselves and maybe fly too. This seems like a symptom of the focus on technology in science fiction, though that could be a symptom of more fundamental issues. Perhaps one should look to utopian literature instead. I’m not well read on utopian fiction, but generally from what I know there is a focus on “groups that seem well run” as opposed to “groups that are well run due to sophisticated advances in the social sciences.”
Predictions to anchor my views
Given the generality of the discussion above, it’s hard to make incredibly precise predictions that are meaningful. I’m mainly aiming for this piece to suggest a perspective rather than convince people of it.
Here are some somewhat specific estimates I would make:
- If all agenda-driven social science incentives were removed, it would increase "fundamental innovations" by 5% to 40% (90% credence interval), but lose a lot of other value.
- If privacy concerns were totally removed, and social scientists could easily partner with governments and companies (a big if!), it would increase “fundamental innovations” by 20% to 1,000%.
- If power concerns were totally removed, it would increase “fundamental innovations” by 5% to 10,000%.
- If $1 billion were effectively spent on great software tooling for social scientists (think how it would be spent by an aligned tech company, not that this will happen), in the current climate, it would increase "fundamental innovations" by 2% to 80%. Note that I'd expect the government to be poor at spending here, so if it were to attempt this, I would expect it to cost $100 billion for the equivalent impact.
- If "fundamental innovations" in the social sciences were improved by 1000% over the next 50 years, it would slightly increase the chances of global totalitarianism, but it has a chance (>30%) of otherwise being dramatically positive.
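For readers who want to play with numbers like these, one rough way to operationalize a 90% credence interval is to fit a lognormal distribution whose 5th and 95th percentiles match the stated bounds, then sample from it. A minimal sketch, where reading "a 5% to 40% increase" as a multiplicative factor of 1.05x to 1.40x is my own modeling assumption, not something stated above:

```python
import math
import random


def lognormal_from_ci(low, high, z=1.645):
    """Lognormal parameters whose 5th/95th percentiles equal low/high.

    z = 1.645 is the standard-normal 95th percentile, so [low, high]
    becomes a 90% credence interval on the multiplicative factor.
    """
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z)
    return mu, sigma


# Assumed reading: "5% to 40% increase" -> factor between 1.05x and 1.40x.
mu, sigma = lognormal_from_ci(1.05, 1.40)

random.seed(0)  # reproducible sampling
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
median = sorted(samples)[len(samples) // 2]  # sits near exp(mu)
```

Samples like these can then be combined across scenarios, which is more informative than eyeballing the intervals one at a time.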
Apologies for the meandering path through this article. I'm attempting to make broad, sweeping hypotheses about huge fields that I haven't ever worked in; this is exactly the kind of analysis I'm typically wary of.
At this point I'm curious to better understand whether there are positive trajectories for the social sciences that are both highly impactful and acceptable both to the key decision makers and to society. I'm not sure there really are.
Arguably the first challenge is not to make the social sciences more effective, but to help clear up confusion over what they should be trying to accomplish in the first place. Perhaps the most exciting work is too hazardous to attempt. Work further outlining the costs and benefits seems fairly tractable and important to me.
Many thanks to Elizabeth Van Nostrand, Sofia Davis-Fogel, Daniel Eth, and Nuño Sempere for comments on this post. Some of them pointed out some severe challenges that I haven't totally addressed, so any faults are really my own.
I realize that these fields are complicated collections of individuals and incentives that are probably optimizing for a large set of goals, which likely include some of the ones mentioned here. I'm not suggesting there should be one universal goal, but I am thinking that I and many readers would be interested in viewing the social sciences through a consequentialist lens.