Emotional microscope

post by pchvykov · 2021-09-20T21:37:30.034Z · LW · GW · 9 comments

Seeing is believing. Before we could accept the germ theory of disease, we had to invent the microscope to see the culprits of our ailments directly. Today we live in an age that largely discounts the role emotions play in our social structures. Even as the "rational agent" model of human behavior is being called into question across the social sciences, we are far from a fundamental paradigm shift that would break with this view. As with germ theory, such a transition may require the invention of new tools that allow us to directly see and track our emotions.

In particular, while the presence of bacteria is seen as objective "external" truth, the subjective "internal" nature of emotions often makes us treat them as "less real." We never tell someone to "stop being sick" – that is not something their free will can simply decide. Instead we give them a chance to sleep, bring them soup, or offer antibiotics. Yet we often say "don't be sad," or “stop being angry” and even ascribe blame to it – somehow assuming that they are free to change this just by willing. Remember the last time you tried to stop feeling angry just by deciding to? Sure, it isn’t fundamentally impossible – much like psychosomatic healing of some diseases isn't impossible if you are a Tibetan yogi. It’s more that this isn’t a skill we can be expected to know and be good at without ever having received proper training. But to start learning to control our emotions, we must first learn to see and acknowledge them – both as individuals and as a society.

The truth is that seeing others' emotions makes us uncomfortable. If someone honestly tells me that I hurt their feelings, that in itself may hurt my feelings – I might feel unwelcome, insecure, scared, and/or angry. If I don’t like feeling that way, I may try to invent some social norms of behavior that will guarantee my emotional safety – a project doomed from the outset. But there is another way: rather than avoiding or denying the situation through elaborate safety protocols and judgement, I can keep playing this game and accept, or even voice, my reaction to their reaction. A hall of mirrors: no one is to blame, no one needs to be disciplined, and no one is whining – we are merely observing and sharing what is real anyway, allowing for interpersonal understanding and compassion. This takes us down a rabbit hole into a vast and beautiful wonderland of emotional truth – a world that otherwise remains invisible, even as we live right in it and are deeply affected by it every day.

Such invisible worlds, while they may sound like magic woo-woo, are actually very familiar to us. If someone sneezes on me in the subway, my knowledge of bacterial infections imbues the event with meaning, even though nothing visibly substantive has happened. The UV in noon sunlight can burn my skin, even as I enjoy its treacherous warmth. A colleague quietly harboring resentment toward me can ruin my career even as I remain entirely oblivious to the cause. The technical term is “hidden variables” – things we do not directly observe, but that shape our reality nonetheless – though "invisible worlds" is more poetic.

Truly believing in and appreciating the effects of such hidden variables has been made enormously easier by technology that allows for their direct observation. For a long time we attributed maladies to invisible curses or spirits, until the microscope gave us a way to see and understand these beings for what they really were – microorganisms co-inhabiting our world. An infrared camera can show us a person’s body temperature, and thus tell us whether they have a fever, are getting burned in the sun, or are freezing from poor blood circulation. Once we have tools to clearly see things like infection or temperature, we start treating them as an objective part of our reality – and technologies like antibiotics and air conditioning soon follow.

So then, can we cure misery? Following these same steps, the first, and perhaps hardest, step is to acknowledge our feelings as the hidden variables that play a formative role in our subjective world. And while spiritual and psychological teachings have been saying this for centuries, it is perhaps only now that we have the technology to develop an “emotional microscope” that brings emotions into undeniable objective reality once and for all. With such technology, we could no longer lie to others, or to ourselves, about what we really care about and want in life, who we love, what job we enjoy doing, and what gives us the subtle gnawing feeling of wrongness we’d rather not see. Morality and happiness could perhaps become inevitable.

And as with any transformative technology, it could all go horribly wrong, making it possible to persecute people not only for their actions but also for their emotions. Perhaps the key to avoiding this is to point this “microscope” at ourselves and harmonize our own lives well before we point it at others. Or perhaps it’s much more complex than that – but either way, facing tough and dangerous challenges has been, and likely will remain, humanity’s chosen path forward.

Do you think such technology would be as transformative as I suggest? And would it pose more "Big Brother is watching"-type risk than benefit?

9 comments


comment by jimmy · 2021-09-21T18:22:19.269Z · LW(p) · GW(p)

This kind of thing isn't completely unprecedented.

The advent of audio and especially video recording has made it easier to credibly convey information about people's emotional states. Instead of "he said, she said," you have a video showing the exact words used, the exact tone of voice, and the exact facial expressions. Polygraphs have given us the ability to move beyond subjective interpretations and get objective numbers for levels of arousal. And this does change things, somewhat.

But...

When was the last time you were hooked up to a polygraph? They're inconvenient, for one, but also just not that useful. Level of arousal becomes common knowledge, but you could already see that for yourself, and the reasons behind it still aren't common knowledge. Analysis is the hard part, and polygraphs don't remove enough subjectivity to be a slam-dunk game changer. They're not even always admissible in court.

Video is much easier and more ubiquitous, but that hasn't been a hindrance to the culture war, as two people can still see the same video and interpret it very differently. And even video isn't everywhere it could be, since people like their privacy.

And no fancy technology is needed in the first place. Emotions aren't exactly impossible to notice and introspect honestly on. Neither are they impossible to convey quite credibly. Yet few people are as open as they could be, either with themselves or with others.

The incentives to spin things will always exist, as will the incentives to maintain space in which to do so. Technology makes it such that if you don't want to be accountable, you have to choose your words to be ambiguous when recorded. If someone integrates automated micro-expression analysis into all video, you can expect people to be more ambiguous still, and to move towards poker faces.

It's not that technologies won't change things significantly, it's that the changes will be on the margin. You won't suddenly start seeing people persecuted for their emotions, because you're already seeing that. People already get persecuted for their emotions (look at how people respond to video evidence of controversial events), so all that can change is that maybe it happens somewhat more often or somewhat more accurately.

As far as "Does it pose more risk than benefit", it depends on how it's used. Cameras are clearly great on the bodies and guns of police officers. They're great options to have when you want to prove that nothing nefarious happens behind the closed doors of your office, or that you're not at fault in any motor vehicle accident. They're less good as always-recording fixtures of bedrooms or when discussing trade secrets within a company -- especially when broadcast publicly.

I would expect further technology to be helpful to the extent that it allows for the option of more credible sharing of information, and bad to the extent that it impinges on the ability to withhold information in the cases where people would want to say "no thanks" to that option -- since the latter just pushes obfuscation further back and impairs clear thinking internally. Whether it will come to be used in a way that allows thoughts/emotions to be developed in private and credibly shared when/to the extent ready, or whether it'll be used in Orwellian fashion to stop valuable insights before they start, is more of a political question and I don't have any predictions there yet.
 

Replies from: pchvykov
comment by pchvykov · 2021-09-21T20:08:44.475Z · LW(p) · GW(p)

Thanks for such a thoughtful reply - I think I'm really on-board with most of what you're saying. 

I agree that analysis is the hard part of this tech - and I'm hoping that doing it well is just now becoming possible with AI; check out https://www.chipbrain.com/ for example.

Another point I think is important: you say "Emotions aren't exactly impossible to notice and introspect honestly on." Having been doing some emotional-intelligence practice for the last few years, I'm very aware of how difficult it is to honestly introspect on my own emotions. It's sort of like trying to gauge my own attractiveness in photos - really tough to be objective! And I think this is one place where an AI could really help (they're building one for attractiveness now too, actually).

I see your point that the impact will likely be marginal compared to what we already have now - and I'm wondering if there is some way we could imagine applying such technology to have a revolutionary impact without falling into Orwellian dystopia. Something about creating inevitable self-awareness, emotion-based success metrics, or conscious governance. 

Any ideas how this could be used to save the world? Or do you think there isn't any real edge it could give us?

Replies from: jimmy, ChristianKl
comment by jimmy · 2021-09-27T19:10:21.274Z · LW(p) · GW(p)

I think there's a temptation to see AI as an unquestionable oracle of truth. If you can get objective and unquestionable answers to these kinds of questions then I do think it would be a revolutionary technology in an immediate and qualitatively distinct way. If you can just pull out an app that looks at both people and says "This guy on my right is letting his emotions get to him, and the guy on my left is being perfectly calm and rational", and both people accept it because not accepting it is unthinkable, that would change a lot.

In practice, I expect a lot of "Fuck your app". I expect a good amount of it to be justified too, both by people who are legitimately smarter and wiser than the app and also by people who are just unwilling to accept when they're probably in the wrong. And that's true even if no one is thumbing the scales. Given their admission to thumbing the scale on search results, would you trust Google's AI to arbitrate these things? Or even something as politically important as "Who is attractive"?


In my experience, a lot of the difficulty in introspecting on our own emotions isn't in the "seeing" part alone, but rather in the "knowing what to do with them" part, and a lot of the difficult "seeing" isn't so much about being unable to detect things as about not really looking for things that we're not ready to handle. Mostly not "I'm looking, and I'm ready to accept any answer, but it's all just so dark and fuzzy I just can't make anything out". Mostly "Well, I can't be angry because that'd make me a bad person, so what could it possibly be! I have no idea!" -- only often much more subtle than that. As a result, simply telling people what they're feeling doesn't do a whole lot. If someone tells you that you're angry when you can't see it, you might say "Nuh uh" or you might say "Okay, I guess I'm angry", but neither one gives you a path to explaining your grievances or becoming less angry and less likely to punch a wall. This makes me really skeptical of the idea that simple AI tools like "micro-expression analyzers" are going to make large discontinuous changes.

If you can figure out what someone is responding to, why they're responding to it in that way, what other options they have considered and rejected, what other options they have but haven't seen, etc., then you can do a whole lot. However, then you start to require a lot more context and a full-fledged theory of mind, and that's a much deeper problem. To do it well you not only have to have a good model of human minds (which makes the problem AI-complete, and no more objective than humans) but also a model of the goal humans should be aiming for, which is a good deal harder still.

It's possible to throw AI at problems like chess and have it outperform humans without understanding chess strategy yourself, but that has a lot to do with the fact that the problem is arbitrarily narrowed to something with a clear win condition. The moment you try to do something like "improving social interaction" you have to know what you're even aiming for. And the moment you make emotions the goals, you Goodhart on them. People Goodhart these already – think of people who have learned how to be really nice, avoid near-term conflict, and get people to like them, but who fail to learn the spine needed to avoid longer-term conflict, let alone make things good. Competent AI is just going to Goodhart that much more effectively, and that isn't a good thing.

As I see it, the way for AI to help isn't to blindly throw it at ill-defined problems in hopes that it'll figure out wisdom before we do, but rather to help with scaling up wisdom we already have or have been able to develop. For example, printing presses allow people to share insights further and with more fidelity than teaching people who teach people who teach people. Things like computer-administered CBT allow even more scalability with even more fidelity, but at the cost of flexibility and the ability to interact with context. I don't see any *simple* answers for how better technology can be used to help, but I do think there are answers to be found that could be quite helpful. The obvious example that follows from the above is that computer-administered therapy would probably become more effective if it could recognize emotions rather than relying on user reports.

I'm also not saying that an argument mediator app which points out unacknowledged emotions couldn't be extremely helpful. Any tool that makes it easier for people to convey their emotional state credibly can make it easier to cooperate, and easier cooperation is good. It's just that it's going to come with most of the same problems as human mediators (plus the problem of people framing it as 'objective'), just cheaper and more scalable. Finding out exactly how to scale things as AI gets better at performing more and more of the tasks that humans do is part of it, but figuring out how to solve the bigger problems with actual humans is a harder problem in my estimation, and I don't see any way for AI to help that part in any direct sort of way.

Basically, I think if you want to use AI emotion detectors to make a large cultural change, you need to work very hard to keep them accurate, to keep their limitations in mind, and to figure out with other humans how to put them to use in a way that gets results good enough that other people will want to copy them. And then figure out how to scale the understanding of how to use these tools.

Replies from: pchvykov
comment by pchvykov · 2021-10-14T23:06:19.691Z · LW(p) · GW(p)

Wow, wonderful analysis! I'm on-board mostly - except maybe I'd leave some room for doubt of some claims you're making. 

And your last paragraph seems to suggest that a "sufficiently good and developed" algorithm could produce large cultural change? 
Also, you say "as human mediators (plus the problem of people framing it as 'objective'), just cheaper and more scalable" - to me that would quite a huge win! And I sort of thought that "people framing it as objective" is a good thing - why do you think it's a problem? 
I could even go as far as saying that even if it was totally inaccurate, but unbiased - like a coin-flip - and if people trusted it as objectively true, that would already help a lot! Unbiased = no advantage to either side. Trusted = no debate about who's right. Random = no way to game it.

Replies from: jimmy
comment by jimmy · 2021-10-15T19:22:11.308Z · LW(p) · GW(p)

I'm on-board mostly - except maybe I'd leave some room for doubt of some claims you're making. 

I might agree with the doubt, or I might be able to justify the confidence better.

to me that would be quite a huge win! 

I agree! Just not easy :P

And I sort of thought that "people framing it as objective" is a good thing - why do you think it's a problem? 
I could even go as far as saying that even if it was totally inaccurate, but unbiased - like a coin-flip - and if people trusted it as objectively true, that would already help a lot! Unbiased = no advantage to either side. Trusted = no debate about who's right. Random = no way to game it.

Because it wouldn't be objective or trustworthy. Or at least, it wouldn't automatically be objective and trustworthy, and falsely trusting a thing as objective can be worse than not trusting it at all.

A real world example is what happened when these people put Obama's face into a "depixelating" AI.

If you have a human witness describe a face to a human sketch artist, both the witness and the artist may have their own motivated beliefs and dishonest intentions which can come in and screw things up. The good thing, though, is that they're limited to the realism of a sketch. The result is necessarily going to come out with a degree of uncertainty, because it's not a full-resolution depiction of an actual human face -- just a sketch of what the person might kinda look like. Even if you take it at face value, the result is "Yeah, that could be him".

AI can give an extremely clear depiction of exactly what he looks like, and be way the fuck off -- far outside the implicit confidence interval that comes with expressing the outcome as "one exact face" rather than "a blurry sketch" or "a portfolio of most likely faces". If you take AI at face value here, you lose. Obama and that imaginary white guy are clearly different people. 

In addition to just being overconfident, the errors are not "random" in the sense that they are both highly correlated with each other and predictable. It's not just Obama that the AI imagines as a white guy, and anyone who can guess that they fed it a predominantly white data set can anticipate this error before even noticing that the errors tend to be "biased" in that direction. If 90% of your dataset is white and you're bad at inferring race, then the most accurate thing to do (if you can't say "I have no idea, man") is to guess "White!" every time -- so "eliminating bias" in this case isn't going to make the answers any more accurate, but you still can't just say "Hey, it has no hate in its heart so it can't be racially biased!". And even if the AI itself doesn't have the capacity to bring in dishonesty, the designers still do. If they wanted that result, they could choose what data to feed it such that it forms the inferences they want it to form. 
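
To put a number on that, here's a toy calculation (illustrative figures only, using the hypothetical 90/10 split above, not data from the actual experiment): when the model has no real information, always guessing the majority class beats an "unbiased" guesser that merely matches the base rate.

```python
# Toy calculation: accuracy of two uninformed guessing strategies on a
# hypothetical test distribution that is 90% white (the 90/10 split above).
p_white = 0.9

# Strategy 1: always guess "white" -> correct whenever the face really is white.
acc_always_majority = p_white                        # 0.90

# Strategy 2: guess "white" 90% of the time and "not white" 10% of the time,
# at random -- "unbiased" in the sense of matching the base rate, but no more informed.
acc_base_rate_random = p_white**2 + (1 - p_white)**2  # 0.81 + 0.01 = 0.82

print(acc_always_majority, acc_base_rate_random)
```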

This particular AI at this particular job is under-performing humans while giving far more confident answers, and with bias that humans can readily identify, which is sorta a "proof of concept" that distrust of AI can be reasonable. As the systems get more sophisticated it will get more difficult to spot the biases and their causes, but that doesn't mean they just go away. Neither does it mean that one can't have a pretty good idea of what the biases are -- just that it starts to become a "he said, she said" thing, where one of the parties is an AI. 

At the end of the day, you still have to solve the hard problem of either a) communicating the insights such that you don't have to trust and can verify for yourself, or b) demonstrating sufficient credibility that people will actually trust you. This is the same problem that humans face, with the exception again that AI is more scalable. If you solve the problem once, you're then faced with the easier problem of credibly demonstrating that the code hasn't changed when things scale up.

comment by ChristianKl · 2021-09-23T11:28:59.203Z · LW(p) · GW(p)

The fact that something is difficult for you doesn't mean that it's difficult in general. CFAR, for example, teaches Gendlin's Focusing. If you do Focusing and name the handle, you either have a felt shift or you don't. It's clear, and there's not much room for self-deception.

Replies from: pchvykov
comment by pchvykov · 2021-09-24T03:26:51.699Z · LW(p) · GW(p)

Cool that you find this method so powerful! To me it's a question of scaling: do you think personal mindfulness practices like Gendlin's Focusing are as easy to scale to a population as a gadget that tells you some truth about yourself? I guess each of these faces very different challenges - but so far experience seems to show that we're better at building fancy tech than we are at learning to change ourselves.
What do you think is the most effective way to create such a culture shift?

comment by ChristianKl · 2021-09-23T11:25:00.149Z · LW(p) · GW(p)

Sure, it isn’t fundamentally impossible – much like psychosomatic healing of some diseases isn't impossible if you are a Tibetan yogi.

It's nontrivial, but it doesn't take being a Tibetan yogi. Grinberg practitioners do that pretty reliably in a Western setting.

Yet we often say "don't be sad," or “stop being angry” and even ascribe blame to it – somehow assuming that they are free to change this just by willing.

That's not a sentiment I hear often. In the social context where I'm operating, people don't often say those things. The last time I said something similar, it was to a person who had a strong meditation background, and it got her out of a depressive phase she had been in for the prior two months.

With such technology, we could no longer lie to others, or to ourselves, about what we really care about and want in life, who we love, what job we enjoy doing, and what gives us the subtle gnawing feeling of wrongness we’d rather not see.

Those things are much more complex than just emotions. If you want to stop lying to yourself about those things, belief reporting from Leverage Research is more useful than such a tool would be.

I'm not up to date with the current technological capabilities, but a decade ago we had objective emotional measurement via sensors that was good enough to be useful for advertisers. High-resolution heart rate + skin conductance + machine learning goes far.
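
For illustration, a minimal sketch of what such a sensor-plus-ML pipeline might look like (assuming numpy and scikit-learn; the features, labels, and data below are synthetic stand-ins, not any real product's setup):

```python
# Minimal sketch: classifying "calm" vs "aroused" time windows from physiological
# features with off-the-shelf ML. Everything here is synthetic stand-in data;
# a real pipeline would use windowed heart-rate / heart-rate-variability and
# skin-conductance features, with labels from self-report or annotation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features per window:
# [mean heart rate, heart-rate variability, mean skin conductance, conductance peaks]
X = rng.normal(size=(n, 4))
# Hypothetical labels (0 = calm, 1 = aroused), weakly driven by two of the features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```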

I personally have emotional perception of base emotions in other people that does not depend on seeing someone's face. It doesn't work when the emotion is dissociated, but I don't think there's an easy way to conceal it.

Replies from: jimmy
comment by jimmy · 2021-09-27T19:13:12.880Z · LW(p) · GW(p)

The last time I said something similar [to "don't be sad"], it was to a person who had a strong meditation background, and it got her out of a depressive phase she had been in for the prior two months

 

That's awesome