Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, August 21, 2019

Tech Is Already Reading Your Emotions - But Do Algorithms Get It Right?

Jessica Baron
Forbes.com
Originally published July 18, 2019

From measuring shopper satisfaction to detecting signs of depression, companies are employing emotion-sensing facial recognition technology that is based on flawed science, according to a new study.

If the idea of having your face recorded and then analyzed for mood so that someone can intervene in your life sounds creepy, that’s because it is. But that hasn’t stopped companies like Walmart from promising to implement the technology to improve customer satisfaction, despite numerous challenges from ethicists and consumer advocates.

At the end of the day, this flavor of facial recognition software is probably all about making you safer and happier – it wants to let you know if you’re angry or depressed so you can calm down or get help; it wants to see what kind of mood you’re in when you shop so it can help stores keep you as a customer; it wants to measure your mood while driving, playing video games, or just browsing the Internet to see what goods and services you might like to buy to improve your life.

The problem is – well, aside from the obvious privacy issues and general creep factor – that computers aren’t really that good at judging our moods based on the information they get from facial recognition technology. To top it off, this technology exhibits the same kind of racial bias that other AI programs do, assigning more negative emotions to black faces, for example. That’s probably because it’s based on flawed science.

The info is here.