Information commissioner warns firms over ‘emotional analysis’ technologies



The information commissioner has warned companies to steer clear of “emotional analysis” technologies or face fines, because of the “pseudoscientific” nature of the field.

It’s the first time the regulator has issued a blanket warning on the ineffectiveness of a new technology, said Stephen Bonner, the deputy commissioner, but one that is justified by the harm that could be caused if companies made meaningful decisions based on meaningless data.

“There’s a lot of investment and engagement around biometric attempts to detect emotion,” he said. Such technologies attempt to infer information about mental states using data such as the shininess of someone’s skin, or fleeting “micro expressions” on their faces, as reported in a recent article in The Guardian.

It should be clear what the Information Commissioner is referring to: (a) a specific type of emotional AI; (b) used for a specific use case; and (c) deployed without proper procedures.

1. Nomenclature and marketing challenge: unimodal vs multimodal biometric data

As explained several times in our blog posts and during our lectures, the main challenge is the widespread use of unimodal affective computing, primarily powered by unsupervised machine learning models.


Conclusion: while unimodal and bimodal affective computing have their limitations and challenges, multimodal affective computing is significantly more accurate and backed by substantial research.
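To make the distinction concrete, below is a minimal sketch of confidence-weighted late fusion, one common way scores from several biometric modalities can be combined into a single estimate. The modality names, scores, and weights are illustrative assumptions, not the output of any particular vendor's model.

```python
# Minimal sketch of late-fusion multimodal affective computing.
# All modality names, scores, and confidences are hypothetical
# placeholders, not outputs of any specific model or vendor.

from dataclasses import dataclass

@dataclass
class ModalityReading:
    name: str          # e.g. "face", "voice", "skin_conductance"
    valence: float     # -1.0 (negative) .. 1.0 (positive)
    confidence: float  # 0.0 .. 1.0, the model's own certainty

def fuse(readings: list[ModalityReading]) -> float:
    """Confidence-weighted late fusion of per-modality valence scores."""
    total_weight = sum(r.confidence for r in readings)
    if total_weight == 0:
        raise ValueError("no usable modality readings")
    return sum(r.valence * r.confidence for r in readings) / total_weight

# A single unimodal reading can be misleading on its own;
# agreement (or disagreement) across modalities is what makes
# the fused estimate more robust.
readings = [
    ModalityReading("face", valence=0.6, confidence=0.9),
    ModalityReading("voice", valence=0.4, confidence=0.7),
    ModalityReading("skin_conductance", valence=-0.1, confidence=0.3),
]
print(f"fused valence: {fuse(readings):.2f}")
```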

2. Data collection and methodology need to be adjusted to the use case

Not all use cases are created equal.

We have previously explained how to choose which data to collect and, subsequently, which research vendor to use.
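As a purely hypothetical illustration of that principle, the sketch below maps a few example use cases to the modalities that might be worth collecting for each. The use cases and modality lists are assumptions for illustration only, not recommendations for any specific deployment.

```python
# Hypothetical sketch: mapping a use case to the modalities worth
# collecting. Every use case and modality list here is an illustrative
# assumption, not guidance for any real deployment.

USE_CASE_MODALITIES = {
    # Short, seated sessions: facial and vocal data are practical.
    "usability_testing": ["face_video", "voice", "eye_tracking"],
    # In-vehicle: lighting varies, so lean on physiological signals.
    "driver_monitoring": ["heart_rate", "steering_grip", "eye_tracking"],
    # Audio-only channel: video modalities are unavailable by design.
    "call_center_research": ["voice"],
}

def plan_collection(use_case: str) -> list[str]:
    """Return the modalities to collect for a given use case."""
    try:
        return USE_CASE_MODALITIES[use_case]
    except KeyError:
        raise ValueError(f"no collection plan defined for {use_case!r}")

print(plan_collection("driver_monitoring"))
```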

For more information regarding accuracy, privacy, and ethical concerns, please feel free to review our lecture on the topic, available here.

3. Baselining 

As frequently mentioned in our lectures and research proposals, all algorithms need to be adjusted to each individual subject, i.e. their personal normal or baseline.

What is happiness for one person may not be happiness for another, and numerous edge cases can easily be screened out via the calibration process.

It is important to note that all data collected needs to be calibrated and adjusted to the subject being analysed.
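As a minimal sketch of what baselining means in practice, the example below expresses a raw physiological reading as a deviation from the subject's own calibration recording rather than from a population-wide norm. The heart-rate scenario and all numbers are illustrative assumptions.

```python
# Minimal sketch of per-subject baselining: signals are interpreted
# relative to each subject's own calibration recording, not against
# an absolute population-wide threshold. Numbers are illustrative.

import statistics

class SubjectBaseline:
    def __init__(self, calibration_samples: list[float]):
        # Recorded during a neutral calibration phase for this subject.
        self.mean = statistics.mean(calibration_samples)
        self.stdev = statistics.stdev(calibration_samples)

    def normalize(self, raw_value: float) -> float:
        """Express a raw reading as deviation from this subject's normal."""
        return (raw_value - self.mean) / self.stdev

# Two subjects, calibrated on their own resting heart rates:
calm_subject = SubjectBaseline([60.0, 62.0, 61.0, 59.0, 63.0])
anxious_subject = SubjectBaseline([80.0, 85.0, 78.0, 83.0, 81.0])

reading = 85.0  # the same raw heart-rate value for both subjects
print(f"calm subject: {calm_subject.normalize(reading):+.1f} SDs from baseline")
print(f"anxious subject: {anxious_subject.normalize(reading):+.1f} SDs from baseline")
```

The same raw reading lands far outside one subject's baseline and well within the other's, which is exactly why per-subject calibration matters.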


Conclusion: in the end, keep calm, implement multimodal affective computing adjusted for your use case (ideally ethically approved), and make sure to include baselining!

To be clear, we fully support the Commissioner’s concerns, especially around unethical unimodal use cases powered by unsupervised machine-learning models without proper baselining for the observed subjects.

Andrea Sagud