Within five years, all of our personal devices may be able to respond dynamically to our moods, according to US-based computer scientist Rana el Kaliouby.
Speaking at TEDWomen 2015 in May, she outlined her mission to bring emotions back into our digital experiences. At present, despite the intimate time we spend with our devices, they have no idea how we feel, says el Kaliouby. As a result, they cannot adapt to, or become more attuned to, our emotional states.
To tackle these frustrations, in 2009, she co-founded Affectiva, an MIT Media Lab spin-off specialising in emotion recognition through facial expression analysis. The firm uses deep learning software to recognise a full range of emotions, and even the strength of feeling. To do this, it has amassed the largest emotion database in the world, gathering 12 billion emotion data points from 2.9 million face videos from 75 countries to date.
Affectiva captures consumers' emotional responses via our most commonly used devices – smartphones and tablets. But el Kaliouby has a broader vision for tech in the future, when emotion chips will come as standard. Distant friends will be able to have heart-to-heart moments, wearable tech will help the visually impaired to read the faces of others, and educational apps will sense when to offer more help.
She said: "I think five years down the line, all our devices are going to have an emotion chip, and we won't remember what it was like when we couldn't just frown at our device and our device would say: 'Hmm, you didn't like that, did you?'."