Chances are, you’re already familiar with facial recognition software, even if you’ve never spent time in an artificial intelligence lab. The algorithm that Facebook uses for tagging photos, for example, is a version of facial recognition software that can identify faces with 97.25 percent accuracy.
The problem with most of today’s facial recognition software, however, is that it’s computationally intensive and difficult to use for more than matching simple photos. If you could speed up the process of recognizing faces, add the ability to track facial features, and make the software simple enough to run as a smartphone app anyone could use, then it might open up a number of important new opportunities.
That’s the goal of AI researchers at Carnegie Mellon University’s Human Sensing Laboratory. Beginning in February 2016, they will make their advanced facial image analysis software available to fellow researchers. The software, known as IntraFace, is fast and efficient enough that it can be installed as a smartphone app.
For now, to give a preview of what to expect, the researchers have released free demonstration smartphone apps that show how IntraFace can identify facial features and detect emotions. These can be downloaded from the Human Sensing Laboratory’s website, from Apple’s App Store, or from Google Play.
The project’s lead researcher, Fernando De la Torre, an associate research professor in Carnegie Mellon University’s Robotics Institute, says that he is already seeing enormous interest in IntraFace within the AI field. “Now it’s time to develop new applications for this technology. We have a few of our own, but we believe there are lots of people who may have even better ideas once they get their hands on it.”
Duke medical researchers, for example, are using IntraFace as part of an advanced tool to screen for autism. Doctors and clinicians might be able to monitor or detect a number of pathologies, including depression or anxiety, just by analyzing patients’ facial expressions.
One everyday use for facial recognition analysis might be vehicles that can recognize a distracted driver behind the wheel. In the demo video for IntraFace, there’s the example of a father behind the wheel of a vehicle who turns around to comfort a screaming toddler in a car seat for a split second before being warned that he’s veered off the road.
And sales and marketing teams are going to love this once the technology becomes commercially available – imagine being able to evaluate what people think about your products just by reading their faces. The Carnegie Mellon researchers refer to this as “audience reaction measurement,” and see potential applications for public speakers gauging how well they’re winning over the crowd. But you could also imagine biometric trackers on billboards, measuring how passersby respond to an advertisement.
For that matter, you could also envision IntraFace being used by dating apps. Not sure whether you should approach the target of your amorous intentions? Use the IntraFace app to read the facial features of that handsome guy or beautiful girl and get the green light to proceed.
IntraFace is the result of a decade of work by De la Torre and his colleagues, including Jeffrey Cohn, a professor of psychology and psychiatry at the University of Pittsburgh and an adjunct professor in CMU’s Robotics Institute. The researchers used machine learning techniques to train the software to recognize and track facial features. They then created an algorithm that can take this generalized understanding of the face and personalize it for an individual, enabling expression analysis.
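The article doesn’t spell out the algorithm, but training a model on many faces and then iteratively refining landmark positions for a specific face is the core idea behind cascaded-regression alignment methods like De la Torre’s published Supervised Descent Method. The toy sketch below – a guess at the flavor of the approach, not IntraFace’s actual code, with a stand-in feature function instead of real image descriptors – learns a linear regressor that nudges a rough landmark estimate toward the true positions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ground truth": five facial landmarks as (x, y) pairs, flattened.
true_shape = np.array([0.3, 0.4, 0.7, 0.4, 0.5, 0.6, 0.35, 0.8, 0.65, 0.8])

def features(shape):
    # Stand-in for image features sampled around each landmark (real
    # systems use descriptors like SIFT or HOG); here just the current
    # shape estimate plus a bias term, to keep the sketch self-contained.
    return np.concatenate([shape, [1.0]])

# --- Training: learn a descent step from many perturbed examples ---
n = 200
perturbed = true_shape + rng.normal(scale=0.1, size=(n, 10))
X = np.stack([features(s) for s in perturbed])   # (n, 11) feature matrix
Y = true_shape - perturbed                       # desired landmark updates
R, *_ = np.linalg.lstsq(X, Y, rcond=None)        # (11, 10) linear regressor

# --- Fitting: start from the mean shape, apply the learned update ---
estimate = perturbed.mean(axis=0)
initial_err = np.linalg.norm(estimate - true_shape)
for _ in range(3):
    estimate = estimate + features(estimate) @ R
final_err = np.linalg.norm(estimate - true_shape)
```

Because the regressor is trained on many perturbed faces, it captures a generalized sense of where landmarks sit; iterating the update then adapts that general model to one individual’s estimate, loosely mirroring the personalization step the researchers describe.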