We often rely on polygraphs, better known as lie detectors, but these devices are not always accurate. There are other ways to determine whether an individual is lying, such as observing micro-expressions, the small movements we subconsciously make while lying. These can include anything from a raised eyebrow to a tilt of the head.
Now, a group of scientists has revealed that they developed “an artificial intelligence system that can detect these micro-expressions and detect if you’re lying.”
The researchers from the University of Maryland and Dartmouth College hope that the AI system they developed, known as Deception Analysis and Reasoning Engine (DARE), can soon be used in courtrooms to determine whether people on the stand are being truthful. The study was posted on the preprint server arXiv.
According to the researchers, they developed DARE by training the system using videos of people in the courtroom. Dr. Zhe Wu, who led the study, shared, “On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions.”
The team explained that they trained the AI to identify five micro-expressions that often indicate when a person is lying: frowning, raised eyebrows, upturned lip corners, protruded lips, and the head turning to the side. DARE viewed 15 videos taken from courtrooms, then the AI was tested on whether it could tell if someone was lying in a final video.
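The pipeline described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's actual model: the five cue names come from the article, but the per-frame classifier scores, the detection threshold, and the frame-count voting rule are all assumptions made purely for illustration.

```python
# Hypothetical sketch of the described pipeline: per-frame classifier scores
# for five micro-expressions are binarized, then a simple vote over frames
# yields a deception estimate. Thresholds and voting are assumed, not DARE's.

MICRO_EXPRESSIONS = [
    "frowning",
    "eyebrows_raised",
    "lip_corners_up",
    "lips_protruded",
    "head_turned_sideways",
]

def detect_micro_expressions(frame_scores, threshold=0.5):
    """Binarize per-expression classifier scores for one video frame."""
    return {name: frame_scores.get(name, 0.0) >= threshold
            for name in MICRO_EXPRESSIONS}

def predict_deception(video_frames, min_hits=2):
    """Flag a video as deceptive if enough frames show any micro-expression.

    video_frames: list of dicts mapping expression name -> classifier score.
    min_hits: illustrative cutoff on the number of flagged frames.
    """
    hits = 0
    for scores in video_frames:
        if any(detect_micro_expressions(scores).values()):
            hits += 1
    return hits >= min_hits

# Usage: two of three frames contain a cue above the assumed threshold.
frames = [
    {"frowning": 0.8},
    {"lip_corners_up": 0.2},
    {"head_turned_sideways": 0.9, "eyebrows_raised": 0.6},
]
print(predict_deception(frames))  # True under these assumed cutoffs
```

A real system would replace the dictionary of scores with classifiers running on low-level video features, as the researchers describe; the aggregation step here simply makes the train-then-test idea concrete.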
Based on the results, DARE was able to pinpoint 92 percent of the micro-expressions, a result the researchers called a "good performance." Human assessors given the same task picked up only 81 percent of the micro-expressions, suggesting the AI was better than humans at identifying when a person was lying.
The researchers commented, "Our vision system, which uses both high-level and low-level visual features, is significantly better at predicting deception compared to humans." They added that with access to further information, the AI could "be even more effective." According to the researchers, "complementary information" gathered from audio and transcripts can further improve DARE's "deception prediction."