In 2012, Jeff Hancock, Associate Professor of Cognitive Science and Communications at Cornell University, gave a fascinating and prescient talk entitled The Future of Lying. He discussed how we interact with and deceive each other through social media, pointing to three new types of lying that have surfaced in the digital era.
One of the most fascinating parts of Hancock's talk (which I encourage you to watch for yourself; this portion begins around fifteen minutes in) concerns a computer algorithm that can analyze the linguistic differences between deception and honesty, specifically in travel reviews. One of the coolest findings of the highly accurate algorithm was that deceptive reviews tend to mention first-person singular pronouns such as "I" and "me" more often, and to use noticeably more verbs and adverbs.
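To make the idea concrete, here is a toy sketch of the kind of linguistic cue-counting described above. It is emphatically not the actual Cornell model: the pronoun list and the "ends in -ly" adverb heuristic are my own simplifying assumptions, and the real research used full part-of-speech tagging and a trained classifier rather than raw counts.

```python
import re

# Simplifying assumption: a tiny hand-picked pronoun list stands in for
# real lexical resources, and "-ly" words approximate adverbs.
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def linguistic_features(review: str) -> dict:
    """Return per-100-word rates of two deception-associated cues."""
    words = re.findall(r"[a-z']+", review.lower())
    if not words:
        return {"first_person_rate": 0.0, "ly_adverb_rate": 0.0}
    pronouns = sum(w in FIRST_PERSON_SINGULAR for w in words)
    # Crude adverb proxy: words ending in "-ly" (longer than 3 letters).
    ly_adverbs = sum(w.endswith("ly") and len(w) > 3 for w in words)
    per100 = 100.0 / len(words)
    return {
        "first_person_rate": pronouns * per100,
        "ly_adverb_rate": ly_adverbs * per100,
    }

# Invented example sentences, not drawn from the actual review dataset.
truthful = "The hotel was clean and the staff checked us in quickly."
deceptive = "I truly loved my stay; my room was absolutely perfect for me."

print(linguistic_features(truthful))
print(linguistic_features(deceptive))
```

A real system would feed features like these (and many more) into a classifier trained on labeled truthful and deceptive reviews; the sketch only shows why the raw counts differ in the first place.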
What is fascinating to me, however, is not the algorithm's findings, but what its accuracy and use have in store for the future of social interaction. As Hancock mentions, we as human beings are terrible at detecting deception: we can tell whether a statement made by another person is a lie only 54 percent of the time, only marginally better than chance. But what will happen when we are augmented to detect these kinds of lies 80, 90, or 100 percent of the time? While this may sound like science fiction, the reality is closer than you think.
Norberto Andrade of The Atlantic writes of a similar computer program, developed by researchers at Ohio State University including Aleix M. Martinez, that used facial recognition to identify the six basic emotions with 96.9 percent accuracy, and compound emotions with 76.9 percent accuracy.
And it's not just emotions. As Hancock discusses in his TED Talk, there already exist computer programs that can determine whether someone is being deceptive with far greater accuracy than humans can achieve. And with countless gigabytes of content being generated every day, the algorithms we use to detect deception will only become smarter, faster, and more prevalent.
With the development of APIs like emotionAPI and continued research in labs such as Aleix Martinez's Computational Biology and Cognitive Science Lab, it is no longer a matter of if we'll get technology that helps us read others' emotions and deceptive patterns; it's a matter of when. And with the inevitable adoption of Augmented Reality in the form of Google Glass and other competing products, this information will be displayed front-and-center throughout all of your future interactions.
This, of course, raises myriad fascinating questions. How will we use this new technology? How will we react when every text message, email, and spoken sentence we utter or receive is judged for its honesty, not by a person, but by an algorithm that performs far better than we do? Will it spell the death of lying?
It is estimated that, currently, people get away with almost 95 percent of the lies they tell. According to an article entitled Lying Comes Easy, we aren't even aware of how many lies we tell: people are often surprised by how often they lie when asked to be mindfully aware of their dishonest behavior.
But with the use of deception-detecting algorithms, we will be able to tell whether someone is lying at near-perfect rates, turning that 95 percent failure rate into an equally high success rate.
Fascinatingly enough, this may not be as damning as it seems. Despite all the doomsayers and technophobes decrying Facebook as a devilish tool for lying to and deceiving others, our Facebook profiles are actually highly correlated with our real identities: strangers' judgments of a person via their profile and their judgments of that person face-to-face were strikingly similar. By the same token, people are more likely to lie on paper resumes than they are on professional sites like LinkedIn.
One of the reasons given for this reduction in lying is the "paper trail" left behind when we lie on social media. It's very difficult to recover from being caught in a lie when we can't play dumb. Technology cements our lies to our past, making it very difficult to pretend they do not exist. Indeed, Hancock found that we are more honest in emails than over the phone, because emails can be documented in a way phone calls can't.
As highly adaptable creatures, we learn very quickly to avoid deception when we are likely to get caught. It seems, then, that when algorithms arrive that can determine, in our everyday lives, when we are lying (and they are approaching fast), they may not spell doom, despair, and anger for the human race. Perhaps they will just make us a little more honest.
In fact, when algorithms that detect our emotional states are put into practice, they could have a whole range of uses. Imagine being able to recognize when the person you're having lunch with is having a bad day, and offering to pay for the meal. Imagine an autistic child or a severely introverted person using the emotional data output by these algorithms to better understand emotion. Imagine psychotherapy coming into vogue once again as psychologists see, in real time, the effects of their therapy on the patient. Imagine the world we can create when all of us are honest with each other, tolerant of our emotions, and a little more understanding.