The Death of Lying: How Technology Continues to Make Us More Honest

In 2012, Jeff Hancock, Associate Professor of Cognitive Science and Communications at Cornell University, gave a fascinating and prescient talk entitled The Future of Lying. He discussed how we interact with and deceive each other through social media, pointing to three new types of lying that have surfaced in the digital era.

One of the most fascinating parts of Hancock’s talk (which I encourage you to watch for yourself; the segment begins around fifteen minutes in) concerns a computer algorithm that can pick out the linguistic differences between deception and honesty, specifically in travel reviews. One of the coolest findings of this highly accurate algorithm was that deceptive reviews tend to mention first-person singular pronouns such as “I” and “me” more often, and to use noticeably more verbs and adverbs.
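
Just to make those cues concrete, here is a toy Python sketch of the kind of per-token features such a classifier might feed on. To be clear, this is not the Cornell team’s actual model – just an illustration of the cues described above, built on NLTK’s standard tokenizer and part-of-speech tagger:

import nltk

# One-time downloads of the tokenizer and POS-tagger models.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def deception_cue_features(review: str) -> dict:
    """Per-token rates of the linguistic cues mentioned above."""
    tokens = nltk.word_tokenize(review)
    tagged = nltk.pos_tag(tokens)
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        # Cues reported to run higher in deceptive reviews.
        "first_person_rate": sum(t.lower() in FIRST_PERSON_SINGULAR for t in tokens) / n,
        "verb_rate": sum(tag.startswith("VB") for _, tag in tagged) / n,    # VB, VBD, VBG, ...
        "adverb_rate": sum(tag.startswith("RB") for _, tag in tagged) / n,  # RB, RBR, RBS
    }

print(deception_cue_features("I loved my stay! My husband and I were treated wonderfully."))

A real classifier would, of course, learn weights over many such features from labeled reviews; the point is just how simple the raw signals are.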

What intrigues me, however, is not the algorithm’s findings, but what its accuracy and use have in store for the future of social interaction. As Hancock mentions, we as human beings are terrible at detecting deception – we can tell whether a statement made by another person is a lie only 54 percent of the time, only marginally better than chance. But what will happen when we are augmented to detect these kinds of lies 80, 90, or even 100 percent of the time? While this may sound like science fiction, the reality is closer than you think.

Jason Merkoski, CEO of moodzle.com and co-founder of beforever.me, boasts an emotionAPI that can determine the emotion being expressed in someone’s speech 91 percent of the time. It can do so on both male and female voices, in noisy environments, and through a smartphone microphone.
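
I have no inside knowledge of how emotionAPI is actually invoked, but to give a feel for how such a service might slot into an app, here is a hedged Python sketch of a client. The endpoint URL, field names, and response format are invented for illustration and are not the real interface:

import requests

def classify_speech_emotion(audio_path: str) -> str:
    """POST a recorded clip to a (hypothetical) speech-emotion endpoint."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            "https://api.example.com/v1/emotion",  # placeholder URL, not the real service
            files={"audio": f},
        )
    resp.raise_for_status()
    # Assume a response like {"emotion": "joy", "confidence": 0.91} --
    # again, an invented format for the sake of the example.
    return resp.json()["emotion"]

print(classify_speech_emotion("lunch_call.wav"))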

Norberto Andrade of The Atlantic describes a similar computer program, developed by researchers at Ohio State University including Aleix M. Martinez, which used facial recognition to identify the six basic emotions with 96.9 percent accuracy and compound emotions with 76.9 percent accuracy.

And it’s not just emotions. As Hancock discusses in his TED Talk, there already exist computer programs that can determine whether someone is being deceptive far more accurately than humans ever could. And with countless gigabytes of content being generated every day, the algorithms we use to detect deception will only become smarter, faster, and more prevalent.

With the development of APIs like emotionAPI and continued research in labs such as Aleix Martinez’s Computational Biology and Cognitive Science Lab, it is no longer a matter of if we’ll get technology that helps us read others’ emotions and deceptive patterns; it’s a matter of when. And with the inevitable adoption of Augmented Reality in the form of Google Glass and other competing products, this information will be displayed front and center throughout all of your future interactions.

This, of course, raises myriad fascinating questions. How will we use this new technology? How will we react when every text message, email, and spoken sentence we utter or receive is judged for its honesty, not by a person, but by an algorithm that performs far better? Will it spell the death of lying?

It is estimated that, currently, people get away with almost 95 percent of the lies they tell. According to an article entitled Lying Comes Easy, we aren’t even aware of how many lies we tell: people are often surprised at how often they lie when asked to be mindfully aware of their dishonest behavior.

But with the use of deception-detecting algorithms, we will be able to tell whether someone is lying at near-perfect rates, turning that 95 percent failure rate into an equally high success rate.

Interestingly enough, this may not be as damning as it seems. Despite all the doomsayers and technophobes decrying Facebook as a devilish tool for lying to and deceiving others, our Facebook profiles actually correlate highly with our real identities: strangers’ judgments of a person based on their profile and their judgments of that same person face-to-face were strikingly similar. By the same token, people are more likely to lie on paper resumes than they are on professional sites like LinkedIn.

One of the reasons given for this reduction in lying is the “paper trail” left behind when we lie on social media. It’s very difficult to recover from being caught in a lie when we can’t play dumb; technology cements our lies to our past, making it very hard to pretend they do not exist. Indeed, Hancock found that we are more honest in email than over the phone, because emails can be documented in a way phone calls can’t.

As highly adaptable creatures, we learn very rapidly to avoid deception when we can get caught. It seems, then, that when algorithms arrive that can determine, in our everyday lives, when we are lying – and they are approaching fast – it may not spell doom, despair, and anger for the human race. Perhaps it will just make us a little more honest.

In fact, when algorithms that detect our emotional states are put into practice, they could have a whole range of uses. Imagine being able to recognize that the person you’re having lunch with is having a bad day, and offering to pay for the meal. Imagine an autistic child or a severely introverted person using the emotional data output by these algorithms to better understand emotion. Imagine psychotherapy coming into vogue once again as psychologists see, in real time, the effects of their therapy on the patient. Imagine the world we can create when all of us are honest with each other, tolerant of each other’s emotions, and a little more understanding.

7 thoughts on “The Death of Lying: How Technology Continues to Make Us More Honest”

  1. This is a very interesting post. It can also be very creepy knowing that emotions can be read. One can get really paranoid. 🙂 If this technology can help in court cases, it will surely lessen the number of trials.

  2. Well written post! I’m very intrigued. I’ve downloaded the talk you mentioned and will watch it tomorrow!

    I wonder how people will feel about this with regard to freedom of speech. It sounds very controversial… though I suppose it’s simply another tool for reading people’s emotions. I also think about how great this would be in creating a world where people understand the way others are feeling and can act accordingly (as you mentioned). As a teacher, one of the biggest struggles I’ve had is guiding students toward empathy. Many children have an incredibly difficult time understanding how others might be feeling, why they might act the way they do, and so on. I’m not sure what these programs would look like in practice, but these types of tools will, at the very least, be helpful in raising children (and adults) to empathize.

        1. I believe you’re right: most kids start learning complex moral reasoning at around 8 years old, so they’re pretty ready. I imagine it’s really difficult to teach that kind of thing, though. That age is but a distant memory for me, and I’m still learning about empathetic behavior.

          I read an article recently about an empathy toy from 21Toys; you might want to check it out. It looks fascinating, and I’m sure lesson plans could be designed to mimic what they are trying to do.

            1. That’s a cool game. We’ve got lots of similar activities that we do with children to help teach cooperation, empathy, and the like. It is definitely a challenge, but they do get there eventually. Each teacher just plays one small part 🙂
