16 May 2018

Google’s Duplicity Focuses on the Wrong Human Experience

by Stephanie Lummis

I’ve been duped on calls before, so Google Duplex has my imagination running in all sorts of directions.

When my daughter was in grade two I got a call from her school. She was injured. It wasn’t serious, but she was banged up from a playground fall. Could I come to the school and pick her up? Upset and worried, I talked to my daughter on the phone; she was crying and wanted me. I couldn’t leave work, so I waited anxiously while my husband went to pick her up.

A half hour later I got a call from my husband, who was at the school. Maia was fine. But Maya wasn’t. They had called the wrong mom!

I was so relieved she was OK. But then I was incredulous. I had talked to her on the phone. How did I not know I wasn’t talking to my Maia? How did she not know it wasn’t her mother? We were both upset on the phone, but wouldn’t we know?

And then I was angry. How could the school get this wrong? And the injured child’s parents weren’t notified until almost an hour after I got the call! The stress, the wasted time – I know the school wasn’t intentionally trying to trick me. They laughed it off, in fact. What a silly mistake! (Don’t get me started on that reaction; that’s another story.)

Human engineering

My daughter is in grade 8 now, and I hadn’t thought of this incident in years. Not until I read about Google Duplex, the AI robot engineered to make people think it’s human. Lots has already been written since Google demoed the technology last week. I have so many thoughts about this, and my mind keeps returning to it, troubled.

[Image: cartoon robot on the telephone, representing Google Duplex. Caption: Disguising the robot on the other end of the phone creates mistrust.]

Let’s start with the name – Duplex is an interesting choice. To many, a duplex is a two-family dwelling. Is this supposed to represent the human and the robot living happily side by side? As an adjective it simply means having two parts, but it makes me think of duplicity. Someone who is two-faced. That word is never used in a positive context. It means pretending to be someone you are not, and doing so deliberately.

Google says one of their success criteria was whether the person on the other end of the phone knows they are not talking to a real person. They created a human-sounding voice and added in the quirks of conversational language – the umms and hmms – to sound more natural. Bottom line: Google Duplex is designed to trick.

The robot in the room

All sorts of scenarios play out in my head from the two simple situations they demonstrated: (1) booking a hair appointment, and (2) making a restaurant reservation. These were chosen because they are easy, relatively predictable conversations that Google could flesh out. What happens when a call doesn’t go well? What if:

  • Duplex says the wrong thing and the person gets offended.
    I went to my stylist for 8 years before she moved away. As soon as she heard my name she would have switched gears to the familiar greeting reserved for regulars: “Muffin, are we going to add some highlights and hide that grey this time?” (Yes, my stylist called me Muffin.) How does a robot respond to that? Certainly not with the same friendly familiarity, which could easily be interpreted as rudeness. Salons are busy and loud, so she isn’t going to notice that my voice doesn’t quite match. How will my stylist then respond to me when I show up for the appointment?
  • Someone arrives for their salon appointment and the stylist strikes up a conversation about the phone call. Do they lie to the stylist? “Oh, you were speaking to my assistant.” And if they tell the truth – that the stylist was talking to a robot – the stylist feels foolish for having thought it was them.
  • The person who took the call learns the truth. They begin to mistrust every call they answer. And when dealing with a robot, people tend to be ruder and less patient. What happens when they speak to a real person but think it’s a robot?

This all comes down to trust

Since the rise of social media, the standard brand advice to organizations has been: be authentic and transparent. Don’t attempt to deceive, or customers won’t trust you and credibility is lost. But relationships are a two-way street. Why should it be OK for a customer to deceive an organization? If a restaurant is acting with integrity, doesn’t it have an expectation of the same in return?

These scenarios may seem trivial – they are simple conversations, after all. But what are the ramifications for more complex conversations? For more sensitive issues?

Google has been working on this for years. This technological accomplishment – and it is an accomplishment – was not easy. To achieve natural language responses when you don’t know what the conversation will bring is not trivial. In fact, in one of the demos, the restaurant call doesn’t go as expected, yet Duplex is still able to have a successful conversation. Impressive to say the least – their deception game is strong.

Duplicity was a conscious decision

Google says they considered using a robotic voice instead of a human-sounding one, but worried people would hang up. That’s probably a legitimate concern, but it should not have taken priority over damaged customer relationships. Invest in testing different voices, greetings, and phrasings to minimize the hang-ups. Whether that means using a robotic voice or announcing at the start of the call that you’re speaking to a robotic Google Assistant, be up front.

It’s also short-sighted thinking. With the rise of Google Home, Alexa, and Siri, we are all growing accustomed to talking to robotic voices every day – and doing so voluntarily. This technology is not an unexpected evolution; it’s just that in this case we humans are answering the conversation rather than initiating it.

Test the f#&% out of this technology

This is an ethically precarious time for technology (OK, technology always walks an ethically precarious path, but it’s more visible now). A time when a self-driving car sees a pedestrian in its path and decides to keep going. Sara Wachter-Boettcher filled a whole book, Technically Wrong, with examples of biases, assumptions, and oversights inadvertently baked into the technology we use every day, leading to discrimination and harm.

To be clear, this technology is still in development. Google knows there is work to be done. Hopefully their plan includes broad, inclusive testing before it is widely released, along with safeguards to protect against misuse.

Give the humanity to the humans

The potential upside of this technology is huge, don’t get me wrong. For those with voice or hearing disabilities, it can make the world more accessible. But this was a cavalier introduction, and Google was ethically negligent. Perhaps the swift backlash of this past week is a blessing in disguise. Hopefully it will prompt some rethinking and more testing before the technology evolves further.

And Google can make the user experience – for the human on the other end of the phone – a priority.

Related reading:

  • The excellent TechCrunch article, “Duplex shows Google failing at ethical and creative AI design”
  • Sara Wachter-Boettcher’s book, Technically Wrong – highly recommend!
  • Google’s Duplex announcement on their AI blog