New Google Assistant Calling Feature Announced at Google I/O 2018 Draws Mixed Reactions
Recently in Mountain View, California, at the Shoreline Amphitheatre where Pearl Jam, k.d. lang, Bruce Springsteen, and many others have performed, tech took centre stage. And it did not disappoint. The Google I/O keynote, like last year’s, showcased the strides Google’s impressive machine learning applications have made over the past 365-ish days, advances that power everything from Google Maps and Photos to Google Assistant and Waymo self-driving cars. One of the most impressive demonstrations came when Sundar Pichai showcased the new Google Assistant Calling feature. On stage, using WaveNet to generate more human-like voices and inflection along with the powerful Google Duplex system, the Assistant called businesses to set up all sorts of appointments and even made dinner reservations. The interaction demos were impressive and straight out of the uncanny valley. In Shoreline, the reactions were palpable; an audible “Ooooh” and then a much quieter, slightly uncomfortable “Wow” went up collectively from the audience.
During the I/O sessions that followed that notable demo, I asked other attendees what they thought about the Assistant’s new human-like capability. At the same time, I searched Twitter to see more reactions. Developers, designers, and even just plain-old users of technology consistently fell into two camps. There were the optimists: “That was an amazing demo!”; “I couldn’t tell who the human was”; “I can’t wait to use this”; “Think of all the possibilities”. And then there were the pessimists: “Being human feels very unnecessary now.”; “So, now we’re robocalling each other?”; “I’m going to feel really duped when I figure out I’m not talking to a person.”; “How long till our robots are just calling each other?”. I watched a (surprisingly) thoughtful debate take place on Twitter. An acquaintance tweeted that these new AI calls will help no one and that trust in systems like this is detrimental to society. Another user replied that there are millions of people for whom a simple phone call is terribly hard to make. That’s when my view on Assistant Calling softened. While I’m still on the wary side, I am definitely open to looking at this from other points of view.
Assistant Calling is the latest in a long line of ‘futuristic’ technology that seems to bring more flash than substance, or that is built merely for trivial convenience. But in the service of convenience, we have inadvertently gained inclusivity and accessibility for others. What if we flipped that, as an industry? What if we approached each and every one of these technologies with accessibility as a forethought rather than an afterthought? Necessity is the mother of invention, and I wonder if we have had that backwards: creating technology and innovation because we can, without often asking what its greatest necessity and purpose is. Assistant Calling, for example, will actually help people who have difficulty hearing, have social anxiety, or are not native speakers, among others. If we approach Assistant Calling with accessibility in mind, technology like this becomes less scary and far more impactful: impactful for many, a convenience for more, and inclusive for all. The same comparison can be made for almost any creepy technology that crosses the line of making people uncomfortable but also opens up the possibility of truly helping them.
Facial recognition is another feature that can be accessibility-forward. While mapping faces and recognising images is definitely dangerous ground, facial recognition can actually be very helpful to a variety of people. Children on the spectrum can use it to learn to differentiate the moods of others and the meaning of their behaviour. There are around 285 million people in the world with some form of visual impairment, and many have already been helped by this new technology, using it to unlock their phones with ease. In addition, Facebook has used this feature to help blind users identify people in their feeds. Examples like this make me feel slightly better about products that could become invasive or misused. In recent years at Google I/O and other major conferences, I have seen the importance of elevating accessibility and, thus, giving technology the more humane touch it was definitely lacking.
And why shouldn’t everyone be able to use these tools that were developed to make our lives better and easier?
The promise of these innovations recently introduced to the Google Assistant is great, and so is the fear (remember the pessimists). What makes this kind of technology so terrifying to me is that the data it gathers and requires is often deeply personal and unchangeable. Setting an AI loose to make phone calls to humans means it learns in-depth, personal information about the real human on whose behalf the calls are being made. Facial recognition uses data about your face to do what it does, which means your face can now essentially be owned by an organisation.
Unfortunately, I believe it will be hard to separate the need for these inventions from the misuse and manipulation that they oftentimes invite. Let’s acknowledge both sides of the debate equally and ensure that inclusivity and the demand for safety measures are front and centre as our industry progresses. It will be the responsibility of developers and designers to ask themselves the important questions about what they’re building, why they’re building it, who it’s really meant for, and the unethical ways their products could be used. We look forward to seeing more advancements like the Google Assistant Calling feature that promote accessibility in the market; we just need to continue to strive for, and demand, that privacy and security are equally considered.
This article was originally published on LBB Online.