To design for autonomous vehicles, chat with the back-seat driver

Strategy Director

Smart have been designing HMI (car controls), voice control, and HUD (heads-up display) systems for a while, and have gained some valuable insights into future car experiences.

Learning from back-seat drivers

When we think ahead to driverless systems, it feels like scary unknown territory, but we have plenty of reference points to learn from. Take China, where a much higher proportion of people can afford personal drivers. Traditionally, most car design prioritizes the driver’s seat, because the driver is usually the owner. But in China, the owner often sits in the back, and from there they still want the same degree of control over the environment, entertainment, and navigation. By beginning to design for the “back-seat driver”, we are anticipating the moment when the front-seat driver is replaced by AI. OEMs have to shift from a driver-focused to a passenger-focused approach.

To that end, autonomous vehicles (AVs) present exciting opportunities for digital interaction. Unfortunately, we are still in the honeymoon period with touch screens; we are obsessed with them and think they are the answer to everything. But I can tell you, they’re not. The depth and breadth of the controls you’d want to include in an autonomous taxi service would result in a user journey that either has so many clicks you’d forget what you were doing before you found what you wanted, or so many buttons at the top level that the screen would look like a cable TV remote control on steroids. Add to that the number of people who suffer from nausea when they try to read or use a screen in a moving car, and you’re at an impasse.

Voice control will be key

A more effective and natural solution is voice control. Its power is the ability to cut through layer upon layer of interaction instantly, and the accuracy of the technology is improving rapidly. One of its challenges, however, is the opacity of what is or isn’t possible. In design we call this a lack of semiotics, or affordances: it isn’t immediately clear what Alexa can or can’t do, so we just talk to her and blindly feel out the edges of her capabilities. Her role as a “digital assistant” (a nebulous title at best) is too vague, and so our approach to her is unclear. It makes us slightly nervous to hear of OEMs planning to incorporate Alexa, Siri, and their cohorts into future car dashboards.

Make it human

One way we’ve proposed to solve this is to use a specific human avatar as a mental model. Avatars of human roles we are familiar with (e.g. a virtual chef for cooking applications or a virtual nurse for health scenarios) give us a sense of what we might be able to ask and, equally important, give designers a sense of what people might ask, when they might ask it, and how they might phrase it. With AVs, the choice is obvious: by addressing the user interface as a “personal driver”, you can already imagine the kinds of things you can request, without fumbling through menu structures or taking your eyes off the horizon. Replace “Okay Google” with “Oh, Driver!” and suddenly you can imagine requesting “take the scenic route,” “could you turn down the temperature, please,” and “let’s visit my sister’s house.” Hopefully, given that structure, it would be more successful than Alexa’s attempt to scratch your back.
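To make that concrete, here is a minimal sketch of how the “personal driver” persona bounds the design problem: the persona suggests a small set of phrases passengers would naturally use, which a designer can map onto vehicle intents. The intent names, phrasings, and the interpret function below are illustrative assumptions for this article, not a description of any shipping system.

```python
# Hypothetical sketch: a "personal driver" persona as a mental model.
# The persona constrains what users are likely to ask, so the designer can
# map a small, predictable set of natural phrases onto vehicle intents.
# Intent names and phrasings are illustrative assumptions only.

import re

# Each intent pairs a vehicle capability with the kinds of phrases a
# passenger would naturally say to a human driver.
DRIVER_INTENTS = {
    "route.scenic": [r"\bscenic route\b", r"\bnice drive\b"],
    "climate.cooler": [r"\bturn down the temperature\b", r"\b(too|a bit) warm\b"],
    "navigate.contact": [r"\bvisit my (\w+)'s house\b", r"\btake me to (\w+)\b"],
}

def interpret(utterance: str) -> str:
    """Map a passenger's request to an intent, the way a human driver
    would infer what is wanted from everyday phrasing."""
    text = utterance.lower()
    for intent, patterns in DRIVER_INTENTS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return "clarify"  # ask a follow-up question, as a driver would

if __name__ == "__main__":
    for request in [
        "Oh, Driver! Take the scenic route.",
        "Could you turn down the temperature, please?",
        "Let's visit my sister's house.",
    ]:
        print(f"{request!r} -> {interpret(request)}")
```

The point isn’t the pattern matching (a real system would sit on a proper speech and language stack); it’s that the persona gives the user and the designer a shared, bounded vocabulary to design around.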

Driverless systems aren’t the scary unknown territory we think they are. Just chat with the back-seat drivers; they are already one step ahead.