
Navigating the Internet of Things: Can I Trust My Car?

The other day, I connected a new phone to my car’s “Communications Console” for the first time. It all went smoothly, with the technology working pretty much as I expected.

Then I got this message on my phone:

[Screenshot: the message from my phone, asking whether to trust the newly paired device]
Figure 1: In other words, do you trust your car to have access to everything on your phone?

I had to think about this for a minute…literally. The message is hard to parse: What am I adding? How do I know what VW BT 9314 is? How far away is 100 meters?

Once I’d picked apart the syntax, recognized “VW BT” as Volkswagen Bluetooth, and mentally paced off a distance about as far as I park from the front door of my grocery store, I realized that this is a much bigger question than a simple “Are you sure?” in a software dialog. The risk isn’t just whether I have accurately understood the question. To answer it accurately, I need to understand what might happen in an area around my car, not just now, but in the future.

Even worse, I don’t really know what that question is. The message in the interface is asking me whether I trust my car, but I have no idea what I’m trusting my car to do.

So, I asked a geeky friend to translate for me. He said, “I don’t know what your car can do. It might be asking you if you trust that when your car talks to your phone, it really is your car.” That seemed fair. My car is a big physical object, and I usually know whether I’m near it or not. But his first point kept nagging at me: what can my car do that I might not want it to do…or even know about?

Trusting Technology

Our social lives are built on many levels of trust. We are asked to trust other people, organizations, institutions, and governments, in both explicit agreements and tacit assumptions. Even flipping a light switch is an act of faith: we believe that by doing so a light will turn on; that it will be the same light as last time; that there will be electricity to power it; that it will not spark an explosion.

At one level or another we make these leaps of faith in almost everything we do.

As technology gets more and more embedded in our lives, we increasingly inhabit a world where the boundaries are invisible. How do we design interfaces that help people understand what they are agreeing to every time they answer an “Are you sure?”

Every new technology brings new social challenges as human beings learn to negotiate the communication, trust, and security issues implicit in that technology. Behind every new feature and every interaction, there is a team of people setting up those decisions for millions of users to make.

There’s a lot of evidence that, too often, we make it too hard for users to reach a decision they are comfortable with. That’s partly because it’s hard to balance explaining things clearly against drowning people in information. But we also often get the conversation wrong, making assumptions about what terminology users know or how well they can imagine how a new technology really works.

It’s not just that with the Internet of Things we have to decide whether we can trust devices like our cars with our information. We also must ask whether our devices have been designed to be good citizens. Like The Sorcerer’s Apprentice, we may find that these devices are out of our control and can affect the “commons” with new possibilities for mischief.

For example, in October 2016 one of the largest distributed denial-of-service (DDoS) attacks on record shut down much of the Internet in the US. The culprit turned out to be baby monitors. Yep, baby monitors and a lot of other devices with cameras built into them—simple Internet-connected cameras that were infected with malware. Part of the problem is that we want these devices to be easy to set up and use. In fact, they are so easy to use that almost anyone can break into them. In this case, the problem turned out to be a single brand of webcam that—wait for it—had a password written into the firmware, essentially spoon-feeding access to hackers.

The result was, as CNET put it, that “an army of DVRs and cameras kept you off Reddit for most of [a] day” as hackers turned 100,000 vulnerable devices into a malicious botnet, or a zombie army of “things.”
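To make that flaw concrete, here is a minimal C sketch of what a password “written into the firmware” amounts to. The names and the credential are invented for illustration; the real devices differed in detail, but the anti-pattern is the same: one credential, compiled into every unit, that no setup flow ever asks the owner to change.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical credentials baked in at build time. Every unit of the
 * product ships with the same values, and nothing forces a change. */
#define ADMIN_USER "admin"
#define ADMIN_PASS "admin1234"

/* Returns 1 when the supplied credentials match the compiled-in pair. */
static int check_login(const char *user, const char *pass) {
    return strcmp(user, ADMIN_USER) == 0 && strcmp(pass, ADMIN_PASS) == 0;
}

int main(void) {
    /* Anyone who extracts the firmware image from one device now holds
     * a working login for every device of that model on the Internet. */
    puts(check_login("admin", "admin1234") ? "access granted" : "access denied");
    return 0;
}
```

The fix is as much a design decision as an engineering one: generate a unique credential for each device, and make changing it a required step in that easy setup flow.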

[bluebox]

You can Google this yourself, but here are three articles that explain what happened in the denial-of-service attack.

Hackers Used New Weapons to Disrupt Major Websites Across U.S., New York Times, October 22, 2016

Why it was so easy to hack the cameras that took down the web, CNET, October 24, 2016

Chinese firm recalls webcams used in last week’s massive cyber attack as experts warn poor device security may lead to another major hack, Daily Mail, October 24, 2016

[/bluebox]

Designing for Trust

Like usability, accessibility, and quality, trust must be built into a system. Just because something is possible doesn’t make it a good idea. Big data algorithms are a particular ethical concern, so much so that the ACM US Public Policy Council has published a set of principles for algorithmic transparency and accountability.

As Eric Meyer and Sara Wachter-Boettcher wrote so elegantly in their book Design for Real Life, it’s critical to think—right from the beginning—about what might go wrong and how different people might use (or misuse) a new feature. What might the impact be on the very real people who will use what you create?

  • Will they understand the question each interaction asks?
  • Will they know the risks of each action they take?
  • Can they imagine the consequences to themselves and others?

These aren’t rhetorical questions. News reports are filled with stories that range from people who thought their email was a private conversation, to new features in social media that have had devastating unintended consequences, to dark patterns designed to manipulate the people who use our products.

  • Can users trust us to work thoughtfully and ethically so they can trust us—and the devices we work on?
  • How can we make human-centered ethics, trustworthy transparency, and security for people part of every UX decision?

Each of us has to answer these questions for ourselves. It’s a topic for conversation with our teams, our colleagues, and the people whose lives we touch with our work.

Discuss among yourselves.


Whitney combines a fascination with people and an obsession to communicate clearly with her work bringing user research insights to designing products where people matter. She's also passionate about elections, and leads the Center for Civic Design with Dana Chisnell. Her books are Storytelling for User Experience, Global UX, and A Web for Everyone. Twitter: @whitneyq and @awebforeveryone