
Gaining User Trust: Research and a Secret

Directing users to make out-of-context privacy and security decisions hurts them and software companies alike. It’s not just that users don’t understand the trust dialogs that computers present; it’s that they also use other clues to determine trustworthiness, and make one-off decisions based on their desire to complete the task that the dialog interrupted.

Understanding users’ behaviors leads to a handy acronym, SECRET, for a set of criteria designers should follow when developing trust and privacy related interfaces.

SECRET – a Scoped, Equitable, Contextual, Responsible, Emotional and Timely user experience for trust and privacy decisions

(Re)Learning about Trust

Back in the early 2000s, several nasty viruses hit the Internet in quick succession: ILoveYou, Sircam, Code Red, and Nimda all spread across the globe thanks in part to poor end-user security.

To some extent, these viruses and privacy-reducing spyware products propagated because users found it difficult to understand the implications of their online actions. At Microsoft, we started work on Service Pack 2 for Windows XP to address these issues. From a technical perspective it was easy to throw up warning dialogs and quarantine certain downloads, but the largest issues lay in getting users to respond to the warnings produced by the operating system.

As technologists, we knew our perception of trust differed from that of regular users; so as part of Microsoft’s Trustworthy Computing Initiative, we went back to first principles to learn how consumers thought about privacy and security.

First, we encouraged a group of usability study participants to create “trust maps” using craft supplies (see Figures 1 and 2). The maps helped us to understand what trust meant to them and formed the basis for subsequent one-on-one interviews. We asked about previous trust incidents, how participants recovered from those incidents, and how the incidents made them feel.

A collection of trust maps made from paper cutouts
Figure 1. Participants’ trust maps took many forms, but almost all showed differing distances between entities based on levels of trust.

Asking users to recall previous trust incidents elicited some very powerful emotions. Interviewees used terms such as frustrated, violated, preyed upon, and exposed to describe events that had occurred to them.

During the course of the interviews, however, it became apparent that the experiences that users described contradicted their earlier statements about whom they trusted with various levels of personal information. In other words, what participants said they would do differed from their actual behavior. (See articles by Caroline Jarrett and Kelly Bouas Henry in this issue of UX.)

For example, when asked whether he cared whether people could see where he’d been online, one participant stated, “No. It’s just the idea that they’re in there. But it’s not as much a privacy deal as your credit card or letters.” However, ten minutes later, when reading an online privacy statement, the same participant said, “Why would you want to be tracked? You’ve lost your freedom. I’m not happy with that.”

It appeared that users’ rational thoughts about their behavior were overpowered by emotional criteria, such as who had suggested they visit a site or how much they wanted the thing the software offered them. They would make one-off emotional trust decisions without necessarily considering the rational consequences: users were happy to give out information when it suited them, and regretted those decisions later when the consequences no longer did.

Interestingly, participants did not see the computer as an actor in the trust decision. Computers themselves were neither trusted nor distrusted; they were just seen as a conduit for information. Any recommendations that the computer might make were easily overpowered by the user’s emotions.

Diagrams with concentric rings as distance from the person. One shows relationships to people (family, friends, school, business). The other shows types of data (financial info, appointments, stock tips, photos).
Figure 2. Trust map summaries were very useful in framing the types of data that were likely to raise users’ concerns, and the people they’d share that data with.

The interviews and trust maps provided some basic research findings:

  • Trust decisions are emotional; computers are logical
  • Users’ trust decisions are one-off rather than general
  • Users don’t—or don’t want to—consider the consequences

Next we tested some prototype user interfaces based on our findings.

Setting Privacy Preferences

Although users typically do not read privacy policies, we studied their responses when they were made to read through the text. We found that the broader and more inclusive the language in the privacy policy, the less credible users found the policy to be. Their concept of what terms such as cookies, third parties, and aggregate data mean was often wildly inaccurate, leading them to assume behaviors worse than the policy actually allowed.

As a reaction to that credibility gap, we wondered whether letting users set their preferred privacy settings up front, in plain language, would allow us to quickly compare any new software or service against those existing preferences and simply highlight the differences. That way we’d help users with their one-off decisions by presenting a short list of exceptions rather than a whole policy. We deliberately created a “Trust Advisor” label for this feature in an attempt to make it a proxy for a person rather than just “the computer.” (See Figure 3.)

Screenshot: complex dialog for privacy settings
Figure 3. Giving users access to a single place to set privacy options led them to restrict all access, all the time, without awareness of the consequences.
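As a rough illustration of the “highlight the differences” idea, here is a minimal sketch in TypeScript. It assumes a privacy policy can be reduced to structured data-sharing claims; the types and function names are hypothetical, not the Trust Advisor’s actual implementation.

```typescript
// Hypothetical data model: a policy reduced to concrete data-sharing claims.
type DataKind = "email" | "address book" | "purchase history" | "browsing history";

interface SharingClaim {
  data: DataKind;
  sharedWith: "the service itself" | "other users" | "third parties";
}

interface UserPreferences {
  // Kinds of data the user has said they are unwilling to share beyond the service.
  neverShareExternally: DataKind[];
}

// Return only the claims that conflict with the user's stated preferences, so the
// UI can show a short list of exceptions instead of the whole policy.
function findExceptions(policy: SharingClaim[], prefs: UserPreferences): SharingClaim[] {
  return policy.filter(
    (claim) =>
      claim.sharedWith !== "the service itself" &&
      prefs.neverShareExternally.includes(claim.data)
  );
}

// Example: a bookstore that shares purchase history with other users and
// address book data with third parties; the user only objects to the latter.
const exceptions = findExceptions(
  [
    { data: "purchase history", sharedWith: "other users" },
    { data: "address book", sharedWith: "third parties" },
  ],
  { neverShareExternally: ["address book"] }
);
console.log(exceptions); // -> only the address-book claim needs the user's attention
```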

It didn’t work quite as we had planned. When shown a set of privacy controls separate from the context of use, the majority of users chose the most restrictive privacy settings. They were not aware of the consequences of those settings, such as the inability to access certain sites or use certain products, and would later be confused when their computer appeared to be “broken.”

For example, users like the “similar book recommendations” feature of Amazon. Their friends’ and family members’ addresses are only a click away. They like the convenience of having the books delivered by UPS. Yet when shown a privacy clause that describes sharing aggregate purchase information with other users and address book data with third parties, they claim that they would refuse to use the service. They trust Amazon, not the concept of “sharing data.”

That means it’s hard to turn the computer into a trust agent working on the user’s behalf, because the user can’t—or can’t sensibly—instruct the computer up-front, and because the operating system can’t make emotional decisions based on who is being trusted.

We had inadvertently broken all of our own principles in an attempt to fix them: we made users consider the outcomes of their trust decisions, made them do so out of context, and presented situations where the logical option was to shut off access.

It turns out that the concept of a trust center for reviewing or rescinding privacy preferences is fine, but it should not be the users’ first port of call. Once we moved to a system of smaller in-context decisions, each of which defaulted to a recommended trustable action (a “smart default”), study participants became much more reasoned in their interactions with the software.
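A small sketch of the “smart default” pattern follows, under the assumption that each in-context decision carries a recommendation from the system; the identifiers are illustrative, not from the shipped code.

```typescript
// Each small, in-context decision carries a recommendation from the system.
type Recommendation = "allow" | "block" | "no-recommendation";

interface InContextDecision {
  question: string; // scoped to the action the user is taking right now
  recommendation: Recommendation;
}

// The recommended, trustable action becomes the preselected "smart default",
// so accepting the safe path is also the lowest-effort path.
function defaultChoice(decision: InContextDecision): "allow" | "block" {
  switch (decision.recommendation) {
    case "allow":
      return "allow";
    case "block":
    case "no-recommendation":
      // When in doubt, default to the protective option and let the user override it.
      return "block";
  }
}
```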

Some additional findings from the study:

  • Users make trust decisions based on real-life context, not abstract concepts like “aggregate data.”
  • Users make more reasoned trust decisions at the point in time where the decision is necessary than they do in advance.

New Trust Dialogs

The interface that had led to many users’ frustration was the ActiveX installation dialog (see Figure 4). This dialog box would appear, sometimes without user intervention, when users visited web pages that required additional software to run. Because the dialog was so unhelpful, most users just clicked whichever button they thought would remove the distraction and allow them to continue with their task. As a result, many users ended up installing software that reduced their online privacy or opened them up to malware attacks.

Screenshot: dialog with technical, scary language
Figure 4. The ActiveX installation dialog from pre-trustworthy computing days.

We had learned from our Trust Advisor studies that when we had to ask users to make a decision, the best place to do it was at the point when they were taking the action the trust decision was related to.

Users’ strategies for dealing with the ActiveX dialog box were fairly polarized. About 40 percent of users would consistently click the “No” button. Another 40 percent would always click the “Yes” button, stating that they’d never had issues with previous software, so this download should be okay too. The remaining 20 percent would choose either button depending on how they felt at the time, swayed in part by the information in the dialog box.

Unfortunately, the dialog box was particularly uninformative. Its talk of “signing” and “authenticity” presented users with a dilemma rather than with data they could use to make an informed decision. Both well-intentioned and shady software companies had taken to giving their software long names like “WidgetWorks please click the YES button below,” because this was the only place they could insert a message into the dialog.

The initial redesign (Figure 5) tested marginally better. The dialog’s question was clearer, button labels used verbs related to the action that users must take, and VeriSign was called out as the trust-providing entity rather than “the computer.”

Screenshot: dialog with softer visual appearance, but which has a lot of words
Figure 5. Initial prototype to replace the ActiveX installation dialog.

However, some issues remained. The option to always trust content from the provider was the opposite of what people needed. Having a default of never trust would have allowed users to block the same malware products that appeared on many sites. Also, people didn’t know who VeriSign and other certificate authorities were. Users typically assumed that if software was signed, it was somehow trustable, rather than the reality, which is that certificates merely assert that the software was produced by the people listed on the label.
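One way to express that point in code: signature checking yields facts about who published the software, never a verdict on whether to trust it. The sketch below is purely conceptual, not the real signing or certificate APIs.

```typescript
// What a valid code signature can actually tell us (conceptual; not the real
// Authenticode/WinVerifyTrust interfaces).
interface SignatureFacts {
  publisherName: string;          // the name on the certificate
  certificateChainValid: boolean; // the chain checks out cryptographically
  certificateAuthority: string;   // e.g. "VeriSign" -- a name most users don't recognize
}

// A dialog can present these facts; it cannot answer "is this trustworthy?"
// A valid signature only asserts identity, not good intent.
function describeSignature(facts: SignatureFacts): string {
  if (!facts.certificateChainValid) {
    return "The publisher of this software could not be verified.";
  }
  return `Published by ${facts.publisherName} (verified by ${facts.certificateAuthority}).`;
}
```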

The tests suggested that the best solution would be to replace VeriSign and other unknown entities with trustable companies that users could choose to subscribe to for recommendations, such as Consumer Reports, Good Housekeeping, SlashDot, or whoever else they happened to trust. Unfortunately, the infrastructure required to create this solution just wasn’t possible in the short timeframes available.

Data, Not Dilemmas

Our research confirmed that users’ responses to system security dialogs were based more on convenience than reason. Users would do whatever was necessary to dismiss the dialog and get on with their task.

It didn’t help that most security dialogs asked users to make decisions without supplying the underlying information that users need, resulting in dilemmas, not data. The dilemmas in this case were the tradeoff between users’ stated aims of staying secure and not revealing personal data versus their emotional attachment to performing a “risky” task.

After iterating the interface through several additional user test sessions, we arrived at the shipping version shown in Figure 6. Although it included some compromises, our user test data indicated that many more users understood the reason for the dialog box and could make informed decisions based on what they read.

Screenshot of dialog box with simpler question.
Figure 6. The ActiveX dialog as it shipped in Windows XP Service Pack 2.

The dialog box starts with a question that sets the scope of the interaction. It then presents the data we have about that interaction, namely what we know about the software and its developer. The help text is placed in a “snap-off” area under the main dialog so that it doesn’t interfere with the main task but is still accessible. The addition of a set of red/yellow/green trust icons gives a quick visual indication of the risk level. Even participants who skipped past the text and went straight to the buttons hesitated once they saw the labels, realizing that “Install” had bigger implications than just dismissing a yes/no dialog box.
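That structure can be summarized as a small data model. The sketch below is purely illustrative; the field names and strings are assumptions, not taken from the shipped dialog.

```typescript
// Hypothetical model of the shipped dialog's structure, for illustration only.
type RiskLevel = "green" | "yellow" | "red";

interface TrustPrompt {
  question: string;                      // sets the scope of the interaction
  facts: {                               // data, not dilemmas
    softwareName: string;
    publisher: string;
  };
  riskIcon: RiskLevel;                   // quick visual indication of risk level
  actions: ["Install", "Don't Install"]; // verbs that name the consequence
  helpText?: string;                     // lives in the "snap-off" area below
}

// Illustrative example of a prompt built from this model.
const example: TrustPrompt = {
  question: "Do you want to install this software?",
  facts: { softwareName: "Example Toolbar", publisher: "Example Corp." },
  riskIcon: "yellow",
  actions: ["Install", "Don't Install"],
  helpText: "Only install software from publishers you trust.",
};
```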

The iterations on the trust interfaces allowed us to identify some additional findings:

  • Users don’t want to make trust decisions; they just want to “be secure”
  • Users don’t want to reveal personal data without clear benefits
  • Trust questions should present data, not dilemmas

Implications

As technologists we often push responsibility for trust and privacy issues onto end users without giving them a suitable environment or the tools to make smart decisions.

We can either tell people not to share their email address, or we can create better spam filters. We can warn people that a downloaded app might be dangerous, or we can sandbox it to stop it from doing bad things. At least part of the responsibility for smart trust solutions lies with software developers.

Privacy and security settings are a bit like sausages. People like them, but they don’t want to know what goes into making them. Similarly, people love the features that software provides, but if they are asked up front for all the permissions the software needs just to do its job, they panic and shut down access, often without considering the longer-term implications. By asking just-in-time questions, we can keep trust decisions within the context of the tasks that people perform.
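Modern browser permission prompts follow the same just-in-time pattern. The sketch below uses the standard Geolocation API to request location access only at the moment the user asks for a location-based feature; the button id and helper functions are hypothetical.

```typescript
// Ask for location only when the user acts, so the permission prompt arrives
// in the context of the task ("find stores near me"), not as an up-front grab
// for every permission the app might ever need.
document.getElementById("find-nearby")?.addEventListener("click", () => {
  navigator.geolocation.getCurrentPosition(
    (position) => {
      // The benefit is immediate and visible: results near the user's location.
      showNearbyStores(position.coords.latitude, position.coords.longitude);
    },
    () => {
      // Respect the user's decision: fall back to a manual search instead of nagging.
      showManualLocationSearch();
    }
  );
});

// Hypothetical UI helpers, declared so the sketch stands alone.
declare function showNearbyStores(latitude: number, longitude: number): void;
declare function showManualLocationSearch(): void;
```

Because the request is tied to a visible benefit, a refusal can degrade gracefully rather than leave the user wondering why the feature is “broken.”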

Computers are good at two things: remembering and doing sums. They are very bad at understanding emotions. Yet many of the trust decisions that people make have a large emotional component. Despite the computer’s calculation that an item is untrustworthy, it may have great emotional significance to a user.

This leads to the SECRET acronym:

  • Scoped: Present users with just the data they need to make decisions, not with unmanageable dilemmas.
  • Equitable: Demonstrate the benefits that users will get in return for sharing their information.
  • Contextual: Let users make trust decisions in context. Make exchanging information an explicit part of using software, rather than hiding it in a privacy statement.
  • Responsible: Stop making users take responsibility. Recommend and default to trusted options; use technology to prevent trust issues.
  • Emotional: Users consider emotional factors that the computer can’t understand. Always respect their decision.
  • Timely: Present trust decisions at the time they need to be made, rather than bundling them up in advance.

Following these design principles is a good first step in creating trust interfaces that users will understand. The more they understand the decisions they are making, the more they will trust the company that is asking the questions.

Chris walks readers through the design-and-test iterations he carried out on XP Service Pack 2, a core element of Microsoft’s Trustworthy Computing Initiative. The lessons learned in each iteration add up to a “SECRET”: a scoped, equitable, contextual, responsible, emotional, and timely user experience for trust and privacy decisions.

