Building an Ethics Framework for UX Design, Part 2

This is the second of a two-part article. The first part explored existential values and ethical issues arising from ill or misdirected intent. In this article, we will identify and examine ethical issues (with special reference to the healthcare industry) where the intent, though benevolent, results in latent ethical problems.

Benevolent Intent

As designers, our hearts are often in the right place. We have good intentions, resist dark patterns, and place our users first. But often, our designs bear latent ethical issues. These issues arise either because of existing design processes in organizations or because we have difficulty foreseeing how the product will be consumed and used in real-world scenarios.

The following ethical issues are often not evident during the design or development phase. Sometimes they remain invisible to designers well after release, which underscores the importance of developing and maintaining an awareness of both their existence and their potential consequences.

Decision Aids and Making Choices for Our Users

We build menus and radio-button selections, and we often automate choices for our users in our designs. Hick’s Law (decision time grows with the number of choices) would have us minimize long lists of options. We often use onboarding or wizard patterns to solve such dilemmas, and we use automation when we choose defaults for our users. There may also be instances in which we want to slow users down when making decisions, as this aids in preventing errors. A number of ethical issues surround these decision aids.

As always, the user’s best interest must be a top priority. Having a checkbox checked by default—which signs users up for a newsletter—places business needs before user desires. When we use onboarding patterns that ask limited questions upfront and then make a series of choices for the user, are we making the right choices? Are we transparent about those choices?
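
To make the point concrete, here is a minimal sketch (hypothetical names, not any particular product) of a preferences model in which every business-serving option defaults to off and records whether the user, rather than the default, set its value:

```typescript
// Hypothetical sketch: consent options default to off, and each value
// records how it was set, so defaults never silently favor the business.

type ConsentSource = "default" | "user";

interface ConsentOption {
  value: boolean;
  source: ConsentSource; // lets us audit which "choices" the user actually made
}

interface Preferences {
  newsletter: ConsentOption;
  marketingEmails: ConsentOption;
}

// Every business-serving option starts unchecked.
function defaultPreferences(): Preferences {
  return {
    newsletter: { value: false, source: "default" },
    marketingEmails: { value: false, source: "default" },
  };
}

// Only an explicit user action can turn an option on.
function setConsent(prefs: Preferences, key: keyof Preferences, value: boolean): Preferences {
  const next = { ...prefs };
  next[key] = { value, source: "user" };
  return next;
}
```

Recording the source of each value makes it possible to be transparent about which choices were genuinely the user’s and which were ours.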

Curating content is another place where we make decisions for our users. We often tailor content to help our users more easily locate an item, but this comes at a cost. As the philosopher Jacques Derrida noted, “Inclusion can only be achieved through exclusion.” How do we know Yelp is giving us all of the best restaurants around when many may not be registered on the site? How do we know we are receiving the best information on a topic in a Google Scholar search when Google crawls only a certain percentage of the web? How do we know automating a certain process for a user is truly what they want? At what point does automation bias become an ethical issue or cause human harm?

Decision support is where we attempt either to limit choices for a user or to make recommendations. In healthcare, we use clinical decision support, as shown in Figure 1, to recommend treatments. In such scenarios, we must ensure the decisions are presented clearly, based on the best available evidence, and made in the patient’s best interest. In all of these scenarios, we must be concerned with automation bias and overreliance on technology. This issue is closely tied to the topic of persuasive design and changing user behavior.

Figure 1. Harnessing clinical decision support helps in recommending treatments. (Credit: GN ReSound)
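
As a rough illustration of the principle, not a real clinical system, the sketch below models a recommendation that carries its evidence grade and rationale, and that cannot take effect without a named clinician’s explicit review:

```typescript
// Hypothetical sketch: a recommendation carries its evidence grade and
// rationale, and is never applied without an explicit clinician review.

type EvidenceGrade = "A" | "B" | "C"; // assumed grading scheme, for illustration

interface Recommendation {
  treatment: string;
  evidenceGrade: EvidenceGrade;
  rationale: string; // shown to the clinician, not buried in a log
}

interface ReviewedDecision {
  recommendation: Recommendation;
  accepted: boolean;
  reviewedBy: string; // the named clinician who made the final call
}

// The system proposes; the clinician disposes. Requiring an explicit,
// attributable review step is one mitigation for automation bias.
function reviewRecommendation(
  recommendation: Recommendation,
  clinicianId: string,
  accepted: boolean
): ReviewedDecision {
  return { recommendation, accepted, reviewedBy: clinicianId };
}
```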

Behavior Modification and Persuasive Design

Interfaces and systems can change behavior. Healthcare, for example, uses a plethora of technologies to manage a patient’s care and medical history. But these systems also change the way professionals think and work. Simply implementing health information technologies in a healthcare setting changes how physicians and nurses make decisions and treat patients. This results in unintended consequences—one of which is less social interaction with patients.

Persuasive design, in which we use methods to persuade users toward a certain behavior, also presents ethical issues. Behavioral economics has become a trending topic in recent years, as we “nudge” users in directions we believe are in their best interest. Building on the points above, is it truly ethical to opt users into a company-sponsored 401(k) plan or organ donation by default? Is it ethical to let computers use algorithms to make suggestions when we know users lend great credibility to computers?
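
One alternative to defaulting users in or out is the “active choice” pattern, sketched below with assumed names: the enrollment state starts as undecided, so the interface cannot render a preselected answer, and the form cannot be submitted until the user decides:

```typescript
// Hypothetical sketch of an "active choice" pattern: a three-state value
// means no answer is preselected; the user must decide either way.

type Enrollment = "undecided" | "optedIn" | "optedOut";

interface EnrollmentForm {
  planName: string;
  choice: Enrollment;
}

function newForm(planName: string): EnrollmentForm {
  return { planName, choice: "undecided" }; // no nudge baked into the default
}

// Submission is valid only once an explicit choice exists.
function canSubmit(form: EnrollmentForm): boolean {
  return form.choice !== "undecided";
}
```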

We often use systems and interfaces to regulate users or force them into a certain set of choices. The rigidity of these techniques often results in workarounds where a user finds a way outside of the system to complete a task or action. In healthcare, this potentially exposes patients to harm.

When is it okay to persuade a user to change their behavior? How do we know our intentions are aligned with our users’ intentions?

Addictive Designs

One way we change user behavior is to design products that “hook” our users into repeated use. Nir Eyal’s book Hooked outlines how companies build products that are habit-forming. Of course, we all want to design products customers will use more than once. But, is it ethical to actively seek out patterns that will exploit the addictive aspect of our psychology? Tristan Harris, a former Google design ethicist, outlines a number of methods employed to induce repeated or increased use, such as intermittent variable rewards (refreshing apps for updates), social approval (manipulating interactions on social networks), and autoplay features in apps such as Netflix.

Distractions

The technologies and interfaces we design can be distracting. For example, according to the Centers for Disease Control and Prevention, “each day in the United States, approximately 9 people are killed and more than 1,000 injured in crashes that are reported to involve a distracted driver.” Recent design enhancements have been implemented to address this. Apple’s Do Not Disturb While Driving feature, for example, enables the phone to sense when you might be driving and silences incoming notifications. Many apps have begun to employ their own “car mode” features. Audible allows users to enable car mode, which provides an interface with larger buttons and minimizes screen clutter.
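
The underlying pattern is simple: when a driving context is detected, trade information density for large, glanceable controls. A minimal sketch, with hypothetical values, leaving driving detection to the platform:

```typescript
// Hypothetical sketch: swap to a sparser, larger layout when driving.

interface LayoutConfig {
  buttonScale: number;        // relative control size
  maxOnScreenActions: number; // cap on visual clutter
}

const standardLayout: LayoutConfig = { buttonScale: 1.0, maxOnScreenActions: 12 };
const carModeLayout: LayoutConfig = { buttonScale: 2.5, maxOnScreenActions: 4 };

// How driving is detected (an OS signal, speed, a user toggle) is
// platform-specific, so it arrives here as a plain input.
function selectLayout(isDriving: boolean): LayoutConfig {
  return isDriving ? carModeLayout : standardLayout;
}
```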

Distraction can occur in other scenarios as well. Healthcare UX design must consider the timing of alerts, as well as provide interfaces that accommodate the distractions one may encounter in a healthcare environment. That is, not only must we consider the timing of distracting elements within our interface designs, but we must also create affordances for when a clinician is distracted, making it simpler for them to pick up where they left off. Addressing distractions requires us to truly consider the contexts in which humans will use our designs. We cannot consider only the normal use cases.
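
One such affordance is autosaving a clinician’s place in a multi-step task so that an interruption never means starting over. A minimal sketch, using an in-memory map as a stand-in for durable storage:

```typescript
// Hypothetical sketch: persist task progress so an interrupted clinician
// can resume at the step where they left off, draft intact.

interface TaskState {
  taskId: string;
  step: number;                  // the step the user had reached
  draft: Record<string, string>; // field values entered so far
  savedAt: number;               // timestamp of the last autosave
}

const progressStore = new Map<string, TaskState>(); // stand-in for durable storage

function saveProgress(taskId: string, step: number, draft: Record<string, string>): void {
  progressStore.set(taskId, { taskId, step, draft, savedAt: Date.now() });
}

// On return, the interface reopens at the saved step rather than the start.
function resume(taskId: string): TaskState | undefined {
  return progressStore.get(taskId);
}
```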

Efficiency Costs

Technology is a tool, no different from a shovel or a steam shovel. It inevitably makes our lives easier. However, this comes at a cost. In “Emerging Technologies Are Creating New Ethical Challenges for UX Designers,” Bill Gribbons noted that we often devalue or “de-skill” human work, leading to unintended consequences. Automation can devalue work and often leads to unrewarding work for employees, while de-skilling often leads to error through automation bias. Additionally, it is inevitable that many of us will work on technologies that eventually replace or eliminate jobs from a given workforce. There may be little we can do as designers about these issues, which are closely related to the existential issues covered in part one of this article. But they are worth noting and considering as legitimate ethical dilemmas.

Real Humans, Real Lives

It is easy for us as designers to lose touch with our users, who have real lives and emotions. At IA Summit 2018, Alberta Soranzo spoke of the ethical issues and emotions surrounding death in her presentation “Our Eternal (Digital) Afterlife.” Soranzo detailed events in which social media platforms send birthday or “on this day” reminders to relatives of deceased loved ones. This is painful for users and equates to tearing off a healing emotional scab. These features are often automated without consideration for such instances.

In their book Design for Real Life, Eric Meyer and Sara Wachter-Boettcher describe real-world scenarios where designs fail to consider the user in crisis. We often design for the ideal scenario and the ideal user. Meyer’s talk at An Event Apart details his own experience attempting to find information about his daughter, who had been airlifted by helicopter to a hospital more than an hour away while he followed by car. The hospital website had been designed primarily from a marketing perspective, not with a situation such as Meyer’s in mind.

Designs can evoke emotions, and they can fail to consider the range of emotions we experience as humans. The ethical issues of designing for real humans become all too evident when we fail to consider scenarios such as those described above.

Privacy Issues

It would seem little needs to be written of privacy issues. Shakespeare famously wrote, “All the world’s a stage, And all the men and women merely players.” We have social lives, professional lives, and parts of our lives related to our health. And we do not necessarily always want those lives, and the roles we assume in them, to mesh. Ethical issues of privacy can affect all areas of our lives when platforms and interfaces share what we would normally compartmentalize. Facebook has found this to be a tremendous challenge, as have other technology companies. Considering product design and its impact on privacy is a must for UX designers today.

It is important for us to safeguard users’ information and protect their privacy. However, in many instances, our attempt to solve one problem creates another. Consider password security and authentication. The constraints surrounding password creation often lead to problems with password memorization and retrieval. I used to work in a hospital where clinicians had to memorize no fewer than eight passwords for the various systems they logged into on a routine basis. Exacerbating the problem, the passwords had to be changed every 30 days. Do we not have a duty to design solutions to the problems we create through our designs?

Designing for Race, Gender and Physical Limitations

We have an obligation not to exclude through design and to make an attempt to remain as unbiased as possible. This is difficult because it means considering all of our users, including those of a different race, gender, or socio-economic status, and those who have physical limitations or fall beyond what is considered “average.” Is your design biased toward men? Does it require high-speed Internet because you run autoplay advertisements or videos, excluding those without such access? Does it ignore racial differences or fail to consider internationalization and localization issues?
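
On the autoplay point, browsers can expose a user’s data-saving preference, and a design can honor it before loading heavy media. The sketch below is illustrative only; note that navigator.connection is a non-standard API available mainly in Chromium browsers, so its absence must be treated as unknown, and an explicit user setting should always win:

```typescript
// Sketch: decline to autoplay when the user or browser signals a
// data-saving preference. navigator.connection is non-standard
// (Chromium only), so it may simply be undefined.

function shouldAutoplay(userDisabledAutoplay: boolean): boolean {
  if (userDisabledAutoplay) {
    return false; // an explicit user setting always wins
  }

  // The Save-Data hint, where the browser exposes it.
  const connection = (navigator as unknown as {
    connection?: { saveData?: boolean };
  }).connection;

  if (connection?.saveData) {
    return false;
  }

  return true;
}
```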

In relation to accessibility, we have an absolute obligation to ensure those with physical limitations can use our site. One of my former co-workers used to complain about the term “accessible” because she believed considering usability should automatically cover accessibility, and a product designed without accessibility in mind was not usable or user-centric at all. We must also consider users who fall beyond the average. The databases we use to determine “average” are antiquated and based on data that is decades old.

Finally, we have an obligation to ensure we do not harm our users physically. Designing for ergonomics is important. In the past decade, I have been on a number of projects where mobile or handheld devices were proposed to replace existing desktops or laptops with little or no consideration of the ergonomic impact on users.

Misuse of Our Designs

At what level are we charged with ensuring our designs are not misused? The “Fake News” scandals surrounding Facebook in recent years are a prime example of technology being used for devious purposes. As designers, we may not always be directly responsible for misuse. But we often are. When we design products with addictive properties, for example, do we truly have the user’s best interest in mind or do we have the business interest in mind? It is often difficult to address misuse scenarios during the design process. However, teams and organizations must be quick to adjust when such scenarios are identified.

There Is No U in Your UX

We all know UX stands for user experience. But, how many of us have worked in organizations where there was little or no user research conducted? How can we have a UX team responsible for building the right user experience when there is little or no research conducted? This is a major ethical issue. Organizations that implement UX often do not understand what is involved in user experience design. I believe they have the correct intentions and must start somewhere; however, I have worked in a number of organizations where user research was marginalized, or budgets and timelines were constrained to the point where user research was an impossibility.

If there is no U (or user) in our UX, we run the risk of creating experiences where ethical issues are either latent or lie dormant, waiting to result in an error or harm to the user. We cannot research every aspect of a design. But, not conducting any research at all is no longer a viable option in our profession where ethics are concerned.

Future Directions

In my own career, it has been common for ethical discussions to occur on projects—especially since I work in healthcare. However, I have never worked on a team where a design-ethicist position existed. Where are these people? There are a few I have encountered in researching this article; Tristan Harris and Mike Monteiro are two. I fervently hope to see more evangelists such as these in the future. Having more proponents in the field will certainly be a first step as our conversations on this topic continue.

We should also begin developing ethical frameworks or ethical codes to guide us in design and in evaluating designs. These frameworks should enable us to design with ethics in mind and conduct an appropriate pre- and post-evaluation.

We also need to view ethical issues from different perspectives. Viewing our designs from different vantage points allows us to consider the ethical impact of our design decisions, including decisions whose ethical implications we do not even realize. One group currently starting this work is Artefact, with their project The Tarot Cards of Tech. These cards enable us to consider our designs from varying perspectives.

Ethics in UX design is a conversation we will continue to have, and one that has only recently begun to receive deeper consideration. This article attempts to identify the different issues we face and their origins. Simply becoming aware that problems exist, and understanding what those problems are, is often the first step toward developing a solution.