Ethical interactions: How AI is influencing user experience design

Jasper Dekker, Associate Design Director
Sophia Xu, Interaction Designer

About the salon

Driven by artificial intelligence (AI), many digital products and services have evolved from fun distractions to ubiquitous and powerful platforms that have a significant influence on our lives and our society. More and more, we freely share personal and behavioral data with companies that personalize the user experience. But many people don’t really know how AI works or whether their data is protected. As designers, what are our responsibilities when it comes to creating ethical products and services? How can we ensure that our digital designs benefit not only brands but also users—and the world we live in?

Virtual Smart Salon
Join the conversation on today's most important design-related issues

Our panelists

Rick Barraza is the principal design lead, Ethics & Society, at Microsoft, an interdisciplinary group focusing on sustainable innovation and exponential technologies that optimize human dignity and help mitigate personal and societal harm.

Jeff Huang is an associate professor in Computer Science at Brown University, where his research looks at human-computer interaction and building personalized systems based on user behavior data.

Sharon Lo is the product management lead, Ethics & Society, for AI at Microsoft, helping to guide technical and experience innovation toward ethical, responsible, and sustainable outcomes.

Introduction

Our discussion was co-led by Smart Design's associate design director Jasper Dekker and interaction designer Sophia Xu, and informed by Smart's recent work with Facebook parent Meta Platforms developing tools that help parents and teens engage with social media responsibly. We asked our panelists—who are deeply immersed in all aspects of ethical design—about the power and potential of AI, how digital addiction can be an unintended outcome, and the best way to educate both consumers and the next generation of ethically minded designers.

Balance tech power and privacy concerns

Brown University’s Jeff Huang kicked off the discussion with the observation that companies today have not only considerable power to track data from their products and services but also a marketplace in which to sell that data. “The tools to collect data are becoming much more sophisticated,” he explained, as collection is now done by AI rather than humans, and statistical models compile the data and tailor things to people’s needs. Yet, for most users, “there’s not much transparency about how the data is collected, the extent of data taken, and what happens to it.” Huang also acknowledged that there’s an “opportunity to shift the balance of power toward users again” by giving them the option to choose what is shared and helping them “figure out whether the benefits outweigh the costs.”
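
Huang’s idea of letting users choose what is shared can be made concrete with a small sketch. The Python below models consent-gated collection: events are recorded only for data categories a user has explicitly opted into. Every name, category, and field here is a hypothetical illustration, not any company’s actual API.

```python
# A minimal, hypothetical sketch of consent-gated data collection.
# Nothing here is a real product API; categories and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class ConsentPreferences:
    """The data categories this user has explicitly opted into."""
    allowed: set = field(default_factory=set)  # e.g. {"usage", "location"}

    def permits(self, category: str) -> bool:
        return category in self.allowed

def collect_event(prefs: ConsentPreferences, category: str, payload: dict):
    """Record an event only when the user has opted into its category;
    otherwise drop it rather than collecting by default."""
    if not prefs.permits(category):
        return None
    return {"category": category, **payload}

# A user who shares usage data but not location data:
prefs = ConsentPreferences(allowed={"usage"})
assert collect_event(prefs, "location", {"lat": 40.7}) is None
assert collect_event(prefs, "usage", {"screen": "home"}) is not None
```

The design choice worth noting is the default: when no preference exists, the event is dropped, flipping the usual opt-out pattern into opt-in.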

Sharon Lo of Microsoft agreed that power, privacy, and data collection are critical topics today, as “privacy is your power—your power to not be attributed to spaces you don’t want to be in.” As such, Lo reasoned that when companies sell your personal data, “they’re selling the power to influence you,” echoing Huang’s comments about a lack of transparency around user consent and data collection.

Lo’s colleague at Microsoft, Rick Barraza, contended that AI has created “a power asymmetry [between] the people who have it and the people who don’t understand what they’re giving up in the long term.” He compared the situation to when “conquerors and colonizers” negotiated with original inhabitants who didn’t know at the time the cost of what they were giving up—a situation that can lead to what he calls a “non-recoverable failure”: an irreversible outcome, like nuclear weapons. He pointed out that, 20 years ago, you could make a product, ship it, and sort it out later if something went wrong. But such is the power of AI that if “you fail, you don’t get a second chance to fine-tune it,” and thus “this puts us into a different conversation about ethics than there would be with general technology.”

Rethink performance measures

Diving deeper into this topic, Barraza talked about how AI has changed the dynamics of the design industry, along with how we think about measuring key performance indicators (KPIs). With algorithms and massive data collection, “we’ve moved from the Mad Men days of reflective design and ego-design branding to be able to compute your subconscious intent and desire,” he explained. And if you know someone’s predilections, “it’s easier to exploit [them]—a power that marketing didn’t have before.” Where do we cross the line from marketing and persuasion into manipulation? Referring again to the notion of an AI-related power asymmetry, Barraza noted, “Most people wouldn’t play the game if they knew they were playing with a card counter. It’s not cheating, [but AI] can problem-solve at a significantly higher order than you can. And you’re going to lose the game that you think is fair because [AI] can read all the tells that you don’t even know you have.”

Huang took the card counter analogy further to make a point about digital addiction. While designers aim to create good, useful products that benefit society, an addictive product can be an unintended outcome (like movie plots in which robots meant to protect humans end up trapping them in a room and never letting them out). Huang attributed this to using only a narrow set of measures—such as the amount of time a user spends on a page, or the number of ads clicked—and then optimizing them to heighten user interest. One solution he suggested is to improve measuring capabilities and use multiple measures for things that are hard to calibrate, such as “user satisfaction and what’s good for the person.” For example, how is their mental health? And their friendships after five years on Facebook? “These measures are important and can balance out the quantitative metrics that are automatically trackable.”
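
To see how Huang’s multi-measure idea might look in practice, here is a purely illustrative sketch: a composite score that caps the benefit of raw time-on-app and weights self-reported satisfaction and well-being more heavily than automatically tracked engagement. The signal names, weights, and cap are assumptions for illustration, not anything the panelists prescribed.

```python
# Hypothetical sketch of blending automatically tracked engagement with
# periodic self-reported measures, so optimizing the composite score cannot
# be satisfied by maximizing time-on-app alone. All values are illustrative.
from dataclasses import dataclass

@dataclass
class UserSignals:
    minutes_on_app: float      # automatically trackable
    ads_clicked: int           # tracked, but deliberately not rewarded below
    satisfaction: float        # periodic survey, 0 to 1
    reported_wellbeing: float  # periodic survey, 0 to 1

def composite_score(s: UserSignals) -> float:
    """Engagement counts, but survey-based measures can pull the score down."""
    engagement = min(s.minutes_on_app / 60.0, 1.0)  # cap: more time stops helping
    qualitative = 0.5 * s.satisfaction + 0.5 * s.reported_wellbeing
    # Weight the human-reported measures more heavily than raw engagement.
    return 0.4 * engagement + 0.6 * qualitative

heavy_user = UserSignals(minutes_on_app=300, ads_clicked=40,
                         satisfaction=0.3, reported_wellbeing=0.2)
light_user = UserSignals(minutes_on_app=30, ads_clicked=2,
                         satisfaction=0.9, reported_wellbeing=0.8)
assert composite_score(light_user) > composite_score(heavy_user)
```

Because the engagement term is capped, a product team optimizing this score cannot win by simply keeping people on the app longer; the survey-based terms have to improve too.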

For her part, Lo recommended that when hyper-optimizing KPIs and measures such as engagement, designers “think about what this really looks like, and ask: do you want people on your app 24 hours a day? Is this the future you want to create?” She urged designers to consider the endpoint of such metrics, and acknowledged that while not everything is measurable, “your job as human-centered designers is to serve human needs and values, so think about additional metrics.” Today, our work as designers is expanding from finding and addressing these needs and values to also safeguarding them as products integrate new technologies.

Embrace ethical values—and exercise “moral imagination”

Our panelists concurred that it’s crucial for designers to establish an ethical framework. In his department at Brown, for example, Huang said the first impulse is to make sure students understand one of the basic tenets of human-centered design: put the user first. But while everyone agrees with this as an idea, “it doesn’t always work in a lot of cases.” Instead, Huang asks students to write down their personal values and ethics and then test them against different scenarios, questioning whether the principles are consistently applied. Another approach is to think retroactively about past technologies: From our current perspective, knowing what we know now, would we have gone ahead with a new technology such as the car? Would we have done it all over again, and maybe put in seatbelts earlier?

For Barraza, the fastest way to misuse ethics is to tell others in advance what those ethics are. “When you define ethics for everybody and they’re going to have to do it [that way], this is how the good guys turn into the bad guys—it’s the origin story of every bad guy,” he remarked. An alternative strategy is for creative people, who he says possess a “superpower of narrative,” to craft a narrative about a “preferable” future they want to live in. Designers have a responsibility to continually stretch their moral imagination—that is, to consider all the ethical and socio-technological implications of what they make. “You need to ask yourself: Would this thing I am making and that I love become a horrific Black Mirror episode? What would Black Mirror do with this product?” He concluded that such questioning goes hand in hand with moral imagination, “to get you out of the usual connections your brain is going to make.”

Lo commented that a common response to hearing the word “ethics” is to assume that a company’s ethics advisory board will tell you what’s right or wrong and what the ethical outcome ought to be. But because “ethics is so contextual”—as seen in the famous Princeton Seminary Experiment, which examined whether seminary students would stop to help a distressed stranger—she recommended thinking more broadly about ethical innovation, being responsible for “how we innovate,” and acting more intentionally and purposefully when developing products and experiences. This includes considering the impact both on direct users (customers) and on those indirectly affected. For instance, when the Microsoft team was working on a synthetic voice project, it took into account the potential impact on voice actors and how such recordings might be used in court proceedings. “There are a lot more nuanced implications and different perspectives you have to be able to imagine—and that’s part of moral imagination.”

Inform users, educate designers

Besides encouraging his students to think about personal values and ethics, Huang made a strong case for educating people about what AI actually is and what it can—and cannot—do. “I don’t believe that AI is some sort of magical thing, [as it] is talked about in the media,” he asserted. “At the end of the day, AI can be optimized toward single goals like impersonating an artist or playing chess, because that can be programmed.” As such, he advocated informing people about AI’s limitations and explaining why it’s easier for AI to learn to play checkers, for example, than to operate a self-driving car, an area Smart has explored to understand how complex systems and data-powered algorithms can be delivered across devices and platforms, and how they affect a person’s experience.

For design students, Barraza argued that it’s important to teach pattern matching and the ability to apply different lenses as they grapple with complex technologies and the ethics of designing for AI. “[They] should instantly be able to spot the questions that nobody’s asking, and as [designers], answer why, or at least provoke people to answer why,” he said. Noting that design courses tend to teach guidelines about what you “should do” from the start, Barraza insisted that these will likely be obsolete by the time students graduate, leaving them to waste “four years learning tricks for something that’s old news.”

Conclusion

As technology often moves at a pace far beyond our capacity to fully comprehend it—consider DALL-E 2, the new AI system that can create realistic images and art from a natural-language description—our panelists underscored the need for designers to better understand their responsibilities. They urged designers to be more critical and conscious of their actions, introspective about what kind of future we all want, and aware of the ethical principles needed to support this goal. “Because design is about how things come to be,” Lo declared, “the objects, the services, the experiences you’re building are shaping the future and affect people and societies.”

Smart starters

Establish ethical principles
Formulate a clear set of ethical values from the get-go and integrate these principles into the design process, all the while challenging stakeholders to ask questions about the ultimate intent and aspirations behind a project to clarify goals and possible outcomes.

Explore many possible futures
When creating new products, services, and experiences, imagine not only probable futures but also a plurality of futures as well as preferred futures, and be aware of the potential consequences and implications for both people and our society.

Identify new performance metrics
Go beyond commonly used quantitative KPIs: create anti-KPIs for engagement, and consider ethics-based or human-dignity-based measures to broaden our understanding of how a product is being used and how satisfied its users are.

Engage and inform consumers 
As data collection tools become more sophisticated, consider introducing easily accessible and understandable informed-consent agreements to ensure that users know their choices, feel safe about data privacy, and are aware of the benefits they can expect from a product.

Find ways to build trust
To build a better relationship and loyalty with consumers, question the brand’s marketing messages as well as the narrative behind the brand to understand how it’s being sold, the value it brings to consumers, and whether that story is truthful and aspirational.

About Jasper Dekker

Jasper is an associate design director who strives to make technology meaningful, usable, and delightful for all people. Alongside his UX/UI skills, he brings expertise in embedded UI and design systems and has worked across automotive and mobility, consumer tech, media, healthcare, and gaming. His notable clients include HP, Ford, Amplifon, Google, Gatorade, Merck, and Samsung, and his work has been recognized by IDSA and Fast Company. Outside Smart, Jasper is an advocate for urban cycling and dabbles in building hi-fi gear.

About Sophia Xu

Sophia is an interaction designer who believes design should positively advance the human experience. As a multidisciplinary designer, she brings a fresh perspective with expertise in UX/UI, product design, and interaction design. She holds a BFA in Industrial Design from the Rhode Island School of Design (RISD), where she received several awards and exhibited her projects.

Let’s design a smarter world together