Defining user research in the age of LLMs
What is AI's role in user research?
Smart Design’s recent series on defining user research in the age of LLMs explored how AI is reshaping the practice of user research. Across two panel events, leaders from Pfizer, Ford, Google, Hyatt, and JPMorgan Chase examined the opportunities, risks, and responsibilities that come with integrating AI into research practices. While each organization approaches AI from a different vantage point, they share a common belief: the future of user research will be faster and more accessible, but keeping it true to the lived human experience will demand real intentionality.
01 Widening research access requires a culture of skepticism to prevent “work slop”
AI’s ability to expand research access is one of its most exciting promises, allowing more people across organizations to run analyses, generate insights, or test ideas. However, this accessibility can also perpetuate confirmation bias, where people accept polished research outputs that tell them what they want to hear. To prevent “work slop” — the shallow, unvetted outputs that push inaccuracies downstream — research teams must train people to interrogate data rather than accept the first answer.
The panelists shared a range of tactics to guard against bias and work slop, including: documenting failures and best practices, sharing prompt libraries, double-checking outputs against the underlying data, looping in an expert, always knowing where the data came from, and even A/B testing results against traditional user research methods.
02 Leave the surface level to AI so humans can go deeper
Beyond accelerating individual studies, the panel highlighted how AI can elevate user research from discrete projects to a dynamic repository that drives longer-term strategy. Instead of insights languishing in static reports, AI allows teams to revisit and re-query years of past research to surface new patterns and insights.
Expanding who in the organization does “basic” research and freeing up time with automations allows researchers to spend more time upstream. AI has the potential to free humans to focus more on the unknown questions, product vision, and complex edge cases that automated systems cannot grasp. Creating more time for truly strategic work could be an unlock for organizations.
03 Implementation gaps remain for digital twins and synthetic users
Digital twins and synthetic users were buzzwords during both panel discussions. The panelists see varying potential but agree that these models are not yet full substitutes for human participants. Panelists noted that they can hallucinate entirely incorrect identities or fail to capture the unique and unexpected viewpoints that are often the gems of user research. Joe Dietzel pointed out that because synthetic users typically represent the middle of the bell curve, they lack the unpredictability and cultural nuance found in real human interactions. Until these AI tools can account for human behavior that falls outside of predictable patterns, they remain experimental test-and-learn features rather than a panacea for user research.
04 Researchers must redefine their value in an AI-driven landscape
The rise of LLMs does not signify the end of the researcher. Rather, it is shifting the researcher’s role from artifact producer to strategic navigator. A researcher’s penchant for asking the right questions and keeping humans at the center positions them well to help train models and shape protocols around them. Researchers possess the essential skills for an AI future: strategic storytelling, connecting findings to business objectives, and acting as the ethical filter for AI outputs.
Human-centered rigor is still the north star
Despite the technology’s novelty, the panelists closed on a timeless truth: the fundamentals of great research don’t change. Understanding people, their motivations, contexts, and challenges remains at the heart of the work. AI can expand what is possible, but only humans can define what is meaningful.