Product Discovery

Nov 9, 2023

When AI Gets It Wrong - Responsible AI in UX Research

Author

Ronny Roeller

Artificial intelligence (AI) has reshaped our professional landscape, streamlining our work in unprecedented ways. Yet there's a caveat: AI sounds convincing even when it's off the mark. So, how do we ensure responsible AI use in UX research? Let's delve in.

The Deceptive Conviction of AI Responses

Consider a user in a usability test remarking, "The navigation menu is so intuitive!" An AI might tag "intuitive" as positive feedback. But what if the remark was delivered with dripping sarcasm? Such nuances can easily elude AI, skewing our insights.
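To make the failure mode concrete, here is a minimal sketch of a deliberately naive, keyword-based sentiment tagger (a hypothetical stand-in, not any real product's model). Because it sees only the words, the sincere and the sarcastic reading of the quote receive the exact same label:

```python
# A deliberately naive, hypothetical sentiment tagger: it scores feedback
# purely on keywords and has no notion of tone, delivery, or sarcasm.
POSITIVE_WORDS = {"intuitive", "easy", "love", "great"}
NEGATIVE_WORDS = {"confusing", "slow", "hate", "frustrating"}

def tag_sentiment(quote: str) -> str:
    words = {w.strip(".,!?\"'").lower() for w in quote.split()}
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Said earnestly or sarcastically, the text is identical,
# so the tagger happily labels both as "positive".
print(tag_sentiment("The navigation menu is so intuitive!"))  # -> positive
```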

Blindly trusting AI is akin to dispatching an AI-crafted email to your boss unchecked. Tempting? Sure. Wise? Not quite. Researchers must treat AI outputs as drafts, necessitating human review.

Beyond acknowledging AI’s limitations, we must actively implement safeguards. Below are a couple of concrete best practices.

Back up each AI response with raw data

Every AI-generated insight should be substantiated by direct evidence from raw data. This means that, alongside an AI's interpretation, there should be an accessible snippet, be it text or video, that supports the assertion. This "proof" lets researchers verify, at a glance, the veracity of the AI's conclusions and rectify them if needed.

Think of this as the footnotes in a research paper or the sources in a journalist’s article; they don’t just lend credibility—they allow for verification and deeper understanding. This process of backing up AI interpretations goes beyond mere validation. It instills confidence in the researchers who rely on these tools.

By linking every insight to its raw data origin—be it a line of text, a moment in a video, or a user's click path—we not only ensure the accuracy of our AI's conclusions but also enable a deeper dive into the context surrounding those conclusions. This dual-faceted approach ensures that the AI serves not just as an analyst, but also as a curator, guiding researchers back to the moments that matter.
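As a rough sketch of what such evidence-linked insights could look like in code, assuming a simple in-house schema (the class and field names below are illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A pointer back to the raw data that supports an insight."""
    source_id: str          # e.g. an interview or session identifier
    quote: str              # verbatim transcript snippet
    video_start_s: float    # where the moment begins in the recording
    video_end_s: float      # where it ends

@dataclass
class Insight:
    """An AI-generated interpretation, never stored without its proof."""
    summary: str
    evidence: list[Evidence] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # An insight with no linked raw data stays a draft until a human checks it.
        return len(self.evidence) > 0

insight = Insight(
    summary="Participants find the navigation menu easy to use.",
    evidence=[Evidence("interview-07", "The navigation menu is so intuitive!", 312.4, 318.9)],
)
print(insight.is_verifiable())  # -> True
```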

Beyond Text Transcripts

While text provides a narrative, it often misses the undertones. This is precisely why user interviews go beyond written feedback—it’s about capturing the full spectrum of human expression. Most AI solutions, unfortunately, rely heavily on text, sacrificing context in the process.

To remedy this, AI should not just process text but also reference associated video clips. This dual approach ensures a rounded interpretation. Furthermore, the AI should provide clarity on its reasoning. It should show which parts of the transcripts influenced its conclusions. Researchers should be empowered not just to review but to actively influence AI’s analysis—tweaking interpretations, flagging overlooked insights, or adding new ones. Linking textual highlights to their corresponding video moments creates a seamless bridge between AI's textual world and the nuanced realm of human interaction.
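Here is a minimal sketch of that text-to-video bridge, assuming transcripts arrive as timestamped segments (the segment format, padding, and function names are assumptions for illustration; real transcription pipelines will differ):

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    text: str
    start_s: float   # segment start time in the session recording
    end_s: float     # segment end time

def clip_for_highlight(segments: list[TranscriptSegment], highlight: str,
                       padding_s: float = 2.0) -> tuple[float, float] | None:
    """Map a highlighted phrase back to the video window that contains it."""
    for seg in segments:
        if highlight.lower() in seg.text.lower():
            # Pad the clip slightly so reviewers hear tone and context,
            # not just the isolated words the AI latched onto.
            return max(0.0, seg.start_s - padding_s), seg.end_s + padding_s
    return None  # the highlight couldn't be traced back -> flag it for review

segments = [
    TranscriptSegment("I guess the menu works.", 120.0, 123.5),
    TranscriptSegment("The navigation menu is so intuitive!", 312.4, 315.1),
]
print(clip_for_highlight(segments, "so intuitive"))  # -> (310.4, 317.1)
```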

In Conclusion: Your Move, Researcher!

While AI promises efficiency, the onus of ethical and accurate research rests squarely on human shoulders. The fusion of AI's power and human discernment is the future. So, to all UX researchers: Stay curious, stay vigilant, and most importantly, stay committed to continuous learning in this ever-evolving AI-enhanced era.