Meg Kurdziolek is a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. She has had a varied career working for startups and large corporations alike across fields such as EdTech, weather forecasting, and commercial robotics. She has published articles on topics such as information visualization, educational technology design, human-robot interaction (HRI), and voice user interface (VUI) design. Kurdziolek is a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction (HCI).
Machine learning (ML) product designers face a problem. As datasets grow larger and more complex, the ML models built on them become more complex and increasingly opaque. Without clear explanations of how these models reach their conclusions, end users are less likely to trust and adopt the technology. Furthermore, the audiences for ML model explanations vary considerably in background, experience with mathematical reasoning, and the contexts in which they apply these technologies. UX professionals can use explainable artificial intelligence (XAI) methods and techniques to explain the reasoning behind ML products.