Dr. Cynthia Rudin: A professor at Duke University, Rudin is a strong advocate for interpretable models, especially in high-stakes decisions. Her work emphasizes that models should not only perform well but also be inherently understandable.

Dr. Dario Gil: As the Director of IBM Research, Gil oversees the company’s advancements in AI, including its push into explainability and transparency in AI models, particularly through tools like IBM’s AI OpenScale.

Dr. Sameer Singh: Based at UC Irvine, Singh’s work on LIME (Local Interpretable Model-Agnostic Explanations) has been pivotal, providing a model-agnostic way to interpret the predictions of any classifier.

Marco Ribeiro: A researcher at Microsoft, Ribeiro has made significant contributions to XAI, notably as a co-developer of LIME, which seeks to clarify the decisions of black-box classifiers (a minimal usage sketch follows below).
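
Since the two entries above both center on LIME, here is a minimal sketch of how the open-source `lime` package is typically used, assuming `lime` and `scikit-learn` are installed. The dataset and model are illustrative choices, not part of the original post.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
# Assumes: pip install lime scikit-learn (dataset/model are illustrative).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance and fits a local, interpretable (linear)
# surrogate model around it; the weights approximate feature influence.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because LIME only needs a `predict_proba`-style function, the same pattern applies to any black-box classifier, which is what makes it model-agnostic.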

Dr. Been Kim: At Google Brain, Kim’s research focuses on making machine learning more understandable and interpretable. Her work on TCAV (Testing with Concept Activation Vectors) offers insights into neural network decisions.
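
To make the TCAV idea concrete, below is a small self-contained sketch of its core recipe on synthetic data: fit a linear classifier separating "concept" activations from random ones, take its (normalized) weight vector as the Concept Activation Vector, and measure how often class gradients point along it. The arrays and gradients here are stand-ins, not Google's reference implementation.

```python
# TCAV core idea, sketched with synthetic stand-in activations/gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # dimensionality of the chosen layer's activations

# Stand-in activations: concept examples shifted along a hidden direction.
concept_dir = rng.normal(size=d)
concept_acts = rng.normal(size=(200, d)) + concept_dir
random_acts = rng.normal(size=(200, d))

# 1. Fit a linear classifier; its normalized weight vector is the CAV.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. TCAV score: fraction of class examples whose logit gradient (w.r.t.
# the layer activations) has a positive directional derivative along the
# CAV. These gradients are synthetic stand-ins for d(logit)/d(activation).
grads = rng.normal(size=(100, d)) + 0.5 * concept_dir
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score well above 0.5 suggests the concept positively influences the class prediction at that layer, which is the quantitative insight TCAV provides.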

Dr. Finale Doshi-Velez: As a professor at Harvard, Doshi-Velez has emphasized the importance of interpretability in AI, especially in healthcare, where understanding a model’s decisions can be critical.

Dr. Chris Olah: Previously at OpenAI and Google, Olah’s blog posts have made deep learning concepts, including interpretability, accessible to a broad audience. His lucid explanations of topics such as feature visualization make complex subjects approachable.

Dr. Julius Adebayo: A researcher affiliated with MIT and formerly with OpenAI, Adebayo explores the intersection of privacy and machine learning, with an emphasis on understanding model behavior in an interpretable way.

Dr. Patrick Hall: A senior AI expert at bnh.ai, Hall’s work spans the development of machine learning interpretability techniques and their practical application.

Dr. Richard Caruana: At Microsoft Research, Caruana’s work on model transparency has provided foundations for explainable AI. His insights into risk and reward in model interpretability are especially valuable.
