Top 10 Explainable AI (XAI) experts to follow
Dr. Cynthia Rudin: A professor at Duke University, Rudin is a strong advocate for interpretable models, especially in high-stakes decisions. Her work emphasizes the importance of creating models that not only perform well but are also inherently understandable.
Dr. Dario Gil: As the Director of IBM Research, Gil oversees the company’s advancements in AI, including its push into explainability and transparency in AI models, particularly through tools like IBM’s AI OpenScale.
Dr. Sameer Singh: Based at UC Irvine, Singh’s work on LIME (Local Interpretable Model-Agnostic Explanations) has been pivotal, providing a model-agnostic way to interpret the predictions of any machine learning model.
Marco Ribeiro: A researcher at Microsoft, Ribeiro has made significant contributions to XAI, notably as a co-developer of LIME, which approximates a black-box classifier’s behavior around a single prediction with a simple, interpretable surrogate model.
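To make the LIME idea concrete, here is a minimal sketch of its core recipe in plain numpy: perturb the instance being explained, query the black box, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. This is an illustrative toy, not the real `lime` package (which adds sparse feature selection, discretization, and more); the `black_box` function below is a made-up stand-in.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Toy LIME-style local surrogate (illustrative sketch only).

    Perturbs x, queries the black box, weights samples by proximity,
    and fits a weighted linear model; its coefficients are the
    local per-feature importances.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # 1. Sample perturbations around the instance being explained.
    Z = x + rng.normal(scale=0.5, size=(num_samples, d))
    # 2. Ask the black box for its predictions on the perturbations.
    y = predict_fn(Z)
    # 3. Proximity weights: perturbations closer to x count more.
    dists = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Weighted least squares for the local linear surrogate.
    Zb = np.hstack([Z, np.ones((num_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

# Hypothetical black box: feature 0 matters, feature 1 does not.
black_box = lambda Z: 3.0 * Z[:, 0] + 0.0 * Z[:, 1]
weights = lime_style_explanation(black_box, np.array([1.0, 2.0]))
```

Because the toy black box is exactly linear, the surrogate recovers the true coefficients; on a real model the coefficients describe only the local neighborhood of `x`.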
Dr. Been Kim: At Google Brain, Kim’s research focuses on making machine learning more understandable and interpretable. Her work on TCAV (Testing with Concept Activation Vectors) quantifies how much human-friendly concepts influence a neural network’s predictions.
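The TCAV idea can also be sketched in a few lines of numpy. The real method fits a linear classifier in a network’s activation space to find a concept direction (the CAV), then measures how often the class score’s directional derivative along that direction is positive. Everything below is a simplified stand-in under stated assumptions: the activations, the linear head, and the mean-difference CAV are all toy substitutes for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: these arrays stand in for a hidden layer's
# activations; in real TCAV they come from the actual trained model.
concept_acts = rng.normal(loc=[2.0, 0.0], size=(100, 2))  # e.g. "striped" examples
random_acts = rng.normal(loc=[0.0, 0.0], size=(100, 2))   # random counterexamples

# 1. CAV: a direction separating concept from random activations.
#    (Real TCAV fits a linear classifier; the mean difference is a
#    simple stand-in with the same geometric meaning.)
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# 2. Directional derivatives of the class score w.r.t. activations.
#    Assume a linear head: score = acts @ head_w, so the gradient
#    is head_w everywhere (a deliberate simplification).
head_w = np.array([1.0, 0.5])
grads = np.tile(head_w, (50, 1))  # gradient of the score at 50 inputs

# 3. TCAV score: fraction of inputs whose class score increases
#    when the activations move in the concept direction.
tcav_score = float(np.mean(grads @ cav > 0))
```

A TCAV score near 1 suggests the concept consistently pushes the class score up; with a nonlinear network the gradients differ per input and the score becomes genuinely informative.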
Dr. Finale Doshi-Velez: As a professor at Harvard, Doshi-Velez has emphasized the importance of interpretability in AI systems, especially in healthcare, where understanding decisions can be critical.
Dr. Chris Olah: Previously at OpenAI and Google, Olah’s blog posts have made deep learning concepts, including interpretability techniques, accessible to a broad audience. His lucid explanations of topics like feature visualization make complex material approachable.
Dr. Julius Adebayo: A researcher with affiliations with MIT and formerly OpenAI, Adebayo explores the intersection of privacy and machine learning, with an emphasis on understanding model behavior in an interpretable manner.
Dr. Patrick Hall: A senior AI expert at bnh.ai, Hall works on developing machine learning interpretability techniques and applying them in real-world settings.
Dr. Richard Caruana: At Microsoft Research, Caruana’s work on model transparency has provided foundations for explainable AI. His insights into the risks and rewards of model interpretability are especially valuable.