This meetup was held in New York City on April 30th.
Abstract:
The good news is that building fair, accountable, and transparent machine learning systems is possible. The bad news is that it's harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!
This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining the following viable techniques for debugging, explaining, and testing machine learning models:
* Model visualizations, including decision tree surrogate models, individual conditional expectation (ICE) plots, partial dependence plots, and residual analysis.
* Reason code generation techniques like LIME, Shapley explanations, and Treeinterpreter.
* Sensitivity analysis.
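To make the first of these concrete, here is a minimal sketch of a one-dimensional partial dependence computation written from scratch. It is not taken from the talk's materials; the model, dataset, and `partial_dependence` helper are illustrative assumptions (scikit-learn also ships a built-in `sklearn.inspection.partial_dependence`).

```python
# Illustrative sketch: manual one-dimensional partial dependence.
# The dataset, model choice, and helper function are assumptions for
# demonstration, not the talk's actual examples.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid_points=20):
    """Average model prediction as one feature sweeps over a grid,
    holding every other feature at its observed values."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_points)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value  # pin the feature at this grid value
        pd_values.append(model.predict(X_mod).mean())
    return grid, np.array(pd_values)

grid, pd_vals = partial_dependence(model, X, feature=0)
```

Plotting each row's prediction curve separately, instead of the mean, would give the corresponding ICE plot; averaging the ICE curves recovers the partial dependence curve.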
Plenty of guidance on when, and when not, to use these techniques will also be shared, and the talk will conclude by providing guidelines for testing generated explanations themselves for accuracy and stability.
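As a rough illustration of stability testing, the sketch below perturbs a single input slightly and checks whether a crude per-feature attribution ranks features the same way before and after. The attribution here is a simplistic "replace with the background mean" difference, an assumption for demonstration only, not a true Shapley or LIME explanation.

```python
# Illustrative sketch: checking explanation stability under a tiny input
# perturbation. The attribution function is a deliberately crude stand-in
# for a real explainer such as LIME or Shapley values.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def attribution(model, x, background):
    """Crude per-feature attribution: prediction change when each feature
    is replaced by its background mean (NOT a true Shapley value)."""
    base = model.predict(x.reshape(1, -1))[0]
    scores = []
    for j in range(len(x)):
        x_ref = x.copy()
        x_ref[j] = background[:, j].mean()
        scores.append(base - model.predict(x_ref.reshape(1, -1))[0])
    return np.array(scores)

x = X[0]
a1 = attribution(model, x, X)
noise = np.random.RandomState(1).normal(0.0, 0.01, x.shape)
a2 = attribution(model, x + noise, X)

# A stable explainer should produce a similar feature ordering for
# nearly identical inputs; low agreement is a warning sign.
rank_agreement = np.mean(np.argsort(a1) == np.argsort(a2))
```

The same idea extends to accuracy checks: on a model whose true attributions are known by construction (e.g., a linear model), a trustworthy explainer should approximately recover them.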
Open source examples (with lots of comments and helpful hints) for building interpretable machine learning systems are available to accompany the talk at: [ Link ]
Bio:
Patrick Hall is senior director for data science products at H2O.ai, where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and research and development roles at SAS Institute.
Navdeep Gill
Navdeep Gill is a Software Engineer & Data Scientist at H2O.ai, where he focuses on model interpretability, GPU-accelerated machine learning, and automated machine learning. He graduated from California State University, East Bay with an M.S. in Computational Statistics, a B.S. in Statistics, and a B.A. in Psychology (minor in Mathematics). During his education, he developed interests in machine learning, time series analysis, statistical computing, data mining, and data visualization.
Before joining H2O.ai, he worked at Cisco Systems, focusing on data science and software development. Before stepping into industry, he worked as a researcher/analyst in various neuroscience labs at institutions including California State University, East Bay; the University of California, San Francisco; and the Smith-Kettlewell Eye Research Institute. His work across these labs spanned behavioral, electrophysiology, and functional magnetic resonance imaging research. Connect with Navdeep on Twitter @Navdeep_Gill_.
![](https://i.ytimg.com/vi/Q8rTrmqUQsU/maxresdefault.jpg)