Bridging the Gap to Machine Learning in Assessing: Bringing Interpretability to the Models

Brief Session Description

The inability to interpret how values are determined is often cited as the main impediment to deploying machine learning models for computer-assisted mass appraisal (CAMA). In this presentation, we show how Local Interpretable Model-Agnostic Explanations (LIME) can open up these black boxes. We then show how presenting LIME results interactively in GIS extends that clarity further. Finally, we compare the performance and interpretability of several machine learning models against existing state-of-the-art linear regression models.
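The sketch below illustrates the kind of LIME workflow the session describes: fit a black-box valuation model, then use a local linear surrogate to attribute one parcel's predicted value to its features. It assumes the open-source `lime` package and a scikit-learn random forest on synthetic parcel data; the feature names and model here are illustrative assumptions, not the presenters' actual CAMA model.

```python
# Minimal LIME sketch for a property-valuation regressor (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical parcel attributes: living area, lot size, age, bath count.
feature_names = ["sqft", "lot_acres", "age_years", "baths"]
X = np.column_stack([
    rng.normal(1800, 400, 500),   # sqft
    rng.normal(0.3, 0.1, 500),    # lot_acres
    rng.integers(0, 80, 500),     # age_years
    rng.integers(1, 4, 500),      # baths
])
# Toy assessed values: driven mostly by size, discounted by age.
y = 120 * X[:, 0] + 50_000 * X[:, 1] - 900 * X[:, 2] + 15_000 * X[:, 3]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# LIME fits a local linear surrogate around one parcel's prediction,
# yielding per-feature contributions that could be mapped in GIS.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:,.0f}")
```

Each printed pair is a feature condition and its estimated local contribution to the predicted value, which is what makes a per-parcel GIS display of the explanations possible.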

Audience Expertise

Advanced (Developed for the participant who has a thorough knowledge of the areas covered.)

Location

L 100 G

Start Date

9-26-2018 2:45 PM

End Date

9-26-2018 3:45 PM

Moderator Name

Edward VanderVries, Portage, MI
