GeoShapley: an explainable AI method to measure geographic contribution in machine learning models
Keywords: XAI; Context; GeoAI; SHAP; Modelling
Abstract Type: Paper Abstract
Authors:
Ziqi Li, University of Glasgow
Abstract
Recent work by Li (2022) shows that eXplainable AI (XAI) methods can extract spatial effects from machine learning models, with results consistent with well-known spatial statistical frameworks such as geographically weighted regression (GWR). However, existing XAI methods are a-spatial, which poses challenges when applying them to geospatial problems. As a first attempt to address this gap, this work develops an XAI method named GeoShapley that quantifies the geographic contribution in a machine learning model, enabling spatial effects to be measured and interpreted more accurately in machine learning models involving geospatial data. The method builds on the Shapley value from game theory, which quantifies each player's importance in a cooperative game. The performance and accuracy of the GeoShapley value are benchmarked against SHAP, a widely used Python implementation of the Shapley value. The GeoShapley value is model-agnostic and can be applied to most supervised learning tasks.
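To illustrate the game-theoretic foundation the abstract refers to, the sketch below computes exact Shapley values for a toy cooperative game in pure Python. This is not the authors' GeoShapley implementation; the two-player game (a hypothetical non-spatial feature "x" and a "geo" player representing location) and its payoff function are invented here purely to show how a Shapley value attributes a joint payoff, including an interaction, across players.

```python
from itertools import permutations


def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution v(S + {p}) - v(S) over all join orders."""
    contrib = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            contrib[p] += v(grown) - v(coalition)
            coalition = grown
    return {p: total / len(perms) for p, total in contrib.items()}


# Hypothetical payoff (model accuracy, say): each player helps alone,
# and "x" and "geo" together produce an extra interaction gain.
def payoff(coalition):
    values = {frozenset(): 0.0,
              frozenset({"x"}): 1.0,
              frozenset({"geo"}): 2.0,
              frozenset({"x", "geo"}): 4.0}
    return values[frozenset(coalition)]


phi = shapley_values(["x", "geo"], payoff)
# The interaction gain is split evenly; the shares sum to v of the
# full coalition: phi["x"] = 1.5, phi["geo"] = 2.5, total = 4.0.
```

The efficiency property shown in the final comment (attributions summing to the full-coalition payoff) is what makes Shapley-based attributions like SHAP, and by extension a geographic player's share, interpretable as a decomposition of a model's output.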
Li, Z. (2022). Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and XGBoost. Computers, Environment and Urban Systems, 96, 101845.