Tree-based ensemble methods, like XGBoost [1] or LightGBM [2], are still the state of the art for predictions on tabular data. However, due to their complex structure, these methods are considered black-box models. Feature attributions, i.e. quantifying the impact of an individual feature on the prediction, can help to understand the predictions of such models. Feature interactions extend this idea to multiple features. Recently, an efficient method for computing such interactions in tree-based models has been presented [3]. While current work considers local explanations, i.e. explaining the prediction of an individual instance, the goal of this thesis is to investigate global feature interactions in tree-based models, i.e. the impact of features on the overall behavior of the model.
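To illustrate the starting point of the thesis, the sketch below shows how local interaction values, as computed by the tree-specific method of [3] (available in the shap package), could be aggregated into a simple global interaction measure; the synthetic data, model settings, and the mean-absolute aggregation are only illustrative assumptions, not the method to be developed.

import numpy as np
import shap
import xgboost
from sklearn.datasets import make_regression

# Fit a tree ensemble on synthetic regression data (illustrative setup).
X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# Local explanations: one (n_features x n_features) interaction matrix
# per instance, computed efficiently on the tree structure [3].
explainer = shap.TreeExplainer(model)
interactions = explainer.shap_interaction_values(X)  # shape: (n, d, d)

# One possible global aggregation (an assumption, not part of [3]):
# mean absolute interaction strength per feature pair over all instances.
global_interactions = np.abs(interactions).mean(axis=0)
print(global_interactions.round(3))

Off-diagonal entries of the aggregated matrix indicate feature pairs whose interaction contributes strongly across the whole dataset, while the diagonal captures main effects; studying such global aggregations in a principled way is the subject of this thesis.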
Keywords: Explainable AI, Deep Learning, Feature Interactions, Global Explanations
Literature