
# XAIRT

Can eXplainable-AI (XAI) capture robust dynamical relationships in ice and ocean modeling?

Artificial neural networks (NNs) have seen increasing use in the climate sciences over the last decade, owing to their efficiency and their ability to capture non-linear physics and dynamics. However, NNs are notoriously hard to interpret ("black boxes"), which has led to hesitancy about their wider adoption. There is thus a growing need to develop trust in these models.

To aid the interpretability of NNs, many methods have recently been developed within the growing field of eXplainable-AI (XAI). These methods come in various flavors, ranging across removal-based, gradient-based, and propagation-based explanations, and recent work has even addressed XAI specifically for regression (XAIR). While XAI methods can provide insight into correlations between inputs and outputs and hence inspire new science, they alone cannot prove the existence of causal forcings, since they are not exposed to the underlying dynamics of the physical system. Adjoint models are powerful computational engines that efficiently compute dynamically consistent gradients, i.e., sensitivities of a scalar-valued model output with respect to high-dimensional model inputs. They can thus be used to validate the potential insights from XAI methods and check for the existence of teleconnections that can be interpreted by scientists.
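The adjoint idea above can be illustrated with a deliberately tiny toy: a minimal sketch (not the MITgcm adjoint) in which the "model" is a few linear time steps, the scalar output is a weighted sum of the final state, and a single backward (adjoint) sweep recovers the sensitivity of that output to the entire initial state. The linear dynamics, step count, and weight vector here are all illustrative assumptions; a finite-difference check confirms the adjoint gradient.

```python
import numpy as np

def step(x, A):
    # one explicit model step: x_{k+1} = A @ x_k (linear stand-in for model dynamics)
    return A @ x

def forward(x0, A, nsteps):
    x = x0
    for _ in range(nsteps):
        x = step(x, A)
    return x

def cost(x0, A, nsteps, w):
    # scalar-valued model output J = w . x_N (e.g., a spatially averaged quantity)
    return w @ forward(x0, A, nsteps)

def adjoint_gradient(A, nsteps, w):
    # adjoint sweep: propagate dJ/dx backwards through the steps with A^T;
    # one backward pass yields sensitivities to ALL components of x0 at once
    lam = w.copy()
    for _ in range(nsteps):
        lam = A.T @ lam
    return lam  # = dJ/dx0

rng = np.random.default_rng(0)
n, nsteps = 5, 3
A = rng.normal(size=(n, n)) * 0.3
x0 = rng.normal(size=n)
w = rng.normal(size=n)

g_adj = adjoint_gradient(A, nsteps, w)

# brute-force check: central finite differences, one model run pair per input
eps = 1e-6
g_fd = np.array([
    (cost(x0 + eps * np.eye(n)[i], A, nsteps, w) -
     cost(x0 - eps * np.eye(n)[i], A, nsteps, w)) / (2 * eps)
    for i in range(n)
])
print(np.allclose(g_adj, g_fd, atol=1e-6))
```

The contrast in cost is the point: the finite-difference check needs two model runs per input component, while the adjoint delivers the full sensitivity map at roughly the cost of one extra (backward) run, which is what makes adjoints viable for high-dimensional ocean-model inputs.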

In this work, we contrast the relevance heatmaps derived from a post-hoc attribution XAI method, Layerwise Relevance Propagation (LRP), against the physics-based sensitivity maps derived from the adjoint of the MIT General Circulation Model (MITgcm), generated with the open-source Automatic Differentiation (AD) tool Tapenade. Such comparisons can benefit both research communities: they help improve NN architectures so that they make the correct predictions for the right reasons, and they allow domain experts to inspect new links suggested by XAI methods. We also highlight the utility of our new free and open-source Tapenade-generated adjoint of the MITgcm, which helps make adjoint-based science accessible to a wider group of researchers.
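For readers unfamiliar with LRP, the following is a minimal sketch of the LRP-epsilon rule on a tiny bias-free ReLU network with a scalar output; it is not the network or implementation used in this work. The architecture, weights, and stabilizer value are illustrative assumptions. The sketch also demonstrates LRP's conservation property: the input relevances sum (approximately) to the network output.

```python
import numpy as np

rng = np.random.default_rng(42)

# tiny 2-layer ReLU network with a single scalar output; zero biases are
# assumed so that relevance conservation holds almost exactly
W1 = rng.normal(size=(8, 6))
W2 = rng.normal(size=(1, 8))
x = rng.normal(size=6)

def lrp_epsilon(weights, x, eps=1e-9):
    """LRP-epsilon for a bias-free feedforward ReLU net with linear output."""
    # forward pass, caching post-activation values of every layer
    acts = [x]
    for i, W in enumerate(weights):
        z = W @ acts[-1]
        acts.append(np.maximum(z, 0.0) if i < len(weights) - 1 else z)
    # backward pass: redistribute the output relevance layer by layer,
    # in proportion to each unit's contribution to the pre-activations
    R = acts[-1].copy()
    for W, a in zip(reversed(weights), reversed(acts[:-1])):
        z = W @ a
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabilizer
        s = R / z
        R = a * (W.T @ s)
    return R

out = W2 @ np.maximum(W1 @ x, 0.0)
R_input = lrp_epsilon([W1, W2], x)
# conservation check: input relevances sum to the scalar output
print(np.allclose(R_input.sum(), out, atol=1e-6))
```

Note the conceptual difference this repository exploits: an LRP heatmap like `R_input` explains what a trained NN responded to, whereas an adjoint sensitivity map reflects the governing dynamics themselves, so agreement (or disagreement) between the two is scientifically informative.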