XAIRT

Can eXplainable-AI (XAI) capture robust dynamical relationships in ice and ocean modeling?

Artificial Neural Networks (NNs) have been used increasingly in the climate sciences over the last decade due to their efficiency and their ability to capture non-linear physics and dynamics. However, these NNs are hard to interpret (black boxes), which has led to hesitancy about their wider adoption. There is thus a growing need to develop trust in these models.

To aid the interpretability of NNs, many methods have recently been developed within the growing field of eXplainable-AI (XAI). These methods come in various flavors, ranging across removal-based, gradient-based, and propagation-based explanations. There is even some recent work that looks specifically at XAI for regression (XAIR). While XAI methods can provide insights into correlations between inputs and outputs and hence inspire new science, they alone cannot prove the existence of causal forcings, since they are not exposed to the underlying dynamics of the physical systems. Adjoints are powerful computational engines that efficiently compute dynamically consistent gradients, or sensitivities, of a scalar-valued model output with respect to high-dimensional model inputs. They can thus be used to validate the potential insights from XAI methods and to check for the existence of teleconnections that can be interpreted by scientists.
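
To make the propagation idea concrete, here is a minimal sketch of the LRP-epsilon rule for a tiny two-layer ReLU regression network. This is an illustration, not the XAIRT code: the network weights, layer sizes, and epsilon stabilizer are all made up for the example. The key idea is that the scalar output is redistributed backwards, layer by layer, in proportion to each unit's contribution.

```python
# Minimal sketch of LRP-epsilon for a toy dense ReLU network (NumPy only).
# Weights, sizes, and epsilon are illustrative, not taken from XAIRT.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer regression network: x -> relu(W1 x + b1) -> w2 . h + b2
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # hidden ReLU activations
    return h, h @ w2 + b2              # scalar model output

def stabilize(z, eps=1e-6):
    # Epsilon rule: push denominators away from zero, preserving their sign
    return z + eps * np.where(z >= 0, 1.0, -1.0)

def lrp_epsilon(x):
    h, y = forward(x)
    # Output layer: redistribute the scalar output y to the hidden units
    z = h * w2                                # per-unit contributions z_j = h_j * w_j
    r_h = z / stabilize(z.sum() + b2) * y
    # Hidden layer: redistribute each unit's relevance back to the inputs
    z_in = W1 * x                             # z_in[j, i] = W1[j, i] * x[i]
    denom = stabilize(z_in.sum(axis=1) + b1)  # equals the layer-1 pre-activations
    r_x = (z_in / denom[:, None] * r_h[:, None]).sum(axis=0)
    return r_x                                # one relevance score per input feature

x = rng.normal(size=8)
relevance = lrp_epsilon(x)
# Conservation check: total relevance approximately equals the output
# (exact here up to epsilon, since all biases are zero)
print(relevance.sum(), forward(x)[1])
```

For gridded climate inputs, the vector of relevance scores is reshaped back onto the grid to give the relevance heatmap discussed below.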

In this work, we contrast the relevance heatmaps derived from a post-hoc attribution XAI method, Layer-wise Relevance Propagation (LRP), against the physics-based sensitivity maps derived from the adjoint of the MIT General Circulation Model (MITgcm), generated with the open-source Automatic Differentiation (AD) tool Tapenade. In doing so, we highlight the potential of such comparisons to benefit both research communities: they can guide improvements to NN architectures so that the networks give the correct predictions for the right reasons, and they allow domain experts to inspect new links suggested by XAI methods. We also highlight the utility of our new free and open-source Tapenade-generated adjoint for the MITgcm, which helps make adjoint-based science accessible to a wider group of researchers.
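
For the adjoint side of the comparison, the sketch below hand-codes, for a toy linear model, the reverse sweep that an AD tool like Tapenade generates mechanically for the MITgcm. The model (repeated periodic two-point averaging, a stand-in for an advection step) and the scalar diagnostic are invented for illustration; they are not the MITgcm or its cost functions. The pattern-correlation helper at the end is likewise just one simple way such maps could be compared quantitatively, not necessarily the metric used in this work.

```python
# Minimal sketch of an adjoint sensitivity computation for a toy linear model.
# The model, grid size, and diagnostic are illustrative stand-ins, not MITgcm.
import numpy as np

n, steps = 40, 20
# One explicit "model step": periodic two-point average (toy advection)
M = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))

def run_model(x0):
    x = x0.copy()
    for _ in range(steps):
        x = M @ x
    return x

# Scalar diagnostic J(x0): mean final state over a small "target region"
c = np.zeros(n)
c[15:20] = 1.0 / 5.0

def adjoint_sensitivity():
    # Reverse sweep: lam_N = dJ/dx_N = c, then lam_k = M^T lam_{k+1}
    lam = c.copy()
    for _ in range(steps):
        lam = M.T @ lam
    return lam          # dJ/dx0: sensitivity of J to every initial grid point

grad = adjoint_sensitivity()

# Gradient check against a central finite difference
# (exact up to roundoff here, since J is linear in x0)
x0 = np.random.default_rng(1).normal(size=n)
e = np.zeros(n); e[18] = 1.0
eps = 1e-6
fd = (c @ run_model(x0 + eps * e) - c @ run_model(x0 - eps * e)) / (2 * eps)
print(grad[18], fd)

def pattern_correlation(a, b):
    """Pearson correlation between two flattened maps (e.g., LRP vs adjoint)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```

The single reverse sweep yields the sensitivity of the scalar diagnostic to every initial grid point at once, which is exactly why adjoints scale to high-dimensional inputs where finite differences do not.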

About

Exploring parallels between MITgcm adjoint and eXplainable AI (XAI) methods
