Explainability of Graph Neural Networks

Explaining the predictions of Graph Neural Networks using GNNExplainer

Renu Khandelwal
6 min read · Jul 12, 2022

This article explores

  • The need for explainability in GNNs
  • Challenges in explaining GNN predictions
  • Different GNN explanation approaches
  • An intuitive explanation of GNNExplainer
  • Implementation of GNNExplainer to explain node classification and graph classification

Explainability of a deep learning model provides human-understandable reasoning for its predictions.

Deep learning models are black boxes: if you cannot explain the reasoning behind their predictions, they cannot be fully trusted. This lack of explanation prevents the use of deep learning in decision-critical applications pertaining to fairness, privacy, and safety across domains.

Explainability of a deep learning model helps to

  • Increase trust in the model's predictions
  • Improve the model's transparency for decision-critical applications pertaining to fairness, privacy, and other safety challenges
  • Understand the network's characteristics, so that systematic patterns of mistakes can be identified and corrected before the model is deployed in the real world

