Explainable Deep Learning for Interstitial Alloys

Interstitial-type alloys are characterised by small atoms distributed pseudo-randomly across the crystal lattice of larger atoms. This pseudo-random arrangement presents a unique challenge for machine learning models and explainability algorithms — one that is the focus of this research.


Research Overview

Crystal Graph Neural Networks

My research focuses on developing explainable Graph Neural Networks (GNNs) to understand complex structure–property relationships in interstitial alloys. Modelling these materials has traditionally required expensive Density Functional Theory (DFT) calculations. My work demonstrates that Crystal Graph Neural Networks (CGNet) can predict these properties with high accuracy at a fraction of the computational cost.
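The core idea of a crystal graph network can be illustrated with a minimal sketch: atoms become graph nodes, bonds become edges, messages are aggregated over neighbours, and a pooled readout predicts a scalar property. Everything below (the toy two-feature atom encoding, the weights, the single message-passing round) is a hypothetical illustration, not the actual CGNet architecture:

```python
import numpy as np

# Toy crystal graph: 3 host atoms + 1 interstitial atom.
# Features = [scaled atomic number, is_interstitial] -- a hypothetical
# encoding; a real crystal GNN uses much richer atom and bond features.
node_feats = np.array([
    [0.26, 0.0],   # host atom
    [0.26, 0.0],   # host atom
    [0.26, 0.0],   # host atom
    [0.06, 1.0],   # small interstitial atom
])

# Symmetric adjacency from neighbour shells (toy, fully connected)
adj = np.ones((4, 4)) - np.eye(4)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 2))   # message-passing weight
w_out = rng.normal(scale=0.1, size=2)    # readout weight

def cgnn_predict(x, a):
    """One round of mean-aggregation message passing + mean-pool readout."""
    deg = a.sum(axis=1, keepdims=True)         # neighbour counts
    h = np.tanh((a @ x) / deg @ W + x)         # aggregate, transform, update
    return float(h.mean(axis=0) @ w_out)       # pool to a scalar property

energy = cgnn_predict(node_feats, adj)
print(energy)
```

In practice the trained network replaces the per-structure DFT calculation at inference time, which is where the computational saving comes from.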

CGExplainer: Interpretability for Materials

A key contribution of this work is CGExplainer, an interpretability tool that identifies which atomic arrangements most strongly influence predicted material properties. This bridges the gap between black-box deep learning predictions and actionable chemical understanding — enabling materials scientists to design alloys with targeted properties.
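One simple way to see how such an explainer can rank atomic arrangements is bond occlusion: delete one bond at a time and measure how much the prediction shifts. This is only a sketch of the general idea behind graph explainers (the model, weights, and occlusion scheme here are all assumptions); CGExplainer's actual algorithm may work differently, e.g. via learned edge masks:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 2))            # toy atom features
adj = np.ones((4, 4)) - np.eye(4)      # toy fully connected crystal graph
W = rng.normal(scale=0.1, size=(2, 2))
w_out = rng.normal(scale=0.1, size=2)

def predict(a):
    """Toy one-layer graph model: mean aggregation + pooled readout."""
    deg = np.maximum(a.sum(axis=1, keepdims=True), 1.0)
    h = np.tanh((a @ x) / deg @ W + x)
    return float(h.mean(axis=0) @ w_out)

base = predict(adj)
importances = {}
for i in range(4):
    for j in range(i + 1, 4):
        if adj[i, j]:
            a = adj.copy()
            a[i, j] = a[j, i] = 0.0    # occlude one bond
            importances[(i, j)] = abs(predict(a) - base)

# Bonds ranked by how much removing them changes the predicted property
for bond, score in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(bond, score)
```

The highest-scoring bonds are the atomic arrangements the model leans on most, which is the kind of chemically actionable signal an interpretability tool aims to surface.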

Implications for Materials Design

This research holds significant potential for the design of next-generation materials, offering faster predictions of material behaviour and a deeper understanding of how atomic configurations impact properties like mechanical strength and catalytic activity. By making these models interpretable, we enable chemists and materials scientists to leverage AI for more effective materials discovery.
