Explainable Deep Learning for Interstitial Alloys

Unlike stoichiometric materials, interstitial alloys are characterized by the pseudo-random positioning of small atoms across the crystal lattice of larger atoms, in spaces known as interstices. Whereas stoichiometric materials can be modelled by assigning atom types to fixed, allowed sites within the lattice, this assumption does not hold for interstitial materials. This makes it challenging to develop explainability algorithms that capture the complex relationships between the ordering of interstitial atoms and the properties of the material, which is the focus of this research.

Research Overview

My research focuses on developing explainable Graph Neural Networks (GNNs) to understand the complex structure-property relationships in interstitial alloys. These materials involve small atoms occupying interstitial spaces within the crystal lattice of larger atoms, and their properties are influenced by the pseudo-random arrangement of these atoms.
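The graph representation behind this idea can be sketched in a few lines. The structure, element choices (Pd host, H interstitial), coordinates, and the distance cutoff below are all illustrative assumptions, not the featurization used in this work; the sketch only shows how atoms become nodes and near-neighbour pairs become edges.

```python
import math

# Hypothetical toy structure: four host atoms (e.g. Pd) on a square,
# plus one small interstitial atom (e.g. H) sitting between them.
atoms = [
    ("Pd", (0.0, 0.0, 0.0)),
    ("Pd", (2.0, 0.0, 0.0)),
    ("Pd", (0.0, 2.0, 0.0)),
    ("Pd", (2.0, 2.0, 0.0)),
    ("H",  (1.0, 1.0, 0.3)),  # interstitial site
]

def build_crystal_graph(atoms, cutoff=2.5):
    """Nodes are atoms; an edge joins every pair closer than `cutoff`.
    Returns a list of (i, j, distance) tuples."""
    edges = []
    for i, (_, pi) in enumerate(atoms):
        for j, (_, pj) in enumerate(atoms):
            if i < j:
                d = math.dist(pi, pj)
                if d <= cutoff:
                    edges.append((i, j, d))
    return edges

edges = build_crystal_graph(atoms)
```

A real crystal graph would also encode periodic boundary conditions and richer node/edge features, but the node-plus-cutoff-edge construction is the common core.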

Traditionally, modeling such materials required computationally expensive Density Functional Theory (DFT) calculations. However, my work demonstrates that Crystal Graph Neural Networks (CGNet) can predict these properties much faster while maintaining high accuracy. Additionally, my proposed Crystal Graph Explainer (CGExplainer) interprets these models, providing insights into the atomic arrangements that contribute to specific material properties.
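Many GNN explainers work by occlusion: remove part of the graph and measure how much the prediction shifts. The snippet below is a minimal sketch of that generic idea, not CGExplainer's actual mechanism; the mini edge list and the `toy_property` stand-in for a trained model are invented for illustration.

```python
# Hypothetical mini crystal graph: (atom_i, atom_j, bond_length).
edges = [(0, 4, 1.4), (1, 4, 1.4), (0, 1, 2.0)]

def toy_property(es):
    # Stand-in for a trained CGNet: shorter bonds contribute more.
    return sum(1.0 / d for (_, _, d) in es)

def edge_importance(es):
    """Occlusion attribution: drop one edge, measure the prediction shift."""
    base = toy_property(es)
    return {(i, j): round(base - toy_property([e for e in es if e[:2] != (i, j)]), 3)
            for (i, j, _) in es}

print(edge_importance(edges))
# → {(0, 4): 0.714, (1, 4): 0.714, (0, 1): 0.5}
```

Here the two short interstitial bonds score highest, mirroring the kind of insight an explainer provides: which atomic arrangements a prediction actually depends on.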

This research holds significant potential for the design of next-generation materials, offering faster predictions of material behavior and a deeper understanding of how atomic configurations impact properties like mechanical strength and catalytic activity. By making these models interpretable, we enable chemists and materials scientists to leverage AI for more effective materials design.