Design-driven Materials Intelligence

dc.contributor.author: Li, Sichao
dc.date.accessioned: 2025-06-22T02:11:53Z
dc.date.available: 2025-06-22T02:11:53Z
dc.date.issued: 2025
dc.description.abstract: The integration of artificial intelligence (AI) and machine learning (ML) into materials science heralds the era of materials intelligence: AI-driven systems that learn from materials data to predict, design, and optimise structures and properties while embedding domain knowledge. This thesis addresses several key challenges at the intersection of AI/ML and materials science: the reliance on single-model explanations, the complexity of capturing non-linear relationships, and the need to balance interpretability with stakeholder expectations. Following a thorough literature review in Chapters 1 to 3, the thesis emphasises explaining the same task through diverse, similarly performing models. The core of the thesis is structured into three chapters, guided by design thinking principles:

Chapter 4: Rational Design introduces the Variance Tolerance Factor (VTF) framework to address the limitations of single-model explanations, which often generate conflicting insights across models. Using the Rashomon set concept, the VTF framework quantifies the variability of feature importance, offering a more comprehensive perspective. The approach was validated against baseline methods and applied to chemical prediction tasks, demonstrating its utility in enhancing interpretability.

Chapter 5: Creative Design builds on rational design by advancing methods to interpret complex feature relationships in materials science. It introduces Feature Interaction Scores (FIS) and the Feature Interaction Scores Cloud (FISC) to explain interactions among features in material property predictions across the Rashomon set. From the study of the Rashomon set in practice, two fundamental axioms are proposed to guide generalisability.

Chapter 6: Optimal Design uses explanation disagreement within the Rashomon set as a strategy for bridging the gap between stakeholder needs and ML models. The EXplanation AGREEment (EXAGREE) framework is proposed to align model explanations with stakeholder expectations while preserving predictive performance, improving the alignment between AI systems and the needs of materials scientists, engineers, and other stakeholders in the field.

Throughout, the thesis explores fundamental challenges in applying ML, especially explainable AI, to materials science: balancing predictive performance with interpretability, satisfying diverse stakeholder needs, and combining automated optimisation with domain expertise. By advancing methods to address these challenges, this research aims to contribute to the development of trustworthy, scientist-centred ML technologies for materials science.
dc.identifier.uri: https://hdl.handle.net/1885/733764530
dc.language.iso: en_AU
dc.title: Design-driven Materials Intelligence
dc.type: Thesis (PhD)
local.contributor.affiliation: College of Systems and Society, The Australian National University
local.contributor.supervisor: Barnard, Amanda
local.description.embargo: 2025-06-25
local.identifier.doi: 10.25911/18M0-4C38
local.identifier.proquest: Yes
local.identifier.researcherID: KPA-8030-2024
local.mintdoi: mint
local.thesisANUonly.author: 5c6c1ee7-548e-4408-9b08-607c1b91a0d0
local.thesisANUonly.key: 9d567544-03fa-6155-181a-9015a9d58625
local.thesisANUonly.title: 000000027375_TC_1

Downloads

Original bundle

Name: Design_driven_Materials_Intelligence_Sichao_Li_PhD_thesis_corrected.pdf
Size: 59.06 MB
Format: Adobe Portable Document Format
Description: Thesis Material