Prototyping Proactive Responsible AI Practices
Abstract
As Artificial Intelligence (AI) rapidly expands across industries, bringing both potential and actual harms, the need for responsible AI practices grows increasingly urgent. Yet beyond legal and regulatory compliance, what responsible AI practices are and how to enact them remain elusive.
This thesis asks: How can entrepreneurial organisations, committed to responsible AI, enact proactive responsibility? And what might this mean for the future of responsible AI as a field of study and practice?
I explored these questions through design intervention research with five entrepreneurial organisations committed to responsible practice. Grounded in pragmatism, the research pursued three sub-questions: how to get started with responsible AI in practice, how to embed a commitment to human-centred values, and how to sustain responsible AI efforts over time. Three prototypes emerged: a responsible AI pledge-making process, a Dignity Lens for AI development, and integrated reflective practices.
Analysed through five theoretical lenses - cybernetic awareness, systems thinking, affordances, dignity, and responsibility - the prototypes produced and reflected upon advance the field of responsible AI in several ways. First, the thesis asserts that protective responsibility orientations are necessary but insufficient for responsible AI, and demonstrates a complementary approach: proactive responsible AI practices. Second, the thesis establishes systems-informed pledge-making as an alternative pathway to principles-only frameworks. Third, it extends the set of values operationalised in AI development beyond the usual principles - fairness, accuracy, transparency, privacy - to include human-centred values, such as dignity. Fourth, it critiques the gap-bridge metaphor commonly used to describe the relationship between responsible AI principles and practices, drawing on evidence of integrated, non-linear, values-dynamic relations; different metaphors, such as a hive or a jazz duet, are offered as alternative starting points. Finally, the thesis exemplifies how entrepreneurial organisations can actively engage in responsible AI through small-scale projects and decisions. Together, these contributions expand and challenge prevailing perspectives on responsible AI, offering practical guidance for AI developers, managers, and leaders, as well as new insights for scholars studying technological transformation.
Overall, this transdisciplinary, engaged scholarship delivers theoretical, methodological, and practical contributions that extend beyond academia. Theoretically, the thesis reconfigures how we understand and practise responsible AI. Methodologically, the work shows how to conduct design intervention research in responsible AI contexts. Practically, the prototypes deliver tangible value for practitioners and the broader field. For example, the Dignity Lens represents a first-of-its-kind implementable framework that helps practitioners operationalise dignity throughout the AI development process, and is currently embedded in the operations of a seven-person data science team, demonstrating impact beyond the life of the thesis.
How to do responsible AI in practice is far from settled. While current approaches offer partial solutions, this thesis "researches into being" new practices, broadening what is possible in responsible AI.
Restricted until: 2026-12-10