From 2020 to 2023, I was part of a multidisciplinary research project at CSIRO investigating how AI ethics principles are used in practice by the developers and users of AI systems (Sanderson et al., 2022). Insights from interviews with researchers who develop or use AI in their work were used to inform the development of a set of design patterns that AI developers can use to build responsible AI systems. As part of this broad project, I conducted research interviews, helped to analyse the interview transcripts, and contributed to the published papers.
I also contributed to conceptual work on what ‘trust’ means in the context of AI (Duenser & Douglas, 2023), and to further work investigating how AI ethics principles can be put into practice (Lu et al., 2022; Sanderson et al., 2023; Sanderson et al., 2024).
The official project page can be found here: An Operationalised Guidelines for Responsible AI (CSIRO).
An interview with Qinghua Lu, the project leader, about the goals of the project can be found here: AI You can Trust, Thanks to New Reusable Guidelines (CSIRO).
References
2024
- Resolving Ethics Trade-offs in Implementing Responsible AI
Conrad Sanderson, Emma Schleiger, David Douglas, Petra Kuhnert, and Qinghua Lu
Sep 2024
While the operationalisation of high-level AI ethics principles into practical AI/ML systems has made progress, there is still a theory-practice gap in managing tensions between the underlying AI ethics aspects. We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. The approaches differ in the types of considered context, scope, methods for measuring contexts, and degree of justification. None of the approaches is likely to be appropriate for all organisations, systems, or applications. To address this, we propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, (iii) justification and documentation of trade-off decisions. The proposed framework aims to facilitate the implementation of well-rounded AI/ML systems that are appropriate for potential regulatory requirements.
2023
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust
Andreas Duenser, and David M. Douglas
IEEE Intelligent Systems, Sep 2023
We present an overview of the literature on trust in AI and AI trustworthiness and argue for the need to distinguish these concepts more clearly and to gather more empirical evidence on what contributes to people’s trusting behaviours. We argue that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect how users perceive the system’s trustworthiness is not as abundant or as clear as is often assumed. AI systems should be recognised as socio-technical systems, where the people involved in designing, developing, deploying, and using the system are as important as the system itself for determining whether it is trustworthy. Without recognising these nuances, ‘trust in AI’ and ‘trustworthy AI’ risk becoming nebulous terms for any desirable feature of AI systems.
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects
Conrad Sanderson, David Douglas, and Qinghua Lu
arXiv preprint arXiv:2304.08275, Sep 2023
Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.
2022
- Towards Operationalising Responsible AI: An Empirical Study
Conrad Sanderson, Qinghua Lu, David Douglas, Xiwei Xu, Liming Zhu, and Jon Whittle
arXiv preprint arXiv:2205.04358, May 2022
While artificial intelligence (AI) has great potential to transform many industries, there are concerns about its ability to make decisions in a responsible way. Many AI ethics guidelines and principles have been recently proposed by governments and various organisations, covering areas such as privacy, accountability, safety, reliability, transparency, explainability, contestability, and fairness. However, such principles are typically high-level and do not provide tangible guidance on how to design and develop responsible AI systems. To address this shortcoming, we present an empirical study involving interviews with 21 scientists and engineers, designed to gain insight into practitioners’ perceptions of AI ethics principles, their possible implementation, and the trade-offs between the principles. The salient findings cover four aspects of AI system development: (i) overall development process, (ii) requirements engineering, (iii) design and implementation, (iv) deployment and operation.
- Software Engineering for Responsible AI: An Empirical Study and Operationalised Patterns
Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, David Douglas, and Conrad Sanderson
In Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, May 2022
AI ethics principles and guidelines are typically high-level and do not provide concrete guidance on how to develop responsible AI systems. To address this shortcoming, we perform an empirical study involving interviews with 21 scientists and engineers to understand the practitioners’ views on AI ethics principles and their implementation. Our major findings are: (1) the current practice is often a done-once-and-forget type of ethical risk assessment at a particular development step, which is not sufficient for highly uncertain and continual learning AI systems; (2) ethical requirements are either omitted or mostly stated as high-level objectives, and not specified explicitly in verifiable ways as system outputs or outcomes; (3) although ethical requirements have the characteristics of cross-cutting quality and non-functional requirements amenable to architecture and design analysis, system-level architecture and design are under-explored; (4) there is a strong desire for continuously monitoring and validating AI systems post deployment for ethical requirements, but current operation practices provide limited guidance. To address these findings, we suggest a preliminary list of patterns to provide operationalised guidance for developing responsible AI systems.