From 2019 to 2022, I was part of a research project at the CSIRO investigating how stakeholders understood ethical responsibility for AI-designed surgical tools that would be attached to surgical robots. So far this project has produced three papers: a theoretical paper examining how ethical responsibility should be determined for physical products designed by generative AI (and in particular, evolutionary algorithms) (Douglas et al., 2021); a paper examining stakeholders’ views on ethical responsibility for AI-designed surgical tools (Douglas et al., 2022); and a paper examining stakeholders’ views on the ethical risks of using AI-designed surgical tools (Douglas et al., 2023). The insights we gained from this project also led to our account of ethical risk for AI (Douglas et al., 2024).
This project was also my first work using qualitative research methods.
The official project page can be found here: Responsibility for Bespoke 3D Printed Surgical Robots (CSIRO).
I was also interviewed for the CSIRO Responsible Innovation Future Science Platform blog about this project: Who Bears Responsibility When AI Systems Go Wrong? (CSIRO).
References
2024
Ethical Risk for AI
David M. Douglas, Justine Lacey, and David Howard
AI and Ethics, Jun 2024
The term ‘ethical risk’ often appears in discussions about the responsible development and deployment of artificial intelligence (AI). However, ethical risk remains inconsistently defined in this context, obscuring what distinguishes it from other forms of risk, such as social, reputational, or legal risk. In this paper we present a definition of ethical risk for AI as being any risk associated with an AI that may cause stakeholders to fail one or more of their ethical responsibilities towards other stakeholders. To support our definition, we describe how stakeholders have role responsibilities that follow from their relationship with the AI, and that these responsibilities are towards other stakeholders associated with the AI. We discuss how stakeholders may differ in their ability to make decisions about an AI, their exposure to risk, and whether they or others may benefit from these risks. Stakeholders without the ability to make decisions about the risks associated with an AI and how it is used are dependent on other stakeholders with this ability. This relationship places those who depend on decision-making stakeholders at ethical risk of being dominated by them. The decision-making stakeholder is ethically responsible for the risks their decisions about the AI impose on those affected by them. We illustrate our account of ethical risk for AI with two examples: AI-designed attachments for surgical robots that are optimised for treating specific patients, and self-driving ‘robotaxis’ that carry passengers on public roads.
2023
Ethical risks of AI-designed products: bespoke surgical tools as a case study
David M. Douglas, Justine Lacey, and David Howard
AI and Ethics, Jun 2023
An emerging use of machine learning (ML) is creating products optimised using computational design for individual users and produced using 3D printing. One potential application is bespoke surgical tools optimised for specific patients. While optimised tool designs benefit patients and surgeons, there is the risk that computational design may also create unexpected designs that are unsuitable for use with potentially harmful consequences. We interviewed potential stakeholders to identify both established and unique technical risks associated with the use of computational design for surgical tool design and applied ethical risk analysis (eRA) to identify how stakeholders might be exposed to ethical risk within this process. The main findings of this research are twofold. First, distinguishing between unique and established risks for new medical technologies helps identify where existing methods of risk mitigation may be applicable to a surgical innovation, and where new means of mitigating risks may be needed. Second, the value of distinguishing between technical and ethical risks in such a system is that it identifies the key responsibilities for managing these risks and allows for any potential interdependencies between stakeholders in managing these risks to be made explicit. The approach demonstrated in this paper may be applied to understanding the implications of new AI and ML applications in healthcare and other high consequence domains.
2022
Ethical responsibility and computational design: bespoke surgical tools as an instructive case study
David M. Douglas, Justine Lacey, and David Howard
Ethics and Information Technology, Feb 2022
Computational design uses artificial intelligence (AI) to optimise designs towards user-determined goals. When combined with 3D printing, it is possible to develop and construct physical products in a wide range of geometries and materials, encapsulating a range of functionality, with minimal input from human designers. One potential application is the development of bespoke surgical tools, whereby computational design optimises a tool’s morphology for a specific patient’s anatomy and the requirements of the surgical procedure to improve surgical outcomes. This emerging application of AI and 3D printing provides an opportunity to examine whether new technologies affect the ethical responsibilities of those operating in high-consequence domains such as healthcare. This research draws on stakeholder interviews to identify how a range of different professions involved in the design, production, and adoption of computationally designed surgical tools identify and attribute responsibility within the different stages of a computationally designed tool’s development and deployment. Those interviewed included surgeons and radiologists, fabricators experienced with 3D printing, computational designers, healthcare regulators, bioethicists, and patient advocates. Based on our findings, we identify additional responsibilities that surround the process of creating and using these tools. Additionally, the responsibilities of most professional stakeholders are not limited to individual stages of the tool design and deployment process, and the close collaboration between stakeholders at various stages of the process suggests that collective ethical responsibility may be appropriate in these cases. The role responsibilities of the stakeholders involved in developing the process to create computationally designed tools also change as the technology moves from research and development (R&D) to approved use.
2021
Moral responsibility for computationally designed products
David M. Douglas, David Howard, and Justine Lacey
AI and Ethics, Feb 2021
Computational design systems (such as those using evolutionary algorithms) can create designs for a variety of physical products. Introducing these systems into the design process risks creating a ‘responsibility gap’ for flaws in the products they are used to create, as human designers may no longer believe that they are wholly responsible for them. We respond to this problem by distinguishing between causal responsibility and capacity responsibility (the ability to be morally responsible for actions) for creating product designs to argue that while the computational design systems and human designers are both casually responsible for creating product designs, the human designers who use these systems and the developers who create them have capacity responsibility for such designs. We show that there is no responsibility gap for products designed using computational design systems by comparing different accounts of moral responsibility for robots and AI (instrumentalism, machine ethics, and hybrid responsibility). We argue that all three of these accounts of moral responsibility for AI systems support the conclusion that the product designers who use computational design systems and the developers of these systems are morally responsible for any flaws or faults in the products designed by these systems. We conclude by showing how the responsibilities of accountability and blameworthiness should be attributed between the product designers, the developers of the computational design systems.