Publications
Publications in reverse chronological order. Generated by jekyll-scholar.
2024
- Resolving Ethics Trade-offs in Implementing Responsible AI. Conrad Sanderson, Emma Schleiger, David Douglas, Petra Kuhnert, and Qinghua Lu. In 2024 IEEE Conference on Artificial Intelligence (CAI), 2024.
While the operationalisation of high-level AI ethics principles into practical AI/ML systems has made progress, there is still a theory-practice gap in managing tensions between the underlying AI ethics aspects. We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. The approaches differ in the types of context considered, their scope, the methods for measuring contexts, and the degree of justification. None of the approaches is likely to be appropriate for all organisations, systems, or applications. To address this, we propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, and (iii) justification and documentation of trade-off decisions. The proposed framework aims to facilitate the implementation of well-rounded AI/ML systems that are appropriate for potential regulatory requirements.
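As an illustrative aside: the three steps above (identify tensions, weight ethics aspects, document the trade-off) can be pictured as a simple record. The following Python sketch is hypothetical and not taken from the paper; the aspect names, weights, and the rule that the higher-weighted aspect prevails are invented purely to make the steps concrete.

```python
from dataclasses import dataclass, field

@dataclass
class TradeOffDecision:
    """Hypothetical record of one ethics trade-off: (i) the tension identified,
    (ii) context-specific weights for each aspect, (iii) a documented justification."""
    tension: tuple        # (i) pair of ethics aspects in tension, e.g. ("accuracy", "explainability")
    weights: dict         # (ii) aspect -> priority weight for this context
    justification: str    # (iii) written rationale for the decision
    favoured_aspect: str = field(init=False)

    def __post_init__(self):
        # Toy rule for illustration only: the higher-weighted aspect prevails.
        self.favoured_aspect = max(self.tension, key=lambda aspect: self.weights[aspect])

# Invented example: explainability weighted above marginal accuracy gains for a clinical tool.
decision = TradeOffDecision(
    tension=("accuracy", "explainability"),
    weights={"accuracy": 0.4, "explainability": 0.6},
    justification="Clinicians must be able to contest recommendations, so "
                  "explainability is prioritised over marginal accuracy gains.",
)
print(decision.favoured_aspect)  # explainability
```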
- Just Trade-Offs in a Net-Zero Transition and Social Impact Assessment. Yuwan Malakar, Andrea Walton, Luk J.M. Peeters, David M. Douglas, and Dan O’Sullivan. Environmental Impact Assessment Review, 2024.
Countries around the world are prioritising net zero emissions to meet their Paris Agreement goals. The demand for social impact assessment (SIA) is likely to grow as this transition will require investments in decarbonisation projects with speed and at scale. There will be winners and losers of these projects because not everyone benefits equally, and hence trade-offs are inevitable. SIAs, therefore, should focus on understanding how the risks and benefits will be distributed among and within stakeholders and sectors, and enable the identification of trade-offs that are just and fair. In this study, we used a hypothetical case of large-scale hydrogen production in regional Australia and engaged with multi-disciplinary experts to identify justice issues in transitioning to such an industry. Using the Rawlsian theory of justice as fairness, we identified several tensions between different groups (national, regional, local, inter- and intra-communities) and sectors (environmental and economic) concerning the establishment of a hydrogen industry. These stakeholders and sectors will be disproportionately affected by this establishment. We argue that Rawlsian principles of justice would enable the practice of SIA to identify justice trade-offs. Further, we conceptualise that a systems approach will be critical to facilitate wider participation and an agile process for achieving just trade-offs in SIA.
- Ethical Risk for AI. David M. Douglas, Justine Lacey, and David Howard. AI & Ethics, 2024.
The term ’ethical risk’ often appears in discussions about the responsible development and deployment of artificial intelligence (AI). However, ethical risk remains inconsistently defined in this context, obscuring what distinguishes it from other forms of risk, such as social, reputational or legal risk, for example. In this paper we present a definition of ethical risk for AI as being any risk associated with an AI that may cause stakeholders to fail one or more of their ethical responsibilities towards other stakeholders. To support our definition, we describe how stakeholders have role responsibilities that follow from their relationship with the AI, and that these responsibilities are towards other stakeholders associated with the AI. We discuss how stakeholders may differ in their ability to make decisions about an AI, their exposure to risk, and whether they or others may benefit from these risks. Stakeholders without the ability to make decisions about the risks associated with an AI and how it is used are dependent on other stakeholders with this ability. This relationship places those who depend on decision-making stakeholders at ethical risk of being dominated by them. The decision-making stakeholder is ethically responsible for the risks their decisions about the AI impose on those affected by them. We illustrate our account of ethical risk for AI with two examples: AI-designed attachments for surgical robots that are optimised for treating specific patients, and self-driving ’robotaxis’ that carry passengers on public roads.
- Mapping Australia’s Quantum Landscape: Potential Applications Across Industry Sectors. Mohan Baruwal Chhetri, Rebecca Coates, Yue Haung, Lihong Tang, David Douglas, and Gabi Skoff. 2024.
In preparing for the future of quantum technologies, Australia’s National Quantum Strategy envisions a trusted, ethical, and inclusive quantum ecosystem that will modernise the economy, enhance societal well-being, support national interests, and create new jobs. This study contributes to that vision by identifying potential quantum applications across Australian industries. Through an extensive literature review – encompassing government reports, market analyses, academic publications, and insights from the quantum industry – this study provides a comprehensive snapshot of potential use cases for quantum technologies across Australia’s economic landscape. This initial analysis serves as a foundational step in assessing quantum readiness across Australian industries. By identifying promising quantum applications and their potential impact, it aims to inform strategic planning for quantum readiness, guiding key stakeholders within the Australian Quantum Ecosystem, including government, quantum technology end-users, and the quantum technology developers.
2023
- Ethical risks of AI-designed products: bespoke surgical tools as a case study. David M. Douglas, Justine Lacey, and David Howard. AI and Ethics, 2023.
An emerging use of machine learning (ML) is creating products optimised using computational design for individual users and produced using 3D printing. One potential application is bespoke surgical tools optimised for specific patients. While optimised tool designs benefit patients and surgeons, there is the risk that computational design may also create unexpected designs that are unsuitable for use with potentially harmful consequences. We interviewed potential stakeholders to identify both established and unique technical risks associated with the use of computational design for surgical tool design and applied ethical risk analysis (eRA) to identify how stakeholders might be exposed to ethical risk within this process. The main findings of this research are twofold. First, distinguishing between unique and established risks for new medical technologies helps identify where existing methods of risk mitigation may be applicable to a surgical innovation, and where new means of mitigating risks may be needed. Second, the value of distinguishing between technical and ethical risks in such a system is that it identifies the key responsibilities for managing these risks and allows for any potential interdependencies between stakeholders in managing these risks to be made explicit. The approach demonstrated in this paper may be applied to understanding the implications of new AI and ML applications in healthcare and other high consequence domains.
- AI Ethics Principles in Practice: Perspectives of Designers and Developers. Conrad Sanderson, David Douglas, Qinghua Lu, Emma Schleiger, Jon Whittle, Justine Lacey, Glenn Newnham, Stefan Hajkowicz, Cathy Robinson, and David Hansen. IEEE Transactions on Technology and Society, Jun 2023.
As consensus across the various published AI ethics principles is approached, a gap remains between high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia’s national scientific research agency (CSIRO), who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental well-being. Discussion of the insights gained from the interviews covers various tensions and trade-offs between the principles and provides suggestions for implementing each high-level principle. We also present suggestions aimed at enhancing the associated support mechanisms.
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust. Andreas Duenser and David M. Douglas. IEEE Intelligent Systems, Jun 2023.
We present an overview of the literature on trust in AI and AI trustworthiness and argue for the need to distinguish these concepts more clearly and to gather more empirical evidence on what contributes to people’s trusting behaviours. We discuss how trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect how users perceive the system’s trustworthiness is neither abundant nor clear-cut. AI systems should be recognised as socio-technical systems, where the people involved in designing, developing, deploying, and using the system are as important as the system itself for determining whether it is trustworthy. Without recognising these nuances, ’trust in AI’ and ’trustworthy AI’ risk becoming nebulous terms for any desirable feature of AI systems.
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects. Conrad Sanderson, David Douglas, and Qinghua Lu. Jun 2023.
Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.
2022
- Towards Operationalising Responsible AI: An Empirical Study. Conrad Sanderson, Qinghua Lu, David Douglas, Xiwei Xu, Liming Zhu, and Jon Whittle. arXiv:2205.04358 [cs], May 2022.
While artificial intelligence (AI) has great potential to transform many industries, there are concerns about its ability to make decisions in a responsible way. Many AI ethics guidelines and principles have been recently proposed by governments and various organisations, covering areas such as privacy, accountability, safety, reliability, transparency, explainability, contestability, and fairness. However, such principles are typically high-level and do not provide tangible guidance on how to design and develop responsible AI systems. To address this shortcoming, we present an empirical study involving interviews with 21 scientists and engineers, designed to gain insight into practitioners’ perceptions of AI ethics principles, their possible implementation, and the trade-offs between the principles. The salient findings cover four aspects of AI system development: (i) overall development process, (ii) requirements engineering, (iii) design and implementation, (iv) deployment and operation.
- Ethical responsibility and computational design: bespoke surgical tools as an instructive case study. David M. Douglas, Justine Lacey, and David Howard. Ethics and Information Technology, Feb 2022.
Computational design uses artificial intelligence (AI) to optimise designs towards user-determined goals. When combined with 3D printing, it is possible to develop and construct physical products in a wide range of geometries and materials and encapsulating a range of functionality, with minimal input from human designers. One potential application is the development of bespoke surgical tools, whereby computational design optimises a tool’s morphology for a specific patient’s anatomy and the requirements of the surgical procedure to improve surgical outcomes. This emerging application of AI and 3D printing provides an opportunity to examine whether new technologies affect the ethical responsibilities of those operating in high-consequence domains such as healthcare. This research draws on stakeholder interviews to identify how a range of different professions involved in the design, production, and adoption of computationally designed surgical tools identify and attribute responsibility within the different stages of a computationally designed tool’s development and deployment. Those interviewed included surgeons and radiologists, fabricators experienced with 3D printing, computational designers, healthcare regulators, bioethicists, and patient advocates. Based on our findings, we identify additional responsibilities that surround the process of creating and using these tools. Additionally, the responsibilities of most professional stakeholders are not limited to individual stages of the tool design and deployment process, and the close collaboration between stakeholders at various stages of the process suggests that collective ethical responsibility may be appropriate in these cases. The role responsibilities of the stakeholders involved in developing the process to create computationally designed tools also change as the technology moves from research and development (R&D) to approved use.
- Towards Implementing Responsible AI. Conrad Sanderson, Qinghua Lu, David Douglas, Xiwei Xu, Liming Zhu, and Jon Whittle. In 2022 IEEE International Conference on Big Data (Big Data), Dec 2022.
As the deployment of artificial intelligence (AI) is changing many fields and industries, there are concerns about AI systems making decisions and recommendations without adequately considering various ethical aspects, such as accountability, reliability, transparency, explainability, contestability, privacy, and fairness. While many sets of AI ethics principles have been recently proposed that acknowledge these concerns, such principles are high-level and do not provide tangible advice on how to develop ethical and responsible AI systems. To gain insight on the possible implementation of the principles, we conducted an empirical investigation involving semi-structured interviews with a cohort of AI practitioners. The salient findings cover four aspects of AI system design and development, adapting processes used in software engineering: (i) high-level view, (ii) requirements engineering, (iii) design and implementation, (iv) deployment and operation.
- Software engineering for responsible AI: An empirical study and operationalised patterns. Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, David Douglas, and Conrad Sanderson. In Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, Dec 2022.
AI ethics principles and guidelines are typically high-level and do not provide concrete guidance on how to develop responsible AI systems. To address this shortcoming, we perform an empirical study involving interviews with 21 scientists and engineers to understand the practitioners’ views on AI ethics principles and their implementation. Our major findings are: (1) the current practice is often a done-once-and-forget type of ethical risk assessment at a particular development step, which is not sufficient for highly uncertain and continual learning AI systems; (2) ethical requirements are either omitted or mostly stated as high-level objectives, and not specified explicitly in a verifiable way as system outputs or outcomes; (3) although ethical requirements have the characteristics of cross-cutting quality and non-functional requirements amenable to architecture and design analysis, system-level architecture and design are under-explored; (4) there is a strong desire for continuously monitoring and validating AI systems post deployment for ethical requirements but current operation practices provide limited guidance. To address these findings, we suggest a preliminary list of patterns to provide operationalised guidance for developing responsible AI systems.
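As a hedged illustration of findings (2) and (4) above, which concern stating ethical requirements verifiably as system outputs and monitoring them after deployment: the sketch below is not the paper's method and uses an invented requirement (an approval-rate gap between groups kept below 0.05), but it shows roughly what a verifiable, continuously monitored ethical requirement could look like in code.

```python
from collections import defaultdict

def approval_rate_gap(records):
    """Largest pairwise difference in approval rate across groups.
    Each record is a (group, approved) pair; purely illustrative."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical post-deployment monitoring batch of (group, approved) outcomes.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
GAP_THRESHOLD = 0.05  # invented threshold for the illustrative requirement

gap = approval_rate_gap(batch)
if gap > GAP_THRESHOLD:
    print(f"Ethical requirement violated: approval-rate gap {gap:.2f} exceeds {GAP_THRESHOLD}")
```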
2021
- Moral responsibility for computationally designed products. David M. Douglas, David Howard, and Justine Lacey. AI and Ethics, Dec 2021.
Computational design systems (such as those using evolutionary algorithms) can create designs for a variety of physical products. Introducing these systems into the design process risks creating a ‘responsibility gap’ for flaws in the products they are used to create, as human designers may no longer believe that they are wholly responsible for them. We respond to this problem by distinguishing between causal responsibility and capacity responsibility (the ability to be morally responsible for actions) for creating product designs to argue that while the computational design systems and human designers are both causally responsible for creating product designs, the human designers who use these systems and the developers who create them have capacity responsibility for such designs. We show that there is no responsibility gap for products designed using computational design systems by comparing different accounts of moral responsibility for robots and AI (instrumentalism, machine ethics, and hybrid responsibility). We argue that all three of these accounts of moral responsibility for AI systems support the conclusion that the product designers who use computational design systems and the developers of these systems are morally responsible for any flaws or faults in the products designed by these systems. We conclude by showing how the responsibilities of accountability and blameworthiness should be attributed between the product designers and the developers of the computational design systems.
2020
- Doxing as Audience Vigilantism against Hate Speech. David M. Douglas. In Introducing Vigilant Audiences, Dec 2020.
Doxing is the public release of personally identifiable information, and may be used as a tool for activism by removing the anonymity of individuals whose actions or stated beliefs harm others or undermine social cohesion. In this chapter I describe how doxing that deanonymises proponents of hate speech is a form of audience vigilantism. I argue that it is a defensible means of combating hate speech if it has the purpose of beginning a process of deradicalizing the identified individuals through reintegrative shaming. Such doxing must be motivated by a legitimate social need (in that it can be justified using premises and evidence acceptable to all in society), and must remain within socially tolerable bounds (in that it does not lead to physical harm, is not indiscriminate, and is in response to injustices that are in principle recognisable to those who are not affected by it). I refer to several instances of doxing relating to proponents of hate speech to illustrate my argument and to demonstrate the importance of the legitimate social need and socially tolerable bounds criteria.
- The Network and the Demos: Big Data and the Epistemic Justifications of Democracy. Dave Kinkead and David M. Douglas. In Big Data and Democracy, Dec 2020.
In this chapter, Kinkead and Douglas draw on the history of democracies to see how big data and its use with social media sites introduces new challenges to the contemporary marketplace of ideas. They note that traditionally one could narrowcast a tailored message with some impunity, but limited effect, while broadcasts (with larger impact) were open to examination by the public. Microtargeted political advertising now allows for the narrowcast message to be tweaked and directed on a scale never before seen.
- Machine Learning and Responsibility in Criminal Investigation. Georgina Ibarra, David Douglas, and Meena Tharmarajah. Dec 2020.
Most of the literature on using machine learning (ML) systems in criminal investigations concerns whether using these systems undermines public trust in law enforcement. A related concern is whether investigators themselves should trust these systems. But what do we really mean by ‘trust’? Many methods have been developed for promoting fairness, transparency, and accountability in the predictions made by ML systems; however, a technical approach to these problems needs to be accompanied by a human-centred approach to user trust. In order to address social, ethical and practical issues, these systems need to present information in such a way that the people who use them can make balanced decisions on whether or not they should trust them. In this report we use the lens of user experience (UX) and social science to look at how the role responsibilities and accountability of criminal justice experts may be affected by the use of ML systems in criminal investigations. How will these systems be used by these experts, and what effect might that have in this regulated and legislated environment? To understand this, we explore the concepts of responsibility, accountability and transparency in the context of criminal investigations in parallel to the various levels of automation and AI assistance, ranging from full human control to full automation. We discuss why ML systems used in criminal investigations can be considered ‘human-in-the-loop’ forms of automation, where the systems offer decision support to users. We also explore the issues connected with calibrating trust in an ML system, describing the characteristics of automated systems that affect the ability of users to determine if their trust in a system is legitimate, and the risks of misusing, rejecting, and abusing automation by experts operating in criminal justice settings. We highlight the risks connected with using ML systems, and how these risks might affect the use of these systems in criminal justice contexts. The report summarises additional areas of responsibility across the ecosystem of machine learning systems in a criminal context, including: the responsibility of law enforcement institutions to train their workforce in the skills needed to use data and insights from ML systems; the responsibility of specific departments in these organisations to examine the intended use of such systems, adjusting their internal policies accordingly and ensuring their employees are alert to how this affects their responsibility or accountability; and the responsibility of technologists to be transparent about the system’s trustworthiness, and to allow experts to accurately calibrate their trust in the system by closely observing and responding to how they interpret and use different types of predictive data in investigative processes.
2019
- Ethical Analysis of AI and Robotics Technologies. Philip Jensen, Philip Brey, Alice Fox, Jonne Maas, Bradley Hillas, Nils Wagner, Patrick Smith, Isaac Oluoch, Laura Lamers, Hero Gein, Anaïs Resseguier, Rowena Rodrigues, David Wright, and David Douglas. Dec 2019.
This SIENNA deliverable offers a broad ethical analysis of artificial intelligence (AI) and robotics technologies. Its primary aims have been to comprehensively identify and analyse the present and potential future ethical issues in relation to: (1) the AI and robotics subfields, techniques, approaches and methods; (2) their physical technological products and procedures that are designed for practical applications; and (3) the particular uses and applications of these products and procedures. In conducting the ethical analysis, we strove to provide ample clarification, details about nuances, and contextualisation of the ethical issues that were identified, while avoiding the making of moral judgments and proposing of solutions to these issues. A secondary aim of this report has been to convey the results of SIENNA’s “country studies” of the national academic and popular media debate on the ethical issues in AI and robotics in twelve different EU and non-EU countries, highlighting the similarities and differences between these countries. While these country study results have only formed a minor contribution to the overall identification and analysis of the ethical issues in this report, they are expected to play a larger role in future SIENNA deliverables. This deliverable also provides an overview of the history and state of the art of the academic debate on ethics of AI and robot ethics, and an overview of the current institutional support of these fields.
- Cyberwar and Mediation Theory. Nolen Gertz, Peter-Paul Verbeek, and David M. Douglas. Delphi - Interdisciplinary Review of Emerging Technologies, Nov 2019.
Cyberwar (military operations conducted via computer networks) is often downplayed compared to traditional military operations, as such operations are largely invisible to outside observers, difficult to convincingly attribute to a particular source, and rarely cause physical damage or obvious harm. We use mediation theory to argue that cyberwar operations cause harm by undermining trust in computerised devices and networks and by disrupting the transparency of our usage of information technology in our daily lives. Cyberwar operations militarise and weaponise the civilian space of the Internet by co-opting and targeting civilian infrastructure and property. These operations (and the possibility of such operations occurring) fundamentally change users’ Internet experience by fostering fear and paranoia about otherwise unnoticed and transparent aspects of their lives, similarly to how biological and chemical weapons create fear and paranoia about breathing, eating, and physical exposure to the world. We argue that the phenomenological aspects of cyberwar operations offer a compelling justification for prohibiting cyberwar in the same manner in which biological and chemical warfare are prohibited.
2018
- Should Internet Researchers Use Ill-Gotten Information? David M. Douglas. Science and Engineering Ethics, Aug 2018.
This paper describes how the ethical problems raised by scientific data obtained through harmful and immoral conduct (which, following Stan Godlovitch, is called ill-gotten information) may also emerge in cases where data is collected from the Internet. It describes the major arguments for and against using ill-gotten information in research, and shows how they may be applied to research that either collects information about the Internet itself or which uses data from questionable or unknown sources on the Internet. Three examples (the Internet Census 2012, the PharmaLeaks study, and research into keylogger dropzones) demonstrate how researchers address the ethical issues raised by the sources of data that they use and how the existing arguments concerning the use of ill-gotten information apply to Internet research. The problems faced by researchers who collect or use data from the Internet are shown to be the same problems faced by researchers in other fields who may obtain or use ill-gotten information.
- Personal Information, Identification Information, and Identity Knowledge. David M Douglas. UniSA Student Law Review, Aug 2018.
This commentary responds to the primary article by Åste Corbridge in this volume entitled ‘Responding to Doxing in Australia: Towards a Right to Informational Self-Determination?’. It discusses the way that concepts of ‘personal information’ and ‘identification information’ from the Privacy Act 1988 (Cth) correspond with the seven crucial types of identity knowledge identified by Gary T. Marx and argues that these statutory definitions should be expanded to offer better protection to victims of doxing in Australia.
2017
- A Reasoned Proposal for Shared Approaches to Ethics Assessment in the European Context. Philip Jensen, Wessel Reijers, David Douglas, Faridun Sattarov, Agata Gurzawska, Alexandra Kapeller, Philip Brey, Rok Benčin, Zuzanna Warso, and Robert Braun. May 2017.
This report presents a comprehensive proposal for a common ethics assessment framework for research and innovation (R&I) in the European Union member states. It details recommendations for good practices for ethics assessment, which includes the development of ethics assessment units and the protocols of these units. More specifically, the report presents a general toolkit for ethics assessment of R&I, as well as specialised tools and toolkits for specific types of organizations that deal with ethics assessment, and for different scientific fields.
- Roadmap towards Adoption of a Fully Developed Ethics Assessment Framework. Anna Leinonen, Raija Koivisto, Anu Tuominen, David Douglas, Agata Gurzawska, Philip Jansen, Alexandra Kapeller, and Philip Brey. May 2017.
The aim of the SATORI roadmap process was to work out how the SATORI ethics assessment framework can be implemented in practice. The timespan of the roadmap was set at 10 years. To begin, a vision of a future in which the SATORI framework is implemented was formulated. Theories about the implementation of new social practices were subsequently studied, and a model for the implementation of the SATORI framework was constructed. This model was then used as the basis for identifying the steps (or outcomes) that need to be taken in order to realise the vision. Finally, these steps were fleshed out by listing recommendations and associated actions that need to be taken by various stakeholder groups that are involved in ethics assessment of research and innovation.
- Dual-Use or No-Use? The Ethics of Booters and DDoS-for-Hire. David M. Douglas, José Jair Santanna, Ricardo de O. Schmidt, Lisandro Z. Granville, and Aiko Pras. Journal of Information, Communication and Ethics in Society, May 2017.
Purpose: This paper examines whether there are morally defensible reasons for using or operating websites (called ‘booters’) that offer Distributed Denial-of-Service (DDoS) attacks on a specified target to users for a price. Booters have been linked to some of the most powerful DDoS attacks in recent years. Design/methodology/approach: The authors identify the various parties associated with booter websites and the means through which booters operate. Then the authors present and evaluate the two arguments that they claim may be used to justify operating and using booters: that they are a useful tool for testing the ability of networks and servers to handle heavy traffic, and that they may be used to perform DDoS attacks as a form of civil disobedience on the Internet. Findings: The authors argue that the characteristics of existing booters disqualify them from being morally justified as network stress testing tools or as a means of performing civil disobedience. The use of botnets that include systems without the permission of their owners undermines the legitimacy of both justifications. While a booter that does not use any third-party systems without permission might in principle be justified under certain conditions, the authors argue that it is unlikely that any existing booters meet these requirements. Practical implications: Law enforcement agencies may use the arguments presented here to justify shutting down the operation of booters, and so reduce the number of DDoS attacks on the Internet. Originality/value: The value of this work is in critically examining the potential justifications for using and operating booter websites and in further exploring the ethical aspects of using DDoS attacks as a form of civil disobedience.
2016
- Doxing: A Conceptual Analysis. David M. Douglas. Ethics and Information Technology, Sep 2016.
Doxing is the intentional public release onto the Internet of personal information about an individual by a third party, often with the intent to humiliate, threaten, intimidate, or punish the identified individual. In this paper I present a conceptual analysis of the practice of doxing and how it differs from other forms of privacy violation. I distinguish between three types of doxing: deanonymizing doxing, where personal information establishing the identity of a formerly anonymous individual is released; targeting doxing, that discloses personal information that reveals specific details of an individual’s circumstances that are usually private, obscure, or obfuscated; and delegitimizing doxing, which reveals intimate personal information that damages the credibility of that individual. I also describe how doxing differs from blackmail and defamation. I argue that doxing may be justified in cases where it reveals wrongdoing (such as deception), but only if the information released is necessary to reveal that such wrongdoing has occurred and if it is in the public interest to reveal such wrongdoing. Revealing additional information, such as that which allows an individual to be targeted for harassment and intimidation, is unjustified. I illustrate my discussion with the examples of the alleged identification of the creator of Bitcoin, Satoshi Nakamoto, by Newsweek magazine, the identification of the notorious Reddit user Violentacrez by the blog Gawker, and the harassment of game developer Zoe Quinn in the ‘GamerGate’ Internet campaign.
- Models for Ethics Assessment and Guidance in Higher Education. Philip Brey, David Douglas, Alexandra Kapeller, Rok Benčin, Daniela Ovadia, and Doris Wolfslehner. Sep 2016.
This report investigates best practices for developing ethics assessment and guidance in universities, through research ethics committees (RECs), institutional policies, scientific integrity boards, teaching and training, and other means. The objective is to identify different means by which universities may promote and regulate consideration of ethical aspects of research and innovation within their institutions, and to make recommendations on the means that are most adequate and the ways in which they may be implemented. The report subsequently considers goals for ethics at universities, pathways for advancing ethics at universities, ethics codes and protocols, scientific integrity boards and codes, ethics assessment and research ethics committees, and ethics teaching and training. It ends with a summary of the recommendations of earlier sections.
2015
- Principles and Approaches in Ethics Assessment: Ethics and Risk. Raija Koivisto and David Douglas. Jun 2015.
This report aims to study and discuss the ethical aspects of risk assessment and management, and how risk plays a role in the ethical assessment of research. It introduces the central concepts – risk and ethics – and examines the different phases of the risk management process from the ethical point of view. It also describes the ethical principles used to determine whether the risks of conducting research are acceptable. The increasing complexity of systems, products and services due to new technological and social developments is making risk assessment and management more challenging and emphasizes the need to consider ethical issues systematically in the risk assessment process.
- Ethics Assessment in Different Countries: China. Xin Ming, David Douglas, Agata Gurzawska, and Philip Brey. Jun 2015.
The aim of this report is to analyse the existing structures and agents for the ethical assessment of research and innovation in China, both for the public and the private sector. The report will analyse how the national government has put into place organisational structures, laws, policies and procedures for ethical assessment, how both publicly funded and private research and innovation systems address ethical issues in research and innovation, and how ethical assessment plays a role in the activities of professional groups and associations for research and innovation and of civil society organisations (CSOs).
- Ethics Assessment in Different Fields: Medical and Life Sciences. Karin Leersum and David Douglas. Jun 2015.
This is a report on ethics assessment of medical and life sciences. Ethics assessment concerns the question of what is good or bad and right or wrong about a certain technology or practice. Such assessments help organisations determine to what extent ethical standards should influence decision making at both organisational and individual levels. The aim of this report is to cover both the academic and non-academic traditions of ethical assessment, and the institutionalisation of ethics assessment in different types of organisations, including national and international standards and legislation. This report is a part of a larger study of the SATORI project.
- SATORI Deliverable D1.1: Ethical Assessment of Research and Innovation: A Comparative Analysis of Practices and Institutions in the EU and Selected Other Countries. Clare Shelley-Egan, Philip Brey, Rowena Rodrigues, David Douglas, Agata Gurzawska, Lise Bitsch, David Wright, and Kush Wadhwa. Jun 2015.
This deliverable offers a detailed picture of the de facto ethics assessment landscape in the European Union and other countries with regard to approaches, practices and institutions for ethics assessment across scientific fields, different kinds of organisations that carry out assessment, and different countries. The deliverable is based on in-depth study of ethics assessment in ten countries in the European Union, and the United States (US) and China, as well as studies of particular organisations in other EU countries. This main report summarises the results of work package 1 of the SATORI project and provides a comparative analysis of ethics assessment in the scientific fields, organisations and countries investigated.
- Towards a Just and Fair Internet: Applying Rawls’ Principles of Justice to Internet Regulation. David M. Douglas. Ethics and Information Technology, Mar 2015.
I suggest that the social justice issues raised by Internet regulation be exposed and examined by using a methodology adapted from that described by John Rawls in A Theory of Justice. Rawls’ theory uses the hypothetical scenario of people deliberating about the justice of social institutions from the ‘original position’ as a method of removing bias in decision-making about justice. The original position imposes a ‘veil of ignorance’ that hides the particular circumstances of individuals from them so that they will not be influenced by self-interest. I adapt Rawls’ methodology by introducing an abstract description of information technology to those deliberating about justice from within the original position. This abstract description focuses on information devices that users can use to access information (and which may record information about them as well) and information networks that information devices use to communicate. The abstractness of this description prevents the particular characteristics of the Internet and the computing devices in use from influencing the decisions about the just use and regulation of information technology and networks. From this abstract position, the principles of justice that the participants accept for the rest of society will also apply to the computing devices people use to communicate, and to Internet regulation.
- Ethical values and the global carbon integrity system. Rowena Maguire, David M Douglas, Vesselin Popovski, and Hugh Breakey. In Ethical Values and the Integrity of the Climate Change Regime, Mar 2015.
This chapter introduces the Comprehensive Integrity Framework as it applies to institutions and employs that framework to map the key factors and concepts at work in the global carbon integrity system, including reference to the global integrity regime and to one of its sub-institutions. It defines a number of key terms, including the Public Institutional Justification (PIJ), consistency-integrity, coherence-integrity and context-integrity. An institution has comprehensive-integrity if its activities, values and ethics, internal organization and external relations accord with its PIJ. Social values outside the institution can also impact upon the institution’s selection of its PIJ and its capacity to live up to its PIJ. The integrity system is therefore constituted by the combination of the institution’s coherence-integrity and context-integrity. The chapter applies the framework to the global climate regime complex as a whole, framed around the UN Framework Convention on Climate Change (UNFCCC), and uses the framework to analyse one illustrative sub-institution nested within the UNFCCC: the Clean Development Mechanism (CDM).
2014
- The Social Goods of Information Networks: Complex Equality and Wu’s Separation Principles. David M. Douglas. First Monday, Sep 2014.
In his book ’The Master Switch: The Rise and Fall of Information Empires’, Tim Wu proposes a ’Separation Principle’ that the control of communication infrastructure should be separated from control over the information transmitted across it. I suggest that the Separation Principle can be further justified by appealing to Michael Walzer’s concept of complex equality. In this analysis, the integrated control of communication infrastructure and control over who can use it is unjust, as the infrastructure sphere is then influencing the sphere of expression. This gives a further theoretical justification for Wu’s Separation Principle and for resisting the monopolisation of information networks.
2013
- Pre-Owned Games. David M. Douglas. Oct 2013.
The market in second-hand or pre-owned games is made possible by provisions in copyright law that allow purchasers of copyrighted works to give or sell their copy to others. Pre-owned games are a contentious issue for game developers and publishers who see them as damaging to the sales and revenue generated by new games.
2012
- Making ICT Careers Accessible: The Value of Certification. David M. Douglas. Information Age, Aug 2012.
2011
- The Social Disutility of Software Ownership. David M. Douglas. Science and Engineering Ethics, Sep 2011.
Software ownership allows the owner to restrict the distribution of software and to prevent others from reading the software’s source code and building upon it. However, free software is released to users under software licenses that give them the right to read the source code, modify it, reuse it, and distribute the software to others. Proponents of free software such as Richard M. Stallman and Eben Moglen argue that the social disutility of software ownership is a sufficient justification for prohibiting it. This social disutility includes the social instability of disregarding laws and agreements covering software use and distribution, inequality of software access, and the inability to help others by sharing software with them. Here I consider these and other social disutility claims against withholding specific software rights from users, in particular, the rights to read the source code, duplicate, distribute, modify, imitate, and reuse portions of the software within new programs. I find that generally while withholding these rights from software users does cause some degree of social disutility, only the rights to duplicate, modify and imitate cannot legitimately be denied to users on this basis. The social disutility of withholding the rights to distribute the software, read its source code and reuse portions of it in new programs is insufficient to prohibit software owners from denying them to users. A compromise between the software owner and user can minimise the social disutility of withholding these particular rights from users. However, the social disutility caused by software patents is sufficient for rejecting such patents as they restrict the methods of reducing social disutility possible with other forms of software ownership.
- A Bundle of Software Rights and Duties. David M. Douglas. Ethics and Information Technology, Sep 2011.
Like the ownership of physical property, the issues computer software ownership raises can be understood as concerns over how various rights and duties over software are shared between owners and users. The powers of software owners are defined in software licenses, the legal agreements defining what users can and cannot do with a particular program. To help clarify how these licenses permit and restrict users’ actions, here I present a conceptual framework of software rights and duties that is inspired by the terms of various proprietary, open source, and free software licenses. To clarify the relationships defined by these rights and duties, this framework distinguishes between software creators (the original developer), custodians (those who can control its use), and users (those who utilise the software). I define the various rights and duties that can be shared between these parties and how these rights and duties relate to each other. I conclude with a brief example of how this framework can be used by defining the concepts of free software and copyleft in terms of rights and duties.
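As a loose illustration of the framework described above: the rights held by creators, custodians, and users can be pictured as bundles. The Python sketch below is hypothetical and not drawn from the paper; the right names and the 'free software' bundle shown are simplified stand-ins for the more detailed rights and duties the paper defines.

```python
# Illustrative right names only; the paper defines its own, richer set of rights and duties.
RIGHTS = {"use", "read_source", "duplicate", "modify", "distribute"}

def rights_bundle(creator, custodian, user):
    """Map each party to the set of rights it holds, rejecting unknown right names."""
    bundle = {"creator": set(creator), "custodian": set(custodian), "user": set(user)}
    for party, held in bundle.items():
        unknown = held - RIGHTS
        if unknown:
            raise ValueError(f"Unknown rights for {party}: {unknown}")
    return bundle

# Rough 'free software' bundle: users retain the rights a proprietary licence would withhold.
free_software = rights_bundle(
    creator=RIGHTS,
    custodian=RIGHTS,
    user={"use", "read_source", "duplicate", "modify", "distribute"},
)
print(sorted(free_software["user"]))
```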
- The Rights and Duties of Software Users: An Examination of the Ethics of Software Ownership. David M. Douglas. Sep 2011.
Software ownership significantly affects the users of information technology as it allows owners to withhold rights from users and also impose duties upon them. This thesis evaluates this ownership by determining the rights and duties users should hold by using a conceptual framework of rights and duties over software to evaluate the major arguments for and against software ownership. I begin by describing the relevant aspects of software and the intellectual property laws covering it, and the software licenses defining the rights and duties of software users. I distinguish between three groups of people associated with any software project: creators (those who write the software), custodians (those who control the rights and duties others have over the software), and users (those who use the software). These classifications are used to define a set of rights and duties that these groups may possess over a particular program. The major categories of software ownership (such as the public domain, free software, open source, freeware, shareware, and retail software) are described in terms of this framework. I use this framework to determine the particular rights creators and custodians can justifiably withhold from users based on the three arguments most frequently given for why software creators should have greater control over the software they develop. These arguments claim that the creator’s labour in developing her software grants her an entitlement to claim ownership over it (the labour entitlement argument), that the creator deserves to own her program as a reward for developing it (the desert argument), and that granting ownership to creators is the most effective incentive for encouraging software development (the consequentialist incentive argument). I then examine the three major arguments for giving users greater control over software to determine the particular rights and duties that these arguments require users to possess. These arguments are that software ownership causes an unjustified social harm (the social disutility argument), that granting users more rights over software improves software quality (the open source argument), and that users need greater control over the software they use to protect their autonomy (the liberty argument). After evaluating these arguments, I conclude by comparing the different bundles of rights and duties each argument grants users to determine if there is any agreement between them over the particular rights and duties that should be granted to users and which rights creators and custodians can legitimately withhold from them. Finally, I compare the rights and duties that various kinds of software licenses grant users and determine whether they grant users the bundle of rights and duties that are justified by the arguments discussed.
2009
- A Beneficial Monopoly: Jeremy Bentham on Monopolies and Patents. David M. Douglas. Sep 2009.
Here I examine Jeremy Bentham’s arguments in favour of patents in light of his description of the five harms associated with monopolies. I find that while these harms can be reduced by the limited duration and specific definition of patents, the existence of these harms means that a utilitarian (like Bentham) would have to support an alternative to patents if it produced the same positive results without the monopoly harms.