Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Responsible Innovation.

Wednesday, July 5, 2023

Taxonomy of Risks posed by Language Models

Weidinger, L., Uesato, J., et al. (2022, March). In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 19–30). Association for Computing Machinery.

Abstract

Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation with the goal of ensuring that language models are developed responsibly.

Conclusion

In this paper, we propose a comprehensive taxonomy to structure the landscape of potential ethical and social risks associated with large-scale language models (LMs). We aim to support the research programme toward responsible innovation on LMs, broaden the public discourse on ethical and social risks related to LMs, and break risks from LMs into smaller, actionable pieces to facilitate their mitigation. More expertise and perspectives will be required to continue to build out this taxonomy of potential risks from LMs. Future research may also expand this taxonomy by applying additional methods such as case studies or interviews. Next steps building on this work will be to engage further perspectives, to innovate on analysis and evaluation methods, and to build out mitigation tools, working toward the responsible innovation of LMs.


Here is a summary of each of the six risk areas:
  • Discrimination, hate speech and exclusion: LMs can reproduce social stereotypes and unfair biases, generate toxic or hateful language, and perform worse for some groups and language varieties, leading to unfair treatment and exclusion.
  • Information hazards: LMs can leak or correctly infer private, sensitive, or otherwise hazardous information, such as personal data memorised from training text.
  • Misinformation harms: LMs can produce false or misleading information that sounds authoritative, which users may believe and act on to their detriment.
  • Malicious uses: LMs can be deliberately misused, for example to generate disinformation at scale, facilitate fraud and scams, or assist other harmful activities.
  • Human-computer interaction harms: conversational agents built on LMs can be anthropomorphised, exploit user trust, or foster unsafe reliance, with possible harms to users' wellbeing.
  • Environmental and socioeconomic harms: training and running LMs consumes substantial energy, and their deployment can concentrate benefits while displacing or devaluing certain kinds of work.
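The six-part structure above is essentially a two-level classification. As a minimal illustrative sketch (not an artefact of the paper; the names and example risks are my own labels), the taxonomy could be represented as a simple mapping from risk area to an example risk:

```python
# Hypothetical sketch: the six risk areas of the Weidinger et al. taxonomy,
# each paired with one illustrative risk drawn from the summary above.
RISK_TAXONOMY = {
    "Discrimination, hate speech and exclusion": "stereotyping and unfair bias",
    "Information hazards": "leaking private or sensitive information",
    "Misinformation harms": "confidently stated false claims",
    "Malicious uses": "disinformation and fraud at scale",
    "Human-computer interaction harms": "unsafe reliance on anthropomorphised agents",
    "Environmental and socioeconomic harms": "energy costs and labour displacement",
}

def example_risk(risk_area: str) -> str:
    """Return an example risk for a given area, or a fallback message."""
    return RISK_TAXONOMY.get(risk_area, "unknown risk area")

print(len(RISK_TAXONOMY))  # six top-level risk areas
print(example_risk("Information hazards"))
```

This is only a structural aid for readers; the paper itself identifies twenty-one individual risks beneath these six areas.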

Monday, January 31, 2022

The future of work: freedom, justice and capital in the age of artificial intelligence

F. S. de Sio, T. Almeida, & J. van den Hoven (2021). Critical Review of International Social and Political Philosophy. DOI: 10.1080/13698230.2021.2008204

Abstract

Artificial Intelligence (AI) is predicted to have a deep impact on the future of work and employment. The paper outlines a normative framework to understand and protect human freedom and justice in this transition. The proposed framework is based on four main ideas: going beyond the idea of a Basic Income to compensate the losers in the transition towards AI-driven work, towards a Responsible Innovation approach, in which the development of AI technologies is governed by an inclusive and deliberate societal judgment; going beyond a philosophical conceptualisation of social justice only focused on the distribution of ‘primary goods’, towards one focused on the different goals, values, and virtues of various social practices (Walzer’s ‘spheres of justice’) and the different individual capabilities of persons (Sen’s ‘capabilities’); going beyond a classical understanding of capital, towards one explicitly including mental capacities as a source of value for AI-driven activities. In an effort to promote an interdisciplinary approach, the paper combines political and economic theories of freedom, justice and capital with recent approaches in applied ethics of technology, and starts applying its normative framework to some concrete examples of AI-based systems: healthcare robotics, ‘citizen science’, social media and platform economy.

From the Conclusion

Whether or not it will create a net job loss (i.e., technological unemployment), Artificial Intelligence and digital technologies will change the nature of work and will have a deep impact on people’s work lives. New political action is needed to govern this transition. In this paper we have claimed that new philosophical concepts are also needed if the transition is to be governed responsibly and in the interest of everybody. The paper has outlined a general normative framework to make sense of, and address, the issue of human freedom and justice in the age of AI at work. The framework is based on four ideas. First, freedom and justice cannot in general be achieved only by protecting existing jobs as a goal in itself, by inviting persons to find ways to remain relevant in a new machine-driven world, or by offering financial compensation to those who are (permanently) left unemployed, for instance via a Universal Basic Income. We should rather prevent technological unemployment and the worsening of working conditions from happening in the first place, through a Responsible Innovation approach to technology in which freedom and justice are built into the technical and institutional structures of the work of the future. Second, and more particularly, we have argued that freedom and justice may best be promoted by a politics and an economics of technology informed by the recognition of different virtues and values as constitutive of different activities, following a Walzerian (‘spheres of justice’) approach to technological and institutional design, possibly integrated with a virtue ethics component.

Thursday, July 29, 2021

Technology in the Age of Innovation: Responsible Innovation as a New Subdomain Within the Philosophy of Technology

von Schomberg, L., Blok, V. 
Philos. Technol. 34, 309–323 (2021). 
https://doi.org/10.1007/s13347-019-00386-3

Abstract

Praised as a panacea for resolving all societal issues, and self-evidently presupposed as technological innovation, the concept of innovation has become the emblem of our age. This is especially reflected in the context of the European Union, where it is considered to play a central role in both strengthening the economy and confronting the current environmental crisis. The pressing question is how technological innovation can be steered into the right direction. To this end, recent frameworks of Responsible Innovation (RI) focus on how to enable outcomes of innovation processes to become societally desirable and ethically acceptable. However, questions with regard to the technological nature of these innovation processes are rarely raised. For this reason, this paper raises the following research question: To what extent is RI possible in the current age, where the concept of innovation is predominantly presupposed as technological innovation? On the one hand, we depart from a post-phenomenological perspective to evaluate the possibility of RI in relation to the particular technological innovations discussed in the RI literature. On the other hand, we emphasize the central role innovation plays in the current age, and suggest that the presupposed concept of innovation projects a techno-economic paradigm. In doing so, we ultimately argue that in the attempt to steer innovation, frameworks of RI are in fact steered by the techno-economic paradigm inherent in the presupposed concept of innovation. Finally, we account for what implications this has for the societal purpose of RI.

The Conclusion

Hence, even though RI provides a critical analysis of innovation at the ontic level (i.e., concerning the introduction and usage of particular innovations), it still lacks a critical analysis at the ontological level (i.e., concerning the techno-economic paradigm of innovation). Therefore, RI is in need of a fundamental reflection that not only exposes the techno-economic paradigm of innovation—which we did in this paper—but that also explores an alternative concept of innovation which addresses the public good beyond the current privatization wave. The political origins of innovation that we encountered in Section 2, along with the political ends that the RI literature explicitly prioritizes, suggest that we should inquire into a political orientation of innovation. A crucial task of this inquiry would be to account for what such a political orientation of innovation precisely entails at the ontic level, and how it relates to the current techno-economic paradigm of innovation at the ontological level.