Kijewski, S., Ronchi, E., & Vayena, E. (2024). AI and Ethics.
Abstract
The rapid advancement of artificial intelligence (AI) has sparked the development of principles and guidelines for ethical AI by a broad set of actors. Given the high-level nature of these principles, stakeholders seek practical guidance for implementing them in the development, deployment, and use of AI, fueling the growth of practical approaches for ethical AI. This paper reviews, synthesizes, and assesses current practical approaches for ethical AI in health, examining their scope and their potential to aid organizations in adopting ethical standards. We performed a scoping review of existing reviews in accordance with the PRISMA extension for scoping reviews (PRISMA-ScR), systematically searching databases and the web between February and May 2023. A total of 4284 documents were identified, of which 17 were included in the final analysis. Content analysis was performed on the final sample. We identified a highly heterogeneous ecosystem of approaches and diverse use of terminology; a higher prevalence of approaches for certain stages of the AI lifecycle, reflecting the dominance of specific stakeholder groups in their development; and several barriers to the adoption of these approaches. These findings underscore the necessity of a nuanced understanding of the implementation context for these approaches and show that no one-size-fits-all approach to ethical AI exists. While common terminology is needed, it should not come at the cost of pluralism in the available approaches. As governments signal interest in and develop practical approaches, significant effort remains to guarantee their validity, reliability, and efficacy as tools for governance across the AI lifecycle.
Here are some thoughts:
The scoping review reveals a complex and varied landscape of practical approaches to ethical AI, marked by inconsistent terminology and a lack of consensus on defining characteristics such as purpose and target audience. There is currently no shared understanding of terms like "tools," "toolkits," and "frameworks" in the context of ethical AI, which complicates their use in governance. A clear categorization of these approaches is therefore essential for policymakers; the diversity in terminology and ethical principles suggests that no single method can effectively promote AI ethics. Implementing any of these approaches requires a thorough understanding of the operational context of the AI system and the ethical concerns it raises.
While there is a pressing need to standardize terminology, standardization should not come at the expense of diversity, as different contexts may require distinct approaches. The review also finds significant variation in how these approaches map onto the AI lifecycle: many focus on early stages such as design and development, while guidance for later stages is notably scarce. This gap may reflect the private sector's dominant role in AI system design and its associated governance mechanisms, which often prioritize reputational risk management over comprehensive ethical oversight.
The review raises three critical questions. First, does the rise of practical approaches to AI ethics represent a business opportunity, one that could produce a proliferation of options without rigorous evaluation? Second, how robust are these approaches for monitoring AI systems, given the shortage of practical methods for auditing and impact assessment? Third, does effective AI governance require context-specific approaches, and should standards such as "ethical disclosure by default" be adopted to enhance transparency and accountability?
Significant barriers to the adoption of these approaches have been identified, including the high levels of expertise and resources they require, a general lack of awareness that they exist, and the absence of effective methods for measuring successful implementation. The review therefore emphasizes the need for practical validation metrics to assess compliance with ethical principles, since measuring the impact of AI ethics remains challenging.