Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Governance.

Friday, October 30, 2020

The corporate responsibility facade is finally starting to crumble

Alison Taylor
Yahoo Finance
Originally posted March 4, 2020

Here is an excerpt:

Any claim to be a responsible corporation is predicated on addressing these abuses of power. But most companies are instead clinging with remarkable persistence to the façades they’ve built to deflect attention. Compliance officers focus on pleasing regulators, even though there is limited evidence that their recommendations reduce wrongdoing. Corporate sustainability practitioners drown their messages in an alphabet soup of acronyms, initiatives, and alienating jargon about “empowered communities” and “engaged stakeholders,” even as both functions remain peripheral to corporate strategy.

When reading a corporation’s sustainability report and then comparing it to its risk disclosures—or worse, its media coverage—we might as well be reading about entirely distinct companies. Investors focused on sustainability speak of “materiality” principles, meant to sharpen our focus on the most relevant environmental, social, and governance (ESG) issues for each industry. But when an issue is “material” enough to threaten core operating models, companies routinely ignore, evade, and equivocate.

Coca-Cola’s most recent annual sustainability report acknowledges its most pressing issue is “obesity concerns and category perceptions.” Accordingly, it highlights its lower-sugar product lines and references responsible marketing. But it continues its vigorous lobbying against soda taxes, and of course continues to make products with known links to obesity and other health problems. Facebook’s sustainability disclosures focus on efforts to fight climate change and improve labor rights in its supply chain, but make no reference to the mental-health impacts of social media or to its role in peddling disinformation and undermining democracy. Johnson & Johnson flags “product quality and safety” as its highest priority issue without mentioning that it is a defendant in criminal litigation over distribution of opioids. UBS touts its sustainability targets but not its ongoing financing of fossil-fuel projects.

Tuesday, May 12, 2020

Freedom in an Age of Algocracy

John Danaher
forthcoming in Oxford Handbook on the Philosophy of Technology
edited by Shannon Vallor

Abstract

There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, critics have struggled to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception of freedom as well as a broader conception of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology.

From the Conclusion:

Finally, I’ve outlined a framework for thinking about the likely impact of algocracy on freedom. Given the complexity of freedom and the complexity of algocracy, I’ve argued that there is unlikely to be a simple global assessment of the freedom-promoting or undermining power of algocracy. This is something that has to be assessed and determined on a case-by-case basis. Nevertheless, there are at least five interesting and relatively novel mechanisms through which algocratic systems can both promote and undermine freedom. We should pay attention to these different mechanisms, but do so in a properly contextualized manner, and not by ignoring the pre-existing mechanisms through which freedom is undermined and promoted.

The book chapter is here.

Thursday, October 17, 2019

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and companies' historical approach to data has consistently been to grab it first and figure out how to use and monetize it later. Today's consumers are becoming more educated and savvy about how companies collect, use, and monetize their data; they are starting to make buying decisions based on privacy considerations, and to complain to regulators and lawmakers about how the tech industry uses their data without their control or authorization.

As consumers' education slowly deepens, data privacy laws, both internationally and in the U.S., are starting to address consumers' concerns about the vast amount of individually identifiable data about them that is collected, used and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators and lawmakers are requiring them to do so. But tech companies should consider treating data ethics as a fundamental core value of the company's mission, and should determine how it will be addressed in their corporate governance structure.

The info is here.

Friday, August 9, 2019

Advice for technologists on promoting AI ethics

Joe McKendrick
www.zdnet.com
Originally posted July 13, 2019

Ethics looms as a vexing issue when it comes to artificial intelligence (AI). Where does AI bias spring from, especially when it's unintentional? Are companies paying enough attention to it as they plunge full-force into AI development and deployment? Are they doing anything about it? Do they even know what to do about it?

Wringing bias and unintended consequences out of AI is making its way into the job descriptions of technology managers and professionals, especially as business leaders turn to them for guidance and judgment. The drive to ethical AI means an increased role for technologists in the business, as described in a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. The survey was able to make direct connections between AI ethics and business growth: if consumers sense a company is employing AI ethically, they'll keep coming back; if they sense unethical AI practices, their business is gone.

Competitive pressure is the reason businesses are pushing AI to its limits and risking crossing ethical lines. "The pressure to implement AI is fueling ethical issues," the Capgemini authors, led by Anne-Laure Thieullent, managing director of Capgemini's Artificial Intelligence & Analytics Group, state. "When we asked executives why ethical issues resulting from AI are an increasing problem, the top-ranked reason was the pressure to implement AI." Thirty-four percent cited this pressure to stay ahead with AI trends.

The info is here.

Monday, July 29, 2019

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professionals. The legitimacy of particular applications and their underlying business interests remain largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Tuesday, June 4, 2019

What's The Difference Between Compliance And Ethics?

Bruce Weinstein
Forbes.com
Originally posted May 9, 2019

I've noticed some confusion about the roles that ethics and compliance play in organizations. This confusion arises, in part, from the way these two fields are identified. Some companies have only a compliance department. Others have a compliance and ethics (or ethics and compliance) department. Some companies have a Chief Ethics Officer separate from compliance.

To get some clarity on these crucial roles, I asked seven leaders who are involved in both ethics and compliance to explain the similarities and differences as they saw them. I'll present their views, offer my own analysis and then consider what this means for your career and your organization.

(cut)

The Takeaways

What does all of this mean for you?

  1. If you're in compliance and/or ethics, it's worth having a clear understanding of what each department or program is about, how they're similar and how they differ. Then make sure that everyone in the organization understands these similarities and differences and what this means for their own roles.
  2. If you're not in compliance or ethics, find out how the company defines each area and what this means for you. Whether you want to move up in the organization or simply remain gainfully employed there, you will put yourself in good stead if you know the difference between ethics and compliance as your company defines them.
  3. No matter how your company views compliance and ethics, what its code of conduct is or whether you work within or outside of the compliance and ethics programs, it's not enough to ask, "What do laws, regulations or policies require of me?" The follow-up question should always be, "What is the right thing to do?"

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.
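
Shum's checklist idea is easy to picture as an automated release gate. Here is a minimal sketch in Python of what one bias check on such a checklist might look like. This is an illustrative assumption, not Microsoft's actual process: the function name, the choice of demographic parity as the metric, and the 0.1 threshold are all hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups.

    Hypothetical helper for illustration; not drawn from Microsoft's audit tooling.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(bool(pred))
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: the model approves 80% of group A but only 40% of group B.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative release-gate threshold, not a real standard
    print("FAIL: bias audit exceeds threshold; block release")
```

In a real pipeline a check like this would sit alongside the privacy, security, and accessibility audits Shum mentions, as one item on the pre-release list rather than a complete bias review.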

The info is here.

Tuesday, January 8, 2019

Algorithmic governance: Developing a research agenda through the power of collective intelligence

John Danaher, Michael J. Hogan, Chris Noone, Ronan Kennedy, et al.
Big Data & Society
July–December 2017: 1–21

Abstract

We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal, and that they are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

The paper is here.

Sunday, December 9, 2018

The Vulnerable World Hypothesis

Nick Bostrom
Working Paper (2018)

Abstract

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

The working paper is here.

Vulnerable World Hypothesis: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent, given stakeholders' goals for the system.
  • Security: applying cybersecurity paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification (a toy sketch follows this list).
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.
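
To make the "Verification" entry concrete, here is a toy sketch in Python. It is my own illustrative example, not the Institute's: an implementation checked against an executable specification by randomized testing. Genuine verification work uses proof assistants and model checkers rather than random trials, but the shape of the problem is the same: a specification, an implementation, and a procedure for gaining confidence that the two agree.

```python
import random

def insertion_sort(xs):
    """The implementation under test (a simple insertion sort)."""
    result = []
    for x in xs:
        i = len(result)
        # Walk left past every element greater than x, then insert.
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

def satisfies_spec(inp, out):
    """Executable specification: output is ordered and a permutation of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

# Lightweight stand-in for formal verification: many random trials against the spec.
for trial in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert satisfies_spec(xs, insertion_sort(xs)), f"spec violated on {xs}"
print("1000 random trials passed the specification")
```

Passing random trials only builds confidence; proving the implementation correct for all inputs, which is what the Institute's "Verification" item is really after, requires formal methods.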