Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Principles. Show all posts

Saturday, June 26, 2021

Making moral principles suit yourself


Stanley, M.L., Henne, P., Niemi, L. et al. 
Psychon Bull Rev (2021). 
https://doi.org/10.3758/s13423-021-01935-8

Abstract

Normative ethical theories and religious traditions offer general moral principles for people to follow. These moral principles are typically meant to be fixed and rigid, offering reliable guides for moral judgment and decision-making. In two preregistered studies, we found consistent evidence that agreement with general moral principles shifted depending upon events recently accessed in memory. After recalling their own personal violations of moral principles, participants agreed less strongly with those very principles—relative to participants who recalled events in which other people violated the principles. This shift in agreement was explained, in part, by people’s willingness to excuse their own moral transgressions, but not the transgressions of others. These results have important implications for understanding the roles of memory and personal identity in moral judgment. People’s commitment to moral principles may be maintained when they recall others’ past violations, but their commitment may wane when they recall their own violations.

From the General Discussion

Moral disengagement mechanisms (e.g., distorting the consequences of actions, dehumanizing victims) help people to convince themselves that their actions are permissible and that their ethical standards need not apply in certain contexts (Bandura, 1999; Bandura et al., 1996; Detert et al., 2008). These disengagement mechanisms are thought to help people to protect their favorable views of themselves. Note that convincing oneself that a particular action is morally acceptable in a particular context via moral disengagement entails maintaining the same level of agreement with the overarching moral principles; the principle just does not apply in some particular context. In contrast, our findings suggest that by reflecting on their own morally objectionable actions, people’s agreement with the overarching, guiding principles changes. It is not that the principle does not apply; it is that the principle is held with less conviction.

(cut)

Normative ethical theories and religious traditions that offer general moral principles are meant to help us to understand aspects of ourselves and our world in ways that offer insights and guidance for living a moral life (Albertzart, 2013; Väyrynen, 2008). Our findings introduce some cause for doubt about the stability of moral principles over time, and therefore, their reliability as accurate indicators of moral judgments and actions in the real world.

Wednesday, April 7, 2021

Actionable Principles for Artificial Intelligence Policy: Three Pathways

Stix, C. 
Sci Eng Ethics 27, 15 (2021). 
https://doi.org/10.1007/s11948-020-00277-3

Abstract

In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission’s “High Level Expert Group on AI”. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of 'Actionable Principles for AI'. The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and, (3) mechanisms to support implementation and operationalizability.

(cut)

Actionable Principles

In many areas, including AI, it has proven challenging to bridge ethics and governmental policy-making (Müller 2020, 1.3). To be clear, many AI Ethics Principles, such as those developed by industry actors or researchers for self-governance purposes, are not aimed at directly informing governmental policy-making, and therefore the challenge of bridging this gulf may not apply. Nonetheless, a significant subset of AI Ethics Principles are addressed to governmental actors, from the 2019 OECD Principles on AI (OECD 2019) to the US Defence Innovation Board’s AI Principles adopted by the Department of Defence (DIB 2019). Without focussing on any single effort in particular, the aggregate success of many AI Ethics Principles remains limited (Rességuier and Rodrigues 2020). Clear shifts in governmental policy that can be directly traced back to preceding and corresponding sets of AI Ethics Principles remain few and far between. Such shifts could mean, for example, concrete textual references reflecting a specific section of the AI Ethics Principles, or the establishment of (whether enabling or preventative) policy actions building on relevant recommendations. A charitable interpretation could be that as governmental policy-making takes time, and given that the vast majority of AI Ethics Principles were published within the last two years, it may simply be premature to gauge (or dismiss) their impact. However, another interpretation could be that the current versions of AI Ethics Principles have fallen short of their promise, and reached their limit for impact in governmental policy-making (henceforth: policy).

It is worth noting that successful actionability in policy goes well beyond AI Ethics Principles acting as a reference point. Actionable Principles could shape policy by influencing funding decisions, taxation, public education measures or social security programs. Concretely, this could mean increased funding for societally relevant areas, education programs to raise public awareness and increase vigilance, or a rethinking of retirement structures with regard to increased automation. To be sure, actionability in policy does not preclude impact in other adjacent domains, such as influencing codes of conduct for practitioners, clarifying what demands workers and unions should pose, or shaping consumer behaviour. Moreover, during political shifts or in response to a crisis, Actionable Principles may often prove to be the only (even if suboptimal) available governance tool to quickly inform precautionary and remedial (legal and) policy measures.


Thursday, May 21, 2020

Discussing the ethics of hydroxychloroquine prescriptions for COVID-19 prevention

Sharon Yoo
KARE11.com
Originally published 19 May 20

President Donald Trump said on Monday that he's been taking hydroxychloroquine to protect himself against the coronavirus. It is a drug typically used to treat malaria and lupus.

The Food and Drug Administration issued warnings that the drug should only be used in clinical trials or for patients at a hospital under the Emergency Use Authorization.

"Yeah, a White House doctor, didn't recommend—I asked him what do you think—and he said well, if you'd like it and I said yeah, I'd like it, I'd like to take it," President Trump said, when a reporter asked him if a White House doctor recommended that he take hydroxychloroquine on Monday.

In a statement, the President's physician, Dr. Sean Conley said after discussions, they've concluded the potential benefit from treatment outweighed the relative risks. All this, despite the FDA warnings.

University of Minnesota bioethics professor Joel Wu said this is problematic.

"It's ethically problematic if the President is being treated for COVID specifically by hydroxychloroquine because our understanding based on the current evidence is not safe or effective in treating or preventing COVID," Wu said.

The info is here.

Friday, March 13, 2020

DoD unveils how it will keep AI in check with ethics principles

Scott Maucione
federalnewsnetwork.com
Originally posted 25 Feb 20

Here is an excerpt:

The principle areas are based on recommendations from a 15-month study by the Defense Innovation Board — a panel of science and technology experts from industry and academia.

The principles are as follows:

  1. Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Friday, February 21, 2020

Why Google thinks we need to regulate AI

Sundar Pichai
ft.com
Originally posted 19 Jan 20

Here are two excerpts:

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

(cut)

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
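Pichai does not spell out what “testing AI decisions for fairness” involves in practice. As a rough, hypothetical sketch (the decisions, group labels, and numbers below are invented for illustration and do not reflect any Google tool), one of the simplest such checks compares a model’s selection rates across groups:

```python
# Hypothetical sketch of a fairness check: compare positive-decision rates
# across groups (demographic parity). All data here is invented.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: 1 = approved, 0 = denied.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(decisions, groups))         # {'a': 0.75, 'b': 0.25}
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A gap this large would normally prompt a closer look at the training data and decision thresholds; real fairness audits use richer metrics, but the basic move is the same.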

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.


Wednesday, January 22, 2020

‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground

Joe McKendrick
Forbes.com
Originally published 22 Dec 19

Here is an excerpt:

Inevitably, “there will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination,” warns Mike Walsh, CEO of Tomorrow, and author of The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, in a recent Harvard Business Review article. “At the very least, trust the algorithmic processes at the heart of your business. Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as ‘the algorithm made me do it.’”

It’s more than legal considerations that should drive new thinking about AI ethics. It’s about “maintaining trust between organizations and the people they serve, whether clients, partners, employees, or the general public,” a recent report out of Accenture maintains. The report’s authors, Ronald Sandler and John Basl, both with Northeastern University’s philosophy department, and Steven Tiell of Accenture, state that a well-organized data ethics capacity can help organizations manage risks and liabilities associated with such data misuse and negligence.

“It can also help organizations clarify and make actionable mission and organizational values, such as responsibilities to and respect for the people and communities they serve,” Sandler and his co-authors advocate. A data ethics capability also offers organizations “a path to address the transformational power of data-driven AI and machine learning decision-making in an anticipatory way, allowing for proactive responsible development and use that can help organizations shape good governance, rather than inviting strict oversight.”

The info is here.

Monday, January 20, 2020

What Is Prudent Governance of Human Genome Editing?

Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.

Abstract

CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which is unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.

Here is an excerpt:

Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is at its beginning stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse. Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genome (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regards to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.

The info is here.

Thursday, December 19, 2019

Where AI and ethics meet

Stephen Fleischresser
Cosmos Magazine
Originally posted 18 Nov 19

Here is an excerpt:

His first argument concerns common aims and fiduciary duties, the duties in which trusted professionals, such as doctors, place others’ interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients and Mittelstadt argues that it is a “defining quality of a profession for its practitioners to be part of a ‘moral community’ with common aims, values and training”.

For the field of AI research, however, the same cannot be said. “AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts,” Mittelstadt writes. “The fundamental aims of developers, users and affected parties do not necessarily align.”

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

“AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests,” he writes. In AI research, “public interests are not granted primacy over commercial interests”.

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, “AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks”.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism which provides fuller and more satisfactory ethical guidance.

The info is here.

Friday, November 15, 2019

Is Moral Relativism Really a Problem?

Thomas Polzler
Scientific American Blog
Originally published October 16, 2019

Here is an excerpt:

Warnings against moral relativism are most often based on theoretical speculation. Critics consider the view’s nature and add certain assumptions about human psychology. Then they infer how being a relativist might affect a person’s behavior. For example, for a relativist, even actions such as murder or rape can never be really or absolutely wrong; they are only wrong to the extent that the relativist or most members of his or her culture believe them to be so.

One may therefore worry that relativists are less motivated to refrain from murdering and raping than people who regard these actions as objectively wrong. While this scenario may sound plausible, it is important to note that relativism’s effects can only ultimately be determined by relevant studies.

So far, scientific investigations do not support the suspicion that moral relativism is problematic. True, there are two studies that do suggest such a conclusion. In one of them, participants were led to think about morality in either relativist or objectivist terms. It turned out that subjects in the relativist condition were more likely to cheat in a lottery and to state that they would be willing to steal than those in the objectivist condition. In the other study, participants who had been exposed to relativist ideas were less likely to donate to charity than those who had been exposed to objectivist ones.

That said, there is also evidence that associates moral relativism with positive behaviors. In one of her earlier studies, Wright and her colleagues informed their participants that another person disagreed with one of their moral judgments. Then the researchers measured the subjects’ degree of tolerance for this person’s divergent moral view. For example, participants were asked how willing they would be to interact with the person, how willing they would be to help him or her and how comfortable they generally were with another individual denying one of their moral judgments. It turned out that subjects with relativist leanings were more tolerant toward the disagreeing person than those who had tended toward objectivism.

The info is here.

Wednesday, November 6, 2019

How to operationalize AI ethics

Khari Johnson
venturebeat.com
Originally published October 7, 2019

Here is an excerpt:

Tools, frameworks, and novel approaches

One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.

“The key one is ethical risk sweeps, periodically scheduling times to really go through what could go wrong and what are the ethical risks. Because I think a big part of ethics is thinking through what can go wrong before it does and having processes in place around what happens when there are mistakes or errors,” she said.

To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative as well as FairTest, a tool for “discovering unwarranted association within data-driven applications” from academic institutions like EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to ensure better user privacy.
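The panelists’ recommendations are named but not explained. As a minimal, illustrative sketch of the federated-averaging idea behind federated learning (toy data and hyperparameters invented here, not any particular library’s API), each client trains on its own private data and only the resulting model weights, never the raw records, are shared and averaged:

```python
# Illustrative sketch of federated averaging on a toy linear model.
# Only weights leave each "client"; raw data stays local, which is the
# privacy-relevant point. Data and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(w, client_data, rounds=20):
    """Average locally trained weights, weighted by client dataset size."""
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w

# Three clients, each holding private samples of y = 3*x1 - 2*x2 + noise.
true_w = np.array([3.0, -2.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

print(federated_average(np.zeros(2), clients))  # approximately [ 3. -2.]
```

In a real deployment the server would typically add protections such as secure aggregation or differential privacy, since model updates alone can still leak information about the underlying data.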

In addition to resources recommended by panelists, Algorithm Watch maintains a running list of AI ethics guidelines. Last week, the group found that guidelines released in March 2018 by IEEE, the world’s largest association for professional engineers, have seen little adoption at Facebook and Google.

The info is here.

Tuesday, August 20, 2019

What Alan Dershowitz taught me about morality

Molly Roberts
The Washington Post
Originally posted August 2, 2019

Here are two excerpts:

Dershowitz has been defending Donald Trump on television for years, casting himself as a warrior for due process. Now, Dershowitz is defending himself on TV, too, against accusations at the least that he knew about Epstein allegedly trafficking underage girls for sex with men, and at the worst that he was one of the men.

These cases have much in common, and they both bring me back to the classroom that day when no one around the table — not the girl who invoked Ernest Hemingway’s hedonism, nor the boy who invoked God’s commandments — seemed to know where our morality came from. Which was probably the point of the exercise.

(cut)

You can make a convoluted argument that investigations of the president constitute irresponsible congressional overreach, but contorting the Constitution is your choice, and the consequences to the country of your contortion are yours to own, too. Everyone deserves a defense, but lawyers in private practice choose their clients — and putting a particular focus on championing those Dershowitz calls the “most unpopular, most despised” requires grappling with what it means for victims when an abuser ends up with a cozy plea deal.

When the alleged abuser is your friend Jeffrey, whose case you could have avoided precisely because you have a personal relationship, that grappling is even more difficult. Maybe it’s still all worth it to keep the system from falling apart, because next time it might not be a billionaire financier who wanted to seed the human race with his DNA on the stand, but a poor teenager framed for a crime he didn’t commit.

Dershowitz once told the New York Times he regretted taking Epstein’s case. He told me, “I would do it again.”

The info is here.

Monday, July 29, 2019

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professions. The legitimacy of particular applications and their underlying business interests remain largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Wednesday, April 17, 2019

A New Model For AI Ethics In R&D

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

The traditional ethics oversight and compliance model has two major problems, whether it is used in biomedical research or in AI. First, a list of guiding principles—whether four or 40—just summarizes important ethical concerns without resolving the conflicts between them.

Say, for example, that the development of a life-saving AI diagnostic tool requires access to large sets of personal data. The principle of respecting autonomy—that is, respecting every individual’s rational, informed, and voluntary decision making about herself and her life—would demand consent for using that data. But the principle of beneficence—that is, doing good—would require that this tool be developed as quickly as possible to help those who are suffering, even if this means neglecting consent. Any board relying solely on these principles for guidance will inevitably face an ethical conflict, because no hierarchy ranks these principles.

Second, decisions handed down by these boards are problematic in themselves. Ethics boards are far removed from researchers, acting as all-powerful decision-makers. Once ethics boards make a decision, typically no appeals process exists and no other authority can validate their decision. Without effective guiding principles and appropriate due process, this model uses ethics boards to police researchers. It implies that researchers cannot be trusted and it focuses solely on blocking what the boards consider to be unethical.

We can develop a better model for AI ethics, one in which ethics complements and enhances research and development and where researchers are trusted collaborators with ethicists. This requires shifting our focus from principles and boards to ethical reasoning and teamwork, from ethics policing to ethics integration.

The info is here.

Tuesday, February 19, 2019

How Our Attitude Influences Our Sense Of Morality

Konrad Bocian
Science Trend
Originally posted January 18, 2019

Here is an excerpt:

People think that their moral judgment is as rational and objective as scientific statements, but science does not confirm that belief. Within the last two decades, scholars interested in moral psychology discovered that people produce moral judgments based on fast and automatic intuitions rather than on rational and controlled reasoning. For example, moral cognition research showed that moral judgments arise in approximately 250 milliseconds, and even then we are not able to explain them. Developmental psychologists proved that already at the age of 3 months, babies who do not have any language skills can distinguish a good protagonist (a helping one) from a bad one (a hindering one). But this does not mean that people’s moral judgments are based solely on intuitions. We can use deliberative processes when conditions are favorable – when we are both motivated to engage in and capable of conscious responding.

When we imagine how we would morally judge other people in a specific situation, we refer to actual rules and norms. If the laws are violated, the act itself is immoral. But we forget that intuitive reasoning also plays a role in forming a moral judgment. It is easy to condemn the librarian when our interest is involved on paper, but the whole picture changes when real money is on the table. We have known that rule for a very long time, but we still forget to use it when we predict our moral judgments.

Based on previous research on the intuitive nature of moral judgment, we decided to test how far our attitudes can impact our perception of morality. In our daily life, we meet a lot of people who are to some degree familiar, and we either have a positive or negative attitude toward these people.

The info is here.

Tuesday, January 1, 2019

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Floridi, L., Cowls, J., Beltrametti, M. et al.
Minds & Machines (2018).
https://doi.org/10.1007/s11023-018-9482-5

Abstract

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Friday, December 14, 2018

Why Health Professionals Should Speak Out Against False Beliefs on the Internet

Joel T. Wu and Jennifer B. McCormick
AMA J Ethics. 2018;20(11):E1052-1058.
doi: 10.1001/amajethics.2018.1052.

Abstract

Broad dissemination and consumption of false or misleading health information, amplified by the internet, poses risks to public health and problems for both the health care enterprise and the government. In this article, we review government power for, and constitutional limits on, regulating health-related speech, particularly on the internet. We suggest that government regulation can only partially address false or misleading health information dissemination. Drawing on the American Medical Association’s Code of Medical Ethics, we argue that health care professionals have responsibilities to convey truthful information to patients, peers, and communities. Finally, we suggest that all health care professionals have essential roles in helping patients and fellow citizens obtain reliable, evidence-based health information.

Here is an excerpt:

We would suggest that health care professionals have an ethical obligation to correct false or misleading health information, share truthful health information, and direct people to reliable sources of health information within their communities and spheres of influence. After all, health and well-being are values shared by almost everyone. Principle V of the AMA Principles of Ethics states: “A physician shall continue to study, apply, and advance scientific knowledge, maintain a commitment to medical education, make relevant information available to patients, colleagues, and the public, obtain consultation, and use the talents of other health professionals when indicated” (italics added). And Principle VII states: “A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health” (italics added). Taken together, these principles articulate an ethical obligation to make relevant information available to the public to improve community and public health. In the modern information age, wherein the unconstrained and largely unregulated proliferation of false health information is enabled by the internet and medical knowledge is no longer privileged, these 2 principles have a special weight and relevance.

Thursday, October 18, 2018

When You Fear Your Company Has Forgotten Its Principles

Sue Shellenbarger
The Wall Street Journal
Originally published September 17, 2018

Here is an excerpt:

People who object on principle to their employers’ conduct face many obstacles. One is the bystander effect—people’s reluctance to intervene against wrongdoing when others are present and witnessing it too, Dr. Grant says. Ask yourself in such cases, “If no one acted here, what would be the consequences?” he says. While most people think first about potential damage to their reputation and relationships, the long-term effects could be worse, he says.

Be careful not to argue too passionately for the changes you want, Dr. Grant says. Show respect for others’ viewpoint, and acknowledge the flaws in your argument to show you’ve thought it through carefully.

Be open about your concerns, says Jonah Sachs, an Oakland, Calif., speaker and author of “Unsafe Thinking,” a book on creative risk-taking. People who complain in secret are more likely to make enemies and be seen as disloyal, compared with those who resist in the open, research shows.

Successful change-makers tend to frame proposed changes as benefiting the entire company and its employees and customers, rather than just themselves, Mr. Sachs says. He cites a former executive at a retail drug chain who helped persuade top management to stop selling cigarettes in its stores. While the move tracked with the company’s health-focused mission, the executive strengthened her case by correctly predicting that it would attract more health-minded customers.

The info is here.

Wednesday, October 3, 2018

Moral Reasoning

Richardson, Henry S.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Here are two brief excerpts:

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

(cut)

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B. If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
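In standard deontic notation (a reconstruction, not taken from Richardson’s entry; read O as “it ought to be that” and ◇ as “it is possible that”), the argument alluded to in the last two sentences runs as follows: conditions (1)–(3) together with agglomeration and “ought implies can” yield a contradiction, so genuine dilemmas require giving up at least one of those two principles.

```latex
% Sketch: why dilemmas presuppose rejecting agglomeration or "ought implies can"
\[
\begin{array}{ll}
\text{(1) } O A, \quad \text{(2) } O B, \quad \text{(3) } \neg\Diamond(A \wedge B) & \text{dilemma conditions}\\
O A \wedge O B \;\rightarrow\; O(A \wedge B) & \text{agglomeration}\\
O(A \wedge B) \;\rightarrow\; \Diamond(A \wedge B) & \text{``ought'' implies ``can''}\\
\therefore\; \Diamond(A \wedge B) \wedge \neg\Diamond(A \wedge B) & \text{contradiction}
\end{array}
\]
```

Condition (4) plays no role in generating the clash; it only ensures that the conflict is not resolved by one duty overriding the other.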

The entry is here.