Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Showing posts with label Trustworthy.

Monday, August 12, 2019

Why it now pays for businesses to put ethics before economics

John Drummond
The National
Originally published July 14, 2019

Here is an excerpt:

All major companies today have an ethics code or a statement of business principles. I know this because at one time my company designed such codes for many FTSE companies. And all of these codes enshrine a commitment to moral standards. And these standards are often higher than those required by law.

When the boards of companies agree to these principles, they largely do so because they believe in them – at the time. However, time moves on. People move on. The business changes. Along the way, company people forget.

So how can you tell if a business still believes in its stated principles? Actually, it is very simple. When an ethical problem, such as Mossmorran, happens, look to see who turns up to answer concerns. If it is a public relations man or woman, the company has lost the plot. By contrast, if it is the executive who runs the business, then the company is likely still in close touch with its ethical standards.

Economics and ethics can be seen as a spectrum. Ethics is at one side of the spectrum and economics at the other. Few organisations, or individuals for that matter, can operate on purely ethical lines alone, and few operate on solely economic considerations. Most organisations can be placed somewhere along this spectrum.

So, if a business uses public relations to shield top management from a problem, it occupies a position closer to economics than to ethics. On the other hand, where corporate executives face their critics directly, then the company would be located nearer to ethics.

The info is here.

Friday, June 30, 2017

Ethics and Artificial Intelligence With IBM Watson's Rob High

Blake Morgan
Forbes.com
Originally posted June 12, 2017

Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson. When customers interact with a chatbot, for example, they need to know they are communicating with a machine and not an actual human. AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.

Chatbots are one of the most commonly used forms of AI. Although they can be used successfully in many ways, there is still a lot of room for growth. As they currently stand, chatbots mostly perform basic actions like turning on lights, providing directions, and answering simple questions that a person asks directly. However, in the future, chatbots should and will be able to go deeper to find the root of the problem. For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation. In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

The article is here.

Tuesday, February 14, 2017

Are Kantians Better Social Partners? People Making Deontological Judgments Are Perceived to Be More Prosocial than They Actually Are

Capraro, V., Sippel, J., Zhao, B., et al.
(January 25, 2017)

Abstract

Why do people make deontological decisions, although they often lead to overall unfavorable outcomes? One account is receiving considerable attention: deontological judgments may signal commitment to prosociality and thus may increase people's chances of being selected as social partners --- which carries obvious long-term benefits. Here we test this framework by experimentally exploring whether people making deontological judgments are expected to be more prosocial than those making consequentialist judgments, and whether they actually are. We use two ways of identifying deontological choices. In a first set of three studies, we use a single moral dilemma whose consequentialist course of action requires a strong violation of Kant's practical imperative that humans should never be used solely as a mere means. In a second set of two studies, we use two moral dilemmas: one whose consequentialist course of action requires no violation of the practical imperative, and one whose consequentialist course of action requires a strong violation of it; we focus on people who change their decision when passing from the former dilemma to the latter, thereby revealing a strong reluctance to violate Kant's imperative. Using economic games, we take three measures of prosociality: trustworthiness, altruism, and cooperation. Our results offer converging evidence for a perception bias: people making deontological choices are believed to be more prosocial than those making consequentialist choices, but in fact they are not. Thus, these results provide evidence against the assumption that deontological judgments signal commitment to prosociality.

The article is here.

Saturday, December 24, 2016

The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability

Sacco, D.F., Brown, M., Lustgraaf, C.J.N. et al.
Evolutionary Psychological Science (2016).
doi:10.1007/s40806-016-0080-6

Abstract

Although various motives underlie moral decision-making, recent research suggests that deontological moral decision-making may have evolved, in part, to communicate trustworthiness to conspecifics, thereby facilitating cooperative relations. Specifically, social actors whose decisions are guided by deontological (relative to utilitarian) moral reasoning are judged as more trustworthy, are preferred more as social partners, and are trusted more in economic games. The current study extends this research by using an alternative manipulation of moral decision-making, as well as the inclusion of target facial identities, to explore the potential role of participant and target sex in reactions to moral decisions. Participants viewed a series of male and female targets, half of whom were presented as having responded to five moral dilemmas in a manner consistent with an underlying deontological motive, and half with a utilitarian motive; participants indicated their liking and trust toward each target. Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike than men did for targets whose decisions were consistent with utilitarianism. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living, and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.

The research is here.

Editor's Note: This research may apply to psychotherapy, leadership style, and politics.

Tuesday, November 8, 2016

Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition

Marc A. Edwards and Siddhartha Roy
Environmental Engineering Science. September 2016

Abstract

We argue that, over the last 50 years, incentives for academic scientists have become increasingly perverse in terms of competition for research funding, development of quantitative metrics to measure performance, and a changing business model for higher education itself. Furthermore, decreased discretionary funding at the federal and state level is creating a hypercompetitive environment between government agencies (e.g., EPA, NIH, CDC), for scientists in these agencies, and for academics seeking funding from all sources—the combination of perverse incentives and decreased funding increases pressures that can lead to unethical behavior. If a critical mass of scientists becomes untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences for humanity. Academia and federal agencies should better support science as a public good, incentivize altruistic and ethical outcomes, and de-emphasize sheer output.

The article is here.