Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, January 13, 2020

Big tech is thinking about digital ethics, and small businesses need to keep up

Daphne Leprince-Ringuet
zdnet.com
Originally posted 16 Dec 19

Here is an excerpt:

And insurance company Aviva recently published a one-page customer data charter along with an explainer video to detail how it uses personal information, "instead of long privacy policies that no one reads," said the company's chief data scientist, Orlando Machado.

For McDougall, however, this is just the tip of the iceberg. "We hear from Microsoft and Intel about what they are doing, and how they are implementing ethics," he said, "but there are many smaller organizations out there that are far from thinking about these things."

As an example of a positive development, he points to the GDPR, introduced last year in the EU, which provides more practical guidelines to ensure ethical business practices and the protection of privacy.

Even GDPR rules, however, are struggling to gain traction among SMBs. A survey conducted this year among 716 small businesses in Europe showed widespread ignorance of data security tools and loose adherence to the law's key privacy provisions.

About half of the respondents believed their organizations were compliant with the new rules – although only 9% were able to identify which end-to-end encrypted email service they used.

A full 44% said they were not confident that they always obtained consent or determined a lawful basis before using personal data.

The info is here.

ESG controversies wipe $500bn off value of US companies

Chris Flood
ft.com
Originally posted 14 Dec 19

Controversies involving environmental, social and governance (ESG) issues have wiped more than $500bn off the value of large US companies over the past five years, according to an analysis by Bank of America.

ESG-related risks are becoming increasingly important considerations for institutional investors and asset managers because of mounting fears about climate change, high-profile scams and damaging corporate governance failures.

Bank of America examined how 24 controversies related to accounting scandals, data breaches, sexual harassment cases and other ESG issues affected the stock prices of companies in the S&P 500 index, the main US equity market benchmark.

It found that these 24 ESG controversies together resulted in peak-to-trough market value losses of $534bn as the share prices of the companies involved sank relative to the S&P 500 over the following 12 months.

“The hit to market value of an ESG controversy is significant and the impact is long-lasting. It can take a year for a stock to reach a trough following an ESG controversy,” said Savita Subramanian, head of US equity and quantitative strategy at Bank of America. “Negative headlines stick in investors’ minds.”
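
To make the measurement concrete, here is a minimal sketch, in Python with made-up monthly prices, of how a peak-to-trough loss relative to a benchmark such as the S&P 500 can be computed. The function, the numbers and the twelve-month horizon are illustrative assumptions only; the article does not disclose Bank of America's actual methodology.

```python
# Illustrative only: a simple peak-to-trough loss of a stock measured relative to
# a benchmark index. The prices are invented and the method is an assumption; it
# does not reproduce Bank of America's analysis.
import pandas as pd

def relative_drawdown(stock_prices: pd.Series, benchmark_prices: pd.Series) -> float:
    """Largest peak-to-trough decline of the stock relative to the benchmark."""
    # Rebase both series to 1.0, then express the stock's performance relative to the index.
    relative = (stock_prices / stock_prices.iloc[0]) / (benchmark_prices / benchmark_prices.iloc[0])
    running_peak = relative.cummax()           # highest relative level reached so far
    drawdown = relative / running_peak - 1.0   # decline from that running peak
    return float(drawdown.min())               # most negative value = peak-to-trough loss

# Hypothetical monthly prices for the 12 months following a controversy.
stock = pd.Series([100, 97, 90, 85, 80, 78, 76, 75, 74, 75, 76, 77])
index = pd.Series([3000, 3010, 3025, 3040, 3055, 3070, 3080, 3090, 3100, 3110, 3120, 3130])
print(f"Relative peak-to-trough loss: {relative_drawdown(stock, index):.1%}")
```

Applying this kind of calculation to each company's market value and summing across the 24 cases is one plausible route to an aggregate figure like the $534bn cited, though the bank's exact approach is not described in the article.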

Bank of America declined to name any of the companies involved in the controversies.

The info is here.

Sunday, January 12, 2020

Bias in algorithmic filtering and personalization

Engin Bozdag
Ethics and Information Technology (2013) 15: 209.

Abstract

Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.

From the Discussion:

Today information seeking services can use interpersonal contacts of users in order to tailor information and to increase relevancy. This not only introduces bias as our model shows, but it also has serious implications for other human values, including user autonomy, transparency, objectivity, serendipity, privacy and trust. These values introduce ethical questions. Do private companies that are offering information services have a social responsibility, and should they be regulated? Should they aim to promote values that the traditional media was adhering to, such as transparency, accountability and answerability? How can a value such as transparency be promoted in an algorithm? How should we balance between autonomy and serendipity and between explicit and implicit personalization? How should we define serendipity? Should relevancy be defined as what is popular in a given location or by what our primary groups find interesting? Can algorithms truly replace human filterers?
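
As a purely illustrative sketch of the kind of personalization the paper analyzes, the Python fragment below re-ranks items by blending topical relevance with a per-user source affinity. All names, scores and the weighting constant are assumptions made up for illustration, not the algorithm of any real gatekeeper; the point is that the weight is a human design choice, so bias can enter through decisions made before the algorithm ever runs, not only through the data.

```python
# Illustrative only: a tiny personalized ranking function. The items, affinity
# scores and weighting are invented; no real platform's algorithm is shown here.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float   # general topical relevance, 0..1
    source: str

def personalized_ranking(items, user_affinity, affinity_weight=0.7):
    """Rank items by blending general relevance with this user's affinity for each source."""
    def score(item: Item) -> float:
        affinity = user_affinity.get(item.source, 0.0)   # how strongly the user engages with the source
        # affinity_weight is a human design choice: raising it makes the filter
        # favor familiar sources over generally relevant material.
        return (1 - affinity_weight) * item.relevance + affinity_weight * affinity
    return sorted(items, key=score, reverse=True)

items = [
    Item("Climate policy analysis", relevance=0.9, source="outlet_a"),
    Item("Celebrity news roundup", relevance=0.3, source="outlet_b"),
]
# A user who mostly engages with outlet_b sees the less relevant item ranked first.
for item in personalized_ranking(items, user_affinity={"outlet_b": 0.9, "outlet_a": 0.1}):
    print(item.title)
```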

The info can be downloaded here.

Saturday, January 11, 2020

A Semblance of Aliveness

J. van Grunsven & A. van Wynsberghe
Techné: Research in Philosophy and Technology
Published on December 3, 2019

While the design of sex robots is still in the early stages, the social implications of the potential proliferation of sex robots into our lives have been heavily debated by activists and scholars from various disciplines. What is missing in the current debate on sex robots and their potential impact on human social relations is a targeted look at the boundedness and bodily expressivity typically characteristic of humans, the role that these dimensions of human embodiment play in enabling reciprocal human interactions, and the manner in which this contrasts with sex robot-human interactions. Through a fine-grained discussion of these themes, rooted in fruitful but largely untapped resources from the field of enactive embodied cognition, we explore the unique embodiment of sex robots. We argue that the embodiment of the sex robot is constituted by what we term restricted expressivity and a lack of bodily boundedness and that this is the locus of negative but also potentially positive implications. We discuss the possible benefits that these two dimensions of embodiment may have for people within a specific demographic, namely some persons on the autism spectrum. Our preliminary conclusion—that the benefits and the downsides of sex robots reside in the same capability of the robot, its restricted expressivity and lack of bodily boundedness as we call it—demands we take stock of future developments in the design of sex robot embodiment. Given the importance of evidence-based research pertaining to sex robots in particular, as reinforced by Nature (2017) for drawing correlations and making claims, the analysis is intended to set the stage for future research.

The info is here.

Friday, January 10, 2020

Ethically Adrift: How Others Pull Our Moral Compass from True North, and How We Can Fix It

Moore, C., and F. Gino.
Research in Organizational Behavior 33 (2013): 53–77.

Abstract

This chapter is about the social nature of morality. Using the metaphor of the moral compass to describe individuals' inner sense of right and wrong, we offer a framework to help us understand social reasons why our moral compass can come under others' control, leading even good people to cross ethical boundaries. Departing from prior work focusing on the role of individuals' cognitive limitations in explaining unethical behavior, we focus on the socio-psychological processes that function as triggers of moral neglect, moral justification and immoral action, and their impact on moral behavior. In addition, our framework discusses organizational factors that exacerbate the detrimental effects of each trigger. We conclude by discussing implications and recommendations for organizational scholars to take a more integrative approach to developing and evaluating theory about unethical behavior.

From the Summary:

Even when individuals are aware of the ethical dimensions of the choices they are making, they may still engage in unethical behavior as long as they recruit justifications for it. In this section, we discussed the role of two social–psychological processes – social comparison and self-verification – that facilitate moral justification, which will lead to immoral behavior. We also discussed three characteristics of organizational life that amplify these social–psychological processes. Specifically, we discussed how organizational identification, group loyalty, and framing or euphemistic language can all affect the likelihood and extent to which individuals justify their actions, by judging them as ethical when in fact they are morally contentious. Finally, we discussed moral disengagement, moral hypocrisy, and moral licensing as intrapersonal consequences of these social facilitators of moral justification.

The paper can be downloaded here.

The Complicated Ethics of Genetic Engineering

Brit McCandless Farmer
cbsnews.com
Originally posted 8 Dec 19

Here is an excerpt:

A 2017 survey at the University of Wisconsin-Madison asked 1,600 members of the general public about their attitudes toward gene editing. The results showed 65 percent of respondents think gene editing is acceptable for therapeutic purposes. But when it comes to whether scientists should use technology for genetic enhancement, only 26 percent agreed.

Going forward, Church thinks genetic engineering needs government oversight. He is also concerned about reversibility—he does not want to create anything in his lab that cannot be reversed if it creates unintended consequences.

"A lot of the technology we develop, we try to make them reversible, containable," Church said. "So the risks are that some people get excited, so excited that they ignore well-articulated risks."

Back in his Harvard lab, Church's colleagues showed Pelley their work on "mini brains," tiny dots with millions of cells each. The cells, which come from a patient, can be grown into many types of organ tissue in a matter of days, making it possible for drugs to be tested on that patient's unique genome. Church aims to use genetic engineering to reverse aging and grow human organs for transplant.

Pelley said he was struck by the speed with which medical advancements are coming.

The info is here.

Thursday, January 9, 2020

How implicit bias harms patient care

Jeff Bendix
medicaleconomics.com
Originally posted 25 Nov 19

Here is an excerpt:

While many people have difficulty acknowledging that their actions are influenced by unconscious biases, the concept is particularly troubling for doctors, who have been trained to view—and treat—patients equally, and the vast majority of whom sincerely believe that they do.

“Doctors have been molded throughout medical school and all our training to be non-prejudiced when it comes to treating patients,” says James Allen, MD, a pulmonologist and medical director of University Hospital East, part of Ohio State University’s Wexner Medical Center. “It’s not only asked of us, it’s demanded of us, so many physicians would like to think they have no biases. But it’s not true. All human beings have biases.”

“Among physicians, there’s a stigma attached to any suggestion of racial bias,” adds Penner. “And were a person to be identified that way, there could be very severe consequences in terms of their career prospects or even maintaining their license.”

Ironically, as Penner and others point out, the conditions under which most doctors practice today—high levels of stress, frequent distractions, and brief visits that allow little time to get to know patients—are the ones most likely to heighten their vulnerability to unintentional biases.

“A doctor under time pressure from a backlog of overdue charting and whatever else they’re dealing with will have a harder time treating all patients with the same level of empathy and concern,” van Ryn says.

The info is here.

Artificial Intelligence Is Superseding Well-Paying Wall Street Jobs

Jack Kelly
forbes.com
Originally posted 10 Dec 19

Here is an excerpt:

Compliance people run the risk of being replaced too. “As bad actors become more sophisticated, it is vital that financial regulators have the funding resources, technological capacity and access to AI and automated technologies to be a strong and effective cop on the beat,” said Martina Rejsjö, head of Nasdaq Surveillance North America Equities.

Nasdaq, a tech-driven trading platform, has an associated regulatory body that offers over 40 different algorithms, using 35,000 parameters, to spot possible market abuse and manipulation in real time. “The massive and, in many cases, exponential growth in market data is a significant challenge for surveillance professionals,” Rejsjö said. “Market abuse attempts have become more sophisticated, putting more pressure on surveillance teams to find the proverbial needle in the data haystack.” In layman's terms, she believes that the future is in tech overseeing trading activities, as the human eye is unable to keep up with the rapid-fire, sophisticated global trading dominated by algorithms.
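
Nasdaq's actual algorithms and parameters are not public, but a toy example can make real-time surveillance concrete. The Python sketch below flags any trade whose size jumps far above a rolling baseline; the window and threshold are arbitrary assumptions, and a real surveillance system would combine many such signals.

```python
# Toy surveillance rule, for illustration only: flag a trade whose size is far
# above the recent rolling average. A real system such as Nasdaq's combines many
# signals and parameters; the window and threshold here are arbitrary assumptions.
from collections import deque

def make_volume_spike_detector(window: int = 50, threshold: float = 5.0):
    recent = deque(maxlen=window)  # rolling window of recent trade sizes

    def check(trade_size: float) -> bool:
        """Return True if this trade should be flagged for human review."""
        flagged = False
        if len(recent) == window:                  # only judge once the baseline is full
            baseline = sum(recent) / window
            flagged = trade_size > threshold * baseline
        recent.append(trade_size)
        return flagged

    return check

detector = make_volume_spike_detector(window=5, threshold=3.0)
for size in [100, 120, 95, 110, 105, 900]:         # the last trade is an obvious outlier
    print(size, "flagged" if detector(size) else "ok")
```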

When people say not to worry, that’s the precise time to worry. Companies—whether they are McDonald’s, introducing self-serve kiosks and firing hourly workers to cut costs, or top-tier investment banks that rely on software instead of traders to make million-dollar bets on the stock market—will continue to implement technology and downsize people in an effort to enhance profits and cut down on expenses. This trend will be hard to stop and will have serious consequences for workers at all levels and salaries.

The info is here.

Wednesday, January 8, 2020

Can expert bias be reduced in medical guidelines?

Sheldon Greenfield
BMJ 2019; 367
https://doi.org/10.1136/bmj.l6882 

Here are two excerpts:

Despite robust study designs, even double blind randomised controlled trials can be subject to subtle forms of bias. This can be because of the financial conflicts of interest of the authors, intellectual or discipline-based opinions, pressure on researchers from sponsors, or conflicting values. For example, some researchers may favour mortality over quality of life as a primary outcome, demonstrating a value conflict. The quality of evidence is often uneven and can include underappreciated sources of bias. This makes interpreting the evidence difficult, which results in guideline developers turning to “experts” to translate it into clinical practice recommendations.

Can we be confident that these experts are objective and free of bias? A 2011 Institute of Medicine (now known as the National Academy of Medicine) report challenged the assumption of objectivity among guideline development experts.

(cut)

The science that supports clinical medicine is constantly evolving. The pace of that evolution is increasing.

There is an urgent imperative to generate and update accurate, unbiased, clinical practice guidelines. So, what can we do now? I have two suggestions.

Firstly, the public, which may include physicians, nurses, and other healthcare providers dependent on guidelines, should advocate for organisations like the ECRI Institute and its international counterparts to be supported and looked to for setting standards.

Secondly, we should continue to examine the details and principles of “shared decision making” and other initiatives like it, so that doctors and patients can be as clear as possible in the face of uncertain evidence about medical treatments and recommendations.

It is an uphill battle, but one worth fighting.