Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, January 12, 2021

Is that artificial intelligence ethical? Sony to review all products

Nikkei Asia
Nikkei staff writers
Originally posted 22 Dec 2020

Here is an excerpt:

Sony will start screening all of its AI-infused products for ethical risks as early as spring, Nikkei has learned. If a product is deemed ethically deficient, the company will improve it or halt development.

Sony uses AI in its latest generation of the Aibo robotic dog, for instance, which can recognize up to 100 faces and continues to learn through the cloud.

Sony will incorporate AI ethics into its quality control, using internal guidelines.

The company will review artificially intelligent products from development to post-launch on such criteria as privacy protection. Ethically deficient offerings will be modified or dropped.

An AI Ethics Committee, with its head appointed by the CEO, will have the power to halt development on products with issues.

Even products well into development could still be dropped. Ones already sold could be recalled if problems are found. The company plans to gradually broaden the AI ethics rules to offerings in finance and entertainment as well.

As AI finds its way into more devices, the responsibilities of developers are increasing, and companies are strengthening ethical guidelines.

Monday, January 11, 2021

'The robot made me do it': Robots encourage risk-taking behaviour in people

Press Release
University of Southampton
Originally released 11 Dec 20

New research has shown robots can encourage people to take greater risks in a simulated gambling scenario than they would if there were nothing to influence their behaviours. Increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this study set out to explore.

Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, explained: "We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact."

This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly and 1 penny is added to the player's "temporary money bank". The balloons can explode at random, in which case the player loses any money banked for that balloon; the player can instead "cash in" at any point and move on to the next balloon.
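To make the task mechanics concrete, here is a minimal sketch of one BART balloon in Python. The 1-penny-per-pump reward and the random explosion come from the description above; the maximum of 128 pumps and the uniform explosion point are illustrative assumptions, not parameters reported by the study.

import random

def play_balloon(pumps_attempted, max_pumps=128):
    """Return pennies banked for one balloon, or 0 if it explodes first."""
    # Assumed uniform explosion point; the study does not report the actual schedule.
    explosion_point = random.randint(1, max_pumps)
    banked = 0
    for pump in range(1, pumps_attempted + 1):
        if pump >= explosion_point:
            return 0  # balloon exploded: the temporary bank for it is lost
        banked += 1   # each successful pump adds 1 penny
    return banked     # the player "cashed in" before an explosion

random.seed(0)
print(sum(play_balloon(10) for _ in range(30)))  # cautious strategy over 30 balloons
print(sum(play_balloon(60) for _ in range(30)))  # riskier strategy over 30 balloons

Under these assumptions, pumping longer raises both the payoff per surviving balloon and the chance of losing it, which is the trade-off the encouraging robot nudged participants toward.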

One third of the participants took the test in a room on their own (the control group), one third took the test alongside a robot that provided the instructions but was otherwise silent, and the final third (the experimental group) took the test with a robot that provided the instructions and also spoke encouraging statements such as "why did you stop pumping?"

The results showed that the group who were encouraged by the robot took more risks, blowing up their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference in the behaviours of the students accompanied by the silent robot and those with no robot.

Sunday, January 10, 2021

Doctors Dating Patients: Love, Actually?

Shelly Reese
medscape.com
Originally posted 10 Dec 20

Here is an excerpt:

Not surprisingly, those who have seen such relationships end in messy, contentious divorces or who know stories of punitive actions are stridently opposed to the idea. "Never! Grounds for losing your license"; "it could only result in trouble"; "better to keep this absolute"; "you're asking for a horror story," wrote four male physicians.

Although doctor-patient romances don't frequently come to the attention of medical boards or courts until they have soured, even "happy ending" relationships may come at a cost. For example, in 2017, the Iowa Board of Medicine fined an orthopedic surgeon $5000 and ordered him to complete a professional boundaries program because he became involved with a patient while or soon after providing care, despite the fact that the couple had subsequently married.

Ethics aside, "this is a very dangerous situation, socially and professionally," writes a male physician in Pennsylvania. A New York physician agreed: "Many of my colleagues marry their patients, even after they do surgery on them. It's a sticky situation."

Doctors' Attitudes Are Shifting

The American Medical Association clearly states that sexual contact that is concurrent with the doctor/patient relationship constitutes sexual misconduct and that even a romance with a former patient "may be unduly influenced by the previous physician-patient relationship."

Although doctors' attitudes on the subject are evolving, that's not to say they suddenly believe they can start asking their patients out to dinner. Very few doctors (2%) condone romantic relationships with existing patients — a percentage that has remained largely unchanged over the past 10 years. Instead, physicians are taking a more nuanced approach to the issue.

Saturday, January 9, 2021

The Last Children of Down Syndrome

Sarah Zhang
The Atlantic
Originally posted December 2020

Here is an excerpt:

Eugenics in Denmark never became as systematic and violent as it did in Germany, but the policies came out of similar underlying goals: improving the health of a nation by preventing the birth of those deemed to be burdens on society. The term eugenics eventually fell out of favor, but in the 1970s, when Denmark began offering prenatal testing for Down syndrome to mothers over the age of 35, it was discussed in the context of saving money—as in, the testing cost was less than that of institutionalizing a child with a disability for life. The stated purpose was “to prevent birth of children with severe, lifelong disability.”

That language too has long since changed; in 1994, the stated purpose of the testing became “to offer women a choice.” Activists like Fält-Hansen have also pushed back against the subtle and not-so-subtle ways that the medical system encourages women to choose abortion. Some Danish parents told me that doctors automatically assumed they would want to schedule an abortion, as if there was really no other option. This is no longer the case, says Puk Sandager, a fetal-medicine specialist at Aarhus University Hospital. Ten years ago, doctors—especially older doctors—were more likely to expect parents to terminate, she told me. “And now we do not expect anything.” The National Down Syndrome Association has also worked with doctors to alter the language they use with patients—“probability” instead of “risk,” “chromosome aberration” instead of “chromosome error.” And, of course, hospitals now connect expecting parents with people like Fält-Hansen to have those conversations about what it’s like to raise a child with Down syndrome.

Friday, January 8, 2021

APA Condemns Violent Attack on U.S. Capitol, Warns of Long-Term Effects of Recurring Trauma

American Psychiatric Association
Released January 7, 2021

The American Psychiatric Association today condemns the violence that occurred during what should have been a peaceful step in the transfer of power in Washington, D.C., and offers resources for those whose mental health is impacted.

The world is still processing the unprecedented assault on democracy that occurred yesterday in the nation’s capital. The leaders of yesterday’s aggression and those who encouraged such anti-American conduct must be held accountable. The stark contrast between the government’s response to Black Lives Matter protesters during the summer and fall, a significant proportion of whom were Black, and its response to mostly white MAGA protesters yesterday is deeply concerning.

These events, coupled with the ongoing COVID-19 pandemic, continue to increase the anxiety and stress many are feeling. These recurring traumatic events can have a detrimental long-term effect across many domains (emotional, physical, cognitive, behavioral, social, and developmental). As physicians, we want to tell everyone who is distressed or feeling a higher level of anxiety right now that they are not alone, and that help is available.

“Yesterday’s violence and the rhetoric that incited it are seditious,” said APA President Jeffrey Geller, M.D., M.P.H. “Americans are hurting in the pandemic and this makes the pain, fear, and stress that many of us are feeling much worse. Those who have been subject to the impacts of systemic racism are dealing with the brunt of it.”

“We, as psychiatrists, are deeply concerned and angered by the violence that has occurred and that may continue in our communities,” said APA CEO and Medical Director Saul Levin, M.D., M.P.A. “If you are feeling anxious or unsafe, talk with your family and friends. If your feelings continue and it is impacting your daily life, do not hesitate to seek help through your primary care provider, a psychiatrist or other mental health professional, or other resources in your community.”

For more information about mental health in traumatic events such as this, visit: https://www.psychiatry.org/patients-families/coping-after-disaster-trauma

If you or a family member or friend needs immediate assistance, help is available:
  • Crisis Text Line: Text HOME to 741741
  • National Suicide Prevention Lifeline: Call 800-273-8255 or chat with Lifeline
  • Veterans Crisis Line (VA): Call 800-273-8255 or text 838255
  • Physician Support Line: Call 1-888-409-0141
  • NAMI Helpline: 800-950-6264, M-F, 10 a.m. - 6 p.m. ET

Bias in science: natural and social

Joshua May
Synthese 

Abstract 

Moral, social, political, and other “nonepistemic” values can lead to bias in science, from prioritizing certain topics over others to the rationalization of questionable research practices. Such values might seem particularly common or powerful in the social sciences, given their subject matter. However, I argue first that the well documented phenomenon of motivated reasoning provides a useful framework for understanding when values guide scientific inquiry (in pernicious or productive ways). Second, this analysis reveals a parity thesis: values influence the social and natural sciences about equally, particularly because both are so prominently affected by desires for social credit and status, including recognition and career advancement. Ultimately, bias in natural and social science is both natural and social—that is, a part of human nature and considerably motivated by a concern for social status (and its maintenance). Whether the pervasive influence of values is inimical to the sciences is a separate question.

Conclusion 

We have seen how many of the putative biases that affect science can be explained and illuminated in terms of motivated reasoning, which yields a general understanding of how a researcher’s goals and values can influence scientific practice (whether positively or negatively). This general account helps to show that it is unwarranted to assume that such influences are significantly more prominent in the social sciences. The defense of this parity claim relies primarily on two key points. First, the natural sciences are also susceptible to the same values found in social science, particularly given that findings in many fields have social or political implications. Second, the ideological motivations that might seem to arise only in social science are minor compared to others. In particular, one’s reasoning is more often motivated by a desire to gain social credit (e.g. recognition among peers) than a desire to promote a moral or political ideology. Although there may be discernible differences in the quality of research across scientific domains, all are influenced by researchers’ values, as manifested in their motivations.

Thursday, January 7, 2021

How Might Artificial Intelligence Applications Impact Risk Management?

John Banja
AMA J Ethics. 2020;22(11):E945-951. 

Abstract

Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management. In any event, it seems that integration of AI models into health care operations will almost certainly introduce, if not new forms of risk, then a dramatically heightened magnitude of risk that will have to be managed.

AI Risks in Health Care

Artificial intelligence (AI) applications in health care have attracted enormous attention as well as immense public and private sector investment in the last few years.1 The anticipation is that AI technologies will dramatically alter—perhaps overhaul—health care practices and delivery. At the very least, hospitals and clinics will likely begin importing numerous AI models, especially “deep learning” varieties that draw on aggregate data, over the next decade.

A great deal of the ethics literature on AI has recently focused on the accuracy and fairness of algorithms, worries over privacy and confidentiality, “black box” decisional unexplainability, concerns over “big data” on which deep learning AI models depend, AI literacy, and the like. Although some of these risks, such as security breaches of medical records, have been around for some time, their materialization in AI applications will likely present large-scale privacy and confidentiality risks. AI models have already posed enormous challenges to hospitals and facilities by way of cyberattacks on protected health information, and they will introduce new ethical obligations for providers who might wish to share patient data or sell it to others. Because AI models are themselves dependent on hardware, software, algorithmic development and accuracy, implementation, data sharing and storage, continuous upgrading, and the like, risk management will find itself confronted with a new panoply of liability risks. On the one hand, risk management can choose to address these new risks by developing mitigation strategies. On the other hand, because these AI risks present a novel landscape of risk that might be quite unfamiliar, risk management might choose to leave certain of those challenges to others. This essay will discuss this “approach-avoidance” possibility in connection with 3 categories of risk—system malfunctions, privacy breaches, and consent to data repurposing—and conclude with some speculations on how those decisions might play out.

Wednesday, January 6, 2021

Moral “foundations” as the product of motivated social cognition: Empathy and other psychological underpinnings of ideological divergence in “individualizing” and “binding” concerns

Strupp-Levitsky M, et al.
PLoS ONE 15(11): e0241144. 

Abstract

According to moral foundations theory, there are five distinct sources of moral intuition on which political liberals and conservatives differ. The present research program seeks to contextualize this taxonomy within the broader research literature on political ideology as motivated social cognition, including the observation that conservative judgments often serve system-justifying functions. In two studies, a combination of regression and path modeling techniques were used to explore the motivational underpinnings of ideological differences in moral intuitions. Consistent with our integrative model, the “binding” foundations (in-group loyalty, respect for authority, and purity) were associated with epistemic and existential needs to reduce uncertainty and threat and system justification tendencies, whereas the so-called “individualizing” foundations (fairness and avoidance of harm) were generally unrelated to epistemic and existential motives and were instead linked to empathic motivation. Taken as a whole, these results are consistent with the position taken by Hatemi, Crabtree, and Smith that moral “foundations” are themselves the product of motivated social cognition.

Concluding remarks

Taken in conjunction, the results presented here lead to several conclusions that should be of relevance to social scientists who study morality, social justice, and political ideology. First, we observe that so-called “binding” moral concerns pertaining to ingroup loyalty, authority, and purity are psychologically linked to epistemic and, to a lesser extent, existential motives to reduce uncertainty and threat. Second, so-called “individualizing” concerns for fairness and avoidance of harm are not linked to these same motives. Rather, they seem to be driven largely by empathic sensitivity. Third, it would appear that theories of moral foundations and motivated social cognition are in some sense compatible, as suggested by Van Leeuwen and Park, rather than incompatible, as suggested by Haidt and Graham and by Haidt. That is, the motivational basis of conservative preferences for “binding” intuitions seems to be no different than the motivational basis for many other conservative preferences, including system justification and the epistemic and existential motives that are presumed to underlie system justification.

Tuesday, January 5, 2021

Psychological selfishness

Carlson, R. W., et al. (2020, October 29).

Abstract

Selfishness is central to many theories of human morality, yet its psychological nature remains largely overlooked. Psychologists often rely on classical conceptions of selfishness from economics (i.e., rational self-interest) and philosophy (i.e., psychological egoism), but such characterizations offer limited insight into the richer, motivated nature of selfishness. To address this gap, we propose a novel framework in which selfishness is recast as a psychological construction. From this view, selfishness is perceived in ourselves and others when we detect a situation-specific desire to benefit oneself that disregards others’ desires and prevailing social expectations for the situation. We argue that detecting and deterring such psychological selfishness in both oneself and others is crucial in social life—facilitating the maintenance of social cohesion and close relationships. In addition, we show how utilizing this psychological framework offers a richer understanding of the nature of human social behavior. Delineating a psychological construct of selfishness can promote coherence in interdisciplinary research on selfishness, and provide insights for interventions to prevent or remediate negative effects of selfishness.

Conclusion

Selfishness is a widely invoked, yet poorly defined construct in psychology. Many empirical “observations” of selfishness consist of isolated behaviors or de-contextualized motives. Here, we argued that these behaviors and motives often do not capture a psychologically meaningful form of selfishness, and we addressed this gap in the literature by offering a concrete definition and framework for studying selfishness.

Selfishness is a mentalistic concept. As such, adopting a psychological framework can deepen our understanding of its nature. In the proposed model, selfishness unfolds within rich social situations that elicit specific desires, expectations, and considerations of others. Moreover, detecting selfishness serves the overarching function of coordinating and encouraging cooperative social behavior. To detect selfishness is to perceive a desire to act in violation of salient social expectations, and an array of emotions and corrective actions tend to follow. 

Selfishness is also a morally laden concept. In fact, it is one of the least likable qualities a person can possess (N. H. Anderson, 1968). As such, selfishness is a construct in need of proper criteria for being manipulated, measured, and applied to people’s actions and motives. Scientific views have long been thought to shape human norms and beliefs (Gergen, 1973; Miller, 1999).