Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, February 27, 2023

Domestic violence hotline calls will soon be invisible on your family phone plan

Ashley Belanger
ARS Technica
Originally published 17 FEB 23

Today, the Federal Communications Commission proposed rules to implement the Safe Connections Act, which President Joe Biden signed into law last December. Advocates consider the law a landmark move to stop tech abuse. Under the law, mobile service providers are required to help survivors of domestic abuse and sexual violence access resources and maintain critical lines of communication with friends, family, and support organizations.

Under the proposed rules, mobile service providers are required to separate a survivor’s line from a shared or family plan within two business days. Service providers must also “omit records of calls or text messages to certain hotlines from consumer-facing call and text message logs,” so that abusers cannot see when survivors are seeking help. Additionally, the FCC plans to launch a “Lifeline” program, providing emergency communications support for up to six months for survivors who can’t afford to pay for mobile services.

“These proposed rules would help survivors obtain separate service lines from shared accounts that include their abusers, protect the privacy of calls made by survivors to domestic abuse hotlines, and provide support for survivors who suffer from financial hardship through our affordability programs,” the FCC’s announcement said.

The FCC has already consulted with tech associations and domestic violence support organizations in forming the proposed rules, but now the public has a chance to comment. An FCC spokesperson confirmed to Ars that comments are open now. Crystal Justice, the National Domestic Violence Hotline’s chief external affairs officer, told Ars that it’s critical for survivors to submit comments to help inform FCC rules with their experiences of tech abuse.

To submit comments, visit this link and enter “22-238” as the proceeding number. That will auto-populate a field that says “Supporting Survivors of Domestic and Sexual Violence.”

The FCC’s spokesperson told Ars that the initial public comment period will be open for 30 days after the rules are published in the Federal Register, and a reply comment period will then be open for 30 days after the initial comment period ends.

Sunday, February 26, 2023

Time pressure reduces misinformation discrimination ability but does not alter response bias

Sultan, M., Tump, A.N., Geers, M. et al. 
Sci Rep 12, 22416 (2022).
https://doi.org/10.1038/s41598-022-26209-8

Abstract

Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.

Discussion

In this study, we investigated the impact of time pressure on people’s ability to judge the veracity of online misinformation in terms of (a) discrimination ability, (b) response bias, and (c) four key determinants of misinformation susceptibility (i.e., analytical thinking, ideological congruency, motivated reflection, and familiarity). We found that time pressure reduced discrimination ability but did not alter the—already present—negative response bias (i.e., general tendency to evaluate news as false). Moreover, the associations observed for the four determinants of misinformation susceptibility were largely stable across treatments, with the exception that the positive effect of familiarity on response bias (i.e., response tendency to treat familiar news as true) was slightly reduced under time pressure. We discuss each of these findings in more detail next.
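The two signal detection quantities can be made concrete with a toy calculation. In the conventional parameterisation (a standard sketch, not code from the paper), d′ indexes discrimination ability and the criterion c indexes response bias; the hit and false-alarm rates below are invented for illustration:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Compute discrimination ability (d') and response bias (c)
    from hit and false-alarm rates via the inverse normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, c

# Invented rates: a "hit" = correctly judging a true headline as true,
# a "false alarm" = judging a false headline as true.
d, c = sdt_measures(hit_rate=0.55, false_alarm_rate=0.20)
print(round(d, 2), round(c, 2))  # → 0.97 0.36
```

With these made-up rates, the positive criterion reflects the kind of conservative response tendency the authors report: headlines are, on balance, classified as false regardless of their actual veracity.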

As predicted, we found that time pressure reduced discrimination ability: Participants under time pressure were less able to distinguish between true and false news. These results corroborate earlier work on the speed–accuracy trade-off, and indicate that fast-paced news consumption on social media is likely leading to people misjudging the veracity of not only false news, as seen in the study by Bago and colleagues, but also true news. As in their paper, we stress that interventions aimed at mitigating misinformation should target this phenomenon and seek to improve veracity judgements by encouraging deliberation. It will also be important to follow up on these findings by examining whether time pressure has a similar effect in the context of news items that have been subject to interventions such as debunking.

Our results for the response bias showed that participants had a general tendency to evaluate news headlines as false (i.e., a negative response bias); this effect was similarly strong across the two treatments. From the perspective of the individual decision maker, this response bias could reflect a preference to avoid one type of error over another (i.e., avoiding accepting false news as true more than rejecting true news as false) and/or an overall expectation that false news is more prevalent than true news in our experiment. Note that the ratio of true versus false news we used (1:1) is different from the real world, which typically is thought to contain a much smaller fraction of false news. A more ecologically valid experiment with a more representative sample could yield a different response bias. It will, thus, be important for future studies to assess whether participants hold such a bias in the real world, are conscious of this response tendency, and whether it translates into (in)accurate beliefs about the news itself.

Saturday, February 25, 2023

Five Steps to Get Students Thinking About Ethics

Karen Cotter, Laura Bond, & Lauren Fullmer
The Greater Good Science Center
Originally posted 22 FEB 23

Here is an excerpt and the 5 steps:

Five steps for ethical decision-making

Teaching ethical thinking aligns with the mission you may have as an educator to promote global citizenship. “Being a global citizen means understanding that global ideas and solutions must still fit the complexities of local contexts and cultures, and meet each community’s specific needs and capacities,” explains AFS-USA. While investigating real-world problems from many perspectives, students gain an appreciation for many sides of an issue and avoid the pitfall of simply reinforcing their preexisting attitudes.

Ethical thinking also enriches social-emotional learning. According to researchers Michael D. Burroughs and Nikolaus J. Barkauskas, “By focusing on social, emotional, and ethical literacy in schools educators can contribute to the development of persons with greater self-awareness, emotional understanding and, in turn, the capability to act ethically and successfully interact with others in a democratic society.” The five steps below serve as a seamless way to integrate ethical decision making into a science or STEM class.

These steps come from our Prosocial Design Process for Ethical Decision-Making, which itself is a synthesis of three frameworks: prosocial education (which focuses on promoting emotional, social, moral, and civic capacities that express character in students), the Engineering Design Process (an open-ended problem-solving practice that encourages growth from failure), and the IDEA Ethical Decision-Making Framework. This process offers a way for students to come up with creative solutions to a problem and bring ethical consideration to global issues.

1. Ask questions to identify the issue.
2. Consider the perspectives of people impacted to brainstorm solutions. 
3. Analyze research to design and test solutions. 
4. Evaluate and iterate for an ethically justifiable solution.
5. Communicate findings to all relevant stakeholders. 

(cut)

This ethical framework guides students to think beyond themselves to identify solutions that impact their community. The added SEL (social-emotional learning) benefits of self-reflection, social awareness, relationship skills, and appreciation of the world around them awaken students’ consciousness of core ethical values, equipping them to make decisions for the greater good. Using prosocial science topics like climate change empowers students to engage in relevant, real-world content to create a more equitable, sustainable, and just world where they experience how their humanity can impact the greater good.

Friday, February 24, 2023

What Do We Owe Lab Animals?

Brandon Keim
The New York Times
Originally published 24 Jan 23

Here is an excerpt:

Scientists often point to the so-called Three Rs, a set of principles first articulated in 1959 by William Russell, a sociologist, and Rex Burch, a microbiologist, to guide experimental research on animals. Researchers are encouraged to replace animals when alternatives are available, reduce the number of animals used and refine their use so as to minimize the infliction of pain and suffering.

These are unquestionably noble aims, ethicists note, but may seem insufficient when compared with the benefits derived from animals. Covid vaccines, for example, which were tested on mice and monkeys, and developed so quickly thanks to decades of animal-based work on mRNA vaccine technology, saved an estimated 20 million lives in their first year of use and earned tens of billions of dollars in revenues.

In light of that dynamic — which applies not only to Covid vaccines, but to many other human lifesaving, fortune-generating therapeutics — some wonder if a fourth R might be warranted: repayment.

Inklings of the idea of repayment can already be found in the research community, most visibly in laboratories that make arrangements for animals — primarily monkeys and other nonhuman primates — to be retired to sanctuaries. In the case of dogs and companion species, including rats, they are sometimes adopted as pets.

“It’s kind of karma,” said Laura Conour, the executive director of Laboratory Animal Resources at Princeton University, which has a retirement arrangement with the Peaceable Primate Sanctuary. “I feel like it balances it out a little bit.” The school has also adopted out guinea pigs, anole lizards and sugar gliders as pets to private citizens, and tries to help with their veterinary care.

Adoption is not an option for animals destined to be killed, however, which raises the question of how the debt can be repaid. Lesley Sharp, a medical anthropologist at Barnard College and author of “Animal Ethos: The Morality of Human-Animal Encounters in Experimental Lab Science,” noted that research labs sometimes create memorials for animals: commemorative plaques, bulletin boards with pictures and poems and informal gatherings in remembrance.

“There is this burden the animal has to carry for humans in the context of science,” Dr. Sharp said. “They require, I think, respect, and to be recognized and honored and mourned.”

She acknowledged that honoring sacrificed animals was not quite the same as giving something back to them. To imagine what that might entail, Dr. Sharp pointed to the practice of donating one’s organs after death. Transplant recipients often want to give something in return, “but the donor is dead,” Dr. Sharp said. “Then you need somebody who is a sort of proxy for them, and that proxy is the close surviving kin.”

If someone receives a cornea or a heart from a pig — or funding to study those procedures — then they might pay for the care of another pig at a farmed animal sanctuary, Dr. Sharp proposed: “You’re going to have animals who stand in for the whole.”

Thursday, February 23, 2023

Moral foundations partially explain the association of the Dark Triad traits with homophobia and transphobia

Kay, C. S., & Dimakis, S. M. (2022, June 24). 
https://doi.org/10.31234/osf.io/pukds

Abstract

People with antagonistic personality traits are reportedly more racist, sexist, and xenophobic than their non-antagonistic counterparts. In the present studies (N1 = 718; N2 = 267), we examined whether people with antagonistic personality traits are also more likely to hold homophobic and transphobic attitudes, and, if they are, whether this can be explained by their moral intuitions. We found that people high in Machiavellianism, narcissism, and psychopathy are more likely to endorse homophobic and transphobic views. The associations of Machiavellianism and psychopathy with homophobia and transphobia were primarily explained by low endorsement of individualizing moral foundations (i.e., care and fairness), while the association of narcissism with these beliefs was primarily explained by high endorsement of the binding moral foundations (i.e., loyalty, authority, and sanctity). These findings provide insight into the types of people who harbour homophobic and transphobic attitudes and how differences in moral dispositions contribute to their LGBTQ+ prejudice.

General discussion

We conducted two studies to test whether those with antagonistic personality traits (e.g., Machiavellianism, grandiose narcissism, and psychopathy) are more likely to express homonegative and transphobic views, and, if so, whether this is because of their moral intuitions. Study 1 used a convenience sample of 718 undergraduate students drawn from a university Human Subjects Pool. It was exploratory, in the sense that we specified no formal hypotheses. That said, we suspected that those with antagonistic personality traits would be more likely to hold homonegative and transphobic attitudes and that they may do so because they dismiss individualizing moral concerns (e.g., do no harm; treat others fairly). At the same time, we suspected that those with antagonistic personality traits would also deemphasize the binding moral foundations (e.g., be loyal to your ingroup; respect authority; avoid contaminants, even those that are metaphysical), weakening any observed associations between the antagonistic personality traits and LGBTQ+ prejudice. The purpose of Study 2 was to examine whether the findings identified in Study 1 would generalize beyond a sample of undergraduate students. Since we had no reason to suspect the results would differ between Study 1 and Study 2, our preregistered hypotheses for Study 2 were that we would observe the same pattern of results identified in Study 1.

There was clear evidence across both studies that those high in the three antagonistic personality traits were more likely to endorse statements that were reflective of traditional homonegativity, modern homonegativity, general genderism/transphobia, and gender-bashing. All of these associations were moderate-to-large in magnitude (Funder & Ozer, 2019), save for the association between narcissism and traditional homonegativity in Study 1. These results indicate that, on top of harbouring racist (Jones, 2013), xenophobic (Hodson et al., 2009), and sexist (Gluck et al., 2020) attitudes, those high in antagonistic personality traits also harbour homonegative and transphobic attitudes.
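The abstract's "primarily explained by" claims refer to mediation: a trait's association with prejudice runs partly through moral-foundation endorsement. A minimal product-of-coefficients sketch on synthetic data (all variables and effect sizes invented here, not the authors' data or analysis) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic illustration: an antagonistic trait X lowers endorsement of
# individualizing foundations M, which in turn predicts prejudice Y.
x = rng.normal(size=n)                      # e.g., a psychopathy score
m = -0.5 * x + rng.normal(size=n)           # individualizing foundations
y = 0.3 * x - 0.6 * m + rng.normal(size=n)  # prejudicial attitudes

def slopes(target, *predictors):
    """OLS coefficients of target on predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(target)), *predictors])
    return np.linalg.lstsq(X, target, rcond=None)[0][1:]

a = slopes(m, x)[0]           # X -> M path
b = slopes(y, x, m)[1]        # M -> Y path, controlling for X
c_prime = slopes(y, x, m)[0]  # direct X -> Y effect
indirect = a * b              # mediated (indirect) effect
print(f"indirect={indirect:.2f}, direct={c_prime:.2f}")
```

Here the indirect effect (a × b) captures the portion of the trait–prejudice association carried by the moral foundation, which is the quantity the authors' "explained by" language describes.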

Wednesday, February 22, 2023

How and Why People Want to Be More Moral

Sun, J., Wilt, J. A., et al. (2022, October 13).
https://doi.org/10.31234/osf.io/6smzh

Abstract

What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions. Across two large, preregistered studies (N = 1,818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change. In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.

From the General Discussion section

Self-Interest Is a Key Motivation for Moral Improvement

What motivates people to be more moral? From the perspective that the function of morality is to suppress selfishness for the benefit of others (Haidt & Kesebir, 2010; Wolf, 1982), we might expect people to believe that moral improvements would primarily benefit others (rather than themselves). By a similar logic, people should also primarily want to be more moral for the sake of others (rather than for their own sake).

Surprisingly, however, this was not overwhelmingly the case. Instead, across both studies, participants were approximately equally split between those who believed that others would benefit the most and those who believed that they themselves would benefit the most (with the exception of compassion; see Figure S2). The finding that people perceive some personal benefits to becoming more moral has been demonstrated in recent research (Sun & Berman, in prep). In light of evidence that moral people tend to be happier (Sun et al., in prep) and that the presence of moral struggles predicts symptoms of depression and anxiety (Exline et al., 2014), such beliefs might also be somewhat accurate. However, it is unclear why people believe that becoming more moral would benefit themselves more than it would others. Speculatively, one possibility is that people can more vividly imagine the impacts of their own actions on their own well-being, whereas they are much more uncertain about how their actions would affect others—especially when the impacts might be spread across many beneficiaries.

However, it is also possible that this finding only applies to self-selected moral improvements, rather than the universe of all possible moral improvements. That is, when asked what they could do to become more moral, people might more readily think of improvements that would improve their own well-being to a greater extent than the well-being of others. But, if we were to ask people to predict who would benefit the most from various moral improvements that were selected by researchers, people may be less likely to believe that it would be themselves. Future research should systematically study people’s evaluations of how various moral improvements would impact their own and others’ well-being.

Similarly, when explicitly asked for whose sake they were most motivated to make their moral improvement, almost half of the participants admitted that they were most motivated to change for their own sake (rather than for the sake of others). However, when predicting motivation from both the expected well-being consequences for the self and the well-being consequences for others, we found that people’s perceptions of personal well-being consequences were a significantly stronger predictor in both studies. In other words, if anything, people are relatively more motivated to make moral improvements for their own sake than for the sake of others. This is consistent with the findings of another study which examined people’s interest in changing a variety of moral and nonmoral traits, and showed that people are particularly interested in improving the traits that they believed would make them relatively happier (Sun & Berman, in prep). Here, it is striking that personal fulfilment remains the most important motivator of personal improvement even within the moral domain.

Tuesday, February 21, 2023

Motonormativity: How Social Norms Hide a Major Public Health Hazard

Walker, I., Tapp, A., & Davis, A.
(2022, December 14).
https://doi.org/10.31234/osf.io/egnmj

Abstract

Decisions about motor transport, by individuals and policy-makers, show unconscious biases due to cultural assumptions about the role of private cars - a phenomenon we term motonormativity. To explore this claim, a national sample of 2157 UK adults rated, at random, a set of statements about driving (“People shouldn't drive in highly populated areas where other people have to breathe in the car fumes”) or a parallel set of statements with key words changed to shift context ("People shouldn't smoke in highly populated areas where other people have to breathe in the cigarette fumes"). Such context changes could radically alter responses (75% agreed with "People shouldn't smoke... " but only 17% agreed with "People shouldn't drive... "). We discuss how these biases systematically distort medical and policy decisions and give recommendations for how public policy and health professionals might begin to recognise and address these unconscious biases in their work.

Discussion

Our survey showed that people can go from agreeing with a health or risk-related proposition to disagreeing with it simply depending on whether it is couched as a driving or non-driving issue. In the most dramatic case, survey respondents felt that obliging people to breathe toxic fumes went from being unacceptable to acceptable depending on whether the fumes came from cigarettes or motor vehicles. It is, objectively, nonsensical that the ethical and public health issues involved in forcing non-consenting people to inhale air-borne toxins should be judged differently depending on their source, but that is what happened here. It seems that normal judgement criteria can indeed be suspended in the specific context of motoring, as we suggested.

Obviously, we used questions in this study that we felt would stand a good chance of demonstrating a difference between how motoring and non-motoring issues were viewed. But choosing questions likely to reveal differences is not the same thing as stacking the deck. We gave the social bias every chance to reveal itself, but that could only happen because it was out there to be revealed. Prentice and Miller (1992) argue that the ease with which a behavioural phenomenon can be triggered is an index of its true magnitude. The ease with which effects appeared in this study was striking: in the final question the UK public went from 17% agreement to 75% agreement just by changing two words in the question whilst leaving its underlying principle unchanged.


Another example of a culturally acceptable (or ingrained) bias toward harm. Call it "car blindness" or "motonormativity."

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).
https://doi.org/10.5210/fm.v28i1.12903

Abstract

The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.

Conclusion

The proliferation of artificial intelligence (AI) technologies as behind-the-scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the base problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions in the information to analyse, what analytic process to use, and what the end product of analysis will be. Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool to examine the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today—the medical, social, and relational models—and two use cases in healthcare and government benefits to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks for harm.

Sunday, February 19, 2023

Organs in exchange for freedom? Bill raises ethical concerns

Steve LeBlanc
Associated Press
Originally published 8 FEB 23

BOSTON (AP) — A proposal to let Massachusetts prisoners donate organs and bone marrow to shave time off their sentence is raising profound ethical and legal questions about putting undue pressure on inmates desperate for freedom.

The bill — which faces a steep climb in the Massachusetts Statehouse — may run afoul of federal law, which bars the sale of human organs or acquiring one for “valuable consideration.”

It also raises questions about whether and how prisons would be able to appropriately care for the health of inmates who go under the knife to give up organs. Critics are calling the idea coercive and dehumanizing even as one of the bill’s sponsors is framing the measure as a response to the over-incarceration of Hispanic and Black people and the need for matching donors in those communities.

“The bill reads like something from a dystopian novel,” said Kevin Ring, president of Families Against Mandatory Minimums, a Washington, D.C.-based criminal justice reform advocacy group. “Promoting organ donation is good. Reducing excessive prison terms is also good. Tying the two together is perverse.”

(cut)

Offering reduced sentences in exchange for organs is not only unethical, but also violates federal law, according to George Annas, director of the Center for Health Law, Ethics & Human Rights at the Boston University School of Public Health. Reducing a prison sentence is the equivalent of a payment, he said.

“You can’t buy an organ. That should end the discussion,” Annas said. “It’s compensation for services. We don’t exploit prisoners enough?”

Democratic state Rep. Carlos Gonzalez, another co-sponsor of the bill, defended the proposal, calling it a voluntary program. He also said he’s open to establishing a policy that would allow inmates to donate organs and bone marrow without the lure of a reduced sentence. There is currently no law against prisoner organ donation in Massachusetts, he said.

“It’s not quid pro quo. We are open to setting policy without incentives,” Gonzalez said, adding that it is “crucial to respect prisoners’ human dignity and agency by respecting their choice to donate bone marrow or an organ.”