Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, June 23, 2021

Experimental Regulations for AI: Sandboxes for Morals and Mores

Ranchordas, Sofia
Morals and Machines (vol.1, 2021)
Available at SSRN.

Abstract

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal, which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.

(cut)

In conclusion, AI regulatory sandboxes are not the answer to more innovation in AI. They are part of the path to a more forward-looking approach to the interaction between law and technology. This new approach will most certainly be welcomed with reluctance in years to come as it disrupts existing dogmas pertaining to the way in which we conceive the principle of legal certainty and the reactive—rather than anticipatory—nature of law. However, traditional law and regulation were designed with human agents and enigmas in mind. Many of the problems generated by AI (discrimination, power asymmetries, and manipulation) are still human but their scale and potential for harms (and benefits) have long ceased to be. It is thus time to rethink our fundamental approach to regulation and refocus on the new regulatory subject before us.

Tuesday, June 22, 2021

Against Empathy Bias: The Moral Value of Equitable Empathy

Fowler, Z., Law, K. F., & Gaesser, B.
Psychological Science
Volume 32, Issue 5, Pages 766-779

Abstract

Empathy has long been considered central in living a moral life. However, mounting evidence has shown that empathy is often biased towards (i.e., felt more strongly for) close and similar others, igniting a debate over whether empathy is inherently morally flawed and should be abandoned in efforts to strive towards greater equity. This debate has focused on whether empathy limits the scope of our morality, with little consideration of whether it may be our moral beliefs limiting our empathy. Across two studies conducted on Amazon’s Mechanical Turk (N = 604), we investigate moral judgments of biased and equitable feelings of empathy. We observed a moral preference for empathy towards socially close over distant others. However, feeling equal empathy for all is seen as the most morally and socially valuable. These findings provide new theoretical insight into the relationship between empathy and morality with implications for navigating towards a more egalitarian future.

General Discussion

The present studies investigated moral judgments of socially biased and equitable feelings of empathy in hypothetical vignettes. The results showed that moral judgments of empathy are biased towards preferring more empathy for a socially close over a socially distant individual. Despite this bias in moral judgments, however, people consistently judged feeling equal empathy as the most morally right. These findings generalized from judgments of others’ empathy for targets matched on objective social distance to judgments of one’s own empathy for targets that were personally tailored and matched on subjective social distance across subjects. Further, participants most desired to affiliate with someone who felt equal empathy. We also found that participants’ desire to affiliate with the actor in the vignette mirrored their moral judgments of empathy.

Monday, June 21, 2021

Drug Overdose Deaths Up 30% in Pandemic Year, Government Data Show

Joyce Frieden
MedPage Today 
Originally published 1 June 2021

Mortality from all types of drug overdoses increased by a whopping 30% over a 1-year period, Nora Volkow, MD, director of the National Institute on Drug Abuse (NIDA), reported at the FDA Science Forum.

Data from the National Center for Health Statistics covering October 2019 to October 2020 show that mortality from overdoses of all types of drugs increased 30%, from 70,669 deaths in October 2019 to 91,862 deaths in October 2020, "and I think that that is a number that is very, very chilling," Volkow said at the forum. Among those overdose deaths in both years, more than half came from synthetic opioids -- "the most notable presence is fentanyl," she said. There was also a 46% increase in overdose deaths from other psychostimulants, mainly methamphetamine, and a 38% increase in deaths from cocaine overdoses.

Having any kind of substance use disorder (SUD) also affects the risk of getting COVID-19, she continued. According to a study done by Volkow and colleagues, "Regardless of the specific type of substance use disorder -- legal or illegal -- there was a significant increase in the likelihood of people that have a substance use disorder to become infected," she said. Their study, which included electronic health records from 7.5 million patients with an SUD diagnosis, found that patients with a recent SUD diagnosis -- within the past year -- were nearly nine times more likely to contract COVID-19 than patients without that diagnosis; for those with opioid use disorder in particular, their odds of contracting COVID were 10 times higher.
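A quick arithmetic check (added here for illustration, not part of the original article) shows how the 30% figure follows from the two counts Volkow cited:

```python
# Percent change between the two overdose-death counts quoted above
# (illustrative arithmetic only; figures as reported in the article).
deaths_oct_2019 = 70_669
deaths_oct_2020 = 91_862

pct_change = (deaths_oct_2020 - deaths_oct_2019) / deaths_oct_2019 * 100
print(f"{pct_change:.1f}%")  # 30.0%, matching the reported ~30% increase
```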

How relationships bias moral reasoning: Neural and self-report evidence

Berg, M. K., Kitayama, S., & Kross, E.
Journal of Experimental Social Psychology
Volume 95, July 2021, 104156

Abstract

Laws govern society, regulating people's behavior to create social harmony. Yet recent research indicates that when laws are broken by people we know and love, we consistently fail to report their crimes. Here we identify an expectancy-based cognitive mechanism that underlies this phenomenon and illustrate how it interacts with people's motivations to predict their intentions to report crimes. Using a combination of self-report and brain (ERP) measures, we demonstrate that although witnessing any crime violates people's expectations, expectancy violations are stronger when close (vs. distant) others commit crimes. We further employ an experimental-causal-chain design to show that people resolve their expectancy violations in diametrically opposed ways depending on their relationship to the transgressor. When close others commit crimes, people focus more on the individual (vs. the crime), which leads them to protect the transgressor. However, the reverse is true for distant others, which leads them to punish the transgressor. These findings highlight the sensitivity of early attentional processes to information about close relationships. They further demonstrate how these processes interact with motivation to shape moral decisions. Together, they help explain why people stubbornly protect close others, even in the face of severe crimes.

Highlights

• We used neural and self-report methods to explain people's reluctance to punish close others who act immorally.

• Close others acting immorally, and severe immoral acts, are highly unexpected.

• Expectancy violations interact with motivation to drive attention.

• For close others, people focus on the transgressor, which yields a more lenient response.

• For distant others, people focus on the immoral act, which yields a more punitive response.

Sunday, June 20, 2021

Artificial intelligence research may have hit a dead end

Thomas Nail
salon.com
Originally published 30 April 2021

Here is an excerpt:

If it's true that cognitive fluctuations are requisite for consciousness, it would also take time for stable frequencies to emerge and then synchronize with one another in resting states. And indeed, this is precisely what we see in children's brains when they develop higher and more nested neural frequencies over time.

Thus, a general AI would probably not be brilliant in the beginning. Intelligence evolved through the mobility of organisms trying to synchronize their fluctuations with the world. It takes time to move through the world and learn to sync up with it. As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible." 

This is also why dreaming is so important. Experimental research confirms that dreams help consolidate memories and facilitate learning. Dreaming is also a state of exceptionally playful and freely associated cognitive fluctuations. If this is true, why should we expect human-level intelligence to emerge without dreams? This is why newborns dream twice as much as adults, if they dream during REM sleep. They have a lot to learn, as would androids.

In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.

Saturday, June 19, 2021

Preparing for the Next Generation of Ethical Challenges Concerning Heritable Human Genome Editing

Robert Klitzman
The American Journal of Bioethics
(2021) Volume 21 (6), 1-4.

Here is the conclusion:

Moving Forward

Policymakers will thus need to make complex and nuanced risk/benefit calculations regarding costs and extents of treatments, ages of onset, severity of symptoms, degrees of genetic penetrance, disease prevalence, future scientific benefits, research costs, appropriate allocations of limited resources, and questions of who should pay.

Future efforts should thus consider examining scientific and ethical challenges in closer conjunction, not separated off, and bring together the respective strengths of the Commission’s and of the WHO Committee’s approaches. The WHO Committee includes broader stakeholders, but does not yet appear to have drawn conclusions regarding such specific medical and scientific scenarios (WHO 2020). These two groups’ respective memberships also differ in instructive ways that can mutually inform future deliberations. Among the Commission’s 18 chairs and members, only two appear to work primarily in ethics or policy; the majority are scientists (National Academy of Medicine, the National Academies of Sciences and the Royal Society 2020). In contrast, the WHO Committee includes two chairs and 16 members, with both chairs and the majority of members working primarily in ethics, policy or law (WHO 2020). ASRM and other countries’ relevant professional organizations should also stipulate that physicians and healthcare professionals should not be involved in any way in the care of patients using germline editing abroad.

The Commission’s Report thus provides valuable insights and guidelines, but multiple stakeholders will likely soon confront additional, complex dilemmas involving interplays of both science and ethics that also need urgent attention.

Friday, June 18, 2021

Wise teamwork: Collective confidence calibration predicts the effectiveness of group discussion

Silver, I, Mellers, B.A., & Tetlock, P.E.
Journal of Experimental Social Psychology
Volume 96, September 2021.

Abstract

‘Crowd wisdom’ refers to the surprising accuracy that can be attained by averaging judgments from independent individuals. However, independence is unusual; people often discuss and collaborate in groups. When does group interaction improve vs. degrade judgment accuracy relative to averaging the group's initial, independent answers? Two large laboratory studies explored the effects of 969 face-to-face discussions on the judgment accuracy of 211 teams facing a range of numeric estimation problems from geographic distances to historical dates to stock prices. Although participants nearly always expected discussions to make their answers more accurate, the actual effects of group interaction on judgment accuracy were decidedly mixed. Importantly, a novel, group-level measure of collective confidence calibration robustly predicted when discussion helped or hurt accuracy relative to the group's initial independent estimates. When groups were collectively calibrated prior to discussion, with more accurate members being more confident in their own judgment and less accurate members less confident, subsequent group interactions were likelier to yield increased accuracy. We argue that collective calibration predicts improvement because groups typically listen to their most confident members. When confidence and knowledge are positively associated across group members, the group's most knowledgeable members are more likely to influence the group's answers.
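The abstract does not spell out how the group-level calibration measure was computed, but a simple index consistent with its description would correlate each member's pre-discussion confidence with their pre-discussion accuracy, alongside the usual crowd-average baseline. The sketch below is a hypothetical illustration under that reading, not the authors' analysis code; the function names and example values are invented.

```python
import numpy as np

def collective_calibration(confidences, abs_errors):
    """Hypothetical group-level calibration index: the correlation between
    members' confidence and their accuracy (negative absolute error).
    Values near +1 mean the most accurate members are also the most confident."""
    accuracy = -np.abs(np.asarray(abs_errors, dtype=float))
    return np.corrcoef(confidences, accuracy)[0, 1]

def crowd_estimate(estimates):
    """Classic 'wisdom of the crowd' baseline: average of independent estimates."""
    return float(np.mean(estimates))

# Toy team answering "How far is Paris from Moscow (km)?" (roughly 2,500 km)
estimates   = [2400, 2700, 1800, 4000]   # independent pre-discussion answers
confidences = [0.9, 0.8, 0.5, 0.3]       # self-rated confidence
abs_errors  = [abs(e - 2500) for e in estimates]

print(crowd_estimate(estimates))                         # 2725.0
print(collective_calibration(confidences, abs_errors))   # strongly positive
```

On this toy team, the most accurate members are also the most confident, so the calibration index is strongly positive; this is the kind of group that, per the abstract, is most likely to gain accuracy from discussion.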

Conclusion

People often display exaggerated beliefs about their skills and knowledge. We misunderstand and overestimate our ability to answer general knowledge questions (Arkes, Christensen, Lai, & Blumer, 1987), save for a rainy day (Berman, Tran, Lynch Jr, & Zauberman, 2016), and resist unhealthy foods (Loewenstein, 1996), to name just a few examples. Such failures of calibration can have serious consequences, hindering our ability to set goals (Kahneman & Lovallo, 1993), make plans (Janis, 1982), and enjoy experiences (Mellers & McGraw, 2004). Here, we show that collective calibration also predicts the effectiveness of group discussions. In the context of numeric estimation tasks, poorly calibrated groups were less likely to benefit from working together and, ultimately, offered less accurate answers. Group interaction is the norm, not the exception. Knowing what we know (and what we don't know) can help predict whether interactions will strengthen or weaken crowd wisdom.

Thursday, June 17, 2021

Biased Benevolence: The Perceived Morality of Effective Altruism Across Social Distance

Law, K. F., Campbell, D., & Gaesser, B. 
(2019, July 11). 
https://doi.org/10.31234/osf.io/qzx67

Abstract

Is altruism always morally good, or is the morality of altruism fundamentally shaped by the social opportunity costs that often accompany helping decisions? Across five studies, we reveal that, although helping both socially closer and socially distant others is generally perceived favorably (Study 1), in cases of realistic tradeoffs in social distance for gains in welfare where helping socially distant others necessitates not helping socially closer others with the same resources, helping is deemed as less morally acceptable (Studies 2-5). Making helping decisions at a cost to socially closer others also negatively affects judgments of relationship quality (Study 3) and in turn, decreases cooperative behavior with the helper (Study 4). Ruling out an alternative explanation of physical distance accounting for the effects in Studies 1-4, social distance continued to impact moral acceptability when physical distance across social targets was matched (Study 5). These findings reveal that attempts to decrease biases in helping may have previously unconsidered consequences for moral judgments, relationships, and cooperation.

General Discussion

When judging the morality of altruistic tradeoffs in social distance for gains in welfare advocated by the philosophy and social movement of effective altruism, we find that the perceived morality of altruism is graded by social distance. People consistently view socially distant altruism as less morally acceptable as the person not receiving help becomes socially closer to the agent helping. This suggests that whereas altruism is generally evaluated as morally praiseworthy, the moral calculus of altruism flexibly shifts according to the social distance between the person offering aid and the people in need. Such findings highlight the empirical value and theoretical importance of investigating moral judgments situated in real-world social contexts.

Wednesday, June 16, 2021

Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning

Pennycook, G. & Rand, D. G.
Cognition
Volume 188, July 2019, Pages 39-50

Abstract

Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.
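For readers unfamiliar with how such headline-level correlations are typically computed, here is an illustrative sketch, not the authors' analysis code: each participant's "discernment" is the mean perceived accuracy of real headlines minus that of fake headlines, which is then correlated with their CRT score. All data below are hypothetical.

```python
import numpy as np

def discernment(real_ratings, fake_ratings):
    """Perceived accuracy of real headlines minus fake headlines (higher = better)."""
    return np.mean(real_ratings) - np.mean(fake_ratings)

# Hypothetical data: four participants, accuracy ratings on a 1-4 scale
crt_scores   = [0, 1, 2, 3]                            # CRT items answered correctly
real_ratings = [[2.5, 3.0], [2.8, 3.1], [3.2, 3.4], [3.5, 3.6]]
fake_ratings = [[2.4, 2.6], [2.1, 2.3], [1.8, 2.0], [1.4, 1.6]]

d = [discernment(r, f) for r, f in zip(real_ratings, fake_ratings)]
r = np.corrcoef(crt_scores, d)[0, 1]
print(f"CRT vs. discernment: r = {r:.2f}")             # positive, as in the paper
```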

Highlights

• Participants rated perceived accuracy of fake and real news headlines.

• Analytic thinking was associated with ability to discern between fake and real.

• We found no evidence that analytic thinking exacerbates motivated reasoning.

• Falling for fake news is more a result of a lack of thinking than partisanship.