Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, March 9, 2023

Seeing your life story as a Hero's Journey increases meaning in life

B. Rogers, H. Chicas, J. M. Kelly, & E. Kubin
researchgate.com
Originally posted January 2023

Abstract

Meaning in life is tied to the stories people tell about their lives. We explore whether one timeless story—the Hero’s Journey—might make people’s lives feel more meaningful. This enduring story appears across history and cultures, and provides a template for ancient myths (e.g., Beowulf) and blockbuster books and movies (e.g., Harry Potter). Eight studies reveal that the Hero’s Journey predicts and can causally increase people’s experience of meaning in life. We first distill the Hero’s Journey into seven key elements—Protagonist, Shift, Quest, Allies, Challenge, Transformation, Legacy—and then develop a new measure that assesses the perceived presence of the Hero’s Journey narrative in people’s life stories: the Hero’s Journey Scale. Using this scale, we find a positive relationship between the Hero’s Journey and meaning in life with both online participants (Studies 1-2) and older adults in a community sample (Study 3). We then develop a re-storying intervention that leads people to see the events of their life as a Hero’s Journey (Study 4). This intervention causally increases meaning in life (Study 5) by prompting people to reflect on important elements of their lives and connecting them into a coherent and compelling narrative (Study 6). This Hero’s Journey re-storying intervention also increases the extent to which people perceive meaning in an ambiguous grammar task (Study 7) and increases their resilience to life’s challenges (Study 8). These results provide initial evidence that enduring cultural narratives like the Hero’s Journey both reflect meaningful lives and can help to create them.

General Discussion

Humans are natural storytellers. People make sense of their lives using stories, and how they tell those stories shapes the way they see and react to the world (McAdams & McLean, 2013). While these stories are drawn from events in their lives, they are inherently subjective, and people frame their experiences using common cultural narratives (Hammack, 2008; Meltzoff, 1988; McLean & Syed, 2016). In this paper, we tested whether one of the most enduring cultural narratives—the Hero’s Journey—is tied to meaning in life.

Across eight studies and six supplementary studies, we found that Hero’s Journey narratives predicted meaning in life. We began by distilling the Hero’s Journey into its basic narrative elements and constructing a psychological measure using these elements (Supplementary Studies 1-3). Next, in Studies 1-3, we tested our first prediction that there is an association between the Hero’s Journey narrative and meaning in life. We found that the perceived presence of the Hero’s Journey in people’s lives correlated with meaning in life (Study 1 and Supplementary Study 4). The connection between the Hero’s Journey and life meaning also manifested in the stories people told to others. Life stories rated by independent coders as more similar to a Hero’s Journey predicted higher levels of meaning in life and a sense of flourishing in the self-reports of the storytellers (Studies 2-3).

Studies 4-8 confirmed our second prediction that people can use a re-storying intervention to reframe their personal narratives as a Hero’s Journey (Study 4), which can increase meaning and benefit their well-being (Study 5 and Supplementary Studies 5-6). We provided evidence that the intervention increased meaning in life by helping people to identify and connect the important narrative elements in their lives into the culturally resonant Hero’s Journey framework (Study 6). The intervention not only brought psychological benefits but also helped people to see more meaning in their ongoing experiences, from perceiving patterns in letter strings (Study 7) to finding solutions for their personal challenges (Study 8).


Once again, social psychology has made significant contributions to clinical psychology.

Wednesday, March 8, 2023

Neuroscience is ready for neuroethics engagement

Das, J., Forlini, C., Porcello, D. M. et al.
Front. Commun., 21 December 2022
Sec. Science and Environmental Communication

Neuroscience research has been expanding, providing new insights into brain and nervous system function and potentially transformative technological applications. In recent years, there has been a flurry of prominent international scientific academies and intergovernmental organizations calling for engagement with different publics on social, ethical, and regulatory issues related to neuroscience and neurotechnology advances. Neuroscientific activities and outputs are value-laden; they reflect the cultural, ethical, and political values that are prioritized in different societies at a given time and impact a variety of publics beyond the laboratory. The focus on engagement in neuroscience recognizes the breadth and significance of current neuroscience research whilst acknowledging the need for a neuroethical approach that explores the epistemic and moral values influencing the neuroscientific agenda. The field of neuroethics is characterized by its focus on the social, legal, and philosophical implications of neuroscience including its impact on cultural assumptions about the cognitive experience, identity, consciousness, and decision-making. Here, we outline a proposal for neuroethics engagement that reflects an enhanced and evolving understanding of public engagement with neuroethical issues to create opportunities to share ideation, decision-making, and collaboration in neuroscience endeavors for the benefit of society. We demonstrate the synergies between public engagement and neuroethics scholarship and activities that can guide neuroethics engagement.

Conclusion

Building on research from numerous fields and experiences of the past, engagement between neuroscience, neuroethics, and publics offers a critical lens for anticipating and interrogating the unique societal implications of neuroscience discovery and dissemination, and it can help guide regulation so that neuroscience products promote societal well-being. Engagement offers a bridge not only for neuroscientists and neuroethicists, but also for neuroethics and the public. It is possible that more widespread use of neuroethics engagement will reveal yet unknown or overlooked ethical conflicts in neuroscience that may take priority over the ones listed here.

We offer this paper as part of a continued and expanded dialogue on neuroethics engagement. The concept we propose will require the input of stakeholders beyond neuroethics, neuroscience, and public engagement in science to build practices that are inclusive and fit for purpose. Effective neuroethics engagement should be locally and temporally informed, lead to a culturally situated understanding of science and diplomacy, aim to understand the transnational nature of scientific knowledge, and be mindful of the challenges raised by how knowledge of discoveries circulates.

Tuesday, March 7, 2023

FTC to Ban BetterHelp from Revealing Consumers’ Data, Including Sensitive Mental Health Information, to Facebook and Others for Targeted Advertising

Federal Trade Commission
Press Release
Originally released 2 MAR 23

The Federal Trade Commission has issued a proposed order banning online counseling service BetterHelp, Inc. from sharing consumers’ health data, including sensitive information about mental health challenges, for advertising. The proposed order also requires the company to pay $7.8 million to consumers to settle charges that it shared consumers’ sensitive data with third parties such as Facebook and Snapchat for advertising after promising to keep such data private.

This is the first Commission action returning funds to consumers whose health data was compromised. In addition, the FTC’s proposed order will ban BetterHelp from sharing consumers’ personal information with certain third parties for re-targeting—the targeting of advertisements to consumers who previously had visited BetterHelp’s website or used its app, including those who had not signed up for the company’s counseling service. The proposed order also will limit the ways in which BetterHelp can share consumer data going forward.

"When a person struggling with mental health issues reaches out for help, they do so in a moment of vulnerability and with an expectation that professional counseling services will protect their privacy,” said Samuel Levine, Director of the FTC's Bureau of Consumer Protection. "Instead, BetterHelp betrayed consumers’ most personal health information for profit. Let this proposed order be a stout reminder that the FTC will prioritize defending Americans’ sensitive data from illegal exploitation."

California-based BetterHelp offers online counseling services under several names, including BetterHelp Counseling. It also markets services aimed at specific groups such as Faithful Counseling focused on Christians, Teen Counseling, which caters to teens and requires parental consent, and Pride Counseling, which is targeted to the LGBTQ community. Consumers interested in BetterHelp’s services must fill out a questionnaire that asks for sensitive mental health information—such as whether they have experienced depression or suicidal thoughts and are on any medications. They also provide their name, email address, birth date and other personal information. Consumers are then matched with a counselor and pay between $60 and $90 per week for counseling.

At several points in the signup process, BetterHelp promised consumers that it would not use or disclose their personal health data except for limited purposes, such as to provide counseling services. Despite these promises, BetterHelp used and revealed consumers’ email addresses, IP addresses, and health questionnaire information to Facebook, Snapchat, Criteo, and Pinterest for advertising purposes, according to the FTC’s complaint. 

For example, the company used consumers’ email addresses and the fact that they had previously been in therapy to instruct Facebook to identify similar consumers and target them with advertisements for BetterHelp’s counseling service, which helped the company bring in tens of thousands of new paying users and millions of dollars in revenue.

According to the complaint, BetterHelp pushed consumers to hand over their health information by repeatedly showing them privacy misrepresentations and nudging them with unavoidable prompts to sign up for its counseling service. Despite collecting such sensitive information, BetterHelp failed to maintain sufficient policies or procedures to protect it and did not obtain consumers’ affirmative express consent before disclosing their health data. BetterHelp also failed to place any limits on how third parties could use consumers’ health information—allowing Facebook and other third parties to use that information for their own internal purposes, including for research and development or to improve advertising.

Monday, March 6, 2023

Cognitive control and dishonesty

Speer, S. P., Smidts, A., & Boksem, M. A. (2022b).
Trends in Cognitive Sciences, 26(9), 796–808.
https://doi.org/10.1016/j.tics.2022.06.005

Abstract

Dishonesty is ubiquitous and imposes substantial financial and social burdens on society. Intuitively, dishonesty results from a failure of willpower to control selfish behavior. However, recent research suggests that the role of cognitive control in dishonesty is more complex. We review evidence that cognitive control is not needed to be honest or dishonest per se, but that it depends on individual differences in what we call one’s ‘moral default’: for those who are prone to dishonesty, cognitive control indeed aids in being honest, but for those who are already generally honest, cognitive control may help them cheat to occasionally profit from small acts of dishonesty. Thus, the role of cognitive control in (dis)honesty is to override the moral default.

Significance

The precise role of cognitive control in dishonesty has been debated for many years, but now important strides have been made to resolve this debate.

Recently developed paradigms that allow for investigating dishonesty on the level of the choice rather than on the level of the individual have substantially improved our understanding of the adaptive role of cognitive control in (dis)honesty.

These new paradigms revealed that the role of cognitive control differs across people: for cheaters, it helps them to sometimes be honest, while for those who are generally honest, it allows them to cheat on occasion. Thus, cognitive control is not required for (dis)honesty per se but is required to override one’s moral default to be either honest or to cheat.

Individual differences in moral default are driven by balancing motivation for reward and upholding a moral self-image.

From Concluding remarks

The Will and Grace hypotheses have been debated for quite some time, but recently important strides have been made to resolve this debate. Key elements in this proposed resolution are (i) recognizing that there is heterogeneity between individuals, some default more towards honesty, whereas others have a stronger inclination towards dishonesty; (ii) recognizing that there is heterogeneity within individuals, cheaters can be honest sometimes and honest people do cheat on occasion; and (iii) the development of experimental paradigms that allow dishonesty to be investigated on the level of the choice, rather than only on the level of the individual or the group. These developments have substantially enhanced understanding of the role of cognitive control in (dis)honesty: it is not required for being honest or dishonest per se, but it is required to override one’s moral default to either be honest or to cheat (Figure 1).

These insights open up novel research agendas and offer suggestions as to how to develop interventions to curtail dishonesty. Our review suggests three processes that may be targeted by such interventions: reward seeking, self-referential thinking, and cognitive control. Shaping contexts in ways that are conducive to honesty by targeting these processes may go a long way to increase honesty in everyday behavior.

Sunday, March 5, 2023

Four Recommendations for Ethical AI in Healthcare

Lindsey Jarrett
Center for Practical Bioethics

For several decades now, we have been having conversations about the impact that technology, from the voyage into space to the devices in our pockets, will have on society. The force with which technology alters our lives at times feels abrupt. It has us feeling excited one day and fearful the next.

If your experiences in life are not dependent on the use of technology—especially if your work still allows you to disconnect from the virtual world—it may feel like technology is working at a decent pace. However, many of us require some sort of technology to work, to communicate with others, to develop relationships, and to disseminate ideas into the world. Further, we also increasingly need technology to help us make decisions. These decisions vary in complexity from auto-correcting our messages to connecting with someone on a dating app, and without access to a piece of technology, it is increasingly challenging to rely on anything but technology.

Is the use of technology for decision making a problem in and of itself due to its entrenched use across our lives, or are there particular components and contexts that need attention? Your answer may depend on what you want to use it for, how you want others to use it to know you, and why the technology is needed over other tools. These considerations are widely discussed in the areas of criminal justice, finance, security, hiring practices, and conversations are developing in other sectors as issues of inequity, injustice and power differentials begin to emerge.

Issues emerging in the healthcare sector are of particular interest to many, especially since the coronavirus pandemic. As these conversations unfold, people start to unpack the various dilemmas that exist within the intersection of technology and healthcare. Scholars have engaged in theoretical rhetoric to examine ethical implications, researchers have worked to evaluate the decision-making processes of data scientists who build clinical algorithms, and healthcare executives have tried to stay ahead of regulation that is looming over their hospital systems.

However, recommendations tend to focus exclusively on those involved with algorithm creation and offer little support to other stakeholders across the healthcare industry. While this guidance turns into practice across data science teams building algorithms, especially those building machine learning based tools, the Ethical AI Initiative sees opportunities to examine decisions that are made regarding these tools before they get to a data scientist’s queue and after they are ready for production. These opportunities are where systemic change can occur, and without that level of change, we will continue to build products to put on the shelf and more products to fill the shelf when those fail.

Healthcare is not unique in facing these types of challenges, and I will outline a few recommendations on how an adapted, augmented system of healthcare technology can operate, as the industry prepares for more forceful regulation of the use of machine learning-based tools in healthcare practice.

Saturday, March 4, 2023

Divide and Rule? Why Ethical Proliferation is not so Wrong for Technology Ethics.

Llorca Albareda, J., Rueda, J.
Philos. Technol. 36, 10 (2023).
https://doi.org/10.1007/s13347-023-00609-8

Abstract

Although the map of technology ethics is expanding, the growing subdomains within it may raise misgivings. In a recent and very interesting article, Sætra and Danaher have argued that the current dynamic of sub-specialization is harmful to the ethics of technology. In this commentary, we offer three reasons to diminish their concern about ethical proliferation. We argue first that the problem of demarcation is weakened if we attend to other sub-disciplines of technology ethics not mentioned by these authors. We claim secondly that the logic of sub-specializations is less problematic if one adopts mixed models (combining internalist and externalist approaches) in applied ethics. We finally reject that clarity and distinction are necessary conditions for defining sub-fields within ethics of technology, defending the porosity and constructive nature of ethical disciplines.

Conclusion

Sætra and Danaher have initiated a necessary discussion about the increasing proliferation of neighboring sub-disciplines in technology ethics. Although we do not share their concern, we believe that this debate should continue in the future. Just as some subfields have recently been consolidated, others may do the same in the coming decades. The possible emergence of novel domain-specific technology ethics (say Virtual Reality Ethics) suggests that future proposals will point to as yet unknown positive and negative aspects of this ethical proliferation. In part, the creation of new sub-disciplines will depend on the increasing social prominence of other emerging and future technologies. The map of technology ethics thus includes uncharted waters and new subdomains to discover. This makes ethics of technology a fascinatingly lively and constantly evolving field of knowledge.

Friday, March 3, 2023

The Petrov dilemma: Moral responsibility in the age of ChatGPT

Julian Savulescu
NUS News
Originally published 13 FEB 23

Here is an excerpt:

3. Act ethically

What is to act ethically has been debated for thousands of years. But at its heart is a simple answer: add value to the lives of other people. For Lt-Col Petrov, doing nothing was the hardest choice: “All I had to do was to reach for the phone; to raise the direct line to our top commanders – but I couldn’t move. I felt like I was sitting on a hot frying pan.”

ChatGPT is an extremely powerful technology with enormous potential for human benefit. It can distill human knowledge and wisdom, quickly create factual and normative content, enable communication between speakers of different languages, and ultimately enhance or even surpass human creativity.

As with any powerful technology, there are significant risks. At present, there are numerous mistakes and false claims, but these presumably will be ironed out. Longer term, risks include threats to privacy; large-scale perpetuation of biases; plagiarism; displacement of human workers and creativity; and deception and malicious use.

Acting ethically requires realising the value of technology while ensuring the risks are reasonable. Institutional or structural changes may be necessary, for example, labelling and creation of watermarks to prevent plagiarism.

To be human is to make your own ethical decisions

One ethical question that confronts regulators and industry is: should a driverless car swerve and sacrifice an innocent bystander (pedestrian or cyclist) to save a greater number of lives in the vehicle? I asked ChatGPT whether it is right to kill one to save five. Here is the answer:

“Whether or not it is right to kill one person to save five people is a complex ethical question. Some people may argue that taking one life to save five is the more logical option, and is therefore justified. However, it is also important to consider the ethical principle of respect for autonomy.

“This principle states that all individuals have an inherent right to make autonomous and informed decisions about their lives, including their own death. Taking this into consideration, it may not be morally right to force someone to accept the sacrifice of their life in order to save the lives of five other people. Ultimately, the answer to this question is subjective and depends on one’s own ethical and moral beliefs.”

But as driverless cars are rolled out, we won’t have the luxury of remaining on the fence, or relying on drivers’ reflexes to get us out of a decision. We will have to make moral choices and give reasons, and apply them consistently to programming the vehicles. This requires deep – and human – ethical thinking and expertise.

Thursday, March 2, 2023

Ethical principles of traditional Indigenous medicine to guide western psychedelic research and practice

Celidwen, Y., Redvers, N., Githaiga, C., et al.
(2023). The Lancet Regional Health
Americas, 18, 100410.
https://doi.org/10.1016/j.lana.2022.100410

Summary

The resurgence of Western psychedelic research and practice has led to increasing concerns from many Indigenous Nations regarding cultural appropriation, lack of recognition of the sacred cultural positioning of these medicines, exclusionary practices in research and praxis, and patenting of traditional medicines. Indigenous voices and leadership have been notably absent from the Western psychedelic field, which is currently widely represented by Westerners. An Indigenous-led, globally represented group of practitioners, activists, scholars, lawyers, and human rights defenders came together with the purpose of formulating a set of ethical guidelines concerning the current use of traditional Indigenous medicines in Western psychedelic research and practice. A global Indigenous consensus process of knowledge-gathering was engaged which identified eight interconnected ethical principles, including: Reverence, Respect, Responsibility, Relevance, Regulation, Reparation, Restoration, and Reconciliation. A summary of the work is presented here with suggested ethical actions for moving forward within Western psychedelic research and practice spaces.

The way forward

To help orient this consensus process more directly towards solutions, we further summarize in Table 2 additional recommended practical solutions within Western psychedelic research and practice. This consensus process emphasized that by continuing the extraction and commercialization of various traditional Indigenous medicines, Western institutions are destroying the core themes of Indigenous life in the following important ways: (i) the glorification of psychedelics excites harmful narratives of exceptionalism that result in spiritual consumerism and exploitative tourism that is very often managed by Westerners; (ii) the unsustainable foraging of Indigenous medicines makes them increasingly unavailable for local use; (iii) Indigenous Peoples are exposed to extreme violence from the infiltration of drug cartels into traditional territories that often source raw materials from forest habitats (e.g., MDMA); (iv) the spreading of false information about a particular plant medicine being the one key pill to human enlightenment. While these medicines may contribute towards powerful and transformative solutions to the health and planetary crisis, they can also lose their meaning when deprived of their cultural container. We hope that the eight ethical principles detailed here spark important conversation and action within the psychiatry and psychedelic research community towards better, more respectful relations.

Wednesday, March 1, 2023

Cognitive Control Promotes Either Honesty or Dishonesty, Depending on One's Moral Default

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2021).
The Journal of Neuroscience, 41(42), 8815–8825. 
https://doi.org/10.1523/jneurosci.0666-21.2021

Abstract

Cognitive control is crucially involved in making (dis)honest decisions. However, the precise nature of this role has been hotly debated. Is honesty an intuitive response, or is will power needed to override an intuitive inclination to cheat? A reconciliation of these conflicting views proposes that cognitive control enables dishonest participants to be honest, whereas it allows those who are generally honest to cheat. Thus, cognitive control does not promote (dis)honesty per se; it depends on one's moral default. In the present study, we tested this proposal using electroencephalograms in humans (males and females) in combination with an independent localizer (Stroop task) to mitigate the problem of reverse inference. Our analysis revealed that the neural signature evoked by cognitive control demands in the Stroop task can be used to estimate (dis)honest choices in an independent cheating task, providing converging evidence that cognitive control can indeed help honest participants to cheat, whereas it facilitates honesty for cheaters.

Significance Statement

Dishonesty causes enormous economic losses. To target dishonesty with interventions, a rigorous understanding of the underlying cognitive mechanisms is required. A recent study found that cognitive control enables honest participants to cheat, whereas it helps cheaters to be honest. However, it is evident that a single study does not suffice as support for a novel hypothesis. Therefore, we tested the replicability of this finding using a different modality (EEG instead of fMRI) together with an independent localizer task to avoid reverse inference. We find that the same neural signature evoked by cognitive control demands in the localizer task can be used to estimate (dis)honesty in an independent cheating task, establishing converging evidence that the effect of cognitive control indeed depends on a person's moral default.

From the Discussion section

Previous research has deduced the involvement of cognitive control in moral decision-making through relating observed activations to those observed for cognitive control tasks in prior studies (Greene and Paxton, 2009; Abe and Greene, 2014) or with the help of meta-analytic evidence (Speer et al., 2020) from the Neurosynth platform (Yarkoni et al., 2011). This approach, which relies on reverse inference, must be used with caution because any given brain area may be involved in several different cognitive processes, which makes it difficult to conclude that activation observed in a particular brain area represents one specific function (Poldrack, 2006). Here, we extend prior research by providing more rigorous evidence by means of explicitly eliciting cognitive control in a separate localizer task and then demonstrating that this same neural signature can be identified in the Spot-The-Difference task when participants are exposed to the opportunity to cheat. Moreover, using similarity analysis we provide a direct link between the neural signature of cognitive control, as elicited by the Stroop task, and (dis)honesty by showing that time-frequency patterns of cognitive control demands in the Stroop task are indeed similar to those observed when tempted to cheat in the Spot-The-Difference task. These results provide strong evidence that cognitive control processes are recruited when individuals are tempted to cheat.