Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Engagement.

Saturday, January 25, 2025

Mental health apps need a complete redesign

Benjamin Kaveladze
Statnews.com
Originally posted December 9, 2024

The internet has transformed the ways we access mental health support. Today, anyone with a computer or smartphone can use digital mental health interventions (DMHIs) like Calm for insomnia, PTSD Coach for post-traumatic stress, and Sesame Street’s Breathe, Think, Do with Sesame for anxious kids. Given that most people facing mental illness don’t access professional help through traditional sources like therapists or psychiatrists, DMHIs’ promise to provide effective and trustworthy support globally and equitably is a big deal.

But before consumer DMHIs can transform access to effective support, they must overcome an urgent problem: Most people don’t want to use them. Our best estimate is that 96% of people who download a mental health app will have entirely stopped using it just 15 days later. The field of digital mental health has been trying to tackle this profound engagement problem for years, with little progress. As a result, the wave of pandemic-era excitement and funding for digital mental health is drying up. To advance DMHIs toward their promise of global impact, we need a revolution in these tools’ design.


Here are some thoughts:

This article highlights the critical engagement challenges faced by digital mental health interventions (DMHIs), with 96% of users discontinuing app use within 15 days. This striking statistic points to a need for a fundamental redesign of mental health apps, which currently rely heavily on outdated and conventional approaches reminiscent of 1990s self-help handbooks. The author argues that DMHIs suffer from a lack of creative innovation, as developers have been constrained by traditional therapeutic frameworks, failing to explore the broader potential of technology to effect psychological change.

To address these issues, Kaveladze calls for a radical shift in DMHI design, advocating for the integration of insights from fields like video game design, advertising, and social media content creation. These disciplines excel in engaging users and could provide valuable strategies for creating more appealing and effective mental health tools. This opinion piece also emphasizes the importance of rigorous evaluation processes to ensure new DMHIs are not only effective but also safe, protecting users from potential harms, including privacy breaches and unintended psychological effects.

Psychologists should take note of these concerns and opportunities. When recommending mental health apps to clients, clinicians must critically assess the app's ability to sustain engagement and its adherence to evidence-based practices. Privacy and safety should be paramount considerations, particularly given the sensitive nature of mental health data. Furthermore, psychologists have an essential role to play in guiding the development and evaluation of DMHIs to ensure they meet ethical and clinical standards. Collaborative efforts between clinicians and technology developers could lead to tools that are both innovative and aligned with the needs of diverse populations, including those with limited access to traditional mental health services.

Monday, January 8, 2024

Human-Algorithm Interactions Help Explain the Spread of Misinformation

McLoughlin, K. L., & Brady, W. J. (2023).
Current Opinion in Psychology, 101770.

Abstract

Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users’ news feeds to maximize attentional capture, moral and emotional information are privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach that combines person-centered and design-centered interventions to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Here is my summary:

This research highlights the crucial role of human-algorithm interactions in driving the spread of misinformation online. It argues that both human attentional biases and algorithmic amplification mechanisms contribute to this phenomenon.

Firstly, humans naturally gravitate towards information that evokes moral and emotional responses. This inherent bias makes us more susceptible to engaging with and sharing misinformation that leverages these emotions, such as outrage, fear, or anger.

Secondly, social media algorithms are designed to maximize user engagement, which often translates to prioritizing content that triggers strong emotions. This creates a feedback loop where emotionally charged misinformation is amplified, further attracting human attention and fueling its spread.
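
To make that feedback loop concrete, here is a minimal toy simulation. It is purely illustrative and is not taken from McLoughlin and Brady's paper: the post counts, feed size, and engagement rates are invented assumptions, and the hypothetical `simulate_feed` function sketches a generic rich-get-richer ranking loop rather than any real platform's algorithm.

```python
import random

def simulate_feed(n_posts=200, n_rounds=30, emotional_share=0.2,
                  feed_size=50, seed=1):
    """Toy model (illustrative only) of the human-algorithm feedback loop:
    the 'algorithm' surfaces posts with the most accumulated engagement,
    and the 'human bias' engages more often with emotionally charged posts."""
    rng = random.Random(seed)
    posts = [{"emotional": rng.random() < emotional_share, "engagement": 1.0}
             for _ in range(n_posts)]

    for _ in range(n_rounds):
        # Algorithmic curation: show the posts with the highest past engagement.
        feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:feed_size]
        for post in feed:
            # Attentional bias: emotional content is clicked/shared at a higher rate.
            rate = 0.20 if post["emotional"] else 0.10
            if rng.random() < rate:
                post["engagement"] += 1.0  # more engagement -> more future exposure

    # Measure how emotional the very top of the ranking has become.
    top = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:20]
    return sum(p["emotional"] for p in top) / len(top)

if __name__ == "__main__":
    print("Emotional share of the overall pool: 20%")
    print(f"Emotional share at the top of the feed: {simulate_feed():.0%}")
```

Even with a modest assumed bias (a 20% versus 10% engagement rate), ranking by past engagement tends to leave emotional posts overrepresented at the top of the feed relative to their share of the pool, which is the amplification dynamic the paper describes.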

The research concludes that effectively combating misinformation requires a multifaceted approach. It emphasizes the need for interventions that address both human psychology and algorithmic design. This includes promoting media literacy, encouraging critical thinking skills, and designing algorithms that prioritize factual accuracy and diverse perspectives over emotional engagement.

Wednesday, September 28, 2022

Attributions of emotion and reduced attitude openness prevent people from engaging others with opposing views

Teeny, J. D., & Petty, R. E. (2022).
Journal of Experimental Social Psychology, 102, 104373.
https://doi.org/10.1016/j.jesp.2022.104373

Abstract

People exhibit a general unwillingness to engage others on social issues for which they disagree (e.g., political elections, police funding, vaccine mandates, etc.), a phenomenon that contributes to the political polarization vexing societies today. Previous research has largely attributed this unwillingness to the perception that such counterattitudinal targets are extreme, certain, and/or difficult to change on these topics. However, the present research offers an additional theoretical explanation. First, we introduce a less studied perception of targets, their affective-cognitive attitude basis (i.e., the degree to which an attitude is seemingly based on emotions versus reasons) that is critical in determining engagement willingness. Specifically, perceivers are less willing to engage with targets who are perceived to hold an affective (vs. cognitive) attitude basis on a topic, because these targets are inferred to have low attitudinal openness on it (i.e., expected to be unlikely to genuinely “hear out” the perceiver). Second, we use a series of multimethod studies with varied U.S. samples to show why this person perception process is central to understanding counterattitudinal engagement. Compared to proattitudinal targets, perceivers on both sides of an issue ascribe more affective (vs. cognitive) attitude bases to rival (counterattitudinal) targets, which cues inferences of reduced attitudinal openness, thereby diminishing people's willingness to engage with these individuals.

From the General Discussion

One of the foremost paths to combatting political polarization is to have people of opposing views engage with counterattitudinal others (e.g., Broockman & Kalla, 2016). Unfortunately, people tend to be unwilling to do this, which previous research has largely attributed to perceptions about the target’s attitudinal extremity, certainty, and the perceived difficulty required to change the target’s mind. However, in the current research, effects on these measures were not only inconsistent (see Footnotes 2 and 4 as well as the web appendix), but they also had reduced explanatory power relative to the focal perceptions outlined here. That is, regardless of how certain, extreme, or difficult to change a counterattitudinal target was perceived to be, it was the affect (relative to cognition) ascribed to their attitude that predicted inferences of reduced attitudinal openness, which in turn determined bipartisan engagement.

These findings emerged across multiple topics, varied study designs, and in light of targets presenting actual rationale for their opinions.  Moreover, post-hoc analyses reveal that these effects were neither moderated by which side of the issue the participants took, nor the participant’s ideological stance (i.e., both liberals and conservatives demonstrated these effects), nor the participants’ own perceived attitude basis.

Saturday, February 29, 2020

Does Morality Matter? Depends On Your Definition Of Right And Wrong

Hannes Leroy
forbes.com
Originally posted January 30, 2020

Here is an excerpt:

For our research into morality we reviewed some 300 studies on moral leadership. We discovered that morality is – generally speaking – a good thing for leadership effectiveness but it is also a double-edged sword about which you need to be careful and smart. 

To do this, there are three basic approaches.

First, followers can be inspired by a leader who advocates the highest common good for all and is motivated to contribute to that common good from an expectation of reciprocity (servant leadership; consequentialism).

Second, followers can also be inspired by a leader who advocates the adherence to a set of standards or rules and is motivated to contribute to the clarity and safety this structure imposes for an orderly society (ethical leadership; deontology).

Third and finally, followers can also be inspired by a leader who advocates for moral freedom and corresponding responsibility and is motivated to contribute to this system in the knowledge that others will afford them their own moral autonomy (authentic leadership; virtue ethics).

The info is here.

Wednesday, June 19, 2019

The Ethics of 'Biohacking' and Digital Health Data

Sy Mukherjee
Fortune.com
Originally posted June 6, 2019

Here is an excerpt:

Should personal health data ownership be a human right? Do digital health program participants deserve a cut of the profits from the information they provide to genomics companies? How do we get consumers to actually care about the privacy and ethics implications of this new digital health age? Can technology help (and, more importantly, should it have a responsibility to) bridge the persistent gap in representation for women in clinical trials? And how do you design a fair system of data distribution in an age of a la carte genomic editing, leveraged by large corporations, and seemingly ubiquitous data mining from consumers?

Ok, so we didn’t exactly come to definitive conclusions about all that in our limited time. But I look forward to sharing some of our panelists’ insights in the coming days. And I’ll note that, while some of the conversation may have sounded like dystopic cynicism, there was a general consensus that collective regulatory changes, new business models, and a culture of concern for data privacy could help realize the potential of digital health while mitigating its potential problems.

The information and interview are here.