Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Transparency.

Wednesday, March 13, 2024

None of these people exist, but you can buy their books on Amazon anyway

Conspirador Norteno
Substack.com
Originally published 12 Jan 24

Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”, which is the 857,233rd most popular book on the Kindle Store as of January 12th, 2024. There are, however, a few obvious problems. In addition to the unnecessary repetition of the middle initial and last name, Mr. N. Martin N. Martin’s official portrait is a GAN-generated face, and (as we’ll see shortly), his sole published work is strangely similar to several books by another Amazon author with a GAN-generated face.

In an interesting twist, Amazon’s recommendation system suggests another author with a GAN-generated face in the “Customers also bought items by” section of Jason N. Martin N. Martin’s author page. Further exploration of the recommendations attached to both of these authors and their published works reveals a set of a dozen Amazon authors with GAN-generated faces and at least one published book. Amazon’s recommendation algorithms reliably link these authors together; whether this is a sign that the twelve author accounts are actually run by the same entity or merely an artifact of similarities in the content of their books is unclear at this point in time. 
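The cluster-hunting approach described in the excerpt (following "Customers also bought items by" links between suspicious author pages and seeing which accounts keep turning up together) can be pictured as a tiny graph exercise. The sketch below is only an illustration of that idea, not the newsletter's actual method; the author names and recommendation links are invented.

    # Minimal sketch of the clustering idea described in the excerpt: treat each
    # Amazon author page as a node and each "Customers also bought items by"
    # recommendation as an edge, then pull out groups of mutually linked authors.
    # The author names and edges below are invented for illustration only.
    from collections import defaultdict

    recommendation_edges = [
        ("Jason N. Martin N. Martin", "Author B"),
        ("Author B", "Author C"),
        ("Author C", "Jason N. Martin N. Martin"),
        ("Author D", "Author E"),
    ]

    # Build an undirected adjacency list.
    graph = defaultdict(set)
    for a, b in recommendation_edges:
        graph[a].add(b)
        graph[b].add(a)

    def clusters(graph):
        """Collect connected components ("author clusters") via a simple search."""
        seen, groups = set(), []
        for start in graph:
            if start in seen:
                continue
            group, queue = set(), [start]
            while queue:
                node = queue.pop()
                if node in group:
                    continue
                group.add(node)
                queue.extend(graph[node] - group)
            seen |= group
            groups.append(group)
        return groups

    for group in clusters(graph):
        if len(group) >= 3:  # arbitrary threshold for a suspicious cluster
            print("Possible linked author network:", sorted(group))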


Here's my take:

Forget literary pen names. AI is creating a new trend on Amazon: ghostwritten books credited to authors who don't exist. These novels, poetry collections, and even children's stories boast intriguing titles and blurbs, yet none of the authors on their covers are real people. Instead, their creations spring from the algorithms of powerful language models.

Here's the gist:
  • AI churns out content: Fueled by vast datasets of text and code, AI can generate chapters, characters, and storylines at an astonishing pace.
  • Ethical concerns: Questions swirl around copyright, originality, and the very nature of authorship. Is an AI-generated book truly a book, or just a clever algorithm mimicking creativity?
  • Quality varies: While some AI-written books garner praise, others are criticized for factual errors, nonsensical plots, and robotic dialogue.
  • Transparency is key: Many readers feel deceived by the lack of transparency about AI authorship. Should books disclose their digital ghostwriters?
This evolving technology challenges our understanding of literature and raises questions about the future of authorship. While AI holds potential to assist and inspire, the human touch in storytelling remains irreplaceable. So, the next time you browse Amazon, remember: the author on the cover might not be who they seem.

Sunday, January 21, 2024

Doctors With Histories of Big Malpractice Settlements Now Work for Insurers

P. Rucker, D. Armstrong, & D. Burke
Propublica.org
Originally published 15 Dec 23

Here is an excerpt:

Patients and the doctors who treat them don’t get to pick which medical director reviews their case. An anesthesiologist working for an insurer can overrule a patient’s oncologist. In other cases, the medical director might be a doctor like Kasemsap who has left clinical practice after multiple accusations of negligence.

As part of a yearlong series about how health plans refuse to pay for care, ProPublica and The Capitol Forum set out to examine who insurers picked for such important jobs.

Reporters could not find any comprehensive database of doctors working for insurance companies or any public listings by the insurers who employ them. Many health plans also farm out medical reviews to other companies that employ their own doctors. ProPublica and The Capitol Forum identified medical directors through regulatory filings, LinkedIn profiles, lawsuits and interviews with insurance industry insiders. Reporters then checked those names against malpractice databases, state licensing board actions and court filings in 17 states.

Among the findings: The Capitol Forum and ProPublica identified 12 insurance company doctors with either a history of multiple malpractice payments, a single payment in excess of $1 million or a disciplinary action by a state medical board.

One medical director settled malpractice cases with 11 patients, some of whom alleged he bungled their urology surgeries and left them incontinent. Another was reprimanded by a state medical board for behavior that it found to be deceptive and dishonest. A third settled a malpractice case for $1.8 million after failing to identify cancerous cells on a pathology slide, which delayed a diagnosis for a 27-year-old mother of two, who died less than a year after her cancer was finally discovered.

None of this would have been easily visible to patients seeking approvals for care or payment from insurers who relied on these medical directors.


The ethical implications in this article are staggering.  Here are some quick points:

Conflicted Care: In a concerning trend, some US insurers are employing doctors with histories of malpractice settlements to decide whether patients qualify for coverage of recommended treatments. Do these still-licensed reviewers actually understand best practices?

Financial Bias: Critics fear these doctors, having faced financial repercussions for past care decisions, might prioritize minimizing payouts over patient needs, potentially leading to denied claims and delayed care. In other words, do the reviewers carry an inherent bias against patients, given that former patients have filed complaints against them?

Transparency Concerns: The lack of clear disclosure about these doctors' backgrounds raises concerns about transparency and potential conflicts of interest within the healthcare system.

In essence, this is a poor system for providing high-quality medical review.

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions them for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Identifying ownership of IP as AI begins to develop solutions in a faster, smarter way compared to humans.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples already biased if the samples are collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works only in a subset of a population – or have just a partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the outcome of proposed drug compounds is not as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
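Shieh-Newton's point about deliberate curation of training databases can be made concrete with a simple representation audit run before model training. The sketch below is a generic illustration assuming a pandas DataFrame with hypothetical demographic columns; it is not tied to any real dataset or to the systems discussed in the interview.

    # Minimal sketch of the kind of representation audit described above:
    # before training a generative model, tabulate how well each demographic
    # group is represented in the training set. Column names are hypothetical.
    import pandas as pd

    def representation_report(df: pd.DataFrame, columns=("gender", "race")) -> None:
        """Print the share of each group and flag groups below a chosen floor."""
        floor = 0.05  # arbitrary threshold: flag groups under 5% of the data
        for col in columns:
            shares = df[col].value_counts(normalize=True)
            print(f"\n{col} representation:")
            for group, share in shares.items():
                flag = "  <-- underrepresented" if share < floor else ""
                print(f"  {group}: {share:.1%}{flag}")

    # Example with made-up patient records
    patients = pd.DataFrame({
        "gender": ["F", "M", "M", "M", "M", "F", "M", "M"],
        "race": ["White", "White", "White", "Black", "White", "White", "Asian", "White"],
    })
    representation_report(patients)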


Here is my take:

 One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias (see the sketch below).
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.
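As a rough illustration of the regular bias audit mentioned in the first bullet above, one could periodically compare a deployed model's decision rates across demographic groups. The sketch below is a minimal, hypothetical example (the groups and decisions are made up), not a complete fairness analysis.

    # Minimal sketch of a recurring bias audit on a deployed model's decisions:
    # compare approval rates across demographic groups (demographic parity).
    # Group labels and decisions below are invented for illustration.
    from collections import defaultdict

    def approval_rates(records):
        """records: iterable of (group, approved) pairs -> {group: approval rate}."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in records:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    decisions = [
        ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
        ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
    ]

    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    print("Approval rates by group:", rates)
    print(f"Demographic parity gap: {gap:.2f}")  # large gaps warrant investigation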

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly more sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are increasingly growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, being targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts. This means that the systems should be designed in a way that makes it clear to users that they are not sentient beings. The second policy is to create systems that are clearly deserving of moral consideration as sentient beings. This means that the systems should be designed to have the same moral status as humans or other animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems that are plausibly debatable as sentient.
  • Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors. By contrast, decision-makers do disregard advice when it comes from a human with a criminal conviction.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that could help address AI-induced belief distortion:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.
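To picture how a decision aid can make values and weights explicit, as described in the first part of the excerpt, imagine a system that scores each option against named principles using visible weights. The sketch below is purely hypothetical (the principles, weights, and scores are invented) and is not the system Meier and colleagues actually describe.

    # Illustrative sketch only: a transparent decision aid that scores options
    # against named ethical principles using explicit, inspectable weights.
    # The principles, weights, and scores here are invented for the example.

    weights = {"autonomy": 0.6, "beneficence": 0.4}  # visible, adjustable weights

    options = {
        "respect patient's refusal": {"autonomy": 0.9, "beneficence": 0.3},
        "treat over objection":      {"autonomy": 0.1, "beneficence": 0.8},
    }

    def score(option_values, weights):
        """Weighted sum of principle scores; every term is visible to the committee."""
        return sum(weights[p] * option_values[p] for p in weights)

    for name, values in options.items():
        print(name, "->", round(score(values, weights), 2))

    # A committee that disagrees with the recommendation can point to the exact
    # weight it would change (e.g., raising beneficence), which is the
    # transparency benefit discussed in the excerpt.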

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published 26 June 23

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful at medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.


Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.
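One way to imagine what such a disclosure might look like in practice is a simple structured record. The fields below are only a guess based on the items listed in this summary (data sources, demographics, augmentation techniques, detected biases) plus the policy-enforcement reporting discussed in the excerpt; they are not a format proposed by Narayanan and Kapoor.

    # Hypothetical sketch of a transparency-report record covering the items
    # mentioned above. The field names are illustrative, not a proposed standard.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class TransparencyReport:
        model_name: str
        reporting_period: str
        data_sources: list = field(default_factory=list)          # where training data came from
        training_demographics: dict = field(default_factory=dict) # representation in training data
        augmentation_techniques: list = field(default_factory=list)
        biases_detected: list = field(default_factory=list)       # issues found and mitigations
        policy_enforcement_actions: int = 0                       # e.g., flagged or blocked outputs

    report = TransparencyReport(
        model_name="example-model",
        reporting_period="2023-Q2",
        data_sources=["licensed text corpus", "public web crawl"],
        biases_detected=["gender skew in occupation prompts"],
        policy_enforcement_actions=1240,
    )
    print(json.dumps(asdict(report), indent=2))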

Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B. et al. (2023).
Nature Machine Intelligence.

The recent releases of large-scale language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflect this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.

(cut)

Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption [25] has been that creatives and knowledge workers would remain much less impacted by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.

Wednesday, May 17, 2023

In Search of an Ethical Constraint on Hospital Revenue

Lauren Taylor
The Hastings Center
Originally published 14 APR 23

Here are two excerpts:

A physician whistleblower came forward alleging that Detroit Medical Center, owned by for-profit Tenet Healthcare, refused to halt elective procedures in early days of the pandemic, even after dozens of patients and staff were exposed to a COVID-positive patient undergoing an organ transplant. According to the physician, Tenet persisted on account of the margin it stood to generate. “Continuing to do this [was] truly a crime against patients,” recalled Dr. Shakir Hussein, who was fired shortly thereafter.

Earlier in 2022, nonprofit Bon Secours health system was investigated for its strategic downsizing of a community hospital in Richmond, Va., which left a predominantly Black community lacking access to standard medical services such as MRIs and maternity care. Still, the hospital managed to turn a $100 million margin, which buoyed the system’s $1 billion net revenue in 2021. “Bon Secours was basically laundering money through this poor hospital to its wealthy outposts,” said one emergency department physician who had worked at Richmond Community Hospital. “It was all about profits.”  

The academic literature further substantiates concerns about hospital margin maximization. One paper examining the use of municipal, tax-exempt debt among nonprofit hospitals found evidence of arbitrage behavior, where hospitals issued debt not to invest in new capital (the stated purpose of most municipal debt issuances) but to invest the proceeds of the issuance in securities and other endowment accounts. A more recent paper, focused on private equity-owned hospitals, found that facilities acquired by private equity were more likely to “add specific, profitable hospital-based services and less likely to add or continue those with unreliable revenue streams.” These and other findings led Donald Berwick to write that greed poses an existential threat to U.S. health care.

None of the hospital actions described above are necessarily illegal but they certainly bring long-lurking issues within bioethics to the fore. Recognizing that hospitals are resource-dependent organizations, what normative, ethical responsibilities–or constraints–do they face with regard to revenue-generation? A review of the health services and bioethics literature to date turns up three general answers to this question, all of which are unsatisfactory.

(cut)

In sum, we cannot rely on laws alone to provide an effective check on hospital revenue generation due to the law’s inevitably limited scope. We therefore must identify an internalized ethic to guide hospital revenue generation. The concept of an organizational mission is a weak check on nonprofit hospitals and virtually meaningless among for-profit hospitals, and reliance on professionalism is incongruous with the empirical data about who has final decision-making authority over hospitals today. We need a new way to conceptualize hospital responsibilities.

Two critiques of this idea merit confrontation. The first is that there is no urgent need for an internalized constraint on revenue generation because more than half of hospitals are currently operating in the red; seeking to curb their revenue further is counterproductive. But just because a proportion of this sector is in the red does not undercut the egregiousness of the hospital actions described earlier. Moreover, if hospitals are running a deficit in part because they choose not to undertake unethical action to generate revenue, then any rule developed saying they can’t undertake ethical actions to generate revenue won’t apply to them. The second critique is that the current revenues that hospitals generate are legitimate because they bolster institutional “rainy day funds” of sorts, which can be deployed to help people and communities in need at a future date. But with a declining national life expectancy, a Black maternal mortality rate hovering at roughly that of Tajikistan, and medical debt the leading cause of personal bankruptcy in the U.S. – it is already raining. Increasing reserves, by any means, can no longer be defended with this logic.

Monday, May 1, 2023

Take your ethics and shove it! Narcissists' angry responses to ethical leadership

Fox, F. R., Smith, M. B., & Webster, B. D. (2023). 
Personality and Individual Differences, 204, 112032.
https://doi.org/10.1016/j.paid.2022.112032

Abstract

Evoking the agentic model of narcissism, the present study contributes to understanding the nuanced responses to ethical leadership that result from the non-normative, dark personality trait of narcissism. We draw from affective events theory to understand why narcissists respond to ethical leadership with feelings of anger, which then results in withdrawal behaviors. We establish internal validity by testing our model via an experimental design. Next, we establish external validity by testing our theoretical model in a field study of university employees. Together, results from the studies suggest anger mediates the positive relationship between narcissism and withdrawal under conditions of high ethical leadership. We discuss the theoretical and practical implications of our findings.

From the Introduction:

Ethical leaders model socially acceptable behavior that is prosocial in nature while matching an individual moral-compass with the good of the group (Brown et al., 2005). Ethical leadership is defined as exalting the moral person (i.e., being an ethical example, fair treatment) and the moral manager (i.e., encourage normative behavior, discourage unethical behavior), and has been shown to be related to several beneficial organizational outcomes (Den Hartog, 2015; Mayer et al., 2012). The construct of ethical leadership is not only based on moral/ethical principles, but overtly promoting normative communally beneficial ideals and establishing guidelines for acceptable behavior (Bedi et al., 2016; Brown et al., 2005). Ethical leaders cultivate a reputation founded upon doing the right thing, treating others fairly, and thinking about the common good.

As a contextual factor, ethical leadership presents a situation where employees are presented with expectations and clear standards for normative behavior. Indeed, ethical leaders, by their behavior, convey what behavior is expected, rewarded, and punished (Brown et al., 2005). In other words, ethical leaders set the standard for behavior in the organization and are effective at establishing fair and transparent processes for rewarding performance. Consequently, ethical leadership has been shown to be positively related to task performance and citizenship behavior and negatively related to deviant behaviors (Peng & Kim, 2020).


This research examines how narcissistic individuals respond to ethical leadership, which is characterized by fairness, transparency, and concern for the well-being of employees. The study found that narcissistic individuals are more likely than non-narcissistic individuals to respond to ethical leadership with anger and hostility. The researchers suggest this may be because narcissists prioritize their own self-interest and are less concerned with the well-being of others. Ethical leadership, which promotes the well-being of employees, may therefore be perceived as a threat to their self-interest, leading to a negative response.

The study also found that when narcissists were in a leadership position, they were less likely to engage in ethical leadership behaviors themselves. This suggests that narcissistic individuals may not only be resistant to ethical leadership but may also be less likely to exhibit these behaviors themselves. The findings of this research have important implications for organizations and their leaders, as they highlight the challenges of promoting ethical leadership in the presence of narcissistic individuals.

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 MAR 23

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon FitBits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before they fully understand the risks of doing so.


In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Sunday, April 9, 2023

Clarence Thomas Has Reportedly Been Accepting Gifts From Republican Megadonor Harlan Crow For Decades—And Never Disclosed It

Alison Durkee
Forbes.com
Originally posted 6 APR 23

Supreme Court Justice Clarence Thomas has been accepting trips from Republican megadonor Harlan Crow for more than 20 years without disclosing them as required, ProPublica reports—including trips on private jets and yachts that could run afoul of the law—the latest in a series of ethical scandals the conservative justice has faced amid calls for justices to follow an ethics code.

Key Facts
  • Thomas has repeatedly used Crow’s private jet for travel and vacationed with him, including on his superyacht and at Crow’s private resort in the Adirondacks, where guests stay for free, ProPublica reports, citing flight records, internal documents and interviews with Crow’s employees.
  • The justice has stayed at Crow’s resort “every summer for more than two decades,” according to ProPublica, and reportedly makes “regular use” of Crow’s private jet, including as recently as last year and for as short as a three-hour trip from Washington, D.C., to Connecticut in 2016.
  • While Supreme Court justices are not bound to the same code of ethics as lower federal court judges are, they do submit financial disclosures and are subject to laws that require disclosing gifts that are more than $415 in value, including any transportation that substitutes for commercial transport.
  • Experts cited by ProPublica believe Thomas may have violated federal disclosure laws by not disclosing his yacht and jet travel, and that the stays at Crow’s resort may also have required disclosure because the resort is owned by Crow’s company rather than him personally.
  • Thomas’ stays at Crow’s resort also raise ethics concerns given the other guests Crow—a real estate magnate and Republican megadonor—has invited to the resort and on his yacht at the same time, which ProPublica reports include GOP donors, executives at Verizon and PricewaterhouseCoopers, leaders from right-wing think tank American Enterprise Institute, Federalist Society leader Leonard Leo and Mark Paoletta, the general counsel for the Trump Administration’s Office of Management and Budget who now serves as Thomas’ wife’s attorney.

Thursday, March 16, 2023

Drowning in Debris: A Daughter Faces Her Mother’s Hoarding

Deborah Derrickson Kossmann
Psychotherapy Networker
March/April 2023

Here is an excerpt:

My job as a psychologist is to salvage things, to use the stories people tell me in therapy and help them understand themselves and others better. I make meaning out of the joy and wreckage of my own life, too. Sure, I could’ve just hired somebody to shovel all my mother’s mess into a dumpster, but I needed to be my family’s archaeologist, excavating and preserving what was beautiful and meaningful. My mother isn’t wrong to say that holding on to some things is important. Like her, I appreciate connections to the past. During the cleaning, I found photographs, jewelry passed down over generations, and my bronzed baby shoes. I treasure these things.

“Maybe I failed by not following anything the psychology books say to do with a hoarding client,” I tell my sister over the phone. “Sometimes I still feel like I wasn’t compassionate enough.”

“You handled it as best you could as her daughter,” my sister says. “You’re not her therapist.”

After six years, my mother has finally stopped saying she’s a “prisoner” at assisted living. She tells me she’s part of a “posse” of women who eat dinner together. My sister decorated her studio apartment beautifully, but the cluttering has begun again. Piles of magazines and newspapers sit in corners of her room. Sometimes, I feel the rage and despair these behaviors trigger in me. I still have nightmares where I drive to my mother’s house, open the door, and see only darkness, black and terrifying, like I’m looking into a deep cave. Then, I’m fleeing while trying to wipe feces off my arm. I wake up feeling sadness and shame, but I know it isn’t my own.

A few weeks ago, I pulled up in front of my mother’s building after taking her to the cardiologist. We turned toward each other and hugged goodbye. She opened the car door with some effort and determinedly waved off my help before grabbing the bag of books I’d brought for her.

“I can do it, Deborah,” she snapped. But after taking a few steps toward the building entrance, she turned around to look at me and smiled. “Thank you,” she said. “I really appreciate all you do for me.” She added, softly, “I know it’s a lot.”


The article is an important reminder that practicing psychologists cope with their own stressors, family dynamics, and unpleasant emotional experiences.  Psychologists are humans with families, value systems, emotions, beliefs, and shortcomings.

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).
https://doi.org/10.5210/fm.v28i1.12903

Abstract

The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.

Conclusion

The proliferation of artificial intelligence (AI) technologies as behind the scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the base problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions in the information to analyse, what analytic process to use, and what the end product of analysis will be. Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool to examine the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today—the medical, social, and relational models—and two use cases in healthcare and government benefits to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks for harm.

Wednesday, December 21, 2022

Do You Really Want to Read What Your Doctor Writes About You?

Zoya Qureshi
The Atlantic
Originally posted 15 NOV 22

You may not be aware of this, but you can read everything that your doctor writes about you. Go to your patient portal online, click around until you land on notes from your past visits, and read away. This is a recent development, and a big one. Previously, you always had the right to request your medical record from your care providers—an often expensive and sometimes fruitless process—but in April 2021, a new federal rule went into effect, mandating that patients have the legal right to freely and electronically access most kinds of notes written about them by their doctors.

If you’ve never heard of “open notes,” as this new law is informally called, you’re not the only one. Doctors say that the majority of their patients have no clue. (This certainly has been the case for all of the friends and family I’ve asked.) If you do know about the law, you likely know a lot about it. That’s typically because you’re a doctor—one who now has to navigate a new era of transparency in medicine—or you’re someone who knows a doctor, or you’re a patient who has become intricately familiar with this country’s health system for one reason or another.

When open notes went into effect, the change was lauded by advocates as part of a greater push toward patient autonomy and away from medical gatekeeping. Previously, hospitals could charge up to hundreds of dollars to release records, if they released them at all. Many doctors, meanwhile, have been far from thrilled about open notes. They’ve argued that this rule will introduce more challenges than benefits for both patients and themselves. At worst, some have fretted, the law will damage people’s trust of doctors and make everyone’s lives worse.

A year and a half in, however, open notes don’t seem to have done too much of anything. So far, they have neither revolutionized patient care nor sunk America’s medical establishment. Instead, doctors say, open notes have barely shifted the clinical experience at all. Few individual practitioners have been advertising the change, and few patients are seeking it out on their own. We’ve been left with a partially implemented system and a big unresolved question: How much, really, should you want to read what your doctor is writing about you?

(cut)

Open notes are only part of this conversation. The new law also requires that test results be made immediately available to patients, meaning that patients might see their health information before their physician does. Although this is fine for the majority of tests, problems arise when results are harbingers of more complex, or just bad, news. Doctors I spoke with shared that some of their patients have suffered trauma from learning about their melanoma or pancreatic cancer or their child’s leukemia from an electronic message in the middle of the night, with no doctor to call and talk through the seriousness of that result with. This was the case for Tara Daniels, a digital-marketing consultant who lives near Boston. She’s had leukemia three times, and learned about the third via a late-night notification from her patient portal. Daniels appreciates the convenience of open notes, which help her keep track of her interactions with various doctors. But, she told me, when it comes to instant results, “I still hold a lot of resentment over the fact that I found out from test results, that I had to figure it out myself, before my doctor was able to tell me.”

Thursday, December 15, 2022

Dozens of telehealth startups sent sensitive health information to big tech companies

Katie Palmer with
Todd Feathers & Simon Fondrie-Teitler 
STAT NEWS
Originally posted 13 DEC 22

Here is an excerpt:

Health privacy experts and former regulators said sharing such sensitive medical information with the world’s largest advertising platforms threatens patient privacy and trust and could run afoul of unfair business practices laws. They also emphasized that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) were not built for telehealth. That leaves “ethical and moral gray areas” that allow for the legal sharing of health-related data, said Andrew Mahler, a former investigator at the U.S. Department of Health and Human Services’ Office for Civil Rights.

“I thought I was at this point hard to shock,” said Ari Friedman, an emergency medicine physician at the University of Pennsylvania who researches digital health privacy. “And I find this particularly shocking.”

In October and November, STAT and The Markup signed up for accounts and completed onboarding forms on 50 telehealth sites using a fictional identity with dummy email and social media accounts. To determine what data was being shared by the telehealth sites as users completed their forms, reporters examined the network traffic between trackers using Chrome DevTools, a tool built into Google’s Chrome browser.

On Workit’s site, for example, STAT and The Markup found that a piece of code Meta calls a pixel sent responses about self-harm, drug and alcohol use, and personal information — including first name, email address, and phone number — to Facebook.

The investigation found trackers collecting information on websites that sell everything from addiction treatments and antidepressants to pills for weight loss and migraines. Despite efforts to trace the data using the tech companies’ own transparency tools, STAT and The Markup couldn’t independently confirm how or whether Meta and the other tech companies used the data they collected.

After STAT and The Markup shared detailed findings with all 50 companies, Workit said it had changed its use of trackers. When reporters tested the website again on Dec. 7, they found no evidence of tech platform trackers during the company’s intake or checkout process.

“Workit Health takes the privacy of our members seriously,” Kali Lux, a spokesperson for the company, wrote in an email. “Out of an abundance of caution, we elected to adjust the usage of a number of pixels for now as we continue to evaluate the issue.”

Thursday, November 10, 2022

Institutional betrayal, institutional courage and the church

Susan Shaw
Baptist News Global
Originally published 26 JUL 22

Betrayal by trusted people, like pastors, teachers, supervisors and coaches, can inflict devastating consequences on victims. According to psychologists who study trauma, betrayal trauma affects the brain differently than any other trauma, particularly when the victim depends upon the perpetrator. Betrayal trauma threatens the very sense of self of the victim, who often cannot easily escape because of physical, psychological or spiritual dependence.

Institutional betrayal

When institutions don’t address perpetrators but rather meet survivors with denial, harassment and attack, they engage in institutional betrayal. Institutional betrayal occurs “when an institution causes harm to people who depend on it.”

Betrayal blindness describes ignoring, overlooking, “not-knowing” and forgetting betrayal. People, including victims themselves as well as perpetrators and witnesses, exhibit betrayal blindness to “preserve relationships, institutions and social systems upon which they depend.”

We don’t have to think very long to name a depressing list of instances of institutional betrayal by the church: segregation, clergy sex abuse, conversion therapy, exclusion of women from church leadership and ordained ministry, purity culture, the Magdalene laundries, witch hunts, Indian schools, on and on.

In recent days, we’ve seen institutional betrayal at work in megachurches like Hillsong and Highpoint, where popular pastors engaged in abusive conduct and their churches enabled them. The clergy abuse scandals in the Catholic Church and Southern Baptist Convention are textbook examples of institutional betrayal — institutions that chose to protect themselves rather than address the harm done to members.

Rather than challenging itself to create welcome, repair harm and do justice, the church has often chosen to preserve itself, to overlook harmful behavior by leaders, and to demonize and ostracize those who speak out against abuse.

Findley Edge, who taught religious education at Southern Baptist Theological Seminary, wrote about the process of institutionalization. Edge explained that people develop great and exciting ideas, and these ideas lead to innovations and movements. As time goes along, these innovations and movements develop structure to facilitate their continued growth. Eventually, the first generation that formed the great and exciting idea dies out, and soon people know only the institution, not the idea that sparked it. Their goal then becomes preservation of the institution, not the idea.

Uncritical dedication to the preservation of an institution can easily lead to institutional betrayal, especially when people depend upon organizations like the church, work or family.

Jennifer Freyd, the psychologist who coined “institutional betrayal,” says people protect institutions by participating in what she calls DARVO — Deny, Attack and Reverse Victim and Offender.

Wednesday, November 2, 2022

How the Classics Changed Research Ethics

Scott Sleek
Psychological Science
Originally posted 31 AUG 22

Here is an excerpt:

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inducing physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” she explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support? 

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009). 

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

Tuesday, November 1, 2022

LinkedIn ran undisclosed social experiments on 20 million users for years to study job success

Kathleen Wong
USAToday.com
Originally posted 25 SEPT 22

A new study analyzing data from more than 20 million LinkedIn users over five years reveals that our acquaintances may be more helpful than our close friends in finding a new job.

Researchers behind the study say the findings will improve job mobility on the platform, but since users were unaware that their data was being studied, some may find the lack of transparency concerning.

Published this month in Science, the study was conducted by researchers from LinkedIn, Harvard Business School and the Massachusetts Institute of Technology between 2015 and 2019. Researchers ran "multiple large-scale randomized experiments" on the platform's "People You May Know" algorithm, which suggests new connections to users. 

In a practice known as A/B testing, the experiments involved giving certain users an algorithm that offered different (closer or less close) contact recommendations and then analyzing the new jobs that came out of the resulting two billion new connections.
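
To make the mechanics concrete, here is a minimal sketch of the kind of comparison such an experiment enables: users randomly assigned to one of two recommendation variants, compared on how often a new connection is followed by a job change. Every count, name, and the "weak-tie versus strong-tie" framing below is an assumption made for illustration, not LinkedIn's actual analysis or data.

# A minimal sketch of comparing two recommendation variants with a
# two-proportion z-test. Every number here is made up for illustration.
from math import sqrt, erfc

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference in proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical assignment: variant A recommends weaker ties, variant B stronger ties.
# A "hit" means a new connection was followed by a job change at that contact's employer.
p_a, p_b, z, p_value = two_proportion_ztest(hits_a=5_200, n_a=400_000,
                                            hits_b=4_100, n_b=400_000)
print(f"weak-tie variant:   {p_a:.3%} of connections led to a job move")
print(f"strong-tie variant: {p_b:.3%} of connections led to a job move")
print(f"z = {z:.2f}, two-sided p = {p_value:.2e}")

A simple test like this only captures the underlying logic of comparing randomized variants; the published study's analysis was considerably more sophisticated.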

(cut)

A question of ethics

Privacy advocates told the New York Times on Sunday that some of the 20 million LinkedIn users may not be happy that their data was used without consent. That concern is part of a longstanding pattern of people's data being tracked and used by tech companies without their knowledge.

LinkedIn told the paper it "acted consistently" with its user agreement, privacy policy and member settings.

LinkedIn did not respond to an email sent by USA TODAY on Sunday. 

The paper reports that LinkedIn's privacy policy does state the company reserves the right to use its users' personal data.

That access can be used "to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity." 

It can also be deployed to research trends.

The company also said it used "noninvasive" techniques for the study's research. 

Study co-author Sinan Aral of MIT told USA TODAY that researchers "received no private or personally identifying data during the study and only made aggregate data available for replication purposes to ensure further privacy safeguards."