Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Risk Management.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.

Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Friday, December 15, 2023

Clinical documentation of patient identities in the electronic health record: Ethical principles to consider

Decker, S. E., et al. (2023). 
Psychological Services.
Advance online publication.


The American Psychological Association’s multicultural guidelines encourage psychologists to use language sensitive to the lived experiences of the individuals they serve. In organized care settings, psychologists have important decisions to make about the language they use in the electronic health record (EHR), which may be accessible to both the patient and other health care providers. Language about patient identities (including but not limited to race, ethnicity, gender, and sexual orientation) is especially important, but little guidance exists for psychologists on how and when to document these identities in the EHR. Moreover, organizational mandates, patient preferences, fluid identities, and shifting language may suggest different documentation approaches, posing ethical dilemmas for psychologists to navigate. In this article, we review the purposes of documentation in organized care settings, review how each of the five American Psychological Association Code of Ethics’ General Principles relates to identity language in EHR documentation, and propose a set of questions for psychologists to ask themselves and their patients when making choices about documenting identity variables in the EHR.

Impact Statement

Psychologists in organized care settings may face ethical dilemmas about what language to use when documenting patient identities (race, ethnicity, gender, sexual orientation, and so on) in the electronic health record. This article provides a framework for considering how to navigate these decisions based on the American Psychological Association Code of Ethics’ five General Principles. To guide psychologists in decision making, questions to ask self and patient are included, as well as suggestions for further study.

Here is my summary:

The authors emphasize the lack of clear guidelines for psychologists on how and when to document these identity variables in EHRs. They acknowledge the complexities arising from organizational mandates, patient preferences, fluid identities, and evolving language, which can lead to ethical dilemmas for psychologists.

To address these challenges, the article proposes a framework based on the five General Principles of the American Psychological Association (APA) Code of Ethics:
  1. Beneficence and Nonmaleficence: Psychologists must prioritize patient welfare and avoid harm. This includes weighing the potential benefits and harms of documenting identity variables and respecting patients' privacy and self-determination.
  2. Fidelity and Responsibility: Psychologists establish relationships of trust and uphold professional standards of conduct. This includes being accountable for how documented identity information is used within the care team, and staying abreast of evolving language and cultural norms.
  3. Integrity: Psychologists must maintain ethical standards and avoid misrepresenting or misusing patient identity information. This includes being transparent about the purposes of documentation and seeking patient consent when appropriate.
  4. Justice: Psychologists should promote fairness and guard against discriminatory practices. This includes being mindful of how identity documentation can affect patients' access to care and their overall well-being.
  5. Respect for People's Rights and Dignity: Psychologists must respect the inherent dignity and worth of all individuals, regardless of their identity. This includes avoiding discriminatory or stigmatizing language in EHR documentation.
To aid psychologists in making informed decisions about identity documentation, the authors propose a set of questions to consider:
  1. What is the purpose of documenting this identity variable?
  2. Is this information necessary for providing appropriate care or fulfilling legal/regulatory requirements?
  3. How will this information be used?
  4. What are the potential risks and benefits of documenting this information?
  5. What are the patient's preferences regarding the documentation of their identity?
By carefully considering these questions, psychologists can make ethically sound decisions that protect patient privacy and promote their well-being.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).


AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.

Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
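The risk-matrix idea can be made concrete with a minimal sketch. The risk categories, likelihood scores, and impact scores below are illustrative assumptions for demonstration, not figures taken from Sætra and Danaher:

```python
# Minimal risk-matrix sketch: score each risk by likelihood and impact,
# then sort to surface priorities. All entries and scores are illustrative
# assumptions, not taken from the paper.

risks = [
    # (name, likelihood 1-5, impact 1-5)
    ("algorithmic bias in deployed systems", 5, 3),
    ("large-scale disinformation", 4, 3),
    ("loss of control of advanced AI", 1, 5),
]

def priority(risk):
    name, likelihood, impact = risk
    return likelihood * impact  # simple product; other weightings are possible

for name, likelihood, impact in sorted(risks, key=priority, reverse=True):
    print(f"{name}: likelihood={likelihood}, impact={impact}, score={likelihood * impact}")
```

Even this toy version illustrates the authors' point: a single ranking can hold short-term risks (high likelihood, moderate impact) and long-term risks (low likelihood, extreme impact) in view at the same time, rather than forcing a choice between them.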

Monday, July 3, 2023

Is Avoiding Extinction from AI Really an Urgent Priority?

S. Lazar, J. Howard, & A. Narayanan
Originally posted 30 May 23

Here is an excerpt:

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

Monday, February 6, 2023

How Far Is Too Far? Crossing Boundaries in Therapeutic Relationships

Gloria Umali
American Professional Agency
Risk Management Report
January 2023

While there appears to be a clear understanding of what constitutes a boundary violation, defining the boundary itself remains challenging, as the line can be ambiguous, with often no right or wrong answer. The APA Ethical Principles and Code of Conduct (2017) (“Ethics Code”) provides guidance on boundary and relationship questions to steer psychologists toward an ethical course of action. The Ethics Code states that relationships which give rise to the potential for exploitation or harm to the client, or those that impair objectivity in judgment, must be avoided.

Boundary crossing, if allowed to progress, may hurt both the therapist and the client. The good news is that a consensus exists among professionals in the mental health community that some boundary crossings are unquestionably helpful and therapeutic to clients. However, with no straightforward formula to delineate helpful boundaries from harmful or unhealthy ones, the resulting ‘grey area’ creates challenges for most psychologists. Examining the general public’s perception and understanding of what an unhealthy boundary crossing looks like may provide additional insight into the right ethical course of action, including assessing the impact of boundary crossings on the therapeutic relationship on a case-by-case basis.



Attaining and maintaining healthy boundaries is a goal that all psychologists should work toward while providing supportive therapy services to clients. Strong and consistent boundaries build trust and make therapy safe for both the client and the therapist. Building healthy boundaries not only promotes compliance with the Ethics Code, but also lets clients know you have their best interest in mind. In sum, while concerns for a client’s wellbeing can cloud judgment, the use of both the risk considerations above and the APA Ethical Principles of Psychologists and Code of Conduct can assist in clarifying the boundary line and help provide a safe and therapeutic environment for all parties involved.

A good risk management reminder for psychologists.

Thursday, February 2, 2023

Yale Changes Mental Health Policies for Students in Crisis

William Wan
The Washington Post
Originally posted 18 JAN 23

Here are some excerpts:

In interviews with The Post, several students — who relied on Yale’s health insurance — described losing access to therapy and health care at the moment they needed it most.

The policy changes announced Wednesday reversed many of those practices.

By allowing students in mental crisis to take a leave of absence rather than withdraw, they will continue to have access to health insurance through Yale, university officials said. They can continue to work as a student employee, meet with career advisers, have access to campus and use library resources.

Finding a way to allow students to retain health insurance required overcoming significant logistical and financial hurdles, Lewis said, since New Haven and Connecticut are where most health providers in Yale’s system are located. But under the new policies, students on leave can switch to “affiliate coverage,” which would cover out-of-network care in other states.

In recent weeks, students and mental health advocates questioned why Yale would not allow students struggling with mental health issues to take fewer classes. The new policies will now allow students to drop their course load to as few as two classes under special circumstances. But students can do so only if they require significant time for treatment and if their petition is approved.

In the past, withdrawn students had to submit an application for reinstatement, which included letters of recommendation, and proof they had remained “constructively occupied” during their time away. Under new policies, students returning from a medical leave of absence will submit a “simplified reinstatement request” that includes a letter from their clinician and a personal statement explaining why they left, the treatment they received and why they feel ready to return.


In their updated online policies, the university made clear it still retained the right to impose an involuntary medical leave on students in cases of “a significant risk to the student’s health or safety, or to the health or safety of others.”

The changes were announced one day before Yale officials were scheduled to meet for settlement talks with the group of current and former students who filed a proposed class-action lawsuit against the university, demanding policy changes. 


In a statement, one of the plaintiffs — a nonprofit group called Elis for Rachael, led by former Yale students — said they are still pushing for more to be done: “We remain in negotiations. We thank Yale for this first step. But if Yale were to receive a grade for its work on mental health, it would be an incomplete at best.”

But after decades of mental health advocacy with little change at the university, some students said they were surprised at the changes Yale has made already.

“I really didn’t think it would happen during my time here,” said Akweley Mazarae Lartey, a senior at Yale who has advocated for mental health rights throughout his time at the school.

“I started thinking of all the situations that I and people I care for have ended up in and how much we could have used these policies sooner.”

Monday, March 8, 2021

Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Stix, C., Maas, M.M.
AI Ethics (2021). 


Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

From the Conclusion

AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present an experienced fragmentation in the responsible AI policy community, amongst clusters of scholars focusing on ‘near-term’ AI risks, and those focusing on ‘longer-term’ risks. This paper has sought to map the practical space for inter-community collaboration, with a view towards the practical development of AI policy.

As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an ‘incompletely theorized agreement’, for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework.

Saturday, July 18, 2020

Making Decisions in a COVID-19 World

Baruch Fischoff
JAMA. 2020;324(2):139-140.

Here are two excerpts:

Individuals must answer complementary questions. When is it safe enough to visit a physician’s office, get a dental check-up, shop for clothing, ride the bus, visit an aging or incarcerated relative, or go to the gym? What does it mean that some places are open but not others and in one state, but not in a bordering one? How do individuals make sense of conflicting advice about face masks, fomites, and foodstuffs?

Risk analysis translates technical knowledge into terms that people can use. Done to a publication standard, risk analysis requires advanced training and substantial resources. However, even back-of-the-envelope calculations can help individuals make sense of otherwise bewildering choices. Combined with behavioral research, risk analysis can help explain why reasonable people sometimes make different decisions. Why do some people wear face masks and crowd on the beach, while others do not? Do they perceive the risks differently or are they concerned about different risks?


Second, risk analyses are needed to apply that knowledge. However solid the science on basic physical, biological, and behavioral processes, applying it requires knowledge of specific settings. How do air and people circulate? What objects and surfaces do people and viruses touch? How sustainable are physical barriers and behavioral practices? Risk analysts derive such estimates by consulting with scientists who know the processes and decision makers who know the settings.3 Boundary organizations are needed to bring the relevant parties together in each sector (medicine, sports, schools, movie production, etc) to produce estimates informed by the science and by people who know how that sector works.


Tuesday, April 28, 2020

What needs to happen before your boss can make you return to work

Mark Kaufman
Originally posted 24 April 20

Here is an excerpt:

But, there is a way for tens of millions of Americans to return to workplaces while significantly limiting how many people infect one another. It will require extraordinary efforts on the part of both employers and governments. This will feel weird, at first: Imagine regularly having your temperature taken at work, routinely getting tested for an infection or immunity, mandatory handwashing breaks, and perhaps even wearing a mask.

Yet, these are exceptional times. So restarting the economy and returning to workplace normalcy will require unparalleled efforts.

"This is truly unprecedented," said Christopher Hayes, a labor historian at the Rutgers School of Management and Labor Relations.

"This is like the 1918 flu and the Great Depression at the same time," Hayes said.

Yet unlike previous recessions and depressions over the last 100 years, most recently the Great Recession of 2008-2009, American workers must now ask themselves an unsettling question: "People now have to worry, ‘Is it safe to go to this job?’" said Hayes.

Right now, many employers aren't nearly prepared to tell workers in the U.S. to return to work and office spaces. To avoid infection, "the only tools you’ve got in your toolbox are the simple but hard-to-sustain public health tools like testing, contact tracing, and social distancing," explained Michael Gusmano, a health policy expert at the Rutgers School of Public Health.

"We’re not anywhere near a situation where you could claim that you can, with any credibility, send people back en masse now," Gusmano said.


Thursday, February 20, 2020

Sharing Patient Data Without Exploiting Patients

McCoy MS, Joffe S, Emanuel EJ.
JAMA. Published online January 16, 2020.

Here is an excerpt:

The Risks of Data Sharing

When health systems share patient data, the primary risk to patients is the exposure of their personal health information, which can result in a range of harms, including embarrassment, stigma, and discrimination. Such exposure is most obvious when health systems fail to remove identifying information before sharing data, as is alleged in the lawsuit against Google and the University of Chicago. But even when shared data are fully deidentified in accordance with the requirements of the Health Insurance Portability and Accountability Act, reidentification is possible, especially when patient data are linked with other data sets. Indeed, even new data privacy laws such as Europe's General Data Protection Regulation and California's Consumer Privacy Act do not eliminate reidentification risk.

Companies that acquire patient data also accept risk by investing in research and development that may not result in marketable products. This risk is less ethically concerning, however, than that borne by patients. While companies usually can abandon unpromising ventures, patients’ lack of control over data-sharing arrangements makes them vulnerable to exploitation. Patients lack control, first, because they may have no option other than to seek care in a health system that plans to share their data. Second, even if patients are able to authorize sharing of their data, they are rarely given the information and opportunity to ask questions needed to give meaningful informed consent to future uses of their data.

Thus, for the foreseeable future, data sharing will entail ethically concerning risks to patients whose data are shared. But whether these exchanges are exploitative depends on how much benefit patients receive from data sharing.


Sunday, September 15, 2019

To Study the Brain, a Doctor Puts Himself Under the Knife

Adam Piore
MIT Technology Review
Originally published November 9, 2015

Here are two excerpts:

Kennedy became convinced that the way to take his research to the next level was to find a volunteer who could still speak. For almost a year he searched for a volunteer with ALS who still retained some vocal abilities, hoping to take the patient offshore for surgery. “I couldn’t get one. So after much thinking and pondering I decided to do it on myself,” he says. “I tried to talk myself out of it for years.”

The surgery took place in June 2014 at a 13-bed Belize City hospital a thousand miles south of his Georgia-based neurology practice and also far from the reach of the FDA. Prior to boarding his flight, Kennedy did all he could to prepare. At his small company, Neural Signals, he fabricated the electrodes the neurosurgeon would implant into his motor cortex—even chose the spot where he wanted them buried. He put aside enough money to support himself for a few months if the surgery went wrong. He had made sure his living will was in order and that his older son knew where he was.


To some researchers, Kennedy’s decisions could be seen as unwise, even unethical. Yet there are cases where self-experiments have paid off. In 1984, an Australian doctor named Barry Marshall drank a beaker filled with bacteria in order to prove they caused stomach ulcers. He later won the Nobel Prize. “There’s been a long tradition of medical scientists experimenting on themselves, sometimes with good results and sometimes without such good results,” says Jonathan Wolpaw, a brain-computer interface researcher at the Wadsworth Center in New York. “It’s in that tradition. That’s probably all I should say without more information.”


Monday, July 8, 2019

Prediction Models for Suicide Attempts and Deaths: A Systematic Review and Simulation

Bradley Belsher, Derek Smolenski, Larry Pruitt, and others
JAMA Psychiatry. 2019;76(6):642-651.

Importance
Suicide prediction models have the potential to improve the identification of patients at heightened suicide risk by using predictive algorithms on large-scale data sources. Suicide prediction models are being developed for use across enterprise-level health care systems including the US Department of Defense, US Department of Veterans Affairs, and Kaiser Permanente.

Objective
To evaluate the diagnostic accuracy of suicide prediction models in predicting suicide and suicide attempts and to simulate the effects of implementing suicide prediction models using population-level estimates of suicide rates.

Evidence Review
A systematic literature search was conducted in MEDLINE, PsycINFO, Embase, and the Cochrane Library to identify research evaluating the predictive accuracy of suicide prediction models in identifying patients at high risk for a suicide attempt or death by suicide. Each database was searched from inception to August 21, 2018. The search strategy included search terms for suicidal behavior, risk prediction, and predictive modeling. Reference lists of included studies were also screened. Two reviewers independently screened and evaluated eligible studies.

Findings
From a total of 7306 abstracts reviewed, 17 cohort studies met the inclusion criteria, representing 64 unique prediction models across 5 countries with more than 14 million participants. The research quality of the included studies was generally high. Global classification accuracy was good (≥0.80 in most models), while the predictive validity associated with a positive result for suicide mortality was extremely low (≤0.01 in most models). Simulations of the results suggest very low positive predictive values across a variety of population assessment characteristics.

Conclusions and Relevance
To date, suicide prediction models produce accurate overall classification models, but their accuracy of predicting a future event is near 0. Several critical concerns remain unaddressed, precluding their readiness for clinical applications across health systems.
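The gap between good global classification accuracy and near-zero predictive value is a base-rate effect: when the outcome is very rare, even a model with high sensitivity and specificity generates far more false positives than true positives. A back-of-the-envelope calculation (with illustrative numbers, not figures from the review) makes this concrete:

```python
# Illustrative base-rate arithmetic: a model with seemingly good sensitivity
# and specificity still yields a tiny positive predictive value (PPV) for a
# rare outcome. All numbers below are assumptions for illustration, not
# results from the systematic review.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume suicide mortality affects roughly 1 in 10,000 patients per year.
print(ppv(sensitivity=0.80, specificity=0.80, prevalence=0.0001))
# PPV on the order of 0.0004: thousands of false alarms per true case.
```

This is why the authors can report "good" classification (≥0.80) alongside predictive validity near zero (≤0.01): at population suicide rates, almost every flagged patient is a false positive, which is the core obstacle to clinical deployment.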

Friday, June 21, 2019

Tech, Data And The New Democracy Of Ethics

Neil Lustig
Originally posted June 10, 2019

As recently as 15 years ago, consumers had no visibility into whether the brands they shopped used overseas slave labor or if multinationals were bribing public officials to give them unfair advantages internationally. Executives could engage in whatever type of misconduct they wanted to behind closed doors, and there was no early warning system for investors, board members and employees, who were directly impacted by the consequences of their behavior.

Now, thanks to globalization, social media, big data, whistleblowers and corporate compliance initiatives, we have more visibility than ever into the organizations and people that affect our lives and our economy.

What we’ve learned from this surge in transparency is that sometimes companies mess up even when they’re not trying to. There’s a distinct difference between companies that deliberately engage in unethical practices and those that get caught up in them due to loose policies, inadequate self-policing or a few bad actors that misrepresent the ethics of the rest of the organization. The primary difference between these two types of companies is how fast they’re able to act -- and if they act at all.

Fortunately, just as technology and data can introduce unprecedented visibility into organizations’ unethical practices, they can also equip organizations with ways of protecting themselves from internal and external risks. As CEO of a compliance management platform, I believe there are three things that must be in place for organizations to stay above board in a rising democracy of ethics.


Wednesday, June 19, 2019

The Ethics of 'Biohacking' and Digital Health Data

Sy Mukherjee
Originally posted June 6, 2019

Here is an excerpt:

Should personal health data ownership be a human right? Do digital health program participants deserve a cut of the profits from the information they provide to genomics companies? How do we get consumers to actually care about the privacy and ethics implications of this new digital health age? Can technology help (and, more importantly, should it have a responsibility to) bridge the persistent gap in representation for women in clinical trials? And how do you design a fair system of data distribution in an age of a la carte genomic editing, leveraged by large corporations, and seemingly ubiquitous data mining from consumers?

Ok, so we didn’t exactly come to definitive conclusions about all that in our limited time. But I look forward to sharing some of our panelists’ insights in the coming days. And I’ll note that, while some of the conversation may have sounded like dystopic cynicism, there was a general consensus that collective regulatory changes, new business models, and a culture of concern for data privacy could help realize the potential of digital health while mitigating its potential problems.

The information and interview are here.

Friday, June 14, 2019

The Ethics of Treating Loved Ones

Christopher Cheney
Originally posted May 19, 2019

When treating family members, friends, colleagues, or themselves, ER physicians face ethical, professional, patient welfare, and liability concerns, a recent research article found.

Similar to situations arising in the treatment of VIP patients, ER physicians treating loved ones or close associates may vary their customary medical care from the standard treatment and inadvertently produce harm rather than benefit.

"Despite being common, this practice raises ethical concerns and concern for the welfare of both the patient and the physician," the authors of the recent article wrote in the American Journal of Emergency Medicine.

There are several liability concerns for clinicians, the lead author explained.

"Doctors would be held to the same standard of care as for other patients, and if care is violated and leads to damages, they could be liable. Intuitively, family and friends might be less likely to sue but that is not true of subordinates. In addition, as we state in the paper, for most ED physicians, practice outside of the home institution is not a covered event by the malpractice insurer," said Joel Geiderman, MD, professor and co-chairman of emergency medicine, Department of Emergency Medicine, Cedars-Sinai Medical Center, Los Angeles.

The info is here.

Tuesday, December 4, 2018

Document ‘informed refusal’ just as you would informed consent

James Scibilia
AAP News
Originally posted October 20, 2018

Here is an excerpt:

The requirements of informed refusal are the same as informed consent. Providers must explain:

  • the proposed treatment or testing;
  • the risks and benefits of refusal;
  • anticipated outcome with and without treatment; and
  • alternative therapies, if available.

Documentation of this discussion, including all four components, in the medical record is critical to mounting a successful defense from a claim that you failed to warn about the consequences of refusing care.

Since state laws vary, it is good practice to check with your malpractice carrier about preferred risk management documentation. Generally, the facts of these discussions should be included and signed by the caretaker. This conversation and documentation should not be delegated to other members of the health care team. At least one state has affirmed through a Supreme Court decision that informed consent must be obtained by the provider performing the procedure and not another team member; it is likely the concept of informed refusal would bear the same requirements.

The info is here.

Sunday, November 11, 2018

Nine risk management lessons for practitioners.

Taube, Daniel O., Scroppo, Joe, & Zelechoski, Amanda D.
Practice Innovations, October 4, 2018


Risk management is an essential skill for professionals and is important throughout the course of their careers. Effective risk management blends a utilitarian focus on the potential costs and benefits of particular courses of action, with a solid foundation in ethical principles. Awareness of particularly risk-laden circumstances and practical strategies can promote safer and more effective practice. This article reviews nine situations and their associated lessons, illustrated by case examples. These situations emerged from our experience as risk management consultants who have listened to and assisted many practitioners in addressing the challenges they face on a day-to-day basis. The lessons include a focus on obtaining consent, setting boundaries, flexibility, attention to clinician affect, differentiating the clinician’s own values and needs from those of the client, awareness of the limits of competence, maintaining adequate legal knowledge, keeping good records, and routine consultation. We highlight issues and approaches to consider in these types of cases that minimize risks of adverse outcomes and enhance good practice.

The info is here.

Here is a portion of the article:

Being aware of basic legal parameters can help clinicians to avoid making errors in this complex arena. Yet clinicians are not usually lawyers and tend to have only limited legal knowledge. This gives rise to a risk of assuming more mastery than one may have.

Indeed, research suggests that a range of professionals, including psychotherapists, overestimate their capabilities and competencies, even in areas in which they have received substantial training (Creed, Wolk, Feinberg, Evans, & Beck, 2016; Lipsett, Harris, & Downing, 2011; Mathieson, Barnfield, & Beaumont, 2009; Walfish, McAlister, O’Donnell, & Lambert, 2012).

Monday, November 5, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence

Nikita Malik
Originally posted October 4, 2018

Here is an excerpt:

The second concern is on regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments – based on factors such as age, criminal record, and employment – will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.

Risk assessments – which have existed for a long time – have been used in other areas such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children are often overlooked, at grave risk to themselves. Human errors in the casework of the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his parents and prompted a serious inquest into the shortcomings of the County Department of Children and Family Services in Los Angeles. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.

The info is here.

Tuesday, October 23, 2018

Why you need a code of ethics (and how to build one that sticks)

Josh Fruhlinger
Originally posted September 17, 2018

Here is an excerpt:

Most of us probably think of ourselves as ethical people. But within organizations built to maximize profits, many seemingly inevitably drift towards more dubious behavior, especially when it comes to user personal data. "More companies than not are collecting data just for the sake of collecting data, without having any reason as to why or what to do with it," says Philip Jones, a GDPR regulatory compliance expert at Capgemini. "Although this is an expensive and unethical approach, most businesses don’t think twice about it. I view this approach as one of the highest risks to companies today, because they have no clue where, how long, or how accurate much of their private data is on consumers."

This is the sort of organizational ethical drift that can arise in the absence of clear ethical guidelines—and it's the sort of drift that laws like the GDPR, the EU's stringent new framework for how companies must handle customer data, are meant to counter. And the temptation is certainly there to simply use such regulations as a de facto ethics policy. "The GDPR and laws like it make the process of creating a digital ethics policy much easier than it once was," says Ian McClarty, President and CEO of PhoenixNAP.  "Anything and everything that an organization does with personal data obtained from an individual must come with the explicit consent of that data owner. It’s very hard to subvert digital ethics when one’s ability to use personal data is curtailed in such a draconian fashion."

But companies cannot simply outsource their ethics codes to regulators and think that hewing to the letter of the law will keep their reputations intact. "New possibilities emerge so fast," says Mads Hennelund, a consultant at Nextwork, "that companies will be forced by market competition to apply new technologies before any regulator has been able to grasp them and impose meaningful rules or standards." He also notes that, if different silos within a company are left to their own devices and subject to their own particular forms of regulation and technology adoption, "the organization as a whole becomes ethically fragmented, consisting of multiple ethically autonomous departments."

The info is here.

Friday, October 19, 2018

Risk Management Considerations When Treating Violent Patients

Kristen Lambert
Psychiatric News
Originally posted September 4, 2018

Here is an excerpt:

When a patient has a history of expressing homicidal ideation or has been violent previously, you should document, in every subsequent session, whether the patient admits or denies homicidal ideation. When the patient expresses homicidal ideation, document what he/she expressed and the steps you did or did not take in response and why. Should an incident occur, your documentation will play an important role in defending your actions.

Despite taking precautions, your patient may still commit a violent act. The following are some strategies that may minimize your risk.

  • Conduct complete, timely, and thorough risk assessments.
  • Document, including the reasons for taking and not taking certain actions.
  • Understand your state’s law on duty to warn. Be aware of the language in the law on whether you have a mandatory, permissive, or no duty to warn/protect.
  • Understand your state’s laws regarding civil commitment.
  • Understand your state’s laws regarding disclosure of confidential information and when you can do so.
  • Understand your state’s laws regarding discussing firearms ownership and/or possession with patients.
  • If you have questions, consult an attorney or risk management professional.