Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Risk Management.

Thursday, October 11, 2018

Does your nonprofit have a code of ethics that works?

Mary Beth West
USA Today Network - Tennessee
Originally posted September 10, 2018

Each year, the Public Relations Society of America recognizes September as ethics month.

Our present #FakeNews / #MeToo era offers a daily diet of news coverage and exposés about ethics shortfalls in business, media and government sectors.

One arena sometimes overlooked is that of nonprofit organizations.

I am currently involved in a national ethics-driven bylaw reform movement for PRSA itself, which is a 501(c)(6) nonprofit with 21,000-plus members globally, in the “business league” category.

While PRSA’s code of ethics has stood for decades as an industry standard for communications ethics – promoting members’ adherence to only truthful and honest practices – PRSA’s code is not enforceable.

Challenges with unenforced ethics codes

Unenforced codes of ethics are commonplace in the nonprofit arena, particularly for volunteer, member-driven organizations.

Nearly two decades ago, PRSA converted from its enforced code of ethics to one that is unenforced by design.

The reason: enforcing code compliance and the adjudication processes inherent to it were a pain in the neck (and a pain in the wallet, due to litigation risks).

The info is here.

Friday, September 14, 2018

Law, Ethics, and Conversations between Physicians and Patients about Firearms in the Home

Alexander D. McCourt, and Jon S. Vernick
AMA J Ethics. 2018;20(1):69-76.

Abstract

Firearms in the home pose risks to household members, including homicide, suicide, and unintentional death. Medical societies urge clinicians to counsel patients about those risks as part of sound medical practice. Depending on the circumstances, clinicians might recommend safe firearm storage, temporary removal of the firearm from the home, or other measures. Certain state firearm laws, however, might present legal and ethical challenges for physicians who counsel patients about guns in the home. Specifically, we discuss state background check laws for gun transfers, safe gun storage laws, and laws forbidding physicians from engaging in certain firearm-related conversations with their patients. Medical professionals should be aware of these and other state gun laws but should still offer anticipatory guidance when clinically appropriate.

The info is here.

Friday, August 31, 2018

What you may not know about online therapy companies

Pauline Wallin
The Practice Institute
Originally posted August 19, 2018

Here is an excerpt:

In summary, while platforms such as Talkspace and BetterHelp provide you with ready access to working with clients online, they also limit your control over your relationships with your clients and over how you work with them.

Before signing on with such platforms, read the terms of service thoroughly. Search online for lawsuits against the company you're considering working with, and read reviews that are not on the company's website.

Also, talk with the risk management consultant provided by your malpractice insurer, who can alert you to legal or ethical liabilities. For maximum legal protection, hire an attorney who specializes in mental health services to review the contract that you will be signing. The contract will most likely be geared to protecting the company, not you or your license.

The info is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100 % safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
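
Point (iii), decision-making under risk and uncertainty, is the easiest disanalogy to make concrete: classic trolley cases stipulate certain outcomes, while a real accident-algorithm must weigh probabilities. Here is a minimal illustrative sketch in Python (the maneuvers, probabilities, and harm scores are invented for illustration and do not come from the paper):

def expected_harm(outcomes):
    # Sum of probability-weighted harms for one candidate maneuver.
    return sum(p * harm for p, harm in outcomes)

def choose_maneuver(options):
    # Pick the maneuver whose expected harm is lowest.
    return min(options, key=lambda name: expected_harm(options[name]))

# Hypothetical (probability, harm) pairs for each candidate maneuver.
options = {
    "brake_straight": [(0.7, 2.0), (0.3, 8.0)],   # expected harm: 3.8
    "swerve_left":    [(0.5, 0.0), (0.5, 10.0)],  # expected harm: 5.0
}

print(choose_maneuver(options))  # -> brake_straight

A trolley-style case is the special case where every probability is 1.0; the paper's point is that real programming decisions rarely enjoy that certainty.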

The article is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators proceeded to label this tendency as a concerning and alarming display of overtrust of robots—an overtrust that applied even to robots that showed indications of not being trustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.

Friday, January 26, 2018

Power Causes Brain Damage

Jerry Useem
The Atlantic
Originally published July 2017

Here is an excerpt:

This is a depressing finding. Knowledge is supposed to be power. But what good is knowing that power deprives you of knowledge?

The sunniest possible spin, it seems, is that these changes are only sometimes harmful. Power, the research says, primes our brain to screen out peripheral information. In most situations, this provides a helpful efficiency boost. In social ones, it has the unfortunate side effect of making us more obtuse. Even that is not necessarily bad for the prospects of the powerful, or the groups they lead. As Susan Fiske, a Princeton psychology professor, has persuasively argued, power lessens the need for a nuanced read of people, since it gives us command of resources we once had to cajole from others. But of course, in a modern organization, the maintenance of that command relies on some level of organizational support. And the sheer number of examples of executive hubris that bristle from the headlines suggests that many leaders cross the line into counterproductive folly.

Less able to make out people’s individuating traits, they rely more heavily on stereotype. And the less they’re able to see, other research suggests, the more they rely on a personal “vision” for navigation. John Stumpf saw a Wells Fargo where every customer had eight separate accounts. (As he’d often noted to employees, eight rhymes with great.) “Cross-selling,” he told Congress, “is shorthand for deepening relationships.”

The article is here.

Thursday, October 26, 2017

After medical error, apology goes a long way

Science Daily
Originally posted October 2, 2017

Summary: Discussing hospital errors with patients leads to better patient safety without spurring a barrage of malpractice claims, new research shows.

In patient injury cases, revealing facts, offering apology does not lead to increase in lawsuits, study finds

Sometimes a straightforward explanation and an apology for what went wrong in the hospital go a long way toward preventing medical malpractice litigation and improving patient safety.

That's what Michelle Mello, JD, PhD, and her colleagues found in a study to be published Oct. 2 in Health Affairs.

Mello, a professor of health research and policy and of law at Stanford University, is the lead author of the study. The senior author is Kenneth Sands, former senior vice president at Beth Israel Deaconess Medical Center.

Medical injuries are a leading cause of death in the United States. The lawsuits they spawn are also a major concern for physicians and health care facilities. So, hospital risk managers and liability insurers are experimenting with new approaches to resolving these disputes that channel them away from litigation.

The focus is on meeting patients' needs without requiring them to sue. Hospitals disclose accidents to patients, investigate and explain why they occurred, apologize and, in cases in which the harm was due to a medical error, offer compensation and reassurance that steps will be taken to keep it from happening again.

The article is here.

The target article is here.

Monday, October 16, 2017

No Child Left Alone: Moral Judgments about Parents Affect Estimates of Risk to Children

Thomas, A. J., Stanford, P. K., & Sarnecka, B. W. (2016).
Collabra, 2(1), 10.

Abstract

In recent decades, Americans have adopted a parenting norm in which every child is expected to be under constant direct adult supervision. Parents who violate this norm by allowing their children to be alone, even for short periods of time, often face harsh criticism and even legal action. This is true despite the fact that children are much more likely to be hurt, for example, in car accidents than while left unsupervised. Why then do bystanders call 911 when they see children playing in parks, but not when they see children riding in cars? Here, we present results from six studies indicating that moral judgments play a role: The less morally acceptable a parent’s reason for leaving a child alone, the more danger people think the child is in. This suggests that people’s estimates of danger to unsupervised children are affected by an intuition that parents who leave their children alone have done something morally wrong.

Here is part of the discussion:

The most important conclusion we draw from this set of experiments is the following: People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous. That is, people overestimate the actual danger to children who are left alone by their parents, in order to better support or justify their moral condemnation of parents who do so.

This brings us back to our opening question: How can we explain the recent hysteria about unsupervised children, often wildly out of proportion to the actual risks posed by the situation? Our findings suggest that once a moralized norm of ‘No child left alone’ was generated, people began to feel morally outraged by parents who violated that norm. The need (or opportunity) to better support or justify this outrage then elevated people’s estimates of the actual dangers faced by children. These elevated risk estimates, in turn, may have led to even stronger moral condemnation of parents and so on, in a self-reinforcing feedback loop.
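
The self-reinforcing loop the authors describe is easy to caricature numerically. In this toy Python sketch (the update rule and coefficients are invented for illustration; the paper fits no such model), outrage inflates the perceived risk each round, and the inflated risk estimate feeds back into stronger condemnation:

def run_loop(risk=0.2, outrage=0.2, coupling=0.5, rounds=6):
    # Outrage inflates perceived risk; higher perceived risk fuels outrage.
    for i in range(rounds):
        risk = min(1.0, risk + coupling * outrage * (1.0 - risk))
        outrage = min(1.0, outrage + coupling * risk * (1.0 - outrage))
        print(f"round {i + 1}: perceived risk={risk:.2f}, outrage={outrage:.2f}")

run_loop()  # both values ratchet upward toward saturation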

The article is here.

Wednesday, September 27, 2017

New York’s Highest Court Rules Against Physician-Assisted Suicide

Jacob Gershman
The Wall Street Journal
Originally posted September 7, 2017

New York’s highest court on Thursday ruled that physician-assisted suicide isn’t a fundamental right, rejecting a legal effort by terminally ill patients to decriminalize doctor-assisted suicide through the courts.

The state Court of Appeals, though, said it wouldn’t stand in the way if New York’s legislature were to decide that assisted suicide could be “effectively regulated” and pass legislation allowing terminally ill and suffering patients to kill themselves.

Physician-assisted suicide is illegal in most of the country. But advocates who support loosening the laws have been making gains. Doctor-assisted dying has been legalized in several states, most recently in California and Colorado, the former by legislation and the latter by a ballot measure approved by voters in November. Oregon, Vermont and Washington have enacted similar “end-of-life” measures. Washington, D.C., also passed an “assisted-dying” law last year.

Montana’s highest court in 2009 ruled that physicians who provide “aid in dying” are shielded from liability.

No state court has recognized “aid in dying” as a fundamental right.

The article is here.

Friday, June 23, 2017

Moral Injury, Posttraumatic Stress Disorder, and Suicidal Behavior Among National Guard Personnel.

Craig Bryan, Anna Belle Bryan, Erika Roberge, Feea Leifker, & David Rozek
Psychological Trauma: Theory, Research, Practice, and Policy 

Abstract

Objective: To empirically examine similarities and differences in the signs and symptoms of posttraumatic stress disorder (PTSD) and moral injury and to determine if the combination of these 2 constructs is associated with increased risk for suicidal thoughts and behaviors in a sample of U.S. National Guard personnel. Method: 930 National Guard personnel from the states of Utah and Idaho completed an anonymous online survey. Exploratory structural equation modeling (ESEM) was used to test a measurement model of PTSD and moral injury. A structural model was next constructed to test the interactive effects of PTSD and moral injury on history of suicide ideation and attempts. Results: Results of the ESEM confirmed that PTSD and moral injury were distinct constructs characterized by unique symptoms, although depressed mood loaded onto both PTSD and moral injury. The interaction of PTSD and moral injury was associated with significantly increased risk for suicide ideation and attempts. A sensitivity analysis indicated the interaction remained a statistically significant predictor of suicide attempt even among the subgroup of participants with a history of suicide ideation. Conclusion: PTSD and moral injury represent separate constructs with unique signs and symptoms. The combination of PTSD and moral injury confers increased risk for suicidal thoughts and behaviors, and differentiates between military personnel who have attempted suicide and those who have only thought about suicide.
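
For readers curious about the statistics, the interaction test at the heart of the abstract can be approximated with ordinary logistic regression rather than the authors' ESEM. The Python sketch below is a simplified stand-in, not the study's analysis; the file name and the column names (ptsd, moral_injury, ideation) are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant, with continuous PTSD and
# moral-injury scale scores and a binary suicide-ideation indicator.
df = pd.read_csv("guard_survey.csv")  # hypothetical data file

# The '*' expands to both main effects plus the ptsd:moral_injury
# interaction, which is the term of interest here.
model = smf.logit("ideation ~ ptsd * moral_injury", data=df).fit()
print(model.summary())

A significant positive coefficient on the interaction term would correspond to the abstract's claim that the combination of PTSD and moral injury confers risk beyond either construct alone.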

The article is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that their work need not receive as much scrutiny as, say, a medical researcher’s.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Monday, April 10, 2017

Citigroup Has an On-call Ethicist to Help It Solve Moral Issues

Alana Abramson
Fortune Magazine
Originally posted March 17, 2017

It turns out that Citigroup has an on-call ethicist to handle issues around the intersection of banking, finance, and morality.

The bank has worked with Princeton University Professor David Miller for the past three years, according to the Wall Street Journal. His role includes providing advice to top executives and reviewing topics and projects they have concerns about.

Miller was brought on, according to the Journal, by Citigroup CEO Michael Corbat, who felt the role was necessary after learning about employees' hesitation to voice concerns about wrongdoing and about public perceptions of banks.

The article is here.

Friday, March 17, 2017

Professional Liability for Forensic Activities: Liability Without a Treatment Relationship

Donna Vanderpool
Innov Clin Neurosci. 2016 Jul-Aug; 13(7-8): 41–44.

This ongoing column is dedicated to providing information to our readers on managing legal risks associated with medical practice. We invite questions from our readers. The answers are provided by PRMS, Inc. (www.prms.com), a manager of medical professional liability insurance programs with services that include risk management consultation, education and onsite risk management audits, and other resources to healthcare providers to help improve patient outcomes and reduce professional liability risk. The answers published in this column represent those of only one risk management consulting company. Other risk management consulting companies or insurance carriers may provide different advice, and readers should take this into consideration. The information in this column does not constitute legal advice. For legal advice, contact your personal attorney. Note: The information and recommendations in this article are applicable to physicians and other healthcare professionals so “clinician” is used to indicate all treatment team members.

Question:

In my mental health practice, I am doing more and more forensic activities, such as IMEs and expert testimony. Since I am not treating the evaluees, there should be no professional liability risk, right?

The answer and column is here.

Wednesday, February 8, 2017

Medical culture encourages doctors to avoid admitting mistakes

By Lawrence Schlachter
STAT News
Originally published on January 13, 2017

Here are two excerpts:

In reality, the factor that most influences doctors to hide or disclose medical errors should be clear to anyone who has spent much time in the profession: The culture of medicine frowns on admitting mistakes, usually on the pretense of fear of malpractice lawsuits.

But what’s really at risk are doctors’ egos and the preservation of a system that lets physicians avoid accountability by ignoring problems or shifting blame to “the system” or any culprit other than themselves.

(cut)

What is a patient to do in this environment? The first thing is to be aware of your own predisposition to take everything your doctor says at face value. Listen closely and you may hear cause for more intense questioning.

You will likely never hear the terms negligence, error, mistake, or injury in a hospital. Instead, these harsh but truthful words and phrases are replaced with softer ones like accident, adverse event, or unfortunate outcome. If you hear any of these euphemisms, ask more questions or seek another opinion from a different doctor, preferably at a different facility.

Most doctors would never tell a flagrant lie. But in my experience as a neurosurgeon and as an attorney, too many of them resort to half-truths and glaring omissions when it comes to errors. Beware of passive language like “the patient experienced bleeding” rather than “I made a bad cut”; attributing an error to random chance or a nameless, faceless system; or trivialization of the consequences of the error by claiming something was “a blessing in disguise.”

The article is here.

Saturday, November 19, 2016

Risk Management and You: 9 Most Frequent Violations for Psychologists

Ken Pope and Melba Vasquez
Ethics in Psychotherapy and Counseling: Practical Guide (5th edition)
(2016)

For U.S. and Canadian psychologists, the 9 most frequent causes among the 5,582 disciplinary actions over the years were (in descending order of frequency):

  1. unprofessional conduct, 
  2. sexual misconduct, 
  3. negligence, 
  4. nonsexual dual relationships, 
  5. conviction of a crime, 
  6. failure to maintain adequate or accurate records, 
  7. failure to comply with continuing education or competency requirements, 
  8. inadequate or improper supervision or delegation, and 
  9. substandard or inadequate care. 

Thursday, June 16, 2016

The Corporate Joust with Morality

by Caroline Kaeb and David Scheffer
Opinio Juris
Originally posted June 6, 2016

Here is the end:

This duel between corporate responsibility and corporate deceit and culpability is no small matter. The fate of human society and of the earth increasingly falls on the shoulders of corporate executives, who either embrace society’s challenges and, if necessary, counterattack for worthy aims, or succumb to dangerous gambits for inflated profits, whatever the impact on society.

The fulcrum of risk management must be forged with sophisticated strategies that propel corporations into the great policy debates of our times in order to promote social responsibility and thus strengthen the long-term viability of corporate operations.  We believe that task must begin in business schools and in corporate boardrooms where decisions that shape the world are made every day.

The article is here.

Tuesday, January 19, 2016

How Our Self-Control Affects the Way We See Risk

Research shows that people with low self-control tend to underplay the negative consequences of their decisions.

by Kerry A. Dolan
Stanford Business
November 25, 2015

Here is an excerpt:

Academic research has long shown that people with low self-control engage in riskier behaviors than do those with higher self-control. But what is the connection between self-control and risk? Are people with low self-control simply unable to stop themselves from risky behavior?

Not exactly. In a new study, researchers from Stanford and the University of Hong Kong found that people with low self-control look at consequences differently than those with higher self-control.

The article is here.

Wednesday, December 23, 2015

Is It Safe For Medical Residents To Work 30-Hour Shifts?

By Rob Stein
NPR
Originally published December 7, 2015

Since 2003, strict rules have limited how long medical residents can work without a break. The rules are supposed to minimize the risk that these doctors-in-training will make mistakes that threaten patients' safety because of fatigue.

But are these rules really the best for new doctors and their patients? There's been intense debate over that and, some say, little data to resolve the question.

So a group of researchers decided to follow thousands of medical residents at dozens of hospitals around the country.

The study compares the current rules, which limit first-year residents to working no more than 16 hours without a break, with a more flexible schedule that could allow the young doctors to work up to 30 hours.

Researchers will examine whether more mistakes happen on one schedule or the other and whether the residents learn more one way or the other. The year-long study started in July.
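
At bottom, the comparison the study will make is between two error rates. A hedged Python sketch of that final step (the counts are invented placeholders, and the study's actual analysis will surely be more elaborate):

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical error counts per 10,000 patient encounters under the
# 16-hour-cap schedule versus the flexible up-to-30-hour schedule.
errors = [120, 135]
encounters = [10000, 10000]

stat, pval = proportions_ztest(count=errors, nobs=encounters)
print(f"z = {stat:.2f}, p = {pval:.3f}")  # is the difference beyond chance?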

The entire article is here.

Friday, December 18, 2015

Physician Burnout Climbs 10% in 3 Years, Hits 55%

By Diana Swift
Medscape
Originally posted December 1, 2015

Professional burnout among US physicians has reached a dangerous level, with more than half of physicians affected, according to the results of a 2014 national survey across various medical specialties and practice settings. Compared with responses from a similar survey in 2011, burnout and satisfaction with work–life balance have worsened dramatically, even though work hours have not increased overall.

"American medicine is at a tipping point," lead author Tait D. Shanafelt, MD, from the Mayo Clinic's Department of Internal Medicine in Rochester, Minnesota, told Medscape Medical News. "If a research study identified a system-based problem that potentially decreased patient safety for 50% of medical encounters, we would swiftly move to address the problem. That is precisely the circumstance we are in, and we need an appropriate system level response."