Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Patient Safety. Show all posts

Tuesday, August 12, 2025

Gov Pritzker Signs Legislation Prohibiting AI Therapy in Illinois

Jeff Lagasse
Healthcare Finance
Originally posted 11 August 25

Illinois Governor J.B. Pritzker has signed into law a piece of legislation that will ban the use of artificial intelligence in delivering therapy or psychotherapy unless it's overseen by licensed clinicians. 

The Wellness and Oversight for Psychological Resources Act prohibits anyone from using AI to aid in mental health and therapeutic decision-making, while still allowing the use of AI for administrative and supplementary support services for licensed behavioral health professionals. 

The intent, said Pritzker, is to protect patients from unregulated AI products, protect the jobs of qualified behavioral health providers, and protect children from rising concerns about the use of AI chatbots in mental health services.

“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” said Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation (IDFPR). “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”


Here are some thoughts:

Illinois has enacted the Wellness and Oversight for Psychological Resources Act, banning the use of artificial intelligence in therapy or psychotherapy unless supervised by licensed clinicians. The law allows AI for administrative and support functions but prohibits its use in direct therapeutic decision-making. Officials cite patient safety, protection of professional jobs, and prevention of harmful AI-generated advice as key reasons. The Illinois Department of Financial and Professional Regulation will enforce the law, with penalties up to $10,000 for violations.

Sunday, October 27, 2024

Care robot literacy: integrating AI ethics and technological literacy in contemporary healthcare

Turja, T., et al.
AI Ethics (2024). 

Abstract

Healthcare work is guided by care ethics, and any technological changes, including the use of robots and artificial intelligence (AI), must comply with existing norms, values and work practices. By bridging technological literacy and AI ethics, this study provides a nuanced definition and an integrative conceptualization of care robot literacy (CRL) for contemporary care work. Robotized care tasks require new orientation and qualifications on the part of employees. CRL is considered as one of these new demands, which requires practitioners to have the resources, skills and understanding necessary to work with robots. This study builds on sociotechnical approach of literacy by highlighting a dynamic relationship of care robotization in which successful human–technology interaction relies on exchanges between the technological and the social. Our findings from directed content analysis and theoretical synthesis of in-demand technological literacy and AI ethics in care work emphasize competencies and situational awareness regarding both using the robot and communicating about the care robot. The initial conceptualization of CRL provides a conceptual framework for future studies, implementation and product development of care robots, drastically differing from studying, implementing and developing robots in general. In searching for technologically sound and ethically compliant solutions, the study advocates for the future significance of context-specific CRL as valuable addition to the terminology of ethical AI in healthcare.

Here are some thoughts:

Healthcare work is fundamentally guided by care ethics, which must be upheld as robots and artificial intelligence (AI) are integrated into care settings. Any technological advancements in healthcare must align with existing norms, values, and work practices to ensure that ethical care delivery is maintained. This highlights the importance of a thoughtful approach to the incorporation of technology in healthcare environments.

A novel concept emerging from this discourse is Care Robot Literacy (CRL), which bridges technological literacy and AI ethics. CRL encompasses the resources, skills, and understanding necessary for healthcare practitioners to work effectively with robots in their care practices. As robotized care tasks require new orientations and qualifications from employees, CRL becomes essential for equipping practitioners with the competencies needed to navigate this evolving landscape.

This study adopts a sociotechnical approach to CRL, emphasizing the dynamic relationship between care robotization and human-technology interaction. Successful integration of robots in healthcare relies on effective exchanges between technological capabilities and social factors. This interplay is crucial for fostering an environment where both patients and practitioners can benefit from technological advancements.

Key components of CRL include practical skills for operating robots and the ability to communicate about their use within care settings. These competencies are vital for ensuring that healthcare workers can not only utilize robotic systems effectively but also articulate their roles and benefits to patients and colleagues alike.

The implications of CRL extend beyond mere technical skills; it serves as a valuable occupational asset that encompasses digital proficiency, ethical awareness, and situational understanding. These elements are critical for supporting patient safety and well-being, particularly in an increasingly automated healthcare environment where the quality of care must remain a top priority.

Looking ahead, the initial conceptualization of CRL provides a framework for future studies, implementation strategies, and product development specific to care robots. As healthcare seeks technologically sound and ethically compliant solutions, CRL is positioned to become an integral part of the terminology and practice surrounding ethical AI in healthcare. 

Tuesday, October 22, 2024

Pennsylvania health system agrees to $65 million settlement after hackers leaked nude photos of cancer patients

Sean Lyngass
CNN.com
Originally posted 23 Sept 24

A Pennsylvania health care system this month agreed to pay $65 million to victims of a February 2023 ransomware attack after hackers posted nude photos of cancer patients online, according to the victims’ lawyers.

It’s the largest settlement of its kind in terms of per-patient compensation for victims of a cyberattack, according to Saltz Mongeluzzi Bendesky, the law firm representing the plaintiffs.

The settlement, which is subject to approval by a judge, is a warning to other big US health care providers that the most sensitive patient records they hold are of enormous value to both hackers and the patients themselves, health care cyber experts told CNN. Eighty percent of the $65-million settlement is set aside for victims whose nude photos were published online.

The settlement “shifts the legal, insurance and adversarial ecosystem,” said Carter Groome, chief executive of cybersecurity firm First Health Advisory. “If you’re protecting health data as a crown jewel — as you should be — images or photos are going to need another level of compartmentalized protection.”

It’s a potentially continuous cycle where hackers increasingly seek out the most sensitive patient data to steal, and health care providers move to settle claims out of courts to avoid “ongoing reputational harm,” Groome told CNN.

According to the lawsuit, a cybercriminal gang stole nude photos of cancer patients last year from Lehigh Valley Health Network, which comprises 15 hospitals and health centers in eastern Pennsylvania. The hackers demanded a ransom payment and when Lehigh refused to pay, they leaked the photos online.

The lawsuit, filed on behalf of a Pennsylvania woman and others whose nude photos were posted online, said that Lehigh Valley Health Network needed to be held accountable “for the embarrassment and humiliation” it had caused plaintiffs.

“Patient, physician, and staff privacy is among our top priorities, and we continue to enhance our defenses to prevent incidents in the future,” Lehigh Valley Health Network said in a statement to CNN on Monday.


Here are some thoughts:

The ransomware attack on Lehigh Valley Health Network raises significant ethical and healthcare concerns. The exposure of nude photos of cancer patients is a profound breach of trust and privacy, causing significant emotional distress and psychological harm. Healthcare providers have a duty of care to protect patient data and must be held accountable for their failure to do so. The decision to pay a ransom is ethically complex, as it can incentivize further attacks and potentially jeopardize patient safety. The frequency and severity of ransomware attacks highlight the urgent need for stronger cybersecurity measures in the healthcare sector. By addressing these ethical and practical considerations, healthcare organizations can better safeguard patient information and ensure the delivery of high-quality care.

Friday, October 4, 2024

New Ethics Opinion Addresses Obligations Associated With Collateral Information

Moran, M. (2024).
Psychiatric News, 59(09). 

What are a psychiatrist’s ethical obligations regarding confidentiality of sources of collateral information obtained in the involuntary hospitalization of a patient?

In a new opinion added to “The Principles of Medical Ethics With Annotations Especially Applicable to Psychiatry,” the APA Ethics Committee underscored that a psychiatrist’s overriding ethical obligation is to the safety of the patient, and that there can be no guarantee of confidentiality to family members or other sources who provide information that is used during involuntary hospitalization.

“Psychiatrists deal with collateral information in clinical practice routinely,” said Ethics Committee member Gregory Barber, M.D. “It’s a standard part of the job to collect collateral information in cases where a patient is hospitalized, especially involuntarily, and there can be a lot of complicated interpersonal dynamics that come up when family members provide that information.

“We obtain collateral information from people who know a patient well as a way to ensure we have a full clinical picture regarding the patient’s situation,” Barber said. “But our ethical obligations are to the patient and the patient’s safety. Psychiatrists do not have an established doctor-patient relationship with the source of collateral information and do not have obligations to keep the source hidden from patients. And we should not make guarantees that the information will remain confidential.”


Here are some thoughts:

Psychiatrists' ethical obligations regarding confidentiality of collateral information in involuntary hospitalization prioritize patient safety. While they should strive to protect sources' privacy, this may be secondary to ensuring the patient's well-being. Transparency and open communication with both the patient and the collateral source are essential for building trust and preventing conflicts.

Sunday, September 29, 2024

Whistleblowing in science: this physician faced ostracization after standing up to pharma

Sara Reardon
nature.com
Originally posted 20 Aug 24

The image of a lone scientist standing up for integrity against a pharmaceutical giant seems romantic and compelling. But to haematologist Nancy Olivieri, who went public when the company sponsoring her drug trial for a genetic blood disorder tried to suppress data about harmful side effects, the experience was as unglamorous as it was damaging and isolating. “There’s a lot of people who fight for justice in research integrity and against the pharmaceutical industry, but very few people know what it’s like to take on the hospital administrators” too, she says.

Now, after more than 30 years of ostracization by colleagues, several job losses and more than 20 lawsuits — some of which are ongoing — Olivieri is still amazed that what she saw as efforts to protect her patients could have proved so controversial, and that so few people took her side. Last year, she won the John Maddox Prize, a partnership between the London-based charity Sense about Science and Nature, which recognizes “researchers who stand up and speak out for science” and who achieve changes amid hostility. “It’s absolutely astounding to me that you could become famous as a physician for saying, ‘I think there might be a complication here,’” she says. “There was a lot of really good work that we could have done that we wasted a lot of years not doing because of all this.”

Olivieri didn’t set out to be a troublemaker. As a young researcher at the University of Toronto (UT), Canada, in the 1980s, she worked with children with thalassaemia — a blood condition that prevents the body from making enough oxygen-carrying haemoglobin, and that causes a fatal build-up of iron in the organs if left untreated. She worked her way up to become head of the sickle-cell-disease programme at the city’s Hospital for Sick Children (SickKids). In 1989, she started a clinical trial at SickKids to test a drug called deferiprone that traps iron in the blood. The hospital eventually brought in a pharmaceutical company called Apotex, based in Toronto, Canada, to co-sponsor the study as part of regulatory requirements.


Here are some thoughts:

The case of Nancy Olivieri, a haematologist who blew the whistle on a pharmaceutical company's attempts to suppress data about harmful side effects of a drug, highlights the challenges and consequences faced by researchers who speak out against industry and institutional pressures. Olivieri's experience demonstrates how institutions can turn against researchers who challenge industry interests, leading to isolation, ostracization, and damage to their careers. Despite the risks, Olivieri's story emphasizes the crucial role of allies and support networks in helping whistle-blowers navigate the challenges they face.

The case also underscores the importance of maintaining research integrity and transparency, even in the face of industry pressure. Olivieri's experience shows that prioritizing patient safety and well-being over industry interests is critical, and institutions must be held accountable for their actions. Additionally, the significant emotional toll that whistle-blowing can take on individuals, including anxiety, isolation, and disillusionment, must be acknowledged.

To address these issues, policy reforms are necessary to protect researchers from retaliation and ensure that they can speak out without fear of retribution. Industry transparency is also essential to minimize conflicts of interest. Furthermore, institutions and professional organizations must establish support networks for researchers who speak out against wrongdoing.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted 30 October 23

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”


My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Wednesday, March 25, 2020

What Should Health Care Organizations Do to Reduce Billing Fraud and Abuse?

K. Drabiak and J. Wolfson
AMA J Ethics. 2020;22(3):E221-231.
doi: 10.1001/amajethics.2020.221.

Abstract

Whether physicians are being trained or encouraged to commit fraud within corporatized organizational cultures through contractual incentives (or mandates) to optimize billing and process more patients is unknown. What is known is that upcoding and misrepresentation of clinical information (fraud) costs more than $100 billion annually and can result in unnecessary procedures and prescriptions. This article proposes fraud mitigation strategies that combine organizational cultural enhancements and deployment of transparent compliance and risk management systems that rely on front-end data analytics.

Fraud in Health Care

Growth in corporatization and profitization in medicine, insurance company payment rules, and government regulation have fed natural proclivities, even among physicians, to optimize profits and reimbursements (Florida Department of Health, oral communication, September 2019). According to the most recent Health Care Fraud and Abuse Control Program Annual Report, in one case a management company “pressured and incentivized” dentists to meet specific production goals through a system that disciplined “unproductive” dentists and awarded cash bonuses tied to the revenue from procedures—including many allegedly medically unnecessary services—they performed. This has come at a price: escalating costs, fraud and abuse, medically unnecessary services, adverse effects on patient safety, and physician burnout.

Breaking the cycle of bad behaviors that are induced in part by financial incentives speaks to core ethical issues in the practice of medicine that can be addressed through a combination of organizational and cultural enhancements and more transparent practice-based compliance and risk management systems that rely on front-end data analytics designed to identify, flag, and focus investigations on fraud and abuse at the practice site. Here, we discuss types of health care fraud and their impact on health care costs and patient safety, how this behavior is incentivized and justified within current and evolving medical practice settings, and a 2-pronged strategy for mitigating this behavior.
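The "front-end data analytics" strategy the authors describe can be made concrete with a small sketch. Everything below (provider names, claim counts, the two-standard-deviation threshold) is invented for illustration; real billing-integrity systems operate on far richer claims data and this is not the authors' actual system.

```python
# Hypothetical sketch of front-end billing analytics: flag providers whose
# rate of high-complexity ("upcoded") billing is an outlier among peers.
# All names and numbers below are invented for illustration.
from statistics import mean, stdev

# provider -> (high-complexity claims, total claims)
claims = {
    "prov_A": (120, 1000),
    "prov_B": (135, 1000),
    "prov_C": (110, 1000),
    "prov_D": (480, 1000),  # unusually heavy use of high-complexity codes
    "prov_E": (125, 1000),
}

rates = {p: hi / total for p, (hi, total) in claims.items()}

def z_vs_peers(p: str) -> float:
    """z-score of provider p's rate against all *other* providers,
    so a single extreme outlier cannot mask itself."""
    peers = [r for q, r in rates.items() if q != p]
    return (rates[p] - mean(peers)) / stdev(peers)

# Flag for human review; a flag is a starting point, not a finding of fraud.
flagged = sorted(p for p in rates if z_vs_peers(p) > 2.0)
print(flagged)  # ['prov_D']
```

The leave-one-out comparison matters: scoring each provider against a mean and standard deviation that include the outlier itself would shrink the outlier's own z-score and could let it escape the flag.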


Friday, February 2, 2018

Confidential deals can obscure sexual misconduct allegations against doctors

Jayne O'Donnell
USA TODAY
Originally published January 5, 2018

Here are two excerpts:

Hospitals will often take over doctors' liability in confidential settlements, which Washington plaintiffs' attorney Patrick Malone calls a "frequent dodge" to keep medical negligence claims out of the National Practitioners Data Bank. Before they hire doctors, hospitals check the data bank, which also includes disciplinary actions by hospitals, medical societies and boards, which also have access to it.

Duncan's case, however, was a "miscellaneous tort claim," filed after Ohio's one-year statute of limitations for medical malpractice claims had passed.

That's just one of the many laws working in the favor of the Cleveland Clinic and the health care industry in Ohio. Plaintiff lawyer Michael Shroge, a former Cleveland Clinic associate general counsel, says major health care systems are "very often more interested in protecting their brand than protecting the health of patients."

(cut)

Critics of settlement deals' gag clauses say they compromise patients' health and safety and are unethical.

Confidential settlements are particularly problematic when it comes to health care, as "we take off our clothes in front of doctors," said Malone, who specializes in medical malpractice cases. "For a doctor to violate that in a sexual way is the ultimate wrong," he said, adding that he agrees to confidential settlements only if his client insists, and then only as to the settlement amount.


Tuesday, January 9, 2018

Drug Companies’ Liability for the Opioid Epidemic

Rebecca L. Haffajee and Michelle M. Mello
N Engl J Med 2017; 377:2301-2305
December 14, 2017
DOI: 10.1056/NEJMp1710756

Here is an excerpt:

Opioid products, they alleged, were defectively designed because companies failed to include safety mechanisms, such as an antagonist agent or tamper-resistant formulation. Manufacturers also purportedly failed to adequately warn about addiction risks on drug packaging and in promotional activities. Some claims alleged that opioid manufacturers deliberately withheld information about their products’ dangers, misrepresenting them as safer than alternatives.

These suits faced formidable barriers that persist today. As with other prescription drugs, persuading a jury that an opioid is defectively designed if the Food and Drug Administration approved it is challenging. Furthermore, in most states, a drug manufacturer’s duty to warn about risks is limited to issuing an adequate warning to prescribers, who are responsible for communicating with patients. Finally, juries may resist laying legal responsibility at the manufacturer’s feet when the prescriber’s decisions and the patient’s behavior contributed to the harm. Some individuals do not take opioids as prescribed or purchase them illegally. Companies may argue that such conduct precludes holding manufacturers liable, or at least should reduce damages awards.

One procedural strategy adopted in opioid litigation that can help overcome defenses based on users’ conduct is the class action suit, brought by a large group of similarly situated individuals. In such suits, the causal relationship between the companies’ business practices and the harm is assessed at the group level, with the focus on statistical associations between product use and injury. The use of class actions was instrumental in overcoming tobacco companies’ defenses based on smokers’ conduct. But early attempts to bring class actions against opioid manufacturers encountered procedural barriers. Because of different factual circumstances surrounding individuals’ opioid use and clinical conditions, judges often deemed proposed class members to lack sufficiently common claims.


Tuesday, February 14, 2017

Medical errors: Disclosure styles, interpersonal forgiveness, and outcomes

Hannawa, A. F., Shigemoto, Y., & Little, T. (2016).
Social Science & Medicine, 156, 29-38.

Abstract

Rationale

This study investigates the intrapersonal and interpersonal factors and processes that are associated with patient forgiveness of a provider in the aftermath of a harmful medical error.

Objective

This study aims to examine what antecedents are most predictive of patient forgiveness and non-forgiveness, and the extent to which social-cognitive factors (i.e., fault attributions, empathy, rumination) influence the forgiveness process. Furthermore, the study evaluates the role of different disclosure styles in two different forgiveness models, and measures their respective causal outcomes.

Methods

In January 2011, 318 outpatients at Wake Forest Baptist Medical Center in the United States were randomly assigned to three hypothetical error disclosure vignettes that operationalized verbally effective disclosures with different nonverbal disclosure styles (i.e., high nonverbal involvement, low nonverbal involvement, written disclosure vignette without nonverbal information). All patients responded to the same forgiveness-related self-report measures after having been exposed to one of the vignettes.

Results

The results favored the proximity model of interpersonal forgiveness, which implies that factors more proximal in time to the act of forgiving (i.e., patient rumination and empathy for the offender) are more predictive of forgiveness and non-forgiveness than less proximal factors (e.g., relationship variables and offense-related factors such as the presence or absence of an apology). Patients' fault attributions had no effect on their forgiveness across conditions. The results evidenced sizeable effects of physician nonverbal involvement: patients in the low nonverbal involvement condition perceived the error as more severe, experienced the physician's apology as less sincere, were more likely to blame the physician, felt less empathy, ruminated more about the error, were less likely to forgive and more likely to avoid the physician, reported less closeness, trust, and satisfaction but higher distress, were more likely to change doctors, less compliant, and more likely to seek legal advice.

Conclusion

The findings of this study imply that physician nonverbal involvement during error disclosures stimulates a healing mechanism for patients and the physician-patient relationship. Physicians who disclose a medical error in a nonverbally uninvolved way, on the other hand, carry a higher malpractice risk and are less likely to promote healthy, reconciliatory outcomes.


Friday, December 16, 2016

How a doctor convicted in drugs-for-sex case returned to practice

Danny Robbins
Atlanta Journal-Constitution
Part of a series on Physical and Sexual Abuse

Here is an excerpt:

“The pimp with a prescription pad” is what one prosecutor called him during a trial in which it was revealed that more than 400 sexually explicit photos of female patients and other women had been discovered in his office.

In some states, where legislatures have enacted laws prohibiting doctors who commit certain crimes from practicing, Dekle’s career would be over. But in Georgia, where the law gives the medical board the discretion to license anyone it sees fit, he was back in practice two years after leaving prison.

More than a dozen years later, that decision still leads some to wonder what the board was thinking.

“It’s particularly damning that he was using his ability to write prescriptions to further his sexual activities,” said Chris Dorsey, the Georgia Bureau of Investigation agent who led the probe that sent Dekle to prison. “A doctor burglarizes a house and then pays his debt to society, could he be a good doctor? I could argue it both ways. But when you have someone who abused everything centering on a medical practice to victimize all these people, that’s really a separate issue.”


Thursday, December 15, 2016

How Well Does Your State Protect Patients?

By Carrie Teegardin
Atlanta Journal-Constitution
A series on Physicians and Abuse

Here is an excerpt:

In most states, doctors dominate medical licensing boards and have the authority to decide who is fit to practice medicine and who isn’t. Usually the laws do not restrict a board’s authority by mandating certain punishments for some types of violations. Many licensing boards — including Georgia’s — say that’s how it should be.

“Having a bold, bright line saying a felony equals this or that is not good policy,” said Bob Jeffery, executive director of the Georgia Composite Medical Board.

Jeffery said criminal courts punish offenders and civil courts can compensate victims. Medical regulators, he said, have a different role.

“A licensing board is charged with making sure a (doctor) is safe to practice and that patients are protected,” he said.

With no legal prohibition standing in the way in most states, doctor-dominated medical boards often decide that doctors busted for abusive or illegal behaviors can be rehabilitated and safely returned to exam rooms.

New Jersey licensed a doctor convicted of sexual offenses with four patients. Kansas licensed a doctor imprisoned in Ohio for a sexual offense involving a child; that doctor later lost his Kansas license after making anonymous obscene phone calls to patients. Utah licensed a doctor who didn’t contest misdemeanor charges of sexual battery for intentionally touching the genitals of patients, staff members and others.


Wednesday, November 4, 2015

Psychological principles could explain major healthcare failings

Press Release
Bangor University

Here is an excerpt:

In the research paper, Dr Michelle Rydon-Grange who has just qualified as a Clinical Psychologist at the School of Psychology, applies psychological theory to find new understandings of the causes that lead to catastrophic failures in healthcare settings.  She explains that the aspect often neglected in inquiries is the role that human behaviour plays in contributing to these failures, and hopes that using psychological theories could prevent their reoccurrence in the future.

The value of psychological theory in safety-critical industries such as aviation and nuclear power has long been acknowledged and is based upon the notion that certain employee behaviours are required to maintain safety. However, the same is not yet true of healthcare.

Though there may not be obvious similarities between various healthcare scandals which have occurred in disparate areas of medicine over the last few decades, striking similarities in the conditions under which these crises occurred can be found, according to Rydon-Grange.


Saturday, September 19, 2015

When Bad Doctors Happen to Good Patients

By Thomas Moore and Steve Cohen
The New York Times
Originally published August 31, 2015

Here is an excerpt:

That Lavern’s Law wasn’t allowed to come up for a final vote is Albany’s shame. The greater shame is that hospitals don’t put more emphasis on patient safety. As the Lavern’s Law travesty makes clear, we need better solutions. Don’t limit what injured people may collect, and don’t make it more difficult for victims to get their cases heard. Even better for all concerned, keep the negligent act from ever happening in the first place. And there are practical ways to do that.

Doctors and hospitals must do a better job of policing themselves. Six percent of all doctors were estimated to be responsible for 58 percent of all malpractice payments between 1991 and 2005. State licensing agencies must do a much better job of keeping those worst of the worst out of hospitals. The threshold for state medical licensing agencies to initiate reviews should be reduced; in New York it takes six malpractice judgments or settlements. It should be three at most.
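The concentration statistic in the excerpt is starker than it first reads. The arithmetic below simply restates the cited figures (6% of doctors, 58% of payments) to show what they imply about the average doctor inside versus outside that group; no data beyond those two percentages is assumed.

```python
# Back-of-envelope arithmetic on the cited figure: 6% of doctors accounted
# for 58% of malpractice payments between 1991 and 2005.
high_docs, high_payments = 0.06, 0.58

per_doc_high = high_payments / high_docs              # payment share per doctor inside the 6%
per_doc_rest = (1 - high_payments) / (1 - high_docs)  # payment share per doctor outside it

ratio = per_doc_high / per_doc_rest
print(round(ratio, 1))  # 21.6 -- roughly 22x the average payment burden
```

In other words, an average doctor in that 6% accounted for roughly 22 times the malpractice payments of an average doctor outside it, which is the quantitative case for the authors' call to lower review thresholds.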

The entire article is here.

Friday, May 22, 2015

Expansion of ‘Right to Try’ legislation raises ethical, safety concerns

By HemOnc Today
Originally published April 25, 2015

Early access to experimental drugs has historically been reserved for patients enrolled on clinical trials.

In 2009, the FDA revamped its 1980s expanded access program, which allows terminally ill patients who are ineligible for clinical trials, and for whom no alternative approved therapies exist, to ask pharmaceutical companies for access to an investigational drug in their pipeline. More than 1,500 patients received an experimental treatment through the FDA’s program in 2014.

Now, some legislatures are going a step further by adopting so-called “Right to Try” legislation, intended to give terminally ill patients comparable access to investigational drugs but removing the FDA from the process.

Since 2014, thirteen states have passed Right to Try laws, and legislators in 20 more states have plans to introduce similar legislation this year.

The entire article is here.

Tuesday, July 1, 2014

An analysis of electronic health record-related patient safety concerns

By D. W. Meeks, M. W. Smith, L. Taylor and others
J Am Med Inform Assoc doi:10.1136/amiajnl-2013-002578

Here is a portion of the Discussion Section

Our findings underscore the importance of continuing the process of detecting and addressing safety concerns long after EHR implementation and ‘go-live’ has occurred. Having a mature EHR system clearly does not eliminate EHR-related safety concerns, and a majority of reported incidents were phase 1 or unsafe technology. However, few healthcare systems have robust reporting and analytic infrastructure similar to the VA's IPS. In light of increasing use of EHRs, activities to achieve a resilient EHR-enabled healthcare system should include a reporting and analysis infrastructure for EHR-related safety concerns. Proactive risk assessments to identify safety concerns, such as through the use of SAFER guides released recently by The Office of the National Coordinator for Health Information Technology, can be used by healthcare organizations or EHR users to facilitate meaningful conversations and collaborative efforts with vendors to improve patient safety, including developing better and safer EHR designs.

Thursday, March 27, 2014

Best practices for remote psychological assessment via telehealth technologies

By David Luxton, Larry Pruitt, and Janyce Osenbach
Professional Psychology: Research and Practice, Vol 45(1), Feb 2014, 27-35.
doi: 10.1037/a0034547
Special Section: Telepractice

Abstract

The use and capabilities of telehealth technologies to conduct psychological assessments remotely are expanding. Clinical practitioners and researchers need to be aware of what influences the psychometric properties of telehealth-based assessments to assure optimal and competent assessments. The purpose of this review is to discuss the specific factors that influence the validity and reliability of remote psychological assessments and to provide best practices recommendations. Specific factors discussed include the lack of physical presence, technological issues, patient and provider acceptance of and comfort with technology, and procedural issues. Psychometric data regarding telehealth-based psychological assessment and limitations to these data, as well as cultural, ethical, and safety considerations are discussed. The information presented is applicable to all mental health professionals who conduct psychological assessment with telehealth technologies.

The entire article is here, behind a paywall.

Monday, February 3, 2014

Episode 1: What Psychologists Need to Know about Divorce, Mediation, and Collaborative Law

In this inaugural podcast, John interviews Attorney James Demmel about divorce, litigation, mediation, and collaborative law.  Psychologists frequently find themselves working with individuals contemplating a divorce or actually going through the divorce process.  The purpose of this podcast is to give psychologists an overview of issues surrounding divorce, litigation, mediation, and collaboration.

At the end of this podcast, the listener will be able to:

1. Describe collaborative law,
2. Differentiate between collaborative law and mediation, and,
3. Describe the benefits of mediation and collaborative law.

Click here to purchase 1 APA-approved Continuing Education credit

Find this podcast in iTunes

Or listen directly here.

Resources

Link to Attorney Demmel's website

Frequently Asked Questions about Collaborative Law and Mediation

Items Needed to Analyze Marital Assets - From Demmel Law Office

Link to The International Academy of Collaborative Professionals

Link to Collaborative Professionals of Central Pennsylvania 

Listener feedback about this episode can be sent to John Gavazzi

Thursday, December 5, 2013

Watchful Eye in Nursing Homes

By Jan Hoffman
The New York Times
Originally published November 18, 2013

Here are some excerpts:

In June, Mike DeWine, the Ohio state attorney general, announced that his office, with permission from families, had placed cameras in residents’ rooms in an unspecified number of state facilities. Mr. DeWine has moved to shut down at least one facility, in Zanesville, where, he said, cameras caught actions like an aide’s repeatedly leaving a stroke patient’s food by his incapacitated side.

The recordings can have an impact. Based on Ms. Racher’s videos, one aide pleaded guilty to abuse and neglect. The other appears to have fled the country. Similar scenes of abuse have been captured in New Jersey, New York, Pennsylvania, Texas and other states by relatives who placed cameras in potted plants and radios, webcams and iPhones.

(cut)

But the secret monitoring of a resident raises ethical and legal questions. Families must balance fears for their relative’s safety against an undignified invasion of their privacy. They must also consider the privacy rights of others who pass through the room, including roommates and visitors.

Proponents of hidden cameras argue that expectations of privacy have fallen throughout society: nanny cams, webcams and security cameras are ubiquitous.

The entire article is here.

Thursday, November 21, 2013

Talking with Patients about Other Clinicians' Errors

By Thomas H. Gallagher, Michelle M. Mello, and others
The New England Journal of Medicine
Originally published November 6, 2013

Here is an excerpt:

The rationales for disclosing harmful errors to patients are compelling and well described. Nonetheless, multiple barriers, including embarrassment, lack of confidence in one's disclosure skills, and mixed messages from institutions and malpractice insurers, make talking with patients about errors challenging. Several distinctive aspects of disclosing harmful errors involving colleagues intensify the difficulties.

One challenge is determining what happened when a clinician was not directly involved in the event in question. He or she may have little firsthand knowledge about the event, and relevant information in the medical record may be lacking. Beyond this, potential errors exist on a broad spectrum ranging from clinical decisions that are “not what I would have done” but are within the standard of care to blatant errors that might even suggest a problem of professional competence or proficiency.

The entire article is here.

Thanks to Gary Schoener for this information.