Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Regulations. Show all posts

Sunday, December 31, 2023

Problems with the interjurisdictional regulation of psychological practice

Taube, D. O., Shapiro, D. L., et al. (2023).
Professional Psychology: Research and Practice,
54(6), 389–402.

Abstract

The U.S. Constitutional structure creates ethical conflicts for the cross-jurisdictional practice of professional psychology. The profession has chosen to seek interstate agreements to overcome such barriers, and such agreements now include almost 80% of American jurisdictions. Although an improvement over a patchwork of state laws regarding practice, the structure of this agreement and the exclusion of the remaining states continue to pose barriers to the principles of beneficence and nonmaleficence. It creates a system that is extraordinarily difficult to change and places an unrealistic burden on professionals to know, address, and act under complex legal mandates. As psychological services have moved increasingly to remote platforms and cross-jurisdictional business models, and a nationwide mental health crisis has emerged alongside the pandemic, it is time to consider a national professional licensing system more seriously, both to further reduce barriers to care and complexity and permit the best interests of patients to prevail.

Impact Statement

Access to and the ability to continue receiving mental health care across jurisdictions and nations has become increasingly urgent in the wake of the COVID-19 pandemic. This Ethics in Motion section highlights legal barriers to providing ethical care across jurisdictions, how those challenges developed, and strengths and limitations of current approaches and potential solutions.


My summary: 

The current system of interjurisdictional regulation of psychological practice in the United States is problematic because it creates ethical conflicts for psychologists and places an unrealistic burden on them to comply with complex legal mandates. The system is also extraordinarily difficult to change, and it excludes psychologists in states that have not joined the interstate agreement. As a result, the current system does not adequately protect the interests of patients.

A national professional licensing system would be a more effective way to regulate the practice of psychology across state lines. Such a system would eliminate the need for psychologists to comply with multiple state laws, and it would make it easier for them to provide care to patients who live in different states. A national system would also be more equitable, as it would ensure that all psychologists are held to the same standards.

Monday, July 3, 2023

Is Avoiding Extinction from AI Really an Urgent Priority?

S. Lazar, J. Howard, & A. Narayanan
fast.ai
Originally posted 30 May 23

Here is an excerpt:

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

Friday, June 23, 2023

In the US, patient data privacy is an illusion

Harlan M Krumholz
Opinion
BMJ 2023;381:p1225

Here is an excerpt:

The regulation allows anyone involved in a patient’s care to access health information about them. It is based on the paternalistic assumption that for any healthcare provider or related associate to be able to provide care for a patient, unfettered access to all of that individual’s health records is required, regardless of the patient’s preference. This provision removes control from the patient’s hands for choices that should be theirs alone to make. For example, the pop-up covid testing service you may have used can claim to be an entity involved in your care and gain access to your data. This access can be bought through many for-profit companies. The urgent care centre you visited for your bruised ankle can access all your data. The team conducting your prenatal testing is considered involved in your care and can access your records. Health insurance companies can obtain all the records. And these are just a few examples.

Moreover, health systems legally transmit sensitive information with partners, affiliates, and vendors through Business Associate Agreements. But patients may not want their sensitive information disseminated—they may not want all their identified data transmitted to a third party through contracts that enable those companies to sell their personal information if the data are de-identified. And importantly, with all the advances in data science, effectively de-identifying detailed health information is almost impossible.
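The near-impossibility of robust de-identification comes largely from linkage: a few quasi-identifiers shared between a "de-identified" record and an outside dataset with names attached are often enough to single a person out. A minimal, entirely hypothetical sketch of such a linkage attack (all records, field names, and diagnoses below are invented):

```python
# Hypothetical illustration of a linkage (re-identification) attack:
# "de-identified" health records are joined to a public roster on
# quasi-identifiers alone. All data below is made up.

deidentified_health_records = [
    {"zip": "02139", "birth_date": "1984-07-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_date": "1991-11-15", "sex": "M", "diagnosis": "depression"},
]

public_roster = [  # e.g., a voter file or marketing list with names attached
    {"name": "Jane Doe", "zip": "02139", "birth_date": "1984-07-02", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_date": "1991-11-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(records, roster):
    """Attach a name to every 'anonymous' record whose quasi-identifiers
    match exactly one person in the outside roster."""
    index = {}
    for person in roster:
        key = tuple(person[f] for f in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(person["name"])
    matches = []
    for rec in records:
        key = tuple(rec[f] for f in QUASI_IDENTIFIERS)
        names = index.get(key, [])
        if len(names) == 1:  # unique match -> record is re-identified
            matches.append((names[0], rec["diagnosis"]))
    return matches

print(link(deidentified_health_records, public_roster))
# [('Jane Doe', 'asthma'), ('John Roe', 'depression')]
```

With richer clinical detail (dates of service, procedure codes, locations), the number of people matching any given combination shrinks further, which is why the article treats de-identification of detailed health data as effectively unattainable.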

HIPAA confers ample latitude to these third parties. As a result, companies make massive profits from the sale of data. Some companies claim to be able to provide comprehensive health information on more than 300 million Americans—most of the American public—for a price. These companies' business models are legal, yet most patients remain in the dark about what may be happening to their data.

However, massive accumulations of medical data do have the potential to produce insights into medical problems and accelerate progress towards better outcomes. And many uses of a patient’s data, despite moving throughout the healthcare ecosystem without their knowledge, may nevertheless help advance new diagnostics and therapeutics. The critical questions surround the assumptions people should have about their health data and the disclosures that should be made before a patient speaks with a health professional. Should each person be notified before interacting with a healthcare provider about what may happen with the information they share or the data their tests reveal? Are there new technologies that could help patients regain control over their data?

Although no one would relish a return to paper records, that cumbersome system at least made it difficult for patients’ data to be made into a commodity. The digital transformation of healthcare data has enabled wondrous breakthroughs—but at the cost of our privacy. And as computational power and more clever means of moving and organising data emerge, the likelihood of permission-based privacy will recede even further.

Sunday, March 26, 2023

State medical board chair Dr. Brian Hyatt resigns, faces Medicaid fraud allegations

Ashley Savage
Arkansas Democrat Gazette
Originally published 3 MAR 23

Dr. Brian Hyatt stepped down as chairman of the Arkansas State Medical Board Thursday in a special meeting following "credible allegations of fraud," noted in a letter from the state's office of Medicaid inspector general.

Members of the board met remotely Thursday with only one item on the agenda: "Discussion of Arkansas State Board's leadership."

The motion to approve Hyatt's request to step down as chairman and out of an executive role on the board was approved unanimously.

Board members also decided that Dr. Rhys Branman will take over as the interim chairman until an election to fill the seat is held in April.

According to the board Thursday, the vacant seats for vice chair and chair of the board will be voted on separate ballots in the April elections.

The Medicaid letter states "red flags" were discovered in Hyatt's use of Medicaid claims and process of billing for medical services. In Arkansas, Medicaid fraud resulting in an overpayment over $2,500 is a felony.

"Dr. Hyatt is a clear outlier, and his claims are so high they skew the averages on certain codes for the entire Medicaid program in Arkansas," the affidavit states.

"The suspension is temporary and there's a right to appeal. I see only allegations and I don't see any actual charges and I haven't dealt with this a lot," said Branman.

Hyatt has 30 days to appeal his suspension from the Medicaid program.

Other information from the letter shows that Hyatt is alleged to have billed more Medicaid patients at the 99233 code than any other doctor billed for all of their Medicaid patients between January of 2019 and June 30, 2022.
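The "clear outlier" language in the affidavit reflects a standard screening approach: compare each provider's volume of a given billing code against the distribution across all providers and flag values far above the mean. A minimal sketch with invented counts (not the actual Arkansas Medicaid data):

```python
# Hypothetical sketch of the kind of outlier screen described in the
# affidavit: compare each provider's use of a billing code (e.g., 99233)
# against the distribution across all providers. Counts are invented.
from statistics import mean, stdev

claims_per_provider = {        # provider id -> number of 99233 claims billed
    "provider_A": 410,
    "provider_B": 385,
    "provider_C": 402,
    "provider_D": 4950,        # the kind of value that "skews the averages"
    "provider_E": 371,
}

values = list(claims_per_provider.values())
mu, sigma = mean(values), stdev(values)

for provider, count in claims_per_provider.items():
    z = (count - mu) / sigma
    flag = "  <-- outlier" if z > 2 else ""
    print(f"{provider}: {count} claims, z = {z:+.2f}{flag}")
```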

Thursday, March 24, 2022

Proposal for Revising the Uniform Determination of Death Act

Hastings Bioethics Center
Originally posted 18 FEB 22

Organ transplantation has saved many lives in the past half-century, and the majority of postmortem organ donations have occurred after a declaration of death by neurological criteria, or brain death. However, inconsistencies between the biological concept of death and the diagnostic protocols used to determine brain death–as well as questions about the underlying assumptions of brain death–have led to a justified reassessment of the legal standard of death. We believe that the concept of brain death, though flawed in its present application, can be preserved and promoted as a pathway to organ donation, but only after particular changes are made in the medical criteria for its diagnosis. These changes should precede changes in the Uniform Determination of Death Act (UDDA).

The UDDA, approved in 1981, provides a legal definition of death, which has been adopted in some form by all 50 states. It says that death can be defined as the irreversible cessation of circulatory and respiratory functions or of brain functions. The act defines brain death as “irreversible cessation of all functions of the entire brain, including the brainstem.” This description is based on a widely held assumption at the time that the brain is the master integrator of the body, such that when it ceases to function, the body would no longer be able to maintain integrated functions. It was presumed that this would result in both cardiac and pulmonary arrest and the death of the body as a whole. Now that assumption has been called into question by exceptional cases of individuals on ventilators who were declared brain dead but who continued to have function in the hypothalamus. 

(cut)

Revision of the UDDA should first defer to a revision of the guidelines. Clinical criteria for the diagnosis of “cessation of all functions of the entire brain” must include all pertinent functions, including hypothalamic functions such as hormone release and regulation of temperature and blood pressure, to avoid the specter of neurologic recovery in those who fulfill the current clinical criteria for the diagnosis of brain death.

It is likely that the failure to account for a full set of pertinent brain functions has led to inconsistent diagnoses and conflicting results. Such inconsistencies, although well-documented in a number of cases, may have been even more frequent but unrecognized because declaration of brain death is often a self-fulfilling prophecy: rarely do any life-sustaining interventions continue after the diagnosis is made.

To be consistent, transparent, and accurate, the cessation of function in both the cardiopulmonary and the neurological standard of the UDDA should be described as permanent (i.e., no reversal will be attempted) rather than irreversible (i.e., no reversal is possible). We recognize additional challenges in complying with the UDDA requirements that these cessation criteria for brain death include “all functions” of the “entire brain.” In the absence of universally accepted and easily implemented testing criteria, there may be real problems with being in perfect compliance with these legal criteria in spite of being in perfect compliance with the currently published medical guidelines. If the concept of brain death is philosophically valid, as we think is defensible, then the diagnostic guidelines should be corrected before any attempt is made to correct the UDDA. They must then “say what they mean and mean what they say” to eliminate any possibility of patients with persistent evidence of brain function, including hypothalamic function, being erroneously declared brain dead.

Tuesday, March 8, 2022

"Without Her Consent" Harvard Allegedly Obtained Title IX Complainant’s Outside Psychotherapy Records, Absent Her Permission

Colleen Flaherty
Inside Higher Ed
Originally published 10 FEB 22

Here are two excerpts:

Harvard provided background information about how its dispute resolution office works, saying that it doesn’t contact a party’s medical care provider except when a party has indicated that the provider has relevant information that the party wants the office to consider. In that case, the office receives information from the care provider only with the party’s consent.

Multiple legal experts said Wednesday that this is the established protocol across higher education.

Asked for more details about what happened, Kilburn’s lawyer, Carolin Guentert, said that Kilburn’s therapist is a private provider unaffiliated with Harvard, and “we understand that ODR contacted Ms. Kilburn’s therapist and obtained the psychotherapy notes from her sessions with Ms. Kilburn, without first seeking Ms. Kilburn’s written consent as required under HIPAA,” the Health Insurance Portability and Accountability Act of 1996, which governs patient privacy.

Asked if Kilburn ever signed a privacy waiver with her therapist that would have granted the university access to her records, Guentert said Kilburn “has no recollection of signing such a waiver, nor has Harvard provided one to us.”

(cut)

Even more seriously, these experts said that Harvard would have had no right to obtain Kilburn’s mental health records from a third-party provider without her consent.

Andra J. Hutchins, a Massachusetts-based attorney who specializes in education law, said that therapy records are protected by psychotherapist-patient privilege (something akin to attorney-client privilege).

“Unless the school has an agreement with and a release from the student to provide access to those records or speak to the student’s therapist—which can be the case if a student is placed on involuntary leave due to a mental health issue—there should be no reason that a school would be able to obtain a student’s psychotherapy records,” she said.

As far as investigations under Title IX (the federal law against gender-based discrimination in education) go, questions from the investigator seeking information about the student’s psychological records aren’t permitted unless the student has given written consent, Hutchins added. “Schools have to follow state and federal health-care privacy laws throughout the Title IX process. I can’t speculate as to how or why these records were released.”

Daniel Carter, president of Safety Advisors for Educational Campuses, said that “it is absolutely illegal and improper for an institution of higher education to obtain one of their students’ private therapy records from a third party. There’s no circumstance under which that is permissible without their consent.”

Thursday, November 25, 2021

APF Gold Medal Award for Life Achievement in the Practice of Psychology: Samuel Knapp

American Psychologist, 76(5), 812–814. 

This award recognizes a distinguished career and enduring contribution to the practice of psychology. Samuel Knapp’s long, distinguished career has resulted in demonstrable effects and significant contributions to best practices in professionalism, ethics education, positive ethics, and legislative advocacy as Director of Professional Affairs for the Pennsylvania Psychological Association and as an ethics educator extraordinaire. Dr. Knapp’s work has modified the way psychologists think about professional ethics through education, from avoiding disciplinary consequences to promoting overarching ethical principles to achieve the highest standards of ethical behavior. His focus on respectful collaboration among psychologists promotes honesty through nonjudgmental conversations. His Ethics Educators Workshop and other continuing education programs have brought together psychology practitioners and faculty to focus deeply on ethics and resulted in the development of the APA Ethics Educators Award.

From the Biography section

Ethics education became especially important in Pennsylvania when the Pennsylvania State Board of Psychology mandated ethics as part of its continuing education requirement. But even before that, members of the PPA Ethics Committee and Board of Directors saw ethics education as a vehicle to help psychologists improve the quality of their services to their patients. Also, to the extent that ethics education can help promote good decision-making, it could also reduce the emotional burden that professional psychologists often feel when faced with difficult ethical situations. Often the continuing education programs were interactive, with the secondary goals of helping psychologists build contacts with each other and giving the presenters an opportunity to promote authentic and compassion-driven approaches to teaching ethics. Yes, Sam and the other PPA ethics educators, such as the PPA attorney Rachael Baturin, also taught the laws, ethics codes, and risk management strategies. But these were only one component of PPA’s ethics education program. More important was the development of a cadre of psychologists/ethicists who taught most of these continuing education programs.


Thursday, June 3, 2021

Scientific panel loosens ’14-day rule’ limiting how long human embryos can be grown in the lab

Andrew Joseph
STATnews.com
Originally posted 26 May 2021

An influential scientific panel cracked open the door on Wednesday to growing human embryos in the lab for longer periods of time than currently allowed, a step that could enable the plumbing of developmental mysteries but that also raises thorny questions about whether research that can be pursued should be.

For decades, scientists around the world have followed the “14-day rule,” which stipulates that they should let human embryos develop in the lab for only up to two weeks after fertilization. The rule — which some countries (though not the United States) have codified into law — was meant to allow researchers to conduct inquiries into the early days of embryonic development, but not without limits. And for years, researchers didn’t push that boundary, not just for legal and ethical reasons, but for technical ones as well: They couldn’t keep the embryos growing in lab dishes that long.

More recently, however, scientists have refined their cell-culture techniques, finding ways to sustain embryos up to that deadline. Those advances — along with other leaps in the world of stem cell research, with scientists now transmogrifying cells into blobs that resemble early embryos or injecting human cells into animals — have complicated ethical debates about how far biomedical research should go in its quest for knowledge and potential treatments.

Now, in the latest updates to its guidelines, the International Society for Stem Cell Research has revised its view on studies that would take human embryos beyond 14 days, moving such experiments from the “absolutely not” category to a “maybe” — but only if lots of conditions are first met.

“We’ve relaxed the guidelines in that respect, we haven’t abandoned them,” developmental biologist Robin Lovell-Badge of the Francis Crick Institute, who chaired the ISSCR’s guidelines task force, said at a press briefing.

Wednesday, December 30, 2020

Google AI researcher's exit sparks ethics, bias concerns

Matt O'Brien
AP Tech Writer
Originally published 4 DEC 20

Here is an excerpt:

Gebru on Tuesday vented her frustrations about the process to an internal diversity-and-inclusion email group at Google, with the subject line: “Silencing Marginalized Voices in Every Way Possible." Gebru said on Twitter that's the email that got her fired.

Dean, in an email to employees, said the company accepted “her decision to resign from Google” because she told managers she'd leave if her demands about the study were not met.

"Ousting Timnit for having the audacity to demand research integrity severely undermines Google’s credibility for supporting rigorous research on AI ethics and algorithmic auditing," said Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology who co-authored the 2018 facial recognition study with Gebru.

“She deserves more than Google knew how to give, and now she is an all-star free agent who will continue to transform the tech industry,” Buolamwini said in an email Friday.

How Google will handle its AI ethics initiative and the internal dissent sparked by Gebru's exit is one of a number of problems facing the company heading into the new year.

At the same time she was on her way out, the National Labor Relations Board on Wednesday cast another spotlight on Google's workplace. In a complaint, the NLRB accused the company of spying on employees during a 2019 effort to organize a union before the company fired two activist workers for engaging in activities allowed under U.S. law. Google has denied the allegations in the case, which is scheduled for an April hearing.

Monday, December 7, 2020

Artificial Intelligence and Legal Disruption: A New Model for Analysis

Hin-Yan Liu, et al. (2020)
Law, Innovation and Technology, 
12:2, 205-258
DOI: 10.1080/17579961.2020.1815402

Abstract

Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) to situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) to set out a detailed model for understanding the legal disruption precipitated by AI, examining both pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’, as well as the Legal Development, Displacement or Destruction that can ensue. The article proposes that this model of legal disruption can be broadly generalisable to understanding the legal effects and challenges of other emerging technologies.

From Concluding Thoughts

As artificial intelligence is often claimed to be an exponential technology, and law progresses incrementally in a linear fashion, there is bound to be a point at which the exponential take off crosses the straight line if these assumptions hold. Everything to the left of this intersection, where AI is below the line, is where hype about the technology does not quite live up to expectations and is generally disappointing in terms of functioning and capability. To the right of this intersection, however, the previously dull technology takes on a surprising and startling tone as it rapidly outpaces both predictions about its capacities and collective abilities to contextualise, accommodate or situate it. It is widely claimed that we are now nearing this intersection. If these claims hold up, the law is one of the institutions that stands to be shocked by the rapid progression and incorporation of AI into society. If this is right, then it is important to start projecting forward in an attempt to minimise the gap between exponential technologies and linear expectations. The legal disruption framework we have presented does exactly this. Furthermore, even if these claims turn out to be misguided, thinking through such transformations sheds different light upon the legal enterprise which hopes to illuminate the entire law.
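The crossing point the authors describe can be made concrete with toy numbers: if capability grows exponentially while expectations (and legal adaptation) grow linearly, there is a first moment at which the exponential curve overtakes the line. The growth parameters below are illustrative assumptions only, not anything taken from the article:

```python
# Toy illustration (invented parameters) of the crossing point between an
# exponentially growing capability curve and a linearly growing one.
import math

def capability(t, a=1.0, r=0.5):      # exponential: a * e^(r t)
    return a * math.exp(r * t)

def expectation(t, b=5.0, c=2.0):     # linear: b + c t
    return b + c * t

t = 0.0
while capability(t) <= expectation(t):   # scan forward until the curves cross
    t += 0.01

print(f"Exponential overtakes linear near t = {t:.2f}")
# Left of this point the technology underwhelms; right of it, it outpaces
# predictions -- the "disruptive moment" the framework is meant to capture.
```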

Thursday, August 6, 2020

Five tips for transitioning your practice to telehealth

Rebecca Clay
American Psychological Association
Originally posted 19 June 20

When COVID-19 forced Boston private practitioner Luana Bessa, PhD, to take her practice Bela Luz Health online in March, she was worried about whether she could still have deep, meaningful connections with patients through a screen.

To her surprise, Bessa’s intimacy with patients increased instead of diminished. While she is still mindful of maintaining the therapeutic “frame,” it can be easier for everyday life to intrude on that frame while working virtually. But that’s OK, says Bessa. “I’ve had clients tell me, ‘It makes you more human when I see your cat jump on your lap,’” she laughs. “It has really enriched my relationships with some clients.”

Bessa and others recommend several ways to ensure that the transition to telehealth is a positive experience for both you and your patients.

Protect your practice’s financial health

Make sure your practice will be viable so that you continue serving patients over the long haul. If you have an office sitting idle, for example, see if your landlord will renegotiate or suspend lease payments, suggests Kimberly Y. Campbell, PhD, of Campbell Psychological Services, LLC, in Silver Spring, Maryland. Also renegotiate agreements with other vendors, such as parking lot owners, cleaning services, and the like.

And since patients can’t just hand you or your receptionist a credit card, you’ll need to set up an alternate payment system. Campbell turned to a credit card processing company called Clover. Other practitioners use the payment system that’s part of their electronic health record system. Natasha Holmes, PsyD, uses SimplePractice to handle payment for her Boston practice And Still We Rise, LLC. Although there’s a fee for processing payments, an integrated program makes payment as easy as clicking a button after a patient’s session and watching the payment show up at your bank the next day.

The info is here.

Thursday, April 9, 2020

Banned Devices; Proposal To Ban Electrical Stimulation Devices Used To Treat Self-Injurious or Aggressive Behavior

FDA Press Release
Posted March 5, 2020

Here is an excerpt:

After careful consideration, the U.S. Food and Drug Administration today published a final rule to ban electrical stimulation devices (ESDs) used for self-injurious or aggressive behavior because they present an unreasonable and substantial risk of illness or injury that cannot be corrected or eliminated through new or updated device labeling.

“Since ESDs were first marketed more than 20 years ago, we have gained a better understanding of the danger these devices present to public health,” said William Maisel, M.D., M.P.H., director of the Office of Product Evaluation and Quality in the FDA’s Center for Devices and Radiological Health. “Through advancements in medical science, there are now more treatment options available to reduce or stop self-injurious or aggressive behavior, thus avoiding the substantial risk ESDs present.”

ESDs administer electrical shocks through electrodes attached to the skin of individuals to immediately interrupt self-injurious or aggressive behavior or attempt to condition the individuals to stop engaging in such behavior. Evidence indicates a number of significant psychological and physical risks are associated with the use of these devices, including worsening of underlying symptoms, depression, anxiety, posttraumatic stress disorder, pain, burns and tissue damage. In addition, many people who are exposed to these devices have intellectual or developmental disabilities that make it difficult to communicate their pain. Evidence of the device’s effectiveness is weak and evidence supporting the benefit-risk profiles of alternatives is strong. As the risks presented by ESDs meet the agency’s definition of unreasonable and substantial and cannot be corrected or eliminated through new or updated labeling, banning the product is necessary to protect public health.

The act of banning a device is rare and the circumstances under which the agency can take this action are stringent, but the FDA has the authority to take this action when necessary to protect the health of the public. The FDA has only banned two other medical devices since gaining the authority to do so.

This final rule issued today follows a 2016 proposed rule to ban ESDs from the marketplace and takes effect 30 days after publication in the Federal Register. The FDA understands that a gradual transition period may be needed for a subgroup of individuals currently exposed to these devices, to allow time for them to transition to another treatment, so the agency is establishing two compliance dates. For devices in use on specific individuals as of the date of publication and subject to a physician-directed transition plan, compliance is required 180 days after publication of the final rule in the Federal Register. For all other devices, compliance is required 30 days after publication of the final rule in the Federal Register.

The FDA received more than 1,500 comments on the proposed rule, as well as approximately 300 comments submitted to the April 2014 FDA advisory panel meeting docket, which the FDA has associated with this rulemaking action. Comments were received from a variety of stakeholders, including parents of individuals with intellectual and developmental disabilities, state agencies and their sister public-private organizations, the affected manufacturer and residential facility, some of the facility’s employees, and parents of individual residents. State and federal legislators also expressed interest, as did state and national advocacy groups. The overwhelming majority of comments supported this ban.

The proposed rule is here.

Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.

The info is here.

Thursday, February 13, 2020

Groundbreaking Court Ruling Against Insurer Offers Hope in 2020

Katherine G. Kennedy
Psychiatric News
Originally posted 9 Jan 20

Here is an excerpt:

In his 106-page opinion, Judge Spero criticized UBH for using flawed, internally developed, and overly restrictive medical necessity guidelines that favored protecting the financial interests of UBH over medical treatment of its members.

“By a preponderance of the evidence,” Judge Spero wrote, “in each version of the Guidelines at issue in this case the defect is pervasive and results in a significantly narrower scope of coverage than is consistent with generally accepted standards of care.” His full decision can be accessed here.

As of this writing, we are still awaiting Judge Spero’s remedies order (a court-ordered directive that requires specific actions, such as reparations) against UBH. Following that determination, we will know what UBH will be required to do to compensate class members who suffered damages (that is, protracted illness or death) or their beneficiaries as a result of UBH’s denial of their coverage claims.

But waiting for the remedies order does not prevent us from looking for answers to critical questions like these:

  • Will Wit. v. UBH impact the insurance industry enough to catalyze widespread reforms in how utilization review guidelines are determined and used?
  • How will the 50 offices of state insurance commissioners respond? Will these regulators mandate the use of clinical coverage guidelines that reflect the findings in Wit. v. UBH? Will they tighten their oversight with updated regulations and enforcement actions?


The info is here.

FDA and NIH let clinical trial sponsors keep results secret and break the law

Charles Piller
sciencemag.org
Originally posted 13 Jan 20

For 20 years, the U.S. government has urged companies, universities, and other institutions that conduct clinical trials to record their results in a federal database, so doctors and patients can see whether new treatments are safe and effective. Few trial sponsors have consistently done so, even after a 2007 law made posting mandatory for many trials registered in the database. In 2017, the National Institutes of Health (NIH) and the Food and Drug Administration (FDA) tried again, enacting a long-awaited “final rule” to clarify the law’s expectations and penalties for failing to disclose trial results. The rule took full effect 2 years ago, on 18 January 2018, giving trial sponsors ample time to comply. But a Science investigation shows that many still ignore the requirement, while federal officials do little or nothing to enforce the law.

(cut)

Contacted for comment, none of the institutions disputed the findings of this investigation. In all 4768 trials Science checked, sponsors violated the reporting law more than 55% of the time. And in hundreds of cases where the sponsors got credit for reporting trial results, they have yet to be publicly posted because of quality lapses flagged by ClinicalTrials.gov staff.
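The underlying check is straightforward once the trial records are in hand: for each covered trial, results are generally due within one year of the primary completion date, and a trial counts against the sponsor if results were posted after that deadline or never posted. A hedged sketch of that tally, assuming a hypothetical CSV export with made-up column names (this is not the actual ClinicalTrials.gov format or the investigation's code):

```python
# Hedged sketch of the compliance tally described above. The file name and
# column names are hypothetical, not the actual ClinicalTrials.gov export.
import csv
from datetime import date, timedelta

DEADLINE = timedelta(days=365)   # results generally due within one year

def parse(value):
    return date.fromisoformat(value) if value else None

late_or_missing = total = 0
with open("trials.csv", newline="") as f:              # hypothetical export
    for row in csv.DictReader(f):
        completed = parse(row["primary_completion_date"])
        posted = parse(row["results_posted_date"])      # empty if never posted
        if completed is None:
            continue
        total += 1
        if posted is None or posted > completed + DEADLINE:
            late_or_missing += 1

if total:
    print(f"{late_or_missing}/{total} trials "
          f"({100 * late_or_missing / total:.0f}%) reported late or not at all")
```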

The info is here.

Wednesday, February 12, 2020

Judge holds Pa. psychologist in contempt, calls her defiance ‘extraordinary’ in trucker’s case

John Beague
PennLive.com
Originally posted 18 Jan 20

A federal judge has held a Sunbury psychologist in contempt and sanctioned her $8,288 for failing to comply with a subpoena and a court order in a civil case stemming from a 2016 traffic crash.

U.S. Middle District Judge Matthew W. Brann, in an opinion issued Friday, said he has never encountered the “obstinance” displayed by Donna Pinter of Psychological Services Clinic Inc.

He called Pinter’s defiance “extraordinary” and pointed out that she never objected to the validity of the subpoena or court order and did not provide an adequate excuse.

“She forced the parties and this court to waste significant and limited resources litigating these motions and convening two hearings for what should have been a routine document production,” he wrote.

The defendants sought information about Kenneth Kerlin of Middleburg from Pinter because she has treated him for years and in his suit he claims the crash, which involved two tractor-trailers, has caused him mental suffering.

The info is here.

Monday, January 20, 2020

Chinese court sentences 'gene-editing' scientist to three years in prison

Huizhong Wu and Lusha Zhan
kfgo.com
Originally posted 29 Dec 19

A Chinese court sentenced the scientist who created the world's first "gene-edited" babies to three years in prison on Monday for illegally practising medicine and violating research regulations, the official Xinhua news agency said.

In November 2018, He Jiankui, then an associate professor at Southern University of Science and Technology in Shenzhen, said he had used gene-editing technology known as CRISPR-Cas9 to change the genes of twin girls to protect them from getting infected with the AIDS virus in the future.

The backlash in China and globally about the ethics of his research and work was fast and widespread.

Xinhua said He and his collaborators forged ethical review materials and recruited men with AIDS who were part of a couple to carry out the gene-editing. His experiments, it said, resulted in two women giving birth to three gene-edited babies.

The court also handed lesser sentences to Zhang Renli and Qin Jinzhou, who worked at two unnamed medical institutions, for having conspired with He in his work.

The info is here.

Sunday, January 5, 2020

The Big Change Coming to Just About Every Website on New Year’s Day

Aaron Mak
Slate.com
Originally published 30 Dec 19

Starting New Year’s Day, you may notice a small but momentous change to the websites you visit: a button or link, probably at the bottom of the page, reading “Do Not Sell My Personal Information.”

The change is one of many going into effect Jan. 1, 2020, thanks to a sweeping new data privacy law known as the California Consumer Privacy Act. The California law essentially empowers consumers to access the personal data that companies have collected on them, to demand that it be deleted, and to prevent it from being sold to third parties. Since it’s a lot more work to create a separate infrastructure just for California residents to opt out of the data collection industry, these requirements will transform the internet for everyone.

Ahead of the January deadline, tech companies are scrambling to update their privacy policies and figure out how to comply with the complex requirements. The CCPA will only apply to businesses that earn more than $25 million in gross revenue, that collect data on more than 50,000 people, or for which selling consumer data accounts for more than 50 percent of revenue. The companies that meet these qualifications are expected to collectively spend a total of $55 billion upfront to meet the new standards, in addition to $16 billion over the next decade. Major tech firms have already added a number of user features over the past few months in preparation. In early December, Twitter rolled out a privacy center where users can learn more about the company’s approach to the CCPA and navigate to a dashboard for customizing the types of info that the platform is allowed to use for ad targeting. Google has also created a protocol that blocks websites from transmitting data to the company, which users can take advantage of by downloading an opt-out add-on. Facebook, meanwhile, is arguing that it does not need to change anything because it does not technically “sell” personal information. Companies must at least set up a webpage and a toll-free phone number for fielding data requests.
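The three applicability thresholds in the paragraph above amount to a simple "any one of" test. A minimal sketch of that test (the thresholds are taken from the article; the function and parameter names are illustrative, not any official API):

```python
# Minimal sketch of the CCPA applicability test described above.
# Thresholds come from the article; names are illustrative only.

def ccpa_applies(gross_revenue_usd: float,
                 consumers_with_data_collected: int,
                 share_of_revenue_from_selling_data: float) -> bool:
    """A business falls within the law's scope if it meets ANY one criterion."""
    return (
        gross_revenue_usd > 25_000_000
        or consumers_with_data_collected > 50_000
        or share_of_revenue_from_selling_data > 0.5
    )

# Example: a small firm whose main business is reselling consumer data
print(ccpa_applies(3_000_000, 12_000, 0.8))   # True -- meets the third criterion
```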

The info is here.

Tuesday, December 24, 2019

DNA genealogical databases are a gold mine for police, but with few rules and little transparency

Paige St. John
The LA Times
Originally posted 24 Nov 19

Here is an excerpt:

But law enforcement has plunged into this new world with little to no rules or oversight, intense secrecy and by forming unusual alliances with private companies that collect the DNA, often from people interested not in helping close cold cases but learning their ethnic origins and ancestry.

A Times investigation found:
  • There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
  • When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
  • California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.
There are growing concerns that the race to use genealogical databases will have serious consequences, from its inherent erosion of privacy to the implications of broadened police power.

In California, an innocent twin was thrown in jail. In Georgia, a mother was deceived into incriminating her son. In Texas, police met search guidelines by classifying a case as sexual assault but after an arrest only filed charges of burglary. And in the county that started the DNA race with the arrest of the Golden State killer suspect, prosecutors have persuaded a judge to treat unsuspecting genetic contributors as “confidential informants” and seal searches so consumers are not scared away from adding their own DNA to the forensic stockpile.

Tuesday, December 17, 2019

We Might Soon Build AI Who Deserve Rights

Eric Schwitzgebel
Splintered Mind Blog
From a Talk at Notre Dame
Originally posted 17 Nov 19

Abstract

Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.

(cut)

But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

The blog post is here.