Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Transparency.

Monday, October 25, 2021

Federal Reserve tightens ethics rules to ban active trading by senior officials

Brian Cheung
Yahoo Business News
Originally posted 21 Oct 21

The Federal Reserve on Thursday said it will tighten its ethics rules concerning personal finances among its most senior officials, the latest development in a trading scandal that has led to the resignation of two policymakers.

The central bank said it has introduced a “broad set of new rules” that restricts any active trading and prohibits the purchase of any individual securities (e.g., stocks, bonds, or derivatives). The new restrictions effectively only allow purchases of diversified investment vehicles like mutual funds.

Policymakers who want to buy or sell will be required to provide 45 days’ advance notice and obtain prior approval. They will also be required to hold those investments for at least one year, with no purchases or sales allowed during periods of “heightened financial market stress.”

Fed officials are still working on the details of what would define that level of stress, but said the market conditions of spring 2020 would have qualified.

The new rules will also increase the frequency of public disclosures from the reserve bank presidents, requiring monthly filings instead of the status quo of annual filings. Those at the Federal Reserve Board in Washington already were required to make monthly disclosures.

The restrictions apply to policymakers and senior staff at the Fed’s headquarters in Washington, as well as its 12 Federal Reserve Bank regional outposts. The new rules will be implemented “over the coming months.”

Fed officials said the changes will likely require divestment of any existing holdings that do not meet the updated standards.

Tuesday, October 12, 2021

Demand five precepts to aid social-media watchdogs

Ethan Zuckerman
Nature 597, 9 (2021)
Originally published 31 Aug 21

Here is an excerpt:

I propose the following. First, give researchers access to the same targeting tools that platforms offer to advertisers and commercial partners. Second, for publicly viewable content, allow researchers to combine and share data sets by supplying keys to application programming interfaces. Third, explicitly allow users to donate data about their online behaviour for research, and make code used for such studies publicly reviewable for security flaws. Fourth, create safe-haven protections that recognize the public interest. Fifth, mandate regular audits of algorithms that moderate content and serve ads.

In the United States, the FTC could demand this access on behalf of consumers: it has broad powers to compel the release of data. In Europe, making such demands should be even more straightforward. The European Data Governance Act, proposed in November 2020, advances the concept of “data altruism” that allows users to donate their data, and the broader Digital Services Act includes a potential framework to implement protections for research in the public interest.

Technology companies argue that they must restrict data access because of the potential for harm, which also conveniently insulates them from criticism and scrutiny. They cite misuse of data, such as in the Cambridge Analytica scandal (which came to light in 2018 and prompted the FTC orders), in which an academic researcher took data from tens of millions of Facebook users collected through online ‘personality tests’ and gave it to a UK political consultancy that worked on behalf of Donald Trump and the Brexit campaign. Another example of abuse of data is the case of Clearview AI, which used scraping to produce a huge photographic database to allow federal and state law-enforcement agencies to identify individuals.

These incidents have led tech companies to design systems to prevent misuse — but such systems also prevent research necessary for oversight and scrutiny. To ensure that platforms act fairly and benefit society, there must be ways to protect user data and allow independent oversight.

Sunday, July 25, 2021

Should we be concerned that the decisions of AIs are inscrutable?

John Zerilli
Psyche.co
Originally published 14 June 21

Here is an excerpt:

However, there’s a danger of carrying reliabilist thinking too far. Compare a simple digital calculator with an instrument designed to assess the risk that someone convicted of a crime will fall back into criminal behaviour (‘recidivism risk’ tools are being used all over the United States right now to help officials determine bail, sentencing and parole outcomes). The calculator’s outputs are so dependable that an explanation of them seems superfluous – even for the first-time homebuyer whose mortgage repayments are determined by it. One might take issue with other aspects of the process – the fairness of the loan terms, the intrusiveness of the credit rating agency – but you wouldn’t ordinarily question the engineering of the calculator itself.

That’s utterly unlike the recidivism risk tool. When it labels a prisoner as ‘high risk’, neither the prisoner nor the parole board can be truly satisfied until they have some grasp of the factors that led to it, and the relative weights of each factor. Why? Because the assessment is such that any answer will necessarily be imprecise. It involves the calculation of probabilities on the basis of limited and potentially poor-quality information whose very selection is value-laden.
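
Zerilli’s point about factors and weights is easier to see with a toy model. Below is a minimal sketch of the kind of weighted scoring such tools rely on; the features, weights, and numbers are invented for illustration and do not reflect any deployed system. The “explanation” the prisoner and the parole board want is precisely the factor-by-factor breakdown the sketch prints.

```python
import math

# Invented, illustrative weights -- not taken from any real recidivism tool.
WEIGHTS = {"prior_offenses": 0.45, "age_at_release": -0.05, "years_employed": -0.30}
BIAS = -0.5

def risk_score(features):
    """Logistic model: each factor contributes weight * value to the log-odds."""
    log_odds = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-log_odds))  # probability-style score in (0, 1)

def explain(features):
    """The 'relative weight of each factor' is its contribution to the log-odds."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

person = {"prior_offenses": 3, "age_at_release": 28, "years_employed": 1}
print(round(risk_score(person), 2))  # 0.3
print(explain(person))               # which factors drove the score, and how hard
```

Even in this fully transparent toy, the output hinges on which factors were selected and how they were weighted — exactly the value-laden choices Zerilli flags.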

But what if systems such as the recidivism tool were in fact more like the calculator? For argument’s sake, imagine a recidivism risk-assessment tool that was basically infallible, a kind of Casio-cum-Oracle-of-Delphi. Would we still expect it to ‘show its working’?

This requires us to think more deeply about what it means for an automated decision system to be ‘reliable’. It’s natural to think that such a system would make the ‘right’ recommendations, most of the time. But what if there were no such thing as a right recommendation? What if all we could hope for were only a right way of arriving at a recommendation – a right way of approaching a given set of circumstances? This is a familiar situation in law, politics and ethics. Here, competing values and ethical frameworks often produce very different conclusions about the proper course of action. There are rarely unambiguously correct outcomes; instead, there are only right ways of justifying them. This makes talk of ‘reliability’ suspect. For many of the most morally consequential and controversial applications of ML, to know that an automated system works properly just is to know and be satisfied with its reasons for deciding.

Tuesday, April 20, 2021

State Medical Board Recommendations for Stronger Approaches to Sexual Misconduct by Physicians

King PA, Chaudhry HJ, Staz ML. 
JAMA. 
Published online March 29, 2021. 
doi:10.1001/jama.2020.25775

The Federation of State Medical Boards (FSMB) recently engaged with its member boards and investigators, trauma experts, physicians, resident physicians, medical students, survivors of physician abuse, and the public to critically review practices related to the handling of reports of sexual misconduct (including harassment and abuse) toward patients by physicians. The review was undertaken as part of a core responsibility of boards to protect the public and motivated by concerning reports of unacceptable behavior by physicians. Specific recommendations from the review were adopted by the FSMB’s House of Delegates on May 2, 2020, and are highlighted in this Viewpoint.

Sexual misconduct by physicians exists along a spectrum of severity that may begin with “grooming” behaviors and end with sexual assault. Behaviors at any point on this spectrum should be of concern because unreported minor violations (including sexually suggestive comments or inappropriate physical contact) may lead to greater misconduct. In 2018, the National Academies of Sciences, Engineering, and Medicine identified sexual harassment as an important problem in scientific communities and medicine, finding that more than 50% of women faculty and staff and 20% to 50% of women students reportedly have encountered or experienced sexually harassing conduct in academia. Data from state medical boards indicate that 251 disciplinary actions were taken against physicians in 2019 for “sexual misconduct” violations. The actual number may be higher because boards often use a variety of terms, including unprofessional conduct, physician-patient boundary issues, or moral unfitness, to describe such actions. The FSMB has begun a project to encourage boards to align their categorization of all disciplinary actions to better understand the scope of misconduct.

Saturday, April 10, 2021

Ethical and Professionalism Implications of Physician Employment and Health Care Business Practices

DeCamp, M., & Sulmasy, L. S.
Annals of Internal Medicine
Position Paper: 16 March 21

Abstract

The environment in which physicians practice and patients receive care continues to change. Increasing employment of physicians, changing practice models, new regulatory requirements, and market dynamics all affect medical practice; some changes may also place greater emphasis on the business of medicine. Fundamental ethical principles and professional values about the patient–physician relationship, the primacy of patient welfare over self-interest, and the role of medicine as a moral community and learned profession need to be applied to the changing environment, and physicians must consider the effect the practice environment has on their ethical and professional responsibilities. Recognizing that all health care delivery arrangements come with advantages, disadvantages, and salient questions for ethics and professionalism, this American College of Physicians policy paper examines the ethical implications of issues that are particularly relevant today, including incentives in the shift to value-based care, physician contract clauses that affect care, private equity ownership, clinical priority setting, and physician leadership. Physicians should take the lead in helping to ensure that relationships and practices are structured to explicitly recognize and support the commitments of the physician and the profession of medicine to patients and patient care.

Here is an excerpt:

Employment of physicians likewise has advantages, such as financial stability, practice management assistance, and opportunities for collaboration and continuing education, but there is also the potential for dual loyalty when physicians try to be accountable to both their patients and their employers. Dual loyalty is not new; for example, mandatory reporting of communicable diseases may place societal interests in preventing disease at odds with patient privacy interests. However, the ethics of everyday business models and practices in medicine has been less explored.

Trust is the foundation of the patient–physician relationship. Trust, honesty, fairness, and respect among health care stakeholders support the delivery of high-value, patient-centered care. Trust depends on expertise, competence, honesty, transparency, and intentions or goodwill. Institutions, systems, payers, purchasers, clinicians, and patients should recognize and support “the intimacy and importance of patient–clinician relationships” and the ethical duties of physicians, including the primary obligation to act in the patient's best interests (beneficence).

Business ethics does not necessarily conflict with the ethos of medicine. Today, physician leadership of health care organizations may be vital for delivering high-quality care and building trust, including in health care institutions. Truly trustworthy institutions may be more successful (in patient care and financially) in the long term.

Blanket statements about business practices and contractual provisions are unhelpful; most have both potential positives and potential negatives. Nevertheless, it is important to raise awareness of business practices relevant to ethics and professionalism in medical practice and promote the physician's ability to advocate for arrangements that align with medicine's core values. In this paper, the American College of Physicians (ACP) highlights 6 contemporary issues and offers ethical guidance for physicians. Although the observed trends toward physician employment and organizational consolidation merit reflection, certain issues may also resonate with independent practices and in other practice settings.

Saturday, December 26, 2020

Baby God: how DNA testing uncovered a shocking web of fertility fraud

Adrian Horton
The Guardian
Originally published 2 Dec 20

Here are two excerpts:

The database unmasked, with detached clarity, a dark secret hidden in plain sight for decades: the physician once named Nevada’s doctor of the year, who died in 2006 at age 94, had impregnated numerous patients with his own sperm, unbeknownst to the women or their families. The decades-long fertility fraud scheme, unspooled in the HBO documentary Baby God, left a swath of families – 26 children as of this writing, spanning 40 years of the doctor’s treatments – shocked at long-obscured medical betrayal, unmoored from assumptions of family history and stumbling over the most essential questions of identity. Who are you, when half your DNA is not what you thought?

(cut)

That reality – a once unknowable crime now made plainly knowable – has now come to pass, and the film features interviews with several of Fortier’s previously unknown children, each grappling with and tracing their way into a new web of half-siblings, questions of lineage and inheritance, and reframing of family history. Babst, who started as a cop at 19, dove into her own investigation, sourcing records on Dr Fortier that eventually revealed allegations of sexual abuse and molestation against his own stepchildren.

Brad Gulko, a human genomics scientist in San Francisco who bears a striking resemblance to the young Fortier, initially approached the revelation from the clinical perspective of biological motivations for procreation. “I feel like Dr Fortier found a way to justify in his own mind doing what he wanted to do that didn’t violate his ethical norms too much, even if he pushed them really hard,” he says in the film. “I’m still struggling with that. I don’t know where I’ll end up.”

The film quickly morphed, according to Olson, from an investigation of the Fortier case and his potential motivations to the larger, unresolvable questions of identity, nature versus nurture. “At first it was like ‘let’s get all the facts, we’re going to figure it out, what are his motivations, it will be super clear,’” said Olson. 

Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.
For more information on ethics in AI, download the report.

Wednesday, December 2, 2020

Do antidepressants work?

Jacob Stegenga
aeon.co
Originally published 5 Mar 19

Here is an excerpt:

To see this, consider an analogy. Imagine we are testing a drug for weight loss. For every 100 subjects in the drug group, three subjects lose one kilogramme and 97 subjects gain five kilos. For every 100 subjects in the placebo group, two lose four kilos and 98 subjects do not gain or lose any weight. How effective is the drug for weight loss? The odds ratio of weight loss is 1.5, and yet this number tells us nothing about how much weight people on average gain or lose – indeed, the number entirely conceals the real effects of the drug. Though this is an extreme analogy, it shows how cautious we must be when interpreting this celebrated meta-analysis. Unfortunately, however, in response to this work, many leading psychiatrists celebrated, and news headlines misleadingly claimed ‘The drugs do work.’ On the winding route from the hard work of these researchers to the news reports where you were most likely to hear about that study, a simple number became a lie.
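
Stegenga’s arithmetic checks out, and running it makes the point vivid. Here is a minimal sketch using only the numbers from his analogy:

```python
# Numbers taken directly from Stegenga's analogy: 100 subjects per arm.
drug_lose, drug_gain = 3, 97            # 3 lose 1 kg; 97 gain 5 kg
placebo_lose, placebo_same = 2, 98      # 2 lose 4 kg; 98 stay the same

# Odds of "any weight loss" in each arm.
odds_drug = drug_lose / drug_gain              # 3/97
odds_placebo = placebo_lose / placebo_same     # 2/98
print(round(odds_drug / odds_placebo, 2))      # 1.52 -- the celebrated "1.5"

# Mean weight change per arm -- the quantity the odds ratio conceals.
mean_drug = (drug_lose * -1 + drug_gain * 5) / 100           # +4.82 kg on average
mean_placebo = (placebo_lose * -4 + placebo_same * 0) / 100  # -0.08 kg on average
print(mean_drug, mean_placebo)
```

The odds ratio favours the drug, yet the drug group gains nearly five kilos on average while the placebo group is essentially unchanged.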

When analysed properly, the best evidence indicates that antidepressants are not clinically beneficial. The meta-analyses worth considering, such as the one above, involve attempts to gather evidence from all trials on antidepressants, including those that remain unpublished. Of course it is impossible to know that a meta-analysis includes all unpublished evidence, because publication bias is characterised by deception, either inadvertent or wilful. Nevertheless, these meta-analyses are serious attempts to address publication bias by finding as much data as possible. What, then, do they show?

In meta-analyses that include as much of the evidence as possible, the severity of depression among subjects who receive antidepressants goes down by approximately two points compared with subjects who receive a placebo. Two points. Remember, a depression score can go down by double that amount simply if a subject stops fidgeting. This result, found by both champions and critics of antidepressants, has been replicated year after year for more than a decade (see, for example, the meta-analyses led by Irving Kirsch in 2008, by J C Fournier in 2010, and by Janus Christian Jakobsen in 2017). The phenomena of blind-breaking, the placebo effect and unresolved publication bias could easily account for this trivial two-point reduction in severity scores.

Friday, November 27, 2020

Where Are The Self-Correcting Mechanisms In Science?

Vazire, S., & Holcombe, A. O. 
(2020, August 13).

Abstract

It is often said that science is self-correcting, but the replication crisis suggests that, at least in some fields, self-correction mechanisms have fallen short of what we might hope for. How can we know whether a particular scientific field has effective self-correction mechanisms, that is, whether its findings are credible? The usual processes that supposedly provide mechanisms for scientific self-correction – mainly peer review and disciplinary committees – have been inadequate. We argue for more verifiable indicators of a field’s commitment to self-correction. These include transparency, which is already a target of many reform efforts, and critical appraisal, which has received less attention. Only by obtaining Measurements of Observable Self-Correction (MOSCs) can we begin to evaluate the claim that “science is self-correcting.” We expect the validity of this claim to vary across fields and subfields, and suggest that some fields, such as psychology and biomedicine, fall far short of an appropriate level of transparency and, especially, critical appraisal. Fields without robust, verifiable mechanisms for transparency and critical appraisal cannot reasonably be said to be self-correcting, and thus do not warrant the credibility often imputed to science as a whole.

Tuesday, September 22, 2020

How to be an ethical scientist

W. A. Cunningham, J. J. Van Bavel,
& L. H. Somerville
Science Magazine
Originally posted 5 August 20

True discovery takes time, has many stops and starts, and is rarely neat and tidy. For example, news that the Higgs boson was finally observed in 2012 came 48 years after its original proposal by Peter Higgs. The slow pace of science helps ensure that research is done correctly, but it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings. Even Higgs recognized this problem with the modern academic system: “Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

It’s easy to forget about the “long view” when there is constant pressure to produce. So, in this column, we’re going to focus on the type of long-term thinking that advances science. For example, are you going to cut corners to get ahead, or take a slow, methodical approach? What will you do if your experiment doesn’t turn out as expected? Without reflecting on these deeper issues, we can get sucked into the daily goals necessary for success while failing to see the long-term implications of our actions.

Thinking carefully about these issues will not only impact your own career outcomes, but it can also impact others. Your own decisions and actions affect those around you, including your labmates, your collaborators, and your academic advisers. Our goal is to help you avoid pitfalls and find an approach that will allow you to succeed without impairing the broader goals of science.

Be open to being wrong

Science often advances through accidental (but replicable) findings. The logic is simple: If studies always came out exactly as you anticipated, then nothing new would ever be learned. Our previous theories of the world would be just as good as they ever were. This is why scientific discovery is often most profound when you stumble on something entirely new. Isaac Asimov put it best when he said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny ... .’”

The info is here.

Saturday, September 12, 2020

Psychotherapy, placebos, and informed consent

Leder G
Journal of Medical Ethics 
Published Online First: 20 August 2020.
doi: 10.1136/medethics-2020-106453

Abstract

Several authors have recently argued that psychotherapy, as it is commonly practiced, is deceptive and undermines patients’ ability to give informed consent to treatment. This ‘deception’ claim is based on the findings that some, and possibly most, of the ameliorative effects in psychotherapeutic interventions are mediated by therapeutic common factors shared by successful treatments (eg, expectancy effects and therapist effects), rather than because of theory-specific techniques. These findings have led to claims that psychotherapy is, at least partly, likely a placebo, and that practitioners of psychotherapy have a duty to ‘go open’ to patients about the role of common factors in therapy (even if this risks negatively affecting the efficacy of treatment); to not ‘go open’ is supposed to unjustly restrict patients’ autonomy. This paper makes two related arguments against the ‘go open’ claim. (1) While therapies ought to provide patients with sufficient information to make informed treatment decisions, informed consent does not require that practitioners ‘go open’ about therapeutic common factors in psychotherapy, and (2) clarity about the mechanisms of change in psychotherapy shows us that the common-factors findings are consistent with, rather than undermining of, the truth of many theory-specific forms of psychotherapy; psychotherapy, as it is commonly practiced, is not deceptive and is not a placebo. The call to ‘go open’ should be resisted and may have serious detrimental effects on patients via the dissemination of a false view about how therapy works.

Conclusion

The ‘go open’ argument is based on a mistaken view about the mechanisms of change in psychotherapy and threatens to harm patients by undermining their ability to make informed treatment decisions. This paper has argued that the prima facie ethical problem raised by the ‘go open’ argument is diffused if we clear up a conceptual confusion about what, exactly, we should be going open about. Therapists should be open with patients about the differing theories of the mechanisms of change in psychotherapy; this can, but need not, involve discussing information about the therapeutic common factors.

The article is here.

Note from Dr. Gavazzi: Using "deception" is the wrong frame for this issue.  How complete is your informed consent?  Can we ever give "perfect" informed consent?  The answer is likely no.

Thursday, August 27, 2020

Patients aren’t being told about the AI systems advising their care

Rebecca Robbins and Erin Brodwin
statnews.com
Originally posted 15 July 20

Here is an excerpt:

The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value — but plenty of downside — in raising the subject.

They worry that bringing up AI will derail clinicians’ conversations with patients, diverting time and attention away from actionable steps that patients can take to improve their health and quality of life. Doctors also emphasize that they, not the AI, make the decisions about care. An AI system’s recommendation, after all, is just one of many factors that clinicians take into account before making a decision about a patient’s care, and it would be absurd to detail every single guideline, protocol, and data source that gets considered, they say.

Internist Karyn Baum, who’s leading M Health Fairview’s rollout of the tool, said she doesn’t bring up the AI to her patients “in the same way that I wouldn’t say that the X-ray has decided that you’re ready to go home.” She said she would never tell a fellow clinician not to mention the model to a patient, but in practice, her colleagues generally don’t bring it up either.

Four of the health system’s 13 hospitals have now rolled out the hospital discharge planning tool, which was developed by the Silicon Valley AI company Qventus. The model is designed to identify hospitalized patients who are likely to be clinically ready to go home soon and flag steps that might be needed to make that happen, such as scheduling a necessary physical therapy appointment.

Clinicians consult the tool during their daily morning huddle, gathering around a computer to peer at a dashboard of hospitalized patients, estimated discharge dates, and barriers that could prevent that from occurring on schedule.

The info is here.

Friday, May 1, 2020

The therapist's dilemma: Tell the whole truth?

Jackson, D.
J. Clin. Psychol. 2020; 76: 286– 291.
https://doi.org/10.1002/jclp.22895

Abstract

Honest communication between therapist and client is foundational to good psychotherapy. However, while past research has focused on client honesty, the topic of therapist honesty remains almost entirely untouched. Our lab's research seeks to explore the role of therapist honesty, how and why therapists make decisions about when to be completely honest with clients (and when to abstain from telling the whole truth), and the perceived consequences of these decisions. This article reviews findings from our preliminary research, presents a case study of the author's honest disclosure dilemma, and discusses the role of therapeutic tact and its function in the therapeutic process.

Here is an excerpt:

Based on our preliminary research, one of the most common topics of overt dishonesty among therapists was their feelings of frustration or disappointment toward their clients. For example, a therapist working with a client with a diagnosis of avoidant personality disorder may find herself increasingly frustrated by the client’s continual resistance to discussing emotional topics or engaging in activities that would broaden his or her world. Such a client—let’s assume male—is also likely to feel preoccupied with concerns about whether the therapist “likes” him or feels as frustrated with him as he does with himself. Should this client apologize for his behavior and ask if the therapist is frustrated with him, the therapist may feel compelled to reduce the discomfort he is already experiencing by dispelling his concern: “No, it’s okay, I’m not frustrated.”

But either at this moment or at a later point in therapy, once rapport (i.e., the therapeutic alliance) has been more firmly established, a more honest answer to this question might be fruitful: “Yes, I am feeling frustrated that we haven’t been able to find ways for you to implement the changes we discuss here, outside of session. How does it feel for you to hear that I am feeling frustrated?” Or, arguably, an even more honest answer: “Yes, I am sometimes frustrated. I sometimes think we could go deeper here—I think it’d be helpful.” Or, an honest answer that is somewhat less critical of the patient and more self‐focused: “I do feel frustrated that I haven’t been able to be more helpful.” Clearly, there are many ways for a therapist to be honest and/or dishonest, and there are also gradations in whichever direction a therapist chooses.

Thursday, April 2, 2020

Intelligence, Surveillance, and Ethics in a Pandemic

Jessica Davis
JustSecurity.org
Originally posted 31 March 20

Here is an excerpt:

It is imperative that States and their citizens question how much freedom and privacy should be sacrificed to limit the impact of this pandemic. It is also not sufficient to ask simply “if” something is legal; we should also ask whether it should be, and under what circumstances. States should consider the ethics of surveillance and intelligence: specifically, whether it is justified and done under the right authority, whether it can be done with intentionality and proportionality and as a last resort, and whether targets of surveillance can be separated from non-targets to avoid mass surveillance. These considerations, combined with enhanced transparency and sunset clauses on the use of intelligence and surveillance techniques, can allow States to ethically deploy these powerful tools to help stop the spread of the virus.

States are employing intelligence and surveillance techniques to contain the spread of the illness because these methods can help track and identify infected or exposed people and enforce quarantines. States have used cell phone data to track people at risk of infection or transmission and financial data to identify places frequented by at-risk people. Social media intelligence is also ripe for exploitation in terms of identifying social contacts. This intelligence is increasingly being combined with health data, creating a unique (and informative) picture of a person’s life that is undoubtedly useful for virus containment. But how long should States have access to this type of information on their citizens, if at all? Considering natural limits to the collection of granular data on citizens is imperative, both in terms of time and access to this data.

The info is here.

Wednesday, April 1, 2020

The Ethics of Quarantine

Ross Upshur
Virtual Mentor. 2003;5(11):393-395.


Here are two excerpts:

There are 2 independent ethical considerations to consider here: whether the concept of quarantine is justified ethically and whether it is effective. It is also important to make a clear distinction between quarantine and isolation. Quarantine refers to the separation of those exposed individuals who are not yet symptomatic for a period of time (usually the known incubation period of the suspected pathogen) to determine whether they will develop symptoms. Quarantine achieves 2 goals. First, it stops the chain of transmission because it is less possible to infect others if one is not in social circulation. Second, it allows the individuals under surveillance to be identified and directed toward appropriate care if they become symptomatic. This is more important in diseases where there is presymptomatic shedding of virus. Isolation, on the other hand, is keeping those who have symptoms from circulation in general populations.

Justification of quarantine and quarantine laws stems from a general moral obligation to prevent harm to (infection of) others if this can be done. Most democracies have public health laws that do permit quarantine. Even though quarantine is a curtailment of civil liberties, it can be broadly justified if several criteria can be met.

(cut)

Secondly, the proportionality, or least-restrictive-means, principle should be observed. This holds that public health authorities should use the least restrictive measures proportional to the goal of achieving disease control. This would indicate that quarantine be made voluntary before more restrictive means and sanctions such as mandatory orders or surveillance devices, home cameras, bracelets, or incarceration are contemplated. It is striking to note that in the Canadian SARS outbreak in the Greater Toronto area, approximately 30,000 persons were quarantined at some time. Toronto Public Health reports writing only 22 orders for mandatory detainment [3]. Even if the report is a tenfold underestimate, the remaining instances of voluntary quarantine constitute an impressive display of civic-mindedness.

Thirdly, reciprocity must be upheld. If society asks individuals to curtail their liberties for the good of others, society has a reciprocal obligation to assist them in the discharge of their obligations. That means providing individuals with adequate food and shelter and psychological support, accommodating them in their workplaces, and not discriminating against them. They should suffer no penalty on account of discharging their obligations to society.

The info is here.

Tuesday, March 31, 2020

How Should We Judge Whether and When Mission Statements Are Ethically Deployed?

K. Schuler & D. Stulberg
AMA J Ethics. 2020;22(3):E239-247.
doi: 10.1001/amajethics.2020.239.

Abstract

Mission statements communicate health care organizations’ fundamental purposes and can help potential patients choose where to seek care and employees where to seek employment. They offer limited benefit, however, when patients do not have meaningful choices about where to seek care, and they can be misused. Ethical implementation of mission statements requires health care organizations to be truthful and transparent about how their mission influences patient care, to create environments that help clinicians execute their professional obligations to patients, and to amplify their obligations to communities.

Ethics, Mission, Standard of Care

Mission statements have long been used to communicate an organization’s values, priorities, and goals; serve as a moral compass for an organization; guide institutional decision making; and align efforts of employees. They can also be seen as advertising to prospective patients and employees. Although health care organizations’ mission statements serve these beneficial purposes, ethical questions (especially about business practices seen as motivating profit by rewarding underutilization) arise when mission implementation conflicts with acting in the best interests of patients. Ethical questions also arise when religiously affiliated organizations deny clinically indicated care in order to uphold their religiously based mission. For example, a Catholic organization’s mission statement might include phrases such as “faithful,” “honoring our sponsor’s spirit,” or “promoting reverence for life” and likely accords with the Ethical and Religious Directives for Catholic Health Care Services, which Catholic organizations’ clinicians are required to follow as a condition of employment or privileges.

When strictly followed, these directives restrict health care service delivery, such that patients—particularly those seeking contraception, pregnancy termination, miscarriage management, end-of-life care, or other services perceived as conflicting with Catholic teaching—are not given the standard of care. Federal and state laws protect conscience rights of organizations, allowing them to refuse to provide services that conflict with the deeply held beliefs and values that drive their mission [6]. Recognizing the potential for conflict between mission statements and patients’ autonomy or best interests, we maintain that health care organizations have fundamental ethical and professional obligations to patients that should not be superseded by a mission statement.

The info is here.

Monday, March 23, 2020

Burr moves to quell fallout from stock sales with request for Ethics probe

Jack Brewster
politico.com
Originally posted 20 March 20

Sen. Richard Burr (R-N.C.) on Friday asked the Senate Ethics Committee to review stock sales he made weeks before the markets began to tank in response to the coronavirus pandemic — a move designed to limit the fallout from an intensifying political crisis.

Burr, who chairs the powerful Senate Intelligence Committee, defended the sales, saying he “relied solely on public news reports to guide my decision regarding the sale of stocks" and disputed the notion he used information that he was privy to during classified briefings on the novel coronavirus. Burr specifically name-checked CNBC’s daily health and science reporting from its Asia bureau.

“Understanding the assumption many could make in hindsight however, I spoke this morning with the chairman of the Senate Ethics Committee and asked him to open a complete review of the matter with full transparency,” Burr said in a statement.

Burr, who is retiring at the end of 2022, has faced calls to resign from across the ideological spectrum since ProPublica reported Thursday that he dumped between $628,000 and $1.72 million of his holdings on Feb. 13 in 33 different transactions — a week before the stock market began plummeting amid fears of the coronavirus spreading in the U.S.

The info is here.

Saturday, February 22, 2020

Hospitals Give Tech Giants Access to Detailed Medical Records

Melanie Evans
The Wall Street Journal
Originally published 20 Jan 20

Here is an excerpt:

Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.

The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.

Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.

“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.

Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.

(cut)

Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.

The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.

The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.

Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.

The info is here.

Tuesday, February 11, 2020

How to build ethical AI

Carolyn Herzog
thehill.com
Originally posted 18 Jan 20

Here is an excerpt:

Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and control for both intentional and inherent bias.
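
One concrete way organizations audit for the unintentional bias Herzog describes is to measure outcome disparities across demographic groups. The sketch below is a minimal illustration, not a complete fairness audit: the data and group labels are invented, and the demographic-parity gap it computes is only one of several competing fairness metrics.

```python
# Invented toy data: (group, model_decision) pairs; 1 = favorable outcome.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def favorable_rate(group):
    """Fraction of decisions in `group` that came out favorable."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic-parity gap: difference in favorable-outcome rates across groups.
gap = favorable_rate("group_a") - favorable_rate("group_b")
print(gap)  # 0.75 - 0.25 = 0.5 -- a disparity large enough to warrant review
```

A gap this large would not prove intent, but it flags exactly the kind of inherited bias that diverse teams and control practices are meant to catch before deployment.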

This leads back to transparency.

A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?

Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?

The info is here.

Tuesday, January 21, 2020

10 Years Ago, DNA Tests Were The Future Of Medicine. Now They’re A Social Network — And A Data Privacy Mess

Peter Aldhous
buzzfeednews.com
Originally posted 11 Dec 19

Here is an excerpt:

But DNA testing can reveal uncomfortable truths, too. Families have been torn apart by the discovery that the man they call “Dad” is not the biological father of his children. Home DNA tests can also be used to show that a relative is a rapist or a killer.

That possibility burst into the public consciousness in April 2018, with the arrest of Joseph James DeAngelo, alleged to be the Golden State Killer responsible for at least 13 killings and more than 50 rapes in the 1970s and 1980s. DeAngelo was finally tracked down after DNA left at the scene of a 1980 double murder was matched to people in GEDmatch who were the killer's third or fourth cousins. Through months of painstaking work, investigators working with the genealogist Barbara Rae-Venter built family trees that converged on DeAngelo.

Genealogists had long realized that databases like GEDmatch could be used in this way, but had been wary of working with law enforcement — fearing that DNA test customers would object to the idea of cops searching their DNA profiles and rummaging around in their family trees.

But the Golden State Killer’s crimes were so heinous that the anticipated backlash initially failed to materialize. Indeed, a May 2018 survey of more than 1,500 US adults found that 80% backed police using public genealogy databases to solve violent crimes.

“I was very surprised with the Golden State Killer case how positive the reaction was across the board,” CeCe Moore, a genealogist known for her appearances on TV, told BuzzFeed News a couple of months after DeAngelo’s arrest.

The info is here.