Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Safety.

Saturday, December 9, 2023

Physicians’ Refusal to Wear Masks to Protect Vulnerable Patients—An Ethical Dilemma for the Medical Profession

Dorfman D, Raz M, Berger Z.
JAMA Health Forum. 2023;4(11):e233780.
doi:10.1001/jamahealthforum.2023.3780

Here is an excerpt:

In theory, the solution to the problem should be simple: patients who wear masks to protect themselves, as recommended by the CDC, can ask the staff and clinicians to wear a mask as well when seeing them, and the clinicians would oblige given the efficacy masks have shown in reducing the spread of respiratory illnesses. However, disabled patients report physicians and other clinical staff having refused to wear a mask when caring for them. Although it is hard to know how prevalent this phenomenon is, what recourse do patients have? How should health care systems approach clinicians and staff who refuse to mask when treating a disabled patient?

Physicians have a history of antagonism to the idea that they themselves might present a health risk to their patients. Famously, when Hungarian physician Ignaz Semmelweis originally proposed handwashing as a measure to reduce puerperal fever, he was met with ridicule and ostracized from the profession.

Physicians were also historically reluctant to adopt new practices to protect not only patients but also themselves against infection in the midst of the AIDS epidemic. In 1985, the CDC presented its guidance on workplace transmission, instructing physicians to provide care, “regardless of whether HCWs [health care workers] or patients are known to be infected with HTLV-III/LAV [human T-lymphotropic virus type III/lymphadenopathy-associated virus] or HBV [hepatitis B virus].” These CDC guidelines offered universal precautions: common-sense, nonstigmatizing, standardized methods to reduce infection. Yet some physicians bristled at the idea that they needed to take simple, universal public health steps to prevent transmission, even in cases in which infectivity is unknown, and instead advocated for a medicalized approach: testing or masking only in cases when a patient is known to be infected. Such an individualized medicalized approach fails to meet the public health needs of the moment.

Patients are the ones who pay the price for physicians’ objections to changes in practices, whether it is handwashing or the denial of care as an unwarranted HIV precaution. Yet today, with the enactment of disability antidiscrimination law, patients are protected, at least on the books.

As we have written elsewhere, federal law supports the right of a disabled individual to request masking as a reasonable disability accommodation in the workplace and at schools.


Here is my summary:

This article explores the ethical dilemma arising from physicians refusing to wear masks, potentially jeopardizing the protection of vulnerable patients. The authors delve into the conflict between personal beliefs and professional responsibilities, questioning the ethical implications of such refusals within the medical profession. The analysis emphasizes the importance of prioritizing patient well-being and public health over individual preferences, calling for a balance between personal freedoms and ethical obligations in health care settings.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
nautil.us
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.
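
The watermarking idea is easier to see with a toy example. The sketch below is an illustration only (not OpenAI's actual scheme; the hashing rule and detection threshold here are assumptions): it scores text by the fraction of adjacent-token pairs that land in a pseudorandom "green list," since a watermarking generator would bias its sampling toward green tokens, pushing marked text well above the roughly 0.5 fraction expected of ordinary text.

```python
# Toy "green list" watermark detector, in the spirit of published LLM
# watermarking proposals. Hypothetical rule: a token pair is "green" if
# the SHA-256 hash of (previous token, current token) has an even first byte.
import hashlib

def green_fraction(tokens: list[str]) -> float:
    """Fraction of adjacent-token pairs that fall in the 'green list'.
    Ordinary text hovers near 0.5; a generator that preferentially
    samples green tokens pushes this fraction much higher."""
    green = 0
    for prev, cur in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{prev}|{cur}".encode()).digest()
        if digest[0] % 2 == 0:
            green += 1
    return green / max(len(tokens) - 1, 1)

def looks_watermarked(tokens: list[str], threshold: float = 0.7) -> bool:
    # The threshold is an assumption; a real detector would use a
    # statistical test over the token count rather than a fixed cutoff.
    return green_fraction(tokens) >= threshold

sample = "the quick brown fox jumps over the lazy dog".split()
print(green_fraction(sample), looks_watermarked(sample))
```

Because detection needs only the hashing rule (or a secret key behind it), anyone holding the key can check a text's source without access to the model itself.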

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.


Here is my summary:

The article argues that building superintelligence is an even riskier endeavor than playing Russian roulette. There is no way to guarantee that we will be able to control a superintelligent AI, and even if we could, the AI might not share our values. Either failure could lead to the AI harming or even destroying humanity.

The authors propose that we pause current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, and that we need safety mechanisms to prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Monday, July 3, 2023

Is Avoiding Extinction from AI Really an Urgent Priority?

S. Lazar, J. Howard, & A. Narayanan
fast.ai
Originally posted 30 May 23

Here is an excerpt:

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

Sunday, July 2, 2023

Predictable, preventable medical errors kill thousands yearly. Is it getting any better?

Karen Weintraub
USAToday.com
Originally posted 3 May 23

Here are two excerpts:

A 2017 study put the number of deaths from medical errors at over 250,000 a year, making them the nation's third leading cause of death at the time. There are no more recent figures.

But the pandemic clearly worsened patient safety, with Leapfrog's new assessment showing increases in hospital-acquired infections, including urinary tract and drug-resistant staph infections as well as infections in central lines — tubes inserted into the neck, chest, groin, or arm to rapidly provide fluids, blood or medications. These infections spiked to a 5-year high during the pandemic and remain high.

"Those are really terrible declines in performance," Binder said.

Patient safety: 'I've never ever, ever seen that'

Not all patient safety news is bad. In one study published last year, researchers examined records from 190,000 patients discharged from hospitals nationwide after being treated for a heart attack, heart failure, pneumonia or major surgery. Patients experienced far fewer adverse events following treatment for those four conditions, including fewer adverse drug events, hospital-acquired infections, and other harms.

It was the first study of patient safety that left Binder optimistic. "This was improvement and I've never ever, ever seen that," she said.

(cut)

On any given day, 1 of every 31 hospitalized patients has an infection acquired during care, according to a recent study from the Centers for Disease Control and Prevention. These infections cost health care systems at least $28.4 billion each year, with an additional $12.4 billion lost to reduced productivity and premature deaths.

"That blew me away," said Shaunte Walton, system director of Clinical Epidemiology & Infection Prevention at UCLA Health. Electronic tools can help, but even with them, "there's work to do to try to operationalize them," she said.

The patient experience also slipped during the pandemic. According to Leapfrog's latest survey, patients reported declines in nurse communication, doctor communication, staff responsiveness, communication about medicine and discharge information.

Boards and leadership teams are "highly distracted" right now with workforce shortages, new payment systems, concerns about equity and decarbonization, said Dr. Donald Berwick, president emeritus and senior fellow at the Institute for Healthcare Improvement and former administrator of the Centers for Medicare & Medicaid Services.

Tuesday, June 20, 2023

Ethical Accident Algorithms for Autonomous Vehicles and the Trolley Problem: Three Philosophical Disputes

Sven Nyholm
In Lillehammer, H. (ed.), The Trolley Problem.
Cambridge: Cambridge University Press, 2023

Abstract

The Trolley Problem is one of the most intensively discussed and controversial puzzles in contemporary moral philosophy. Over the last half-century, it has also become something of a cultural phenomenon, having been the subject of scientific experiments, online polls, television programs, computer games, and several popular books. This volume offers newly written chapters on a range of topics including the formulation of the Trolley Problem and its standard variations; the evaluation of different forms of moral theory; the neuroscience and social psychology of moral behavior; and the application of thought experiments to moral dilemmas in real life. The chapters are written by leading experts on moral theory, applied philosophy, neuroscience, and social psychology, and include several authors who have set the terms of the ongoing debates. The volume will be valuable for students and scholars working on any aspect of the Trolley Problem and its intellectual significance.

Here is the conclusion:

Accordingly, it seems to me that just as the first methodological approach mentioned a few paragraphs above is problematic, so is the third methodological approach. In other words, we do best to take the second approach. We should neither rely too heavily (or indeed exclusively) on the comparison between the ethics of self-driving cars and the trolley problem, nor wholly ignore and pay no attention to the comparison between the ethics of self-driving cars and the trolley problem. Rather, we do best to make this one – but not the only – thing we do when we think about the ethics of self-driving cars. With what is still a relatively new issue for philosophical ethics to work with, and indeed also regarding older ethical issues that have been around much longer, using a mixed and pluralistic method that approaches the moral issues we are considering from many different angles is surely the best way to go. In this instance, that includes reflecting on – and reflecting critically on – how the ethics of crashes involving self-driving cars is both similar to and different from the philosophy of the trolley problem.

At this point, somebody might say, “What if I am somebody who really dislikes the self-driving cars/trolley problem comparison, and I would really prefer reflecting on the ethics of self-driving cars without spending any time on thinking about the similarities and differences between the ethics of self-driving cars and the trolley problem?” In other words, should everyone working on the ethics of self-driving cars spend at least some of their time reflecting on the comparison with the trolley problem? Luckily for those who are reluctant to spend any of their time reflecting on the self-driving cars/trolley problem comparison, there are others who are willing and able to devote at least some of their energies to this comparison.

In general, I think we should view the community that works on the ethics of this issue as being one in which there can be a division of labor, whereby different members of this field can partly focus on different things, and thereby together cover all of the different aspects that are relevant and important to investigate regarding the ethics of self-driving cars.  As it happens, there has been a remarkable variety in the methods and approaches people have used to address the ethics of self-driving cars (see Nyholm 2018 a-b).  So, while it is my own view that anybody who wants to form a complete overview of the ethics of self-driving cars should, among other things, devote some of their time to studying the comparison with the trolley problem, it is ultimately no big problem if not everyone wishes to do so. There are others who have been studying, and who will most likely continue to reflect on, this comparison.

Monday, February 27, 2023

Domestic violence hotline calls will soon be invisible on your family phone plan

Ashley Belanger
ARS Technica
Originally published 17 Feb 23

Today, the Federal Communications Commission proposed rules to implement the Safe Connections Act, which President Joe Biden signed into law last December. Advocates consider the law a landmark move to stop tech abuse. Under the law, mobile service providers are required to help survivors of domestic abuse and sexual violence access resources and maintain critical lines of communication with friends, family, and support organizations.

Under the proposed rules, mobile service providers are required to separate a survivor’s line from a shared or family plan within two business days. Service providers must also “omit records of calls or text messages to certain hotlines from consumer-facing call and text message logs,” so that abusers cannot see when survivors are seeking help. Additionally, the FCC plans to launch a “Lifeline” program, providing emergency communications support for up to six months for survivors who can’t afford to pay for mobile services.
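
As a rough illustration of the log-scrubbing requirement, here is a minimal sketch (hypothetical record format and filtering logic; the FCC proposal does not prescribe an implementation) of how a carrier might omit hotline contacts from a consumer-facing log:

```python
# Hypothetical sketch: hide calls/texts to protected hotlines from the
# consumer-facing usage log, while leaving underlying records intact.
HOTLINES = {"1-800-799-7233"}  # National Domestic Violence Hotline

def consumer_facing_log(records: list[dict]) -> list[dict]:
    """Return only the records safe to display on a shared account."""
    return [r for r in records if r["number"] not in HOTLINES]

log = [
    {"number": "1-800-799-7233", "type": "call", "minutes": 12},
    {"number": "555-0100", "type": "text", "minutes": 0},
]
print(consumer_facing_log(log))  # only the 555-0100 record is shown
```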

“These proposed rules would help survivors obtain separate service lines from shared accounts that include their abusers, protect the privacy of calls made by survivors to domestic abuse hotlines, and provide support for survivors who suffer from financial hardship through our affordability programs,” the FCC’s announcement said.

The FCC has already consulted with tech associations and domestic violence support organizations in forming the proposed rules, but now the public has a chance to comment. An FCC spokesperson confirmed to Ars that comments are open now. Crystal Justice, the National Domestic Violence Hotline’s chief external affairs officer, told Ars that it’s critical for survivors to submit comments to help inform FCC rules with their experiences of tech abuse.

To express comments, visit this link and fill in “22-238” as the proceeding number. That will auto-populate a field that says “Supporting Survivors of Domestic and Sexual Violence.”

FCC’s spokesperson told Ars that the initial public comment period will be open for 30 days after the rules are published in the federal register, and then a reply comment period will be open for 30 days after the initial comment period ends.

Friday, November 4, 2022

Mental Health Implications of Abortion Restrictions for Historically Marginalized Populations

Ogbu-Nwobodo, L., Shim, R.S., et al.
October 27, 2022
N Engl J Med 2022; 387:1613-1617
DOI: 10.1056/NEJMms2211124

Here is an excerpt:

Abortion and Mental Health

To begin with, abortion does not lead to mental health harm — a fact that has been established by data and recognized by the National Academies of Sciences, Engineering, and Medicine and the American Psychological Association. The Turnaway Study, a longitudinal study that compared mental health outcomes among people who obtained an abortion with those among people denied abortion care, found that abortion denial was associated with initially higher levels of stress, anxiety, and low self-esteem than obtaining wanted abortion care. People who had an abortion did not have an increased risk of any mental health disorder, including depression, anxiety, suicidal ideation, post-traumatic stress disorder, or substance use disorders. Whether people obtained or were denied an abortion, those at greatest risk for adverse psychological outcomes after seeking an abortion were those with a history of mental health conditions or of child abuse or neglect and those who perceived abortion stigma (i.e., they felt others would look down on them for seeking an abortion). Furthermore, people who are highly oppressed and marginalized by society are more vulnerable to psychological distress.

There is evidence that people seeking abortion have poorer baseline mental health, on average, than people who are not seeking an abortion. However, this poorer mental health results in part from structural inequities that disproportionately expose some populations to poverty, trauma, adverse childhood experiences (including physical and sexual abuse), and intimate partner violence. People seek abortion for many reasons, including (but not limited to) timing issues, the need to focus on their other children, concern for their own physical or mental health, the desire to avoid exposing a child to a violent or abusive partner, and the lack of financial security to raise a child.

In addition, for people with a history of mental illness, pregnancy and the postpartum period are a time of high risk, with increased rates of recurrence of psychiatric symptoms and of adverse pregnancy and birth outcomes. Because of stigma and discrimination, birthing or pregnant people with serious mental illnesses or substance use disorders are more likely to be counseled by health professionals to avoid or terminate pregnancies, as highlighted by a small study of women with bipolar disorder. One study found that among women with mental health conditions, the rate of readmission to a psychiatric hospital was not elevated around the time of abortion, but there was an increased rate of hospitalization in psychiatric facilities at the time of childbirth. Data also indicate that for people with preexisting mental health conditions, mental health outcomes are poor whether they obtain an abortion or give birth.

The Role of Structural Racism

Structural racism — defined as ongoing interactions between macro-level systems and institutions that constrain the resources, opportunities, and power of marginalized racial and ethnic groups — is widely considered a fundamental cause of poor health and racial inequities, including adverse maternal health outcomes. Structural racism ensures the inequitable distribution of a broad range of health-promoting resources and opportunities that unfairly advantage White people and unfairly disadvantage historically marginalized racial and ethnic groups (e.g., education, paid leave from work, access to high-quality health care, safe neighborhoods, and affordable housing). In addition, structural racism is responsible for inequities and poor mental health outcomes among many diverse populations.


Tuesday, March 29, 2022

Gene editing gets safer thanks to redesigned Cas9 protein

Science Daily
Originally posted 2 Mar 22

Summary:

Scientists have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer.

One of the grand challenges with using CRISPR-based gene editing on humans is that the molecular machinery sometimes makes changes to the wrong section of a host's genome, creating the possibility that an attempt to repair a genetic mutation in one spot in the genome could accidentally create a dangerous new mutation in another.

But now, scientists at The University of Texas at Austin have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer. The work is described in a paper published today in the journal Nature.

"This really could be a game changer in terms of a wider application of the CRISPR Cas systems in gene editing," said Kenneth Johnson, a professor of molecular biosciences and co-senior author of the study with David Taylor, an assistant professor of molecular biosciences. The paper's co-first authors are postdoctoral fellows Jack Bravo and Mu-Sen Liu.


Journal Reference:

Jack P. K. Bravo, Mu-Sen Liu, et al. Structural basis for mismatch surveillance by CRISPR–Cas9. Nature, 2022; DOI: 10.1038/s41586-022-04470-1

Sunday, March 21, 2021

Who Should Stop Unethical A.I.?

Matthew Hutson
The New Yorker
Originally published 15 Feb 21

Here is an excerpt:

Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the “possible long-range effects of applying knowledge gained in the research,” lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.

A few years ago, a number of A.I.-research organizations began to develop systems for addressing ethical impact. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (SIGCHI) is, by virtue of its focus, already committed to thinking about the role that technology plays in people’s lives; in 2016, it launched a small working group that grew into a research-ethics committee. The committee offers to review papers submitted to SIGCHI conferences, at the request of program chairs. In 2019, it received ten inquiries, mostly addressing research methods: How much should crowd-workers be paid? Is it O.K. to use data sets that are released when Web sites are hacked? By the next year, though, it was hearing from researchers with broader concerns. “Increasingly, we do see, especially in the A.I. space, more and more questions of, Should this kind of research even be a thing?” Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, told me.

Shilton explained that questions about possible impacts tend to fall into one of four categories. First, she said, “there are the kinds of A.I. that could easily be weaponized against populations”—facial recognition, location tracking, surveillance, and so on. Second, there are technologies, such as Speech2Face, that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, there is automated-weapons research. And fourth, there are tools “to create alternate sets of reality”—fake news, voices, or images.

Wednesday, January 27, 2021

What One Health System Learned About Providing Digital Services in the Pandemic

Marc Harrison
Harvard Business Review
Originally posted 11 Dec 20

Here are two excerpts:

Lesson 2: Digital care is safer during the pandemic.

A patient who’s tested positive for Covid doesn’t have to go see her doctor or go into an urgent care clinic to discuss her symptoms. Doctors and other caregivers who are providing virtual care for hospitalized Covid patients don’t face increased risk of exposure. They also don’t have to put on personal protective equipment, step into the patient’s room, then step outside and take off their PPE. We need those supplies, and telehealth helps us preserve them.

Intermountain Healthcare’s virtual hospital is especially well-suited for Covid patients. It works like this: In a regular hospital, you come into the ER, and we check you out and think you’re probably going to be okay, but you’re sick enough that we want to monitor you. So, we admit you.

With our virtual hospital — which uses a combination of telemedicine, home health, and remote patient monitoring — we send you home with a technology kit that allows us to check how you’re doing. You’ll be cared for by a virtual team, including a hospitalist who monitors your vital signs around the clock and home health nurses who do routine rounding. That’s working really well: Our clinical outcomes are excellent, our satisfaction scores are through the roof, and it’s less expensive. Plus, it frees up the hospital beds and staff we need to treat our sickest Covid patients.

(cut)

Lesson 4: Digital tools support the direction health care is headed.

Telehealth supports value-based care, in which hospitals and other care providers are paid based on the health outcomes of their patients, not on the amount of care they provide. The result is a greater emphasis on preventive care — which reduces unsustainable health care costs.

Intermountain serves a large population of at-risk, pre-paid consumers, and the more they use telehealth, the easier it is for them to stay healthy — which reduces costs for them and for us. The pandemic has forced payment systems, including the government’s, to keep up by expanding reimbursements for telehealth services.

This is worth emphasizing: If we can deliver care in lower-cost settings, we can reduce the cost of care. Some examples:
  • The average cost of a virtual encounter at Intermountain is $367 less than the cost of a visit to an urgent care clinic, physician’s office, or emergency department (ED).
  • Our virtual newborn ICU has helped us reduce the number of transports to our large hospitals by 65 a year since 2015. Not counting the clinical and personal benefits, that’s saved $350,000 per year in transportation costs.
  • Our internal study of 150 patients in one rural Utah town showed each patient saved an average of $2,000 in driving expenses and lost wages over a year’s time because he or she was able to receive telehealth care close to home. We also avoided pumping 106,460 kilograms of CO2 into the environment — and (per the following point) the town’s 24-bed hospital earned $1.6 million that otherwise would have shifted to a larger hospital in a bigger town.

Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.
For more information on ethics in AI, download the report.

Friday, October 30, 2020

The corporate responsibility facade is finally starting to crumble

Alison Taylor
Yahoo Finance
Originally posted 4 March 20

Here is an excerpt:

Any claim to be a responsible corporation is predicated on addressing these abuses of power. But most companies are instead clinging with remarkable persistence to the façades they’ve built to deflect attention. Compliance officers focus on pleasing regulators, even though there is limited evidence that their recommendations reduce wrongdoing. Corporate sustainability practitioners drown their messages in an alphabet soup of acronyms, initiatives, and alienating jargon about “empowered communities” and “engaged stakeholders,” when both functions are still considered peripheral to corporate strategy.

When reading a corporation’s sustainability report and then comparing it to its risk disclosures—or worse, its media coverage—we might as well be reading about entirely distinct companies. Investors focused on sustainability speak of “materiality” principles, meant to sharpen our focus on the most relevant environmental, social, and governance (ESG) issues for each industry. But when an issue is “material” enough to threaten core operating models, companies routinely ignore, evade, and equivocate.

Coca-Cola’s most recent annual sustainability report acknowledges its most pressing issue is “obesity concerns and category perceptions.” Accordingly, it highlights its lower-sugar product lines and references responsible marketing. But it continues its vigorous lobbying against soda taxes, and of course continues to make products with known links to obesity and other health problems. Facebook’s sustainability disclosures focus on efforts to fight climate change and improve labor rights in its supply chain, but make no reference to the mental-health impacts of social media or to its role in peddling disinformation and undermining democracy. Johnson & Johnson flags “product quality and safety” as its highest priority issue without mentioning that it is a defendant in criminal litigation over distribution of opioids. UBS touts its sustainability targets but not its ongoing financing of fossil-fuel projects.

Wednesday, September 2, 2020

Poll: Most Americans believe the Covid-19 vaccine approval process is driven by politics, not science

Ed Silverman
statnews.com
Originally published 31 August 20

Seventy-eight percent of Americans worry the Covid-19 vaccine approval process is being driven more by politics than science, according to a new survey from STAT and the Harris Poll, a reflection of concern that the Trump administration may give the green light to a vaccine prematurely.

The response was largely bipartisan, with 72% of Republicans and 82% of Democrats expressing such worries, according to the poll, which was conducted last week and surveyed 2,067 American adults.

The sentiment underscores rising speculation that President Trump may pressure the Food and Drug Administration to approve or authorize emergency use of at least one Covid-19 vaccine prior to the Nov. 3 election, before testing has been fully completed.

Concerns intensified in recent days after Trump suggested in a tweet that the FDA is part of a “deep state” conspiracy to sabotage his reelection bid. In a speech Thursday night at the Republican National Convention, he pledged that the administration “will produce a vaccine before the end of the year, or maybe even sooner.”

The info is here.

Please see the top line: 80% of Americans surveyed worry that a vaccine approved too quickly will not be safe. The implication is that fewer people would choose the vaccine if safety is in doubt.

Thursday, August 13, 2020

Every Decision Is A Risk. Every Risk Is A Decision.

Maggie Koerth
fivethirtyeight.com
Originally posted 21 July 20

Here is an excerpt:

In general, research has shown that indoors is riskier than outside, long visits riskier than short ones, crowds riskier than individuals — and, look, just avoid situations where you’re being sneezed, yelled, coughed or sung at.

But the trouble with the muddy middle is that a general idea of what is riskier isn’t the same thing as a clear delineation between right and wrong. These charts — even the best ones — aren’t absolute arbiters of safety: They’re the result of surveying experts. In the case of Popescu’s chart, the risk categorizations were assigned based on discussions among herself, Emanuel and Dr. James P. Phillips, the chief of disaster medicine at George Washington University Emergency Medicine. They each independently assigned a risk level to each activity, and then hashed out the ones on which they disagreed.

Take golf. How safe is it to go out to the links? Initially, the three experts had different risk levels assigned to this activity because they were all making different assumptions about what a game of golf naturally involved, Popescu said. “Are people doing it alone? If not, how many people are in a cart? Are they wearing masks? Are they drinking? … those little variables that can increase the risk,” she told me.

Golf isn’t just golf. It’s how you golf that matters.

Those variables and assumptions aren’t trivial to calculating risk. Nor are they static. There’s different muck under your boggy feet in different parts of the country, at different times. For instance, how safe is it to eat outdoors with friends? Popescu’s chart ranks “outdoor picnic or porch dining” with people outside your household as low risk — a very validating categorization, personally. But a chart produced by the Texas Medical Association, based on a survey of its 53,000 physician members, rates “attending a backyard barbeque” as a moderate risk, a 5 on a scale in which 9 is the stuff most of us have no problem eschewing.
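
For readers curious how such charts come together mechanically, here is a minimal sketch (hypothetical ratings and a made-up flagging rule) of the survey process described above: each expert rates every activity independently, the ratings are combined, and large disagreements are flagged for the kind of hashing-out Popescu describes.

```python
# Hypothetical expert risk ratings on a 1 (lowest) to 9 (highest) scale,
# echoing the Texas Medical Association chart mentioned above.
from statistics import median

ratings = {
    "outdoor picnic": {"expert_a": 2, "expert_b": 3, "expert_c": 2},
    "golf":           {"expert_a": 1, "expert_b": 4, "expert_c": 2},
    "indoor bar":     {"expert_a": 8, "expert_b": 9, "expert_c": 9},
}

for activity, by_expert in ratings.items():
    scores = list(by_expert.values())
    spread = max(scores) - min(scores)
    flag = "  <- discuss assumptions" if spread >= 2 else ""
    print(f"{activity:15s} consensus={median(scores)} spread={spread}{flag}")
```

The golf row is the kind of case Popescu describes: the spread comes from each expert assuming a different version of the activity, so the disagreement itself is the signal to pin down those variables.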

The info is here.

Monday, August 3, 2020

The Role of Cognitive Dissonance in the Pandemic

Elliot Aronson and Carol Tavris
The Atlantic
Originally published 12 July 20

Here is an excerpt:

Because of the intense polarization in our country, a great many Americans now see the life-and-death decisions of the coronavirus as political choices rather than medical ones. In the absence of a unifying narrative and competent national leadership, Americans have to choose whom to believe as they make decisions about how to live: the scientists and the public-health experts, whose advice will necessarily change as they learn more about the virus, treatment, and risks? Or President Donald Trump and his acolytes, who suggest that masks and social distancing are unnecessary or “optional”?

The cognition “I want to go back to work” or “I want to go to my favorite bar to hang out with my friends” is dissonant with any information that suggests these actions might be dangerous—if not to individuals themselves, then to others with whom they interact.

How to resolve this dissonance? People could avoid the crowds, parties, and bars and wear a mask. Or they could jump back into their former ways. But to preserve their belief that they are smart and competent and would never do anything foolish to risk their lives, they will need some self-justifications: Claim that masks impair their breathing, deny that the pandemic is serious, or protest that their “freedom” to do what they want is paramount. “You’re removing our freedoms and stomping on our constitutional rights by these Communist-dictatorship orders,” a woman at a Palm Beach County commissioners’ hearing said. “Masks are literally killing people,” said another. South Dakota Governor Kristi Noem, referring to masks and any other government interventions, said, “More freedom, not more government, is the answer.” Vice President Mike Pence added his own justification for encouraging people to gather in unsafe crowds for a Trump rally: “The right to peacefully assemble is enshrined in the First Amendment of the Constitution.”

The info is here.

Saturday, July 25, 2020

America’s Schools Are a Moral and Medical Catastrophe

Laurie Garrett
foreignpolicy.com
Originally posted 24 July 20

After U.S. President Donald Trump demanded last week that schools nationwide reopen this fall, regardless of their community’s COVID-19 epidemic status, his Secretary of Education Betsy DeVos was asked how this could safely be accomplished. She offered no guidelines, nor financial support to strapped school districts. Her reply was that school districts nationwide needed to create their own safety schemes and realize that the federal government will cut off funds if schools fail to reopen. “I think the go-to needs to be kids in school, in person, in the classroom,” she said in an interview on CNN on July 12.

This is nothing short of moral bankruptcy. The Trump administration is effectively demanding schools bend to its will, without offering a hint of expert guidance on how to do so safely, much less the necessary financing.

I can’t correct for the latter failure, of course. But here’s some information that will be of use to the many rightfully concerned parents and educators across the United States.

1. Should a national-scale school reopening be considered, at all?

Emphatically, no. The state of Florida’s data shows that 13 percent of children who have been tested for the novel coronavirus were found to be infected, and infection rates rise with age: only 16 percent of these positive cases are in children 1 to 4 years old, whereas 29 percent are in those 15 to 17 years old. In Nueces County, Texas, 85 children under age 2 have tested positive for the coronavirus since March, and one of them has died. The infections were likely caught from parents or older siblings. A South Korean government survey of 60,000 households discovered that adults living in households that had an infected child aged 10 to 19 years had the highest rate of catching the coronavirus—more so than when an infected adult was present. Nearly 19 percent of people living with an infected teenager went on to test positive for the virus within 10 days. A Kaiser Family Foundation study says some 3.3 million adults over 65 in the United States live in a home with at least one school-aged child, putting the elders at special risk.

The info is here.

Tuesday, July 21, 2020

College Football’s Brand At Stake, Ethics Expert Says

Ray Glier
Forbes.com
Originally posted 16 July 20

Here is an excerpt:

“What is the potential harm vs. potential good? This is the core ethical question,” Etzel said.

The caretakers of college athletics insist it is too early to be making decisions about canceling football this fall. They are allowing players to work out, coaches to scheme, and fans to dream until the last possible moment before they have to pull the plug. Their runway is growing short.

“To be certain—rigid in what is important—is very risky,” Etzel said in an email response to the ethical dilemma facing college administrators. “Decisions and potential mistakes of this magnitude have not been made in the past, so those running and influencing the show have no benchmarks.

“Presidents and other leaders need to responsibly step in to decide on their own—consistent with their job descriptions—just what the most useful, compassionate path is for each organization.”

If athletes get sick from the virus in workouts this summer and do not recover, or have permanent damage to their health, the college game will get hit with vitriol nationally like it has never seen before. Millions of people in the U.S. are college football fans, but not everyone worships the U. Coaches and administrators are going to be painted as money-thirsty villains. An athletic director, maybe a coach, is going to be scapegoated, then fired, if an athlete does not recover from the virus.

The info is here.

Monday, July 20, 2020

Physicians united: Here’s why pulling out of WHO is a big mistake

Andis Robeznieks
American Medical Association
Originally published 8 July 20

Here is an excerpt:

The joint statement builds on a previous response from the AMA made back in May after the administration announced its intention to withdraw from the WHO.

Withdrawal served “no logical purpose,” made finding a solution to the pandemic more challenging and could have harmful repercussions in worldwide efforts to develop a vaccine and effective COVID-19 treatments, then-AMA President Patrice A. Harris, MD, MA, said at the time.

Defeating COVID-19 “requires the entire world working together,” Dr. Harris added.

In April, Dr. Harris said withdrawing from the WHO would be “a dangerous step in the wrong direction,” and noted that “fighting a global pandemic requires international cooperation.”

“Cutting funding to the WHO—rather than focusing on solutions—is a dangerous move at a precarious moment for the world,” she added.

The message regarding the need for a unified international effort was echoed in the statement from the physician leaders.

"As our nation and the rest of the world face a global health pandemic, a worldwide, coordinated response is more vital than ever,” they said. “This dangerous withdrawal not only impacts the global response against COVID-19, but also undermines efforts to address other major public health threats.”

The info is here.

Thursday, July 16, 2020

At Stake in Reopening Schools: ‘The Future of the Country’

Matt Peterson
barrons.com
Originally posted 10 July 20

Here is an excerpt:

We do have to think about this longer-term. We also have to think about it from an ethics standpoint, acknowledging the following. At least right now, the primary motivation behind closing schools—having children not be educated in school buildings—is a belief that keeping schools physically open with children congregating poses a risk of community transmission. Either to teachers directly or back to households and the wider community. Whether, and how much, it is risky for kids themselves remains an open question. We’re worried about multisystem inflammatory syndrome. But at least, right now, the evidence continues to suggest children are not themselves a particularly high-risk group for serious Covid disease.

From an ethics point of view, when one group is being burdened primarily to benefit other groups, that puts a very special onus on justifying that it is ethically OK. If we conclude it is the right thing to do ethically because of what’s at stake for the community, we have both to make sure that it’s justified, this disproportionate burden on children, and that we do everything we can to mitigate those burdens.

The info is here.

Tuesday, June 30, 2020

Want To See Your Therapist In-Person Mid-Pandemic? Think Again

Todd Essig
Forbes.com
Originally posted 27 June 20

Here is an excerpt:

Psychotherapy is built on a promise; you bring your suffering to this private place and I will work with you to keep you safe and help you heal. That promise is changed by necessary viral precautions. First, the possibility of contact tracing weakens the promise of confidentiality. “I promise to keep this private” changes to a promise to keep it private unless someone gets sick and I need to contact the local health department.

Even more powerful is the fact that a mid-pandemic in-person psychotherapy promise has to include all the ways we will protect each other from very real dangers, hardly the experience of psychological safety. There will even be a promise to pretend we are safe together even when we are doing so many things to remind us we are each the source of a potentially life-altering infection.

When I imagine how my caseload would react were I to begin mid-pandemic in-person work, like I did for a recent webinar for the NYS Psychological Association, I anticipate as many people welcoming the chance to work together on a shared project of viral safety as I do imagining those who would feel devastated or burdened. But even for the first group of willing co-participants, it is important to see that such a joint project of mutual safety is not psychotherapy. No anticipated reaction included the experience of psychological safety on which effective psychotherapy rests.

Rather than feeling safe enough to address the private and dark, patients/clients will each in their own way labor under the burden of keeping themselves, their families, their therapist, other patients, and office staff safe. The vigilance required to remain safe will inevitably reduce the therapeutic benefits one might hope would develop from being back in the office.

The article is here.