Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Audit.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There's nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems you can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve to a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems to ensure such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.

Thursday, January 3, 2019

Why We Need to Audit Algorithms

James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, & Vic Katyal
Harvard Business Review
Originally published November 28, 2018

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?
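One way to picture what such an audit could examine, purely as a hypothetical sketch rather than anything the authors propose, is a simple check of whether a model's favorable decisions are distributed evenly across demographic groups. The data, group labels, and the 0.8 flag threshold below are all illustrative assumptions.

```python
# A minimal, illustrative bias-audit check (hypothetical data and threshold).
# It computes the share of favorable decisions per group and the ratio between
# the lowest and highest rates, one disparity measure an auditor might report.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the share of favorable (True) decisions for each group."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        favorable[group] += int(decision)
    return {g: favorable[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit sample: model decisions and each subject's group label.
    decisions = [True, False, True, True, False, False, True, False, True, False]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    rates = selection_rates(decisions, groups)
    ratio = disparate_impact_ratio(rates)
    print(f"Selection rates by group: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}")  # values below 0.8 are often flagged
```

A real audit would go far beyond one summary statistic, but the structure mirrors the financial-audit analogy: an independent party verifying, from the outside, a claim the “black box” makes about its own behavior.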

The info is here.

Wednesday, January 2, 2019

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.


In most cases, intuition serves as the unacknowledged bridge from a descriptive account to a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
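To make the abstract's distinction concrete, here is a small sketch of my own (it assumes scikit-learn is available and is not drawn from the Article): printing a decision tree's learned rules gives a sensible description of the rules, addressing inscrutability, yet it says nothing about why those thresholds are what they are, which is the separate problem of nonintuitiveness.

```python
# Illustrative only: a tiny, readable model, assuming scikit-learn is installed.
# The printed rules describe *what* the model does; they do not explain *why*
# those particular splits are normatively defensible.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# A "sensible description of the rules": human-readable if/else thresholds.
print(export_text(model, feature_names=list(data.feature_names)))
```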

The info is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.

The information is here.

Friday, November 11, 2016

The map is not the territory: medical records and 21st century practice

Stephen A Martin & Christine A Sinsky
The Lancet
Published: 25 April 2016

Summary

Documentation of care is at risk of overtaking the delivery of care in terms of time, clinician focus, and perceived importance. The medical record as currently used for documentation contributes to increased cognitive workload, strained clinician–patient relationships, and burnout. We posit that a near verbatim transcript of the clinical encounter is neither feasible nor desirable, and that attempts to produce this exact recording are harmful to patients, clinicians, and the health system. In this Viewpoint, we focus on the alternative constructions of the medical record to bring them back to their primary purpose—to aid cognition, communicate, create a succinct account of care, and support longitudinal comprehensive care—thereby to support the building of relationships and medical decision making while decreasing workload.

Here are two excerpts:

While our vantage point is American, documentation guidelines are part of a global tapestry of what has been termed technogovernance, a bureaucratic model in which professionals' behaviour is shaped and manipulated by tight regulatory policies.

(cut)

In 1931, the scientist Alfred Korzybski introduced the phrase "the map is not the territory" to suggest that the representation of reality is not reality itself. In health care, creating the map (ie, the clinical record) can take on more importance and consume more resources than providing care itself. Indeed, more time may be spent documenting care than delivering care. In addition, fee-for-service payment arrangements pay for the map (the medical note), not the territory (the actual care). Readers of contemporary electronic notes, composed generously of auto-text output, copy-forward text, and boilerplate statements for compliance, billing, and performance measurement, understand all too well the gap between the map and the territory, and more profoundly, between what is done to patients in service of creating the map and what patients actually need.

Contemporary medical records are used for purposes that extend beyond supporting patient and caregiver. Records are used in quality evaluations, practitioner monitoring, practice certifications, billing justification, audit defence, disability determinations, health insurance risk assessments, legal actions, and research.

Wednesday, September 21, 2011

Antipsychotics overprescribed in nursing homes

By M. Price
September 2011, Volume 42, No. 8
Print Version: Page 11

Physicians are widely prescribing antipsychotics to people in nursing homes for off-label conditions such as dementia, and Medicare is largely picking up the bill, even though Medicare guidelines don't allow for off-label prescription reimbursements, according to an audit released in May by the U.S. Department of Health and Human Services Office of the Inspector General.

The findings underscore the fact that antipsychotics are often used when behavioral treatments would be more effective, psychologists say.

The office reviewed Medicare claims of people age 65 and older living in nursing homes in 2007—the most recent data at the time the study began—and found that 51 percent of all claims contained errors, resulting in $116 million worth of antipsychotics such as Abilify, Risperdal and Zyprexa being charged to Medicare for people whose conditions didn't match the drugs' intended uses. Among the audit's findings are:
  • 14 percent of the 2.1 million elderly people living in nursing homes use Medicare to pay for at least one antipsychotic prescription.
  • 83 percent of all Medicare claims for antipsychotics are, based on medical reviews, prescribed for off-label conditions, specifically dementia.
  • 22 percent of the claims for antipsychotics do not comply with the Centers for Medicare and Medicaid Services' guidelines outlining how drugs should be administered, including those guidelines stating that nursing home residents should not receive excessive doses and doses over excessive periods of time.
The report suggests that Medicare overseers reassess their nursing home certification processes and develop methods besides medical review to confirm that medications are prescribed for appropriate conditions.

Why such high rates of overprescription for antipsychotics? HHS Inspector General Daniel Levinson argued in the report that pharmaceutical companies' marketing tactics are often to blame for the overprescribing of antipsychotics. Victor Molinari, PhD, a geropsychologist at the University of South Florida in Tampa, says that another important issue is the dearth of psychologists trained to provide behavioral interventions to people in nursing homes. While he agrees that people in nursing homes are taking too much antipsychotic medication, he believes nursing home physicians are often responding to a lack of options.

Many nursing home administrators are quite savvy in their mental health knowledge and would prefer to offer their residents the option of behavioral treatments, Molinari says, but when residents need immediate calming, physicians will turn to antipsychotic medication because it's quick and available. Additionally, he says, many nursing home staff aren't educated enough about nonmedical options, so they go straight for the antipsychotics.

"It follows the saying, 'If your only tool is a hammer, everything is a nail,'" he says. "Nursing homes are not just straitjacketing residents with medications as a matter of course, but because there are a host of barriers to giving them optimal care."