Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Human Error.

Monday, December 9, 2019

Escaping Skinner's Box: AI and the New Era of Techno-Superstition

John Danaher
Philosophical Disquisitions
Originally posted October 10, 2019

Here is an excerpt:

The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the workers, the more the workers either made outcomes worse or limited their own agency.

It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI; fair AI etc) to create an illusion of control.

(cut)

These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.

The info is here.

Friday, November 29, 2019

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad and others
Nature Human Behaviour (2019)
Published: 28 October 2019


Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

The research is here.

Wednesday, December 19, 2018

Hackers are not main cause of health data breaches

Lisa Rapaport
Reuters News
Originally posted November 19, 2018

Most health information data breaches in the U.S. in recent years haven’t been the work of hackers but instead have been due to mistakes or security lapses inside healthcare organizations, a new study suggests.

Another 25 percent of cases involved employee errors like mailing or emailing records to the wrong person, sending unencrypted data, taking records home or forwarding data to personal accounts or devices.

“More than half of breaches were triggered by internal negligence and thus are to some extent preventable,” said study coauthor Ge Bai of the Johns Hopkins Carey Business School in Washington, D.C.

The info is here.

Sunday, August 14, 2016

The Ethics of Artificial Intelligence in Intelligence Agencies

Cortney Weinbaum
The National Interest
Originally published July 18, 2016

Here is an excerpt:

Consider what could happen if the intelligence community creates a policy similar to the Pentagon directive and requires a human operator be allowed to intervene at any moment. One day the computer warns of an imminent attack, but the human analyst disagrees with the AI intelligence assessment. Does the CIA warn the president that an attack is about to occur? How is the human analyst’s assessment valued against the AI-generated intelligence?

Or imagine that a highly sophisticated foreign country infiltrates the most sensitive U.S. intelligence systems, gains access to the algorithms and replaces the programming code with its own. The hacked AI system is no longer capable of providing accurate intelligence on that country.

The article is here.

Tuesday, May 28, 2013

Learning From Litigation

By Joanna C. Schwartz
The New York Times - Op Ed
Originally published May 16, 2013

MUCH of the discussion over the Affordable Care Act has focused on whether it will bring down health care costs. Less attention has been paid to another goal of the act: improving patient safety. Each year tens of thousands of people die, and hundreds of thousands more are injured, as a result of medical error.

Experts agree that the best way to reduce medical error is to gather and analyze information about past errors with an eye toward improving future care. But many believe that a major barrier to doing so is the medical malpractice tort system: the threat of being sued is believed to prevent the kind of transparency necessary to identify and learn from errors when they occur.

New evidence, however, contradicts the conventional wisdom that malpractice litigation compromises the patient safety movement’s call for transparency. In fact, the opposite appears to be occurring: the openness and transparency promoted by patient safety advocates appear to be influencing hospitals’ responses to litigation risk.

I recently surveyed more than 400 people responsible for hospital risk management, claims management and quality improvement in health care centers around the country, in cooperation with the American Society of Health Care Risk Managers, and I interviewed dozens more.

The entire story is here.

Sunday, May 26, 2013

Owning Our Mistakes

By Nate Kreuter
Inside Higher Ed - Career Advice
Originally published May 15, 2013

Some of the columns that I write here at Inside Higher Ed arise from a really basic formula. It goes something like this: I make a mistake at work. I realize my error, or am compelled by another party to realize it, and I take corrective action. Then I write a column addressing the mistake in general terms, in hopes of perhaps removing a little of the trial and error from this whole higher education gig for a reader or two. Somewhat less frequently I simply observe the mistake of another and then write a column. I probably couldn’t keep up with this column without the steady stream of mistakes I make myself. Maybe my mistakes are job security of a strange sort.

I probably could even use this venue to make a public promise regarding my mistakes to my colleagues in my department, college, university, and across my discipline. Here goes: I promise you all that I’ll screw up again one day. I don’t know exactly how and I don’t know exactly when, but I promise to bungle something. Maybe just in a small way. Maybe in a big way. Who knows?

But here’s what I also promise: I promise to own up to whatever mistakes I make as soon as I recognize them, to do everything in my power to correct them, and to do my damnedest not to repeat them. This is, I think and I hope, what it means to be a good colleague. I certainly would not ask a colleague for more, but I also expect no less.

If to err is human, then 'fessing up is humane. Humane for ourselves and humane for our fellows.

The entire post is here.

Thursday, April 4, 2013

Fewer Hours for Doctors-in-Training Leading To More Mistakes

By Alexandra Sifferlin
Time
Originally published March 26, 2013

Giving residents less time on duty and more time to sleep was supposed to lead to fewer medical errors. But the latest research shows that’s not the case. What’s going on?

Since 2011, new regulations restricting the number of continuous hours first-year residents spend on-call cut the time that trainees spend at the hospital during a typical duty session from 24 hours to 16 hours. Excessively long shifts, studies showed, were leading to fatigue and stress that hampered not just the learning process, but the care these doctors provided to patients.

And there were tragic examples of the high cost of this exhausting schedule. In 1984, 18-year-old Libby Zion, who was admitted to a New York City hospital with a fever and convulsions, was treated by residents who ordered opiates and restraints when she became agitated and uncooperative. Busy overseeing other patients, the residents didn’t evaluate Zion again until hours later, by which time her fever had soared to 107 degrees; she went into cardiac arrest and died. The case highlighted the enormous pressures on doctors-in-training, and the need for reform in the way residents were taught. In 1987, a New York state commission limited the number of hours that doctors could train in the hospital to 80 each week, far fewer than the 100-hour weeks with 36-hour “call” shifts that were the norm at the time. In 2003, the Accreditation Council for Graduate Medical Education followed suit with rules for all programs mandating that trainees work no more than 24 consecutive hours.

The entire article is here.

Monday, September 26, 2011

HHS: More than 5.4M patients affected by data breaches in 2010

Written by the Editorial Staff of CMIO.net
In the U.S. Department of Health and Human Services’ annual report to Congress, Secretary Kathleen Sebelius reported that between Jan. 1, 2010, and Dec. 31, 2010, breaches involving 500 or more individuals made up less than 1 percent of the breaches reported but accounted for more than 99 percent of the more than 5.4 million individuals who were affected.

As part of the Health IT for Economic and Clinical Health (HITECH) Act, the HHS secretary is required to annually report to Congress on the number and nature of data breaches, and actions taken to respond to the breaches.

The number is growing: between Sept. 23, 2009, and Dec. 31, 2009, breaches involving 500 or more individuals likewise made up less than 1 percent of reports but accounted for more than 99 percent of the more than 2.4 million individuals affected by a breach of protected health information. The largest breaches occurred as a result of theft, error or failure to adequately secure protected health information. The greatest number of incidents resulted from human or technological error and involved the protected health information of just one individual, HHS’ report said.

The largest breaches in 2010, much like 2009, occurred as a result of a theft, HHS reported. However, compared with 2009, the number of individuals affected by the loss of electronic media or paper records containing protected health information in 2010 was greater than the number of individuals affected by unauthorized access or human error.

The report said the 2010 incidents involved an additional category, improper disposal of paper records by a covered entity or business associate. The greatest number of reported incidents in 2010 resulted from small breaches involving human or technological error, with the most common incidents involving protected health information of only one or two individuals.

HHS said in its report that the breach notification requirements are achieving their objectives: Increasing public transparency of breaches and increasing accountability of the covered entities.

The secretary indicated that covered entities and business associates are providing breach notifications. Millions of affected individuals are receiving notifications, local media are being notified in the regions affected, and the secretary is receiving breach reports. To provide increased public transparency, information about breaches involving 500 or more individuals is available on the Office for Civil Rights (OCR) website.

Also, the report said that more entities are taking remedial action to provide relief and mitigation to individuals and taking further action to prevent future breaches. In addition, OCR continues to exercise its oversight responsibility for reviewing and responding to and investigating breaches involving 500 or more individuals.

More than 250 breaches involving 500 or more individuals occurred in 2009 and 2010, and OCR has closed approximately 76 cases where it determined that the covered entity properly complied with the notification requirements, and corrective actions were taken. In the remaining cases, OCR continues to investigate and is working with the covered entities to ensure remedial action is taken to prevent future incidents.

For breaches involving fewer than 500 individuals, a covered entity must notify the secretary. HHS received approximately 5,521 reports of smaller breaches that occurred between Sept. 23, 2009, and Dec. 31, 2009. These smaller breaches affected approximately 12,000 individuals. HHS received more than 25,000 reports of smaller breaches occurring between Jan. 1, 2010, and Dec. 31, 2010. These smaller breaches affected more than 50,000 individuals.

The majority of the smaller breaches involved misdirected communications. Often, a clinical or claims record was mistakenly mailed or faxed to the wrong individual. In other instances, test results were sent to the wrong patient, files were attached to the wrong record, e-mails were sent to the wrong address and member ID cards were mailed to the wrong individuals. HHS said the covered entities reported fixing “glitches” in software that incorrectly compiled patient lists, revised policies and procedures, and trained or retrained employees who mishandled protected health information.