Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Safety.

Monday, June 15, 2020

Suicide Risk Increases Immediately After Gun Purchase

Psychiatric News Alert
Originally published 11 June 20

A study published in the New England Journal of Medicine expands on past research on the association between access to guns and suicide, finding that handgun ownership is associated with an elevated risk of suicide by firearm, particularly immediately after the gun is acquired.

Since the COVID-19 pandemic began, gun sales have sharply increased, an accompanying commentary pointed out. In March, Americans bought nearly two million guns, marking the second-highest monthly total since 1998, when the Federal Bureau of Investigation (FBI) began publishing such data.

“How will the current surge of gun purchases affect firearm-related violence?” wrote Chana A. Sacks, M.D., M.P.H., and Stephen J. Bartels, M.D., in their commentary. “With an additional 2 million guns now in households across the country at a time of widespread unemployment, social isolation, and acute national stress that is unprecedented in our lifetime, we urgently need to find out.”

Lead author David M. Studdert, LL.B., Sc.D., of the Stanford Law School and School of Medicine and colleagues tracked firearm ownership and mortality over 12 years (2004-2016) among 26.3 million adults in California. They used the California Statewide Voter Registration Database to form the cohort, as the database updates its information on registered voters in the state every year.

The researchers then used the California Department of Justice’s Dealer Record of Sale for details on which cohort members acquired handguns and when. Additionally, the California Death Statistical Master Files provided records of all deaths reported during the study period.
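The record linkage the authors describe (joining the voter-registration cohort to handgun-sale records and to the death files) can be sketched in a few lines. Everything below is an assumption for illustration only: the file names, column names, and pandas approach are hypothetical, not the study's actual pipeline.

```python
import pandas as pd

# Hypothetical file and column names, used only to illustrate the linkage
# described above; the actual study used restricted administrative records.
cohort = pd.read_csv("voter_cohort.csv")        # person_id, registration_date
purchases = pd.read_csv("handgun_sales.csv")    # person_id, purchase_date
deaths = pd.read_csv("death_master_file.csv")   # person_id, death_date, cause

# First handgun acquisition per cohort member
first_purchase = (purchases.groupby("person_id")["purchase_date"]
                  .min().rename("first_purchase").reset_index())
cohort = cohort.merge(first_purchase, on="person_id", how="left")

# Attach firearm-suicide deaths recorded during the study period
firearm_suicides = deaths.loc[deaths["cause"] == "firearm_suicide",
                              ["person_id", "death_date"]]
cohort = cohort.merge(firearm_suicides, on="person_id", how="left")

# Time from first handgun purchase to death, among owners who died by firearm suicide
owners = cohort.dropna(subset=["first_purchase", "death_date"]).copy()
owners["days_since_purchase"] = (
    pd.to_datetime(owners["death_date"]) - pd.to_datetime(owners["first_purchase"])
).dt.days
print(owners["days_since_purchase"].describe())
```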

The alert is here.

Monday, June 8, 2020

One Nation Under Guard

Samuel Bowles and Arjun Jayadev
The New York Times
Originally posted 15 Feb 2014
(and still relevant today)

Here is an excerpt:

What is happening in America today is both unprecedented in our history, and virtually unique among Western democratic nations. The share of our labor force devoted to guard labor has risen fivefold since 1890 — a year when, in case you were wondering, the homicide rate was much higher than today.

Is this the curse of affluence? Or of ethnic diversity? We don’t think so. The guard-labor share of employment in the United States is four times what it is in Sweden, where living standards rival America’s. And Britain, with its diverse population, uses substantially less guard labor than the United States.

In America, growing inequality has been accompanied by a boom in gated communities and armies of doormen controlling access to upscale apartment buildings. We did not count the doormen, or those producing the gates, locks and security equipment. One could quibble about the numbers; we have elsewhere adopted a broader definition, including prisoners, work supervisors with disciplinary functions, and others.

But however one totes up guard labor in the United States, there is a lot of it, and it seems to go along with economic inequality. States with high levels of income inequality — New York and Louisiana — employ twice as many security workers (as a fraction of their labor force) as less unequal states like Idaho and New Hampshire.

When we look across advanced industrialized countries, we see the same pattern: the more inequality, the more guard labor. As the graph shows, the United States leads in both.

The info is here.

Tuesday, May 26, 2020

Four concepts to assess your personal risk as the U.S. reopens

Leana Wen
The Washington Post
Originally posted 21 May 20

Here is an excerpt:

So what does that mean in terms of choices each of us makes — what’s safe to do and what’s not?

Here are four concepts from other harm-reduction strategies that can help to guide our decisions:

Relative risk. Driving is an activity that carries risk, which can be reduced by following the speed limit and wearing a seat belt. For covid-19, we can think of risk through three key variables: proximity, activity and time.

The highest-risk scenario is if you are in close proximity with someone who is infected, in an indoor space, for an extended period of time. That’s why when one person in the household becomes ill, others are likely to get infected, too.

Also, certain activities, such as singing, expel more droplets; in one case, a single infected person in choir practice spread covid-19 to 52 people, two of whom died.

The same goes for gatherings where people hug one another — funerals and birthdays can be such “superspreader” events. Conversely, there are no documented cases of someone acquiring covid-19 by passing a stranger while walking outdoors.

You can decrease your risk by modifying one of these three variables. If you want to see friends, avoid crowded bars, and instead host in your backyard or a park, where everyone can keep their distance.

Use your own utensils and, to be even safer, bring your own food and drinks.

Skip the hugs, kisses and handshakes. If you go to the beach, find areas where you can stay at least six feet away from others who are not in your household. Takeout food is the safest. If you really want a meal out, eating outdoors with tables farther apart will be safer than dining in a crowded indoor restaurant.

Businesses should also heed this principle as they are reopening, by keeping up telecommuting and staggered shifts, reducing capacity in conference rooms, and closing communal dining areas. Museums can limit not only the number of people allowed in at once, but also the amount of time people are allowed to spend in each exhibit.

Pooled risk. If you engage in high-risk activity and are around others who do the same, you increase everyone’s risk. Think of the analogy with safe-sex practices: Those with multiple partners have higher risk than people in monogamous relationships. As applied to covid-19, this means those who have very low exposure are probably safe to associate with one another.

This principle is particularly relevant for separated families that want to see one another. I receive many questions from grandparents who miss their grandchildren and want to know when they can see them again. If two families have both been sheltering at home with virtually no outside interaction, there should be no concern with them being with one another. Families can come together for day care arrangements this way if all continue to abide by strict social distancing guidelines in other aspects of their lives. (The equation changes when any one individual resumes higher-risk activities — returning to work outside the home, for example.)
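The pooled-risk idea comes down to simple arithmetic: if each outside contact independently carries a small chance of transmission, the chance that at least one contact transmits grows quickly with the number of contacts. A minimal sketch, using made-up probabilities chosen only for illustration:

```python
def pooled_risk(per_contact_risk: float, n_contacts: int) -> float:
    """Probability that at least one of n independent contacts transmits,
    assuming each contact carries the same per-contact transmission risk."""
    return 1 - (1 - per_contact_risk) ** n_contacts

# Illustrative (not epidemiological) numbers: a household that shelters in place
# versus one whose members resume many higher-risk interactions.
print(pooled_risk(0.01, 2))    # ~0.02 -> two low-risk contacts
print(pooled_risk(0.01, 30))   # ~0.26 -> many such contacts
```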

The info is here.

Monday, May 25, 2020

How Could the CDC Make That Mistake?

Alexis C. Madrigal & Robinson Meyer
The Atlantic
Originally posted 21 May 20

The Centers for Disease Control and Prevention is conflating the results of two different types of coronavirus tests, distorting several important metrics and providing the country with an inaccurate picture of the state of the pandemic. We’ve learned that the CDC is making, at best, a debilitating mistake: combining test results that diagnose current coronavirus infections with test results that measure whether someone has ever had the virus. The upshot is that the government’s disease-fighting agency is overstating the country’s ability to test people who are sick with COVID-19. The agency confirmed to The Atlantic on Wednesday that it is mixing the results of viral and antibody tests, even though the two tests reveal different information and are used for different reasons.

This is not merely a technical error. States have set quantitative guidelines for reopening their economies based on these flawed data points.

Several states—including Pennsylvania, the site of one of the country’s largest outbreaks, as well as Texas, Georgia, and Vermont—are blending the data in the same way. Virginia likewise mixed viral and antibody test results until last week, but it reversed course and the governor apologized for the practice after it was covered by the Richmond Times-Dispatch and The Atlantic. Maine similarly separated its data on Wednesday; Vermont authorities claimed they didn’t even know they were doing this.

The widespread use of the practice means that it remains difficult to know exactly how much the country’s ability to test people who are actively sick with COVID-19 has improved.
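The distortion is easy to see with a toy calculation: folding antibody tests into the denominator inflates apparent testing capacity and dilutes the positivity rate. The figures below are invented purely to illustrate the arithmetic; they are not CDC numbers.

```python
# Invented figures for illustration only
viral_tests, viral_positive = 100_000, 15_000        # diagnose current infection
antibody_tests, antibody_positive = 40_000, 2_000    # detect past infection

# Correct view: only viral tests measure the capacity to find people currently sick
viral_positivity = viral_positive / viral_tests

# Blended view (the practice described above): the denominator is inflated and
# the positivity rate is diluted, making testing look broader than it is
blended_tests = viral_tests + antibody_tests
blended_positivity = (viral_positive + antibody_positive) / blended_tests

print(f"viral-only positivity: {viral_positivity:.1%}")     # 15.0%
print(f"blended positivity:    {blended_positivity:.1%}")   # ~12.1%
print(f"apparent tests run:    {blended_tests:,} vs {viral_tests:,} viral tests")
```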

The info is here.

Wednesday, May 13, 2020

What To Do If You Need to See Patients In Office?

If you are a mental health professional who continues to see (some) patients in the office because of patient needs, the following chart may be helpful.  

To protect my patients, I imagine I am a carrier, even though I have no way of knowing because our government lacks the capacity for adequate COVID-19 testing.




Tuesday, April 28, 2020

What needs to happen before your boss can make you return to work

Mark Kaufman
www.mashable.com
Originally posted 24 April 20

Here is an excerpt:

But, there is a way for tens of millions of Americans to return to workplaces while significantly limiting how many people infect one another. It will require extraordinary efforts on the part of both employers and governments. This will feel weird, at first: Imagine regularly having your temperature taken at work, routinely getting tested for an infection or immunity, mandatory handwashing breaks, and perhaps even wearing a mask.

Yet, these are exceptional times. So restarting the economy and returning to workplace normalcy will require unparalleled efforts.

"This is truly unprecedented," said Christopher Hayes, a labor historian at the Rutgers School of Management and Labor Relations.

"This is like the 1918 flu and the Great Depression at the same time," Hayes said.

Yet unlike previous recessions and depressions over the last 100 years, most recently the Great Recession of 2008-2009, American workers must now ask themselves an unsettling question: "People now have to worry, ‘Is it safe to go to this job?’" said Hayes.

Right now, many employers aren't nearly prepared to tell workers in the U.S. to return to work and office spaces. To avoid infection, "the only tools you’ve got in your toolbox are the simple but hard-to-sustain public health tools like testing, contact tracing, and social distancing," explained Michael Gusmano, a health policy expert at the Rutgers School of Public Health.

"We’re not anywhere near a situation where you could claim that you can, with any credibility, send people back en masse now," Gusmano said.

The info is here.

Monday, April 27, 2020

Drivers are blamed more than their automated cars when both make mistakes

Awad, E., Levine, S., Kleiman-Weiner, M. et al.
Nat Hum Behav 4, 134–143 (2020).
https://doi.org/10.1038/s41562-019-0762-8

Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

From the Discussion:

Our central finding (diminished blame apportioned to the machine in dual-error cases) leads us to believe that, while there may be many psychological barriers to self-driving car adoption19, public over-reaction to dual-error cases is not likely to be one of them. In fact, we should perhaps be concerned about public underreaction. Because the public are less likely to see the machine as being at fault in dual-error cases like the Tesla and Uber crashes, the sort of public pressure that drives regulation might be lacking. For instance, if we were to allow the standards for automated vehicles to be set through jury-based court-room decisions, we expect that juries will be biased to absolve the car manufacturer of blame in dual-error cases, thereby failing to put sufficient pressure on manufacturers to improve car designs.

The article is here.

Monday, April 20, 2020

Europe plans to strictly regulate high-risk AI technology

Nicholas Wallace
sciencemag.org
Originally published 19 Feb 20

Here is an excerpt:

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The info is here.

Sunday, March 22, 2020

Our moral instincts don’t match this crisis

Yascha Mounk
The Atlantic
Originally posted March 19, 2020

Here is an excerpt:

There are at least three straightforward explanations.

The first has to do with simple ignorance. For those of us who have spent the past weeks obsessing about every last headline regarding the evolution of the crisis, it can be easy to forget that many of our fellow citizens simply don’t follow the news with the same regularity—or that they tune into radio shows and television networks that have, shamefully, been downplaying the extent of the public-health emergency. People crowding into restaurants or hanging out in big groups, then, may simply fail to realize the severity of the pandemic. Their sin is honest ignorance.

The second explanation has to do with selfishness. Going out for trivial reasons imposes a real risk on those who will likely die if they contract the disease. Though the coronavirus does kill some young people, preliminary data from China and Italy suggest that they are, on average, less strongly affected by it. For those who are far more likely to survive, it is—from a purely selfish perspective—less obviously irrational to chance such social encounters.

The third explanation has to do with the human tendency to make sacrifices for the suffering that is right in front of our eyes, but not the suffering that is distant or difficult to see.

The philosopher Peter Singer presented a simple thought experiment in a famous paper. If you went for a walk in a park, and saw a little girl drowning in a pond, you would likely feel that you should help her, even if you might ruin your fancy shirt. Most people recognize a moral obligation to help another at relatively little cost to themselves.

Then Singer imagined a different scenario. What if a girl was in mortal danger halfway across the world, and you could save her by donating the same amount of money it would take to buy that fancy shirt? The moral obligation to help, he argued, would be the same: The life of the distant girl is just as important, and the cost to you just as small. And yet, most people would not feel the same obligation to intervene.

The same might apply in the time of COVID-19. Those refusing to stay home may not know the victims of their actions, even if they are geographically proximate, and might never find out about the terrible consequences of what they did. Distance makes them unjustifiably callous.

The info is here.

Friday, February 21, 2020

Why Google thinks we need to regulate AI

Sundar Pichai
ft.com
Originally posted 19 Jan 20

Here are two excerpts:

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

(cut)

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
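"Testing AI decisions for fairness" can take many forms; one common check is whether a model's rate of favorable decisions differs across groups (demographic parity). The sketch below is a generic illustration of that kind of check, not Google's internal tooling.

```python
from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """Fraction of positive (favorable) decisions per group; decisions are 0/1,
    groups are labels aligned with decisions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

# Toy example: a model's accept/reject decisions for two applicant groups
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = positive_rate_by_group(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")   # {'A': 0.6, 'B': 0.4}, gap 0.20
```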

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.


Thursday, February 13, 2020

FDA and NIH let clinical trial sponsors keep results secret and break the law

Charles Piller
sciencemag.org
Originally posted 13 Jan 20

For 20 years, the U.S. government has urged companies, universities, and other institutions that conduct clinical trials to record their results in a federal database, so doctors and patients can see whether new treatments are safe and effective. Few trial sponsors have consistently done so, even after a 2007 law made posting mandatory for many trials registered in the database. In 2017, the National Institutes of Health (NIH) and the Food and Drug Administration (FDA) tried again, enacting a long-awaited “final rule” to clarify the law’s expectations and penalties for failing to disclose trial results. The rule took full effect 2 years ago, on 18 January 2018, giving trial sponsors ample time to comply. But a Science investigation shows that many still ignore the requirement, while federal officials do little or nothing to enforce the law.

(cut)

Contacted for comment, none of the institutions disputed the findings of this investigation. In all 4768 trials Science checked, sponsors violated the reporting law more than 55% of the time. And in hundreds of cases where the sponsors got credit for reporting trial results, they have yet to be publicly posted because of quality lapses flagged by ClinicalTrials.gov staff.

The info is here.

Wednesday, February 5, 2020

A Reality Check On Artificial Intelligence: Are Health Care Claims Overblown?

Liz Szabo
Kaiser Health News
Originally published 30 Dec 19

Here is an excerpt:

“Almost none of the [AI] stuff marketed to patients really works,” said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices ― such as ones that help people count their daily steps ― need less scrutiny than ones that diagnose or treat disease.

Some software developers don’t bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. “It’s not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal,” said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy’s report. “That’s not how the U.S. economy works.”

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

The info is here.

Tuesday, January 28, 2020

Examining clinician burnout in health industries

Cynda Hylton Rushton
Danielle Kress
Johns Hopkins Magazine
Originally posted 26 Dec 19


Here is an excerpt from the interview with Cynda Hylton Rushton:

How much is burnout really affecting clinicians?

Among nurses, 35-45% experience some form of burnout, with comparable rates among other providers and higher rates among physicians. It's important to note that burnout has been viewed as an occupational hazard rather than a mental health diagnosis. It is not a few days or even weeks of depletion or exhaustion. It is the cumulative, long-term distress and suffering that is slowly eroding the workforce and leading to significant job dissatisfaction and many leaving their professions. In some instances, serious health concerns and suicide can result.

What about the impact on patients?

Patient care can suffer when clinicians withdraw or are not fully engaged in their work. Moral distress, long hours, negative work environments, or organizational inefficiencies can all impact a clinician's ability to provide what they feel is quality, safe patient care. Likewise, patients are impacted when health care organizations are unable to attract and retain competent and compassionate clinicians.

What does this mean for nurses?

As the largest sector of the health care professions, nurses have the most patient interaction and are at the center of the health care team. Nurses are integral to helping patients to holistically respond to their health conditions, illness, or injury. If nurses are suffering from burnout and moral distress, the whole care team and the patient will experience serious consequences when nurses' capacities to adapt to the organizational and external pressures are eventually exceeded.

The info is here.

Monday, January 20, 2020

What Is Prudent Governance of Human Genome Editing?

Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.

Abstract

CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which are unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.

Here is an excerpt:

Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is at its beginning stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse. Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genome (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regards to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.

The info is here.

Wednesday, December 4, 2019

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

Department of Defense
Defense Innovation Board
Published November 2019

Here is an excerpt:

What DoD is Doing to Establish an Ethical AI Culture

DoD’s “enduring mission is to provide combat-credible military forces needed to deter war and protect the security of our nation.” As such, DoD seeks to responsibly integrate and leverage AI across all domains and mission areas, as well as business administration, cybersecurity, decision support, personnel, maintenance and supply, logistics, healthcare, and humanitarian programs. Notably, many AI use cases are non-lethal in nature. From making battery fuel cells more efficient to predicting kidney disease in our veterans to managing fraud in supply chain management, AI has myriad applications throughout the Department.

DoD is mission-oriented, and to complete its mission, it requires access to cutting edge technologies to support its warfighters at home and abroad. These technologies, however, are only one component to fulfilling its mission. To ensure the safety of its personnel, to comply with the Law of War, and to maintain an exquisite professional force, DoD maintains and abides by myriad processes, procedures, rules, and laws to guide its work. These are buttressed by DoD’s strong commitment to the following values: leadership, professionalism, and technical knowledge through the dedication to duty, integrity, ethics, honor, courage, and loyalty. As DoD utilizes AI in its mission, these values ground, inform, and sustain the AI Ethics Principles.

As DoD continues to comply with existing policies, processes, and procedures, as well as to create new opportunities for responsible research and innovation in AI, there are several cases where DoD is beginning to or already engaging in activities that comport with the calls from the DoD AI Strategy and the AI Ethics Principles enumerated here.

The document is here.

Wednesday, November 6, 2019

How to operationalize AI ethics

Khari Johnson
venturebeat.com
Originally published October 7, 2019

Here is an excerpt:

Tools, frameworks, and novel approaches

One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.

“The key one is ethical risk sweeps, periodically scheduling times to really go through what could go wrong and what are the ethical risks. Because I think a big part of ethics is thinking through what can go wrong before it does and having processes in place around what happens when there are mistakes or errors,” she said.

To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative as well as FairTest, a tool for “discovering unwarranted association within data-driven applications” from academic institutions like EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to ensure better user privacy.
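Of the techniques mentioned, federated learning is worth a concrete illustration: models are trained locally and only parameters are aggregated, so raw user data never leaves the device. Below is a minimal federated-averaging sketch in plain Python with invented toy data; it uses no particular library's API.

```python
# Minimal federated averaging on a one-parameter linear model y ~ w * x.
# Each "client" trains locally on its own data; only weights are shared.
clients = [
    [(1.0, 2.1), (2.0, 3.9)],   # client 1's private (x, y) pairs
    [(1.5, 3.2), (3.0, 5.8)],   # client 2's private (x, y) pairs
]

def local_step(w, data, lr=0.05):
    # One pass of gradient descent on squared error, using only local data
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(20):
    local_weights = [local_step(w_global, data) for data in clients]
    w_global = sum(local_weights) / len(local_weights)   # server averages updates

print(f"learned slope: {w_global:.2f}")   # approaches ~2, roughly the slope in the toy data
```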

In addition to resources recommended by panelists, Algorithm Watch maintains a running list of AI ethics guidelines. Last week, the group found that guidelines released in March 2018 by IEEE, the world’s largest association for professional engineers, have seen little adoption at Facebook and Google.

The info is here.

Tuesday, November 5, 2019

Will Robots Wake Up?

Susan Schneider
orbitermag.com
Originally published September 30, 2019

Machine consciousness, if it ever exists, may not be found in the robots that tug at our heartstrings, like R2D2. It may instead reside in some unsexy server farm in the basement of a computer science building at MIT. Or perhaps it will exist in some top-secret military program and get snuffed out, because it is too dangerous or simply too inefficient.

AI consciousness likely depends on phenomena that we cannot, at this point, gauge—such as whether some microchip yet to be invented has the right configuration, or whether AI developers or the public want conscious AI. It may even depend on something as unpredictable as the whim of a single AI designer, like Anthony Hopkins’s character in Westworld. The uncertainty we face moves me to a middle-of-the-road position, one that stops short of either techno-optimism (believing that technology can solve our problems) or biological naturalism.

This approach I call, simply, the “Wait and See Approach.”

In keeping with my desire to look at real-world considerations that speak to whether AI consciousness is even compatible with the laws of nature—and, if so, whether it is technologically feasible or even interesting to build—my discussion draws from concrete scenarios in AI research and cognitive science.

The info is here.

Saturday, November 2, 2019

Burnout in healthcare: the case for organisational change

A Montgomery, E Panagopoulou, A Esmail,
T Richards, & C Maslach
BMJ 2019; 366
doi: https://doi.org/10.1136/bmj.l4774
(Published 30 July 2019)

Burnout has become a big concern within healthcare. It is a response to prolonged exposure to occupational stressors, and it has serious consequences for healthcare professionals and the organisations in which they work. Burnout is associated with sleep deprivation, medical errors, poor quality of care, and low ratings of patient satisfaction. Yet often initiatives to tackle burnout are focused on individuals rather than taking a systems approach to the problem.

Evidence on the association of burnout with objective indicators of performance (as opposed to self report) is scarce in all occupations, including healthcare. But the few examples of studies using objective indicators of patient safety at a system level confirm the association between burnout and suboptimal care. For example, in a recent study, intensive care units in which staff had high emotional exhaustion had higher patient standardised mortality ratios, even after objective unit characteristics such as workload had been controlled for.
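The outcome measure in that example, the standardised mortality ratio, is simply observed deaths divided by the deaths expected given each unit's case mix. A small sketch of the calculation with invented numbers (not data from the cited study):

```python
# Standardised mortality ratio (SMR) = observed deaths / expected deaths,
# where "expected" is derived from each patient's predicted risk of death.
units = {
    # unit: (observed deaths, predicted death probabilities for its patients)
    "ICU-A": (12, [0.05, 0.10, 0.20, 0.08] * 25),   # 100 patients
    "ICU-B": (18, [0.05, 0.10, 0.20, 0.08] * 25),   # same case mix, more deaths
}

for unit, (observed, risks) in units.items():
    expected = sum(risks)                 # expected deaths given case mix
    smr = observed / expected
    print(f"{unit}: expected={expected:.1f}, observed={observed}, SMR={smr:.2f}")
```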

The link between burnout and performance in healthcare is probably underestimated: job performance can still be maintained even when burnt out staff lack mental or physical energy as they adopt “performance protection” strategies to maintain high priority clinical tasks and neglect low priority secondary tasks (such as reassuring patients). Thus, evidence that the system is broken is masked until critical points are reached. Measuring and assessing burnout within a system could act as a signal to stimulate intervention before it erodes quality of care and results in harm to patients.

Burnout does not just affect patient safety. Failing to deal with burnout results in higher staff turnover, lost revenue associated with decreased productivity, financial risk, and threats to the organisation’s long term viability because of the effects of burnout on quality of care, patient satisfaction, and safety. Given that roughly 10% of the active EU workforce is engaged in the health sector in its widest sense, the direct and indirect costs of burnout could be substantial.

The info is here.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
vox.com
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs equipped with a “write” function can enable new forms of brainwashing, theoretically enabling all sorts of people to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Wednesday, September 4, 2019

AI Ethics Guidelines Every CIO Should Read

John McClurg
www.informationweek.com
Originally posted August 7, 2019

Here is an excerpt:

Because AI technology and use cases are changing so rapidly, chief information officers and other executives are going to find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework.

The framework won’t be able to account for all the situations an enterprise will encounter on its journey to increased AI adoption. But it can lay the groundwork for future executive discussions. With a framework in hand, they can confidently chart a sensible path forward that aligns with the company’s culture, risk tolerance, and business objectives.

The good news is that CIOs and executives don’t need to come up with an AI ethics framework out of thin air. Many smart thinkers in the AI world have been mulling over ethics issues for some time and have published several foundational guidelines that an organization can use to draft a framework that makes sense for their business. Here are five of the best resources to get technology and ethics leaders started.

The info is here.