Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, July 15, 2019

Why parents are struggling to find mental health care for their children

Bernard Wolfson
Kaiser Health News/PBS.org
Originally posted May 7, 2019

Here is an excerpt:

Think about how perverse this is. Mental health professionals say that with children, early intervention is crucial to avoid more severe and costly problems later on. Yet even parents with good insurance struggle to find care for their children.

The U.S. faces a growing shortage of mental health professionals trained to work with young people — at a time when depression and anxiety are on the rise. Suicide was the No. 2 cause of death for children and young adults from age 10 to 24 in 2017, after accidents.

There is only one practicing child and adolescent psychiatrist in the U.S. for about every 1,800 children who need one, according to data from the American Academy of Child & Adolescent Psychiatry.

Not only is it hard to get appointments with psychiatrists and therapists, but the ones who are available often don’t accept insurance.

“This country currently lacks the capacity to provide the mental health support that young people need,” says Dr. Steven Adelsheim, director of the Stanford University psychiatry department’s Center for Youth Mental Health and Wellbeing.

The info is here.

Monday, July 1, 2019

How do you teach a machine right from wrong? Addressing the morality within Artificial Intelligence

Joseph Brean
The Kingston Whig Standard
Originally published May 30, 2019

Here is an excerpt:

AI “will touch or transform every sector and industry in Canada,” the government of Canada said in a news release in mid-May, as it named 15 experts to a new advisory council on artificial intelligence, focused on ethical concerns. Their goal will be to “increase trust and accountability in AI while protecting our democratic values, processes and institutions,” and to ensure Canada has a “human-centric approach to AI, grounded in human rights, transparency and openness.”

It is a curious project, helping computers be more accountable and trustworthy. But here we are. Artificial intelligence has disrupted the basic moral question of how to assign responsibility after decisions are made, according to David Gunkel, a philosopher of robotics and ethics at Northern Illinois University. He calls this the “responsibility gap” of artificial intelligence.

“Who is able to answer for something going right or wrong?” Gunkel said. The answer, increasingly, is no one.

It is a familiar problem that is finding new expressions. One example was the 2008 financial crisis, which reflected the disastrous scope of automated decisions. Gunkel also points to the success of Google’s AlphaGo, a computer program that has beaten the world’s best players at the famously complex board game Go. Go has too many possible moves for a computer to calculate and evaluate them all, so the program uses a strategy of “deep learning” to reinforce promising moves, thereby approximating human intuition. So when it won against the world’s top players, such as top-ranked Ke Jie in 2017, there was confusion about who deserved the credit. Even the programmers could not account for the victory. They had not taught AlphaGo to play Go. They had taught it to learn Go, which it did all by itself.

The info is here.

Sunday, June 9, 2019

German ethics council expresses openness to eventual embryo editing

Sharon Begley
www.statnews.com
Originally posted May 13, 2019

Here is an excerpt:

The council’s openness to human germline editing was notable, however. Because of the Nazis’ eugenics programs and horrific human medical experiments, Germany has historically been even warier than other Western countries of medical technologies that might violate human dignity or could be exploited for eugenic purposes. The country’s 1990 Embryo Protection Act prohibits germline modifications for the purpose of reproduction.

“Germany has been very reluctant to get involved with anything that could lead to a re-introduction of eugenic practices in their society,” Annas said.

Despite that history, a large majority of the council called further development and possible use of germline editing “a legitimate ethical goal when aimed at avoiding or reducing genetically determined disease risks,” it said in a statement. If the procedure can be shown not to harm embryos or the children they become, it added, then altering a gene that otherwise causes a devastating illness such as cystic fibrosis or sickle cell is acceptable.

While some ethicists and others argue against embryo editing on the ground that it violates the embryos’ dignity, the German council wrote, “the question also arises as to whether the renunciation of germline intervention, which could spare the people concerned severe suffering, would not violate their human dignity, too.” Similarly, failing to intervene in order to spare a future child pain and suffering “would at least have to be justified,” the council said, echoing arguments that some families with a history of inherited diseases have.

The info is here.

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.

The info is here.

Wednesday, May 1, 2019

The U.S. Healthcare Cost Crisis

Gallup
Report issued April 2019

Executive Summary

The high cost of healthcare in the United States is a significant source of apprehension and fear for millions of Americans, according to a new national survey by West Health and Gallup.

Relative to the quality of the care they receive, Americans overwhelmingly agree they pay too much and receive too little, and few have confidence that elected officials can solve the problem.

Americans in large numbers are borrowing money, skipping treatments and cutting back on household expenses because of high costs, and a large percentage fear a major health event could bankrupt them. More than three-quarters of Americans are also concerned that high healthcare costs could cause significant and lasting damage to the U.S. economy.

Despite the financial burden and fears caused by high healthcare costs, partisan filters lead to divergent views of the healthcare system at large: By a wide margin, more Republicans than Democrats consider the quality of care in the U.S. to be the best or among the best in the world — all while the U.S. significantly outspends other advanced economies on healthcare with dismal outcomes on basic health indicators such as infant mortality and heart attack mortality.

Republicans and Democrats are about equally likely to resort to drastic measures, from deferring care to cutting back on other expenses including groceries, clothing, and gas and electricity. And many do not see the situation improving. In fact, most believe costs will only increase. When given the choice between a freeze in healthcare costs for the next five years or a 10% increase in household income, 61% of Americans report that their preference is a freeze in costs.

West Health and Gallup’s major study included interviews with members of Gallup’s National Panel of Households and healthcare industry experts, as well as a nationally representative survey of 3,537 randomly selected adults.

The report can be downloaded here.

Friday, April 26, 2019

Social media giants no longer can avoid moral compass

Don Hepburn
thehill.com
Originally published April 1, 2019

Here is an excerpt:

There are genuine moral, legal and technical dilemmas in addressing the challenges raised by the ubiquitous nature of the not-so-new social media conglomerates. Why, then, are social media giants avoiding the moral compass, evading legal guidelines and ignoring technical solutions available to them? The answer is, their corporate culture refuses to be held accountable to the same standards the public has applied to all other global corporations for the past five decades.

A wholesale change of culture and leadership is required within the social media industry. The culture of “everything goes” because “we are the future” needs to be more than tweaked; it must come to an end. Like any large conglomerate, social media platforms cannot ignore the public’s demand that they act with some semblance of responsibility. Just like the early stages of the U.S. coal, oil and chemical industries, the social media industry is impacting not only our physical environment but the social good and public safety. No serious journalism organization would ever allow a stranger to write their own hate-filled stories (with photos) for their newspaper’s daily headline — that’s why there’s a position called editor-in-chief.

If social media giants insist they are open platforms, then anyone can purposefully exploit them for good or evil. But if social media platforms demonstrate no moral or ethical standards, they should be subject to some form of government regulation. We have regulatory environments where we see the need to protect the public good against the need for profit-driven enterprises; why should social media platforms be given preferential treatment?

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Wednesday, April 17, 2019

Warnings of a Dark Side to A.I. in Health Care

Cade Metz and Craig S. Smith
The New York Times
Originally published March 21, 2019

Here is an excerpt:

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.

The info is here.
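
A brief technical aside: the mechanism the article describes can be illustrated with a minimal, hypothetical sketch in plain Python/NumPy. The toy linear classifier below is my own construction, not any system from the article, but it captures the core idea behind gradient-sign attacks: nudge every input feature a tiny amount in the direction that most changes the model's score, and the many small nudges add up to a flipped prediction.

import numpy as np

# Toy "model": a linear classifier that predicts class 1 whenever w . x > 0.
# Real attacks target deep networks, but the arithmetic of the attack is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                # pretend these are trained weights

def predict(v):
    return int(w @ v > 0)

# Build an input the model classifies as class 1 with a modest margin (score = +2).
x = rng.normal(size=100)
x = x - ((w @ x) - 2.0) * w / (w @ w)   # shift x so that w . x is exactly 2.0

# Gradient-sign step: for a linear score, the gradient with respect to the input
# is just w, so subtract a small epsilon times sign(w) from every feature.
epsilon = 0.05                          # about 5% of the typical feature scale
x_adv = x - epsilon * np.sign(w)

print("clean prediction:      ", predict(x))       # expected: 1
print("adversarial prediction:", predict(x_adv))   # expected: 0
print("largest single change: ", np.abs(x_adv - x).max())  # equals epsilon

Each individual feature moves by only 0.05, yet across 100 features the score shifts by roughly 4 points, more than enough to cross the decision boundary. Deep networks are vastly more complicated, but they expose the same lever: the gradient tells an adversary exactly which direction to push each pixel.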

Sunday, April 7, 2019

In Spain, prisoners’ brains are being electrically stimulated in the name of science

Sigal Samuel
vox.com
Originally posted March 9, 2019

A team of scientists in Spain is getting ready to experiment on prisoners. If the scientists get the necessary approvals, they plan to start a study this month that involves placing electrodes on inmates’ foreheads and sending a current into their brains. The electricity will target the prefrontal cortex, a brain region that plays a role in decision-making and social behavior. The idea is that stimulating more activity in that region may make the prisoners less aggressive.

This technique — transcranial direct current stimulation, or tDCS — is a form of neurointervention, meaning it acts directly on the brain. Using neurointerventions in the criminal justice system is highly controversial. In recent years, scientists and philosophers have been debating under what conditions (if any) it might be ethical.

The Spanish team is the first to use tDCS on prisoners. They’ve already done it in a pilot study, publishing their findings in Neuroscience in January, and they were all set to implement a follow-up study involving at least 12 convicted murderers and other inmates this month. On Wednesday, New Scientist broke news of the upcoming experiment, noting that it had approval from the Spanish government, prison officials, and a university ethics committee. The next day, the Interior Ministry changed course and put the study on hold.

Andrés Molero-Chamizo, a psychologist at the University of Huelva and the lead researcher behind the study, told me he’s trying to find out what led to the government’s unexpected decision. He said it makes sense to run such an experiment on inmates because “prisoners have a high level of aggressiveness.”

The info is here.

Sunday, March 31, 2019

Is Ethical A.I. Even Possible?

Cade Metz
The New York Times
Originally posted March 1, 2019

Here is an excerpt:

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or people with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

The info is here.

Tuesday, March 26, 2019

Does AI Ethics Have a Bad Name?

Calum Chace
Forbes.com
Originally posted March 7, 2019

Here is an excerpt:

Artificial intelligence is a technology, and a very powerful one, like nuclear fission. It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire. Like nuclear fission, electricity and fire, AI can have positive impacts and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.

It's the bias that concerns people in the AI ethics community.  They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether.  They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible so that in advance or in retrospect, we can check for sources of bias and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”?  We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics?  There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent.

The info is here.

Monday, March 25, 2019

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability

Alex John London
The Hastings Center Report
Volume 49, Issue 1, January/February 2019, Pages 15-21

Abstract

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.

The info is here.
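
To make London's tradeoff concrete, here is a small, hypothetical sketch using scikit-learn on synthetic data (nothing here comes from the paper itself). When the outcome depends on an interaction among risk factors, a model simple enough for a clinician to read off its coefficients tends to lose accuracy to a more flexible model that offers no per-case rationale.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "patients": the outcome hinges on a nonlinear interaction of risk factors,
# the kind of structure flexible models can exploit and plain linear models miss.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))
y = ((X[:, 0] * X[:, 1] > 0) ^ (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: one coefficient per feature that a clinician could inspect and question.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Opaque: hundreds of trees vote, with no short human-readable rationale per case.
opaque = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(simple.score(X_test, y_test), 3))
print("random forest accuracy:      ", round(opaque.score(X_test, y_test), 3))

In this toy setup the flexible model typically scores far higher while the linear model hovers near chance, because no single feature carries information on its own; whether that gain in accuracy justifies ceding the rationale for individual decisions is exactly the question the paper takes up.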

Sunday, March 24, 2019

An Ethical Obligation for Bioethicists to Utilize Social Media

Herron, PD
Hastings Cent Rep. 2019 Jan;49(1):39-40.
doi: 10.1002/hast.978.

Here is an excerpt:

Unfortunately, it appears that bioethicists are no better informed than other health professionals, policy experts, or (even) elected officials, and they are sometimes resistant to becoming informed. But bioethicists have a duty to develop our knowledge and usefulness with respect to social media; many of our skills can and should be adapted to this area. There is growing evidence of the power of social media to foster dissemination of misinformation. The harms associated with misinformation or “fake news” are not new threats. Historically, there have always been individuals or organized efforts to propagate false information or to deceive others. Social media and other technologies have provided the ability to rapidly and expansively share both information and misinformation. Bioethics serves society by offering guidance about ethical issues associated with advances in medicine, science, and technology. Much of the public’s conversation about and exposure to these emerging issues occurs online. If we bioethicists are not part of the mix, we risk yielding to alternative and less authoritative sources of information. Social media’s transformative impact has led some to view it as not just a personal tool but the equivalent to a public utility, which, as such, should be publicly regulated. Bioethicists can also play a significant part in this dialogue. But to do so, we need to engage with social media. We need to ensure that our understanding of social media is based on experiential use, not just abstract theory.

Bioethics has expanded over the past few decades, extending beyond the academy to include, for example, clinical ethics consultants and leadership positions in public affairs and public health policy. These varied roles bring weighty responsibilities and impose a need for critical reflection on how bioethicists can best serve the public interest in a way that reflects and is accountable to the public’s needs.

Wednesday, March 20, 2019

Israel Approves Compassionate Use of MDMA to Treat PTSD

Ido Efrati
www.haaretz.com
Originally posted February 10, 2019

MDMA, popularly known as ecstasy, is a drug more commonly associated with raves and nightclubs than a therapist’s office.

Emerging research has shown promising results in using this “party drug” to treat patients suffering from post-traumatic stress disorder, and Israel’s Health Ministry has just approved the use of MDMA to treat dozens of patients.

MDMA is classified in Israel as a “dangerous drug”; recreational use is illegal, and therapeutic use has yet to be formally approved and is still in clinical trials.

However, this treatment is deemed “compassionate use,” which allows drugs that are still in development to be made available to patients outside of a clinical trial due to the lack of effective alternatives.

The info is here.

Monday, March 18, 2019

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Forbes.com
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest we have come to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chat, scale support for people at risk of suicide, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts on social media.

But I argue these malicious applications are already possible without this AI. There exist other public models which can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because A) it sets a bad precedent for open research, B) keeps companies from improving their services, C) unnecessarily hypes these results and D) may trigger unnecessary fears about AI in the general public.

The info is here.
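
For readers who have never run a text generator, the sketch below shows what prompting one looks like in practice. It uses the small GPT-2 model that OpenAI did release publicly, loaded through the Hugging Face transformers package; that toolkit choice, and the prompt, are my own assumptions for illustration rather than anything specified in the article.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the publicly released small GPT-2 model and its tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Encode a prompt and sample a short continuation.
prompt = "The ethics of releasing powerful text-generating models"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=60,                        # prompt plus roughly 50 generated tokens
        do_sample=True,                       # sample instead of always taking the top token
        top_k=50,                             # restrict sampling to the 50 most likely tokens
        pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Nothing in these few lines is inherently helpful or harmful; the same call can draft a patient-facing explanation or a misleading headline, which is the dual-use tension Falcon is weighing.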

Friday, March 15, 2019

Ethical considerations on the complicity of psychologists and scientists in torture

Evans NG, Sisti DA, Moreno JD
Journal of the Royal Army Medical Corps 
Published Online First: 20 February 2019.
doi: 10.1136/jramc-2018-001008

Abstract

Introduction 
The long-standing debate on medical complicity in torture has overlooked the complicity of cognitive scientists—psychologists, psychiatrists and neuroscientists—in the practice of torture as a distinct phenomenon. In this paper, we identify the risk of the re-emergence of torture as a practice in the USA, and the complicity of cognitive scientists in these practices.

Methods 
We review arguments for physician complicity in torture. We argue that these defences fail to defend the complicity of cognitive scientists. We address objections to our account, and then provide recommendations for professional associations in resisting complicity in torture.

Results 
Arguments for cognitive scientist complicity in torture fail when those actions stem from the same reasons as physician complicity. Cognitive scientist involvement in the torture programme has, from the outset, been focused on the outcomes of interrogation rather than supportive care. Any possibility of a therapeutic relationship between cognitive therapists and detainees is fatally undermined by therapists’ complicity with torture.

Conclusion 
Professional associations ought to strengthen their commitment to refraining from engaging in any aspect of torture. They should also move to protect whistle-blowers against torture programmes who are members of their association. If the political institutions that are supposed to prevent the practice of torture are not strengthened, cognitive scientists should take collective action to compel intelligence agencies to refrain from torture.

Monday, January 28, 2019

Second woman carrying gene-edited baby, Chinese authorities confirm

Agence France-Presse
Originally posted January 21, 2019

Photo caption: Zhou Xiaoqin, left, loads Cas9 protein and PCSK9 sgRNA molecules into a fine glass pipette as Qin Jinzhou watches at a laboratory in Shenzhen in southern China.

A second woman became pregnant during the experiment to create the world’s first genetically edited babies, Chinese authorities have confirmed, as the researcher behind the claim faces a police investigation.

He Jiankui shocked the scientific community last year after announcing he had successfully altered the genes of twin girls born in November to prevent them contracting HIV.

He had told a human genome forum in Hong Kong there had been “another potential pregnancy” involving a second couple.

A provincial government investigation has since confirmed the existence of the second mother and that the woman was still pregnant, the official Xinhua news agency reported.

The expectant mother and the twin girls from the first pregnancy will be put under medical observation, an investigator told Xinhua.

The info is here.

Monday, January 14, 2019

The Amazing Ways Artificial Intelligence Is Transforming Genomics and Gene Editing

Bernard Marr
Forbes.com
Originally posted November 16, 2018

Here is an excerpt:

Another thing experts are working to resolve in the process of gene editing is how to prevent off-target effects—when the tools mistakenly work on the wrong gene because it looks similar to the target gene.

Artificial intelligence and machine learning help make gene editing initiatives more accurate, cheaper and easier.

The future for AI and gene technology is expected to include pharmacogenomics, genetic screening tools for newborns, enhancements to agriculture and more. While we can't predict the future, one thing is for sure: AI and machine learning will accelerate our understanding of our own genetic makeup and those of other living organisms.

The info is here.

Wednesday, January 9, 2019

'Should we even consider this?' WHO starts work on gene editing ethics

Agence France-Presse
Originally published 3 Dec 2018

The World Health Organization is creating a panel to study the implications of gene editing after a Chinese scientist controversially claimed to have created the world’s first genetically edited babies.

“It cannot just be done without clear guidelines,” Tedros Adhanom Ghebreyesus, the head of the UN health agency, said in Geneva.

The organisation was gathering experts to discuss rules and guidelines on “ethical and social safety issues”, added Tedros, a former Ethiopian health minister.

Tedros made the comments after a medical trial led by Chinese scientist He Jiankui claimed to have successfully altered the DNA of twin girls, whose father is HIV-positive, to prevent them from contracting the virus.

His experiment has prompted widespread condemnation from the scientific community in China and abroad, as well as a backlash from the Chinese government.

The info is here.

Friday, December 14, 2018

Why Health Professionals Should Speak Out Against False Beliefs on the Internet

Joel T. Wu and Jennifer B. McCormick
AMA J Ethics. 2018;20(11):E1052-1058.
doi: 10.1001/amajethics.2018.1052.

Abstract

Broad dissemination and consumption of false or misleading health information, amplified by the internet, poses risks to public health and problems for both the health care enterprise and the government. In this article, we review government power for, and constitutional limits on, regulating health-related speech, particularly on the internet. We suggest that government regulation can only partially address false or misleading health information dissemination. Drawing on the American Medical Association’s Code of Medical Ethics, we argue that health care professionals have responsibilities to convey truthful information to patients, peers, and communities. Finally, we suggest that all health care professionals have essential roles in helping patients and fellow citizens obtain reliable, evidence-based health information.

Here is an excerpt:

We would suggest that health care professionals have an ethical obligation to correct false or misleading health information, share truthful health information, and direct people to reliable sources of health information within their communities and spheres of influence. After all, health and well-being are values shared by almost everyone. Principle V of the AMA Principles of Medical Ethics states: “A physician shall continue to study, apply, and advance scientific knowledge, maintain a commitment to medical education, make relevant information available to patients, colleagues, and the public, obtain consultation, and use the talents of other health professionals when indicated” (italics added). And Principle VII states: “A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health” (italics added). Taken together, these principles articulate an ethical obligation to make relevant information available to the public to improve community and public health. In the modern information age, wherein the unconstrained and largely unregulated proliferation of false health information is enabled by the internet and medical knowledge is no longer privileged, these 2 principles have a special weight and relevance.