Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Regulations. Show all posts

Monday, December 16, 2019

Courts Strike Down Trump’s ‘Refusal of Care’ Conscience Rule

Alicia Gallegos
mdedge.com
Originally posted November 7, 2019

A federal court has struck down a Trump administration rule that would have allowed clinicians to refuse to provide medical care to patients for religious or moral reasons.

In a Nov. 6 decision, the U.S. District Court for the Southern District of New York vacated President Trump’s rule in its entirety, concluding that the rule had no justification and that its provisions were arbitrary and capricious. In his 147-page opinion, District Judge Paul Engelmayer wrote that the U.S. Department of Health & Human Services did not have the authority to enact such an expansive rule and that the measure conflicts with the Administrative Procedure Act, Title VII of the Civil Rights Act, and the Emergency Medical Treatment & Labor Act, among other laws.

“Had the court found only narrow parts of the rule infirm, a remedy tailoring the vacatur to only the problematic provision might well have been viable,” Judge Engelmayer wrote. “The [Administrative Procedure Act] violations that the court has found, however, are numerous, fundamental, and far reaching ... In these circumstances, a decision to leave standing isolated shards of the rule that have not been found specifically infirm would ignore the big picture: that the rulemaking exercise here was sufficiently shot through with glaring legal defects as to not justify a search for survivors [and] leaving stray nonsubstantive provisions intact would not serve a useful purpose.”

At press time, the Trump administration had not indicated whether it plans to file an appeal.

The info is here.

Thursday, October 17, 2019

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and companies' approach to data has long been to grab it first and figure out how to use and monetize it later. Today’s consumers are becoming more educated and savvy about how companies collect, use, and monetize their data. They are starting to make buying decisions based on privacy considerations, and to complain to regulators and lawmakers about how the tech industry uses their data without their control or authorization.

Although consumers’ education is slowly deepening, data privacy laws, both internationally and in the U.S., are starting to address consumers’ concerns about the vast amount of individually identifiable data about them that is collected, used and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators, and lawmakers are requiring them to do so. But tech companies should consider treating data ethics as a fundamental core value of the company’s mission, and should determine how it will be addressed in their corporate governance structure.

The info is here.

Tuesday, October 1, 2019

NACAC Agrees to Change Its Code of Ethics

Scott Jaschik
insidehighered.com
Originally published September 30, 2019

When the Assembly of the National Association for College Admission Counseling has in years past debated measures to regulate the recruiting of international students or the proper rules for waiting lists and many other issues, debate has been heated. It was anything but heated this year, although the issue before the delegates was arguably more important than any of those.

Delegates voted Saturday -- 211 to 3 -- to strip provisions from the Code of Ethics and Professional Practice that may violate antitrust laws. The provisions are:

  • Colleges must not offer incentives exclusive to students applying or admitted under an early decision application plan. Examples of incentives include the promise of special housing, enhanced financial aid packages, and special scholarships for early decision admits. Colleges may, however, disclose how admission rates for early decision differ from those for other admission plans.
  • College choices should be informed, well-considered, and free from coercion. Students require a reasonable amount of time to identify their college choices; complete applications for admission, financial aid, and scholarships; and decide which offer of admission to accept. Once students have committed themselves to a college, other colleges must respect that choice and cease recruiting them.
  • Colleges will not knowingly recruit or offer enrollment incentives to students who are already enrolled, registered, have declared their intent, or submitted contractual deposits to other institutions. May 1 is the point at which commitments to enroll become final, and colleges must respect that. The recognized exceptions are when students are admitted from a wait list, students initiate inquiries themselves, or cooperation is sought by institutions that provide transfer programs.
  • Colleges must not solicit transfer applications from a previous year’s applicant or prospect pool unless the students have themselves initiated a transfer inquiry or the college has verified prior to contacting the students that they are either enrolled at a college that allows transfer recruitment from other colleges or are not currently enrolled in a college.

Before they approved the measure to strip the provisions, the delegates approved (unanimously) rules that would limit discussion, but they didn't need the rules. There was no discussion on stripping the provisions, which most NACAC members learned of only at the beginning of the month. The Justice Department has been investigating NACAC for possible violations of antitrust laws for nearly two years, but the details of that investigation have not been generally known for most of that time. The Justice Department believes that with these rules, colleges are colluding to take away student choices.

The info is here.

Monday, September 9, 2019

China approves ethics advisory group after CRISPR-babies scandal

Hepeng Jia
Nature.com
Originally published August 8, 2019

China will establish a national committee to advise the government on research-ethics regulations. The decision comes less than a year after a Chinese scientist sparked an international outcry over claims that he had created the world’s first genome-edited babies.

The country's most powerful policymaking body, the Central Comprehensively Deepening Reforms Commission of the ruling Chinese Communist Party, headed by President Xi Jinping, approved at the end of last month a plan to form the committee. According to Chinese media, it will strengthen the coordination and implementation of a comprehensive and consistent system of ethics governance for science and technology.

The government has released few details on how the committee will work. But Qiu Renzong, a bioethicist at the Chinese Academy of Social Science in Beijing, says it could help to reduce the fragmentation in biomedical ethics regulations across ministries, identify loopholes in the enforcement of regulations, and advise the government on appropriate punishments for those who violate the rules.

The info is here.

Monday, August 19, 2019

The Case Against A.I. Controlling Our Moral Compass

Brian Gallagher
ethicalsystems.org
Originally published June 25, 2019


Here is an excerpt:

Morality, the researchers found, isn’t like any other decision space. People were averse to machines having the power to choose what to do in life and death situations—specifically in driving, legal, medical, and military contexts. This hinged on their perception of machine minds as incomplete, or lacking in agency (the capacity to reason, plan, and communicate effectively) and subjective experience (the possession of a human-like consciousness, with the ability to empathize and to feel pain and other emotions).

For example, when the researchers presented subjects with hypothetical medical and military situations—where a human or machine would decide on a surgery as well as a missile strike, and the surgery and strike succeeded—subjects still found the machine’s decision less permissible, due to its lack of agency and subjective experience relative to the human. Not having the appropriate sort of mind, it seems, disqualifies machines, in the judgement of these subjects, from making moral decisions even if they are the same decisions that a human made. Having a machine sound human, with an emotional and expressive voice, and claim to experience emotion, doesn’t help—people found a compassionate-sounding machine just as unqualified for moral choice as one that spoke robotically.

Only in certain circumstances would a machine’s moral choice trump a human’s. People preferred an expert machine’s decision over an average doctor’s, for instance, but just barely. Bigman and Gray also found that some people are willing to have machines support human moral decision-making as advisors. A substantial portion of subjects, 32 percent, were even against that, though, “demonstrating the tenacious aversion to machine moral decision-making,” the researchers wrote. The results “suggest that reducing the aversion to machine moral decision-making is not easy, and depends upon making very salient the expertise of machines and the overriding authority of humans—and even then, it still lingers.”

The info is here.

Thursday, August 15, 2019

World’s first ever human-monkey hybrid grown in lab in China

Henry Holloway
www.dailystar.co.uk
Originally posted August 1, 2019

Here is an excerpt:

Scientists have successfully formed a hybrid human-monkey embryo – with the experiment taking place in China to avoid “legal issues”.

Researchers led by scientist Juan Carlos Izpisúa spliced together the genes to grow a monkey with human cells.

It is said the creature could have grown and been born, but scientists aborted the process.

The team, made up of members of the Salk Institute in the United States and the Murcia Catholic University, genetically modified the monkey embryos.

Researchers deactivated the genes which form organs, and replaced them with human stem cells.

And it is hoped that one day these hybrid-grown organs will be able to be transplanted into humans.

The info is here.

Monday, August 5, 2019

Ethics working group to hash out what kind of company service is off limits

Chris Marquette
www.rollcall.com
Originally published July 22, 2019

A House Ethics Committee working group on Thursday will discuss proposed regulations to govern what kind of roles lawmakers may perform in companies, part of a push to head off the kind of ethical issues that led to the federal indictment of Rep. Chris Collins, who is accused of trading insider information while simultaneously serving as a company board member and public official.

(cut)

House Resolution 6 created a new clause in the Code of Official Conduct — set to take effect Jan. 1, 2020 — that prohibits members, delegates, resident commissioners, officers or employees in the House from serving as an officer or director of any public company.

The clause required the Ethics Committee to develop by Dec. 31 regulations addressing other prohibited service or positions that could lead to conflicts of interest.

The info is here.

Friday, August 2, 2019

Therapist accused of sending client photos of herself in lingerie can’t get her state license back: Pa. court

Matt Miller
www.pennlive.com
Originally posted July 17, 2019

A therapist who was accused of sending a patient photos of herself in lingerie can’t have her state counseling license back, a Commonwealth Court panel ruled Wednesday.

That is so even though Sheri Colston denied sending those photos or having any inappropriate interactions with the male client, the court found in an opinion by Judge Robert Simpson.

The court ruling upholds an indefinite suspension of Colston’s license imposed by the State Board of Social Workers, Marriage and Family Therapists and Professional Counselors. That board also ordered Colston to pay $7,409 to cover the cost of investigating her case.

The info is here.

Thursday, July 18, 2019

Taking Ethics Seriously: Toward Comprehensive Education in Ethics and Human Rights for Psychologists

Duška Franeta
European Psychologist (2019), 24, pp. 125-135.

Education in ethics and professional regulation are not alternatives; education in ethics for psychologists should not be framed merely as instruction regarding current professional regulation, or “ethical training.” This would reduce ethics to essentially a legal perspective, diminish professional responsibility, debase professional ethics, and downplay its primary purpose – the continuous critical reflection on professional identity and the professional role. This paper discusses the meaning and function of education in ethics for psychologists and articulates the reasons why comprehensive education in ethics for psychologists should not be replaced by instruction in professional codes. Likewise, human rights education for psychologists should not be downgraded to mere instruction in existing legal norms. Human rights discourse represents an important segment of comprehensive education in ethics for psychologists. Education in ethics should expose and examine the substantial ethical ideas that serve as the framework for the law of human rights, as well as the interpretative, multifaceted, evolving, even manipulable character of the human rights narrative. The typically proclaimed duty of psychologists to protect and promote human rights requires a deepening and expounding of the human rights legal framework through elaborate scrutiny of its ethical meaning. The idea of affirming and restoring human dignity – the concept often designated as the legal and ethical basis, essence, and purpose of human rights – represents one approach to framing this duty, by which the goals of psychology on the professional and ethical levels become unified.

The info is here.

Thursday, June 20, 2019

Legal Promise Of Equal Mental Health Treatment Often Falls Short

Graison Dangor
Kaiser Health News
Originally published June 7, 2019

Here is an excerpt:

The laws have been partially successful. Insurers can no longer write policies that charge higher copays and deductibles for mental health care, nor can they set annual or lifetime limits on how much they will pay for it. But patient advocates say insurance companies still interpret mental health claims more stringently.

“Insurance companies can easily circumvent mental health parity mandates by imposing restrictive standards of medical necessity,” said Meiram Bendat, a lawyer leading a class-action lawsuit against a mental health subsidiary of UnitedHealthcare.

In a closely watched ruling, a federal court in March sided with Bendat and patients alleging the insurer was deliberately shortchanging mental health claims. Chief Magistrate Judge Joseph Spero of the U.S. District Court for the Northern District of California ruled that United Behavioral Health wrote its guidelines for treatment much more narrowly than common medical standards, covering only enough to stabilize patients “while ignoring the effective treatment of members’ underlying conditions.”

UnitedHealthcare works to “ensure our products meet the needs of our members and comply with state and federal law,” said spokeswoman Tracey Lempner.

Several studies, though, have found evidence of disparities in insurers’ decisions.

The info is here.

Thursday, May 30, 2019

How Big Tech is struggling with the ethics of AI

Madhumita Murgia and Siddarth Shrikanth
Financial Times
Originally posted April 28, 2019

Here is an excerpt:

The development and application of AI is causing huge divisions both inside and outside tech companies, and Google is not alone in struggling to find an ethical approach.

The companies that are leading research into AI in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.

For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do so. They have also been attacked for the algorithmic bias of their programmes, where computers inadvertently propagate bias through unfair or corrupt data inputs.

In response to criticism not only from campaigners and academics but also their own staff, companies have begun to self-regulate by trying to set up their own “AI ethics” initiatives that perform roles ranging from academic research — as in the case of Google-owned DeepMind’s Ethics and Society division — to formulating guidelines and convening external oversight panels.

The info is here.

Thursday, May 2, 2019

A Facebook request: Write a code of tech ethics

Mike Godwin
www.latimes.com
Originally published April 30, 2019

Facebook is preparing to pay a multi-billion-dollar fine and dealing with ongoing ire from all corners for its user privacy lapses, the viral transmission of lies during elections, and delivery of ads in ways that skew along gender and racial lines. To grapple with these problems (and to get ahead of the bad PR they created), Chief Executive Mark Zuckerberg has proposed that governments get together and set some laws and regulations for Facebook to follow.

But Zuckerberg should be aiming higher. The question isn’t just what rules should a reformed Facebook follow. The bigger question is what all the big tech companies’ relationships with users should look like. The framework needed can’t be created out of whole cloth just by new government regulation; it has to be grounded in professional ethics.

Doctors and lawyers, as they became increasingly professionalized in the 19th century, developed formal ethical codes that became the seeds of modern-day professional practice. Tech-company professionals should follow their example. An industry-wide code of ethics could guide companies through the big questions of privacy and harmful content.

The info is here.

Editor's note: Many social media companies engage in unethical behavior on a regular basis, typically revolving around lack of consent, lack of privacy standards, filter bubble (personalized algorithms) issues, lack of accountability, lack of transparency, harmful content, and third party use of data.

Tuesday, April 16, 2019

Is there such a thing as moral progress?

John Danaher
Philosophical Disquisitions
Originally posted March 18, 2019

We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. We express dismay at the ‘regressive’ moral views of racists and bigots. Some people (I’m looking at you, Steven Pinker) have written long books that defend the idea that, although there have been setbacks, there has been a general upward trend in our moral attitudes over the course of human history. Martin Luther King once said that the arc of the moral universe is long but bends toward justice.

But does moral progress really exist? And how would we know if it did? Philosophers have puzzled over this question for some time. The problem is this. There is no doubt that there has been moral change over time, and there is no doubt that we often think of our moral views as being more advanced than those of our ancestors, but it is hard to see exactly what justifies this belief. It seems like you would need some absolute moral standard or goal against which you can measure moral change to justify that belief. Do we have such a thing?

In this post, I want to offer some of my own preliminary and underdeveloped thoughts on the idea of moral progress. I do so by first clarifying the concept of moral progress, and then considering whether and when we can say that it exists. I will suggest that moral progress is real, and that we are at least sometimes justified in saying it has taken place. Nevertheless, there are some serious puzzles and conceptual difficulties with identifying some forms of moral progress.

The info is here.

Monday, April 8, 2019

Mark Zuckerberg And The Tech World Still Do Not Understand Ethics

Derek Lidow
Forbes.com
Originally posted March 11, 2018

Here is an excerpt:

Expectations for technology startups encourage expedient, not ethical, decision making. 

As people in the industry are fond of saying, the tech world moves at “lightspeed.” That includes the pace of innovation, the rise and fall of markets, the speed of customer adoption, the evolution of business models and the lifecycles of companies. Decisions must be made quickly and leaders too often choose the most expedient path regardless of whether it is safe, legal or ethical.

 This “move fast and break things” ethos is embodied in practices like working toward a minimum viable product (MVP), helping to establish a bias toward cutting corners. In addition, many founders look for CFOs who are “tech trained”—that is, people accustomed to a world where time and money wait for no one—as opposed to a seasoned financial officer with good accounting chops and a moral compass.

The host of scandals at Zenefits, a cloud-based provider of employee-benefits software to small businesses and once one of the most promising of Silicon Valley startups, had its origins in the shortcuts the company took in order to meet unreasonably high expectations for growth. The founder apparently created software that helped employees cheat on California’s online broker license course. As the company expanded rapidly, it began hiring people with little experience in the highly regulated health insurance industry. As the company moved from small businesses to larger businesses, the strain on its software increased. Instead of developing appropriate software, the company hired more people to manually take up the slack where the existing software failed. When the founder was asked by an interviewer before the scandals why he was so intent on expanding so rapidly, he replied, “Slowing down doesn’t feel like something I want to do.”

The info is here.

Friday, March 22, 2019

Pop Culture, AI And Ethics

Phaedra Boinodiris
Forbes.com
Originally published February 24, 2019

Here is an excerpt:


5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – there is a group responsible for ensuring that REAL guests in the hotel are interviewed to determine their needs. When feedback is negative this group implements a feedback loop to better understand preferences. They ensure that at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to train with a larger, more diverse set of data. Ensure that the data collected about a user's race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics.

Explainability and Enforced Transparency – If a guest doesn’t like the AI’s answer, she can ask how it made that recommendation and which dataset it used. A user must explicitly opt in to use the assistant, and the guest must be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest’s data, and a guest has the right to have the system purged at any time. Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience to the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto-deleted. Ensure that the AI can speak in the guest’s respective language.

The info is here.

Sunday, February 17, 2019

Physician burnout now essentially a public health crisis

Priyanka Dayal McCluskey
Boston Globe
Originally posted January 17, 2019

Physician burnout has reached alarming levels and now amounts to a public health crisis that threatens to undermine the doctor-patient relationship and the delivery of health care nationwide, according to a report from Massachusetts doctors to be released Thursday.

The report — from the Massachusetts Medical Society, the Massachusetts Health & Hospital Association, and the Harvard T.H. Chan School of Public Health — portrays a profession struggling with the unyielding demands of electronic health record systems and ever-growing regulatory burdens.

It urges hospitals and medical practices to take immediate action by putting senior executives in charge of physician well-being and by giving doctors better access to mental health services. The report also calls for significant changes to make health record systems more user-friendly.

While burnout has long been a worry in the profession, the report reflects a newer phenomenon — the draining documentation and data entry now required of doctors. Today’s electronic record systems are so complex that a simple task, such as ordering a prescription, can take many clicks.

The info is here.

Monday, February 11, 2019

Recent events highlight an unpleasant scientific practice: ethics dumping

Science and Technology
The Economist
Originally published January 2019

Here is an excerpt:

Ethics dumping is the carrying out by researchers from one country (usually rich, and with strict regulations) in another (usually less well off, and with laxer laws) of an experiment that would not be permitted at home, or of one that might be permitted, but in a way that would be frowned on. The most worrisome cases involve medical research, in which health, and possibly lives, are at stake. But other investigations—anthropological ones, for example—may also be carried out in a more cavalier fashion abroad. As science becomes more international the risk of ethics dumping, both intentional and unintentional, has risen. The suggestion in this case is that Dr He was encouraged and assisted in his project by a researcher at an American university.

Wednesday, January 16, 2019

Debate ethics of embryo models from stem cells

Nicolas Rivron, Martin Pera, Janet Rossant, Alfonso Martinez Arias, and others
Nature
Originally posted December 12, 2018

Here are some excerpts:

Four questions

Future progress depends on addressing now the ethical and policy issues that could arise.

Ultimately, individual jurisdictions will need to formulate their own policies and regulations, reflecting their values and priorities. However, we urge funding bodies, along with scientific and medical societies, to start an international discussion as a first step. Bioethicists, scientists, clinicians, legal and regulatory specialists, patient advocates and other citizens could offer at least some consensus on an appropriate trajectory for the field.

Two outputs are needed. First, guidelines for researchers; second, a reliable source of information about the current state of the research, its possible trajectory, its potential medical benefits and the key ethical and policy issues it raises. Both guidelines and information should be disseminated to journalists, ethics committees, regulatory bodies and policymakers.

Four questions in particular need attention.

Should embryo models be treated legally and ethically as human embryos, now or in the future?

Which research applications involving human embryo models are ethically acceptable?

How far should attempts to develop an intact human embryo in a dish be allowed to proceed?

Does a modelled part of a human embryo have an ethical and legal status similar to that of a complete embryo?

The info is here.

Sunday, December 23, 2018

Fresh urgency in mapping out ethics of brain organoid research

Julian Koplin and Julian Savulescu
The Conversation
Originally published November 20, 2018

Here is an excerpt:

But brain organoid research also raises serious ethical questions. The main concern is that brain organoids could one day attain consciousness – an issue that has just been brought to the fore by a new scientific breakthrough.

Researchers from the University of California, San Diego, recently reported the creation of brain organoids that spontaneously produce brain waves resembling those found in premature infants. Although this electrical activity does not necessarily mean these organoids are conscious, it does show that we need to think through the ethics sooner rather than later.

Regulatory gaps

Stem cell research is already subject to careful regulation. However, existing regulatory frameworks have not yet caught up with the unique set of ethical concerns associated with brain organoids.

Guidelines like the National Health and Medical Research Council’s National Statement on Ethical Conduct in Human Research protect the interests of those who donate human biological material to research (and also address a host of other issues). But they do not consider whether brain organoids themselves could acquire morally relevant interests.

This gap has not gone unnoticed. A growing number of commentators argue that brain organoid research should face restrictions beyond those that apply to stem cell research more generally. Unfortunately, little progress has been made on identifying what form these restrictions should take.

The info is here.

Tuesday, December 18, 2018

Super-smart designer babies could be on offer soon. But is that ethical?

Philip Ball
The Guardian
Originally posted November 19, 2018

Here is an excerpt:


Before we start imagining a Gattaca-style future of genetic elites and underclasses, there’s some context needed. The company says it is only offering such testing to spot embryos with an IQ low enough to be classed as a disability, and won’t conduct analyses for high IQ. But the technology the company is using will permit that in principle, and co-founder Stephen Hsu, who has long advocated for the prediction of traits from genes, is quoted as saying: “If we don’t do it, some other company will.”

The development must be set, too, against what is already possible and permitted in IVF embryo screening. The procedure called pre-implantation genetic diagnosis (PGD) involves extracting cells from embryos at a very early stage and “reading” their genomes before choosing which to implant. It has been enabled by rapid advances in genome-sequencing technology, making the process fast and relatively cheap. In the UK, PGD is strictly regulated by the Human Fertilisation and Embryology Authority (HFEA), which permits its use to identify embryos with several hundred rare genetic diseases of which the parents are known to be carriers. PGD for other purposes is illegal.

The info is here.