Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 20, 2018

Who should answer the ethical questions surrounding artificial intelligence?

Jack Karsten
Brookings.edu
Originally published September 14, 2018

Continuing advancements in artificial intelligence (AI) for use in both the public and private sectors warrant serious ethical consideration. As the capability of AI improves, the issues of transparency, fairness, privacy, and accountability associated with using these technologies become more serious. Many developers in the private sector acknowledge the threats AI poses and have created their own codes of ethics to monitor AI development responsibly. However, many experts believe government regulation may be required to resolve issues ranging from racial bias in facial recognition software to the use of autonomous weapons in warfare.

On Sept. 14, the Center for Technology Innovation hosted a panel discussion at the Brookings Institution to consider the ethical dilemmas of AI. Brookings scholars Christopher Meserole, Darrell West, and William Galston were joined by Charina Chou, the global policy lead for emerging technologies at Google, and Heather Patterson, a senior research scientist at Intel.

Enjoy the video.


Friday, October 19, 2018

If Humility Is So Important, Why Are Leaders So Arrogant?

Bill Taylor
Harvard Business Review
Originally published October 15, 2018

Here is an excerpt:

With all due modesty, I’d offer a few answers to these vexing questions. For one thing, too many leaders think they can’t be humble and ambitious at the same time. One of the great benefits of becoming CEO of a company, head of a business unit, or leader of a team, the prevailing logic goes, is that you’re finally in charge of making things happen and delivering results. Edgar Schein, professor emeritus at MIT Sloan School of Management, and an expert on leadership and culture, once asked a group of his students what it means to be promoted to the rank of manager. “They said without hesitation, ‘It means I can now tell others what to do.’” Those are the roots of the know-it-all style of leadership. “Deep down, many of us believe that if you are not winning, you are losing,” Schein warns. The “tacit assumption” among executives “is that life is fundamentally and always a competition” — between companies, but also between individuals within companies. That’s not exactly a mindset that recognizes the virtues of humility.

In reality, of course, humility and ambition need not be at odds. Indeed, humility in the service of ambition is the most effective and sustainable mindset for leaders who aspire to do big things in a world filled with huge unknowns. Years ago, a group of HR professionals at IBM embraced a term to capture this mindset. The most effective leaders, they argued, exuded a sense of “humbition,” which they defined as “one part humility and one part ambition.” We “notice that by far the lion’s share of world-changing luminaries are humble people,” they wrote. “They focus on the work, not themselves. They seek success — they are ambitious — but they are humbled when it arrives…They feel lucky, not all-powerful.”

The info is here.

Risk Management Considerations When Treating Violent Patients

Kristen Lambert
Psychiatric News
Originally posted September 4, 2018

Here is an excerpt:

When a patient has a history of expressing homicidal ideation or has been violent previously, you should document, in every subsequent session, whether the patient admits or denies homicidal ideation. When the patient expresses homicidal ideation, document what he/she expressed and the steps you did or did not take in response and why. Should an incident occur, your documentation will play an important role in defending your actions.

Despite taking precautions, your patient may still commit a violent act. The following are some strategies that may minimize your risk.

  • Conduct complete, timely, and thorough risk assessments.
  • Document, including the reasons for taking and not taking certain actions.
  • Understand your state’s law on duty to warn. Be aware of the language in the law on whether you have a mandatory, permissive, or no duty to warn/protect.
  • Understand your state’s laws regarding civil commitment.
  • Understand your state’s laws regarding disclosure of confidential information and when you can do so.
  • Understand your state’s laws regarding discussing firearms ownership and/or possession with patients.
  • If you have questions, consult an attorney or risk management professional.

Thursday, October 18, 2018

Medicine’s Financial Contamination

Editorial Board
The New York Times
Originally posted September 14, 2018

Here is an excerpt:

Sloan Kettering’s other leaders were well aware of these relationships. The hospital has said that it takes pains to wall off any employee involved with a given outside company from the hospital’s dealings with that company. But it’s difficult to believe that conflicts of this magnitude could have truly been worked around, given how many of them there were, and how high up on the organizational chart Dr. Baselga sat. It also strains credulity to suggest that he was the hospital’s only leader with such conflicts or with such apparent difficulty disclosing them. After the initial report, but before Dr. Baselga’s resignation, the hospital sent a letter to its entire 17,000-person staff acknowledging that the institution as a whole needed to do better. It remains to be seen what additional actions will be taken — and by whom — to repair the situation.

Financial conflicts are hardly confined to Sloan Kettering. A 2015 study in The BMJ found that a “substantial number” of academic leaders hold directorships that pay as much as or more than their clinical salaries. According to other surveys, nearly 70 percent of oncologists who speak at national meetings, nearly 70 percent of psychiatrists on the task force that ultimately decides what treatments should be recommended for what mental illnesses, and a significant number of doctors on Food and Drug Administration advisory committees have financial ties to the drug and medical device industries. As bioethicists have warned and as journal publishers have long acknowledged, not all of them report those ties when and where they are supposed to.

The info is here.

When You Fear Your Company Has Forgotten Its Principles

Sue Shellenbarger
The Wall Street Journal
Originally published September 17, 2018

Here is an excerpt:

People who object on principle to their employers’ conduct face many obstacles. One is the bystander effect—people’s reluctance to intervene against wrongdoing when others are present and witnessing it too, Dr. Grant says. Ask yourself in such cases, “If no one acted here, what would be the consequences?” he says. While most people think first about potential damage to their reputation and relationships, the long-term effects could be worse, he says.

Be careful not to argue too passionately for the changes you want, Dr. Grant says. Show respect for others’ viewpoint, and acknowledge the flaws in your argument to show you’ve thought it through carefully.

Be open about your concerns, says Jonah Sachs, an Oakland, Calif., speaker and author of “Unsafe Thinking,” a book on creative risk-taking. People who complain in secret are more likely to make enemies and be seen as disloyal, compared with those who resist in the open, research shows.

Successful change-makers tend to frame proposed changes as benefiting the entire company and its employees and customers, rather than just themselves, Mr. Sachs says. He cites a former executive at a retail drug chain who helped persuade top management to stop selling cigarettes in its stores. While the move tracked with the company’s health-focused mission, the executive strengthened her case by correctly predicting that it would attract more health-minded customers.

The info is here.

Wednesday, October 17, 2018

Huge price hikes by drug companies are immoral

Robert Klitzman
CNN.com
Originally posted September 18, 2018

Several pharmaceutical companies have been jacking up the prices of their drugs in unethical ways. Most recently, Nirmal Mulye, founder and president of Nostrum Pharmaceuticals, defended his decision to more than quadruple the price of nitrofurantoin, used to treat bladder infections, from about $500 to more than $2,300 a bottle. He said it was his "moral requirement to sell the product at the highest price."

Mulye argues that his only moral duty is to benefit his investors. As he said in defending Martin Shkreli, who in 2015 raised the price of the anti-parasite drug Daraprim more than 5,000%, from $13.50 to $750 per tablet, "When he raised the price of his drug he was within his rights because he had to reward his shareholders."
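As a quick sanity check on those figures (a minimal sketch using only the dollar amounts quoted above):

```python
def percent_increase(old_price, new_price):
    """Percentage increase from old_price to new_price."""
    return (new_price - old_price) / old_price * 100

# Daraprim: $13.50 -> $750 per tablet
daraprim = percent_increase(13.50, 750)  # ~5,456%, commonly reported as "more than 5,000%"

# Nitrofurantoin: about $500 -> more than $2,300 a bottle
nitrofurantoin = percent_increase(500, 2300)  # 360% increase, i.e. more than quadruple

print(f"Daraprim: {daraprim:.0f}% increase")
print(f"Nitrofurantoin: {nitrofurantoin:.0f}% increase")
```

The 4.6x nitrofurantoin multiple is indeed "more than quadruple," and the Daraprim jump slightly exceeds the 5,000% figure usually cited.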

Mulye is wrong for many reasons. Drug companies deserve reasonable return on their investment in research and development, but some of these companies are abusing the system. The development of countless new drugs depends on taxpayer money and sacrifices that patients in studies make in good faith. Excessive price hikes harm many people, threaten public health and deplete huge amounts of taxpayer money that could be better used in other ways.

The US government pays more than 40% of all Americans' prescription costs, and this amount has been growing faster than inflation. In 2015, over 118 million Americans were on some form of government health insurance, including around 52 million on Medicare and 62 million on Medicaid. And these numbers have been increasing. Today, around 59 million Americans are on Medicare and 75 million on Medicaid.

The info is here.

Machine Ethics and Artificial Moral Agents

Francesco Corea
Medium.com
Originally posted July 6, 2017

Here is an excerpt:

However, let’s look at the problem from a different angle. I was educated as an economist, so allow me to start my argument with this statement: let’s assume we have the perfect dataset. It is not only all-encompassing but also clean, consistent, and deep, both longitudinally and temporally.

Even in this case, we have no guarantee that AI won’t autonomously learn the same biases we did. In other words, removing biases by hand or by construction is no guarantee that those biases won’t emerge again spontaneously.

This possibility also raises another (philosophical) question: we are building this argument from the assumption that biases are (mostly) bad. So let’s say the machines come up with a result we see as biased, and we therefore reset them and restart the analysis with new data. But the machines arrive at a similarly “biased” result. Would we then be open to accepting that result as true and revising what we consider to be biased?

This is basically a cultural and philosophical clash between two different species.

In other words, I believe that two of the reasons why embedding ethics into machine design is extremely hard are that i) we don’t have a unanimous definition of what ethics is, and ii) we should be open to admitting that our values or ethics might not be completely right, and that what we consider to be biased is not the exception but rather the norm.

Developing a (general) AI is making us think about these problems, and it will change (if it hasn’t already started to) our value system. And perhaps, who knows, we will end up learning something from machines’ ethics as well.

The info is here.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders' goals for the system.
  • Security: applying cyber security paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification.
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Nudge or Grudge? Choice Architecture and Parental Decision‐Making

Jennifer Blumenthal‐Barby and Douglas J. Opel
The Hastings Center Report
Originally published March 28, 2018

Abstract

Richard Thaler and Cass Sunstein define a nudge as “any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives.” Much has been written about the ethics of nudging competent adult patients. Less has been written about the ethics of nudging surrogates’ decision‐making and how the ethical considerations and arguments in that context might differ. Even less has been written about nudging surrogate decision‐making in the context of pediatrics, despite fundamental differences that exist between the pediatric and adult contexts. Yet, as the field of behavioral economics matures and its insights become more established and well‐known, nudges will become more crafted, sophisticated, intentional, and targeted. Thus, the time is now for reflection and ethical analysis regarding the appropriateness of nudges in pediatrics.

We argue that there is an even stronger ethical justification for nudging in parental decision‐making than with competent adult patients deciding for themselves. We give three main reasons in support of this: (1) child patients do not have autonomy that can be violated (a concern with some nudges), and nudging need not violate parental decision‐making authority; (2) nudging can help fulfill pediatric clinicians’ obligations to ensure parental decisions are in the child's interests, particularly in contexts where there is high certainty that a recommended intervention is low risk and of high benefit; and (3) nudging can relieve parents’ decisional burden regarding what is best for their child, particularly with decisions that have implications for public health.

The info is here.