Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, October 17, 2018

Huge price hikes by drug companies are immoral

Robert Klitzman
CNN.com
Originally posted September 18, 2018

Several pharmaceutical companies have been jacking up the prices of their drugs in unethical ways. Most recently, Nirmal Mulye, founder and president of Nostrum Pharmaceuticals, defended his decision to more than quadruple the price of nitrofurantoin, used to treat bladder infections, from about $500 to more than $2,300 a bottle. He said it was his "moral requirement to sell the product at the highest price."

Mulye argues that his only moral duty is to benefit his investors. As he said in defending Martin Shkreli, who in 2015 raised the price of the anti-parasite drug Daraprim by 5,000%, from $13.50 to $750 per tablet, "When he raised the price of his drug he was within his rights because he had to reward his shareholders."
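A quick arithmetic check of the two figures quoted above (a minimal sketch; the helper function is ours, and the dollar amounts are simply those reported in the article):

```python
# Quick check of the price increases cited above (dollar figures as reported).
def percent_increase(old: float, new: float) -> float:
    """Percentage increase from an old price to a new price."""
    return (new - old) / old * 100

# Nostrum's nitrofurantoin: about $500 to more than $2,300 a bottle.
print(round(2300 / 500, 1), round(percent_increase(500, 2300)))   # 4.6 360  -> more than quadruple

# Daraprim: $13.50 to $750 per tablet.
print(round(750 / 13.5, 1), round(percent_increase(13.5, 750)))   # 55.6 5456 -> roughly a 5,000% hike
```

Both results are consistent with the article's descriptions: roughly a 4.6-fold jump for nitrofurantoin, and an increase of about 5,456% for Daraprim, commonly rounded down to "5,000%."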

Mulye is wrong for many reasons. Drug companies deserve a reasonable return on their investment in research and development, but some of these companies are abusing the system. The development of countless new drugs depends on taxpayer money and on sacrifices that patients in studies make in good faith. Excessive price hikes harm many people, threaten public health and deplete huge amounts of taxpayer money that could be better used in other ways.

The US government pays more than 40% of all Americans' prescription costs, and this amount has been growing faster than inflation. In 2015, over 118 million Americans were on some form of government health insurance, including around 52 million on Medicare and 62 million on Medicaid. And these numbers have been increasing. Today, around 59 million Americans are on Medicare and 75 million on Medicaid.

The info is here.

Machine Ethics and Artificial Moral Agents

Francesco Corea
Medium.com
Originally posted July 6, 2017

Here is an excerpt:

However, let’s look at the problem from a different angle. I was educated as an economist, so allow me to start my argument with this statement: let’s assume we have the perfect dataset. It is not only omni-comprehensive but also clean, consistent and deep both longitudinally and temporally speaking.

Even in this case, we have no guarantee that AI won’t autonomously learn the same biases we did. In other words, removing biases by hand or by construction is no guarantee that those biases won’t re-emerge spontaneously.

This possibility also raises another (philosophical) question: we are building this argument from the assumption that biases are bad (mostly). So let’s say the machines come up with a result we see as biased, and therefore we reset them and restart the analysis with new data. But the machines come up with a similarly ‘biased’ result. Would we then be open to accepting that result as true and to revising what we consider to be biased?

This is basically a cultural and philosophical clash between two different species.

In other words, I believe that two of the reasons why embedding ethics into machine design is so hard are that i) we don’t unanimously agree on what ethics is, and ii) we should be open to admitting that our values or ethics might not be completely right, and that what we consider to be biased is not the exception but rather the norm.

Developing a (general) AI is making us think about these problems, and it will change (if it hasn’t already started to) our value system. And perhaps, who knows, we will end up learning something from machines’ ethics as well.

The info is here.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders' goals for the system.
  • Security: applying cyber security paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification.
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Nudge or Grudge? Choice Architecture and Parental Decision‐Making

Jennifer Blumenthal‐Barby and Douglas J. Opel
The Hastings Center Report
Originally published March 28, 2018

Abstract

Richard Thaler and Cass Sunstein define a nudge as “any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives.” Much has been written about the ethics of nudging competent adult patients. Less has been written about the ethics of nudging surrogates’ decision‐making and how the ethical considerations and arguments in that context might differ. Even less has been written about nudging surrogate decision‐making in the context of pediatrics, despite fundamental differences that exist between the pediatric and adult contexts. Yet, as the field of behavioral economics matures and its insights become more established and well‐known, nudges will become more crafted, sophisticated, intentional, and targeted. Thus, the time is now for reflection and ethical analysis regarding the appropriateness of nudges in pediatrics.

We argue that there is an even stronger ethical justification for nudging in parental decision‐making than with competent adult patients deciding for themselves. We give three main reasons in support of this: (1) child patients do not have autonomy that can be violated (a concern with some nudges), and nudging need not violate parental decision‐making authority; (2) nudging can help fulfill pediatric clinicians’ obligations to ensure parental decisions are in the child's interests, particularly in contexts where there is high certainty that a recommended intervention is low risk and of high benefit; and (3) nudging can relieve parents’ decisional burden regarding what is best for their child, particularly with decisions that have implications for public health.

The info is here.

Monday, October 15, 2018

ICP Ethics Code

Institute of Contemporary Psychoanalysis

Psychoanalysts strive to reduce suffering and promote self-understanding, while respecting human dignity. Above all, we take care to do no harm. Working in the uncertain realm of unconscious emotions and feelings, our exclusive focus must be on safeguarding and benefitting our patients as we try to help them understand their unconscious mental life. Our mandate requires us to err on the side of ethical caution. As clinicians who help people understand the meaning of their dreams and unconscious longings, we are aware of our power and sway. We acknowledge a special obligation to protect people from unintended harm resulting from our own human foibles.

In recognition of our professional mandate and our authority—and the private, subjective and influential nature of our work—we commit to upholding the highest ethical standards. These standards take the guesswork out of how best to create a safe container for psychoanalysis. These ethical principles inspire tolerant and respectful behaviors, which in turn facilitate the health and safety of our candidates, members and, most especially, our patients. Ultimately, ethical behavior protects us from ourselves, while preserving the integrity of our institute and profession.

Professional misconduct is not permitted, including, but not limited to dishonesty, discrimination and boundary violations. Members are asked to keep firmly in mind our core values of personal integrity, tolerance and respect for others. These values are critical to fulfilling our mission as practitioners and educators of psychoanalytic therapy. Prejudice is never tolerated whether on the basis of age, disability, ethnicity, gender, gender identity, race, religion, sexual orientation or social class. Institute decisions (candidate advancement, professional opportunities, etc.) are to be made exclusively on the basis of merit or seniority. Boundary violations, including, but not limited to sexual misconduct, undue influence, exploitation, harassment and the illegal breaking of confidentiality, are not permitted. Members are encouraged to seek consultation readily when grappling with any ethical or clinical concerns. Participatory democracy is a primary value of ICP. All members and candidates have the responsibility for knowing these guidelines, adhering to them and helping other members comply with them.

The ethics code is here.

Big Island considers adding honesty policy to ethics code

Associated Press
Originally posted September 14, 2018

Big Island officials are considering adding language to the county's ethics code requiring officers and employees to provide the public with information that is accurate and factual.

The county council voted last week in support of the measure, requiring county employees to provide honest information to "the best of each officer's or employee's abilities and knowledge," West Hawaii Today reported. It's set to go before council for final approval next week.

The current measure has changed from Puna Councilwoman Eileen O'Hara's original bill that simply stated "officers and employees should be truthful."

She introduced the measure in response to residents' concerns, but amended it to gain the support of her colleagues, she said.

The info is here.

Sunday, October 14, 2018

The Myth of Freedom

Yuval Noah Harari
The Guardian
Originally posted September 14, 2018

Here is an excerpt:

Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints.

This myth has little to do with what science now teaches us about Homo sapiens and other animals. Humans certainly have a will – but it isn’t free. You cannot decide what desires you have. You don’t decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn’t choose which genes or family to have.

This is not abstract theory. You can witness this easily. Just observe the next thought that pops up in your mind. Where did it come from? Did you freely choose to think it? Obviously not. If you carefully observe your own mind, you come to realise that you have little control of what’s going on there, and you are not choosing freely what to think, what to feel, and what to want.

Though “free will” was always a myth, in previous centuries it was a helpful one. It emboldened people who had to fight against the Inquisition, the divine right of kings, the KGB and the KKK. The myth also carried few costs. In 1776 or 1945 there was relatively little harm in believing that your feelings and choices were the product of some “free will” rather than the result of biochemistry and neurology.

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

The info is here.

Saturday, October 13, 2018

A Top Goldman Banker Raised Ethics Concerns. Then He Was Gone.

Emily Flitter, Kate Kelly and David Enrich
The New York Times
Originally posted September 11, 2018

By the tight-lipped standards of Goldman Sachs, the phone call from one of the firm’s most senior investment bankers was explosive.

James C. Katzman, a Goldman partner and the leader of its West Coast mergers-and-acquisitions practice, dialed the bank’s whistle-blower hotline in 2014 to complain about what he regarded as a range of unethical practices, according to accounts by people close to Mr. Katzman, which a Goldman spokesman confirmed. His grievances included an effort by Goldman to hire a customer’s child and colleagues’ repeated attempts to obtain and then share confidential client information.

Mr. Katzman expected lawyers at the firm Fried, Frank, Harris, Shriver & Jacobson, which monitored the hotline, to investigate his allegations and share them with independent members of Goldman’s board of directors, the people close to Mr. Katzman said.

The complaints were an extraordinary example of a senior employee’s taking on what he perceived to be corporate wrongdoing at an elite Wall Street bank. But they were never independently investigated or fully relayed to the Goldman board.

The information is here.

Friday, October 12, 2018

The New Standardized Morality Test. Really.

Peter Greene
Forbes - Education
Originally published September 13, 2018

Here is an excerpt:

Morality is sticky and complicated, and I'm not going to pin it down here. It's one thing to manage your own moral growth and another thing to foster the moral development of family and friends and still quite another thing to have a company hired by a government draft up morality curriculum that will be delivered by yet another wing of the government. And it is yet another other thing to create a standardized test by which to give students morality scores.

But the folks at ACT say they will "leverage the expertise of U.S.-based research and test development teams to create the assessment, which will utilize the latest theory and principles of social and emotional learning (SEL) through the development process." That is quite a pile of jargon to dress up "We're going to cobble together a test to measure how moral a student is. The test will be based on stuff."

ACT Chief Commercial Officer Suzana Delanghe is quoted as saying "We are thrilled to be supporting a holistic approach to student success" and promises that they will create a "world class assessment that measures UAE student readiness" because even an ACT manager knows better than to say that they're going to write a standardized test for morality.

The info is here.