Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, October 7, 2019

Ethics a distant second to profits in Silicon Valley

Gabriel Fairman
www.sdtimes.com
Originally published September 9, 2019

Here is an excerpt:

For ethics to become part of the value system that drives behavior in Silicon Valley, it would have to be incentivized as such. I have a hard time envisioning a world where ethics can offer shareholders huge returns. Ethics is about doing the right thing, and the right thing and the lucrative thing don’t necessarily go hand in hand.

Everyone can understand ethics. Basic questions such as “Will this be good for the world in a year, 10 years, or 20 years?” and “Would I want this for my kids?” are easy litmus tests for differentiating ethical from unethical conduct. The challenge is that ethical considerations slow down development by raising challenges and concerns early on. Ethics is about amplifying potential problems that can be foreseen down the road.

On the other hand, venture-funded start-ups are about minimizing the ramifications of these problems as they move on quickly. How can ethics compete with billion-dollar exits? It can’t. Ethics are just this thing that we read about in articles or hear about in lectures. It is not driving day-to-day decision-making. You listen to people in boardrooms asking, “How will this impact our valuation?,” or “What is the ROI of this initiative?” but you don’t hear top-level execs brainstorming about how their product or company could be more ethical because there is no compensation tied to that. The way we have built our world, ethics are just fluff.

We are also extraordinarily good at separating our private and public lives. Many people working at tech companies don’t allow their own kids to use electronic devices freely, and would not want their kids bossed around by an algorithm or stripped of full-time employee benefits. But they promote and advance these things because they are highly profitable, not because they are fundamentally good. This key distinction between private and public behavior allows people to behave in wildly hypocritical ways, helping to advance the very things they do not want in their own homes.

The info is here.

A Theranos Whistleblower’s Mission to Make Tech Ethical

Brian Gallagher
ethicalsystems.org
Originally published September 12, 2019

Here is an excerpt from the interview:

Is Theranos emblematic of a cultural trend or an anomaly of unethical behavior?

My initial impression was that Theranos was some very bizarre one-off scandal. But as I started to review thousands of startups, I realized that there is quite a lot of unethical behavior in tech. The stories may not be quite as grandiose or large-scale as Theranos’, but it was really common to see companies lie to investors, mislead customers, and create abusive work environments. Many founders lacked an understanding of how their products could have negative impacts on society. The frustration of seeing the same mistakes happen over and over again made it clear that something needed to be done about this.

How has your experience at Theranos helped shape your understanding of the link between ethics and culture?

If the company had had effective and ethically mature leadership, it may not have used underdeveloped technology on patients without their consent. If the board had been constructed to properly challenge the product, perhaps the technology would have been fully developed before it was deployed. If employees weren’t scared and disillusioned, perhaps constructive conversations about novel solutions could have arisen. Rarely are these scandals a random surprise or the result of an unexpected disaster; they are usually an accumulation of poor ethical decisions. Having a culture where, at every stakeholder level, people can speak up and be properly heard when they see something wrong is crucial. It makes the difference in building ethical organizations and preventing large, disastrous events from happening.

The info is here.

Sunday, October 6, 2019

Thinking Fast and Furious: Emotional Intensity and Opinion Polarization in Online Media

David Asker & Elias Dinas
Public Opinion Quarterly
Published: 09 September 2019
https://doi.org/10.1093/poq/nfz042

Abstract

How do online media increase opinion polarization? The “echo chamber” thesis points to the role of selective exposure to homogeneous views and information. Critics of this view emphasize the potential of online media to expand the ideological spectrum that news consumers encounter. Embedded in this discussion is the assumption that online media affect public opinion via the range of information they offer to users. We show that online media can induce opinion polarization even among users exposed to ideologically heterogeneous views, by heightening the emotional intensity of the content. Higher affective intensity provokes motivated reasoning, which in turn leads to opinion polarization. The results of an online experiment focusing on the comments section, a user-driven tool of communication whose effects on opinion formation remain poorly understood, show that participants randomly assigned to read an online news article with a user comments section subsequently express more extreme views on the topic of the article than a control group reading the same article without any comments. Consistent with expectations, this effect is driven by the emotional intensity of the comments, lending support to the idea that motivated reasoning is the mechanism behind this effect.

From the Discussion:

These results should not be taken as a challenge to the echo chamber argument, but rather as a complement to it. Selective exposure to desirable information and motivated rejection of undesirable information constitute separate mechanisms whereby online news audiences may develop more extreme views. Whereas there is already ample empirical evidence about the first mechanism, previous research on the second has been scant. Our contribution should thus be seen as an attempt to fill this gap.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
vox.com
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs (brain-computer interfaces) equipped with a “write” function could enable new forms of brainwashing, theoretically allowing all sorts of actors to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Friday, October 4, 2019

When Patients Request Unproven Treatments

Casey Humbyrd and Matthew Wynia
medscape.com
Originally posted March 25, 2019

Here is an excerpt:

Ethicists have made a variety of arguments about these injections. The primary arguments against them have focused on the perils of physicians becoming sellers of "snake oil," promising outlandish benefits and charging huge sums for treatments that might not work. The conflict of interest inherent in making money by providing an unproven therapy is a legitimate ethical concern. These treatments are very expensive and, as they are unproven, are rarely covered by insurance. As a result, some patients have turned to crowdfunding sites to pay for these questionable treatments.

But the profit motive may not be the most important ethical issue at stake. If it were removed, hypothetically, and physicians provided the injections at cost, would that make this practice more acceptable?

No. We believe that physicians who offer these injections are skipping the most important step in the ethical adoption of any new treatment modality: research that clarifies the benefits and risks. The costs of omitting that important step are much more than just monetary.

For the sake of argument, let's assume that stem cells are tremendously successful and heal arthritic joints, making them as good as new. By selling these injections to those who can pay before the treatment is backed by research, physicians ensure that it remains unavailable to patients who can't pay, because insurance won't cover unproven treatments.

The info is here.

Google bans ads for unproven medical treatments

Megan Graham
www.cnbc.com
Originally posted September 6, 2019

Google on Friday announced a new health care and medicines policy that bans advertising for “unproven or experimental medical techniques,” which it says includes most stem cell, cellular and gene therapies.

A blog post from Google policy advisor Adrienne Biddings said the company will prohibit ads selling treatments “that have no established biomedical or scientific basis.” It will also extend the policy to treatments that are rooted in scientific findings and preliminary clinical experience “but currently have insufficient formal clinical testing to justify widespread clinical use.” The change was first reported by The Washington Post.

The new Google ads policy may put the heat on the stem cell clinic industry, which until recently has been largely unregulated and whose players have in some cases been accused of taking advantage of seriously ill patients, The Washington Post reported.

“We know that important medical discoveries often start as unproven ideas — and we believe that monitored, regulated clinical trials are the most reliable way to test and prove important medical advances,” Biddings said. “At the same time, we have seen a rise in bad actors attempting to take advantage of individuals by offering untested, deceptive treatments. Oftentimes, these treatments can lead to dangerous health outcomes and we feel they have no place on our platforms.”

The Google post included a quote from the president of the International Society for Stem Cell Research, Deepak Srivastava, who said the new policy is a “much-needed and welcome step to curb the marketing of unscrupulous medical products such as unproven stem cell therapies.”

The info is here.

Thursday, October 3, 2019

Empathy in the Age of the EMR

Danielle Ofri
The Lancet

Here is an excerpt:

Keeping the doctor-patient connection from eroding in the age of the EMR is an uphill battle. We all know that the eye contact Fildes depicts is a critical ingredient of communication and connection, but when the computer screen demands so much focus that the patient becomes a distraction, even an impediment, that connection is hopelessly elusive.

Recently, I was battling the EMR during a visit with a patient who had particularly complicated medical conditions. We hadn’t seen each other in more than a year, so there was much to catch up on. Each time she raised an issue, I turned to the computer to complete the requisite documentation for that concern. In that pause, however, my patient intuited a natural turn of conversation. Thinking that it was now her turn to talk, she would bring up the next thing on her mind. But of course I wasn’t finished with the last thing, so I would say, “Would you mind holding that thought for a second? I just need to finish this one thing…”

I’d turn back to the computer and fall silent to finish documenting. After a polite minute, she would apparently sense that it was again her turn in the conversation and thus begin her next thought. I was torn because I didn’t want to stop her in her tracks, but we’ve been so admonished about the risks inherent in distracted multitasking that I wanted to focus fully on the thought I was entering into the computer. I know it’s rude to cut someone off, but preserving a clinical train of thought is crucial for avoiding medical error.

The info is here.

Deception and self-deception

Peter Schwardmann and Joel van der Weele
Nature Human Behaviour (2019)

Abstract

There is ample evidence that the average person thinks he or she is more skillful, more beautiful and kinder than others and that such overconfidence may result in substantial personal and social costs. To explain the prevalence of overconfidence, social scientists usually point to its affective benefits, such as those stemming from a good self-image or reduced anxiety about an uncertain future. An alternative theory, first advanced by evolutionary biologist Robert Trivers, posits that people self-deceive into higher confidence to more effectively persuade or deceive others. Here we conduct two experiments (combined n = 688) to test this strategic self-deception hypothesis. After performing a cognitively challenging task, half of our subjects are informed that they can earn money if, during a short face-to-face interaction, they convince others of their superior performance. We find that the privately elicited beliefs of the group that was informed of the profitable deception opportunity exhibit significantly more overconfidence than the beliefs of the control group. To test whether higher confidence ultimately pays off, we experimentally manipulate the confidence of the subjects by means of a noisy feedback signal. We find that this exogenous shift in confidence makes subjects more persuasive in subsequent face-to-face interactions. Overconfidence emerges from these results as the product of an adaptive cognitive technology with important social benefits, rather than some deficiency or bias.

From the Discussion section

The results of our experiment demonstrate that the strategic environment matters for cognition about the self. We observe that deception opportunities increase average overconfidence relative to others, and that, under the right circumstances, increased confidence can pay off. Our data thus support the idea that overconfidence is strategically employed for social gain.

Our results do not allow for decisive statements about the exact cognitive channels underlying such self-deception. While we find some indications that an aversion to lying increases overconfidence, the evidence is underwhelming. When it comes to the ability to deceive others, we find that even when we control for the message, confidence leads to higher evaluations in some conditions. This is consistent with the idea that self-deception improves the deception technology of contestants, possibly by eliminating non-verbal give-away cues.

The research is here. 

Wednesday, October 2, 2019

Evolutionary Thinking Can Help Companies Foster More Ethical Culture

Brian Gallagher
ethicalsystems.org
Originally published August 20, 2019


Here are two excerpts:

How might human beings be mismatched to the modern business environment?

Many problems of the modern workplace have not been viewed through a mismatch lens, so at this point these are still hypotheses. But let’s take the role of managers, for example. Humans have a strong aversion to dominance, a result of the egalitarian nature that served us well in the small-scale societies in which we evolved. One of the biggest causes of job dissatisfaction, people report, is the interaction with their line manager. Many people find this relationship extremely stressful, as being dominated by someone who controls them and gives them orders infringes on their sense of autonomy. Or take the physical work environment, which looks nothing like our ancestral one: our ancestors were always outside, socializing as they worked and getting plenty of physical exercise while they hunted and gathered in tight social groups. Now we are forced to spend much of our daytime in tall buildings with small offices, surrounded by genetic strangers and with no natural scenes to speak of.

(cut)

What can business leaders learn from evolutionary psychology about how to structure relationships between bosses and employees?

One of the most important lessons from our research is that leaders are effective to the extent that they enable their teams to be effective. This sounds obvious, but leadership is really about the team and the followers. Individuals gladly follow leaders whom they respect for their skills and competence, and they have a hard time, by contrast, following a leader who is dominant and threatening. Yet human nature is also such that if you give someone power, they will use it—there is a fundamental leader-follower conflict. To keep managers from taking the easy route of threat and dominance, every healthy organization should have mechanisms in place to curtail their power. In small-scale societies, as the anthropological literature makes clear, leaders are kept in check because they can exercise influence only in their domain of expertise, nothing else. What’s more, there should be room to gossip about and ridicule leaders, and leaders should be regularly replaced to prevent them from building up a power base. Why not have feedback sessions where employees provide regular input on the assessment of their bosses? Why not include workers in the hiring of board members? Many public and private organizations in Europe are currently experimenting with these power-leveling mechanisms.

The info is here.