Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, March 18, 2020

‘Hunters’ explores justice, morality of revenge

Gabe Friedman
ijn.com
Originally posted 27 Feb 20

Here is an excerpt:

“The center of the series really revolves around the moral, ethical question, ‘Does it take evil to fight evil? Do you have to be a bad guy in order to effectively combat the bad guys?’” Logan Lerman, who plays the show’s protagonist Jonah Heidelbaum, says in a phone interview from Los Angeles. “I’m really curious to see what people’s responses are.”

The show, which was co-produced by Jordan Peele — the writer and director behind the horror blockbusters “Get Out” and “Us” — whirls into motion after Jonah’s grandmother is murdered in her Brooklyn apartment.

Jonah’s quest to discover the perpetrator brings him into contact with Meyer, who has assembled an “Ocean’s 11”-style team with members whose specialties range from combat to disguise. Jonah fits in immediately as a code-breaker because of his ability to recognize written patterns.

Meyer informs Jonah — one of multiple Jewish members of the squad — that there are many Nazis hiding in plain sight throughout the country.

In fact, in the show’s world, there is a large Nazi network that plans to establish a “Fourth Reich.” The hunters set to work to dismantle it, and they aren’t afraid to get their hands dirty (and very bloody) along the way.

The show imagines an alternate history in which some of the thousands of Nazis and Nazi collaborators who made their way to the US after World War II maintained their Nazi identities rather than hiding them.

The info is here.

How Salesforce Makes Decisions on Ethics and Social Issues

Kristin Broughton
The Wall Street Journal
Originally published 17 Feb 20

After facing public backlash in 2018 for doing business with U.S. immigration authorities amid the separation of migrant families at the southern U.S. border, Salesforce.com Inc., a company known for speaking up on social issues, hired a resident ethicist.

Paula Goldman joined the business software company early last year as chief ethical and humane use officer, a new role tasked with developing a framework for making decisions on complicated political issues.

Although the company’s contract with U.S. Customs and Border Protection remains in place, Salesforce has tackled other controversial issues. In her first year on the job, Ms. Goldman supervised the development of a corporate policy that prohibits customers from using Salesforce’s software to sell military-style firearms to private citizens.

She also is responsible for ensuring Salesforce’s products are developed with ethics in mind, particularly those involving artificial intelligence. One way she has done that is by introducing a process known as “consequence scanning,” an exercise that requires employees to document the potential unintended outcomes of releasing a new function, she said.

“We’re in this moment of correction where it’s like, ‘Oh yeah, this is our responsibility to integrate this question into the way we do business,’” Ms. Goldman said.

The info is here.

Tuesday, March 17, 2020

Trump's separation of families constitutes torture, doctors find

Amanda Holpuch
theguardian.com
Originally posted 25 Feb 20

Here is an excerpt:

Legal experts have argued family separation constituted torture, but this is the first time a medical group has reached the determination.

PHR volunteer psychiatrists evaluated 17 adults and nine children who had been separated for between 30 and 90 days. Most met the criteria for at least one mental health condition, including post-traumatic stress disorder, major depressive disorder or generalized anxiety disorder “consistent with, and likely linked to, the trauma of family separation”, according to the report.

Not only did the brutal family separation policy create trauma; that trauma was intensified by the families’ previous exposure to violence on their journey to the US and in their home countries of Honduras, Guatemala and El Salvador.

All but two of the adults evaluated by PHR said they had received death threats in their home countries and 14 out of the 17 adults said they were targeted by drug cartels. All were fearful their child would be harmed or killed if they remained at home.

Almost all the children had been drugged, kidnapped, poisoned or threatened by gangs before they left. One mother told investigators she moved her daughter to different schools in El Salvador several times so gang members couldn’t find her and kill her.

The info is here.

Some Researchers Wear Yellow Pants, but Even Fewer Participants Read Consent Forms

B. Douglas, E. McGorray, & P. Ewell
PsyArXiv
Originally published 5 Feb 20

Abstract

Though consent forms include important information, those experienced with behavioral research often observe that participants do not carefully read consent forms. Three studies examined participants’ reading of consent forms for in-person experiments. In each study, we inserted the phrase “some researchers wear yellow pants” into sections of the consent form and measured participants’ reading of the form by testing their recall of the color yellow. In Study 1, we found that the majority of participants did not read consent forms thoroughly. This suggests that overall, participants sign consent forms that they have not read, confirming what has been observed anecdotally and documented in other research domains. Study 2 examined which sections of consent forms participants read and found that participants were more likely to read the first two sections of a consent form (procedure and risks) than later sections (benefits and anonymity and confidentiality). Given that rates of recall of the target phrase were under 70% even when the sentence was inserted into earlier sections of the form, we explored ways to improve participant reading in Study 3. Theorizing that the presence of a researcher may influence participants’ retention of the form, we assigned participants to read the form with or without a researcher present. Results indicated that removing the researcher from the room while participants read the consent form decreased recall of the target phrase. Implications of these results and suggestions for future researchers are discussed.

The research is here.

Monday, March 16, 2020

Video Games Need More Complex Morality Systems

Hayes Madsen
screenrant.com
Originally published 26 Feb 20

Here is an excerpt:

Perhaps a bigger issue is the simple fact that games separate decisions into these two opposed ideas. There's a growing idea that games need to represent morality as shades of grey, rather than black and white. Titles like The Witcher 3 further this effort by trying to make each conflict not have a right or wrong answer, as well as consequences, but all too often the neutral path is ignored. Even with multiple moral options, games generally reward players for being good or evil. Take inFamous for example, as making moral choices rewards you with good or bad karma, which in turn unlocks new abilities and powers. The problem here is that great powers are locked away for players on either end, cordoning off gameplay based on your moral choices.

Video games need to make more of an effort to make any choice matter for players, and if they decide to go back and forth between good and evil, that should be represented, not discouraged. Things are seldom black and white, and for games to represent that properly there needs to be incentive across the board, whether the player wants to be good, evil, or anything in between.

Moral choices can shape the landscape of game worlds, even killing characters or entire races. Yet, choices don't always need to be so dramatic or earth-shattering. Characterization is important for making huge decisions, but the smaller day-to-day decisions often have a bigger impact on fleshing out characters.

The info is here.

U.S. Indian Health Service Doctor Indicted on Charges of Sexual Abuse

Christopher Weaver and Dan Frosch
The Wall Street Journal
Originally published 13 Feb 20

Here is an excerpt:

The new allegations aren’t the first about Dr. Ibarra-Perocier, some of the people familiar with the matter said. At least two nurses accused him internally of workplace sexual harassment in past years, the people said. Dr. Ibarra-Perocier’s wife, who left her job due to illness in 2017 and died the next year, was his supervisor during that time, they said.

In December, the HHS inspector general found the agency’s patient-protection policies don’t go far enough.

The inspectors concluded the agency had focused so narrowly on medical providers who commit child sexual abuse that it didn’t adequately direct employees on how to respond to other kinds of perpetrators, victims or types of abuse.

A separate White House task force convened to examine the widening scandal is expected to release additional recommendations for improving safety at the agency’s facilities next week.

The IHS also commissioned a review of its own handling of the Weber case that is expected to lead to additional changes. The private contractor the agency retained to do that work completed its report, but the agency has withheld the document, arguing that it is a record of a quality assurance program that by law is confidential.

“IHS is committed to transparency, accountability and continuous improvement,” an agency spokeswoman said in a January statement. “We also respect and protect patient privacy.”

The info is here.

Sunday, March 15, 2020

Will Past Criminals Reoffend? (Humans Are Terrible at Predicting; Algorithms Aren't Much Better)

Sophie Bushwick
Scientific American
Originally published 14 Feb 2020

Here is an excerpt:

Based on the wider variety of experimental conditions, the new study concluded that algorithms such as COMPAS and LSI-R are indeed better than humans at predicting risk. This finding makes sense to Monahan, who emphasizes how difficult it is for people to make educated guesses about recidivism. “It’s not clear to me how, in real life situations—when actual judges are confronted with many, many things that could be risk factors and when they’re not given feedback—how the human judges could be as good as the statistical algorithms,” he says. But Goel cautions that his conclusion does not mean algorithms should be adopted unreservedly. “There are lots of open questions about the proper use of risk assessment in the criminal justice system,” he says. “I would hate for people to come away thinking, ‘Algorithms are better than humans. And so now we can all go home.’”

Goel points out that researchers are still studying how risk-assessment algorithms can encode racial biases. For instance, COMPAS can say whether a person might be arrested again—but one can be arrested without having committed an offense. “Rearrest for low-level crime is going to be dictated by where policing is occurring,” Goel says, “which itself is intensely concentrated in minority neighborhoods.” Researchers have been exploring the extent of bias in algorithms for years. Dressel and Farid also examined such issues in their 2018 paper. “Part of the problem with this idea that you're going to take the human out of [the] loop and remove the bias is: it’s ignoring the big, fat, whopping problem, which is the historical data is riddled with bias—against women, against people of color, against LGBTQ,” Farid says.

The info is here.

Saturday, March 14, 2020

You’re Not Going to Kill Them With Kindness. You’ll Do Just the Opposite.

Judith Newman
The New York Times
Originally posted 8 Jan 20

It was New Year’s Eve, and my friends had just adopted a little girl, 4 years old, from China. The family was going around the table, suggesting what each thought the New Year’s resolution should be for the other. Fei Fei’s English was still shaky. When her turn came, though, she didn’t hesitate. She pointed at her new father, mother and sister in turn. “Be nice, be nice, be nice,” she said.

Fifteen years later, in this dark age for civility, a toddler’s cri de coeur resonates more than ever. In his recent remarks at the memorial service for Congressman Elijah Cummings, President Obama said, “Being a strong man includes being kind, and there’s nothing weak about kindness and compassion; nothing weak about looking out for others.” On a more pedestrian level, yesterday I walked into the Phluid Project, the NoHo gender-neutral shop where T-shirts have slogans like “Hatephobic” and “Be Your Self.” I asked the salesperson, “What is your current best seller?” She pointed to a shirt in the window imprinted with the slogan: “Be kind.”

So I’m not surprised that there’s been a little flurry of self-help books on basic human decency and what it will do for you.

Kindness is doing small acts for others without expecting anything in return. It’s the opposite of transactional, and therefore the opposite of what we’re seeing in our body politic today.

The info is here.

Friday, March 13, 2020

DoD unveils how it will keep AI in check with ethics principles

Scott Maucione
federalnewsnetwork.com
Originally posted 25 Feb 20

Here is an excerpt:

The principle areas are based on recommendations from a 15-month study by the Defense Innovation Board — a panel of science and technology experts from industry and academia.

The principles are as follows:

  1. Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.