Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Accountability.

Monday, February 1, 2021

Does civility pay?

Porath, C. L., & Gerbasi, A. (2015). Does civility pay?
Organizational Dynamics, 44(4), 281–286.

Abstract 

Being nice may bring you friends, but does it help or harm you in your career? After all, research by Timothy Judge and colleagues shows a negative relationship between a person’s agreeableness and income. Research by Amy Cuddy has shown that warm people are perceived to be less competent, which is likely to have negative career implications. People who buck social rules by treating people rudely and getting away with it tend to garner power. If you are civil you may be perceived as weak, and ignored or taken advantage of. Being kind or considerate may be hazardous to your self-esteem, goal achievement, influence, career, and income. Over the last two decades we have studied the costs of incivility, and the benefits of civility. We’ve polled tens of thousands of workers across industries around the world about how they’re treated on the job and the effects. The costs of incivility are enormous. Organizations and their employees would be much more likely to thrive if employees treated each other respectfully. Many see civility as an investment and are skeptical about the potential returns. Porath surveyed hundreds of employees across organizations spanning more than 17 industries and found that a quarter believe they will be less leader-like, and nearly 40 percent are afraid they’ll be taken advantage of, if they’re nice at work. Nearly half think it is better to flex your muscles to garner power. In network studies of a biotechnology firm and international MBAs, along with surveys and experiments, we address whether civility pays. In this article we discuss our findings and propose recommendations for leaders and organizations.

(cut)

Conclusions

Civility pays. It is a potent behavior you want to master to enhance your influence and effectiveness. It is unique in the sense that it elicits both warmth and competence, the two characteristics that account for over 90 percent of positive impressions. By being respectful you enhance, not deter, career opportunities and effectiveness.

Sunday, December 27, 2020

Do criminals freely decide to commit offences? How the courts decide

J. Kennett & A. McCay
The Conversation
Originally published 15 OCT 20

Here is an excerpt:

Expert witnesses were reportedly divided on whether Gargasoulas had the capacity to properly participate in his trial, despite suffering from paranoid schizophrenia and delusions.

A psychiatrist for the defence said Gargasoulas’ delusional belief system “overwhelms him”; the psychiatrist expressed concern Gargasoulas was using the court process as a platform to voice his belief he is the messiah.

A second forensic psychiatrist agreed Gargasoulas was “not able to rationally enter a plea”.

However, a psychologist for the prosecution assessed him as fit and the prosecution argued there was evidence from recorded phone calls that he was capable of rational thought.

Notwithstanding the opinion of the majority of expert witnesses, the jury found Gargasoulas was fit to stand trial, and later he was convicted and sentenced to life imprisonment.

Working from media reports, it is difficult to be sure precisely what happened in court, and we cannot know why the jury favoured the evidence suggesting he was fit to stand trial. However, it is interesting to consider whether research into the psychology of blame and punishment can shed any light on their decision.

Questions of consequence

Some psychologists argue judgements of blame are not always based on a balanced assessment of free will or rational control, as the law presumes. Sometimes we decide how much control or freedom a person possessed based upon our automatic negative responses to harmful consequences.

As the psychologist Mark Alicke says:
we simply don’t want to excuse people who do horrible things, regardless of how disordered their cognitive states may be.
When a person has done something very bad, we are motivated to look for evidence that supports blaming them and to downplay evidence that might excuse them by showing that they lacked free will.

Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.
For more information on ethics in AI, download the report.

Tuesday, August 4, 2020

When a Patient Regrets Having Undergone a Carefully and Jointly Considered Treatment Plan, How Should Her Physician Respond?

L. V. Selby and others
AMA J Ethics. 2020;22(5):E352-357.
doi: 10.1001/amajethics.2020.352.

Abstract

Shared decision making is best utilized when a decision is preference sensitive. However, a consequence of choosing between one of several reasonable options is decisional regret: wishing a different decision had been made. In this vignette, a patient chooses mastectomy to avoid radiotherapy. However, postoperatively, she regrets the more disfiguring operation and wishes she had picked the other option: lumpectomy and radiation. Although the physician might view decisional regret as a failure of shared decision making, the physician should reflect on the process by which the decision was made. If the patient’s wishes and values were explored and the decision was made in keeping with those values, decisional regret should be viewed as a consequence of decision making, not necessarily as a failure of shared decision making.

(cut)

Commentary

This case vignette highlights decisional regret, which is one of the possible consequences of the patient decision-making process when there are multiple treatment options available. Although the process of shared decision making, which appears to have been carried out in this case, is utilized to help guide the patient and the physician to come to a mutually acceptable and optimal health care decision, it clearly does not always obviate the risk of a patient’s regretting that decision after treatment. Ironically, the patient might end up experiencing more regret after participating in a decision-making process in which more rather than fewer options are presented and in which the patient perceives the process as collaborative rather than paternalistic. For example, among men with prostate cancer, those with lower levels of decisional involvement had lower levels of decisional regret. We argue that decisional regret does not mean that shared decision making is not best practice, even though it can result in patients being reminded of their role in the decision and associated personal regret with that decision.

The info is here.

Thursday, May 14, 2020

Is justice blind or myopic? An examination of the effects of meta-cognitive myopia and truth bias on mock jurors and judges

M. Pantazi, O. Klein, & M. Kissine
Judgment and Decision Making, 
Vol. 15, No. 2, March 2020, pp. 214-229

Abstract

Previous studies have shown that people are truth-biased in that they tend to believe the information they receive, even if it is clearly flagged as false. The truth bias has been recently proposed to be an instance of meta-cognitive myopia, that is, of a generalized human insensitivity towards the quality and correctness of the information available in the environment. In two studies we tested whether meta-cognitive myopia and the ensuing truth bias may operate in a courtroom setting. Based on a well-established paradigm in the truth-bias literature, we asked mock jurors (Study 1) and professional judges (Study 2) to read two crime reports containing aggravating or mitigating information that was explicitly flagged as false. Our findings suggest that jurors and judges are truth-biased, as their decisions and memory about the cases were affected by the false information. We discuss the implications of the potential operation of the truth bias in the courtroom, in the light of the literature on inadmissible and discredible evidence, and make some policy suggestions.

From the Discussion:

Fortunately, the judiciary system is to some extent shielded from intrusions of illegitimate evidence, since objections are most often raised before a witness’s answer or piece of evidence is presented in court. Therefore, most of the time, inadmissible or false evidence is prevented from entering the fact-finders’ mental representations of a case in the first place. Nevertheless, objections can also be raised after a witness’s response has been given. Such objections may not actually protect the fact-finders from the information that has already been presented. An important question that remains open from a policy perspective is therefore how we are to safeguard the rules of evidence, given the fact-finders’ inability to take such meta-information into account.

The research is here.

Sunday, May 3, 2020

Complicit silence in medical malpractice

Editorial
The Lancet, Volume 395, Issue 10223, p. 467
February 15, 2020

Clinicians and health-care managers displayed “a capacity for willful blindness” that allowed Ian Paterson to hide in plain sight—that is the uncomfortable opening statement of the independent inquiry into Paterson's malpractice, published on Feb 4, 2020. Paterson worked as a consultant surgeon from 1993 to 2011 in both private and National Health Service hospitals in West Midlands, UK. During that period, he treated thousands of patients, many of whom had surgery. Paterson demonstrated an array of abhorrent and unsafe activities over this time, including exaggerating patients' diagnoses to coerce them into having surgery, performing his own version of a mastectomy, which goes against internationally agreed oncological principles, and inappropriate conduct towards patients and staff.

The inquiry makes a range of valuable recommendations that cover regulatory reform, corporate accountability, information for patients, informed consent, complaints, and clinical indemnity. The crucial message is that these reforms must occur across both the NHS and the private sector and must be implemented earnestly and urgently. But many of the issues in the Paterson case cannot be regulated and flow from the murky waters of medical professionalism. At times during the 87 pages of patient testimony, patients suggested in hindsight they could see that other clinicians knew there was a problem with Paterson but did not say anything. The hurt and disappointment that patients felt with the medical profession are chilling.

The info is here.

Wednesday, April 29, 2020

Characteristics of Faculty Accused of Academic Sexual Misconduct in the Biomedical and Health Sciences

Espinoza M, Hsiehchen D.
JAMA. 2020;323(15):1503–1505.
doi:10.1001/jama.2020.1810

Abstract

Despite protections mandated in educational environments, unwanted sexual behaviors have been reported in medical training. Policies to combat such behaviors need to be based on better understanding of the perpetrators. We characterized faculty accused of sexual misconduct resulting in institutional or legal actions that proved or supported guilt at US higher education institutions in the biomedical and health sciences.

Discussion

Of biomedical and health sciences faculty accused of sexual misconduct resulting in institutional or legal action, a majority were full professors, chairs or directors, or deans. Sexual misconduct was rarely an isolated event. Accused faculty frequently resigned or remained in academics, and few were sanctioned by governing boards.

Limitations include that only data on accused faculty who received media attention or were involved in legal proceedings were captured. In addition, the duration of behaviors, the exact number of targets, and the outcome data could not be identified for all accused faculty. Thus, this study cannot determine the prevalence of faculty who commit sexual misconduct, and the characteristics may not be generalizable across institutions.

The lack of transparency in investigations suggests that misconduct behaviors may not have been wholly captured by the public documents. Efforts to eliminate nondisclosure agreements are needed to enhance transparency. Further work is needed on mechanisms to prevent sexual misconduct at teaching institutions.

The info is here.

Wednesday, April 22, 2020

Your Code of Conduct May Be Sending the Wrong Message

F. Gino, M. Kouchaki, & Y. Feldman
Harvard Business Review
Originally posted 13 March 20


Here is an excerpt:

We examined the relationship between the language used (personal or impersonal) in these codes and corporate illegality. Research assistants blind to our research questions and hypotheses coded each document based on the degree to which it used “we” or “member/employee” language. Next, we searched media sources for any type of illegal acts these firms may have been involved in, such as environmental violations, anticompetitive actions, false claims, and fraudulent actions. Our analysis showed that firms that used personal language in their codes of conduct were more likely to be found guilty of illegal behaviors.

We found this initial evidence to be compelling enough to dig further into the link between personal “we” language and unethical behavior. What would explain such a link? We reasoned that when language communicating ethical standards is personal, employees tend to assume they are part of a community where members are easygoing, helpful, cooperative, and forgiving. By contrast, when the language is impersonal — for example, “organizational members are expected to put customers first” — employees feel they are part of a transactional relationship in which members are more formal and distant.

Here’s the problem: When we view our organization as tolerant and forgiving, we believe we’re less likely to be punished for misconduct. Across nine different studies, using data from lab- and field-based experiments as well as a large dataset of S&P firms, we find that personal language (“we,” “us”) leads to less ethical behavior than impersonal language (“employees,” “members”) does, apparently because people encountering more personal language believe their organization is less serious about punishing wrongdoing.
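
To make the coding approach above concrete, here is a minimal sketch in Python of how a document could be scored for personal versus impersonal language. It is an illustration only, not the authors' instrument: the study relied on human coders, and the word lists and scoring rule below are assumptions.

    import re
    from collections import Counter

    # Hypothetical word lists; the published study used human coders rather than these exact terms.
    PERSONAL = {"we", "us", "our", "ours"}
    IMPERSONAL = {"employee", "employees", "member", "members", "staff", "personnel"}

    def language_score(text: str) -> float:
        """Return a score in [-1, 1]: positive means more personal language, negative more impersonal."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        personal = sum(counts[w] for w in PERSONAL)
        impersonal = sum(counts[w] for w in IMPERSONAL)
        total = personal + impersonal
        return 0.0 if total == 0 else (personal - impersonal) / total

    sample = "We put our customers first. Employees are expected to report violations."
    print(round(language_score(sample), 2))  # 0.33: this sample leans toward personal language

A full analysis along the lines the authors describe would then relate such scores to externally recorded violations across firms.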

The info is here.

Monday, April 20, 2020

Europe plans to strictly regulate high-risk AI technology

Nicholas Wallace
sciencemag.org
Originally published 19 Feb 20

Here is an excerpt:

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The info is here.

Sunday, April 19, 2020

On the ethics of algorithmic decision-making in healthcare

Grote T, Berens P
Journal of Medical Ethics 
2020;46:205-211.

Abstract

In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection of the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

From the Conclusion

In this paper, we aimed at examining which opportunities and pitfalls machine learning potentially provides to enhance medical decision-making on epistemic and ethical grounds. As should have become clear, enhancing medical decision-making by deferring to machine learning algorithms requires trade-offs at different levels. Clinicians, or their respective healthcare institutions, are facing a dilemma: while there is plenty of evidence of machine learning algorithms outsmarting their human counterparts, their deployment comes at the cost of high degrees of uncertainty. On epistemic grounds, relevant uncertainty promotes risk-averse decision-making among clinicians, which then might lead to impoverished medical diagnosis. From an ethical perspective, deferring to machine learning algorithms blurs the attribution of accountability and imposes health risks on patients. Furthermore, the deployment of machine learning might also foster a shift of norms within healthcare. It needs to be pointed out, however, that none of the issues we discussed presents a knockout argument against deploying machine learning in medicine, and our article is not intended this way at all. On the contrary, we are convinced that machine learning provides plenty of opportunities to enhance decision-making in medicine.

The article is here.

Friday, April 17, 2020

Toward equipping Artificial Moral Agents with multiple ethical theories

George Rautenbach and C. Maria Keet
arXiv:2003.00935v1 [cs.CY] 2 Mar 2020

Abstract

Artificial Moral Agents (AMAs) are the subject of a field in computer science with the purpose of creating autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist.

Of the currently theorised AMAs, all research and design has been done with either none or at most one specified normative ethical theory as a basis. This is problematic because it narrows down an AMA’s functional ability and versatility, which in turn causes moral outcomes that only a limited number of people agree with (thereby undermining an AMA’s ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical norms (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
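
As a rough illustration of what serialising an ethical theory for machine consumption might look like, here is a minimal Python sketch that writes a simple theory description to XML. The element names and structure are assumptions made for illustration; they are not the paper's actual three-layer model or its XSD schema.

    import xml.etree.ElementTree as ET

    def serialise_theory(name, principle, rules):
        """Serialise a simplified normative theory description to an XML string."""
        theory = ET.Element("ethicalTheory", attrib={"name": name})
        ET.SubElement(theory, "corePrinciple").text = principle
        rules_el = ET.SubElement(theory, "rules")
        for rule in rules:
            ET.SubElement(rules_el, "rule").text = rule
        return ET.tostring(theory, encoding="unicode")

    print(serialise_theory(
        "Utilitarianism",
        "Choose the action that maximises overall well-being.",
        ["Estimate the welfare impact of each available action.",
         "Select the action with the highest expected aggregate welfare."],
    ))

An AMA could, in principle, load such serialised profiles and switch between them when reasoning on behalf of different people or businesses, which is the kind of flexibility the authors argue a single-theory design lacks.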

From the Discussion:

A big philosophical grey area in AMAs is with regard to agency: that is, an entity’s ability to understand available actions and their moral values and to freely choose between them. Whether or not machines can truly understand their decisions, and whether they can be held accountable for them, is a matter of philosophical discourse. Whatever the answer may be, AMA agency poses a difficult question that must be addressed.

The question is as follows: should the machine act as an agent itself, or should it act as an informant for another agent? If an AMA reasons for another agent (e.g., a person) then reasoning will be done with that person as the actor and the one who holds responsibility. This has the disadvantage of putting that person’s interest before other morally considerable entities, especially with regards to ethical theories like egoism. Making the machine the moral agent has the advantage of objectivity where multiple people are concerned, but makes it harder to assign blame for its actions - a machine does not care for imprisonment or even disassembly. A Luddite would say it has no incentive to do good to humanity. Of course, a deterministic machine does not need incentive at all, since it will always behave according to the theory it is running. This lack of fear or “personal interest” can be good, because it ensures objective reasoning and fair consideration of affected parties.

The paper is here.

Friday, March 27, 2020

Human Trafficking Survivor Settles Lawsuit Against Motel Where She Was Held Captive

Todd Bookman
npr.org
Originally posted 20 Feb 20

Here is an excerpt:

Legal experts and anti-trafficking groups say her 2015 case was the first filed against a hotel or motel for its role in a trafficking crime.

"It is not that any hotel is liable just because trafficking occurred on their premises," explains Cindy Vreeland, a partner at the firm WilmerHale, which handled Ricchio's case pro bono. "The question is whether the company that's been sued knew or should have known about the trafficking."

After a number of appeals and delays, the case finally settled in December 2019 with Ricchio receiving an undisclosed monetary award. Owners of the Shangri-La Motel didn't respond to a request for comment.

"I never thought it would be, like, an eight-year process," Ricchio says. "Anything in the court system seems to take forever."

That slow process isn't deterring other survivors of trafficking from bringing their own suits.

According to the Human Trafficking Institute, there were at least 25 new cases filed nationwide against hotels and motels last year under the Trafficking Victims Protection Act (TVPA).

Some of the named defendants include major chains such as Hilton, Marriott and Red Roof Inn.

"You can't just let anything happen on your property, turn a blind eye and say, 'Too bad, so sad, I didn't do it, so I'm not responsible,' " says Paul Pennock with the firm Weitz & Luxenberg.

The info is here.

Wednesday, January 22, 2020

‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground

Joe McKendrick
Forbes.com
Originally published 22 Dec 19

Here is an excerpt:

Inevitably, “there will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination,” warns Mike Walsh, CEO of Tomorrow, and author of The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, in a recent Harvard Business Review article. “At the very least trust, the algorithmic processes at the heart of your business. Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as ‘the algorithm made me do it.’”

It’s more than legal considerations that should drive new thinking about AI ethics. It’s about “maintaining trust between organizations and the people they serve, whether clients, partners, employees, or the general public,” a recent report out of Accenture maintains. The report’s authors, Ronald Sandler and John Basl, both with Northeastern University’s philosophy department, and Steven Tiell of Accenture, state that a well-organized data ethics capacity can help organizations manage risks and liabilities associated with such data misuse and negligence.

“It can also help organizations clarify and make actionable mission and organizational values, such as responsibilities to and respect for the people and communities they serve,” Sandler and his co-authors advocate. A data ethics capability also offers organizations “a path to address the transformational power of data-driven AI and machine learning decision-making in an anticipatory way, allowing for proactive responsible development and use that can help organizations shape good governance, rather than inviting strict oversight.”

The info is here.

Saturday, January 4, 2020

Robots in Finance Could Wipe Out Some of Its Highest-Paying Jobs

Lananh Nguyen
Bloomberg.com
Originally posted 6 Dec 19

Robots have replaced thousands of routine jobs on Wall Street. Now, they’re coming for higher-ups.

That’s the contention of Marcos Lopez de Prado, a Cornell University professor and the former head of machine learning at AQR Capital Management LLC, who testified in Washington on Friday about the impact of artificial intelligence on capital markets and jobs. The use of algorithms in electronic markets has automated the jobs of tens of thousands of execution traders worldwide, and it’s also displaced people who model prices and risk or build investment portfolios, he said.

“Financial machine learning creates a number of challenges for the 6.14 million people employed in the finance and insurance industry, many of whom will lose their jobs -- not necessarily because they are replaced by machines, but because they are not trained to work alongside algorithms,” Lopez de Prado told the U.S. House Committee on Financial Services.

During the almost two-hour hearing, lawmakers asked experts about racial and gender bias in AI, competition for highly skilled technology workers, and the challenges of regulating increasingly complex, data-driven financial markets.

The info is here.

Friday, January 3, 2020

Robotics researchers have a duty to prevent autonomous weapons

Christoffer Heckman
theconversation.com
Originally posted 4 Dec 19

Here is an excerpt:

As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, the ability for a computer to identify objects in an image: in 2010, the state of the art was successful only about half of the time, and it was stuck there for years. Today, though, the best algorithms as shown in published papers are now at 86% accuracy. That advance alone allows autonomous robots to understand what they are seeing through the camera lenses. It also shows the rapid pace of progress over the past decade due to developments in AI.

This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.

But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions related to privacy and security have been fundamentally altered. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.

The info is here.

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and D. M. Søndergaard (Eds.), Designing Robots.

Abstract
This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. In particular, in the first movement, we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened, i.e., to do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.

The book chapter is here.

Wednesday, December 4, 2019

Veterans Must Also Heal From Moral Injury After War

Camillo Mac Bica
truthout.org
Originally published Nov 11, 2019

Here are two excerpts:

Humankind has identified and internalized a set of values and norms through which we define ourselves as persons, structure our world and render our relationship to it — and to other human beings — comprehensible. These values and norms provide the parameters of our being: our moral identity. Consequently, we now have the need and the means to weigh concrete situations to determine acceptable (right) and unacceptable (wrong) behavior.

Whether an individual chooses to act rightly or wrongly, according to or in violation of her moral identity, will affect whether she perceives herself as true to her personal convictions and to others in the moral community who share her values and ideals. As the moral gravity of one’s actions and experiences on the battlefield becomes apparent, a warrior may suffer profound moral confusion and distress at having transgressed her moral foundations, her moral identity.

Guilt is, simply speaking, the awareness of having transgressed one’s moral convictions and the anxiety precipitated by a perceived breakdown of one’s ethical cohesion — one’s integrity — and an alienation from the moral community. Shame is the loss of self-esteem consequent to a failure to live up to personal and communal expectations.

(cut)

Having completed the necessary philosophical and psychological groundwork, veterans can now begin the very difficult task of confronting the experience. That is, of remembering, reassessing and morally reevaluating their responsibility and culpability for their perceived transgressions on the battlefield.

Reassessing their behavior in combat within the parameters of their increased philosophical and psychological awareness, veterans realize that the programming to which they were subjected and the experience of war as a survival situation are causally connected to those specific battlefield incidents and behaviors, theirs and/or others’, that weigh heavily on their consciences — their moral injury. As a consequence, they understand these influences as extenuating circumstances.

Finally, as they morally reevaluate their actions in war, they see these incidents and behaviors in combat not as justifiable, but as understandable, perhaps even excusable, and their culpability mitigated by the fact that those who determined policy, sent them to war, issued the orders, and allowed the war to occur and/or to continue unchallenged must share responsibility for the crimes and horror that inevitably characterize war.

The info is here.

Tuesday, December 3, 2019

AI Ethics is All About Power

Khari Johnson
venturebeat.com
Originally published Nov 11, 2019


Here is an excerpt:

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has only made incremental progress, and funding for startups with Latinx or black founders still lags behind those for white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

The info is here.

Editor's Note: The article covers a huge swath of information.

Monday, December 2, 2019

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Cade Metz
The New York Times
Originally published Nov 11, 2019

Here is the conclusion:

“This is hard. You need a lot of time and care,” he said. “We found an obvious bias. But how many others are in there?”

Dr. Bohannon said computer scientists must develop the skills of a biologist. Much as a biologist strives to understand how a cell works, software engineers must find ways of understanding systems like BERT.

In unveiling the new version of its search engine last month, Google executives acknowledged this phenomenon. And they said they tested their systems extensively with an eye toward removing any bias.

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

The whole article is here.

Friday, November 29, 2019

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad and others
Nature Human Behaviour (2019)
Published: 28 October 2019


Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

The research is here.