Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Ethical Decision-making.

Monday, August 30, 2021

Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?

Lara, F. 
Sci Eng Ethics 27, 42 (2021). 
https://doi.org/10.1007/s11948-021-00318-5

Abstract

Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.

From the Conclusion

The key in moral education is that it be pursued while respecting and promoting personal autonomy. Educators should avoid the mistake of limiting the capacities of individuals to freely and reflectively determine their own values by attempting to enhance their behaviour directly. On the contrary, they must do what they can to ensure that those being educated, at least at an advanced age, actively participate in this process in order to assume the values that will define them and give meaning to their lives. The problem with current proposals for moral enhancement through new technologies is that they treat the subject of their interventions as a "passive recipient". Moral bioenhancement does so because it aims to change the motivation of the individual by bypassing the reflection and gradual assimilation of values that should accompany any adoption of new identity traits. This constitutes a passivity that would also occur in proposals for moral AI enhancement based on ethical machines that either replace humans in decision-making, or surreptitiously direct them to do the right thing, or simply advise them based on their own supposedly undisputed values.

Wednesday, April 15, 2020

How to be a more ethical Amazon shopper during the pandemic

Samantha Murphy Kelly
CNN.com
Updated April 13, 2020

Here is an excerpt:

For customers who may feel uneasy about these workplace issues but are desperate for household goods, there are a range of options to shop more consciously, from avoiding unnecessary purchases on the platform and tipping Amazon's grocery delivery workers handsomely to buying more from local stores online. But there are conflicting views on whether the best way to be an ethical shopper at this moment means not shopping from Amazon at all, especially given its position as one of the biggest hirers during a severe labor market crunch.

"If people choose to work at Amazon, we should respect their decisions," said Peter Singer, an ethics professor at Princeton University and author of "The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically."

The US Department of Labor announced Thursday that about 6.6 million people filed for unemployment benefits in the last week alone, bringing the number of lost jobs during the pandemic to nearly 17 million. Singer highlighted how delivery services are one of the few areas in which businesses are hiring.

But Christian Smalls, the former Amazon employee who helped organize a protest calling for senior warehouse officials to close the Staten Island, New York, facility for deep cleaning after multiple cases of the virus emerged there, advises otherwise. (The company later fired Smalls, saying he did not stay in quarantine after exposure to someone who tested positive.)

"If you want to practice real social distancing, stop pressing the buy button," Smalls told CNN Business. "You'll be saving lives. I understand that people need groceries and certain items, depending where you live, are limited. But people are buying things they don't need and it's putting workers' health at risk."

Although the issue is complex, shoppers who decide to continue using Amazon, or any online delivery platform, can keep a few best practices in mind.

The info is here.

Friday, December 13, 2019

The Ethical Dilemma at the Heart of Big Tech Companies

Emanuel Moss and Jacob Metcalf
Harvard Business Review
Originally posted November 14, 2019

Here is an excerpt:

The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures:

Meritocracy: Although originally coined as a derisive term in satirical science fiction by British sociologist Michael Young, meritocracy infuses everything in Silicon Valley from hiring practices to policy positions, and retroactively justifies the industry’s power in our lives. As such, ethics is often framed with an eye toward smarter, better, and faster approaches, as if the problems of the tech industry can be addressed through those virtues. Given this, it is not surprising that many within the tech industry position themselves as the actors best suited to address ethical challenges, rather than less technically inclined stakeholders, including elected officials and advocacy groups. In our interviews, this manifested in relying on engineers to use their personal judgement by “grappling with the hard questions on the ground,” trusting them to discern and to evaluate the ethical stakes of their own products. While there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and “thinking hard” about the potential harms of a product in the real world is not the same as thoroughly understanding how someone (whose life is very different from a software engineer’s) might be affected by things like predictive policing or facial recognition technology, as obvious examples. Ethics owners find themselves being pulled between technical staff that assert generalized competence over many domains and their own knowledge that ethics is a specialized domain that requires deep contextual understanding.

The info is here.

Monday, October 7, 2019

Ethics a distant second to profits in Silicon Valley

Gabriel Fairman
www.sdtimes.com
Originally published September 9, 2019

Here is an excerpt:

For ethics to become a part of the value system that drives behavior in Silicon Valley, it would have to be incentivized as such. I have a hard time envisioning a world where ethics can offer shareholders huge returns. Ethics is about doing the right thing, and the right thing and the lucrative thing don’t necessarily go hand in hand.

Everyone can understand ethics. Basic questions such as “Will this be good for the world in a year, 10 years or 20 years?” and “Would I want this for my kids?” are easy litmus tests to differentiate between ethical and unethical conduct. The challenge is that considerations of ethics slow down development by raising challenges and concerns early on. Ethics is about amplifying potential problems that can be foreseen down the road.

On the other hand, venture-funded start-ups are about minimizing the ramifications of these problems as they move on quickly. How can ethics compete with billion-dollar exits? It can’t. Ethics are just this thing that we read about in articles or hear about in lectures. It is not driving day-to-day decision-making. You listen to people in boardrooms asking, “How will this impact our valuation?,” or “What is the ROI of this initiative?” but you don’t hear top-level execs brainstorming about how their product or company could be more ethical because there is no compensation tied to that. The way we have built our world, ethics are just fluff.

We are also extraordinarily good at differentiating private vs. public lives. Many people working at tech companies don’t allow their kids to use electronic devices ubiquitously or would not want their kids bossed around by an algorithm as they let go of full-time employee benefits. But they promote these things and further them because these things are highly profitable, not because they are fundamentally good. This key distinction between private and public behavior allows people to behave in wildly hypocritical ways, by helping advance the very things they do not want in their own homes.

The info is here.

Thursday, June 13, 2019

Moral dilemmas in (not) treating patients who feel they are a burden

Metselaar S, Widdershoven G.
[published online April 23, 2019]
Bioethics. 2019;33(4):431-438.

Abstract

Working as clinical ethicists in an academic hospital, we find that practitioners tend to take a principle-based approach to moral dilemmas when it comes to (not) treating patients who feel like a burden, in which respect for autonomy tends to trump other principles. We argue that this approach insufficiently deals with the moral doubts of professionals with regard to feeling that you are a burden as a motive to decline or withdraw from treatment. Neither does it adequately take into account the specific needs of the patient that might underlie their feeling of being a burden to others. We propose a care ethics approach as an alternative. It focuses on being attentive and responsive to the caring needs of those involved in the care process—which can be much more specific than either receiving or withdrawing from treatment. This approach considers these needs in the context of the patient's identity, biography and relationships, and regards autonomy as relational rather than as individual. We illustrate the difference between these two approaches by means of the case of Mrs K. Furthermore, we show that a care ethics approach is in line with interventions that are found to alleviate feeling a burden, and maintain that facilitating moral case deliberation among practitioners can support them in taking a care ethics approach to moral dilemmas in (not) treating patients who feel like a burden.

The info is here.

Wednesday, June 12, 2019

'Ethics Bots' and Other Ways to Move Your Code of Business Conduct Beyond Puffery

Michael Blanding
Harvard Business School Working Knowledge
Originally posted May 14, 2019

Here is an excerpt:

Even if not ready to develop or deploy such technologically advanced solutions, companies can still make their ethics codes more intuitive, interactive, and practical for day-to-day decision-making, Soltes says. That may mean reducing the number of broad-brush value statements and uninspired clip-art, instead making the document more concise in describing practical guidelines for the company’s employees.

He also recommends thinking beyond the legal department to bring in other areas of the company, such as marketing, communications, or consumer behavior specialists, to help design a code that will be understandable to employees. Uber, for example, rolled out a mobile app-focused version of its ethics code to better serve its employees, who are younger and more tech savvy.

Lastly, Soltes advises that firms not be afraid to experiment. An ethics code shouldn’t be a monolith, but rather a living document that can be adapted to the expanding needs of a firm and its employees. After rolling out a policy to a subgroup of employees, for example, companies should evaluate how the code is actually being used in practice and how it can be further refined and improved.

That kind of creativity can help companies stay away from the scrutiny of regulators and avoid negative headlines. “Ultimately, the goal should not simply be to just create a legal document, but instead a valuable tool that helps cultivate the kind of behavior and culture the firm wants to support on a day-to-day basis,” Soltes says.

The info is here.

Wednesday, May 15, 2019

Students' Ethical Decision‐Making When Considering Boundary Crossings With Counselor Educators

Stephanie T. Burns
Counseling and Values
First published: 10 April 2019
https://doi.org/10.1002/cvj.12094

Abstract

Counselor education students (N = 224) rated 16 boundary‐crossing scenarios involving counselor educators. They viewed boundary crossings as unethical and were aware of power differentials between the 2 groups. Next, they rated the scenarios again, after reviewing 1 of 4 ethical informational resources: relevant standards in the ACA Code of Ethics (American Counseling Association, 2014), 2 different boundary‐crossing decision‐making models, and a placebo. Although participants rated all resources except the placebo as moderately helpful, these resources had little to no influence on their ethical decision‐making. Only 47% of students in the 2 ethical decision‐making model groups reported they would use the model they were exposed to in the future when contemplating boundary crossings.

Here is a portion from Implications for Practice and Training

Counselor education students took conservative stances toward the 16 boundary-crossing scenarios with counselor educators. These findings support results of previous researchers who stated that students struggle with even the smallest of boundary crossings (Kozlowski et al., 2014) because they understand that power differentials have implications for grades, evaluations, recommendation letters, and obtaining authentic skill development feedback (Gu et al., 2011). Counselor educators need to be aware that students rated the following as being as abusive as having sex with a student: withholding appropriate feedback because of the counselor educator’s personal feelings toward the student, failing to provide students with required supervision time in practicum, and taking first authorship when the student performed all the work on the submission.

The research is here.

Monday, February 25, 2019

A philosopher’s life

Margaret Nagle
UMaineToday
Fall/Winter 2018

Here is an excerpt:

Mention philosophy and for most people, images of the bearded philosophers of Ancient Greece pontificating in the marketplace come to mind. Today, philosophers are still in public arenas, Miller says, but now that engagement with society is in K–12 education, medicine, government, corporations, environmental issues and so much more. Public philosophers are students of community knowledge, learning as much as they teach.

The field of clinical ethics, which helps patients, families and clinicians address ethical issues that arise in health care, emerged in recent decades as medical decisions became more complex in an increasingly technological society. Those questions can range from when to stop aggressive medical intervention to whether expressed breast milk from a patient who uses medical marijuana should be given to her baby in the neonatal intensive care unit.

As a clinical ethicist, Miller provides training and consultation for physicians, nurses and other medical personnel. She also may be called on to consult with patients and their family members. Unlike urban areas where a city hospital may have a whole department devoted to clinical ethics, rural health care settings often struggle to find such philosophy-focused resources.

That’s why Miller does what she does in Maine.

Miller focuses on “building clinical ethics capacity” in the state’s rural health care settings, providing training, connecting hospital personnel to readings and resources, and facilitating opportunities to maintain ongoing exploration of critical issues.

The article is here.

Sunday, February 3, 2019

Leaders matter morally: The role of ethical leadership in shaping employee moral cognition and misconduct.

Moore, C., Mayer, D. M., Chiang, F. F. T., Crossley, C., Karlesky, M. J., & Birtch, T. A. (2019). Journal of Applied Psychology, 104(1), 123-145.

Abstract

There has long been interest in how leaders influence the unethical behavior of those who they lead. However, research in this area has tended to focus on leaders’ direct influence over subordinate behavior, such as through role modeling or eliciting positive social exchange. We extend this research by examining how ethical leaders affect how employees construe morally problematic decisions, ultimately influencing their behavior. Across four studies, diverse in methods (lab and field) and national context (the United States and China), we find that ethical leadership decreases employees’ propensity to morally disengage, with ultimate effects on employees’ unethical decisions and deviant behavior. Further, employee moral identity moderates this mediated effect. However, the form of this moderation is not consistent. In Studies 2 and 4, we find that ethical leaders have the largest positive influence over individuals with a weak moral identity (providing a “saving grace”), whereas in Study 3, we find that ethical leaders have the largest positive influence over individuals with a strong moral identity (catalyzing a “virtuous synergy”). We use these findings to speculate about when ethical leaders might function as a “saving grace” versus a “virtuous synergy.” Together, our results suggest that employee misconduct stems from a complex interaction between employees, their leaders, and the context in which this relationship takes place, specifically via leaders’ influence over employees’ moral cognition.

Here is the Conclusion:

Our research points to one of the reasons why 'cleaning house' of morally compromised leaders after scandals may be less effective than we might expect. The fact that leadership affects the extent to which subordinates morally disengage means that their influence may be more profound and nefarious than one might conclude given earlier understandings of the mechanisms through which ethical leadership elicits its outcomes. One can eliminate perverse incentives and remove poor role models, but once a leader shifts how subordinates cognitively construe decisions with ethical import, their continuing influence on employee misconduct may be harder to undo.

The info is here.

Monday, December 10, 2018

What makes a ‘good’ clinical ethicist?

Trevor Bibler
Baylor College of Medicine Blog
Originally posted October 12, 2018

Here is an excerpt:

Some hold that the complexity of clinical ethics consultations couldn’t be reduced to multiple-choice questions based on a few sources, arguing that creating multiple-choice questions that reflect the challenges of doing clinical ethics is nearly impossible. Most of the time, the HEC-C Program is careful to emphasize that they are testing knowledge of issues in clinical ethics, not the ethicist’s ability to apply this knowledge to the practice of clinical ethics.

This is a nuanced distinction that may be lost on those outside the field. For example, an administrator might view the HEC-C Program as separating a good ethicist from an inadequate ethicist simply because they have 400 hours of experience and can pass a multiple-choice exam.

Others disagree with the source material (called “core references”) that serves as the basis for exam questions. I believe the core references, if repetitious, are important works in the field. My concern is that these works do not pay sufficient attention to some of the most pressing and challenging issues in clinical ethics today: income inequality, care for non-citizens, drug abuse, race, religion, sex and gender, to name a few areas.

Also, it’s feasible that inadequate ethicists will become certified. I can imagine an ethicist might meet the requirements, but fall short of being a good ethicist because in practice they are poor communicators, lack empathy, are authoritarian when analyzing ethics issues, or have an off-putting presence.

On the other hand, I know some ethicists I would consider experts in the field who are not going to undergo the certification process because they disagree with it. Both of these scenarios show that HEC certification should not be the single requirement that separates a good ethicist from an inadequate ethicist.

The info is here.

Thursday, November 29, 2018

Ethical Free Riding: When Honest People Find Dishonest Partners

Jörg Gross, Margarita Leib, Theo Offerman, & Shaul Shalvi
Psychological Science
https://doi.org/10.1177/0956797618796480

Abstract

Corruption is often the product of coordinated rule violations. Here, we investigated how such corrupt collaboration emerges and spreads when people can choose their partners versus when they cannot. Participants were assigned a partner and could increase their payoff by coordinated lying. After several interactions, they were either free to choose whether to stay with or switch their partner or forced to stay with or switch their partner. Results reveal that both dishonest and honest people exploit the freedom to choose a partner. Dishonest people seek a partner who will also lie—a “partner in crime.” Honest people, by contrast, engage in ethical free riding: They refrain from lying but also from leaving dishonest partners, taking advantage of their partners’ lies. We conclude that to curb collaborative corruption, relying on people’s honesty is insufficient. Encouraging honest individuals not to engage in ethical free riding is essential.

Conclusion
The freedom to select partners is important for the establishment of trust and cooperation. As we show here, however, it is also associated with potential moral hazards. For individuals who seek to keep the risk of collusion low, policies providing the freedom to choose one’s partners should be implemented with caution. Relying on people’s honesty may not always be sufficient because honest people may be willing to tolerate others’ rule violations if they stand to profit from them. Our results clarify yet again that people who are not willing to turn a blind eye and stand up to corruption should receive all praise.

Does AI Ethics Need to be More Inclusive?

Patrick Lin
Forbes.com
Originally posted October 29, 2018

Here is an excerpt:

Ethics is more than a survey of opinions

First, as the study’s authors allude to in their Nature paper and elsewhere, public attitudes don’t dictate what’s ethical or not.  People believe all kinds of crazy things—such as that slavery should be permitted—but that doesn’t mean those ethical beliefs are true or have any weight.  So, capturing responses of more people doesn’t necessarily help figure out what’s ethical or not.  Sometimes, more is just more, not better or even helpful.

This is the difference between descriptive ethics and normative ethics.  The former is more like sociology that simply seeks to describe what people believe, while the latter is more like philosophy that seeks reasons for why a belief may be justified (or not) and how things ought to be.

Dr. Edmond Awad, lead author of the Nature paper, cautioned, “What we are trying to show here is descriptive ethics: peoples’ preferences in ethical decisions.  But when it comes to normative ethics, which is how things should be done, that should be left to experts.”

Nonetheless, public attitudes are a necessary ingredient in practical policymaking, which should aim at the ethical but doesn’t always hit that mark.  If expert judgments in ethics diverge too much from public attitudes—asking more from a population than what they’re willing to agree to—that’s a problem for implementing the policy, and a resolution is needed.

The info is here.

Sunday, October 21, 2018

Leaders matter morally: The role of ethical leadership in shaping employee moral cognition and misconduct.

Moore, C., Mayer, D. M., Chiang, F. F. T., Crossley, C., Karlesky, M. J., & Birtch, T. A.
Journal of Applied Psychology. Advance online publication.
http://dx.doi.org/10.1037/apl0000341

Abstract

There has long been interest in how leaders influence the unethical behavior of those who they lead. However, research in this area has tended to focus on leaders’ direct influence over subordinate behavior, such as through role modeling or eliciting positive social exchange. We extend this research by examining how ethical leaders affect how employees construe morally problematic decisions, ultimately influencing their behavior. Across four studies, diverse in methods (lab and field) and national context (the United States and China), we find that ethical leadership decreases employees’ propensity to morally disengage, with ultimate effects on employees’ unethical decisions and deviant behavior. Further, employee moral identity moderates this mediated effect. However, the form of this moderation is not consistent. In Studies 2 and 4, we find that ethical leaders have the largest positive influence over individuals with a weak moral identity (providing a “saving grace”), whereas in Study 3, we find that ethical leaders have the largest positive influence over individuals with a strong moral identity (catalyzing a “virtuous synergy”). We use these findings to speculate about when ethical leaders might function as a “saving grace” versus a “virtuous synergy.” Together, our results suggest that employee misconduct stems from a complex interaction between employees, their leaders, and the context in which this relationship takes place, specifically via leaders’ influence over employees’ moral cognition.

Beginning of the Discussion section

Three primary findings emerge from these four studies. First, we consistently find a negative relationship between ethical leadership and employee moral disengagement. This supports our primary hypothesis: leader behavior is associated with how employees construe decisions with ethical import. Our manipulation of ethical leadership and its resulting effects provide confidence that ethical leadership has a direct causal influence over employee moral disengagement.

In addition, this finding was consistent in both American and Chinese work contexts, suggesting the effect is not culturally bound.

Second, we also found evidence across all four studies that moral disengagement functions as a mechanism to explain the relationship between ethical leadership and employee unethical decisions and behaviors. Again, this result was consistent across time- and respondent-separated field studies and an experiment, in American and Chinese organizations, and using different measures of our primary constructs, providing important assurance of the generalizability of our findings and bolstering our confidence that moral disengagement is an important, unique, and robust mechanism to explain ethical leaders’ positive effects within their organizations.

Finally, we found persistent evidence that the centrality of an employee’s moral identity plays a key role in the relationship between ethical leadership and employee unethical decisions and behavior (through moral disengagement). However, the nature of this moderated relationship varied across studies.
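
To make the statistical structure behind these findings easier to picture, here is a minimal, hypothetical sketch of a moderated mediation check on simulated data. It is not the authors' analysis: the variable names, coefficients, and the simple two-stage regression approach are illustrative assumptions only.

```python
# Illustrative only: a toy moderated mediation pattern on simulated data.
# ethical_leadership -> moral_disengagement -> misconduct, with the first
# path moderated by moral_identity (a "saving grace" shape is simulated here).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

ethical_leadership = rng.normal(size=n)
moral_identity = rng.normal(size=n)

# Mediator: leadership reduces disengagement, most strongly (in this toy
# simulation) for employees with a weak moral identity.
moral_disengagement = (-0.4 * ethical_leadership
                       + 0.2 * ethical_leadership * moral_identity
                       + rng.normal(size=n))

# Outcome: disengagement, not leadership directly, drives misconduct here.
misconduct = 0.5 * moral_disengagement + rng.normal(size=n)

df = pd.DataFrame({"el": ethical_leadership, "mi": moral_identity,
                   "md": moral_disengagement, "y": misconduct})

# Stage 1: is the leadership -> disengagement path moderated by moral identity?
stage1 = smf.ols("md ~ el * mi", data=df).fit()
# Stage 2: does disengagement carry the effect through to misconduct?
stage2 = smf.ols("y ~ md + el", data=df).fit()

print(stage1.params)  # negative el coefficient, nonzero el:mi interaction
print(stage2.params)  # positive md coefficient, near-zero direct el effect
```

Published analyses of this kind typically use dedicated moderated mediation procedures with bootstrapped indirect effects rather than two separate regressions; the sketch only shows how the three pieces of the argument (first-stage moderation, mediation, a weak direct effect) fit together.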

Friday, August 10, 2018

SAS officers given lessons in ‘morality’

Paul Maley
The Australian
Originally posted July 9, 2018

SAS officers are being given additional training in ethics, morality and courage in leadership as the army braces itself for a potentially damning report expected to find that a small number of troops may have committed war crimes during the decade-long fight in Afghanistan.

With the Inspector-General of the Australian Defence Force due within months to hand down his report into alleged battlefield atrocities committed by Diggers, The Australian can reveal that the SAS Regiment has been quietly instituting a series of reforms ahead of the findings.

The changes to special forces training reflect a widely held view within the army that any alleged misconduct committed by Australian troops was in part the result of a failure of leadership, as well as the transgression of individual soldiers.

Many of the reforms are focused on strengthening operational leadership and regimental culture, while others are designed to help special operations officers make ethical decisions even under the most challenging conditions.

Wednesday, February 28, 2018

Can scientists agree on a code of ethics?

David Ryan Polgar
BigThink.com
Originally published January 30, 2018

Here is an excerpt:

Regarding the motivation for developing this Code of Ethics, Hug mentioned the threat of reduced credibility of research if the standards seem too loose. She mentioned the pressure that many young scientists face in being prolific with research, alluding to the tension between quantity and quality. "We want research to remain credible because we want it to have an impact on policymakers, research being turned into action." One of Hug's goals in presenting the Code of Ethics, she said, was to start having various research institutions endorse the document and have those institutions distribute it within their networks.

“All these goals will conflict with each other," said Jodi Halpern, referring to the issues that may get in the way of adopting a code of ethics for scientists. "People need rigorous education in ethical reasoning, which is just as rigorous as science education...what I’d rather have as a requirement, if I’d like to put teeth anywhere. I’d like to have every doctoral student not just have one of those superficial IRB fake compliance courses, but I’d like to have them have to pass a rigorous exam showing how they would deal with certain ethical dilemmas. And everybody who will be the head of a lab someday will have really learned how to do that type of thinking.”

The article is here.

Tuesday, January 9, 2018

Dangers of neglecting non-financial conflicts of interest in health and medicine

Wiersma M, Kerridge I, Lipworth W.
Journal of Medical Ethics 
Published Online First: 24 November 2017.
doi: 10.1136/medethics-2017-104530

Abstract

Non-financial interests, and the conflicts of interest that may result from them, are frequently overlooked in biomedicine. This is partly due to the complex and varied nature of these interests, and the limited evidence available regarding their prevalence and impact on biomedical research and clinical practice. We suggest that there are no meaningful conceptual distinctions, and few practical differences, between financial and non-financial conflicts of interest, and accordingly, that both require careful consideration. Further, a better understanding of the complexities of non-financial conflicts of interest, and their entanglement with financial conflicts of interest, may assist in the development of a more sophisticated approach to all forms of conflicts of interest.

The article is here.

Monday, November 6, 2017

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence

Dom Galeon
Futurism.com
Originally published October 17, 2017

Here is an excerpt:

Crowdsourced Morality

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double-effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, Google parent company Alphabet’s AI DeepMind now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions. These researchers believe that aggregating the collective moral views of a crowd on various issues — like the Moral Machine does with self-driving cars — to create this framework would result in a system that’s better than one built by an individual.
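
As a purely illustrative sketch of what "aggregating the collective moral views of a crowd" can mean in practice (this is not the Duke framework or the Moral Machine's actual method; the dilemmas and votes below are made up), one could pool individual judgments on each dilemma by simple majority:

```python
# Toy illustration: pool many individual judgments on a set of dilemmas
# into a single crowd verdict by simple majority vote.
from collections import Counter

# Hypothetical responses: each list holds one label per respondent.
responses = {
    "swerve_to_spare_five_pedestrians": ["swerve", "swerve", "stay", "swerve", "stay"],
    "always_prioritize_passengers": ["no", "no", "yes", "no", "no"],
}

def crowd_verdict(votes):
    """Return the majority choice and its share of the vote."""
    counts = Counter(votes)
    choice, n = counts.most_common(1)[0]
    return choice, n / len(votes)

for dilemma, votes in responses.items():
    choice, share = crowd_verdict(votes)
    print(f"{dilemma}: {choice} ({share:.0%} agreement)")
```

Real proposals are considerably more sophisticated (weighting respondents, modeling disagreement, correcting for sampling bias), and as the Patrick Lin piece excerpted above notes, even a perfect aggregate remains descriptive: it records what people prefer, not what is right.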

The article is here.

Wednesday, May 17, 2017

Where did Nazi doctors learn their ethics? From a textbook

Michael Cook
BioEdge.org
Originally posted April 29, 2017

German medicine under Hitler resulted in so many horrors – eugenics, human experimentation, forced sterilization, involuntary euthanasia, mass murder – that there is a temptation to say that “Nazi doctors had no ethics”.

However, according to an article in the Annals of Internal Medicine by Florian Bruns and Tessa Chelouche (from Germany and Israel respectively), this was not the case at all. In fact, medical ethics was an important part of the medical curriculum between 1939 and 1945. Nazi officials established lectureships in every medical school in Germany for a subject called “Medical Law and Professional Studies” (MLPS).

There was no lack of ethics. It was just the wrong kind of ethics.

(cut)

It is important to realize that ethical reasoning can be corrupted and that teaching ethics is, in itself, no guarantee of the moral integrity of physicians.

The article is here.

Tuesday, May 16, 2017

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits – like computers do.

The article is here.

Tuesday, August 9, 2016

The Effects of Victim Anonymity on Unethical Behavior

Yam, K.C. & Reynolds, S.J.
J Bus Ethics (2016) 136: 13.
doi:10.1007/s10551-014-2367-5

Abstract

We theorize that victim anonymity is an important factor in ethical decision making, such that actors engage in more self-interested and unethical behaviors toward anonymous victims than they do toward identifiable victims. Three experiments provided empirical support for this argument. In Study 1, participants withheld more life-saving products from anonymous than from identifiable victims. In Study 2, participants allocated a sum of payment more unfairly when interacting with an anonymous than with an identifiable partner. Finally, in Study 3, participants cheated more from an anonymous than from an identifiable person. Anticipated guilt fully mediated these effects in all three studies. Taken together, our research suggests that anonymous victims may be more likely to incur unethical treatment, which could explain many unethical business behaviors.

The article is here.