Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Dilemmas.

Saturday, April 15, 2023

Resolving content moderation dilemmas between free speech and harmful misinformation

Kozyreva, A., Herzog, S. M., et al. (2023). 
Proceedings of the National Academy of Sciences, 120(7).
https://doi.org/10.1073/pnas.2210666120

Abstract

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.

Significance

Content moderation of online speech is a moral minefield, especially when two key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. Currently, these decisions are made without any knowledge of how people would approach them. In our study, we systematically varied factors that could influence moral judgments and found that despite significant differences along political lines, most US citizens preferred quashing harmful misinformation over protecting free speech. Furthermore, people were more likely to remove posts and suspend accounts if the consequences of the misinformation were severe or if it was a repeated offense. Our results can inform the design of transparent, consistent rules for content moderation that the general public accepts as legitimate.

Discussion

Content moderation is controversial and consequential. Regulators are reluctant to restrict harmful but legal content such as misinformation, thereby leaving platforms to decide what content to allow and what to ban. At the heart of policy approaches to online content moderation are trade-offs between fundamental values such as freedom of expression and the protection of public health. In our investigation of which aspects of content moderation dilemmas affect people’s choices about these trade-offs and what impact individual attitudes have on these decisions, we found that respondents’ willingness to remove posts or to suspend an account increased with the severity of the consequences of misinformation and whether the account had previously posted misinformation. The topic of the misinformation also mattered—climate change denial was acted on the least, whereas Holocaust denial and election denial were acted on more often, closely followed by antivaccination content. In contrast, features of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions. In sum, the individual characteristics of those who spread misinformation mattered little, whereas the amount of harm, repeated offenses, and type of content mattered the most.
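
To make the conjoint setup concrete, here is a minimal analysis sketch. It is not the authors' code or data: the attribute names, effect sizes, and simulated responses are all invented. It only shows how independently randomized post attributes let a simple difference in means estimate each attribute's effect on removal decisions.

```python
# Minimal sketch of estimating attribute effects in a randomized conjoint
# design, in the spirit of the study. All attribute names, effect sizes,
# and responses are hypothetical simulations, not the authors' materials.
import numpy as np

rng = np.random.default_rng(0)
n = 2564  # sample size matching the paper's N

# Independently randomized vignette attributes (binary for simplicity).
severe = rng.integers(0, 2, n)          # severe vs. minor consequences
repeat_offense = rng.integers(0, 2, n)  # repeated vs. first offense
many_followers = rng.integers(0, 2, n)  # large vs. small account

# Hypothetical decision model: harm and repetition matter, followers don't.
logit = -0.3 + 1.2 * severe + 0.8 * repeat_offense + 0.02 * many_followers
p_remove = 1 / (1 + np.exp(-logit))
removed = rng.random(n) < p_remove

# Because attributes are independently randomized, a difference in means
# estimates each attribute's average marginal component effect (AMCE).
for name, x in [("severity", severe),
                ("repeat offense", repeat_offense),
                ("many followers", many_followers)]:
    amce = removed[x == 1].mean() - removed[x == 0].mean()
    print(f"AMCE of {name}: {amce:+.3f}")
```

Under this toy model, severity and repetition show large positive effects while follower count shows essentially none, mirroring the pattern the paper reports.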

Thursday, January 26, 2023

The AI Ethicist's Dirty Hands Problem

H. S. Sætra, M. Coeckelbergh, & J. Danaher
Communications of the ACM, January 2023, 
Vol. 66 No. 1, Pages 39-41

Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoid reliance on Big Tech.

The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?

The AI ethicist’s dilemma emerges when an ethicist must consider how their success in communicating an identified challenge is associated with a high risk of decreasing the chances of successfully addressing the challenge. This dilemma occurs in situations in which one’s goals are seemingly best achieved by supporting that which one wishes to correct and/or practicing the opposite of that which one preaches.

(cut)

The Need for More than AI Ethics

Our analysis of the ethicist’s dilemma shows why close ties with Big Tech can be detrimental for the ethicist seeking remedies for AI-related problems. It is important for ethicists, and computer scientists in general, to be aware of their links to the sources of ethical challenges related to AI. One useful exercise would be to carefully examine what could happen if they attempted to challenge the actors with whom they are aligned. Such actions could include attempts to report unfortunate implications of the company’s activities internally, but also publicly, as Gebru did. Would such actions be met with active resistance, with inaction, or even with straightforward sanctions? Such an exercise will reveal whether or not the ethicist feels free to openly and honestly express concerns about the technology with which they work. The exercise could be important, but as we have argued, these individuals are not necessarily positioned to achieve fundamental change in this system.

In response, we suggest the role of government is key to balancing the power the tech companies have through employment, funding, and their control of modern digital infrastructure. Some will rightly argue that political power is also dangerous. But so is the power of technology and unbridled innovation, and private corporations are central sources of these dangers. We therefore argue that private power must be effectively bridled by the power of government. This is not a new argument, and it is in fact widely accepted.

Saturday, November 13, 2021

Moral behavior in games: A review and call for additional research

E. Clarkson
New Ideas in Psychology
Volume 64, January 2022, 100912

Abstract

The current review has been completed with several specific aims. First, it seeks to acknowledge and detail a new and growing body of research that associates moral judgments with behavior in social dilemmas and economic games. Second, it seeks to address how the study of moral behavior has advantages over past research that measured morality exclusively by asking about moral judgment or belief. In an analysis of these advantages, it is argued that additional research associating moral judgments with behavior is better equipped to answer debates within the field, such as whether sacrificial judgments do reflect a concern for the greater good and whether utilitarianism (or other moral theories) is better suited to solve certain collective action problems (like tragedies of the commons). To this end, future researchers should use methods that require participants to make decisions with real-world behavioral consequences.

Highlights

• Prior work has long investigated moral judgments in hypothetical scenarios.

• Arguments that debate the validity of this method are reviewed.

• New research is investigating the association between moral judgments and behavior.

• Future study should continue and broaden these investigations to new moral theories.
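
As a rough illustration of the collective action problems the review invokes, here is a toy tragedy-of-the-commons payoff calculation. All numbers are invented; the point is only the incentive structure such games operationalize.

```python
# Illustrative tragedy-of-the-commons payoffs (numbers are hypothetical).
# Each of 5 herders grazes 1 (restrained) or 2 (greedy) animals; the
# pasture's value per animal falls as total grazing rises.

def payoff(my_animals: int, total_animals: int) -> float:
    value_per_animal = max(0.0, 12 - total_animals)  # congestion effect
    return my_animals * value_per_animal

n_others = 4
for others in (1, 2):        # what each of the other herders does
    for mine in (1, 2):      # my choice
        total = mine + n_others * others
        print(f"others graze {others}, I graze {mine}: "
              f"payoff {payoff(mine, total):.0f}")
```

Grazing two animals pays better individually whatever the others do, yet universal restraint yields 7 per herder versus 4 under universal greed. The gap between what people judge right and what they actually do in such games is exactly what behavioral methods can measure.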


Thursday, April 16, 2020

How To Move From Data Privacy To Data Ethics

Thomas Walle
forbes.com
Originally posted 11 March 20


Here is an excerpt:

Data Ethics Is Up To Each And Every Company

Data ethics, however, is more nuanced and complicated. It's up to each company to decide what use cases their collected data should support or not. There are no federal or state laws related to data ethics, and there are no government-owned bodies that will penalize the ones that cross the ethical boundaries of how data should and should not be used.

However, in the growing data industry, which is composed of those helping companies and individuals to make better decisions, there’s a constant influx of new data being generated and collected, such as health data, car driving data and location data, to name a few. These data sets and insights are new to the market, and I believe we will start to see the first wave of forward-looking data companies taking a clear stance and drawing their own ethical guidelines.

These are companies that acknowledge the responsibility they have when holding such information and want to see it be used for the right use cases -- to make people’s lives better, easier and safer. So, if you agree that data ethics is important and want to be ahead of the curve, what is there to do?

Creating A Set Of Ethical Guidelines

My recommendation for any data company is to define a set of core ethical guidelines your company should adhere to. To accomplish this, follow these steps:

1. Define Your Guidelines

The guidelines should be created by inviting different parts of your organization to contribute, to get a balanced and mixed view of what the company sees as acceptable use cases for its insights and data. In my experience, including different departments, such as commercial and engineering, and people from different nationalities and geographies, if your company operates in multiple markets, is crucial to getting a nuanced and healthy view of what the company, its employees, and its stakeholders see as ethically acceptable.
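
As a hypothetical sketch of where such guidelines can lead, a company might encode the agreed-upon use cases in a machine-checkable policy so that new requests get screened consistently. The data types and rules below are invented for illustration and are not from the article.

```python
# Hypothetical sketch: agreed data-ethics guidelines encoded as a
# machine-checkable policy, so proposed use cases are screened
# consistently. All categories and rules are illustrative inventions.
ALLOWED_USE_CASES = {
    "location": {"city planning", "retail footfall analysis"},
    "health": {"aggregate public-health research"},
}
PROHIBITED_USE_CASES = {
    "location": {"individual surveillance"},
    "health": {"insurance risk scoring of individuals"},
}

def review(data_type: str, use_case: str) -> str:
    if use_case in PROHIBITED_USE_CASES.get(data_type, set()):
        return "reject"
    if use_case in ALLOWED_USE_CASES.get(data_type, set()):
        return "approve"
    return "escalate to ethics review"  # new situations get human judgment

print(review("location", "retail footfall analysis"))  # approve
print(review("location", "individual surveillance"))   # reject
print(review("health", "marketing segmentation"))      # escalate to ethics review
```

The escalation default matters: anything the guidelines do not anticipate goes to a human reviewer rather than being silently approved.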

The info is here.

Monday, April 6, 2020

Life and death decisions of autonomous vehicles

Y. E. Bigman and K. Gray
Nature
Originally published 4 March 20

How should self-driving cars make decisions when human lives hang in the balance? The Moral Machine experiment (MME) suggests that people want autonomous vehicles (AVs) to treat different human lives unequally, preferentially killing some people (for example, men, the old and the poor) over others (for example, women, the young and the rich). Our results challenge this idea, revealing that this apparent preference for inequality is driven by the specific ‘trolley-type’ paradigm used by the MME. Multiple studies with a revised paradigm reveal that people overwhelmingly want autonomous vehicles to treat different human lives equally in life and death situations, ignoring gender, age and status—a preference consistent with a general desire for equality.

The large-scale adoption of autonomous vehicles raises ethical challenges because autonomous vehicles may sometimes have to decide between killing one person or another. The MME seeks to reveal people’s preferences in these situations and many of these revealed preferences, such as ‘save more people over fewer’ and ‘kill by inaction over action’ are consistent with preferences documented in previous research.

However, the MME also concludes that people want autonomous vehicles to make decisions about who to kill on the basis of personal features, including physical fitness, age, status and gender (for example, saving women and killing men). This conclusion contradicts well-documented ethical preferences for equal treatment across demographic features and identities, a preference enshrined in the US Constitution, the United Nations Universal Declaration of Human Rights and in the Ethical Guideline 9 of the German Ethics Code for Automated and Connected Driving.
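
The preference the authors report has a simple computational reading: a decision rule that counts lives at risk and deliberately ignores demographic features. The sketch below is our illustration of that reading, not the authors' model; the class fields and scenario are invented.

```python
# Illustrative decision rule consistent with the reported preference:
# the vehicle compares outcomes only by headcount and ignores
# demographic attributes entirely. Not the authors' model.
from dataclasses import dataclass

@dataclass
class Person:
    age: int       # demographic features are present in the data...
    gender: str    # ...but deliberately unused by the decision rule
    status: str

def choose_path(path_a: list[Person], path_b: list[Person]) -> str:
    """Steer onto the path that puts fewer people at risk; ties keep A."""
    return "A" if len(path_a) <= len(path_b) else "B"

# The two groups differ demographically, but only the counts matter here.
group_a = [Person(30, "male", "low status"), Person(70, "female", "high status")]
group_b = [Person(8, "female", "high status")]
print(choose_path(group_a, group_b))  # "B": one person at risk rather than two
```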

The info is here.

Friday, March 27, 2020

Coronavirus and ethics: 'Act so that most people survive'

Georg Marckmann
dw.com
Originally posted 24 March 20

Here is an excerpt:

Triage, a word used in military medicine, means classification. What groups do you classify the patients into?

There are several categories. Critically ill patients are treated immediately, the treatment of seriously ill patients is delayed, and patients who are slightly ill are treated later. Patients with no chance of survival receive purely palliative care.

The crucial element of situations involving a large number of sick people that we can no longer care for adequately is that we have to switch from a patient-centered approach to a group- or population-oriented approach. In a patient-centered approach, we try to adjust treatment as best we can to ensure the well-being of the individual patient and accommodate their wishes.

In a group-centered approach, we try to ensure that the incidence of illness and death within a population group is as low as possible. This places a strain on those making these decisions, because they're not used to it.

As a basic rule, we try to act in such a way that the largest number of people survive, because that is in the public interest.
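
Marckmann's group-centered rule can be expressed as a toy allocation problem: with fewer treatment slots than patients, assign them to maximize expected survivors. The sketch below is a deliberately simplified illustration, not a clinical triage protocol; the survival probabilities are invented.

```python
# Toy illustration of group-centered allocation: with fewer ICU beds than
# patients, assign beds to maximize expected survivors. For exposition
# only, not a clinical triage protocol; all numbers are invented.
patients = [
    ("A", 0.9),  # (id, estimated survival probability with treatment)
    ("B", 0.6),
    ("C", 0.4),
    ("D", 0.2),
]
beds = 2

# Greedy rule: treat those most likely to benefit. With one bed per
# patient and independent outcomes, this maximizes expected survivors.
treated = sorted(patients, key=lambda p: p[1], reverse=True)[:beds]
expected = sum(p for _, p in treated)
print([pid for pid, _ in treated], f"expected survivors: {expected:.1f}")
# ['A', 'B'] expected survivors: 1.5
```

The contrast with the patient-centered approach is visible in the code: no individual patient's wishes enter the objective, only the group-level expected count, which is precisely the shift Marckmann says decision-makers find so straining.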

The info is here.

Saturday, March 21, 2020

Moral Courage in the Coronavirus: A Guide for Medical Providers and Institutions

Holly Tabor & Alyssa Burgart
Just Security
Originally published 18 March 20

Times of crisis generate extreme moral dilemmas: situations we can’t begin to imagine, unthinkable choices emerging between options that all seem bad, each with harms and negative outcomes. During the COVID-19 pandemic, these moral dilemmas are experienced across the healthcare landscape — from bedside encounters to executive suites of hospitals and health systems. Who gets put on a ventilator? Who transitions to comfort care? What does end of life care look like when high flow oxygen can’t be used because of viral spread? Who gets a hospital bed? How do we choose which sick person, with or without COVID-19, gets treated? Which patients should be enrolled in research? How do we support patients when their families cannot visit them? We will turn away people who, in any other circumstance in a U.S. medical facility, we would have been obliged to treat. We will second guess these decisions, and perhaps be haunted by them forever. We only know one thing for sure: people will suffer and die regardless of which decisions we make.

How should we confront these intense challenges? Many institutions are doing what they can to provide guidance. But “guidelines” by design are intended to provide broad parameters to aid in decision making, and therefore rarely address the exact situations clinicians face. Certainly no guidelines can reduce the pain of having to actually carry out recommendations that affect an individual patient. For other decisions, front-line providers will have no guidance at all, or will have ill-informed or even potentially harmful guidance. In perhaps the worst-case scenario, they may even be encouraged to keep quiet about their concerns or observations rather than raise them to others’ attention.

As bioethicists, we know that moral dilemmas require personal moral courage, that is, the ability to take action for moral reasons despite the risk of adverse consequences. We have already seen several stark examples of moral courage from doctors, nurses, and researchers in this outbreak. In late December in Wuhan, China, a 34-year-old ophthalmologist, Dr. Li Wenliang, raised the alarm in a chat group of doctors about a new virus he was seeing. He was subsequently punished by the Chinese government. He continued to share his story via social media, even from his hospital bed, and was repeatedly censored. Dr. Li died of the virus on February 7.

The info is here.

Sunday, February 9, 2020

The Ethical Practice of Psychotherapy: Clearly Within Our Reach

Jeff Barnett
Psychotherapy, 56(4), 431-440
http://dx.doi.org/10.1037/pst0000272

Abstract

This introductory article to the special section on ethics in psychotherapy highlights the challenges and ethical dilemmas psychotherapists regularly face throughout their careers, and the limits of the American Psychological Association Ethics Code in offering clear guidance for how specifically to respond to each of these situations. Reasons for the Ethics Code’s naturally occurring limitations are shared. The role of ethical decision-making, the use of multiple sources of guidance, and the role of consultation with colleagues to augment and support the psychotherapist’s professional judgment are illustrated. Representative ethics challenges in a range of areas of practice are described, with particular attention given to tele-mental health and social media, interprofessional practice and collaboration with medical professionals, and self-care and the promotion of wellness. Key recommendations are shared to promote ethical conduct and to resolve commonly occurring ethical dilemmas in each of these areas of psychotherapy practice. Each of the six articles that follow in this special section on ethics in psychotherapy is introduced, and its main points are summarized.

Here is an excerpt:

Yet, the ethical practice of psychotherapy is complex and multifaceted. This is true as well for psychotherapy research, the supervision of psychotherapy by trainees, and all other professional roles in which psychotherapists may serve. Psychotherapists engage in complex and challenging work in a wide range of practice settings, with a diverse range of clients/patients with highly individualized treatment needs, histories, and circumstances, using a plethora of possible treatment techniques and strategies. Each possible combination of these factors can yield a range of complexities, often presenting psychotherapists with challenges and situations that may not have been anticipated and that tax the psychotherapist’s ability to choose the correct or most appropriate course of action. In such circumstances, ethical dilemmas (situations in which no right or correct course of action is readily apparent and where multiple factors may influence or impact one’s decision on how to proceed) are common. Knowing how to respond to these challenges and dilemmas is of paramount importance for psychotherapists so that we may fulfill our overarching obligations to our clients and all others we serve in our professional roles.

Wednesday, January 29, 2020

Why morals matter in foreign policy

Joseph Nye
aspistrategist.org.au
Originally published 10 Jan 20

Here is the conclusion:

Good moral reasoning should be three-dimensional, weighing and balancing intentions, consequences and means. A foreign policy should be judged accordingly. Moreover, a moral foreign policy must consider consequences such as maintaining an institutional order that encourages moral interests, in addition to particular newsworthy actions such as helping a dissident or a persecuted group in another country. And it’s important to include the ethical consequences of ‘nonactions’, such as President Harry S. Truman’s willingness to accept stalemate and domestic political punishment during the Korean War rather than follow General Douglas MacArthur’s recommendation to use nuclear weapons. As Sherlock Holmes famously noted, much can be learned from a dog that doesn’t bark.

It’s pointless to argue that ethics will play no role in the foreign policy debates that await this year. We should acknowledge that we always use moral reasoning to judge foreign policy, and we should learn to do it better.

The info is here.

Tuesday, December 17, 2019

We Might Soon Build AI Who Deserve Rights

Eric Schwitzgebel
Splintered Mind Blog
From a Talk at Notre Dame
Originally posted 17 Nov 19

Abstract

Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.

(cut)

But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.
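
One way to see the force of the dilemma is as a decision under uncertainty: the expected moral cost of each policy depends on the unknown probability that the machines are conscious. The numbers below are invented purely to illustrate the structure; nothing in Schwitzgebel's post assigns such values.

```python
# Toy framing of the dilemma as decision-making under uncertainty.
# All costs and probabilities are invented for illustration.
p_conscious = 0.3  # credence that the ambiguous AI is conscious

cost_deny_if_conscious = 100.0  # moral cost akin to slavery and murder
cost_grant_if_not = 20.0        # sacrificing human interests for empty shells

expected_cost_deny = p_conscious * cost_deny_if_conscious
expected_cost_grant = (1 - p_conscious) * cost_grant_if_not

print(f"deny rights:  expected moral cost {expected_cost_deny:.0f}")
print(f"grant rights: expected moral cost {expected_cost_grant:.0f}")
```

With these made-up numbers, granting rights is the less risky policy. But the post's point is sharper: without a consensus theory of consciousness, p_conscious itself is unknown, so neither expected cost can actually be computed.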

The blog post is here.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.
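
The core comparison in these experiments is a difference in utilitarian-choice rates between a veil-of-ignorance condition and a conventional control. Here is a minimal analysis sketch with simulated data; the rates are invented, and this is not the authors' code or results.

```python
# Minimal sketch of the core comparison: utilitarian-choice rates in a
# veil-of-ignorance (VOI) condition vs. a conventional control, tested
# with a two-proportion z-test. Simulated data with invented rates.
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical participants per condition
voi = rng.random(n) < 0.65      # utilitarian choices after VOI reasoning
control = rng.random(n) < 0.50  # utilitarian choices, conventional version

p1, p2 = voi.mean(), control.mean()
p_pool = (voi.sum() + control.sum()) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

print(f"VOI: {p1:.2f}, control: {p2:.2f}, z = {z:.2f}, p = {p_value:.4g}")
```

The paper's actual analyses also rule out anchoring, probabilistic reasoning, and generic perspective-taking as explanations; this sketch covers only the basic condition contrast.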

Friday, October 25, 2019

Deciding Versus Reacting: Conceptions of Moral Judgment and the Reason-Affect Debate

Monin, B., Pizarro, D. A., & Beer, J. S. (2007).
Review of General Psychology, 11(2), 99–111.
https://doi.org/10.1037/1089-2680.11.2.99

Abstract

Recent approaches to moral judgment have typically pitted emotion against reason. In an effort to move beyond this debate, we propose that authors presenting diverging models are considering quite different prototypical situations: those focusing on the resolution of complex dilemmas conclude that morality involves sophisticated reasoning, whereas those studying reactions to shocking moral violations find that morality involves quick, affect-laden processes. We articulate these diverging dominant approaches and consider three directions for future research (moral temptation, moral self-image, and lay understandings of morality) that we propose have not received sufficient attention as a result of the focus on these two prototypical situations within moral psychology.

Concluding Thoughts

Recent theorizing on the psychology of moral decision making has pitted deliberative reasoning against quick affect-laden intuitions. In this article, we propose a resolution to this tension by arguing that it results from a choice of different prototypical situations: advocates of the reasoning approach have focused on sophisticated dilemmas, whereas advocates of the intuition/emotion approach have focused on reactions to other people’s moral infractions. Arbitrarily choosing one or the other as the typical moral situation has a significant impact on one’s characterization of moral judgment.

Thursday, August 29, 2019

Why Businesses Need Ethics to Survive Disruption

Mathew Donald
HR Technologist
Originally posted July 29, 2019

Here is an excerpt:

Using Ethics as the Guideline

An alternative model for an organization in disruption may be to connect staff and their organization to society's values. Whilst these standards may not all be written down, staff will generally know right from wrong, as they live in harmony with the broad rules of society. People do not normally steal, drive on the wrong side of the road, or take advantage of the poor. Whilst written laws may prevail and guide society, it is clear that most people follow unwritten societal values. People make decisions on moral grounds daily, each based on their beliefs, refraining from actions that may be frowned upon by their friends and neighbors.

Ethics may be a key ingredient to add to your organization in a disruptive environment, as it may guide your staff through new situations without the necessity for a written rule or government law. It would seem that ethics based on a sense of fair play, not taking undue advantage, not overusing power and control, and alignment with everyday societal values may address some of this heightened risk in the disruption. Once the set of ethics is agreed upon and imbibed by the staff, it may be possible for them to review new transactions, new situations, and potential opportunities without necessarily needing to see written guidelines.

The info is here.

Friday, July 5, 2019

Ethical considerations in the use of Pernkopf's Atlas of Anatomy: A surgical case study

Yee, A., Zubovic, E., et al.
Surgery, May 2019, Volume 165, Issue 5, Pages 860–867

Abstract

The use of Eduard Pernkopf's anatomic atlas presents ethical challenges for modern surgery concerning the use of data resulting from abusive scientific work. In the 1980s and 1990s, historic investigations revealed that Pernkopf was an active National Socialist (Nazi) functionary at the University of Vienna and that among the bodies depicted in the atlas were those of Nazi victims. Since then, discussions persist concerning the ethicality of the continued use of the atlas, because some surgeons still rely on information from this anatomic resource for procedural planning. The ethical implications relevant to the use of this atlas in the care of surgical patients have not been discussed in detail. Based on a recapitulation of the main arguments from the historic controversy surrounding the use of Pernkopf's atlas, this study presents an actual patient case to illustrate some of the ethical considerations relevant to the decision of whether to use the atlas in surgery. This investigation aims to provide a historic and ethical framework for questions concerning the use of the Pernkopf atlas in the management of anatomically complex and difficult surgical cases, with special attention to implications for medical ethics drawn from Jewish law.

The info is here.

Tuesday, February 26, 2019

Strengthening Our Science: AGU Launches Ethics and Equity Center

Robyn Bell
EOS.org
Originally published February 14, 2019

In the next century, our species will face a multitude of challenges. A diverse and inclusive community of researchers ready to lead the way is essential to solving these global-scale challenges. While Earth and space science has made many positive contributions to society over the past century, our community has suffered from a lack of diversity and a culture that tolerates unacceptable and divisive conduct. Bias, harassment, and discrimination create a hostile work climate, undermining the entire global scientific enterprise and its ability to benefit humanity.

As we considered how our Centennial can launch the next century of amazing Earth and space science, we focused on working with our community to build diverse, inclusive, and ethical workplaces where all participants are encouraged to develop their full potential. That’s why I’m so proud to announce the launch of the AGU Ethics and Equity Center, a new hub for comprehensive resources and tools designed to support our community across a range of topics linked to ethics and workplace excellence. The Center will provide resources to individual researchers, students, department heads, and institutional leaders. These resources are designed to help share and promote leading practices on issues ranging from building inclusive environments, to scientific publications and data management, to combating harassment, to example codes of conduct. AGU plans to transform our culture in scientific institutions so we can achieve inclusive excellence.

The info is here.

Monday, August 27, 2018

It’s impossible to lead a totally ethical life—but it’s fun to try

Ephrat Livni
Quartz.com
Originally posted July 15, 2018

Here is an excerpt:

“As much as we’d love to believe bad ethics come from bad people and good ethics come from the rest of us, our everyday choices such as cutting someone off on the freeway, fudging on our taxes, taking credit for something someone else did—these are all ethical choices,” he tells Quartz. We don’t think of our individual acts as having major implications, but those are the things we can control.

In his research, he’s found that people are outraged by ethical abstractions and don’t think a lot about simple things they might be doing wrong. “When people list unethical behavior, they often cite the illegal actions of corporations or the heinous decisions of politicians–these are strong examples of a growing disregard for ethics, but what’s missing on the list are the smaller and far more numerous everyday choices we make,” Gilbert says.

He suggests using ethics as philosophical and existential guardrails that guide us as we try to climb the rungs of the moral ladder. By extending the consideration we give our actions to an ever-wider group, we succeed in being more ethical, if not perfectly moral.

The information is here.

Tuesday, June 19, 2018

Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas

Dries H. Bostyn, Sybren Sevenhant, and Arne Roets
Psychological Science 
First Published May 9, 2018

Abstract

Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.

The research is here.

Wednesday, May 30, 2018

Reining It In: Making Ethical Decisions in a Forensic Practice

Donna M. Veraldi and Lorna Veraldi
A Paper Presented to American College of Forensic Psychology
34th Annual Symposium, San Diego, CA

Here is an excerpt:

Ethical dilemmas sometimes require making difficult choices among competing ethical principles and values. This presentation will discuss ethical dilemmas arising from the use of coercion and deception in forensic practice. In a forensic practice, the choice is not as simple as “do no harm” or “tell the truth.” What is and is not acceptable in terms of using various forms of pressure on individuals or of assisting agencies that put pressure on individuals? How much information should forensic psychologists share with individuals about evaluation techniques? What does informed consent mean in the context of a forensic practice where many of the individuals with whom we interact are not there by choice?

The information is here.

Sunday, April 22, 2018

What is the ethics of ageing?

Christopher Simon Wareham
Journal of Medical Ethics 2018;44:128-132.

Abstract

Applied ethics is home to numerous productive subfields such as procreative ethics, intergenerational ethics and environmental ethics. By contrast, there is far less ethical work on ageing, and there is no boundary work that attempts to set the scope for ‘ageing ethics’ or the ‘ethics of ageing’. Yet ageing is a fundamental aspect of life; arguably even more fundamental and ubiquitous than procreation. To remedy this situation, I examine conceptions of what the ethics of ageing might mean and argue that these conceptions fail to capture the requirements of the desired subfield. The key reasons for this are, first, that they view ageing as something that happens only when one is old, thereby ignoring the fact that ageing is a process to which we are all subject, and second that the ageing person is treated as an object in ethical discourse rather than as its subject. In response to these shortcomings I put forward a better conception, one which places the ageing person at the centre of ethical analysis, has relevance not just for the elderly and provides a rich yet workable scope. While clarifying and justifying the conceptual boundaries of the subfield, the proposed scope pleasingly broadens the ethics of ageing beyond common negative associations with ageing.

The article is here.

Monday, February 19, 2018

Antecedents and Consequences of Medical Students’ Moral Decision Making during Professionalism Dilemmas

Lynn Monrouxe, Malissa Shaw, and Charlotte Rees
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 568-577.

Abstract

Medical students often experience professionalism dilemmas (which differ from ethical dilemmas) wherein students sometimes witness and/or participate in patient safety, dignity, and consent lapses. When faced with such dilemmas, students make moral decisions. If students’ action (or inaction) runs counter to their perceived moral values—often due to organizational constraints or power hierarchies—they can suffer moral distress, burnout, or a desire to leave the profession. If moral transgressions are rationalized as being for the greater good, moral distress can decrease as dilemmas are experienced more frequently (habituation); if no learner benefit is seen, distress can increase with greater exposure to dilemmas (disturbance). We suggest how medical educators can support students’ understandings of ethical dilemmas and facilitate their habits of enacting professionalism: by modeling appropriate resistance behaviors.

Here is an excerpt:

Rather than being a straightforward matter of doing the right thing, medical students’ understandings of morally correct behavior differ from one individual to another. This is partly because moral judgments frequently concern decisions about behaviors that might entail some form of harm to another, and different individuals hold different perspectives about moral trade-offs (i.e., how to decide between two courses of action when the consequences of both have morally undesirable effects). It is partly because the majority of human behavior arises within a person-situation interaction. Indeed, moral “flexibility” suggests that though we are motivated to do the right thing, any moral principle can bring forth a variety of context-dependent moral judgments and decisions. Moral rules and principles are abstract ideas—rather than facts—and these ideas need to be operationalized and applied to specific situations. Each situation will have different affordances highlighting one facet or another of any given moral value. Thus, when faced with morally dubious situations—such as being asked to participate in lapses of patient consent by senior clinicians during workplace learning events—medical students’ subsequent actions (compliance or resistance) differ.

The article is here.