Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Intentionality. Show all posts

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 
2021;47(3):410-425. 
doi:10.1177/0146167220928237

Abstract

People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020

Abstract

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.

Highlights

• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• Deliberating agent earns more blame than implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer.

• Asymmetry in blame vs. praise suggests only the decider is judged as a moral agent.

• Effect is attenuated if decider's job is primarily to implement.

Saturday, October 10, 2020

A Theory of Moral Praise

Anderson, R. A, Crockett, M. J., & Pizarro, D.
Trends in Cognitive Sciences
Volume 24, Issue 9, September 2020, Pages 694-703

Abstract

How do people judge whether someone deserves moral praise for their actions? In contrast to the large literature on moral blame, work on how people attribute praise has, until recently, been scarce. However, there is a growing body of recent work from a variety of subfields in psychology (including social, cognitive, developmental, and consumer) suggesting that moral praise is a fundamentally unique form of moral attribution and not simply the positive moral analogue of blame attributions. A functional perspective helps explain asymmetries in blame and praise: we propose that while blame is primarily for punishment and signaling one’s moral character, praise is primarily for relationship building.

Concluding Remarks

Moral praise, we have argued, is a psychological response that, like other forms of moral judgment, serves a particular functional role in establishing social bonds, encouraging cooperative alliances, and promoting good behavior. Through this lens, seemingly perplexing asymmetries between judgments of blame for immoral acts and judgments of praise for moral acts can be understood as consistent with the relative roles, and associated costs, played by these two kinds of moral judgments. While both blame and praise judgments require that an agent played some causal and intentional role in the act being judged, praise appears to be less sensitive to these features and more sensitive to more general features about an individual’s stable, underlying character traits. In other words, we believe that the growth of studies on moral praise in the past few years demonstrates that, when deciding whether or not doling out praise is justified, individuals seem to care less about how the action was performed and far more about what kind of person performed the action. We suggest that future research on moral attribution should seek to complement the rich literature examining moral blame by examining potentially unique processes engaged in moral praise, guided by an understanding of their differing costs and benefits, as well as their potentially distinct functional roles in social life.

The article is here.

Thursday, October 1, 2020

Intentional Action Without Knowledge

Vekony, R., Mele, A. & Rose, D.
Synthese (2020).

Abstract

In order to be doing something intentionally, must one know that one is doing it? Some philosophers have answered yes. Our aim is to test a version of this knowledge thesis, what we call the Knowledge/Awareness Thesis, or KAT. KAT states that an agent is doing something intentionally only if he knows that he is doing it or is aware that he is doing it. Here, using vignettes featuring skilled action and vignettes featuring habitual action, we provide evidence that, in various scenarios, a majority of non-specialists regard agents as intentionally doing things that the agents do not know they are doing and are not aware of doing. This puts pressure on proponents of KAT and leaves it to them to find a way these results can coexist with KAT.

Conclusion

Our aim was to evaluate KAT empirically. We found that majority responses to our vignettes are at odds with KAT. Our results show that, on an ordinary view of matters, neither knowledge nor awareness of doing something is necessary for doing it intentionally. We tested cases of skilled action and habitual action, and we found that, for both, people ascribed intentionality to an action at an appreciably higher rate than knowledge and awareness.

The research is here.

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Tuesday, August 14, 2018

The developmental and cultural psychology of free will

Tamar Kushnir
Philosophy Compass
Originally published July 12, 2018

Abstract

This paper provides an account of the developmental origins of our belief in free will based on research from a range of ages—infants, preschoolers, older children, and adults—and across cultures. The foundations of free will beliefs are in infants' understanding of intentional action—their ability to use context to infer when agents are free to “do otherwise” and when they are constrained. In early childhood, new knowledge about causes of action leads to new abilities to imagine constraints on action. Moreover, unlike adults, young children tend to view psychological causes (i.e., desires) and social causes (i.e., following rules or group norms, being kind or fair) of action as constraints on free will. But these beliefs change, and also diverge across cultures, corresponding to differences between Eastern and Western philosophies of mind, self, and action. Finally, new evidence shows developmentally early, culturally dependent links between free will beliefs and behavior, in particular when choice‐making requires self‐control.

Here is part of the Conclusion:

I've argued here that free will beliefs are early‐developing and culturally universal, and that the folk psychology of free will involves considering actions in the context of alternative possibilities and constraints on possibility. There are developmental differences in how children reason about the possibility of acting against desires, and there are both developmental and cultural differences in how children consider the social and moral limitations on possibility.  Finally, there is new evidence emerging for developmentally early, culturally moderated links between free will beliefs and willpower, delay of gratification, and self‐regulation.

The article is here.

Tuesday, May 15, 2018

Mens rea ascription, expertise and outcome effects: Professional judges surveyed

Markus Kneer and Sacha Bourgeois-Gironde
Cognition
Volume 169, December 2017, Pages 139-146

Abstract

A coherent practice of mens rea (‘guilty mind’) ascription in criminal law presupposes a concept of mens rea which is insensitive to the moral valence of an action’s outcome. For instance, an assessment of whether an agent harmed another person intentionally should be unaffected by the severity of harm done. Ascriptions of intentionality made by laypeople, however, are subject to a strong outcome bias. As demonstrated by the Knobe effect, a knowingly incurred negative side effect is standardly judged intentional, whereas a positive side effect is not. We report the first empirical investigation into intentionality ascriptions made by professional judges, which finds (i) that professionals are sensitive to the moral valence of outcome type, and (ii) that the worse the outcome, the higher the propensity to ascribe intentionality. The data shows the intentionality ascriptions of professional judges to be inconsistent with the concept of mens rea supposedly at the foundation of criminal law.

Highlights

• The first paper to present empirical data regarding mens rea ascriptions of professional judges.

• Intentionality ascriptions of professional judges manifest the Knobe effect.

• Intentionality ascriptions of judges are also sensitive to severity of outcome.

The research is here.

Wednesday, February 3, 2016

Two Distinct Moral Mechanisms for Ascribing and Denying Intentionality

L. Ngo, M. Kelly, C. G. Coutlee, R. M. Carter, W. Sinnott-Armstrong & S. A. Huettel
Scientific Reports 5, Article number: 17390 (2015)
doi:10.1038/srep17390

Abstract

Philosophers and legal scholars have long theorized about how intentionality serves as a critical input for morality and culpability, but the emerging field of experimental philosophy has revealed a puzzling asymmetry. People judge actions leading to negative consequences as being more intentional than those leading to positive ones. The implications of this asymmetry remain unclear because there is no consensus regarding the underlying mechanism. Based on converging behavioral and neural evidence, we demonstrate that there is no single underlying mechanism. Instead, two distinct mechanisms together generate the asymmetry. Emotion drives ascriptions of intentionality for negative consequences, while the consideration of statistical norms leads to the denial of intentionality for positive consequences. We employ this novel two-mechanism model to illustrate that morality can paradoxically shape judgments of intentionality. This is consequential for mens rea in legal practice and arguments in moral philosophy pertaining to terror bombing, abortion, and euthanasia among others.

The article is here.

Thursday, August 6, 2015

The causal cognition of wrong doing: incest, intentionality, and morality

Rita Astuti and Maurice Bloch
Front. Psychol., 18 February 2015

Abstract

The paper concerns the role of intentionality in reasoning about wrong doing. Anthropologists have claimed that, in certain non-Western societies, people ignore whether an act of wrong doing is committed intentionally or accidentally. To examine this proposition, we look at the case of Madagascar. We start by analyzing how Malagasy people respond to incest, and we find that in this case they do not seem to take intentionality into account: catastrophic consequences follow even if those who commit incest are not aware that they are related as kin; punishment befalls on innocent people; and the whole community is responsible for repairing the damage. However, by looking at how people reason about other types of wrong doing, we show that the role of intentionality is well understood, and that in fact this is so even in the case of incest. We therefore argue that, when people contemplate incest and its consequences, they simultaneously consider two quite different issues: the issue of intentionality and blame, and the much more troubling and dumbfounding issue of what society would be like if incest were to be permitted. This entails such a fundamental attack on kinship and on the very basis of society that issues of intentionality and blame become irrelevant. Using the insights we derive from this Malagasy case study, we re-examine the results of Haidt’s psychological experiment on moral dumbfoundedness, which uses a story about incest between siblings as one of its test scenarios. We suggest that the dumbfoundedness that was documented among North American students may be explained by the same kind of complexity that we found in Madagascar. In light of this, we discuss the methodological limitations of experimental protocols, which are unable to grasp multiple levels of response. We also note the limitations of anthropological methods and the benefits of closer cross-disciplinary collaboration.

The entire article is here.

Monday, June 15, 2015

Understanding ordinary unethical behavior: why people who value morality act immorally

by Francesca Gino
Current Opinion in Behavioral Sciences
Volume 3, June 2015, Pages 107–111

Cheating, deception, organizational misconduct, and many other forms of unethical behavior are among the greatest challenges in today's society. As regularly highlighted by the media, extreme cases and costly scams (e.g., Enron, Bernard Madoff) are common. Yet, even more frequent and pervasive are cases of ‘ordinary’ unethical behavior — unethical actions committed by people who value morality but behave unethically when faced with an opportunity to cheat. A growing body of research in behavioral ethics and moral psychology shows that even good people (i.e., people who care about being moral) can and often do bad things. Examples include cheating on taxes, deceiving in interpersonal relationships, overstating performance and contributions to teamwork, inflating business expense reports, and lying in negotiations.

When considered cumulatively, ordinary unethical behavior causes considerable societal damage. For instance, employee theft causes U.S. companies to lose approximately $52 billion per year [4]. This empirical evidence is striking in light of social–psychological research that, for decades, has robustly shown that people typically value honesty, believe strongly in their own morality, and strive to maintain a positive self-image as moral individuals.

The entire article is here.

Wednesday, December 24, 2014

Don't Execute Schizophrenic Killers

By Sally L. Satel
Bloomberg View
Originally posted December 1, 2014

Is someone who was diagnosed with schizophrenia years before committing murder sane enough to be sentenced to death?

The government thinks so in the case of Scott L. Panetti, 56, who will die on Wednesday by lethal injection in Texas unless Governor Rick Perry stays the execution.

(cut)

This is unjust. It is wrong to execute, even to punish, people who are so floridly psychotic when they commit their crimes that they are incapable of correcting the errors by logic or evidence.

Yet Texas, like many other states, considers a defendant sane as long as he knows, factually, that murder is wrong. Indeed, Panetti’s jury, which was instructed to apply this narrow standard, may have been legally correct to reject his insanity defense because he may have known that the murders were technically wrong.

The entire article is here.

Sunday, November 23, 2014

The Philosophical Implications of the Urge to Urinate

The state of our body affects how we think the world works

by Daniel Yudkin
Scientific American
Originally published November 4, 2014

If one thing’s for sure, it’s that I decided what breakfast cereal to eat this morning. I opened the cupboard, I perused the options, and when I ultimately chose the Honey Bunches of Oats over the Kashi Good Friends, it came from a place of considered judgment, free from external constraints and predetermined laws.

Or did it? This question—about how much people are in charge of their own actions—is among the most central to the human condition. Do we have free will? Are we in control of our destiny? Do we choose the proverbial Honey Bunches of Oats? Or does the cereal—or some other mysterious force in the vast and unknowable universe—choose us?

The entire article is here.

Wednesday, June 18, 2014

Free will seems a matter of mind, not soul

Press Release
Brown University
Originally released May 27, 2014

Across the board, even if they believed in the concept of a soul, people in a new study ascribed free will based on down-to-Earth criteria: Did the actor in question have the capacity to make an intentional and independent choice? The study suggests that while grand metaphysical views of the universe remain common, they have little to do with how people assess each other’s behavior.

“I find it relieving to know that whether you believe in a soul or not, or have a religion or not, or an assumption about how the universe works, that has very little bearing on how you act as a member of the social community,” said Bertram Malle, professor of cognitive, linguistic and psychological sciences at Brown University and senior author of the new study. “In a sense, what unites us across all these assumptions is we see others as intentional beings who can make choices, and we blame them on the basis of that.”

The entire press release is here.

Tuesday, April 1, 2014

The Power of Conscious Intention Proven At Last?

By Neuroskeptic
The Neuroskeptic Blog
Originally published March 15, 2014

Here is an excerpt:

To simplify, one school of thought holds that (at least some of the time), our intentions or plans control our actions. Many people would say that this is what common sense teaches us as well.

But there’s an alternative view, in which our consciously-experienced intentions are not causes of our actions but are actually products of them, being generated after the action has already begun. This view is certainly counterintuitive, and many find it disturbing as it seems to undermine ‘free will’.

That’s the background. Zschorlich and Köhling say that they’ve demonstrated that conscious intentions do exist, prior to motor actions, and that these intentions are accompanied by particular changes in brain activity. They claim to have done this using transcranial magnetic stimulation (TMS), a way of causing a localized modulation of brain electrical activity.

The entire blog post is here.

Friday, March 28, 2014

Human brains 'hard-wired' to link what we see with what we do

University of London
Originally posted March 13, 2014

Summary

Your brain's ability to instantly link what you see with what you do is down to a dedicated information 'highway,' suggests new research. For the first time, researchers have found evidence of a specialized mechanism for spatial self-awareness that combines visual cues with body motion. The newly-discovered system could explain why some schizophrenia patients feel like their actions are controlled by someone else.

The entire story is here.

Sunday, March 16, 2014

The Failure of Social and Moral Intuitions

Edge Videos
HeadCon '13: Part IX
David Pizarro

Today I want to talk a little about our social and moral intuitions and I want to present a case that they're rapidly failing, more so than ever. Let me start with an example. Recently, I collaborated with economist Rob Frank, roboticist Cynthia Breazeal, and social psychologist David DeSteno. The experiment that we did was interested in looking at how we detect trustworthiness in others.

We had people interact—strangers interact in the lab—and we filmed them, and we got the cues that seemed to indicate that somebody's going to be either more cooperative or less cooperative. But the fun part of this study was that for the second part we got those cues and we programmed a robot—Nexi the robot, from the lab of Cynthia Breazeal at MIT—to emulate, in one condition, those non-verbal gestures. So what I'm talking about today is not about the results of that study, but rather what was interesting about looking at people interacting with the robot.



The entire page is here.

Wednesday, March 5, 2014

Experimental Philosophy: Intentionality, Emotion, and Moral Reasoning

By Joshua Knobe
Edge Videos
Originally published February 2014

Joshua Knobe outlines research on intentionality, emotion, and moral reasoning.


Wednesday, August 14, 2013

Intent to Harm: Willful Acts Seem More Damaging

Science Daily
Originally published July 29, 2013

How harmful we perceive an act to be depends on whether we see the act as intentional, reveals new research published in Psychological Science, a journal of the Association for Psychological Science.

The new research shows that people significantly overestimate the monetary cost of intentional harm, even when they are given a financial incentive to be accurate.

"The law already recognizes intentional harm as more wrong than unintentional harm," explain researchers Daniel Ames and Susan Fiske of Princeton University. "But it assumes that people can assess compensatory damages -- what it would cost to make a person 'whole' again -- independently of punitive damages."

According to Ames and Fiske, the new research suggests that this separation may not be psychologically plausible:

"These studies suggest that people might not only penalize intentional harm more, but actually perceive it as intrinsically more damaging."

The entire story is here.

Saturday, April 21, 2012

Turning Good Intentions into Good Behavior: Self-perception, Self-care, and Social Influences

Samuel Knapp, EdD, ABPP
Director of Professional Affairs

John D. Gavazzi, PsyD, ABPP
Chair, PPA Ethics Committee

Originally published in The Pennsylvania Psychologist


Most of us want to fulfill our ethical mandate to help our clients as best as we can. However, non-rational factors, such as faulty thinking habits, situational pressures, or fatigue, can overpower our good intentions and lead to less-than-optimal ethical behaviors. We are not just referring to flagrant misconduct that would leave us vulnerable to a licensing board complaint or lawsuit. Instead, this less-than-optimal behavior is more subtle, such as delivering acceptable (but not top-quality) professional services.
Traditional approaches to improve ethical conduct and clinical skills involve attending didactic lectures. As helpful as these lectures may be, behavioral change is more likely to occur when we take a more active role in exploring how important variables such as self-perception, self-care, and social factors influence clinical performance (Tjeltveit & Gottlieb, 2010). Reducing our blind spots, increasing our self-knowledge, and enhancing our awareness of work pressures and organizational cultures are worthwhile processes to explore in order to investigate our basic ethical obligations (Bazerman & Tenbrunsel, 2011).
“Professional narcissism,” or an “overestimation of one’s abilities” (Younggren, 2007, p. 515), represents one such blind spot. For example, Davis et al. (2006) asked physicians to perform a standardized patient procedure, and then estimate their competence at that procedure. Most physicians rated themselves higher than justified, including a few who performed incompetently but nonetheless rated themselves very high. While a modest amount of overconfidence may be harmless (or perhaps even healthy), we need to guard against the tendency to see ourselves as much better than we really are. We can avoid professional narcissism through activities that promote self-reflection, such as keeping a journal geared toward clinical experiences and contemplating ethical nuances of practice. We can also establish routines to ensure regular feedback about our behavior, such as asking patients questions at the end of sessions. We can ask how the session went or how we could have been more helpful. Some psychologists have adopted a productive philosophy of admitting mistakes, apologizing for them (when appropriate), learning from them, and then moving on (showing self-compassion). “People can learn to see mistakes not as terrible personal failings to be denied or justified, but as inevitable aspects of life that help us grow” (Tavris & Aronson, 2007, p. 235).
Medical residents who are fatigued make more errors as their fatigue increases (Harvard Work Group, 2004). Similarly, we are less able to focus on our professional obligations and we can become more prone to errors when we are fatigued. Highly competent psychologists engage in positive self-care activities, such as regular exercise, good sleep hygiene, healthy eating, and other activities that promote health and wellness. Part of self-care means accepting our limitations in terms of time, energy, and resources. Healthy psychologists acknowledge that they cannot help everyone and cannot master every facet in the psychology domain.
Some practices, agencies, or organizations may not value ethical behavior, even though they may have an ethics policy, an ethics code, mandatory ethics education, or other formal structures designed to promote ethics. However, the “hidden culture” of the organization often has more influence than formal guidelines when framing ethical dilemmas and determining ethical behavior. “Formal systems are the weakest link in an organization’s ethical infrastructure” (Bazerman & Tenbrunsel, 2011, p. 118). That is, the interactions and comments that occur among members of the organization create the day-to-day ethical tone of an organization. The informal ethical culture of an organization courses through the stories that employees tell, the euphemisms that they use to describe issues, or the socialization rituals that employees undergo. In many cases, the cultural influences on practitioners remain unseen, especially to those who remain frame-dependent.
Here are some strategies, activities, or routines that some psychologists have used to reduce the gap between good intentions and good behavior.

Self-Directed Activities to Enhance Ethical Practice

Encourage self-reflection (to reduce or to avoid professional narcissism)

Keep a journal or a diary to focus on therapy and possible ethical issues in daily practice, engage in therapy, try to be more open-minded, listen to feelings.
Routinely ask patients for feedback at the end of each session (what did I do that was helpful today? Not helpful?). Routinely gather outcome data. Re-read therapy notes to become aware of any unproductive emotions or countertransference.
Think in terms of ethical issues when facing clinical problems.
Have a productive philosophy concerning mistakes: Admit them, apologize (if helpful), learn from them, and move on (show self-compassion).
Attend to environmental influences

Encourage friends or colleagues to tell me when they think I am doing something wrong.
Develop schedules – although not too rigidly—and think about time management.
Attend to environmental circumstances that might influence me to engage in less than optimal ethical behavior.
Be aware of temptations to minimize the worth or individuality of clients or other people (e.g., interpret troublesome behaviors as barriers, not manifestations of evil).

Establish Healthy Routines

Make checklists or schedule healthy activities.
Make learning a habit. Attend CE programs (especially programs on ethics), read journals, get advanced training or certification in an area of psychology.
Keep the APA Ethics Code or the Pennsylvania licensing law and regulations close by.
Get in the habit of using an ethical decision-making model.
Belong to and participate in a professional association (or present at a CE program, join a listserv, start a blog, or participate in student groups, committees).
Uphold ideals without being sanctimonious.
Prevent problems ahead of time

Practice self-care: e.g., pay attention to exercise, sleep hygiene, and diet.
Maintain a good work-life balance.
Reduce dysfunctional emotions through meditation, mindfulness exercises, therapy, or recreational activities unrelated to school or work.
Manage time and tasks carefully (breaking big tasks into smaller ones).
Accept my limitations in terms of time, energy, and resources (I can’t help everyone; I can’t do everything). Balance compassion and altruism with my own needs.[1]
Show concern for others, including fellow psychologists (help them out if I can); commit random acts of kindness; express appreciation (say “thank you”).

References
Bazerman, M., & Tenbrunsel, A. (2011). Blind spots. Princeton, NJ: Princeton University Press.
Davis, D., Mazmanian, P. E., Fordis, M., Van Harrison, R., Thorpe, K. E., & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. Journal of the American Medical Association, 296, 1137-1139.
Harvard Work Hours Health and Safety Group. (2004). Effect of reducing interns' work hours on serious medical errors in intensive care units. New England Journal of Medicine, 351, 1838-1848.
Ross, W. D. (1998). What makes right acts right? In J. Rachels (Ed.), Ethical theory (pp. 265-285). New York: Oxford University Press. (Original work published 1930).
Tavris, C., & Aronson, E. (2007). Mistakes were made. Orlando, FL: Harcourt.
Tjeltveit, A., & Gottlieb, M. (2010). Avoiding the road to ethical disaster: Overcoming vulnerabilities and developing resilience. Psychotherapy: Theory, Research, Practice, Training, 47, 98-110.
Younggren, J. (2007). Competence as a process of self-appraisal. Professional Psychology: Research and Practice, 38, 515-516.


[1] W.D. Ross (1998) says that supererogatory obligations should not distract us from our primary obligations to family, close friends, and ourselves.

Wednesday, August 3, 2011

Reviewing Autonomy

Implications of the Neurosciences and the Free Will Debate for the Principle of Respect for the Patient's Autonomy

Sabine Muller & Henrik Walter. Cambridge Quarterly of Healthcare Ethics. New York: Apr 2010. Vol. 19, Iss. 2; pg. 205, 13 pgs

Introduction

Beauchamp and Childress have performed a great service by strengthening the principle of respect for the patient's autonomy against the paternalism that dominated medicine until at least the 1970s. Nevertheless, we think that the concept of autonomy should be elaborated further. We suggest such an elaboration built on recent developments within the neurosciences and the free will debate. The reasons for this suggestion are at least twofold: First, Beauchamp and Childress neglect some important elements of autonomy. Second, neuroscience itself needs a conceptual apparatus to deal with the neural basis of autonomy for diagnostic purposes. This need is growing because modern therapy options can considerably influence the neural basis of autonomy itself.

Beauchamp and Childress analyze autonomous actions in terms of normal choosers who act (1) intentionally, (2) with understanding, and (3) without controlling influences (coercion, persuasion, and manipulation) that determine their actions. 1 In terms of the free will debate, the absence of external controlling influences, their third criterion, corresponds to the freedom of action: to do what one wants to do without being hindered to do so. Criteria one and two are related to volition: that a choice is intentional, that is, that it has a certain goal that is properly understood by the person choosing.

According to Beauchamp and Childress, the principle of autonomy implies that patients have the right to choose between different medical therapy options taking into account risks and benefits as well as their personal situation and individual values. To enable an autonomous decision the procedure of informed consent 2 has been developed. This procedure has become the gold standard in almost every part of medicine. Importantly, Beauchamp and Childress demand respect for a patient's autonomy under the premise that the patient is able to act in a sufficiently autonomous manner. 3 The crucial question in a special situation is whether this is the case.

Let us consider the example of the recent controversial discussion of Body Integrity Identity Disorder: 4 If a patient asks a physician to amputate one of his legs although it neither hurts nor is deformed, paralyzed, or ugly (in the patient's view), and if the patient understands the consequences of the amputation and is not controlled by external influences, then one could deduce from the principle of respect for the patient's autonomy that the physician should amputate the leg. Although some commentators regard this as self-evident, we think the case has not yet been made, because it matters which internal processes have led to the patient's wish.

We propose to add a fourth criterion for autonomous actions, namely, freedom from internal coercive influences. In the case of the patient who desires an amputation, it would have to be investigated whether his decision is based on internal coercion. Clear examples of this would be an acute episode of schizophrenia or a brain tumor. More controversial are neurotic beliefs, obsessions and compulsions, severe personality disorders, or neurological dysfunctions not accessible with conventional diagnostic tools.

Although Beauchamp and Childress have not elaborated the principle of autonomy with regard to internal coercions, they clearly argue that the obligations to respect autonomy do not apply to persons who show a substantial lack of autonomy because they are immature, incapacitated, ignorant, coerced, or exploited, for example, infants, irrationally suicidal individuals, severely demented subjects, or drug-dependent patients. 5 But these kinds of patients are treated in medical ethics as exceptions and therefore as marginal cases. They are not considered to be important for the formulation of the principles.

The rest of the article can be found here. It is not available for free without access to PubMed.gov; a university library may also be helpful for reading the entire article.