Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Self-Reflection. Show all posts

Sunday, August 27, 2023

Ontario court rules against Jordan Peterson, upholds social media training order

Canadian Broadcasting Corporation
Originally posted August 23, 2023

An Ontario court ruled against psychologist and media personality Jordan Peterson Wednesday, and upheld a regulatory body's order that he take social media training in the wake of complaints about his controversial online posts and statements.

Last November, Peterson, a professor emeritus with the University of Toronto psychology department who is also an author and media commentator, was ordered by the College of Psychologists of Ontario to undergo a coaching program on professionalism in public statements.

That followed numerous complaints to the governing body of Ontario psychologists, of which Peterson is a member, regarding his online commentary directed at politicians, a plus-sized model, and transgender actor Elliot Page, among other issues. You can read more about those social media posts here.

The college's complaints committee concluded his controversial public statements could amount to professional misconduct and ordered Peterson to pay for a media coaching program — noting failure to comply could mean the loss of his licence to practice psychology in the province.

Peterson filed for a judicial review, arguing his political commentary is not under the college's purview.

Three Ontario Divisional Court judges unanimously dismissed Peterson's application, ruling that the college's decision falls within its mandate to regulate the profession in the public interest and does not affect his freedom of expression.

"The order is not disciplinary and does not prevent Dr. Peterson from expressing himself on controversial topics; it has a minimal impact on his right to freedom of expression," the decision written by Justice Paul Schabas reads, in part.

My take:

Peterson has argued that the order violates his right to free speech. He has also said that the complaints against him were politically motivated. However, the court ruled that the college's order was justified to protect the public from harm.

The case of Jordan Peterson is a reminder that psychologists, like other human beings, are not infallible. They are capable of making mistakes and of expressing harmful views. It is important to hold psychologists accountable for their actions, and to ensure that they are held to the highest ethical standards.

There are a number of steps that could help mitigate bias in psychology. These include:
  • Increasing diversity in the field of psychology
  • Promoting critical thinking and self-reflection among psychologists
  • Developing more specific ethical guidelines for psychologists' use of social media
  • Holding psychologists accountable for their online behavior

Tuesday, August 23, 2022

Tackling Implicit Bias in Health Care

J. A. Sabin
N Engl J Med 2022; 387:105-107
DOI: 10.1056/NEJMp2201180

Implicit and explicit biases are among many factors that contribute to disparities in health and health care. Explicit biases, the attitudes and assumptions that we acknowledge as part of our personal belief systems, can be assessed directly by means of self-report. Explicit, overtly racist, sexist, and homophobic attitudes often underpin discriminatory actions. Implicit biases, by contrast, are attitudes and beliefs about race, ethnicity, age, ability, gender, or other characteristics that operate outside our conscious awareness and can be measured only indirectly. Implicit biases surreptitiously influence judgment and can, without intent, contribute to discriminatory behavior. A person can hold explicit egalitarian beliefs while harboring implicit attitudes and stereotypes that contradict their conscious beliefs.

Moreover, our individual biases operate within larger social, cultural, and economic structures whose biased policies and practices perpetuate systemic racism, sexism, and other forms of discrimination. In medicine, bias-driven discriminatory practices and policies not only negatively affect patient care and the medical training environment, but also limit the diversity of the health care workforce, lead to inequitable distribution of research funding, and can hinder career advancement.

A review of studies involving physicians, nurses, and other medical professionals found that health care providers’ implicit racial bias is associated with diagnostic uncertainty and, for Black patients, negative ratings of their clinical interactions, less patient-centeredness, poor provider communication, undertreatment of pain, views of Black patients as less medically adherent than White patients, and other ill effects.1 These biases are learned from cultural exposure and internalized over time: in one study, 48.7% of U.S. medical students surveyed reported having been exposed to negative comments about Black patients by attending or resident physicians, and those students demonstrated significantly greater implicit racial bias in year 4 than they had in year 1.

A review of the literature on reducing implicit bias, which examined evidence on many approaches and strategies, revealed that methods such as exposure to counterstereotypical exemplars, recognizing and understanding others’ perspectives, and appeals to egalitarian values have not resulted in reduction of implicit biases.2 Indeed, no interventions for reducing implicit biases have been shown to have enduring effects. Therefore, it makes sense for health care organizations to forgo bias-reduction interventions and focus instead on eliminating discriminatory behavior and other harms caused by implicit bias.

Though pervasive, implicit bias is hidden and difficult to recognize, especially in oneself. It can be assumed that we all hold implicit biases, but both individual and organizational actions can combat the harms caused by these attitudes and beliefs. Awareness of bias is one step toward behavior change. There are various ways to increase our awareness of personal biases, including taking the Harvard Implicit Association Tests, paying close attention to our own mistaken assumptions, and critically reflecting on biased behavior that we engage in or experience. Gonzalez and colleagues offer 12 tips for teaching recognition and management of implicit bias; these include creating a safe environment, presenting the science of implicit bias and evidence of its influence on clinical care, using critical reflection exercises, and engaging learners in skill-building exercises and activities in which they must embrace their discomfort.

Thursday, February 17, 2022

Filling the gaps: Cognitive control as a critical lens for understanding mechanisms of value-based decision-making.

Frömer, R., & Shenhav, A. (2021, May 17). 


While often seeming to investigate rather different problems, research into value-based decision making and cognitive control have historically offered parallel insights into how people select thoughts and actions. While the former studies how people weigh costs and benefits to make a decision, the latter studies how they adjust information processing to achieve their goals. Recent work has highlighted ways in which decision-making research can inform our understanding of cognitive control. Here, we provide the complementary perspective: how cognitive control research has informed understanding of decision-making. We highlight three particular areas of research where this critical interchange has occurred: (1) how different types of goals shape the evaluation of choice options, (2) how people use control to adjust how they make their decisions, and (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales. We show how adopting this alternate viewpoint offers new insight into the determinants of both decisions and control; provides alternative interpretations for common neuroeconomic findings; and generates fruitful directions for future research.


•  We review how taking a cognitive control perspective provides novel insights into the mechanisms of value-based choice.

•  We highlight three areas of research where this critical interchange has occurred:

      (1) how different types of goals shape the evaluation of choice options,

      (2) how people use control to adjust how they make their decisions, and

      (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales.

From Exerting Control Beyond Our Current Choice

We have so far discussed choices the way they are typically studied: in isolation. However, we don’t make choices in a vacuum, and our current choices depend on previous choices we have made (Erev & Roth, 2014; Keung, Hagen, & Wilson, 2019; Talluri et al., 2020; Urai, Braun, & Donner, 2017; Urai, de Gee, Tsetsos, & Donner, 2019). One natural way in which choices influence each other is through learning about the options, where the evaluations of the outcome of one choice refine the expected value (incorporating range and probability) assigned to that option in future choices (Fontanesi, Gluth, et al., 2019; Fontanesi, Palminteri, et al., 2019; Miletic et al., 2021). Here we focus on a different, complementary way, central to cognitive control research, where evaluations of the process of ongoing and past choices inform the process of future choices (Botvinick et al., 1999; Bugg, Jacoby, & Chanani, 2011; Verguts, Vassena, & Silvetti, 2015). In cognitive control research, these choice evaluations and their influence on subsequent adaptation are studied under the umbrella of performance monitoring (Carter et al., 1998; Ullsperger, Fischer, Nigbur, & Endrass, 2014). Unlike option-based learning, performance monitoring influences not only which options are chosen, but also how subsequent choices are made. It also informs higher-order decisions about strategy and task selection (Fig. 5A).

Wednesday, March 4, 2020

How Common Mental Shortcuts Can Cause Major Physician Errors

Anupam B. Jena and Andrew R. Olenski
The New York Times
Originally posted February 20, 2020

Here is an excerpt:

In health care, such unconscious biases can lead to disparate treatment of patients and can affect whether similar patients live or die.

Sometimes these cognitive biases are simple overreactions to recent events, what psychologists term availability bias. One study found that when patients experienced an unlikely adverse side effect of a drug, their doctor was less likely to order that same drug for the next patient whose condition might call for it, even though the efficacy and appropriateness of the drug had not changed.

A similar study found that when mothers giving birth experienced an adverse event, their obstetrician was more likely to switch delivery modes for the next patient (C-section vs. vaginal delivery), regardless of the appropriateness for that next patient. This cognitive bias resulted in both higher spending and worse outcomes.

Doctor biases don’t affect treatment decisions alone; they can shape the profession as a whole. A recent study analyzed gender bias in surgeon referrals and found that when the patient of a female surgeon dies, the physician who made the referral to that surgeon sends fewer patients to all female surgeons in the future. The study found no such decline in referrals for male surgeons after a patient death.

This list of biases is far from exhaustive, and though they may be disconcerting, uncovering new systematic mistakes is critical for improving clinical practice.

The info is here.

Wednesday, February 19, 2020

How to talk someone out of bigotry

Brian Resnick
Originally published January 29, 2020

Here is an excerpt:

Topping and dozens of other canvassers were a part of that 2016 effort. It was an important study: Not only has social science found very few strategies that work, in experiments, to change minds on issues of prejudice, but even fewer tests of those strategies have occurred in the real world.

Typically, the conversations begin with the canvasser asking the voter for their opinion on a topic, like abortion access, immigration, or LGBTQ rights. Canvassers (who may or may not be members of the impacted community) listen nonjudgmentally. They don’t say if they are pleased or hurt by the response. They are supposed “to appear genuinely interested in hearing the subject ruminate on the question,” as Broockman and Kalla’s latest study instructions read.

The canvassers then ask if the voters know anyone in the affected community, and ask if they relate to the person’s story. If they don’t, and even if they do, they’re asked a question like, “When was a time someone showed you compassion when you really needed it?” to get them to reflect on their experience when they might have felt something similar to the people in the marginalized community.

The canvassers also share their own stories: about being an immigrant, about being a member of the LGBTQ community, or about just knowing people who are.

It’s a type of conversation that’s closer to what a psychotherapist might have with a patient than a typical political argument. (One clinical therapist I showed it to said it sounded a bit like “motivational interviewing,” a technique used to help clients work through ambivalent feelings.) It’s not about listing facts or calling people out on their prejudicial views. It’s about sharing and listening, all the while nudging people to be analytical and think about their shared humanity with marginalized groups.

The info is here.

Friday, November 15, 2019

Gartner Fellow discusses ethics in artificial intelligence

Teena Maddox
Originally published October 28, 2019

Here is an excerpt:

There are tons of ways you can use AI ethically and also unethically. One way that is typically being cited is, for instance, using attributes of people that shouldn't be used. For instance, when granting somebody a mortgage or access to something or making other decisions. Racial profiling is typically mentioned as an example. So, you need to be mindful which attributes are being used for making decisions. How do the algorithms learn? Other ways of abuse of AI is, for instance, with autonomous killer drones. Would we allow algorithms to decide who gets bombed by a drone and who does not? And most people seem to agree that autonomous killer drones are not a very good idea.

The most important thing that a developer can do in order to create ethical AI is to not think of this as technology, but an exercise in self-reflection. Developers have certain biases. They have certain characteristics themselves. For instance, developers are keen to search for the optimal solution of a problem; it is built into their brains. But ethics is a very pluralistic thing. There's different people who have different ideas. There is not one optimal answer of what is good and bad. First and foremost, developers should be aware of their own ethical biases of what they think is good and bad and create an environment of diversity where they test those assumptions and where they test their results. The developer brain isn't the only brain or type of brain that is out there, to say the least.

So, AI and ethics is really a story of hope. It's for the very first time that a discussion of ethics is taking place before the widespread implementation, unlike in previous rounds where the ethical considerations were taking place after the effects.

The info is here.

Thursday, October 17, 2019

AI ethics and the limits of code(s)

Geoff Mulgan
Originally published September 16, 2019

Here is an excerpt:

1. Ethics involve context and interpretation - not just deduction from codes.

Too much writing about AI ethics uses a misleading model of what ethics means in practice. It assumes that ethics can be distilled into principles from which conclusions can then be deduced, like a code. The last few years have brought a glut of lists of principles (including some produced by colleagues at Nesta). Various overviews have been attempted in recent years. A recent AI Ethics Guidelines Global Inventory collects over 80 different ethical frameworks. There’s nothing wrong with any of them and all are perfectly sensible and reasonable. But this isn’t how most ethical reasoning happens. The lists assume that ethics is largely deductive, when in fact it is interpretive and context specific, as is wisdom. One basic reason is that the principles often point in opposite directions - for example, autonomy, justice and transparency. Indeed, this is also the lesson of medical ethics over many decades. Intense conversation about specific examples, working through difficult ambiguities and contradictions, counts for a lot more than generic principles.

The info is here.

Sunday, June 2, 2019

Promoting competent and flourishing life-long practice for psychologists: A communitarian perspective

Wise, E. H., & Reuman, L. (2019).
Professional Psychology: Research and Practice, 50(2), 129-135.


Based on awareness of the challenges inherent in the practice of psychology there is a burgeoning interest in ensuring that psychologists who serve the public remain competent. These challenges include remaining current in our technical skills and maintaining sufficient personal wellness over the course of our careers. However, beyond merely maintaining competence, we encourage psychologists to envision flourishing lifelong practice that incorporates positive relationships, enhancement of meaning, and positive engagement. In this article we provide an overview of the foundational competencies related to professionalism including ethics, reflective practice, self-assessment, and self-care that underlie our ability to effectively apply technical skills in often complex and emotionally challenging relational contexts. Building on these foundational competencies that were initially defined and promulgated for academic training in health service psychology, we provide an initial framework for conceptualizing psychologist well-being and flourishing lifelong practice that incorporates tenets of applied positive psychology, values-based practice, and a communitarian-oriented approach into the following categories: fostering relationships, meaning making and value-based practice, and enhancing engagement. Finally, we propose broad strategies and specific examples intended to leverage current continuing education mandates into a broadly conceived vision of continuing professional development to support enhanced psychologist functioning for lifelong practice.

The info is here.

Wednesday, May 29, 2019

Why Do We Need Wisdom To Lead In The Future?

Sesil Pir
Originally posted May 19, 2019

Here is an excerpt:

We live in a society that encourages us to think about how to have a great career but leaves us inarticulate about how to cultivate the inner life. The road to success is definitively paved through competition, and so fiercely that it becomes all-consuming for many of us. It is commonly accepted today that information is the key source of all being; yet information alone doesn’t endow one with knowledge, just as knowledge alone doesn’t lead to righteous action. In the age of artificial information, we need to look beyond data to drive purposeful progression and authentic illumination.

Wisdom in the context of leadership refers to our quality of having good, sound judgment. It is a source that provides light into our own insight and introduces a new appreciation for the world around us. It helps us recognize that others are more than our limiting impressions of them. It fills us with confidence that we are connected and better capable than we could ever dream of.

People with this quality tend to lead from a place of strong internal cohesion. They have overcome fragmentation to reach a level of integration, which supports the way they show up – tranquil, settled and rooted. These people tend to withstand the hard winds of volatility and not easily crumble in the face of adversity. They ground their thoughts, emotions and behaviors in values that feed their self-efficacy, and they heartfully understand that perfectionism is an unattainable goal.

The info is here.

Saturday, December 15, 2018

What is ‘moral distress’? A narrative synthesis of the literature

Georgina Morley, Jonathan Ives, Caroline Bradbury-Jones, & Fiona Irvine
Nursing Ethics
First published October 8, 2017 (review article)


The concept of moral distress (MD) was introduced to nursing by Jameton, who defined MD as arising ‘when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action’. MD has subsequently gained increasing attention in nursing research, the majority of which has been conducted in North America, though work is now emerging in South America, Europe, the Middle East and Asia. Studies have highlighted the deleterious effects of MD, with correlations between higher levels of MD, negative perceptions of ethical climate and increased levels of compassion fatigue among nurses. The consensus is that MD can negatively impact patient care, causing nurses to avoid certain clinical situations and ultimately leave the profession. MD is therefore a significant problem within nursing, requiring investigation, understanding, clarification and responses. The growing body of MD research, however, is arguably failing to bring the required clarification and has instead complicated attempts to study it. The increasing number of cited causes and effects of MD means the term has expanded to the point that, according to Hanna and McCarthy and Deady, it is becoming an ‘umbrella term’ that lacks conceptual clarity, referring unhelpfully to a wide range of phenomena and causes. Without a coherent and consistent conceptual understanding, however, empirical studies of MD’s prevalence, effects, and possible responses are likely to be confused and contradictory.

A useful starting point is a systematic exploration of existing literature to critically examine definitions and understandings currently available, interrogating their similarities, differences, conceptual strengths and weaknesses. This article presents a narrative synthesis that explored proposed necessary and sufficient conditions for MD, and in doing so, this article also identifies areas of conceptual tension and agreement.

Thursday, July 19, 2018

Ethics Policies Don't Build Ethical Cultures

Dori Meinert
Originally posted June 19, 2018

Here is an excerpt:

Most people think they would never voluntarily commit an unethical or illegal act. But when Gallagher asked how many people in the audience had ever received a speeding ticket, numerous hands were raised. Similarly, employees rationalize their misuse of company supplies all the time, such as shopping online on their company-issued computer during work hours.

"It's easy to make unethical choices when they are socially acceptable," he said.

But those seemingly small choices can start people down a slippery slope.

Be on the Lookout for Triggers

No one plans to destroy their career by breaking the law or violating their company's ethics policy. There are usually personal stressors that push them over the edge, triggering a "fight or flight" response. At that point, they're not thinking rationally, Gallagher said.

Financial problems, relationship problems or health issues are the most common emotional stressors, he said.

"If you're going to be an ethical leader, are you paying attention to your employees' emotional triggers?"

The information is here.

Wednesday, December 13, 2017

Authenticity and Modernity

Andrew Bowie
Originally published November 6, 2017

Here are two excerpts:

As soon as there is a division in the self, of the kind generated by seeking self-knowledge, attributes like authenticity become a problem. The idea of anyone claiming ‘I am an authentic person’ involves a kind of self-observation that destroys what it seeks to affirm. This situation poses important questions about knowledge. If authenticity is destroyed by the subject thinking it knows that it is authentic, there seem to be ways of being which may be valuable because they transcend our ability to know them. As we shall see in a moment, this idea may help explain why art takes on new significance in modernity.

Despite these difficulties, the notion of authenticity has not disappeared from social discourse, which suggests it answers to a need to articulate something, even as that articulation seems to negate it. The problem with the notion as applied to individuals lies, then, in modern conflicts about the nature of the subject, where Marx, Nietzsche, Freud, and many others, put in question, in the manner already suggested by Schelling, the extent to which people can be transparent to themselves. Is what I am doing a true expression of myself, or is it the result of social conditioning, self-deception, the unconscious?


The early uses of ‘sincere’ and ‘authentic’ had applied both to objects and people, but the moralising of the terms in the wake of the new senses of the self/subject that emerge in the modern era meant the terms came to apply predominantly to assessments of people. The more recent application of ‘authentic’ to watches, iPhones, trainers, etc., thus to objects which rely not least on their status as ‘brands’, can therefore be read as part of what Georg Lukács termed ‘reification’. Relations to objects can start to distort relations between people, giving the value of the ‘brand’ object primacy over that of other subjects. The figures here may be open to question, but the phenomenon seems to be real. The point is that this particular kind of violent theft is linked to the way objects are promoted as ‘authentic’ in the market, rather than just to either their monetary- or use-value.

The article is here.

Tuesday, October 24, 2017

Gaslighting, betrayal and the boogeyman: Personal reflections on the American Psychological Association, PENS and the involvement of psychologists in torture

Nina Thomas
International Journal of Applied Psychoanalytic Studies


The American Psychological Association's (APA's) sanctioning of psychologists' involvement in “enhanced interrogations,” aka torture, authorized by the closely parsed re-interpretation of relevant law by the Bush administration, has roiled the association since it appointed a task force in 2005. The Psychological Ethics and National Security (PENS) task force, its composition, methods and outcomes have brought public shame to the profession, the association and its members. Having served on the task force and been involved in the aftermath, I offer reflections on my role to provide an insider's look at the struggle I experienced over loyalty to principle, profession, colleagues, and the association. Situating what occurred in the course of the PENS process and its aftermath within the framework of Freyd and her collaborators' theory of “betrayal trauma,” in particular “institutional trauma,” I suggest that others too share similar feelings of profound betrayal by an organization with which so many of us have been identified over the course of many years. I explore the ways in which attachments have been challenged and undermined by what occurred. Among the questions I have grappled with are: Was I the betrayed or betrayer, or both? How can similar self-reflection usefully be undertaken both by the association itself and other members about their actions or inactions?

The article is here.

Monday, June 19, 2017

The Value of Sharing Information: A Neural Account of Information Transmission

Elisa C. Baek, Christin Scholz, Matthew Brook O’Donnell, & Emily Falk
Psychological Science
May 2017


Humans routinely share information with one another. What drives this behavior? We used neuroimaging to test an account of information selection and sharing that emphasizes inherent reward in self-reflection and connecting with other people. Participants underwent functional MRI while they considered personally reading and sharing New York Times articles. Activity in neural regions involved in positive valuation, self-related processing, and taking the perspective of others was significantly associated with decisions to select and share articles, and scaled with preferences to do so. Activity in all three sets of regions was greater when participants considered sharing articles with other people rather than selecting articles to read themselves. The findings suggest that people may consider value not only to themselves but also to others even when selecting news articles to consume personally. Further, sharing heightens activity in these pathways, in line with our proposal that humans derive value from self-reflection and connecting to others via sharing.

The article is here.

Wednesday, March 22, 2017

The Case of Dr. Oz: Ethics, Evidence, and Does Professional Self-Regulation Work?

Jon C. Tilburt, Megan Allyse, and Frederic W. Hafferty
AMA Journal of Ethics. February 2017, Volume 19, Number 2: 199-206.


Dr. Mehmet Oz is widely known not just as a successful media personality donning the title “America’s Doctor®,” but, we suggest, also as a physician visibly out of step with his profession. A recent, unsuccessful attempt to censure Dr. Oz raises the issue of whether the medical profession can effectively self-regulate at all. It also raises concern that the medical profession’s self-regulation might be selectively activated, perhaps only when the subject of professional censure has achieved a level of public visibility. We argue here that the medical profession must look at itself with a healthy dose of self-doubt about whether it has sufficient knowledge of or handle on the less visible Dr. “Ozes” quietly operating under the profession’s presumptive endorsement.

The article is here.

Friday, March 3, 2017

Doctors suffer from the same cognitive distortions as the rest of us

Michael Lewis
Originally posted February 9, 2017

Here are two excerpts:

What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling is that the mistakes were predictable and systematic. They seemed ingrained in human nature. One passage in particular stuck with him—about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.” This wasn’t just about how many words in the English language started with the letter K. This was about life and death.


Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

The article is here.

Monday, December 5, 2016

Why Some People Get Burned Out and Others Don't

Kandi Wiens and Annie McKee
Harvard Business Review
Originally posted November 23, 2016

Here is an excerpt:

What You Can Do to Manage Stress and Avoid Burnout

People do all kinds of destructive things to deal with stress—they overeat, abuse drugs and alcohol, and push harder rather than slowing down. What we learned from our study of chief medical officers is that people can leverage their emotional intelligence to deal with stress and ward off burnout. You, too, might want to try the following:

Don’t be the source of your stress. Too many of us create our own stress, with its full bodily response, merely by thinking about or anticipating future episodes or encounters that might be stressful. People who have a high need to achieve or perfectionist tendencies may be more prone to creating their own stress. We learned from our study that leaders who are attuned to the pressures they put on themselves are better able to control their stress level. As one CMO described, “I’ve realized that much of my stress is self-inflicted from years of being hard on myself. Now that I know the problems it causes for me, I can talk myself out of the non-stop pressure.”

Recognize your limitations. Becoming more aware of your strengths and weaknesses will clue you in to where you need help. In our study, CMOs described the transition from a clinician role to a leadership role as a major source of their stress. Those who recognized when the demands were outweighing their abilities didn't go it alone; they surrounded themselves with trusted advisors and asked for help.

The article is here.

Tuesday, May 17, 2016

Later Career Remedial Supervision - The Practice Event Audit

Jon Amundson
The Practitioner Scholar: Journal of Counseling and Professional Psychology
Volume 5, 2016, p. 32


Clinical supervision has, for the most part, focused on early-career preparation and training, with an emphasis on emerging competency. However, supervision can also be required in relation to enduring competency. Where lapses in professional practice are of a subtle or non-egregious nature, supervision may serve as a remedial route. Remedial supervision may be mandated through a hearing or tribunal, or negotiated through Alternative Dispute Resolution (ADR). In this article, mandated or negotiated remedial supervision is discussed, along with a specific description of one means of conducting it: the Practice Event Audit. Issues related to ethics, conduct and competency, remedial supervision, and the Practice Event Audit are discussed in light of a case example.

The paper is here.

Wednesday, December 9, 2015

Three Principles to REVISE People's Unethical Behavior

Ayal, S., F. Gino, R. Barkan, and D. Ariely.
Perspectives on Psychological Science
November 2015, Vol. 10, No. 6, 738-741


Dishonesty and unethical behavior are widespread in the public and private sectors and cause immense annual losses. For instance, estimates of U.S. annual losses indicate $1 trillion paid in bribes, $270 billion lost due to unreported income, and $42 billion lost in retail due to shoplifting and employee theft. In this article, we draw on insights from the growing fields of moral psychology and behavioral ethics to present a three-principle framework we call REVISE. This framework classifies forces that affect dishonesty into three main categories and then redirects those forces to encourage moral behavior. The first principle, reminding, emphasizes the effectiveness of subtle cues that increase the salience of morality and decrease people’s ability to justify dishonesty. The second principle, visibility, aims to restrict anonymity, prompt peer monitoring, and elicit responsible norms. The third principle, self-engagement, increases people’s motivation to maintain a positive self-perception as a moral person and helps bridge the gap between moral values and actual behavior. The REVISE framework can guide the design of policy interventions to defeat dishonesty.

The article is here.

Wednesday, September 9, 2015

How can healthcare professionals better manage their unconscious racial bias?

By April Dembosky
MedCity News
Originally published August 21, 2015

Here is an excerpt:

Racial Disparity In Medical Treatment Persists

Even as the health of Americans has improved, the disparities in treatment and outcomes between white patients and black and Latino patients are almost as big as they were 50 years ago.

A growing body of research suggests that doctors’ unconscious behavior plays a role in these statistics, and the Institute of Medicine of the National Academy of Sciences has called for more studies looking at discrimination and prejudice in health care.

For example, several studies show that African-American patients are often prescribed less pain medication than white patients with the same complaints. Black patients with chest pain are referred for advanced cardiac care less often than white patients with identical symptoms.

Doctors, nurses and other health workers don’t mean to treat people differently, says Howard Ross, founder of management consulting firm Cook Ross, who has worked with many groups on diversity issues. But all these professionals harbor stereotypes that they’re not aware they have, he says. Everybody does.

The entire article is here.