Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Benefits. Show all posts

Thursday, October 19, 2023

10 Things Your Corporate Culture Needs to Get Right

D. Sull and C. Sull
MIT Sloan Management Review
Originally posted September 16, 2021

Here are two excerpts:

What distinguishes a good corporate culture from a bad one in the eyes of employees? This is a trickier question than it might appear at first glance. Most leaders agree in principle that culture matters but have widely divergent views about which elements of culture are most important. In an earlier study, we identified more than 60 distinct values that companies listed among their official “core values.” Most often, an organization’s official core values signal top executives’ cultural aspirations, rather than reflecting the elements of corporate culture that matter most to employees.

Which elements of corporate life shape how employees rate culture? To address this question, we analyzed the language workers used to describe their employers. When they complete a Glassdoor review, employees not only rate corporate culture on a 5-point scale, but also describe — in their own words — the pros and cons of working at their organization. The topics they choose to write about reveal which factors are most salient to them, and sentiment analysis reveals how positively (or negatively) they feel about each topic. (Glassdoor reviews are remarkably balanced between positive and negative observations.) By analyzing the relationship between their descriptions and rating of culture, we can start to understand what employees are talking about when they talk about culture.

(cut)

The following chart summarizes the factors that best predict whether employees love (or loathe) their companies. The bars represent each topic’s relative importance in predicting a company’s culture rating. Whether employees feel respected, for example, is 18 times more powerful a predictor of a company’s culture rating than the average topic. We’ve grouped related factors to tease out broader themes that emerge from our analysis.

Here are the 10 cultural dynamics, along with my take:
  1. Employees feel respected. Employees want to be treated with consideration, courtesy, and dignity. They want their perspectives to be taken seriously and their contributions to be valued.
  2. Employees have supportive leaders. Employees need leaders who will help them to do their best work, respond to their requests, accommodate their individual needs, offer encouragement, and have their backs.
  3. Leaders live core values. Employees need to see that their leaders are committed to the company's core values and that they are willing to walk the talk.
  4. Toxic managers. Toxic managers can create a poisonous work environment and lead to high turnover rates and low productivity.
  5. Unethical behavior. Employees need to have confidence that their colleagues and leaders are acting ethically and honestly.
  6. Employees have good benefits. Employees expect to be compensated fairly and to have access to a good benefits package.
  7. Perks. Perks can be anything from free snacks to on-site childcare to flexible work arrangements. They can help to make the workplace more enjoyable and improve employee morale.
  8. Employees have opportunities for learning and development. Employees want to grow and develop in their careers. They need to have access to training and development opportunities that will help them to reach their full potential.
  9. Job security. Employees need to feel secure in their jobs in order to focus on their work and be productive.
  10. Reorganizations. Employees' views of reorganizations, including how often they happen and how well they are handled, shape their rating of the culture.
The authors argue that these ten elements are essential for creating a corporate culture that is attractive to top talent, drives innovation and productivity, and leads to long-term success.

Additional thoughts

In addition to the ten elements listed above, there are a number of other factors that can contribute to a strong and positive corporate culture. These include:
  • Diversity and inclusion. Employees want to work in a company where they feel respected and valued, regardless of their race, ethnicity, gender, sexual orientation, or other factors.
  • Collaboration and teamwork. Employees want to work in a company where they can collaborate with others and achieve common goals.
  • Open communication and feedback. Employees need to feel comfortable communicating with their managers and colleagues, and they need to be open to receiving feedback.
  • Celebration of success. It is important to celebrate successes and recognize employees for their contributions. This helps to create a positive and supportive work environment.
By investing in these factors, companies can create a corporate culture that is both attractive to employees and beneficial to the bottom line.

Thursday, November 18, 2021

Ethics Pays: Summary for Businesses

Ethicalsystems.org
September 2021

Is good ethics good for business? Crime and sleazy behavior sometimes pay off handsomely. People would not do such things if they didn’t think they were more profitable than the alternatives.

But let us make two distinctions right up front. First, let us contrast individual employees with companies. Of course, it can benefit individual employees to lie, cheat, and steal when they can get away with it. But these benefits usually come at the expense of the firm and its shareholders, so leaders and managers should work very hard to design ethical systems that will discourage such self-serving behavior (known as the “principal-agent problem”).

The harder question is whether ethical violations committed by the firm or for the firm’s benefit are profitable. Cheating customers, avoiding taxes, circumventing costly regulations, and undermining competitors can all directly increase shareholder value.

And here we must make the second distinction: short-term vs. long-term. Of course, bad ethics can be extremely profitable in the short run. Business is a complex web of relationships, and it is easy to increase revenues or decrease costs by exploiting some of those relationships. But what happens in the long run?

Customers are happy and confident in knowing they’re dealing with an honest company. Ethical companies retain the bulk of their employees for the long-term, which reduces costs associated with turnover. Investors have peace of mind when they invest in companies that display good ethics because they feel assured that their funds are protected. Good ethics keep share prices high and protect businesses from takeovers.

Culture has a tremendous influence on ethics and its application in a business setting. A corporation’s ability to deliver ethical value is dependent on the state of its culture. The culture of a company influences the moral judgment of employees and stakeholders. Companies that work to create a strong ethical culture motivate everyone to speak and act with honesty and integrity. Companies that portray strong ethics attract customers to their products and services, and are far more likely to manage their negative environmental and social externalities well.

Sunday, November 29, 2020

Freerolls and binds: making policy when information is missing

Duke, A. & Sunstein, C.
(2020). Behavioural Public Policy, 1-22. 

Abstract

When policymakers focus on costs and benefits, they often find that hard questions become easy – as, for example, when the benefits clearly exceed the costs, or when the costs clearly exceed the benefits. In some cases, however, benefits or costs are difficult to quantify, perhaps because of limitations in scientific knowledge. In extreme cases, policymakers are proceeding in circumstances of uncertainty rather than risk, in the sense that they cannot assign probabilities to various outcomes. We suggest that in difficult cases in which important information is absent, it is useful for policymakers to consider a concept from poker: ‘freerolls.’ A freeroll exists when choosers can lose nothing from selecting an option but stand to gain something (whose magnitude may itself be unknown). In some cases, people display ‘freeroll neglect.’ In terms of social justice, John Rawls’ defense of the difference principle is grounded in the idea that, behind the veil of ignorance, choosers have a freeroll. In terms of regulatory policy, one of the most promising defenses of the Precautionary Principle sees it as a kind of freeroll. Some responses to climate change, pandemics and financial crises can be seen as near-freerolls. Freerolls and near-freerolls must be distinguished from cases involving cumulatively high costs and also from faux freerolls, which can be found when the costs of an option are real and significant, but not visible. ‘Binds’ are the mirror-image of freerolls; they involve options from which people are guaranteed to lose something (of uncertain magnitude). Some regulatory options are binds, and there are faux binds as well.

From the Conclusion

In ordinary life, people may be asked whether they want a freeroll, in the form of a good or opportunity from which they will lose nothing, but from which they gain something of value, when the magnitude of the gain cannot be specified. The gain might take the form of the elimination of a risk. More commonly, people are given near-freerolls, because they have to pay something for the option. Often what they have to pay is very low, which makes the deal a good one. The central point here is an asymmetry in what people know. They know the costs, while they have large epistemic gaps with respect to the potential gains. People often fall prey to ‘freeroll neglect.’ When this is so, they do not see pure or near-freerolls; they seek missing information before choosing among options, even though they have no need to do so.

Freerolls are mirrored by binds, in which people are given an option from which they can only lose, even though they do not know how much they might lose. To know that binds are undesirable, the chooser need not have full knowledge about the range of possible downside outcomes. Nor need the chooser know anything about the shape of the distribution of those outcomes.
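The freeroll/bind taxonomy can be made concrete with a small sketch (illustrative only; the function name and payoff sets are my own, not from the paper). The key point is that classification needs only the signs of the worst and best possible outcomes, not the full distribution of payoffs:

```python
# Illustrative sketch of the freeroll/bind taxonomy from Duke & Sunstein.
# Payoffs are hypothetical; a chooser need not know the shape of the
# distribution, only whether downside or upside is possible at all.

def classify(payoffs):
    """Classify an option by the signs of its possible payoffs."""
    worst, best = min(payoffs), max(payoffs)
    if worst >= 0 and best > 0:
        return "freeroll"      # can lose nothing, might gain
    if best <= 0 and worst < 0:
        return "bind"          # can gain nothing, might lose
    return "ordinary gamble"   # mixed upside and downside

print(classify([0, 5, 100]))    # freeroll: no downside
print(classify([-100, -5, 0]))  # bind: no upside
print(classify([-10, 0, 50]))   # ordinary gamble
```

This mirrors the authors' observation that a bind can be identified as undesirable without full knowledge of the range of downside outcomes: the sign test above never inspects the magnitudes.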

Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
nature.com
Originally posted July 7, 2020

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.

The info is here.

Wednesday, March 4, 2020

Stressed Out at the Office? Therapy Can Come to You

Rachel Feintzeig
The Wall Street Journal
Originally published January 31, 2020

Here is an excerpt:

In the past, discussion of mental-health issues at the office was uncommon. Workers were largely expected to leave their personal struggles at home. Crying was confined to the bathroom stall.

Today, that’s changing. One reason is a broadening of the popular understanding of “mental health” to encompass anxiety, stress and other widespread issues.

It’s also a reflection of a changing workplace. Younger workers are more comfortable talking about their struggles and expect their employers to take emotional distress seriously, says Jeffrey Pfeffer, a professor of organizational behavior at the Stanford Graduate School of Business.

Senior leaders are responding, rolling out mental-health services and sometimes speaking about their own experiences. Lloyds Banking Group Plc chief executive António Horta-Osório has said publicly in recent years that the pressure he felt around the bank’s financial situation in 2011 dominated his thoughts, leaving him unable to sleep and exhausted. He took eight weeks off from the company to recover, working with a psychiatrist. The psychiatrist later helped him devise a mental-health program for Lloyds employees.

Brynn Brichet, a lead product manager at Cerner Corp., a maker of electronic medical-records systems, said she sometimes returns from her counseling appointments with an on-site therapist red-faced from crying. (The therapist sits a few floors down.) If colleagues ask, she tells them that she just got out of an intense therapy session. Some are taken aback when she mentions her therapy, she said. But she thinks it’s important to be open.

“We all are terrified. We all are struggling,” she said. “If we don’t talk about it, it can run our lives.”

The info is here.

Friday, October 4, 2019

When Patients Request Unproven Treatments

Casey Humbyrd and Matthew Wynia
medscape.com
Originally posted March 25, 2019

Here is an excerpt:

Ethicists have made a variety of arguments about these injections. The primary arguments against them have focused on the perils of physicians becoming sellers of "snake oil," promising outlandish benefits and charging huge sums for treatments that might not work. The conflict of interest inherent in making money by providing an unproven therapy is a legitimate ethical concern. These treatments are very expensive and, as they are unproven, are rarely covered by insurance. As a result, some patients have turned to crowdfunding sites to pay for these questionable treatments.

But the profit motive may not be the most important ethical issue at stake. If it were removed, hypothetically, and physicians provided the injections at cost, would that make this practice more acceptable?

No. We believe that physicians who offer these injections are skipping the most important step in the ethical adoption of any new treatment modality: research that clarifies the benefits and risks. The costs of omitting that important step are much more than just monetary.

For the sake of argument, let's assume that stem cells are tremendously successful and that they heal arthritic joints, making them as good as new. By selling these injections to those who can pay before the treatment is backed by research, physicians are ensuring unavailability to patients who can't pay, because insurance won't cover unproven treatments.

The info is here.

Monday, March 18, 2019

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Forbes.com
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest we have come to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chats, scale advice for potential suicide victims, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts on social media.

But I argue these malicious applications are already possible without this AI: other public models can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because it A) sets a bad precedent for open research, B) keeps companies from improving their services, C) unnecessarily hypes these results, and D) may trigger unnecessary fears about AI in the general public.

The info is here.

Monday, February 4, 2019

What “informed consent” really means

Stacy Weiner
www.aamcnews.org
Originally published January 19, 2019

Here is an excerpt:

Conflicts around consent

The informed consent process is not without its thornier aspects. At times, malpractice suits shift the landscape. For example, in a 2017 Pennsylvania case with possible implications in other states, the court ruled that the physician performing a procedure — not a delegate — must personally ensure that the patient understands the risks involved.

And sometimes, informed consent grabs headlines, as happened recently with allegations that medical students are performing pelvic exams on anesthetized women without consent.

That claim, Orlowski notes, relied on studies from more than 10 years ago, before such changes as more detailed consent forms. Typically, she says, students practice pelvic exams with special mannequins and standardized patients who are specifically trained for this purpose. When students and residents do perform pelvic exams on surgical patients, Orlowski adds, specific consent must be obtained first. “Performing pelvic examinations under anesthesia without patients’ consent is unethical and unacceptable,” she says.

In fact, the American College of Obstetricians and Gynecologists states that “pelvic examinations on an anesthetized woman … performed solely for teaching purposes should be performed only with her specific informed consent obtained before her surgery.”

Marie Walters, a student at Wright State University Boonshoft School of Medicine, says she was perplexed by the allegations, so she checked with fellow students at her school and elsewhere. Her explanation: medical students may not know that patients agreed to such exams. “Although students witness some consent processes, we’re likely not around when patients give consent for the surgeries we observe,” says Walters, who is a member of the AAMC Board of Directors. “We may be there just for the day of the surgery,” she notes.

The info is here.

Sunday, November 11, 2018

Nine risk management lessons for practitioners.

Taube, D. O., Scroppo, J., & Zelechoski, A. D.
Practice Innovations, October 4, 2018

Abstract

Risk management is an essential skill for professionals and is important throughout the course of their careers. Effective risk management blends a utilitarian focus on the potential costs and benefits of particular courses of action, with a solid foundation in ethical principles. Awareness of particularly risk-laden circumstances and practical strategies can promote safer and more effective practice. This article reviews nine situations and their associated lessons, illustrated by case examples. These situations emerged from our experience as risk management consultants who have listened to and assisted many practitioners in addressing the challenges they face on a day-to-day basis. The lessons include a focus on obtaining consent, setting boundaries, flexibility, attention to clinician affect, differentiating the clinician’s own values and needs from those of the client, awareness of the limits of competence, maintaining adequate legal knowledge, keeping good records, and routine consultation. We highlight issues and approaches to consider in these types of cases that minimize risks of adverse outcomes and enhance good practice.

The info is here.

Here is a portion of the article:

Being aware of basic legal parameters can help clinicians to avoid making errors in this complex arena. Yet clinicians are not usually lawyers and tend to have only limited legal knowledge. This gives rise to a risk of assuming more mastery than one may have.

Indeed, research suggests that a range of professionals, including psychotherapists, overestimate their capabilities and competencies, even in areas in which they have received substantial training (Creed, Wolk, Feinberg, Evans, & Beck, 2016; Lipsett, Harris, & Downing, 2011; Mathieson, Barnfield, & Beaumont, 2009; Walfish, McAlister, O’Donnell, & Lambert, 2012).

Monday, November 5, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence

Nikita Malik
forbes.com
Originally posted October 4, 2018

Here is an excerpt:

The second concern is on regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments – based on factors such as age, criminal record, and employment – will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.

Risk assessments – which have existed for a long time – have been used in other areas, such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children are often overlooked, at grave risk to themselves. Human errors in the casework of the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his parents and prompted a serious inquest into the shortcomings of the County Department of Children and Family Services in Los Angeles. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.

The info is here.

Tuesday, September 4, 2018

Financial Ties That Bind: Studies Often Fall Short On Conflict-Of-Interest Disclosures

Rachel Bluth
Kaiser Health News
Originally published August 15, 2018

Papers in medical journals go through rigorous peer review and meticulous data analysis.

Yet many of these articles are missing a key piece of information: the financial ties of the authors.

Nearly two-thirds of the 100 physicians who rake in the most money from 10 device manufacturers failed to disclose a conflict of interest in their academic writing in 2016, according to a study published Wednesday in JAMA Surgery.

The omission can have real-life impact for patients when their doctors rely on such research to make medical decisions, potentially without knowing the authors’ potential conflicts of interest.

“The issue is anytime there’s a new technology, people get really excited about it,” said lead researcher Dr. Mehraneh Jafari. “Whoever is reading the data on it needs to have the most information.”

The article is here.

Thursday, June 14, 2018

The Benefits of Admitting When You Don’t Know

Tenelle Porter
Behavioral Scientist
Originally published April 30, 2018

Here is an excerpt:

We found that the more intellectually humble students were more motivated to learn and more likely to use effective metacognitive strategies, like quizzing themselves to check their own understanding. They also ended the year with higher grades in math. We also found that the teachers, who hadn’t seen students’ intellectual humility questionnaires, rated the more intellectually humble students as more engaged in learning.

Next, we moved into the lab. Could temporarily boosting intellectual humility make people more willing to seek help in an area of intellectual weakness? We induced intellectual humility in half of our participants by having them read a brief article that described the benefits of admitting what you do not know. The other half read an article about the benefits of being very certain of what you know. We then measured their intellectual humility.

Those who read the benefits-of-humility article self-reported higher intellectual humility than those in the other group. What’s more, in a follow-up exercise 85 percent of these same participants sought extra help for an area of intellectual weakness. By contrast, only 65 percent of the participants who read about the benefits of being certain sought the extra help that they needed. This experiment provided evidence that enhancing intellectual humility has the potential to affect students’ actual learning behavior.

Together, our findings illustrate that intellectual humility is associated with a host of outcomes that we think are important for learning in school, and they suggest that boosting intellectual humility may have benefits for learning.

The article is here.

Tuesday, April 17, 2018

Planning Complexity Registers as a Cost in Metacontrol

Kool, W., Gershman, S. J., & Cushman, F. A. (in press). Planning complexity registers as a cost in metacontrol. Journal of Cognitive Neuroscience.

Abstract

Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, while others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains wouldn’t be “worth it”. We test and verify this prediction in a sequential reinforcement learning task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.
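The cost-benefit logic the abstract describes can be sketched in a few lines (a toy illustration with my own numbers and function names, not the paper's task or model): a controller plans only when the accuracy gain from planning outweighs its computational cost.

```python
# Illustrative sketch of the accuracy/effort tradeoff in metacontrol.
# The probabilities, reward, and cost below are hypothetical.

def choose_controller(reward, p_habit, p_plan, planning_cost):
    """Pick habit vs. planning by expected reward net of effort cost."""
    ev_habit = p_habit * reward                 # cheap but less accurate
    ev_plan = p_plan * reward - planning_cost   # accurate but costly
    return "model-based" if ev_plan > ev_habit else "model-free"

# When planning is cheap relative to its accuracy gain, plan...
print(choose_controller(reward=10, p_habit=0.6, p_plan=0.9, planning_cost=1.0))
# ...but as planning complexity (its cost) grows, fall back on habit.
print(choose_controller(reward=10, p_habit=0.6, p_plan=0.9, planning_cost=5.0))
```

Holding reward and accuracies fixed while raising the planning cost flips the choice from model-based to model-free, which is the prediction the paper tests: harder planning registers as a cost, so people lean on habit in the more demanding task.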

The research is here.

Monday, October 23, 2017

Holding People Responsible for Ethical Violations: The Surprising Benefits of Accusing Others

Jessica A. Kennedy and Maurice E. Schweitzer
Wharton Behavioral Lab

Abstract

Individuals who accuse others of unethical behavior can derive significant benefits.  Compared to individuals who do not make accusations, accusers engender greater trust and are perceived to have higher ethical standards. In Study 1, accusations increased trust in the accuser and lowered trust in the target. In Study 2, we find that accusations elevate trust in the accuser by boosting perceptions of the accuser’s ethical standards. In Study 3, we find that accusations boosted both attitudinal and behavioral trust in the accuser, decreased trust in the target, and promoted relationship conflict within the group. In Study 4, we examine the moderating role of moral hypocrisy. Compared to individuals who did not make an accusation, individuals who made an accusation were trusted more if they had acted ethically but not if they had acted unethically. Taken together, we find that accusations have significant interpersonal consequences. In addition to harming accused targets, accusations can substantially benefit accusers.

Here is part of the Discussion:

It is possible, however, that even as accusations promote group conflict, accusations could benefit organizations by enforcing norms and promoting ethical behavior. To ensure ethical conduct, organizations must set an ethical tone (Mayer et al., 2013). To do so, organizations need to encourage detection and punishment of unethical behavior. Punishment of norm violators has been conceptualized as an altruistic behavior (Fehr & Gachter, 2000). Our findings challenge this conceptualization. Rather than reflecting altruism, accusers may derive substantial personal benefits from punishing norm violators. The trust benefits of making an accusation provide a reason for even the most self-interested actors to intervene when they perceive unethical activity. That is, even when self-interest is the norm (e.g., Pillutla & Chen, 1999), individuals have trust incentives to openly oppose unethical behavior.

The research is here.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Wednesday, March 1, 2017

Clinicians’ Expectations of the Benefits and Harms of Treatments, Screening, and Tests

Tammy C. Hoffmann & Chris Del Mar
JAMA Intern Med. 
Published online January 9, 2017.
doi:10.1001/jamainternmed.2016.8254

Question

Do clinicians have accurate expectations of the benefits and harms of treatments, tests, and screening?

Findings

In this systematic review of 48 studies (13 011 clinicians), most participants correctly estimated only 13% of the 69 harm expectation outcomes and 11% of the 28 benefit expectation outcomes. The majority of participants overestimated benefit for 32% of outcomes, underestimated benefit for 9%, underestimated harm for 34%, and overestimated harm for 5% of outcomes.

Meaning

Clinicians rarely had accurate expectations of benefits or harms, with inaccuracies in both directions, but more often underestimated harms and overestimated benefits.

The research is here.

Friday, July 1, 2016

Predictive genetic testing for neurodegenerative conditions: how should conflicting interests within families be managed?

Zornitza Stark, Jane Wallace, Lynn Gillam, Matthew Burgess, Martin B Delatycki
J Med Ethics doi:10.1136/medethics-2016-103400

Abstract

Predictive genetic testing for a neurodegenerative condition in one individual in a family may have implications for other family members, in that it can reveal their genetic status. Herein a complex clinical case is explored where the testing wish of one family member was in direct conflict to that of another. The son of a person at 50% risk of an autosomal dominant neurodegenerative condition requested testing to reveal his genetic status. The main reason for the request was if he had the familial mutation, he and his partner planned to utilise preimplantation genetic diagnosis to prevent his offspring having the condition. His at-risk parent was clear that if they found out they had the mutation, they would commit suicide. We assess the potential benefits and harms from acceding to or denying such a request and present an approach to balancing competing rights of individuals within families at risk of late-onset genetic conditions, where family members have irreconcilable differences with respect to predictive testing. We argue that while it may not be possible to completely avoid harm in these situations, it is important to consider the magnitude of risks, and make every effort to limit the potential for adverse outcomes.

The article is here.

Saturday, June 18, 2016

The New Era of Informed Consent

Getting to a Reasonable-Patient Standard Through Shared Decision Making

Erica S. Spatz, Harlan M. Krumholz, Benjamin W. Moulton
JAMA. 2016; 315(19):2063-2064. doi:10.1001/jama.2016.3070.

Here is an excerpt:

Informed consent discussions are often devoid of details about the material risks, benefits, and alternatives that are critical to meaningful patient decision making. Informed consent documents for procedures, surgery, and medical treatments with material risks (eg, radiation therapy) tend to be generic, containing information intended to protect the physician or hospital from litigation. These documents are often written at a high reading level and sometimes presented in nonlegible print, putting a premium on health literacy and proactive information-seeking behavior. Moreover, informed consent documents are often signed minutes before the start of a procedure, a time when patients are most vulnerable and least likely to ask questions—hardly consistent with what a reasonable patient would deem acceptable. In the United States, with the exception of 1 state, Washington, that explicitly recognizes shared decision making as an alternative to the traditional informed consent process, the law has yet to promote a process that truly supports a reasonable-patient–centered standard through shared decision making.

The article is here.

Monday, August 31, 2015

The What and Why of Self-Deception

Zoë Chance and Michael I. Norton
Current Opinion in Psychology
Available online 3 August 2015

Scholars from many disciplines have investigated self-deception, but both defining self-deception and establishing its possible benefits have been a matter of heated debate – a debate impoverished by a relative lack of empirical research. Drawing on recent research, we first classify three distinct definitions of self-deception, ranging from a view that self-deception is synonymous with positive illusions to a more stringent view that self-deception requires the presence of simultaneous conflicting beliefs. We then review recent research on the possible benefits of self-deception, identifying three adaptive functions: deceiving others, social status, and psychological benefits. We suggest potential directions for future research.

The nature and definition of self-deception remains open to debate. Philosophers have questioned whether – and how – self-deception is possible; evolutionary theorists have conjectured that self-deception may – or must – be adaptive. Until recently, there was little evidence for either the existence or processes of self-deception; indeed, Robert Trivers wrote that research on self-deception is still in its infancy. In recent years, however, empirical research on self-deception has been gaining traction in social psychology and economics, providing much-needed evidence and shedding light on the psychology of self-deception. We first classify competing definitions of self-deception, then review recent research supporting three distinct advantages of self-deception: improved success in deceiving others, social status, and psychological benefits.

The entire article is here.

Note to Psychologists: Psychologists also engage in self-deception in psychotherapy, typically judging sessions as having been more beneficial than their patients do. Such self-deception may lead to missteps and errors in judgment, both clinical and ethical.

Friday, February 13, 2015

Diagnosis or Delusion?

Patients who say they have Morgellons point to skin lesions as proof of their disease. But doctors believe the lesions are self-inflicted—that the condition is psychological, not dermatological.

By Katherine Foley
The Atlantic
Originally published January 18, 2015

Here is an excerpt:

When patients with these symptoms seek dermatological treatment, they’re usually told that they have delusions of parasitosis, a condition in which people are falsely convinced that they’re infested with parasites—told, in other words, that the crawling, itching sensations under their skin are only in their heads, and the fibers are remnants from clothing. Still, they pick away, trying to get the feeling out. According to Casey, most doctors refuse to even examine the alleged skin fibers and only offer anti-psychotic medication as treatment. It took her three years to find a dermatologist willing to treat her in any other way, and she and her husband had to drive all the way from California to Texas to see him.

The article outlining the conundrum is here.