Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Incentives.

Sunday, October 31, 2021

Silenced by Fear: The Nature, Sources, and Consequences of Fear at Work

Kish-Gephart, J. J., Detert, J. R., Treviño, L. K., & Edmondson, A. C. (2009).
Research in Organizational Behavior, 29, 163-193. 
https://doi.org/10.1016/j.riob.2009.07.002

Abstract

In every organization, individual members have the potential to speak up about important issues, but a growing body of research suggests that they often remain silent instead, out of fear of negative personal and professional consequences. In this chapter, we draw on research from disciplines ranging from evolutionary psychology to neuroscience, sociology, and anthropology to unpack fear as a discrete emotion and to elucidate its effects on workplace silence. In doing so, we move beyond prior descriptions and categorizations of what employees fear to present a deeper understanding of the nature of fear experiences, where such fears originate, and the different types of employee silence they motivate. Our aim is to introduce new directions for future research on silence as well as to encourage further attention to the powerful and pervasive role of fear across numerous areas of theory and research on organizational behavior.

Discussion 

Fear, a powerful and pervasive emotion, influences human perception, cognition, and behavior in ways and to an extent that we find underappreciated in much of the organizational literature. This chapter draws from a broad range of literatures, including evolutionary psychology, neuroscience, sociology, and anthropology, to provide a fuller understanding of how fear influences silence in organizations. Our intention is to provide a foundation to inform future theorizing and research on fear’s effects in the workplace, and to elucidate why people at work fear challenging authority and thus how fear inhibits speaking up with even routine problems or suggestions for improvement.

Our review of the literature on fear generated insights with the potential to extend theory on silence in several ways. First, we proposed that silence should be differentiated based on the intensity of fear experienced and the time available for choosing a response. Both non-deliberative, low-road silence and conscious but schema-driven silence differ from descriptions in extant literature of defensive silence as intentional, reasoned, and involving an expectancy-like mental calculus. Thus, our proposed typology (in Fig. 2) suggests the need for context-specific future theory and research. For example, the description of silence as the result of extended, conscious deliberation may fit choices about whistleblowing and major issue selling well, while not explaining how individuals decide to speak up or remain silent in more routine high-fear-intensity or high-immediacy situations. We also theorized that, as a natural outcome of humans’ innate tendency to avoid the unpleasant characteristics of fear, employees may develop a type of habituated silence behavior that is largely unrecognized by them.

We expanded understanding of the antecedents of workplace silence by explaining in detail how prior (individual and societal) experiences affect the perceptions, appraisals, and outcomes of fear-based silence. Noting that the fear of challenging authority has roots in the biological mechanisms developed to aid survival in early humans, we argued that this prepared fear is continually developed and reinforced through a lifetime of experiences across most social institutions (e.g., family, school, religion) that implicitly and explicitly convey messages about authority relationships. Over time, these direct and indirect learning experiences, coupled with the characteristics of an evolutionary-based fear module, become the memories and beliefs against which current stimuli in moments of possible voice are compared.

Finally, we proposed two factors to help explain why and how certain individuals speak up to authority despite experiencing some fear of doing so. Though the deck is clearly stacked in favor of fear and silence, anger as a biologically-based emotion and voice efficacy as a learned belief in one’s ability to successfully speak up in difficult voice situations may help employees prevail over fear – in part, through their influence on the control appraisals that are central to emotional experience.

Friday, July 26, 2019

Dark Pathways to Achievement in Science: Researchers’ Achievement Goals Predict Engagement in Questionable Research Practices

Janke, S., Daumiller, M., & Rudert, S. C. (2019).
Social Psychological and Personality Science, 10(6), 783–791.

Abstract

Questionable research practices (QRPs) are a strongly debated topic in the scientific community. Hypotheses about the relationship between individual differences and QRPs are plentiful but have rarely been empirically tested. Here, we investigate whether researchers’ personal motivation (expressed by achievement goals) is associated with self-reported engagement in QRPs within a sample of 217 psychology researchers. Appearance approach goals (striving for skill demonstration) positively predicted engagement in QRPs, while learning approach goals (striving for skill development) were a negative predictor. These effects remained stable when also considering Machiavellianism, narcissism, and psychopathy in a latent multiple regression model. Additional moderation analyses revealed that the more researchers favored publishing over scientific rigor, the stronger the association between appearance approach goals and engagement in QRPs. The findings deliver first insights into the nature of the relationship between personal motivation and scientific malpractice.

The research can be found here.

Wednesday, February 27, 2019

Business Ethics And Integrity: It Starts With The Tone At The Top

Betsy Atkins
Forbes.com
Originally posted in 2019

Here is the conclusion:

Transparency leads to empowerment:

Share your successes and your failures and look to everyone to help build a better company. By including everyone, you create the elusive “we” that is the essence of company culture. Transparency leads to a company culture that delivers results because the CEO creates a bigger purpose for the organization than just making money or reaching quarterly numbers. Company culture guru Joel Kurtzman, author of Common Purpose, said it best: “CEOs need to know how to read their organizations’ emotional tone and need to engage behaviors that build trust including leading-by-listening, building bridges, showing compassion and caring, demonstrating their own commitment to the organization, and giving employees the authority to do their job while inspiring them to do their best work.”

There is no substitute for CEO leadership in creating a company culture of integrity.  A board that supports the CEO in building a company culture of integrity, transparency, and collaboration will be supporting a successful company.

The info is here.

Thursday, July 26, 2018

Virtuous technology

Mustafa Suleyman
medium.com
Originally published June 26, 2018

Here is an excerpt:

There are at least three important asymmetries between the world of tech and the world itself. First, the asymmetry between people who develop technologies and the communities who use them. Salaries in Silicon Valley are twice the median wage for the rest of the US and the employee base is unrepresentative when it comes to gender, race, class and more. As we have seen in other fields, this risks a disconnect between the inner workings of organisations and the societies they seek to serve.

This is an urgent problem. Women and minority groups remain badly underrepresented, and leaders need to be proactive in breaking the mould. The recent spotlight on these issues has meant that more people are aware of the need for workplace cultures to change, but these underlying inequalities also make their way into our companies in more insidious ways. Technology is not value neutral — it reflects the biases of its creators — and must be built and shaped by diverse communities if we are to minimise the risk of unintended harms.

Second, there is an asymmetry of information regarding how technology actually works, and the impact that digital systems have on everyday life. Ethical outcomes in tech depend on far more than algorithms and data: they depend on the quality of societal debate and genuine accountability.

The information is here.

Tuesday, May 29, 2018

Choosing partners or rivals

The Harvard Gazette
Originally published April 27, 2018

Here is the conclusion:

“The interesting observation is that natural selection always chooses either partners or rivals,” Nowak said. “If it chooses partners, the system naturally moves to cooperation. If it chooses rivals, it goes to defection, and is doomed. An approach like ‘America First’ embodies a rival strategy which guarantees the demise of cooperation.”

In addition to shedding light on how cooperation might evolve in a society, Nowak believes the study offers an instructive example of how to foster cooperation among individuals.

“With the partner strategy, I have to accept that sometimes I’m in a relationship where the other person gets more than me,” he said. “But I can nevertheless provide an incentive structure where the best thing the other person can do is to cooperate with me.

“So the best I can do in this world is to play a strategy such that the other person gets the maximum payoff if they always cooperate,” he continued. “That strategy does not prevent a situation where the other person, to some extent, exploits me. But if they exploit me, they get a lower payoff than if they fully cooperated.”
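
A small simulation can make the logic of a partner strategy concrete. The sketch below is a hypothetical illustration, not the study’s actual model: it plays a generous tit-for-tat strategy in a repeated donation game (the benefit b=3, cost c=1, and forgiveness rate of 0.3 are assumed values chosen for the example) against a co-player who always cooperates and one who always defects. The defector can exploit the partner, but ends up with a lower per-round payoff than it would earn by cooperating, which is exactly the incentive structure Nowak describes.

```python
import random

def gtft_move(opp_cooperated: bool, q: float) -> bool:
    """Generous tit-for-tat: cooperate after cooperation; after a
    defection, forgive (cooperate anyway) with probability q."""
    return True if opp_cooperated else random.random() < q

def play(opp_strategy, rounds=100_000, b=3.0, c=1.0, q=0.3):
    """Repeated donation game: cooperating costs the giver c
    and delivers benefit b to the co-player."""
    my_total = opp_total = 0.0
    opp_cooperated = True  # assume a cooperative first impression
    for _ in range(rounds):
        me = gtft_move(opp_cooperated, q)
        opp = opp_strategy()
        if me:
            my_total -= c
            opp_total += b
        if opp:
            opp_total -= c
            my_total += b
        opp_cooperated = opp
    return my_total / rounds, opp_total / rounds

random.seed(0)
for name, strategy in [("always cooperates", lambda: True),
                       ("always defects", lambda: False)]:
    mine, theirs = play(strategy)
    print(f"Co-player {name}: my payoff/round = {mine:.2f}, theirs = {theirs:.2f}")
```

With these assumed numbers, the full cooperator earns 2.0 per round while the defector earns only about 0.9; in this sketch the forgiveness rate must stay below 1 - c/b (here about 0.67) so that exploitation never beats cooperation.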

The information is here.

Tuesday, November 28, 2017

Don’t Nudge Me: The Limits of Behavioral Economics in Medicine

Aaron E. Carroll
The New York Times - The Upshot
Originally posted November 6, 2017

Here is an excerpt:

But those excited about the potential of behavioral economics should keep in mind the results of a recent study. It pulled out all the stops in trying to get patients who had a heart attack to be more compliant in taking their medication. (Patients’ adherence at such a time is surprisingly low, even though it makes a big difference in outcomes, so this is a major problem.)

Researchers randomly assigned more than 1,500 people to one of two groups. All had recently had heart attacks. One group received the usual care. The other received special electronic pill bottles that monitored patients’ use of medication. Those patients who took their drugs were entered into a lottery in which they had a 20 percent chance to receive $5 and a 1 percent chance to win $50 every day for a year.

That’s not all. The lottery group members could also sign up to have a friend or family member automatically be notified if they didn’t take their pills so that they could receive social support. They were given access to special social work resources. There was even a staff engagement adviser whose specific duty was providing close monitoring and feedback, and who would remind patients about the importance of adherence.

This was a kitchen-sink approach. It involved direct financial incentives, social support nudges, health care system resources and significant clinical management. It failed.
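
For a sense of the stakes involved, a quick back-of-the-envelope calculation of the lottery’s expected value is instructive. The sketch below uses only the incentive parameters quoted above (a 20 percent chance of $5 and a 1 percent chance of $50 per day); treating the two prizes as independent daily chances is an assumption, though it does not change the expected-value arithmetic.

```python
# Rough expected value of the daily medication-adherence lottery described
# above: a 20% chance of $5 and a 1% chance of $50, every day for a year.
p_small, prize_small = 0.20, 5.00
p_large, prize_large = 0.01, 50.00

daily_ev = p_small * prize_small + p_large * prize_large  # = $1.50 per day
annual_ev = daily_ev * 365                                # = $547.50 per year

print(f"Expected daily payout:  ${daily_ev:.2f}")
print(f"Expected annual payout: ${annual_ev:.2f}")
```

In other words, a fully adherent patient stood to gain roughly $550 over the year in expectation, and even that, combined with social support and clinical monitoring, was not enough to move outcomes.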

The article is here.

Thursday, June 8, 2017

Shining Light on Conflicts of Interest

Craig Klugman
The American Journal of Bioethics 
Volume 17, 2017 - Issue 6

Chimonas, DeVito and Rothman (2017) offer a descriptive target article that examines physicians' knowledge of and reaction to the Sunshine Act's Open Payments Database. This program is a federal computer repository of all payments and goods worth more than $10 made by pharmaceutical companies and device manufacturers to physicians. Created under the 2010 Affordable Care Act, the database aims to make the relationships between physicians and the medical drug/device industry more transparent. Such transparency is often touted as a solution to financial conflicts of interest (COI). A COI occurs when a person owes fealty to more than one party. For example, physicians have fiduciary duties toward patients. At the same time, when physicians receive gifts or benefits from a pharmaceutical company, they are more likely to prescribe that company's products (Spurling et al. 2010). The gift creates a sense of a moral obligation toward the company. These two interests can be (but may not be) in conflict. Such arrangements can undermine a patient's trust in his/her physician and, more broadly, the public's trust in medicine.

(cut)

The idea is that if people are told about the conflict, then they can judge for themselves whether the provider is compromised and whether they wish to receive care from this person. The database exists with this intent—that transparency alone is enough. What is a patient to do with this information? Should patients avoid physicians who have conflicts? The decision is left in the patient's hands. Back in 2014, the Pharmaceutical Research and Manufacturers of America lobbying group expressed concern that the public would not understand the context of any payments or gifts to physicians (Castellani 2014).

The article is here.

Saturday, April 1, 2017

Does everyone have a price? On the role of payoff magnitude for ethical decision making

Benjamin E. Hilbig and Isabel Thielmann
Cognition
Volume 163, June 2017, Pages 15–25

Abstract

Most approaches to dishonest behavior emphasize the importance of corresponding payoffs, typically implying that dishonesty might increase with increasing incentives. However, prior evidence does not appear to confirm this intuition. That said, extant findings are based on relatively small payoffs, the potential effects of which are analyzed solely across participants. In two experiments, we used different multi-trial die-rolling paradigms designed to investigate dishonesty at the individual level (i.e., within participants) and as a function of the payoffs at stake – implementing substantial incentives exceeding 100€. Results show that incentive sizes indeed matter for ethical decision making, though primarily for two subsets of “corruptible individuals” (who cheat more the more they are offered) and “small sinners” (who tend to cheat less as the potential payoffs increase). Others (“brazen liars”) are willing to cheat for practically any non-zero incentive, whereas still others (“honest individuals”) do not cheat at all, even for large payoffs. By implication, the influence of payoff magnitude on ethical decision making is often obscured when analyzed across participants and with insufficiently tempting payoffs.
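
For readers unfamiliar with die-rolling paradigms, the sketch below illustrates their basic statistical logic: participants privately roll a die and report the outcome, higher reports earn higher payoffs, and dishonesty is inferred from how far the reported distribution departs from the uniform distribution that honest reporting would produce. This is a minimal, hypothetical illustration of the paradigm family, not the authors' specific multi-trial design; the cheating rates are invented for the example.

```python
import random

def reported_roll(cheat_prob: float) -> int:
    """Privately roll a die; with probability cheat_prob, report a 6 instead."""
    true_roll = random.randint(1, 6)
    return 6 if random.random() < cheat_prob else true_roll

def simulate(n_reports: int, cheat_prob: float):
    """Return the mean reported roll and the share of reported sixes."""
    reports = [reported_roll(cheat_prob) for _ in range(n_reports)]
    return sum(reports) / n_reports, reports.count(6) / n_reports

random.seed(1)
# Honest reporting yields a mean near 3.5 and about 1/6 (~16.7%) sixes;
# any excess is the statistical footprint of dishonesty.
for p in (0.0, 0.2, 0.5):
    mean_report, share_sixes = simulate(10_000, p)
    print(f"cheat_prob={p:.1f}: mean report={mean_report:.2f}, sixes={share_sixes:.1%}")
```

Because individual rolls are private, no single report can be called a lie; dishonesty is visible only in the aggregate, which is why the authors' within-participant, multi-trial variants are needed to study it at the individual level.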

The article is here.

Tuesday, November 8, 2016

Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition

Marc A. Edwards and Siddhartha Roy
Environmental Engineering Science. September 2016

Abstract

We argue that, over the last 50 years, incentives for academic scientists have become increasingly perverse in terms of competition for research funding, development of quantitative metrics to measure performance, and a changing business model for higher education itself. Furthermore, decreased discretionary funding at the federal and state level is creating a hypercompetitive environment between government agencies (e.g., EPA, NIH, CDC), for scientists in these agencies, and for academics seeking funding from all sources. The combination of perverse incentives and decreased funding increases pressures that can lead to unethical behavior. If a critical mass of scientists becomes untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences to humanity. Academia and federal agencies should better support science as a public good, and incentivize altruistic and ethical outcomes, while de-emphasizing output.

The article is here.

Thursday, January 7, 2016

Seeking better health care outcomes: the ethics of using the "nudge".

Blumenthal-Barby JS, Burroughs H.
Am J Bioeth. 2012;12(2):1-10.
doi: 10.1080/15265161.2011.634481.

Abstract

Policymakers, employers, insurance companies, researchers, and health care providers have developed an increasing interest in using principles from behavioral economics and psychology to persuade people to change their health-related behaviors, lifestyles, and habits. In this article, we examine how principles from behavioral economics and psychology are being used to nudge people (the public, patients, or health care providers) toward particular decisions or behaviors related to health or health care, and we identify the ethically relevant dimensions that should be considered for the utilization of each principle.

The article is here.

Wednesday, November 11, 2015

Putting a price on empathy: against incentivising moral enhancement

By Sarah Carter
J Med Ethics 
doi:10.1136/medethics-2015-102804

Abstract

Concerns that people would be disinclined to voluntarily undergo moral enhancement have led to suggestions that an incentivised programme should be introduced to encourage participation. This paper argues that, while such measures do not necessarily result in coercion or undue inducement (issues with which one may typically associate the use of incentives in general), the use of incentives for this purpose may present a taboo trade-off. This is due to empirical research suggesting that those characteristics likely to be affected by moral enhancement are often perceived as fundamental to the self; therefore, any attempt to put a price on such traits would likely be deemed morally unacceptable by those who hold this view. A better approach to address the possible lack of participation may be to instead invest in alternative marketing strategies and remove incentives altogether.