Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, March 3, 2024

Is Dan Ariely Telling the Truth?

Tom Bartlett
The Chronicle of Higher Education
Originally posted 18 Feb 24

Here is an excerpt:

In August 2021, the blog Data Colada published a post titled “Evidence of Fraud in an Influential Field Experiment About Dishonesty.” Data Colada is run by three researchers — Uri Simonsohn, Leif Nelson, and Joe Simmons — and it serves as a freelance watchdog for the field of behavioral science, which has historically done a poor job of policing itself. The influential field experiment in question was described in a 2012 paper, published in the Proceedings of the National Academy of Sciences, by Ariely and four co-authors. In the study, customers of an insurance company were asked to report how many miles they had driven over a period of time, an answer that might affect their premiums. One set of customers signed an honesty pledge at the top of the form, and another signed at the bottom. The study found that those who signed at the top reported higher mileage totals, suggesting that they were more honest. The authors wrote that a “simple change of the signature location could lead to significant improvements in compliance.” The study was classic Ariely: a slight tweak to a system that yields real-world results.

But did it actually work? In 2020, an attempted replication of the effect found that it did not. In fact, multiple attempts to replicate the 2012 finding all failed (though Ariely points to evidence in a recent, unpublished paper, on which he is a co-author, indicating that the effect might be real). The authors of the attempted replication posted the original data from the 2012 study, which was then scrutinized by a group of anonymous researchers who found that the data, or some of it anyway, had clearly been faked. They passed the data along to the Data Colada team. There were multiple red flags. For instance, the number of miles customers said they’d driven was unrealistically uniform. About the same number of people drove 40,000 miles as drove 500 miles. No actual sampling would look like that — but randomly generated data would. Two different fonts were used in the file, apparently because whoever fudged the numbers wasn’t being careful.
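The uniformity red flag is easy to see in a toy simulation. The sketch below is purely illustrative, not the study's data or Data Colada's actual analysis; every number and function name in it is invented. It contrasts plausible mileage reports, which cluster around a typical annual figure, with naively fabricated ones drawn uniformly at random, which spread almost evenly across the whole range.

```python
# Illustrative only: invented numbers, not the insurance study's data.
import random

random.seed(0)

# Plausible real-world mileage: roughly bell-shaped around ~12,000 miles/year.
real = [max(0, int(random.gauss(12_000, 5_000))) for _ in range(10_000)]

# Naively fabricated mileage: uniform draws between 0 and 50,000 miles.
fake = [random.randint(0, 50_000) for _ in range(10_000)]

def bucket_counts(miles, width=10_000, top=50_000):
    """Count reports per mileage bucket (0-10k, 10-20k, ...)."""
    buckets = [0] * (top // width)
    for m in miles:
        buckets[min(m // width, len(buckets) - 1)] += 1
    return buckets

print("realistic:", bucket_counts(real))   # counts peak in the 10-20k bucket
print("fabricated:", bucket_counts(fake))  # counts are roughly flat: the red flag
```

A flat histogram like the second one is what raised suspicion: in genuine sampling, roughly as many people reporting 40,000 miles as 500 miles almost never happens.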

In short, there is no doubt that the data were faked. The only question is, who did it?


This article discusses an investigation into the research conduct of Dr. Dan Ariely, a well-known behavioral economist at Duke University. The investigation, prompted by concerns about potential data fabrication, concluded that while it found no evidence that Ariely himself fabricated the data, he did commit research misconduct by failing to adequately vet findings and maintain proper records.

The article highlights several specific issues identified by the investigation, including inconsistencies in data and a lack of supporting documentation for key findings. It also mentions that Ariely made inaccurate statements about his personal history, such as misrepresenting his age at the time of a childhood accident.

While Ariely maintains that he did not intentionally fabricate data and attributes the errors to negligence and a lack of awareness, the investigation's findings have damaged his reputation and raised questions about the integrity of his research. The article concludes by leaving the reader to ponder whether Ariely's transgressions can be forgiven or if they represent a deeper pattern of dishonesty.

It's important to note that the article presents one perspective on a complex issue and doesn't offer definitive answers. Further research and analysis are necessary to form a complete understanding of the situation.

Friday, December 15, 2023

Clinical documentation of patient identities in the electronic health record: Ethical principles to consider

Decker, S. E., et al. (2023). 
Psychological Services.
Advance online publication.

Abstract

The American Psychological Association’s multicultural guidelines encourage psychologists to use language sensitive to the lived experiences of the individuals they serve. In organized care settings, psychologists have important decisions to make about the language they use in the electronic health record (EHR), which may be accessible to both the patient and other health care providers. Language about patient identities (including but not limited to race, ethnicity, gender, and sexual orientation) is especially important, but little guidance exists for psychologists on how and when to document these identities in the EHR. Moreover, organizational mandates, patient preferences, fluid identities, and shifting language may suggest different documentation approaches, posing ethical dilemmas for psychologists to navigate. In this article, we review the purposes of documentation in organized care settings, review how each of the five American Psychological Association Code of Ethics’ General Principles relates to identity language in EHR documentation, and propose a set of questions for psychologists to ask themselves and their patients when making choices about documenting identity variables in the EHR.

Impact Statement

Psychologists in organized care settings may face ethical dilemmas about what language to use when documenting patient identities (race, ethnicity, gender, sexual orientation, and so on) in the electronic health record. This article provides a framework for considering how to navigate these decisions based on the American Psychological Association Code of Ethics’ five General Principles. To guide psychologists in decision making, questions to ask self and patient are included, as well as suggestions for further study.

Here is my summary:

The authors emphasize the lack of clear guidelines for psychologists on how and when to document these identity variables in EHRs. They acknowledge the complexities arising from organizational mandates, patient preferences, fluid identities, and evolving language, which can lead to ethical dilemmas for psychologists.

To address these challenges, the article proposes a framework based on the five General Principles of the American Psychological Association (APA) Code of Ethics:
  1. Beneficence and Nonmaleficence: Psychologists must prioritize patient welfare and avoid harm. This includes weighing the potential benefits and risks to the patient of documenting identity variables.
  2. Fidelity and Responsibility: Psychologists must establish relationships of trust and act in patients' best interests. This includes respecting patients' privacy and self-determination when documenting identity variables.
  3. Integrity: Psychologists must maintain ethical standards and avoid misrepresenting or misusing patient identity information. This includes being transparent about the purposes of documentation and seeking patient consent when appropriate.
  4. Justice: Psychologists should promote fairness and work to eliminate discrimination. This includes being mindful of how identity documentation can affect patients' access to care and their overall well-being.
  5. Respect for People's Rights and Dignity: Psychologists must respect the inherent dignity and worth of all individuals, regardless of their identity. This includes avoiding discriminatory or stigmatizing language in the EHR and staying abreast of evolving identity language and cultural norms.
To aid psychologists in making informed decisions about identity documentation, the authors propose a set of questions to consider:
  1. What is the purpose of documenting this identity variable?
  2. Is this information necessary for providing appropriate care or fulfilling legal/regulatory requirements?
  3. How will this information be used?
  4. What are the potential risks and benefits of documenting this information?
  5. What are the patient's preferences regarding the documentation of their identity?
By carefully considering these questions, psychologists can make ethically sound decisions that protect patient privacy and promote their well-being.

Sunday, August 20, 2023

When Scholars Sue Their Accusers. Francesca Gino is the Latest. Such Litigation Rarely Succeeds.

Adam Marcus and Ivan Oransky
The Chronicle of Higher Education
Originally posted 18 AUG 23

Francesca Gino has made headlines twice since June: once when serious allegations of misconduct involving her work became public, and again when she filed a $25-million lawsuit against her accusers, including Harvard University, where she is a professor at the business school.

The suit itself met with a barrage of criticism from those who worried that, as one scientist put it, it would have a "chilling effect on fraud detection." A smaller number of people supported the move, saying that Harvard and her accusers had abandoned due process and that they believed in Gino's integrity.

How the case will play out, of course, remains to be seen. But Gino is hardly the first researcher to sue her critics and her employer when faced with misconduct findings. As the founders of Retraction Watch, a website devoted to covering problems in the scientific literature, we've reported many of these kinds of cases since we launched our blog in 2010. Plaintiffs tend to claim defamation, but sometimes sue over wrongful termination or employment discrimination, and these kinds of cases typically end up in federal courts. A look at how some other suits fared might yield recommendations for how to limit the pain they can cause.

The first thing to know about defamation and employment suits is that most plaintiffs, but not all, lose. Mario Saad, a diabetes researcher at Brazil's Unicamp, found that out when he sued the American Diabetes Association in the very same federal district court in Massachusetts where Gino filed her case. Saad was trying to prevent Diabetes, the flagship research journal of the American Diabetes Association, from publishing expressions of concern about four of his papers following allegations of image manipulation. He lost that effort in 2015, and has now had 18 papers retracted.

(cut)

Such cases can be extremely expensive — not only for the defense, whether the costs are borne by institutions or insurance companies, but also for the plaintiffs. Ask Carlo Croce and Mark Jacobson.

Croce, a cancer researcher at Ohio State University, has at various points sued The New York Times, a Purdue University biologist named David Sanders, and Ohio State. He has lost all of those cases, including on appeal. The suits against the Times and Sanders claimed that a front-page story in 2017 that quoted Sanders had defamed Croce. His suit against Ohio State alleged that he had been improperly removed as department chair.

Croce racked up some $2 million in legal bills — and was sued for nonpayment. A judge has now ordered Croce’s collection of old masters paintings to be seized and sold for the benefit of his lawyers, and has also garnished Croce’s bank accounts. Another judgment means that his lawyers may now foreclose on his house to recoup their costs. Ohio State has been garnishing his wages since March by about $15,600 each month, or about a quarter of his paycheck. He continues to earn more than $800,000 per year from the university, even after a professorship and the chair were taken away from him.

When two researchers published a critique of the work of Mark Jacobson, an energy researcher at Stanford University, in the Proceedings of the National Academy of Sciences, Jacobson sued them along with the journal’s publisher for $10 million. He dropped the case just months after filing it.

But thanks to a so-called anti-SLAPP statute, “designed to provide for early dismissal of meritless lawsuits filed against people for the exercise of First Amendment rights,” a judge has ordered Jacobson to pay $500,000 in legal fees to the defendants. Jacobson wants Stanford to pay those costs, and California’s labor commissioner said the university had to pay at least some of them because protecting his reputation was part of Jacobson’s job. The fate of those fees, and who will pay them, is up in the air, with Jacobson once again appealing the judgment against him.

Saturday, July 8, 2023

Microsoft Scraps Entire Ethical AI Team Amid AI Boom

Lauren Leffer
gizmodo.com
Updated on March 14, 2023
Still relevant

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can. And starting this month, the company will be continuing on its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped its whole Ethics and Society team within the company’s AI sector, as part of ongoing layoffs set to impact 10,000 total employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making. But the ethics and society taskforce, which bridged the gap between policy and products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:
Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so has our investment, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people within our product teams who are dedicated to ensuring we adhere to our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

Microsoft reportedly shared a slightly different version of the same statement with Platformer earlier:

Microsoft is committed to developing AI products and experiences safely and responsibly...Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice...We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Note that, in this older version, Microsoft does inadvertently confirm that the ethics and society team is no more. The older statement also specified that staffing increases were within the Office of Responsible AI, rather than among people generally "dedicated to ensuring we adhere to our AI principles."

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role translating big ideas from the responsibility office into actionable changes at the product development level.

The info is here.

Sunday, June 25, 2023

Harvard Business School Professor Francesca Gino Accused of Committing Data Fraud

Rahem D. Hamid
Crimson Staff Writer
Originally published 24 June 23

Here is an excerpt:

But in a post on June 17, Data Colada wrote that they found evidence of additional data fabrication in that study in a separate experiment that Gino was responsible for.

Harvard has also been internally investigating “a series of papers” for more than a year, according to the Chronicle of Higher Education. Data Colada wrote last week that the University’s internal report may be around 1,200 pages.

The professors added that Harvard has requested that three other papers co-authored by Gino — which Data Colada flagged — also be retracted and that the 2012 paper’s retraction be amended to include Gino’s fabrications.

Last week, Bazerman told the Chronicle of Higher Education that he was informed by Harvard that the experiments he co-authored contained additional fraudulent data.

Bazerman called the evidence presented to him by the University “compelling,” but he denied to the Chronicle that he was at all involved with the data manipulation.

According to Data Colada, Gino was “the only author involved in the data collection and analysis” of the experiment in question.

“To the best of our knowledge, none of Gino’s co-authors carried out or assisted with the data collection for the studies in question,” the professors wrote.

In their second post on Tuesday, the investigators wrote that a 2015 study co-authored by Gino also contains manipulations to prove the paper’s hypothesis.

Observations in the paper, the three wrote, “were altered to produce the desired effect.”

“And if these observations were altered, then it is reasonable to suspect that other observations were altered as well,” they added.


Science is part of a healthy society:
  • Scientific research relies on the integrity of the researchers. When researchers fabricate or falsify data, they undermine the trust that is necessary for scientific progress.
  • Data fraud can have serious consequences. It can lead to the publication of false or misleading findings, which can have a negative impact on public policy, business decisions, and other areas.

Friday, January 8, 2021

Bias in science: natural and social

Joshua May
Synthese 

Abstract 

Moral, social, political, and other “nonepistemic” values can lead to bias in science, from prioritizing certain topics over others to the rationalization of questionable research practices. Such values might seem particularly common or powerful in the social sciences, given their subject matter. However, I argue first that the well documented phenomenon of motivated reasoning provides a useful framework for understanding when values guide scientific inquiry (in pernicious or productive ways). Second, this analysis reveals a parity thesis: values influence the social and natural sciences about equally, particularly because both are so prominently affected by desires for social credit and status, including recognition and career advancement. Ultimately, bias in natural and social science is both natural and social—that is, a part of human nature and considerably motivated by a concern for social status (and its maintenance). Whether the pervasive influence of values is inimical to the sciences is a separate question.

Conclusion 

We have seen how many of the putative biases that affect science can be explained and illuminated in terms of motivated reasoning, which yields a general understanding of how a researcher’s goals and values can influence scientific practice (whether positively or negatively). This general account helps to show that it is unwarranted to assume that such influences are significantly more prominent in the social sciences. The defense of this parity claim relies primarily on two key points. First, the natural sciences are also susceptible to the same values found in social science, particularly given that findings in many fields have social or political implications. Second, the ideological motivations that might seem to arise only in social science are minor compared to others. In particular, one’s reasoning is more often motivated by a desire to gain social credit (e.g. recognition among peers) than a desire to promote a moral or political ideology. Although there may be discernible differences in the quality of research across scientific domains, all are influenced by researchers’ values, as manifested in their motivations.

Wednesday, June 10, 2020

The moral courage of the military in confronting the commander in chief

Robert Bruce Adolph
Tampa Bay Times
Originally posted 9 June 20

The president recently threatened to use our active duty military to “dominate” demonstrators nationwide, who are exercising their wholly legitimate right to assemble and be heard.

The distinguished former Secretary of Defense Jim Mattis nailed it in his recent broadside published in The Atlantic that took aim at our current commander-in-chief. Mattis states, “When I joined the military, some 50 years ago … I swore an oath to support and defend the Constitution. Never did I dream that troops taking the same oath would be ordered under any circumstances to violate the constitutional rights of their fellow citizens—much less to provide a bizarre photo op for the elected commander-in-chief, with military leadership standing alongside.”

The current Secretary of Defense, Mark Esper, who now perhaps regrets being made into a photographic prop for the president, has come out publicly against using the active duty military to quell civil unrest in our cities, as have 89 high-ranking former defense officials who stated that they were "alarmed" by the chief executive's threat to use troops against our country's citizens on U.S. soil. Former Secretary of State Colin Powell, a former U.S. Army general and Republican Party member, has also taken aim at this presidency by stating that he will vote for Joe Biden in the next election.

The info is here.

Monday, January 27, 2020

Nurses Continue to Rate Highest in Honesty, Ethics

RJ Reinhart
news.gallup.com
Originally posted 6 Jan 20

For the 18th year in a row, Americans rate the honesty and ethics of nurses highest among a list of professions that Gallup asks U.S. adults to assess annually. Currently, 85% of Americans say nurses' honesty and ethical standards are "very high" or "high," essentially unchanged from the 84% who said the same in 2018. Alternatively, Americans hold car salespeople in the lowest esteem, with 9% saying individuals in this field have high levels of ethics and honesty, similar to the 8% who said the same in 2018.

Nurses are consistently rated higher in honesty and ethics than all other professions that Gallup asks about, by a wide margin. Medical professions in general rate highly in Americans' assessments of honesty and ethics, with at least six in 10 U.S. adults saying medical doctors, pharmacists and dentists have high levels of these virtues. The only nonmedical profession that Americans now hold in a similar level of esteem is engineers, with 66% saying individuals in this field have high levels of honesty and ethics.

Americans' high regard for healthcare professionals contrasts sharply with their assessments of stockbrokers, advertising professionals, insurance salespeople, senators, members of Congress and car salespeople -- all of which garner less than 20% of U.S. adults saying they have high levels of honesty and ethics.

The public's low levels of belief in the honesty and ethical standards of senators and members of Congress may be a contributing factor in poor job approval ratings for the legislature. No more than 30% of Americans have approved of Congress in the past 10 years.

The info is here.

Thursday, November 7, 2019

Digital Ethics and the Blockchain

Dan Blum
ISACA Journal, Volume 2, 2018

Here is an excerpt:

Integrity and Transparency

Integrity and transparency are core values for delivering trust to prosperous markets. Blockchains can provide immutable land title records to improve property rights and growth in small economies, such as Honduras. In smart power grids, blockchain-enabled meters can replace inefficient centralized record-keeping systems for transparent energy trading. Businesses can keep transparent records for product provenance, production, distribution and sales. Forward-thinking governments are exploring use cases through which transparent, immutable blockchains could facilitate a lighter, more effective regulatory touch in holding industry accountable.

However, trade secrets and personal information should not be published openly on blockchains. Blockchain miners may reorder transactions to increase fees or delay certain business processes at the expense of others. Architects must leaven accountability and transparency with confidentiality and privacy. Developers (or regulators) should sometimes add a human touch to smart contracts to avoid rigid systems operating without any consumer safeguards.
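To make the "immutable record" idea above concrete, here is a minimal, hypothetical Python sketch of a hash-chained ledger; it is a conceptual illustration, not any real blockchain's API, and every name in it is invented. Because each block commits to the hash of its predecessor, quietly editing an old record, say a land title, invalidates every later link.

```python
# Conceptual sketch of tamper-evidence via hash chaining; not a real blockchain.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over a block's JSON-serialized contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Add a record, committing to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain):
    """Recompute every link; any edit to a past record breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, {"parcel": "A-12", "owner": "Alice"})
append_block(chain, {"parcel": "A-12", "owner": "Bob"})
print(verify(chain))                      # True: chain is intact

chain[0]["record"]["owner"] = "Mallory"   # tamper with an old title record
print(verify(chain))                      # False: the edit is detectable
```

Real blockchains layer consensus and replication on top of this basic structure, which is what makes rewriting history costly rather than merely detectable.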

The info is here.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
vox.com
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs equipped with a “write” function can enable new forms of brainwashing, theoretically enabling all sorts of people to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justification of unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level and I wondered about systemic factors that might influence whether or not people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether or not people inside a company will be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control and improving them can make your company more honest, and help avert the reputation and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture's Competitive Agility Index, a 7,000-company, 20-industry analysis, for the first time tangibly quantified how a decline in stakeholder trust impacts a company's financial performance. The analysis reveals more than half (54%) of companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches and c-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company's index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10% on average.

The info is here.

Wednesday, November 14, 2018

Moral resilience: how to navigate ethical complexity in clinical practice

Cynda Rushton
Oxford University Press
Originally posted October 12, 2018

Clinicians are constantly confronted with ethical questions. Recent examples of healthcare workers caught up in high-profile best-interest cases are on the rise, but decisions regarding the allocation of the clinician's time and skills, or scarce resources such as organs and medication, are everyday occurrences. The increasing pressure of "doing more with less" is one that can take its toll.

Dr Cynda Rushton is a professor of clinical ethics, and a proponent of ‘moral resilience’ as a pathway through which clinicians can lessen their experience of moral distress, and navigate the contentious issues they may face with a greater sense of integrity. In the video series below, she provides the guiding principles of moral resilience, and explores how they can be put into practice.



The videos are here.

Friday, August 24, 2018

Government Ethics In The Trump Administration

Scott Simon
Host, Weekend Edition, NPR
Originally posted August 11, 2018

President Trump appointed what's considered the richest Cabinet in U.S. history, and reportedly, more than half of the president's Cabinet, current and former, have been the subject of ethics allegations. There's HUD Secretary Carson's pricey dining table, VA Secretary Shulkin's seats at Wimbledon, Scott Pruitt's housing sublet from a lobbyist, Interior Secretary Zinke's charter planes, Treasury Secretary Mnuchin taking a government plane to see the solar eclipse, and Commerce Secretary Wilbur Ross might need his own category. Forbes magazine reports on many people who have accused him of outright theft, saying - Forbes magazine - quote, "if even half of the accusations are legitimate, the current United States secretary of commerce could rank among the biggest grifters in American history."

The interview is here.

Wednesday, August 15, 2018

Four Rules for Learning How to Talk To Each Other Again

Jason Pontin
www.wired.com
Originally posted

Here is an excerpt:

Here’s how to speak in a polity where we loathe each other. Let this be the Law of Parsimonious Claims:

1. Say nothing you know to be untrue, whether to deceive, confuse, or, worst of all, encourage a wearied cynicism.

2. Make mostly falsifiable assertions or offer prescriptions whose outcomes could be measured, always explaining how your assertion or prescription could be tested.

3. Whereof you have no evidence but possess only moral intuitions, say so candidly, and accept you must coexist with people who have different intuitions.

4. When evidence proves you wrong, admit it cheerfully, pleased that your mistake has contributed to the general progress.

Finally, as you listen, assume the good faith of your opponents, unless you have proof otherwise. Judge their assertions and prescriptions based on the plain meaning of their words, rather than on what you guess to be their motives. Often, people will tell you about experiences they found significant. If they are earnest, hear them sympathetically.

The info is here.

Wednesday, May 2, 2018

Institutional Research Misconduct Reports Need More Credibility

Gunsalus CK, Marcus AR, Oransky I.
JAMA. 2018;319(13):1315–1316.
doi:10.1001/jama.2018.0358

Institutions have a central role in protecting the integrity of research. They employ researchers, own the facilities where the work is conducted, receive grant funding, and teach many students about the research process. When questions arise about research misconduct associated with published articles, scientists and journal editors usually first ask the researchers’ institution to investigate the allegations and then report the outcomes, under defined circumstances, to federal oversight agencies and other entities, including journals.

Depending on institutions to investigate their own faculty presents significant challenges. Misconduct reports, the mandated product of institutional investigations for which US federal dollars have been spent, have a wide range of problems. These include lack of standardization, inherent conflicts of interest that must be addressed to directly ensure credibility, little quality control or peer review, and limited oversight. Even when institutions act, the information they release to the public is often limited and unhelpful.

As a result, like most elements of research misconduct, little is known about institutions’ responses to potential misconduct by their own members. The community that relies on the integrity of university research does not have access to information about how often such claims arise, or how they are resolved. Nonetheless, there are some indications that many internal reviews are deficient.

The article is here.

Monday, April 23, 2018

Bad science puts innocent people in jail — and keeps them there

Radley Balko and Tucker Carrington
The Washington Post
Originally posted March 21, 2018

Here is an excerpt:

At the trial level, juries hear far too much dubious science, whether it's an unproven field like bite mark matching or blood spatter analysis, exaggerated claims in a field like hair fiber analysis, or analysts testifying outside their area of expertise. It's difficult to say how many convictions have involved faulty or suspect forensics, but the FBI estimated in 2015 that its hair fiber analysts had testified in about 3,000 cases — and that's merely one subspecialty of forensics, and only at the federal level. Extrapolating from the database of DNA exonerations, the Innocence Project estimates that bad forensics contributes to about 45 percent of wrongful convictions.

But flawed evidence presented at trial is only part of the problem. Even once a field of forensics or a particular expert has been discredited, the courts have made it extremely difficult for those convicted by bad science to get a new trial.

The Supreme Court makes judges responsible for determining what is good science. They already decide what evidence is allowed at trial, so asking them to do the same for expert testimony may seem intuitive. But judges are trained to do legal analyses, not scientific ones. They generally deal with challenges to expert testimony by looking at what other judges have said. If a previous court has allowed a field of forensic evidence, subsequent courts will, too.

The article is here.

Note: These issues also apply to psychologists in the courtroom.

Wednesday, April 18, 2018

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Sunday, March 5, 2017

What We Know About Moral Distress

Patricia Rodney
AJN, American Journal of Nursing:
February 2017 - Volume 117 - Issue 2 - p S7–S10
doi: 10.1097/01.NAJ.0000512204.85973.04

Moral distress arises when nurses are unable to act according to their moral judgment. The concept is relatively recent, dating to American ethicist Andrew Jameton's 1984 landmark text on nursing ethics. Until that point, distress among clinicians had been understood primarily through psychological concepts such as stress and burnout, which, although relevant, were not sufficient. With the introduction of the concept of moral distress, Jameton added an ethical dimension to the study of distress.

Background

In the 33 years since Jameton's inaugural work, many nurses, inspired by the concept of moral distress, have continued to explore what happens when nurses are constrained from translating moral choice into moral action, and are consequently unable to uphold their sense of integrity and the values emphasized in the American Nurses Association's Code of Ethics for Nurses with Interpretive Statements. Moral distress might occur when, say, a nurse on a busy acute medical unit can't provide comfort and supportive care to a dying patient because of insufficient staffing.

The article is here.

Saturday, July 23, 2016

Four Ways Your Leadership May Be Encouraging Unethical Behavior

Ron Carucci
Forbes.com
Originally published June 14, 2016

Most leaders would claim they want the utmost ethical standards upheld by those they lead. But they might be shocked to discover that, even with the best of intentions, their own leadership may be corrupting the choices of those they lead.

(cut)

1. You are making it psychologically unsafe to speak up. Despite saying things like, "I have an open door policy," where employees can express even controversial issues, some leadership actions may undermine the courage needed to raise ethical concerns. Creating a culture in which people freely speak up is vital to ensuring people don't collude with, or incite, misconduct.

(cut)

2. You are applying excessive pressure to reach unrealistic performance targets. Significant research suggests that unfettered goal setting can encourage people to make compromising choices in order to reach targets, especially if those targets seem unrealistic. Leaders may be inviting people to cheat in two ways. They will cut corners on the way they reach a goal, or they will lie when reporting how much of the goal they actually achieved.

The article is here.

Friday, May 6, 2016

How Not to Explain Success

By Christopher Chabris and Joshua Hart
The New York Times - Gray Matter
Originally posted April 8, 2016

Here is an excerpt:

This finding is exactly what you would expect from accepted social science. Long before “The Triple Package,” researchers determined that the personality trait of conscientiousness, which encompasses the triple package’s impulse control component, was an important predictor of success — but that a person’s intelligence and socioeconomic background were equally or even more important.

Our second finding was that the more successful participants did not possess greater feelings of ethnocentrism or personal insecurity. In fact, for insecurity, the opposite was true: Emotional stability was related to greater success.

Finally, we found no special “synergy” among the triple package traits. According to Professors Chua and Rubenfeld, the three traits have to work together to create success — a sense of group superiority creates drive only in people who also view themselves as not good enough, for example, and drive is useless without impulse control. But in our data, people scoring in the top half on all three traits were no more successful than everyone else.

The article is here.