Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Deception.

Thursday, September 1, 2022

When does moral engagement risk triggering a hypocrisy penalty?

Jordan, J. & Sommers, R.
Current Opinion in Psychology
Volume 47, October 2022, 101404

Abstract

Society suffers when people stay silent on moral issues. Yet people who engage morally may appear hypocritical if they behave imperfectly themselves. Research reveals that hypocrites can—but do not always—trigger a “hypocrisy penalty,” whereby they are evaluated as more immoral than ordinary (non-hypocritical) wrongdoers. This pattern reflects that moral engagement can confer reputational benefits, but can also carry reputational costs when paired with inconsistent moral conduct. We discuss mechanisms underlying these costs and benefits, illuminating when hypocrisy is (and is not) evaluated negatively. Our review highlights the role that dishonesty and other factors play in engendering disdain for hypocrites, and offers suggestions for how, in a world where nobody is perfect, people can engage morally without generating backlash.

Conclusion: how to walk the moral tightrope

To summarize, hypocrites can—but do not always—incur a “hypocrisy penalty,” whereby they are evaluated more negatively than they would have been absent engaging. As this review has suggested, when observers scrutinize hypocritical moral engagement, they seem to ask at least three questions. First, does the actor signal to others, through his engagement, that he behaves more morally than he actually does? Second, does the actor, by virtue of his engagement, see himself as more moral than he really is? And third, is the actor's engagement preventing others from reaping benefits that he has already enjoyed? Evidence suggests that hypocritical moral engagement is more likely to carry reputational costs when the answer to these questions is “yes.” At the same time, observers do not seem to reliably impose a hypocrisy penalty just because the transgressions of hypocrites constitute personal moral failings—even as these failings convey weakness of will, highlight inconsistency with the actor's personal values, and reveal that the actor has knowingly done something that she believes to be wrong.

In a world where nobody is perfect, then, how can one engage morally while limiting the risk of subsequently being judged negatively as a hypocrite? We suggest that the answer comes down to two key factors: maximizing the reputational benefits that flow directly from one's moral engagement, and minimizing the reputational costs that flow from the combination of one's engagement and imperfect track record. While more research is needed, here we draw on the mechanisms we have reviewed to highlight four suggestions for those seeking to walk the moral tightrope.

Thursday, December 23, 2021

New York’s Met museum to remove Sackler name from exhibits

Sarah Cascone
artnet.com
Originally posted 9 Dec 21

The Metropolitan Museum of Art in New York has dropped the Sackler name from its building. The move is perhaps the museum world’s most prominent cutting of ties with the disgraced family since their company, Purdue Pharma, pleaded guilty in 2020 to criminal charges connected to the marketing of the addictive painkiller OxyContin.

The decision, which came after a review by the museum lasting more than a year, was reportedly mutual and made “in order to allow the Met to further its core mission,” according to a joint statement issued by the Sackler family and the institution.

“Our families have always strongly supported the Met, and we believe this to be in the best interest of the museum and the important mission that it serves,” the descendants of Mortimer Sackler and Raymond Sackler said in a statement. “The earliest of these gifts were made almost 50 years ago, and now we are passing the torch to others who might wish to step forward to support the museum.”

Institutions have faced increasing pressure to sever relations with the Sacklers in recent years, part of a growing push to hold museums and other cultural groups accountable for where their money comes from. (Other donors that have come under fire include arms dealers and oil companies.)

Seven spaces at the Fifth Avenue flagship bore the Sackler name. The biggest was the Sackler Wing, which opened in 1978 and houses the Temple of Dendur; other named spaces included the Sackler Gallery for Egyptian Art and the Sackler Wing Galleries, added in 1987.

The day of the announcement, Patrick Radden Keefe, the author of Empire of Pain: The Secret History of the Sackler Dynasty, visited the museum to find that the family’s name had already been removed.

Saturday, September 18, 2021

Fraudulent data raise questions about superstar honesty researcher

Cathleen O'Grady
Sciencemag.com
Originally posted 24 Aug 21

Here is an excerpt:

Some time later, a group of anonymous researchers downloaded those data, according to last week’s post on Data Colada. A simple look at the participants’ mileage distribution revealed something very suspicious. Other data sets of people’s driving distances show a bell curve, with some people driving a lot, a few very little, and most somewhere in the middle. In the 2012 study, there was an unusually equal spread: Roughly the same number of people drove every distance between 0 and 50,000 miles. “I was flabbergasted,” says the researcher who made the discovery. (They spoke to Science on condition of anonymity because of fears for their career.)

Worrying that PNAS would not investigate the issue thoroughly, the whistleblower contacted the Data Colada bloggers instead, who conducted a follow-up review that convinced them the field study results were statistically impossible.

For example, a set of odometer readings provided by customers when they first signed up for insurance, apparently real, was duplicated to suggest the study had twice as many participants, with random numbers between one and 1000 added to the original mileages to disguise the deceit. In the spreadsheet, the original figures appeared in the font Calibri, but each had a close twin in another font, Cambria, with the same number of cars listed on the policy, and odometer readings within 1000 miles of the original. In 1 million simulated versions of the experiment, the same kind of similarity appeared not a single time, Simmons, Nelson, and Simonsohn found. “These data are not just excessively similar,” they write. “They are impossibly similar.”
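The two red flags described here, an implausibly flat mileage distribution and “font twin” rows, are straightforward to screen for programmatically. Below is a minimal Python sketch of both checks; the record layout (font, cars, mileage fields) and the thresholds are illustrative assumptions, not the investigators’ actual code.

```python
from collections import defaultdict

def mileage_histogram(mileages, bin_width=5000):
    """Bin driving distances. Real odometer data tends toward a bell
    curve; roughly equal counts in every bin from 0 to 50,000 miles
    is the kind of uniform spread that raised suspicion."""
    counts = defaultdict(int)
    for miles in mileages:
        counts[miles // bin_width] += 1
    return dict(sorted(counts.items()))

def find_font_twins(rows, max_gap=1000):
    """Pair each Calibri row with any Cambria row that has the same
    number of cars on the policy and a mileage within max_gap above
    the original, mimicking the duplicates described in the post."""
    cambria_by_cars = defaultdict(list)
    for row in rows:
        if row["font"] == "Cambria":
            cambria_by_cars[row["cars"]].append(row)

    twins = []
    for row in rows:
        if row["font"] != "Calibri":
            continue
        for candidate in cambria_by_cars[row["cars"]]:
            if 0 <= candidate["mileage"] - row["mileage"] <= max_gap:
                twins.append((row, candidate))
    return twins

# Hypothetical example: the second row looks like the first plus a
# random offset under 1,000 miles, entered in a different font.
rows = [
    {"font": "Calibri", "cars": 2, "mileage": 40300},
    {"font": "Cambria", "cars": 2, "mileage": 40912},
]
print(find_font_twins(rows))
```

In honest data an occasional coincidental pairing is expected; the damning finding was that nearly every original row had such a twin, a pattern the bloggers’ million simulated datasets never reproduced by chance.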

Ariely calls the analysis “damning” and “clear beyond doubt.” He says he has requested a retraction, as have his co-authors, separately. “We are aware of the situation and are in communication with the authors,” PNAS Editorial Ethics Manager Yael Fitzpatrick said in a statement to Science.

Three of the authors say they were only involved in the two lab studies reported in the paper; a fourth, Boston University behavioral economist Nina Mazar, forwarded the Data Colada investigators a 16 February 2011 email from Ariely with an attached Excel file that contains the problems identified in the blog post. Its metadata suggest Ariely had created the file 3 days earlier.

Ariely tells Science he made a mistake in not checking the data he received from the insurance company, and that he no longer has the company’s original file. He says Duke’s integrity office told him the university’s IT department does not have email records from that long ago. His contacts at the insurance company no longer work there, Ariely adds, but he is seeking someone at the company who could find archived emails or files that could clear his name. His publication of the full data set last year showed he was unaware of any problems with it, he says: “I’m not an idiot. This is a very easy fraud to catch.”

Monday, September 14, 2020

Trump lied about science

H. Holden Thorp
Science
Originally published 11 Sept 20

When President Donald Trump began talking to the public about coronavirus disease 2019 (COVID-19) in February and March, scientists were stunned at his seeming lack of understanding of the threat. We assumed that he either refused to listen to the White House briefings that must have been occurring or that he was being deliberately sheltered from information to create plausible deniability for federal inaction. Now, because famed Washington Post journalist Bob Woodward recorded him, we can hear Trump’s own voice saying that he understood precisely that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was deadly and spread through the air. As he was playing down the virus to the public, Trump was not confused or inadequately briefed: He flat-out lied, repeatedly, about science to the American people. These lies demoralized the scientific community and cost countless lives in the United States.

Over the years, this page has commented on the scientific foibles of U.S. presidents. Inadequate action on climate change and environmental degradation during both Republican and Democratic administrations has been criticized frequently. Editorials have bemoaned presidents’ endorsements of teaching intelligent design, creationism, and other antiscience in public schools. These matters are still important. But now, a U.S. president has deliberately lied about science in a way that was imminently dangerous to human health and directly led to widespread deaths of Americans.

This may be the most shameful moment in the history of U.S. science policy.

In an interview with Woodward on 7 February 2020, Trump said he knew that COVID-19 was more lethal than the flu and that it spread through the air. “This is deadly stuff,” he said. But on 9 March, he tweeted that the “common flu” was worse than COVID-19, while economic advisor Larry Kudlow and presidential counselor Kellyanne Conway assured the public that the virus was contained. On 19 March, Trump told Woodward that he did not want to level with the American people about the danger of the virus. “I wanted to always play it down,” he said, “I still like playing it down.” Playing it down meant lying about the fact that he knew the country was in grave danger.

The info is here.

Tuesday, June 9, 2020

Intending to deceive versus deceiving intentionally in indifferent lies

Alex Wiegmann & Ronja Rutschmann
Philosophical Psychology (2020)
DOI: 10.1080/09515089.2020.1761544

Abstract

Indifferent lies have been proposed as a counterexample to the claim that lying requires an intention to deceive. In indifferent lies, the speaker says something she believes to be false (in a truth-warranting context) but does not really care about whether the addressee believes what she says. Krstić (2019) argues that in such cases, the speaker deceives the addressee intentionally and, therefore, indifferent lies do not show that lying does not require an intention to deceive. While we agree that the speaker deceives the addressee intentionally, we resist Krstić’s conclusion by pointing out that there is a difference between deceiving intentionally and intending to deceive. To this aim, we presented 268 participants with a new variant of an indifferent lie and asked whether the speaker lied, whether she had an intention to deceive, and whether she deceived intentionally. Whereas the majority of participants considered the speaker to have deceived the addressee intentionally, most denied that the speaker had an intention to deceive the addressee. Hence, indifferent lies still challenge widely accepted definitions of lying.

The research is here.

Friday, May 8, 2020

Social-media companies must flatten the curve of misinformation

Joan Donovan
nature.com
Originally posted 14 April 20

Here is an excerpt:

After blanket coverage of the distortion of the 2016 US election, the role of algorithms in fanning the rise of the far right in the United States and United Kingdom, and of the antivax movement, tech companies have announced policies against misinformation. But they have slacked off on building the infrastructure to do commercial-content moderation and, despite the hype, artificial intelligence is not sophisticated enough to moderate social-media posts without human supervision. Tech companies acknowledge that groups, such as The Internet Research Agency and Cambridge Analytica, used their platforms for large-scale operations to influence elections within and across borders. At the same time, these companies have balked at removing misinformation, which they say is too difficult to identify reliably.

Moderating content after something goes wrong is too late. Preventing misinformation requires curating knowledge and prioritizing science, especially during a public crisis. In my experience, tech companies prefer to downplay the influence of their platforms, rather than to make sure that influence is understood. Proper curation requires these corporations to engage independent researchers, both to identify potential manipulation and to provide context for ‘authoritative content’.

Early this April, I attended a virtual meeting hosted by the World Health Organization, which had convened journalists, medical researchers, social scientists, tech companies and government representatives to discuss health misinformation. This cross-sector collaboration is a promising and necessary start. As I listened, though, I could not help but feel teleported back to 2017, when independent researchers first began uncovering the data trails of the Russian influence operations. Back then, tech companies were dismissive. If we can take on health misinformation collaboratively now, then we will have a model for future efforts.

The info is here.

Thursday, February 27, 2020

Liar, Liar, Liar

S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20

When we think about dishonesty, we mostly think about the big stuff.

We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.

But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.

"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.

These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?

That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

The info is here.

There is a 30-minute audio file worth listening to.

Sunday, December 29, 2019

It Loves Me, It Loves Me Not: Is It Morally Problematic to Design Sex Robots that Appear to Love Their Owners?

Sven Nyholm and Lily Eva Frank
Techné: Research in Philosophy and Technology
DOI: 10.5840/techne2019122110

Abstract

Drawing on insights from robotics, psychology, and human-computer interaction, developers of sex robots are currently aiming to create emotional bonds of attachment and even love between human users and their products. This is done by creating robots that can exhibit a range of facial expressions, that are made with human-like artificial skin, and that possess a rich vocabulary with many conversational possibilities. In light of the human tendency to anthropomorphize artifacts, we can expect that designers will have some success and that this will lead to the attribution of mental states to the robot that the robot does not actually have, as well as the inducement of significant emotional responses in the user. This raises the question of whether it might be ethically problematic to try to develop robots that appear to love their users. We discuss three possible ethical concerns about this aim: first, that designers may be taking advantage of users’ emotional vulnerability; second, that users may be deceived; and, third, that relationships with robots may block off the possibility of more meaningful relationships with other humans. We argue that developers should attend to the ethical constraints suggested by these concerns in their development of increasingly humanoid sex robots. We discuss two different ways in which they might do so.

Thursday, December 5, 2019

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections. You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
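As a rough illustration of how such a simulation works, here is a minimal contagion model in Python. The random graph, the per-contact transmission probability, and the one-round infectious period are assumptions chosen for the sketch, not parameters from the authors’ models.

```python
import random

def simulate_contagion(n_nodes=100, n_edges=300, p_transmit=0.3, seed=1):
    """Seed an idea in one node of a random network and count how many
    minds it eventually reaches."""
    rng = random.Random(seed)

    # Build a random undirected graph as an adjacency list.
    neighbors = {node: set() for node in range(n_nodes)}
    edges = set()
    while len(edges) < n_edges:
        a, b = rng.sample(range(n_nodes), 2)
        edges.add(frozenset((a, b)))
        neighbors[a].add(b)
        neighbors[b].add(a)

    # Each believer gets one round to transmit the idea to each
    # neighbor, independently, with probability p_transmit.
    believers = {0}
    frontier = {0}
    while frontier:
        newly_convinced = set()
        for node in frontier:
            for other in neighbors[node]:
                if other not in believers and rng.random() < p_transmit:
                    believers.add(other)
                    newly_convinced.add(other)
        frontier = newly_convinced
    return len(believers)

print(simulate_contagion())  # how far one seeded idea spread
```

Varying the transmission probability or the network density and rerunning the simulation is exactly the kind of what-if exercise these models support: the same idea can die out or saturate the network depending on the assumptions about when transmission occurs.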

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Tuesday, November 19, 2019

Medical board declines to act against fertility doctor who inseminated woman with his own sperm

Marie Saavedra and Mark Smith
wfaa.com
Originally posted Oct 28, 2019

The Texas Medical Board has declined to act against a fertility doctor who inseminated a woman with his own sperm rather than from a donor the mother selected.

Though Texas lawmakers have now made such an act illegal, the Texas Medical Board found the actions did not “fall below the acceptable standard of care,” and declined further review, according to a response to a complaint obtained by WFAA.

In a follow-up email, a spokesperson told WFAA the board was hamstrung because it cannot review complaints about medical treatment that occurred seven or more years earlier.

The complaint was filed on behalf of 32-year-old Eve Wiley, of Dallas, who only recently learned her biological father wasn't the sperm donor selected by her mother. Instead, Wiley discovered her biological father was her mother’s fertility doctor in Nacogdoches.

Now 65, Wiley's mother, Margo Williams, had sought help from Dr. Kim McMorries because her husband was infertile.

The info is here.

Thursday, October 3, 2019

Deception and self-deception

Peter Schwardmann and Joel van der Weele
Nature Human Behaviour (2019)

Abstract

There is ample evidence that the average person thinks he or she is more skillful, more beautiful and kinder than others and that such overconfidence may result in substantial personal and social costs. To explain the prevalence of overconfidence, social scientists usually point to its affective benefits, such as those stemming from a good self-image or reduced anxiety about an uncertain future. An alternative theory, first advanced by evolutionary biologist Robert Trivers, posits that people self-deceive into higher confidence to more effectively persuade or deceive others. Here we conduct two experiments (combined n = 688) to test this strategic self-deception hypothesis. After performing a cognitively challenging task, half of our subjects are informed that they can earn money if, during a short face-to-face interaction, they convince others of their superior performance. We find that the privately elicited beliefs of the group that was informed of the profitable deception opportunity exhibit significantly more overconfidence than the beliefs of the control group. To test whether higher confidence ultimately pays off, we experimentally manipulate the confidence of the subjects by means of a noisy feedback signal. We find that this exogenous shift in confidence makes subjects more persuasive in subsequent face-to-face interactions. Overconfidence emerges from these results as the product of an adaptive cognitive technology with important social benefits, rather than some deficiency or bias.

From the Discussion section

The results of our experiment demonstrate that the strategic environment matters for cognition about the self. We observe that deception opportunities increase average overconfidence relative to others, and that, under the right circumstances, increased confidence can pay off. Our data thus support the idea that overconfidence is strategically employed for social gain.

Our results do not allow for decisive statements about the exact cognitive channels underlying such self-deception. While we find some indications that an aversion to lying increases overconfidence, the evidence is underwhelming. When it comes to the ability to deceive others, we find that even when we control for the message, confidence leads to higher evaluations in some conditions. This is consistent with the idea that self-deception improves the deception technology of contestants, possibly by eliminating non-verbal give-away cues.

The research is here. 

Sunday, September 22, 2019

The Ethics Of Hiding Your Data From the Machines

Molly Wood
wired.com
Originally posted August 22, 2019

Here is an excerpt:

There’s also a real and reasonable fear that companies or individuals will take ethical liberties in the name of pushing hard toward a good solution, like curing a disease or saving lives. This is not an abstract problem: The co-founder of Google’s artificial intelligence lab, DeepMind, was placed on leave earlier this week after some controversial decisions—one of which involved the illegal use of over 1.5 million hospital patient records in 2017.

So sticking with the medical kick I’m on here, I propose that companies work a little harder to imagine the worst-case scenario surrounding the data they’re collecting. Study the side effects like you would a drug for restless leg syndrome or acne or hepatitis, and offer us consumers a nice, long, terrifying list of potential outcomes so we actually know what we’re getting into.

And for we consumers, well, a blanket refusal to offer up our data to the AI gods isn’t necessarily the good choice either. I don’t want to be the person who refuses to contribute my genetic data via 23andMe to a massive research study that could, and I actually believe this is possible, lead to cures and treatments for diseases like Parkinson’s and Alzheimer’s and who knows what else.

I also think I deserve a realistic assessment of the potential for harm to find its way back to me, because I didn’t think through or wasn’t told all the potential implications of that choice—like how, let’s be honest, we all felt a little stung when we realized the 23andMe research would be through a partnership with drugmaker (and reliable drug price-hiker) GlaxoSmithKline. Drug companies, like targeted ads, are easy villains—even though this partnership actually could produce a Parkinson’s drug. But do we know what GSK’s privacy policy looks like? That deal was a level of sharing we didn’t necessarily expect.

The info is here.

Sunday, July 28, 2019

Community Standards of Deception

Emma Levine
Booth School of Business
June 17, 2019
Available at SSRN: https://ssrn.com/abstract=3405538

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, I demonstrate that deception is perceived to be ethical, and individuals want to be deceived, when deception is perceived to prevent unnecessary harm. I identify nine implicit rules – pertaining to the targets of deception and the topic and timing of a conversation – that specify the systematic circumstances in which deception is perceived to cause unnecessary harm, and I document the causal effect of each implicit rule on the endorsement of deception. This research provides insight into when and why people value honesty, and paves the way for future research on when and why people embrace deception.

Thursday, April 4, 2019

I’m a Journalist. Apparently, I’m Also One of America’s “Top Doctors.”

Marshall Allen
Propublica.org
Originally posted Feb. 28, 2019

Here is an excerpt:

And now, for reasons still unclear, Top Doctor Awards had chosen me — and I was almost perfectly the wrong person to pick. I’ve spent the last 13 years reporting on health care, a good chunk of it examining how our health care system measures the quality of doctors. Medicine is complex, and there’s no simple way of saying some doctors are better than others. Truly assessing the performance of doctors, from their diagnostic or surgical outcomes to the satisfaction of their patients, is challenging work. And yet, for-profit companies churn out lists of “Super” or “Top” or “Best” physicians all the time, displaying them in magazine ads, online listings or via shiny plaques or promotional videos the companies produce for an added fee.

On my call with Anne from Top Doctors, the conversation took a surreal turn.

“It says you work for a company called ProPublica,” she said, blithely. At least she had that right.

I responded that I did and that I was actually a journalist, not a doctor. Is that going to be a problem? I asked. Or can you still give me the “Top Doctor” award?

There was a pause. Clearly, I had thrown a baffling curve into her script. She quickly regrouped. “Yes,” she decided, I could have the award.

Anne’s bonus, I thought, must be volume based.

Then we got down to business. The honor came with a customized plaque, with my choice of cherry wood with gold trim or black with chrome trim. I mulled over which vibe better fit my unique brand of medicine: the more traditional cherry or the more modern black?

The info is here.

Thursday, March 28, 2019

Behind the Scenes, Health Insurers Use Cash and Gifts to Sway Which Benefits Employers Choose

Marshall Allen
Propublica.org
Originally posted February 20, 2019

Here is an excerpt:

These industry payments can’t help but influence which plans brokers highlight for employers, said Eric Campbell, director of research at the University of Colorado Center for Bioethics and Humanities.

“It’s a classic conflict of interest,” Campbell said.

There’s “a large body of virtually irrefutable evidence,” Campbell said, that shows drug company payments to doctors influence the way they prescribe. “Denying this effect is like denying that gravity exists.” And there’s no reason, he said, to think brokers are any different.

Critics say the setup is akin to a single real estate agent representing both the buyer and seller in a home sale. A buyer would not expect the seller’s agent to negotiate the lowest price or highlight all the clauses and fine print that add unnecessary costs.

“If you want to draw a straight conclusion: It has been in the best interest of a broker, from a financial point of view, to keep that premium moving up,” said Jeffrey Hogan, a regional manager in Connecticut for a national insurance brokerage and one of a band of outliers in the industry pushing for changes in the way brokers are paid.

The info is here.

Saturday, March 23, 2019

The Fake Sex Doctor Who Conned the Media Into Publicizing His Bizarre Research on Suicide, Butt-Fisting, and Bestiality

Jennings Brown
www.gizmodo.com
Originally published March 1, 2019

Here is an excerpt:

Despite Sendler’s claims that he is a doctor, and despite the stethoscope in his headshot, he is not a licensed doctor of medicine in the U.S. Two employees of the Harvard Medical School registrar confirmed to me that Sendler was never enrolled and never received an MD from the medical school. A Harvard spokesperson told me Sendler never received a PhD or any degree from Harvard University.

“I got into Harvard Medical School for MD, PhD, and Masters degree combined,” Sendler told me. I asked if he was able to get a PhD in sexual behavior from Harvard Medical School (Harvard Medical School does not provide any sexual health focuses) and he said “Yes. Yes,” without hesitation, then doubled down: “I assume that there’s still some kind of sense of wonder on campus [about me]. Because I can see it when I go and visit [Harvard], that people are like, ‘Wow you had the balls, because no one else did that,’” presumably referring to his academic path.

Sendler told me one of his mentors when he was at Harvard Medical School was Yi Zhang, a professor of genetics at the school. Sendler said Zhang didn’t believe in him when he was studying at Harvard. But, Sendler said, he met with Zhang in Boston just a month prior to our interview. And Zhang was now impressed by Sendler’s accomplishments.

Sendler said Zhang told him in January, “Congrats. You did what you felt was right... Turns out, wow, you have way more power in research now than I do. And I’m just very proud of you, because I have people that I really put a lot of effort, after you left, into making them the best and they didn’t turn out that well.”

The info is here.

This is a fairly bizarre story and worth the long read.

Friday, March 22, 2019

We need to talk about systematic fraud

Jennifer Byrne
Nature 566, 9 (2019)
doi: 10.1038/d41586-019-00439-9

Here is an excerpt:

Some might argue that my efforts are inconsequential, and that the publication of potentially fraudulent papers in low-impact journals doesn’t matter. In my view, we can’t afford to accept this argument. Such papers claim to uncover mechanisms behind a swathe of cancers and rare diseases. They could derail efforts to identify easily measurable biomarkers for use in predicting disease outcomes or whether a drug will work. Anyone trying to build on any aspect of this sort of work would be wasting time, specimens and grant money. Yet, when I have raised the issue, I have had comments such as “ah yes, you’re working on that fraud business”, almost as a way of closing down discussion. Occasionally, people’s reactions suggest that ferreting out problems in the literature is a frivolous activity, done for personal amusement, or that it is vindictive, pursued to bring down papers and their authors.

Why is there such enthusiasm for talking about faulty research practices, yet such reluctance to discuss deliberate deception? An analysis of the Diederik Stapel fraud case that rocked the psychology community in 2011 has given me some ideas (W. Stroebe et al. Perspect. Psychol. Sci. 7, 670–688; 2012). Fraud departs from community norms, so scientists do not want to think about it, let alone talk about it. It is even more uncomfortable to think about organized fraud that is so frequently associated with one country. This becomes a vicious cycle: because fraud is not discussed, people don’t learn about it, so they don’t consider it, or they think it’s so rare that it’s unlikely to affect them, and so papers are less likely to come under scrutiny. Thinking and talking about systematic fraud is essential to solving this problem. Raising awareness and the risk of detection may well prompt new ways to identify papers produced by systematic fraud.

Last year, China announced sweeping plans to curb research misconduct. That’s a great first step. Next should be a review of publication quotas and cash rewards, and the closure of ‘paper factories’.

The info is here.

Tuesday, March 5, 2019

Former Ethics Chief Blasts Groups for Holding Events at Trump Hotel

Charles Clark
www.govexec.com
Originally posted March 4, 2019

Here is an excerpt:

“How many members of Congress, who have a constitutional duty to conduct meaningful oversight of the executive, giddily participate in events at the Trump International Hotel, a taxpayer owned landmark where Trump is his own landlord and the emoluments flow like the $35 martinis?” Shaub wrote.

The criticism of Kuwait was prompted by a letter tweeted earlier by Rep. Ted Lieu, D-Calif. Kuwait's ambassador to Washington, Salem Abdullah Al-Jaber Al-Sabah, had invited Lieu to the February celebration of Kuwait’s 58th National Day and 28th Liberation Day.

Lieu wrote the ambassador on Feb. 11 saying that while he looked forward to a continuing productive partnership, “Regrettably, the event will take place at the Trump International Hotel, which is owned by the President of the United States. I must therefore decline your invitation, as the Emoluments Clause of the U.S. Constitution (Article 1, Section 9, Paragraph 8) stipulates that no federal officeholders shall receive gifts or payments from foreign state or rulers without the consent of Congress.”

Lieu then warned the embassy that the issue raises “serious ethical and legal questions,” and that continuing to hold events “could amount to a violation of the U.S. Constitution.”

The info is here.

Sunday, November 18, 2018

Bornstein claims Trump dictated the glowing health letter

Alex Marquardt and Lawrence Crook
CNN.com
Originally posted May 2, 2018

When Dr. Harold Bornstein described in hyperbolic prose then-candidate Donald Trump's health in 2015, the language he used was eerily similar to the style preferred by his patient.

It turns out the patient himself wrote it, according to Bornstein.

"He dictated that whole letter. I didn't write that letter," Bornstein told CNN on Tuesday. "I just made it up as I went along."

The admission is an about-face from his answer more than two years earlier, when the letter was released, and answers one of the lingering questions about the last presidential election. The letter thrust the eccentric Bornstein, with his shoulder-length hair and round eyeglasses, into public view.

"His physical strength and stamina are extraordinary," he crowed in the letter, which was released by Trump's campaign in December 2015. "If elected, Mr. Trump, I can state unequivocally, will be the healthiest individual ever elected to the presidency."

The missive didn't offer much medical evidence for those claims beyond citing a blood pressure of 110/65, described by Bornstein as "astonishingly excellent." It claimed Trump had lost 15 pounds over the preceding year. And it described his cardiovascular health as "excellent."

The info is here.

Friday, April 20, 2018

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn pulled in more than $50 million in 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it for research on how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement involved fraud accusations on only four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only the grants themselves but also their results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as Mr. Schunn throughout, even though he has a PhD in psychology from Carnegie Mellon University.