Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Deception.

Friday, April 20, 2018

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn, pulled in more than $50 million in 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it researching how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement only involved fraud accusations on four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only of the grants themselves, but results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as "Mr. Schunn" throughout, even though he holds a PhD in psychology from Carnegie Mellon University.

Saturday, March 24, 2018

Facebook employs psychologist whose firm sold data to Cambridge Analytica

Paul Lewis and Julia Carrie Wong
The Guardian
Originally published March 18, 2018

Here are two excerpts:

The co-director of a company that harvested data from tens of millions of Facebook users before selling it to the controversial data analytics firm Cambridge Analytica is currently working for the tech giant as an in-house psychologist.

Joseph Chancellor was one of two founding directors of Global Science Research (GSR), the company that harvested Facebook data using a personality app under the guise of academic research and later shared the data with Cambridge Analytica.

He was hired to work at Facebook as a quantitative social psychologist around November 2015, roughly two months after leaving GSR, which had by then acquired data on millions of Facebook users.

Chancellor is still working as a researcher at Facebook’s Menlo Park headquarters in California, where psychologists frequently conduct research and experiments using the company’s vast trove of data on more than 2 billion users.

(cut)

In the months that followed the creation of GSR, the company worked in collaboration with Cambridge Analytica to pay hundreds of thousands of users to take the test as part of an agreement in which they agreed for their data to be collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions strong.

That data was sold to Cambridge Analytica as part of a commercial agreement.

Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising.

The information is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Facebook Woes: Data Breach, Securities Fraud, or Something Else?

Matt Levine
Bloomberg.com
Originally posted March 21, 2018

Here is an excerpt:

But the result is always "securities fraud," whatever the nature of the underlying input. An undisclosed data breach is securities fraud, but an undisclosed sexual-harassment problem or chicken-mispricing conspiracy will get you to the same place. There is an important practical benefit to a legal regime that works like this: It makes it easy to punish bad behavior, at least by public companies, because every sort of bad behavior is also securities fraud. You don't have to prove that the underlying chicken-mispricing conspiracy was illegal, or that the data breach was due to bad security procedures. All you have to prove is that it happened, and it wasn't disclosed, and the stock went down when it was. The evaluation of the badness is in a sense outsourced to the market: We know that the behavior was illegal, not because there was a clear law against it, but because the stock went down. Securities law is an all-purpose tool for punishing corporate badness, a one-size-fits-all approach that makes all badness commensurable using the metric of stock price. It has a certain efficiency.

On the other hand it sometimes makes me a little uneasy that so much of our law ends up working this way. "In a world of dysfunctional government and pervasive financial capitalism," I once wrote, "more and more of our politics is contested in the form of securities regulation." And: "Our government's duty to its citizens is mediated by their ownership of our public companies." When you punish bad stuff because it is bad for shareholders, you are making a certain judgment about what sort of stuff is bad and who is entitled to be protected from it.

Anyway Facebook Inc. wants to make it very clear that it did not suffer a data breach. When a researcher got data about millions of Facebook users without those users' explicit permission, and when the researcher turned that data over to Cambridge Analytica for political targeting in violation of Facebook's terms, none of that was a data breach. Facebook wasn't hacked. What happened was somewhere between a contractual violation and ... you know ... just how Facebook works? There is some splitting of hairs over this, and you can understand why -- consider that SEC guidance about when companies have to disclose data breaches -- but in another sense it just doesn't matter. You don't need to know whether the thing was a "data breach" to know how bad it was. You can just look at the stock price. The stock went down...

The article is here.

Tuesday, March 13, 2018

Doctors In Maine Say Halt In OxyContin Marketing Comes '20 Years Late'

Patty Wight
npr.org
Originally posted February 13, 2018

The maker of OxyContin, one of the most prescribed and aggressively marketed opioid painkillers, will no longer tout the drug or any other opioids to doctors.

The announcement, made Saturday, came as drugmaker Purdue Pharma faces lawsuits for deceptive marketing brought by cities and counties across the U.S., including several in Maine. The company said it's cutting its U.S. sales force by more than half.

Just how important are these steps against the backdrop of a raging opioid epidemic that took the lives of more than 300 Maine residents in 2016, and accounted for more than 42,000 deaths nationwide?

"They're 20 years late to the game," says Dr. Noah Nesin, a family physician and vice president of medical affairs at Penobscot Community Health Care.

Nesin says even after Purdue Pharma paid $600 million in fines about a decade ago for misleading doctors and regulators about the risks opioids posed for addiction and abuse, it continued marketing them.

The article is here.

Tuesday, January 9, 2018

Drug Companies’ Liability for the Opioid Epidemic

Rebecca L. Haffajee and Michelle M. Mello
N Engl J Med 2017; 377:2301-2305
December 14, 2017
DOI: 10.1056/NEJMp1710756

Here is an excerpt:

Opioid products, they alleged, were defectively designed because companies failed to include safety mechanisms, such as an antagonist agent or tamper-resistant formulation. Manufacturers also purportedly failed to adequately warn about addiction risks on drug packaging and in promotional activities. Some claims alleged that opioid manufacturers deliberately withheld information about their products’ dangers, misrepresenting them as safer than alternatives.

These suits faced formidable barriers that persist today. As with other prescription drugs, persuading a jury that an opioid is defectively designed if the Food and Drug Administration approved it is challenging. Furthermore, in most states, a drug manufacturer’s duty to warn about risks is limited to issuing an adequate warning to prescribers, who are responsible for communicating with patients. Finally, juries may resist laying legal responsibility at the manufacturer’s feet when the prescriber’s decisions and the patient’s behavior contributed to the harm. Some individuals do not take opioids as prescribed or purchase them illegally. Companies may argue that such conduct precludes holding manufacturers liable, or at least should reduce damages awards.

One procedural strategy adopted in opioid litigation that can help overcome defenses based on users’ conduct is the class action suit, brought by a large group of similarly situated individuals. In such suits, the causal relationship between the companies’ business practices and the harm is assessed at the group level, with the focus on statistical associations between product use and injury. The use of class actions was instrumental in overcoming tobacco companies’ defenses based on smokers’ conduct. But early attempts to bring class actions against opioid manufacturers encountered procedural barriers. Because of different factual circumstances surrounding individuals’ opioid use and clinical conditions, judges often deemed proposed class members to lack sufficiently common claims.

The article is here.

Wednesday, July 26, 2017

Everybody lies: how Google search reveals our darkest secrets

Seth Stephens-Davidowitz
The Guardian
Originally published July 9, 2017

Everybody lies. People lie about how many drinks they had on the way home. They lie about how often they go to the gym, how much those new shoes cost, whether they read that book. They call in sick when they’re not. They say they’ll be in touch when they won’t. They say it’s not about you when it is. They say they love you when they don’t. They say they’re happy while in the dumps. They say they like women when they really like men. People lie to friends. They lie to bosses. They lie to kids. They lie to parents. They lie to doctors. They lie to husbands. They lie to wives. They lie to themselves. And they damn sure lie to surveys. Here’s my brief survey for you:

Have you ever cheated in an exam?

Have you ever fantasised about killing someone?

Were you tempted to lie?

Many people underreport embarrassing behaviours and thoughts on surveys. They want to look good, even though most surveys are anonymous. This is called social desirability bias. An important paper in 1950 provided powerful evidence of how surveys can fall victim to such bias. Researchers collected data, from official sources, on the residents of Denver: what percentage of them voted, gave to charity, and owned a library card. They then surveyed the residents to see if the percentages would match. The results were, at the time, shocking. What the residents reported to the surveys was very different from the data the researchers had gathered. Even though nobody gave their names, people, in large numbers, exaggerated their voter registration status, voting behaviour, and charitable giving.
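
(Editorial aside, not part of the Guardian piece: the Denver design is essentially a record check, comparing what people tell a survey with what official records show for the same behaviors. A minimal Python sketch of that comparison follows; the percentages are invented placeholders, not the 1950 study's figures.)

# Hypothetical record-check validation: compare self-reported rates
# with rates taken from official records for the same behaviors.
# All numbers are invented; they are not the 1950 Denver figures.
behaviors = {
    # behavior: (share claiming it in the survey, share confirmed by records)
    "registered to vote": (0.83, 0.69),
    "voted in the last election": (0.73, 0.61),
    "gave to charity": (0.67, 0.33),
    "owns a library card": (0.30, 0.20),
}

for behavior, (reported, actual) in behaviors.items():
    gap = reported - actual
    print(f"{behavior:27s} reported {reported:.0%}  records {actual:.0%}  over-report {gap:+.0%}")

The gap column is the social desirability bias the excerpt describes: the more embarrassing it is to admit not doing something virtuous, the larger the over-report tends to be.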

The article is here.

Thursday, June 15, 2017

How the Science of “Blue Lies” May Explain Trump’s Support

Jeremy Adam Smith
Scientific American
Originally posted on March 24, 2017

Here are two excerpts:

This has led many people to ask themselves: How does the former reality-TV star get away with it? How can he tell so many lies and still win support from many Americans?

Journalists and researchers have suggested many answers, from a hyperbiased, segmented media to simple ignorance on the part of GOP voters. But there is another explanation that no one seems to have entertained. It is that Trump is telling “blue lies”—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen bonds among the members of that group.

(cut)

This research—and these stories—highlights a difficult truth about our species: we are intensely social creatures, but we are prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be prosocial—compassionate, empathetic, generous, honest—in their group and aggressively antisocial toward out-groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.

“People condone lying against enemy nations, and since many people now see those on the other side of American politics as enemies, they may feel that lies, when they recognize them, are appropriate means of warfare,” says George Edwards, a political scientist at Texas A&M University and one of the country’s leading scholars of the presidency.

The article is here.

Tuesday, May 9, 2017

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google’s DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there’s a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”
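
(Editorial aside, not from the Wired article: below is a minimal Python sketch of regret matching, the building block at the heart of counterfactual regret minimization. It is not Libratus's code, and it plays rock-paper-scissors rather than poker, but it shows the loop the excerpt describes: play against yourself, accumulate regret for the actions you did not take, then choose actions in proportion to positive regret.)

import random

ACTIONS = ["rock", "paper", "scissors"]
N = len(ACTIONS)

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie.
    if a == b:
        return 0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (a, b) in wins else -1

def strategy_from_regrets(regrets):
    # Regret matching: play each action in proportion to its positive cumulative regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / N] * N

def train(iterations=100000):
    regrets = [[0.0] * N, [0.0] * N]        # cumulative regrets for two self-play "players"
    strategy_sums = [[0.0] * N, [0.0] * N]  # running totals used to form the average strategy
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        choices = [random.choices(range(N), weights=s)[0] for s in strategies]
        for p in range(2):
            opp = choices[1 - p]
            earned = payoff(ACTIONS[choices[p]], ACTIONS[opp])
            for a in range(N):
                # Regret: what action a would have earned minus what was actually earned.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[opp]) - earned
                strategy_sums[p][a] += strategies[p][a]
    total = sum(strategy_sums[0])
    return [s / total for s in strategy_sums[0]]

print({a: round(p, 3) for a, p in zip(ACTIONS, train())})

After enough iterations the average strategy approaches the uniform Nash equilibrium (roughly one third each). Full counterfactual regret minimization applies the same bookkeeping to every decision point of a sequential game such as poker, which is the part that took Libratus months of training and trillions of hands.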

The article is here.

Thursday, February 23, 2017

How To Spot A Fake Science News Story

Alex Berezow
American Council on Science and Health
Originally published January 31, 2017

Here is an excerpt:

How to Detect a Fake Science News Story

Often, I have been asked, "How can you tell if a science story isn't legitimate?" Here are some red flags:

1) The article is very similar to the press release on which it was based. This indicates whether the article is science journalism or just public relations.

2) The article makes no attempt to explain methodology or avoids using any technical terminology. (This indicates the author may be incapable of understanding the original paper.)

3) The article does not indicate any limitations on the conclusions of the research. (For example, a study conducted entirely in mice cannot be used to draw firm conclusions about humans.)

4) The article treats established scientific facts and fringe ideas on equal terms.

5) The article is sensationalized; i.e., it draws huge, sweeping conclusions from a single study. (This is particularly common in stories on scary chemicals and miracle vegetables.)

6) The article fails to separate scientific evidence from science policy. Reasonable people should be able to agree on the former while debating the latter. (This arises from the fact that people subscribe to different values and priorities.)
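
(Editorial aside, not part of the ACSH piece: the first red flag lends itself to a crude automated check, namely measuring how closely an article's wording tracks the press release. The Python sketch below uses TF-IDF cosine similarity from scikit-learn; the toy strings and the 0.8 threshold are placeholders, not a validated cutoff.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def press_release_similarity(article, press_release):
    # Rough lexical similarity between two texts, from 0 (unrelated) to 1 (near-identical).
    vectors = TfidfVectorizer(stop_words="english").fit_transform([article, press_release])
    return float(cosine_similarity(vectors[0:1], vectors[1:2])[0, 0])

# Toy strings stand in for a real article and press release.
article = "New study shows miracle vegetable cures disease, researchers announce."
press_release = "Researchers announce a new study showing a miracle vegetable cures disease."
score = press_release_similarity(article, press_release)
print(f"similarity: {score:.2f}")
if score > 0.8:  # arbitrary illustrative threshold
    print("red flag: the article may be little more than a reworded press release")

High lexical similarity is only a hint, of course; the other red flags on the list still require a human reader.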

The article is here.

Saturday, October 15, 2016

Should non-disclosures be considered as morally equivalent to lies within the doctor–patient relationship?

Caitriona L Cox and Zoe Fritz
J Med Ethics 2016;42:632-635
doi:10.1136/medethics-2015-103014

Abstract

In modern practice, doctors who outright lie to their patients are often condemned, yet those who employ non-lying deceptions tend to be judged less critically. Some areas of non-disclosure have recently been challenged: not telling patients about resuscitation decisions; inadequately informing patients about risks of alternative procedures and withholding information about medical errors. Despite this, there remain many areas of clinical practice where non-disclosures of information are accepted, where lies about such information would not be. Using illustrative hypothetical situations, all based on common clinical practice, we explore the extent to which we should consider other deceptive practices in medicine to be morally equivalent to lying. We suggest that there is no significant moral difference between lying to a patient and intentionally withholding relevant information: non-disclosures could be subjected to Bok's ‘Test of Publicity’ to assess permissibility in the same way that lies are. The moral equivalence of lying and relevant non-disclosure is particularly compelling when the agent's motivations, and the consequences of the actions (from the patient's perspectives), are the same. We conclude that it is arbitrary to claim that there is anything inherently worse about lying to a patient to mislead them than intentionally deceiving them using other methods, such as euphemism or non-disclosure. We should question our intuition that non-lying deceptive practices in clinical practice are more permissible and should thus subject non-disclosures to the same scrutiny we afford to lies.

The article is here.

Friday, June 10, 2016

Lay attitudes toward deception in medicine: Theoretical considerations and empirical evidence

Jonathan Pugh, Guy Kahane, Hannah Maslen & Julian Savulescu
AJOB Empirical Bioethics
Volume 7, Issue 1, 2016

Here is an excerpt:

In these cases, two fundamental principles of medical ethics—the principle of beneficence and the principle of respect for autonomy—appear to conflict (Beauchamp and Childress 2009). While ethicists have long been interested in the conflict between these two principles in cases of deception in medical practice, there is comparatively little empirical evidence concerning whether lay people—the potential targets of such deception—regard deception as morally acceptable across different medical contexts. Empirical studies that have been carried out thus far have concerned patient attitudes toward deception in specific medical contexts, such as cancer treatment (Jenkins, Fallowfield, and Saul, 2001; Yu and Bernstein 2011), palliative care, (Fallowfield, Jenkins, and Beveridge 2002), or more generally the use of placebo treatments in medical practice (Chen and Johnson, 2009; Hull et al. 2013). Similar studies have also been carried out on physician attitudes toward deception in these contexts (Howick et al. 2013; Lynöe, Mattsson, and Sandlund 1993).

However, several important dimensions of deception in medicine have not yet been addressed. Previous empirical studies have not directly compared patient attitudes toward deception across different medical contexts, nor have they investigated the relationship between patient attitudes toward deception in medicine and their attitudes toward truthfulness in nonmedical contexts. It remains unclear whether observed attitudes to deception reflect more general views about deception or whether they are specific to the medical sphere or even to particular medical contexts.

The article is here.

Friday, March 18, 2016

Document Claims Drug Makers Deceived a Top Medical Journal

By Katie Thomas
The New York Times
Originally published March 1, 2016

It is a startling accusation, buried in a footnote in a legal briefing filed recently in federal court: Did two major pharmaceutical companies, in an effort to protect their blockbuster drug, mislead editors at one of the world’s most prestigious medical journals?

Lawyers for patients suing Johnson & Johnson and Bayer over the safety of the anticlotting drug Xarelto say the answer is yes, claiming that a letter published in The New England Journal of Medicine and written primarily by researchers at Duke University left out critical laboratory data. They claim the companies were complicit by staying silent, helping deceive the editors while the companies were in the midst of providing the very same data to regulators in the United States and Europe.

Duke and Johnson & Johnson contend that they worked independently of each other. Bayer declined to comment. And top editors at The New England Journal of Medicine said they did not know that separate laboratory data existed until a reporter contacted them last week, but they dismissed its relevance and said they stood by the article’s analysis.

The article is here.

Friday, January 8, 2016

Peer-Review Fraud — Hacking the Scientific Publication Process

Charlotte J. Haug
N Engl J Med 373(25), December 17, 2015

Here is an excerpt:

How is it possible to fake peer review? Moon, who studies medicinal plants, had set up a simple procedure. He gave journals recommendations for peer reviewers for his manuscripts, providing them with names and email addresses. But these addresses were ones he created, so the requests to review went directly to him or his colleagues. Not surprisingly, the editor would be sent favorable reviews — sometimes within hours after the reviewing requests had been sent out. The fallout from Moon’s confession: 28 articles in various journals published by Informa were retracted, and one editor resigned.

Peter Chen, who was an engineer at Taiwan’s National Pingtung University of Education at the time, developed a more sophisticated scheme: he constructed a “peer review and citation ring” in which he used 130 bogus e-mail addresses and fabricated identities to generate fake reviews. An editor at one of the journals published by Sage Publications became suspicious, sparking a lengthy and comprehensive investigation, which resulted in the retraction of 60 articles in July 2014.

The article is here. 

Sunday, August 23, 2015

Psychologist's Work For GCHQ Deception Unit Inflames Debate Among Peers

By Andrew Fishman
The Intercept
Originally posted August 7, 2015

A British psychologist is receiving sharp criticism from some professional peers for providing expert advice to help the U.K. surveillance agency GCHQ manipulate people online.

The debate brings into focus the question of how or whether psychologists should offer their expertise to spy agencies engaged in deception and propaganda.

Dr. Mandeep K. Dhami, in a 2011 paper, provided the controversial GCHQ spy unit JTRIG with advice, research pointers, training recommendations, and thoughts on psychological issues, with the goal of improving the unit’s performance and effectiveness. JTRIG’s operations have been referred to as “dirty tricks,” and Dhami’s paper notes that the unit’s own staff characterize their work using “terms such as ‘discredit,’ promote ‘distrust,’ ‘dissuade,’ ‘deceive,’ ‘disrupt,’ ‘delay,’ ‘deny,’ ‘denigrate/degrade,’ and ‘deter.’” The unit’s targets go beyond terrorists and foreign militaries and include groups considered “domestic extremist[s],” criminals, online “hacktivists,” and even “entire countries.”

The entire article is here.

Saturday, March 14, 2015

What pushes scientists to lie? The disturbing but familiar story of Haruko Obokata

By John Rasko and Carl Power
The Guardian
Originally posted February 18, 2015

Here is an excerpt:

Two obvious reasons spring to mind. First, unbelievable carelessness. Obokata drew suspicion upon her Nature papers by the inept way she manipulated images and plagiarised text. It is often easy to spot such transgressions, and the top science journals are supposed to check for them; but it is also easy enough to hide them. Nature’s editors are scratching their heads wondering how they let themselves be fooled by Obokata’s clumsy tricks. However, we are more surprised that she didn’t try harder to cover her tracks, especially since her whole career was at stake.

Second, hubris. If Obokata hadn’t tried to be a world-beater, chances are her sleights of hand would have gone unnoticed and she would still be looking forward to a long and happy career in science. Experiments usually escape the test of reproducibility unless they prove something particularly important, controversial or commercialisable. Stap cells tick all three of these boxes. Because Obokata claimed such a revolutionary discovery, everyone wanted to know exactly how she had done it and how they could do it themselves. By stepping into the limelight, she exposed her work to greater scrutiny than it could bear.

The entire article is here.

Monday, March 2, 2015

Physician guidelines for Googling patients need revision

By Jennifer Abbasi
Penn State News
Originally posted February 2, 2015

With the Internet and social media becoming woven into the modern medical practice, Penn State College of Medicine researchers contend that professional medical societies must update or amend their Internet guidelines to address when it is ethical to "Google" a patient.

"As time goes on, Googling patients is going to become more and more common, especially with doctors who grew up with the Internet," says Maria J. Baker, associate professor of medicine.

Baker has dealt with the question first hand in her role as a genetic counselor and medical geneticist. In a case that inspired her recent paper in the Journal of General Internal Medicine, a patient consulted her regarding prophylactic mastectomies. The patient's family history of cancer could not be verified and then a pathology report revealed that a melanoma the patient listed had actually been a non-cancerous, shape-changing mole.

The entire article is here.

Tuesday, February 17, 2015

How Diederik Stapel Became A Science Fraud

By Neuroskeptic
Discover Magazine Blog
Originally published January 20, 2015

Two years ago, Dutch science fraudster Diederik Stapel published a book, Ontsporing (“Derailment”), describing how he became one of the world’s leading social psychologists, before falling from grace when it emerged that he’d fabricated the data in dozens of papers.

The entire blog post is here.

Friday, January 16, 2015

The effects of punishment and appeals for honesty on children’s truth-telling behavior

By Victoria Talwar, Cindy Arruda, & Sarah Yachison
Journal of Experimental Child Psychology
Volume 130, February 2015, Pages 209–217

Abstract

This study examined the effectiveness of two types of verbal appeals (external and internal motivators) and expected punishment in 372 children’s (4- to 8-year-olds) truth-telling behavior about a transgression. External appeals to tell the truth emphasized social approval by stating that the experimenter would be happy if the children told the truth. Internal appeals to tell the truth emphasized internal standards of behavior by stating that the children would be happy with themselves if they told the truth. Results indicate that with age children are more likely to lie and maintain their lie during follow-up questioning. Overall, children in the External Appeal conditions told the truth significantly more compared with children in the No Appeal conditions. Children who heard internal appeals with no expected punishment were significantly less likely to lie compared with children who heard internal appeals when there was expected punishment. The results have important implications regarding the impact of socialization on children’s honesty and promoting children’s veracity in applied situations where children’s honesty is critical.

Highlights

• The effectiveness of verbal appeals and punishment on children’s honesty was examined.
• External appeals emphasized the importance of truth-telling for social approval.
• Internal appeals emphasized internal standards of behavior.
• Overall, children in the external appeal conditions were least likely to lie.
• The efficacy of internal appeals was attenuated by expected punishment.
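
(Editorial aside, not from the paper: the headline contrast, external appeal versus no appeal, could be tested on a simple two-by-two table of condition against truth-telling. The counts below are invented for illustration only; they are not Talwar and colleagues' data.)

from scipy.stats import chi2_contingency

# Rows: external appeal, no appeal. Columns: told truth, lied. Counts are invented.
table = [
    [52, 41],
    [31, 62],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

A small p-value on a table like this would be consistent with the reported finding that external appeals increased truth-telling relative to no appeal; the paper's own analysis also examines age and expected punishment.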

The entire article is here.