Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label research.

Sunday, August 30, 2020

Prosocial modeling: A meta-analytic review and synthesis

Jung, H., Seo, E., et al. (2020).
Psychological Bulletin, 146(8), 635–663.
https://doi.org/10.1037/bul0000235

Abstract
Exposure to prosocial models is commonly used to foster prosocial behavior in various domains of society. The aim of the current article is to apply meta-analytic techniques to synthesize several decades of research on prosocial modeling, and to examine the extent to which prosocial modeling elicits helping behavior. We also identify the theoretical and methodological variables that moderate the prosocial modeling effect. A meta-analysis of 88 studies with 25,354 participants found a moderate effect (g = 0.45) of prosocial modeling in eliciting subsequent helping behavior. The prosocial modeling effect generalized across different types of helping behaviors and different targets in need of help, and was robust to experimenter bias. Nevertheless, there was cross-societal variation in the magnitude of the modeling effect, and the magnitude of the prosocial modeling effect was larger when participants were presented with an opportunity to help the model (vs. a third party) after witnessing the model’s generosity. The prosocial modeling effect was also larger for studies with a higher percentage of female participants, when other people (vs. participants) benefitted from the model’s prosocial behavior, and when the model was rewarded for helping (vs. was not). We discuss publication bias in the prosocial modeling literature and the limitations of our analyses, and identify avenues for future research. We end with a discussion of the theoretical and practical implications of our findings.

Impact Statement

Public Significance Statement: This article synthesizes several decades of research on prosocial modeling and shows that witnessing others’ helpful acts encourages prosocial behavior through prosocial goal contagion. The magnitude of the prosocial modeling effect, however, varies across societies, gender and modeling contexts. The prosocial modeling effect is larger when the model is rewarded for helping. These results have important implications for our understanding of why, how, and when the prosocial modeling effect occurs. 
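For readers unfamiliar with the g = 0.45 figure: Hedges' g is a standardized mean difference, essentially Cohen's d with a small-sample correction. A minimal sketch of the computation, using made-up group summaries chosen only to land near the reported value (these numbers are not from the paper):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled          # Cohen's d
    j = 1 - 3 / (4 * df - 1)          # small-sample correction factor
    return d * j

# Hypothetical group summaries: model-exposed vs. control helping scores
g = hedges_g(m1=5.2, s1=2.0, n1=60, m2=4.3, s2=2.0, n2=60)
print(round(g, 2))  # -> 0.45
```

A value of 0.45 means the model-exposed group's average helping exceeded the control group's by a bit under half a pooled standard deviation, which is conventionally read as a moderate effect.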

Sunday, July 26, 2020

The trolley problem problem

James Wilson
aeon.com
Originally posted 20 May 20

Here is an excerpt:

Some philosophers think that ethical thought experiments either are, or have a strong affinity with, scientific experiments. On such a view, thought experiments, like other experiments, when well-designed can allow knowledge to be built via rigorous and unbiased testing of hypotheses. Just as in the randomised controlled trials in which new pharmaceuticals are tested, the circumstances and the types of control in thought experiments could be such as to make the situation very unlike everyday situations, but that is a virtue rather than a vice, insofar as it allows ethical hypotheses to be tested cleanly and rigorously.

If thought experiments are – literally – experiments, this helps to explain how they might provide insights into the way the world is. But it would also mean that thought experiments would inherit the two methodological challenges that attend to experiments more generally, known as internal and external validity. Internal validity relates to the extent to which an experiment succeeds in providing an unbiased test of the variable or hypothesis in question. External validity relates to the extent to which the results in the controlled environment translate to other contexts, and in particular to our own. External validity is a major challenge, as the very features that make an environment controlled and suitable to obtain internal validity often make it problematically different from the uncontrolled environments in which interventions need to be applied.

There are significant challenges with both the internal and the external validity of thought experiments. It is useful to compare the kind of care with which medical researchers or psychologists design experiments – including validation of questionnaires, double-blinding of trials, placebo control, power calculations to determine the cohort size required and so on – with the typically rather more casual approach taken by philosophers. Until recently, there has been little systematic attempt within normative ethics to test variations of different phrasing of thought experiments, or to think about framing effects, or sample sizes; or the extent to which the results from the thought experiment are supposed to be universal or could be affected by variables such as gender, class or culture. A central ambiguity has been whether the implied readers of ethical thought experiments should be just anyone, or other philosophers; and, as a corollary, whether judgments elicited are supposed to be expert judgments, or the judgments of ordinary human beings. As the vast majority of ethical thought experiments in fact remain confined to academic journals, and are tested only informally on other philosophers, de facto they are tested only on those with expertise in the construction of ethical theories, rather than more generally representative samples or those with expertise in the contexts that the thought experiments purport to describe.

The info is here.

Friday, June 12, 2020

The science behind human irrationality just passed a huge test

Cathleen O’Grady
Ars Technica
Originally posted 22 May 20

Here are two excerpts:

People don’t approach things like loss and risk as purely rational agents. We weigh losses more heavily than gains. We feel like the difference between 1 percent and 2 percent is bigger than the difference between 50 percent and 51 percent. This observation of our irrationality is one of the most influential concepts in behavioral science: skyscrapers of research have been built on Daniel Kahneman and Amos Tversky’s foundational 1979 paper that first described the paradoxes of how people make decisions when faced with uncertainty.

So when researchers raised questions about the foundations of those skyscrapers, it caused alarm. A large team of researchers set out to check whether the results of Kahneman and Tversky’s crucial paper would replicate if the same experiment were conducted now.

Behavioral scientists can heave a sigh of relief: the original results held up, and robustly. With more than 4,000 participants in 19 countries, nearly every question in the original paper was answered the same way by people today as by their 1970s counterparts.

(cut)

Many of the results in the replication are more moderate than in the original paper. That’s a tendency that has been found in other replications and is probably best explained by the small samples in the original research. Getting accurate results (which often means less extreme results) needs big samples to get a proper read on how people in general behave. Smaller sample sizes were typical of the work at the time, and even today, it’s often hard to justify the effort of starting work on a new question with a huge sample size.
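The small-sample point can be illustrated with a quick simulation (all numbers hypothetical): studies with few participants produce effect estimates that scatter widely around the true effect, so if only the most striking results reach print, the published average overstates the truth, and later large-sample replications look "more moderate".

```python
import random
import statistics

def observed_effect(true_d, n, rng):
    """Observed standardized effect in one two-group study with n per group."""
    treatment = [rng.gauss(true_d, 1.0) for _ in range(n)]
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

rng = random.Random(0)
true_d = 0.3
small = [observed_effect(true_d, 20, rng) for _ in range(5000)]
large = [observed_effect(true_d, 500, rng) for _ in range(5000)]

# Small studies scatter far more widely around the true effect...
spread_small = statistics.stdev(small)
spread_large = statistics.stdev(large)

# ...so if only the "impressive" (largest) small-sample results get
# published, the published average exaggerates the true effect.
published_small = statistics.mean(sorted(small)[-500:])  # top 10%
print(spread_small > spread_large, published_small > true_d)
```

Running this shows the small-sample estimates spreading several times wider than the large-sample ones, and the top slice of small studies averaging well above the true effect of 0.3.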

The info is here.

Friday, June 5, 2020

These are the Decade’s Biggest Discoveries in Human Evolution

Briana Pobiner and Rick Potts
smithsonianmag.com
Originally posted 28 April 20

Here is an excerpt:

We’re older than we thought

Stone tools aren’t the only things that are older than we thought. Humans are too.

Just three years ago, a team of scientists made a discovery that pushed back the origin of our species, Homo sapiens. The team re-excavated a cave in Morocco where a group of miners found skulls in 1961. They collected sediments and more fossils to help them identify and date the remains. Using CT scans, the scientists confirmed that the remains belonged to our species. They also used modern dating techniques on the remains. To their surprise, the remains dated to about 300,000 years ago, which means that our species originated 100,000 years earlier than we thought.

Social Networking Isn’t New

With platforms like Facebook, Twitter and Instagram, it's hard to imagine social networking being old. But it is. And, now, it’s even older than we thought.

In 2018, scientists discovered that social networks were used to trade obsidian, valuable for its sharp edges, by around 300,000 years ago. After excavating and analyzing stone tools from southern Kenya, the team found that the stones chemically matched obsidian sources up to 55 miles away in multiple directions. The findings show how early humans related to and kept track of a larger social world.

We left Africa earlier than we thought

We’ve long known that early humans migrated from Africa not once but at least twice. But we didn’t know just how early those migrations happened.

We thought Homo erectus spread beyond Africa as far as eastern Asia by about 1.7 million years ago. But, in 2018, scientists dated new stone tools and fossils from China to about 2.1 million years ago, pushing the Homo erectus migration to Asia back by 400,000 years.

The info is here.

Sunday, May 31, 2020

The Answer to a COVID-19 Vaccine May Lie in Our Genes, But ...

Ifeoma Ajunwa & Forrest Briscoe
Scientific American
Originally posted 13 May 2020

Here is an excerpt:

Although the rationale for expanded genetic testing is obviously meant for the greater good, such testing could also bring with it a host of privacy and economic harms. In the past, genetic testing has also been associated with employment discrimination. Even before the current crisis, companies like 23andMe and Ancestry assembled and started operating their own private long-term large-scale databases of U.S. citizens’ genetic and health data. 23andMe and Ancestry recently announced they would use their databases to identify genetic factors that predict COVID-19 susceptibility.

Other companies are growing similar databases, for a range of purposes. And the NIH’s AllofUs program is constructing a genetic database, owned by the federal government, in which data from one million people will be used to study various diseases. These new developments indicate an urgent need for appropriate genetic data governance.

Leaders from the biomedical research community recently proposed a voluntary code of conduct for organizations constructing and sharing genetic databases. We believe that the public has a right to understand the risks of genetic databases and a right to have a say in how those databases will be governed. To ascertain public expectations about genetic data governance, we surveyed over two thousand (n=2,020) individuals who altogether are representative of the general U.S. population. After educating respondents about the key benefits and risks associated with DNA databases—using information from recent mainstream news reports—we asked how willing they would be to provide their DNA data for such a database.

The info is here.

Saturday, April 18, 2020

Experimental Philosophical Bioethics

Brian Earp and others
AJOB Empirical Bioethics (2020), 11:1, 30-33
DOI: 10.1080/23294515.2020.1714792

There is a rich tradition in bioethics of gathering empirical data to inform, supplement, or test the implications of normative ethical analysis. To this end, bioethicists have drawn on diverse methods, including qualitative interviews, focus groups, ethnographic studies, and opinion surveys to advance understanding of key issues in bioethics. In so doing, they have developed strong ties with neighboring disciplines such as anthropology, history, law, and sociology.  Collectively, these lines of research have flourished in the broader field of “empirical bioethics” for more than 30 years (Sugarman and Sulmasy 2010).

More recently, philosophers from outside the field of bioethics have similarly employed empirical
methods—drawn primarily from psychology, the cognitive sciences, economics, and related disciplines—to advance theoretical debates. This approach, which has come to be called experimental philosophy (or x-phi), relies primarily on controlled experiments to interrogate the concepts, intuitions, reasoning, implicit mental processes, and empirical assumptions about the mind that play a role in traditional philosophical arguments (Knobe et al. 2012). Within the moral domain, for example, experimental philosophy has begun to contribute to long-standing debates about the nature of moral judgment and reasoning; the sources of our moral emotions and biases; the qualities of a good person or a good life; and the psychological basis of moral theory itself (Alfano, Loeb, and Plakias 2018). We believe that experimental philosophical bioethics—or “bioxphi”—can similarly contribute to bioethical scholarship and debate. Here, we introduce this emerging discipline, explain how it is distinct from empirical bioethics more broadly construed, and attempt to characterize how it might advance theory and practice in this area.

The paper is here.

Saturday, April 11, 2020

The Tyranny of Time: How Long Does Effective Therapy Really Take?

Jonathan Shedler & Enrico Gnaulati
Psychotherapy Networker
Originally posted March/April 20

Here is an excerpt:

Like the Consumer Reports study, this study also found a dose–response relation between therapy sessions and improvement. In this case, the longer therapy continued, the more clients achieved clinically significant change. So just how much therapy did it take? It took 21 sessions, or about six months of weekly therapy, for 50 percent of clients to see clinically significant change. It took more than 40 sessions, almost a year of weekly therapy, for 75 percent to see clinically significant change.
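As a back-of-the-envelope illustration of the dose–response relation, one can pass a log-linear curve through the two reported data points (21 sessions → 50% improved, 40 sessions → 75% improved). The log-linear form is our illustrative assumption, not the study's model:

```python
import math

# Two data points reported in the article:
#   21 sessions -> 50% clinically significant change
#   40 sessions -> 75% clinically significant change
# Fit p = a + b * ln(sessions) exactly through both points.
b = (0.75 - 0.50) / (math.log(40) - math.log(21))
a = 0.50 - b * math.log(21)

def pct_improved(sessions):
    """Interpolated share of clients with clinically significant change."""
    return a + b * math.log(sessions)

print(round(pct_improved(21), 2), round(pct_improved(40), 2))
```

By construction the curve reproduces both reported figures, and between them it predicts intermediate rates (for example, roughly two-thirds of clients around 30 sessions); its main use is to make the diminishing-returns shape of the dose–response relation visible.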

Information from the surveys of clients and therapists turned out to be pretty spot on. Three independent data sources converge on similar time frames. Every client is different, and no one can predict how much therapy is enough for a specific person, but on average, clinically meaningful change begins around the six-month mark and grows from there. And while some people will get what they need with less therapy, others will need a good deal more.

This is consistent with what clinical theorists have been telling us for the better part of a century. It should come as no surprise. Nothing of deep and lasting value is cheap or easy, and changing oneself and the course of one’s life may be most valuable of all.

Consider what it takes to master any new and complex skill, say learning a language, playing a musical instrument, learning to ski, or becoming adept at carpentry. With six months of practice, you might attain beginner- or novice-level proficiency, maybe. If someone promised to make you an expert in six months, you’d suspect they were selling snake oil. Meaningful personal development takes time and effort. Why would psychotherapy be any different?

The info is here.

Monday, March 30, 2020

The race to develop coronavirus treatments pushes the ethics of clinical trials

Olivia Goldhill
Quartz.com
Originally posted 28 March 20

Here is an excerpt:

But others are more pragmatic. Arthur Caplan, director of NYU Langone’s Division of Medical Ethics says that when doctors are faced with suffering patients, it’s ethical for them to use drugs that have been approved for other health conditions as treatments. This happened with Ebola, swine flu, Zika, and now coronavirus, he says.

Some of the first coronavirus patients in China, for example, were experimentally given the HIV treatment lopinavir–ritonavir and the rheumatoid arthritis drug Actemra. Now, as the virus continues its rampage around the globe, doctors are eyeballing an increasing number of treatment possibilities—and dealing with the challenging ethics of testing their efficacy while making the safest choices for their patients.

Controlled trials—with caveats

When choosing to use an experimental treatment, doctors have to be as methodical as possible—taking careful note of how sick patients are when given treatment, the dose and timing of medication, and how they fared. “It’s not a study, not controlled, but you want observations to be systematic,” says Caplan.

If, after a couple of weeks and 10 or 20 patients, the drug doesn’t seem to cause active harm, Caplan says scientists can quickly move to the first stage of clinical research.

Many of the current coronavirus clinical trials are based on those early experimental treatments. Early research on lopinavir–ritonavir suggests that the drug is not effective, though as the first study was small, researchers plan to investigate further. There are also ongoing trials into arthritis medication Actemra,  antimalarial chloroquine, and Japanese flu drug favipiravir.

While clinical trials typically take months to years to get started, Li believes the current coronavirus trials will set records for speed: “I don’t think they could go any faster,” she says. It helps that there are a lot of coronavirus patients, so it’s easy to quickly enroll study participants.

The info is here.

Tuesday, March 17, 2020

Some Researchers Wear Yellow Pants, but Even Fewer Participants Read Consent Forms

B. Douglas, E. McGorray, & P. Ewell
PsyArXiv
Originally published 5 Feb 20

Abstract

Though consent forms include important information, those experienced with behavioral research often observe that participants do not carefully read consent forms. Three studies examined participants’ reading of consent forms for in-person experiments. In each study, we inserted the phrase “some researchers wear yellow pants” into sections of the consent form and measured participants’ reading of the form by testing their recall of the color yellow. In Study 1, we found that the majority of participants did not read consent forms thoroughly. This suggests that overall, participants sign consent forms that they have not read, confirming what has been observed anecdotally and documented in other research domains. Study 2 examined which sections of consent forms participants read and found that participants were more likely to read the first two sections of a consent form (procedure and risks) than later sections (benefits and anonymity and confidentiality). Given that rates of recall of the target phrase were under 70% even when the sentence was inserted into earlier sections of the form, we explored ways to improve participant reading in Study 3. Theorizing that the presence of a researcher may influence participants’ retention of the form, we assigned participants to read the form with or without a researcher present. Results indicated that removing the researcher from the room while participants read the consent form decreased recall of the target phrase. Implications of these results and suggestions for future researchers are discussed.

The research is here.

Thursday, March 12, 2020

Business gets ready to trip

Jeffrey O'Brien
Forbes.com
Originally posted 17 Feb 20

Here is an excerpt:

The need for a change in approach is clear. “Mental illness” is an absurdly large grab bag of disorders, but taken as a whole, it exacts an astronomical toll on society. The National Institute of Mental Health says nearly one in five U.S. adults lives with some form of it. According to the World Health Organization, 300 million people worldwide have an anxiety disorder. And there’s a death by suicide every 40 seconds—that includes 20 veterans a day, according to the U.S. Department of Veterans Affairs. Almost 21 million Americans have at least one addiction, per the U.S. Surgeon General, and things are only getting worse. The Lancet Commission—a group of experts in psychiatry, public health, neuroscience, etc.—projects that the cost of mental disorders, currently on the rise in every country, will reach $16 trillion by 2030, including lost productivity. The current standard of care clearly benefits some. Antidepressant medication sales in 2017 surpassed $14 billion. But SSRI drugs—antidepressants that boost the level of serotonin in the brain—can take months to take hold; the first prescription is effective only about 30% of the time. Up to 15% of benzodiazepine users become addicted, and adults on antidepressants are 2.5 times as likely to attempt suicide.

Meanwhile, in various clinical trials, psychedelics are demonstrating both safety and efficacy across the terrain. Scientific papers have been popping up like, well, mushrooms after a good soaking, producing data to blow away conventional methods. Psilocybin, the psychoactive ingredient in magic mushrooms, has been shown to cause a rapid and sustained reduction in anxiety and depression in a group of patients with life-threatening cancer. When paired with counseling, it has improved the ability of some patients suffering from treatment-resistant depression to recognize and process emotion on people’s faces. That correlates to reducing anhedonia, or the inability to feel pleasure. The other psychedelic agent most commonly being studied, MDMA, commonly called ecstasy or molly, has in some scientific studies proved highly effective at treating patients with persistent PTSD. In one Phase II trial of 107 patients who’d had PTSD for an average of over 17 years, 56% no longer showed signs of the affliction after one session of MDMA-assisted therapy. Psychedelics are helping to break addictions, as well. A combination of psilocybin and cognitive therapy enabled 80% of one study’s participants to kick cigarettes for at least six months. Compare that with the 35% for the most effective available smoking-cessation drug, varenicline.

The info is here.

Friday, January 31, 2020

Most scientists 'can't replicate studies by their peers'

Tom Feilden
BBC.com
Originally posted 22 Feb 17

Here is an excerpt:

The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.

Sadly nothing, it seems, could be further from the truth.

After meticulous research involving painstaking attention to detail over several years (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.

Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.

"It's worrying because replication is supposed to be a hallmark of scientific integrity," says Dr Errington.

Concern over the reliability of the results published in scientific literature has been growing for some time.

According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist's experiments.

Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.

"I had a crisis of confidence. I thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out to be a scientist."

The problem, it turned out, was not with Marcus Munafo's science, but with the way the scientific literature had been "tidied up" to present a much clearer, more robust outcome.

The info is here.

Friday, January 24, 2020

Psychology accused of ‘collective self-deception’ over results

Jack Grove
The Times Higher Education
Originally published 10 Dec 19

Here is an excerpt:

If psychologists are serious about doing research that could make “useful real-world predictions”, rather than conducting highly contextualised studies, they should use “much larger and more complex datasets, experimental designs and statistical models”, Dr Yarkoni advises.

He also suggests that the “sweeping claims” made by many papers bear little relation to their results, maintaining that a “huge proportion of the quantitative inferences drawn in the published psychology literature are so inductively weak as to be at best questionable and at worst utterly insensible”.

Many psychologists were indulging in a “collective self-deception” and should start “acknowledging the fundamentally qualitative nature of their work”, he says, stating that “a good deal of what currently passes for empirical psychology is already best understood as insightful qualitative analysis dressed up as shoddy quantitative science”.

That would mean no longer including “scientific-looking inferential statistics” within papers, whose appearance could be considered an “elaborate rhetorical ruse used to mathematicise people into believing claims they would otherwise find logically unsound”.

The info is here.

Wednesday, January 8, 2020

Many Public Universities Refuse to Reveal Professors’ Conflicts of Interest

Annie Waldman and David Armstrong
Chronicle of Higher Ed and
ProPublica
Originally posted 6 Dec 19

Here is an excerpt:

All too often, what’s publicly known about faculty members’ outside activities, even those that could influence their teaching, research, or public-policy views, depends on where they teach. Academic conflicts of interest elude scrutiny because transparency varies from one university and one state to the next. ProPublica discovered those inconsistencies over the past year as we sought faculty outside-income forms from at least one public university in all 50 states.

About 20 state universities complied with our requests. The rest didn't, often citing exemptions from public-information laws for personnel records, or offering to provide the documents only if ProPublica first paid thousands of dollars. And even among those that released at least some records, there’s a wide range in what types of information are collected and disclosed, and whether faculty members actually fill out the forms as required. Then there's the universe of private universities that aren't subject to public-records laws and don't disclose professors’ potential conflicts at all. While researchers are supposed to acknowledge industry ties in scientific journals, those caveats generally don’t list compensation amounts.

We've accumulated by far the largest collection of university faculty and staff conflict-of-interest reports available anywhere, with more than 29,000 disclosures from state schools, which you can see in our new Dollars for Profs database. But there are tens of thousands that we haven't been able to get from other public universities, and countless more from private universities.

Sheldon Krimsky, a bioethics expert and professor of urban and environmental planning and policy at Tufts University, said that the fractured disclosure landscape deprives the public of key information for understanding potential bias in research. “Financial conflicts of interest influence outcomes,” he said. “Even if the researchers are honorable people, they don’t know how the interests affect their own research. Even honorable people can’t figure out why they have a predilection toward certain views. It’s because they internalize the values of people from whom they are getting funding, even if it’s not on the surface."

The info is here.

Friday, December 6, 2019

The female problem: how male bias in medical trials ruined women's health

Gabrielle Jackson
The Guardian
Originally posted 13 Nov 19

Here is an excerpt:

The result of this male bias in research extends beyond clinical practice. Of the 10 prescription drugs taken off the market by the US Food and Drug Administration between 1997 and 2000 due to severe adverse effects, eight caused greater health risks in women. A 2018 study found this was a result of “serious male biases in basic, preclinical, and clinical research”.

The campaign had an effect in the US: in 1993, the FDA and the NIH mandated the inclusion of women in clinical trials. Between the 70s and 90s, these organisations and many other national and international regulators had a policy that ruled out women of so-called childbearing potential from early-stage drug trials.

The reasoning went like this: since women are born with all the eggs they will ever produce, they should be excluded from drug trials in case the drug proves toxic and impedes their ability to reproduce in the future.

The result was that all women were excluded from trials, regardless of their age, gender status, sexual orientation or wish or ability to bear children. Men, on the other hand, constantly reproduce their sperm, meaning they represent a reduced risk. It sounds like a sensible policy, except it treats all women like walking wombs and has introduced a huge bias into the health of the human race.

In their 1994 book Outrageous Practices, Leslie Laurence and Beth Weinhouse wrote: “It defies logic for researchers to acknowledge gender difference by claiming women’s hormones can affect study results – for instance, by affecting drug metabolism – but then to ignore these differences, study only men and extrapolate the results to women.”

The info is here.

Saturday, October 26, 2019

Treatments for the Prevention and Management of Suicide: A Systematic Review.

D'Anci KE, Uhl S, Giradi G, et al.
Ann Intern Med. 
doi: 10.7326/M19-0869

Abstract

Background:
Suicide is a growing public health problem, with the national rate in the United States increasing by 30% from 2000 to 2016.

Purpose:
To assess the benefits and harms of nonpharmacologic and pharmacologic interventions to prevent suicide and reduce suicide behaviors in at-risk adults.

Conclusion:
Both CBT and DBT showed modest benefit in reducing suicidal ideation compared with TAU or wait-list control, and CBT also reduced suicide attempts compared with TAU. Ketamine and lithium reduced the rate of suicide compared with placebo, but there was limited information on harms. Limited data are available to support the efficacy of other nonpharmacologic or pharmacologic interventions.

Discussion

In this SR, we reviewed and synthesized evidence from 8 SRs and 15 RCTs of nonpharmacologic and pharmacologic interventions intended to prevent suicide in at-risk persons. These interventions are a subset of topics included in the updated VA/DoD 2019 CPG for assessment and management of patients at risk for suicide. The full final guideline is available from the VA Web site (www.healthquality.va.gov).

Nonpharmacologic interventions encompassed a range of approaches delivered either face-to-face or via the Internet or other technology. We found moderate-strength evidence supporting the use of face-to-face or Internet-delivered CBT in reducing suicide attempts, suicidal ideation, and hopelessness compared with TAU. We found low-strength evidence suggesting that CBT was not effective in reducing suicides. However, rates of suicide were generally low in the included studies, which limits our ability to draw firm conclusions about this outcome. Data from small studies provide low-strength evidence supporting the use of DBT over client-oriented therapy or control for reducing suicidal ideation. For other outcomes and other comparisons, we found no benefit of DBT. There was low-strength evidence supporting use of WHO-BIC to reduce suicide, CRP to reduce suicide attempts, and Window to Hope to reduce suicidal ideation and hopelessness.

Wednesday, October 9, 2019

Whistle-blowers act out of a sense of morality

Alice Walton
review.chicagobooth.edu
Originally posted September 16, 2019

Here is an excerpt:

To understand the factors that predict the likelihood of whistle-blowing, the researchers analyzed data from more than 42,000 participants in the ongoing Merit Principles Survey, which has polled US government employees since 1979, and which covers whistle-blowing. Respondents answer questions about their past experiences with unethical behavior, the approaches they’d take in dealing with future unethical behavior, and their personal characteristics, including their concern for others and their feelings about their organizations.

Concern for others was the strongest predictor of whistle-blowing, the researchers find. This was true both of people who had already blown the whistle on bad behavior and of people who expected they might in the future.

Loyalty to an immediate community—or ingroup, in psychological terms—was also linked to whistle-blowing, but in an inverse way. “The greater people’s concern for loyalty, the less likely they were to blow the whistle,” write the researchers. 

Organizational factors—such as people’s perceptions about their employer, their concern for their job, and their level of motivation or engagement—were largely unconnected to whether people spoke up. The only ones that appeared to matter were how fair people perceived their organization to be, as well as the extent to which the organization educated its employees about ways to expose bad behavior and the rights of whistle-blowers. The data suggest these two factors were linked to whether whistle-blowers opted to address the unethical behavior through internal or external avenues. 

The info is here.

Friday, October 4, 2019

When Patients Request Unproven Treatments

Casey Humbyrd and Matthew Wynia
medscape.com
Originally posted March 25, 2019

Here is an excerpt:

Ethicists have made a variety of arguments about these injections. The primary arguments against them have focused on the perils of physicians becoming sellers of "snake oil," promising outlandish benefits and charging huge sums for treatments that might not work. The conflict of interest inherent in making money by providing an unproven therapy is a legitimate ethical concern. These treatments are very expensive and, as they are unproven, are rarely covered by insurance. As a result, some patients have turned to crowdfunding sites to pay for these questionable treatments.

But the profit motive may not be the most important ethical issue at stake. If it were removed, hypothetically, and physicians provided the injections at cost, would that make this practice more acceptable?

No. We believe that physicians who offer these injections are skipping the most important step in the ethical adoption of any new treatment modality: research that clarifies the benefits and risks. The costs of omitting that important step are much more than just monetary.

For the sake of argument, let's assume that stem cells are tremendously successful and that they heal arthritic joints, making them as good as new. By selling these injections to those who can pay before the treatment is backed by research, physicians are ensuring unavailability to patients who can't pay, because insurance won't cover unproven treatments.

The info is here.

Friday, September 20, 2019

Why Moral Emotions Go Viral Online

Ana P. Gantman, William J. Brady, & Jay Van Bavel
Scientific American
Originally posted August 20, 2019

Social media is changing the character of our political conversations. As many have pointed out, our attention is a scarce resource that politicians and journalists are constantly fighting to attract, and the online world has become a primary trigger of our moral outrage. These two ideas, it turns out, are fundamentally related. According to our forthcoming paper, words that appeal to one’s sense of right and wrong are particularly effective at capturing attention, which may help explain this new political reality.

It occurred to us that the way people scroll through their social media feeds is very similar to a classic method psychologists use to measure people’s ability to pay attention. When we mindlessly browse social media, we are rapidly presenting a stream of verbal stimuli to ourselves. Psychologists have been studying this issue in the lab for decades, displaying to subjects a rapid succession of words, one after another, in the blink of an eye. In the lab, people are asked to find a target word among a collection of other words. Once they find it, there’s a short window of time in which that word captures their attention. If there’s a second target word in that window, most people don’t even see it—almost as if they had blinked with their eyes open.

There is an exception: if the second target word is emotionally significant to the viewer, that person will see it. Some words are so important to us that they are able to capture our attention even when we are already paying attention to something else.

The info is here.

Saturday, September 7, 2019

Debunking the Stanford Prison Experiment

Thibault Le Texier
PsyArXiv
Originally posted August 8, 2019

Abstract

The Stanford Prison Experiment (SPE) is one of psychology’s most famous studies. It has been criticized on many grounds, and yet a majority of textbook authors have ignored these criticisms in their discussions of the SPE, thereby misleading both students and the general public about the study’s questionable scientific validity. Data collected from a thorough investigation of the SPE archives and interviews with 15 of the participants in the experiment further question the study’s scientific merit. These data are not only supportive of previous criticisms of the SPE, such as the presence of demand characteristics, but provide new criticisms of the SPE based on heretofore unknown information. These new criticisms include the biased and incomplete collection of data, the extent to which the SPE drew on a prison experiment devised and conducted by students in one of Zimbardo’s classes 3 months earlier, the fact that the guards received precise instructions regarding the treatment of the prisoners, the fact that the guards were not told they were subjects, and the fact that participants were almost never completely immersed in the situation. Possible explanations of the inaccurate textbook portrayal and general misperception of the SPE’s scientific validity over the past 5 decades, in spite of its flaws and shortcomings, are discussed.

From the Conclusion:

4) The SPE survived for almost 50 years because no researcher has been through its archives. This was, I must say, one of the most puzzling facts that I discovered during my investigation. The experiment had been criticized by major figures such as Fromm (1973) and Festinger (1980), and the accounts of the experiment have been far from disclosing all of the details of the study; yet no psychologist seems to have wanted to know what exactly the archives contained. Is it a lack of curiosity? Is it an excessive respect for the tenured professor of a prestigious university? Is it due to possible access restrictions imposed by Zimbardo? Is it because archival analyses are a time-consuming and work-intensive activity? Is it due to the belief that no archives had been kept? The answer remains unknown. The recent replication crisis in psychology has shown, however, that psychologists are not indifferent to the functioning of science. This crisis can be seen as a sign of the good health and vigor of the field of psychology, which can correct its errors and improve its methodology (Chambers, 2017, p. 171-217). Hopefully, the present study will contribute to psychology’s epistemological self-examination, and expose the SPE for what it was: an incredibly flawed study that should have died an early death.

Sunday, August 18, 2019

Social physics

Despite the vagaries of free will and circumstance, human behaviour in bulk is far more predictable than we like to imagine

Ian Stewart
www.aeon.co
Originally posted July 9, 2019

Here is an excerpt:

Polling organisations use a variety of methods to try to minimise these sources of error. Many of these methods are mathematical, but psychological and other factors also come into consideration. Most of us know of stories where polls have confidently indicated the wrong result, and it seems to be happening more often. Special factors are sometimes invoked to ‘explain’ why, such as a sudden late swing in opinion, or people deliberately lying to make the opposition think it’s going to win and become complacent. Nevertheless, when performed competently, polling has a fairly good track-record overall. It provides a useful tool for reducing uncertainty. Exit polls, where people are asked whom they voted for soon after they cast their vote, are often very accurate, giving the correct result long before the official vote count reveals it, and can’t influence the result.

Today, the term ‘social physics’ has acquired a less metaphorical meaning. Rapid progress in information technology has led to the ‘big data’ revolution, in which gigantic quantities of information can be obtained and processed. Patterns of human behaviour can be extracted from records of credit-card purchases, telephone calls and emails. Words suddenly becoming more common on social media, such as ‘demagogue’ during the 2016 US presidential election, can be clues to hot political issues.

The mathematical challenge is to find effective ways to extract meaningful patterns from masses of unstructured information, and many new methods are being developed to meet it.
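The kind of pattern extraction described above, such as spotting words that suddenly become more common on social media, can be illustrated with a minimal sketch. This is not the authors' method; it is a toy illustration assuming two batches of posts from an earlier and a later time window, with made-up threshold values:

```python
from collections import Counter

def spiking_words(earlier_posts, later_posts, min_count=2, ratio=2.0):
    """Flag words whose per-post frequency jumps between two time windows.

    A word 'spikes' when it appears at least `min_count` times in the later
    window and at a per-post rate at least `ratio` times its earlier rate.
    Thresholds and smoothing are illustrative, not empirically tuned.
    """
    def rates(posts):
        counts = Counter(
            w.lower().strip(".,!?") for p in posts for w in p.split()
        )
        n = max(len(posts), 1)
        return {w: c / n for w, c in counts.items()}, counts

    early_rate, _ = rates(earlier_posts)
    late_rate, late_counts = rates(later_posts)
    # Words absent earlier get a small smoothed baseline rather than zero,
    # so a single earlier absence does not automatically count as a spike.
    baseline = 1 / (len(earlier_posts) + 1)
    return sorted(
        w for w, r in late_rate.items()
        if late_counts[w] >= min_count and r >= ratio * early_rate.get(w, baseline)
    )

earlier = ["nice weather today", "good game last night"]
later = [
    "the demagogue speaks again",
    "demagogue on every channel",
    "such a demagogue",
]
print(spiking_words(earlier, later))  # → ['demagogue']
```

In practice one would work with far larger corpora and more careful tokenization and statistics, but the core idea, comparing word rates across time windows, is the same.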

The info is here.