Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Research Method.

Monday, November 17, 2014

Is Social Psychology Biased Against Republicans?

By Maria Konnikova
The New Yorker
Originally published October 30, 2014

Here is an excerpt:

Social psychology, Haidt went on, had an obvious problem: a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity. It discouraged conservative students from joining the field, and it discouraged conservative members from pursuing certain lines of argument. It also introduced bias into research questions, methodology, and, ultimately, publications. The topics that social psychologists chose to study and how they chose to study them, he argued, suffered from homogeneity. The effect was limited, Haidt was quick to point out, to areas that concerned political ideology and politicized notions, like race, gender, stereotyping, and power and inequality. “It’s not like the whole field is undercut, but when it comes to research on controversial topics, the effect is most pronounced,” he later told me. (Haidt has now put his remarks in more formal terms, complete with data, in a paper forthcoming this winter in Behavioral and Brain Sciences.)

The entire article is here.

Monday, December 2, 2013

The Pervasive Problem With Placebos in Psychology

By Walter R. Boot, Daniel J. Simons, Cary Stothart, and Cassie Stutts
Perspectives on Psychological Science, July 2013, Vol. 8, No. 4, 445-454
doi: 10.1177/1745691613491271

Abstract

To draw causal conclusions about the efficacy of a psychological intervention, researchers must compare the treatment condition with a control group that accounts for improvements caused by factors other than the treatment. Using an active control helps to control for the possibility that improvement by the experimental group resulted from a placebo effect. Although active control groups are superior to “no-contact” controls, only when the active control group has the same expectation of improvement as the experimental group can we attribute differential improvements to the potency of the treatment. Despite the need to match expectations between treatment and control groups, almost no psychological interventions do so. This failure to control for expectations is not a minor omission—it is a fundamental design flaw that potentially undermines any causal inference. We illustrate these principles with a detailed example from the video-game-training literature showing how the use of an active control group does not eliminate expectation differences. The problem permeates other interventions as well, including those targeting mental health, cognition, and educational achievement. Fortunately, measuring expectations and adopting alternative experimental designs makes it possible to control for placebo effects, thereby increasing confidence in the causal efficacy of psychological interventions.
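The design point in this abstract lends itself to a quick illustration. Below is a minimal simulation sketch (Python with NumPy; the group names and effect sizes are invented for illustration and are not taken from the paper) of how a gap in expected improvement between groups can masquerade as a treatment effect even when an active control is used, and how matching expectations removes the artifact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # participants per group (illustrative)

def simulate_group(true_effect, expectation_effect):
    # Observed improvement = real treatment effect + expectation-driven
    # (placebo) improvement + individual noise. All effect sizes here
    # are made up for illustration.
    return true_effect + expectation_effect + rng.normal(0.0, 1.0, n)

# An inert "treatment" that participants nonetheless expect to work well,
# versus an active control task that raises weaker expectations.
treatment = simulate_group(true_effect=0.0, expectation_effect=0.5)
active_control = simulate_group(true_effect=0.0, expectation_effect=0.1)
print(treatment.mean() - active_control.mean())   # ~0.4: a spurious "benefit"

# An expectation-matched control makes the spurious difference vanish.
matched_control = simulate_group(true_effect=0.0, expectation_effect=0.5)
print(treatment.mean() - matched_control.mean())  # ~0.0
```

The sketch simply restates the authors' argument in code: an active control rules out no-contact confounds, but only a control group with matched expectations licenses a causal claim about the treatment itself.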

The entire article is here.

Friday, October 18, 2013

10 ways to create “False Knowledge” in Psychology

By Graham Davey
The Graham Davey Blog
Originally posted September 30, 2013

There’s been a good deal of discussion recently about (1) how we validate a scientific fact (here, here and here), and (2) whether psychology – and in particular some branches of psychology – is prone to generating fallacious scientific knowledge (here and here). As psychologists, we are all trained (I hope) to be scientists – exploring the boundaries of knowledge and trying as best we can to create new knowledge, but in many of our attempts to pursue our careers and pay the mortgage, are we badly prone to creating false knowledge? Yes – we probably are! Here are just a few examples, and I challenge most of you psychology researchers who read this post to say you haven’t been a culprit in at least one of these processes!

Here are 10 ways to risk creating false knowledge in psychology.

1.  Create your own psychological construct. Constructs can be very useful ways of summarizing and formalizing unobservable psychological processes, but researchers who invent constructs need to know a lot about the scientific process, must make sure they don’t create circular arguments, and must stay in touch with other psychological research that is relevant to the understanding they are trying to create.


Thanks to Ed Zuckerman for this information.

Wednesday, May 2, 2012

La Trobe 'torture' study anguish

By Tim Elliott
theage.com.au
Originally published April 26, 2012

In 1973, arts student Dianne Backwell tortured her roommate to death. Or so she thought.

Ms Backwell, then a 19-year-old student at La Trobe University, believed she was taking part in research into the effect of punishment on learning. But the friend whose screams she heard from another room every time she pushed a button was only pretending to receive electric shocks.

Nonetheless, the experiment, the record of which has only now come to light, traumatised Ms Backwell for years. According to a new book, Behind the Shock Machine, by Melbourne psychologist Gina Perry, Ms Backwell was one of about 200 La Trobe students who took part, in 1973 and 1974, in controversial experiments conducted by the university's psychology department.

The experiments were modelled on the notorious “obedience tests” carried out by US psychologist Stanley Milgram at Yale University in 1961, in which participants were ordered to shock students in another room, even when they believed it would kill them.

The entire story is here.

Thanks to Gary Schoener for this story.

Wednesday, April 11, 2012

Drug Data Shouldn’t Be Secret

By Peter Doshi and Tom Jefferson
The New York Times - Opinion
Originally published April 10, 2012


In the fall of 2009, at the height of fears over swine flu, our research group discovered that a majority of clinical trial data for the anti-influenza drug Tamiflu — data that proved, according to its manufacturer, that the drug reduced the risk of hospitalization, serious complications and transmission — were missing, unpublished and inaccessible to the research community. From what we could tell from the limited clinical data that had been published in medical journals, the country’s most widely used and heavily stockpiled influenza drug appeared no more effective than aspirin.

After we published this finding in the British Medical Journal at the end of that year, Tamiflu’s manufacturer, Roche, announced that it would release internal reports to back up its claims that the drug was effective in reducing the complications of influenza. Roche promised access to data from 10 clinical trials, 8 of which had not been published a decade after completion, representing more than 4,000 patients from every continent except Antarctica.

(cut)

In response to our conclusions, which we published in January, the C.D.C. defended its stance by once again pointing to Roche’s analyses. This is not the way medical science should progress. Data secrecy is a disservice to those who volunteer their bodies for clinical trials, and is dangerous to those being asked to swallow approved medicines. Governments need to become better stewards of the scientific process.