Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, March 3, 2024

Is Dan Ariely Telling the Truth?

Tom Bartlett
The Chronicle of Higher Education
Originally posted 18 Feb 24

Here is an excerpt:

In August 2021, the blog Data Colada published a post titled “Evidence of Fraud in an Influential Field Experiment About Dishonesty.” Data Colada is run by three researchers — Uri Simonsohn, Leif Nelson, and Joe Simmons — and it serves as a freelance watchdog for the field of behavioral science, which has historically done a poor job of policing itself. The influential field experiment in question was described in a 2012 paper, published in the Proceedings of the National Academy of Sciences, by Ariely and four co-authors. In the study, customers of an insurance company were asked to report how many miles they had driven over a period of time, an answer that might affect their premiums. One set of customers signed an honesty pledge at the top of the form, and another signed at the bottom. The study found that those who signed at the top reported higher mileage totals, suggesting that they were more honest. The authors wrote that a “simple change of the signature location could lead to significant improvements in compliance.” The study was classic Ariely: a slight tweak to a system that yields real-world results.

But did it actually work? In 2020, an attempted replication of the effect found that it did not. In fact, multiple attempts to replicate the 2012 finding all failed (though Ariely points to evidence in a recent, unpublished paper, on which he is a co-author, indicating that the effect might be real). The authors of the attempted replication posted the original data from the 2012 study, which was then scrutinized by a group of anonymous researchers who found that the data, or some of it anyway, had clearly been faked. They passed the data along to the Data Colada team. There were multiple red flags. For instance, the number of miles customers said they’d driven was unrealistically uniform. About the same number of people drove 40,000 miles as drove 500 miles. No actual sampling would look like that — but randomly generated data would. Two different fonts were used in the file, apparently because whoever fudged the numbers wasn’t being careful.
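
The uniformity red flag is easy to make concrete. Below is a toy simulation (my illustration, not Data Colada's actual analysis; the distribution parameters and sample size are invented) comparing a plausible right-skewed mileage distribution with uniform random draws. Real-looking data puts almost no mass at the extremes; uniform data spreads it evenly.

```python
# Toy illustration of the red flag: genuine mileage reports cluster
# around typical values, while uniformly generated numbers put about
# as much mass near 500 miles as near 40,000 miles.
import random

random.seed(1)
N = 10_000  # illustrative sample size

# Plausible real-world mileage: right-skewed, median around 11,000.
real = [min(random.lognormvariate(9.3, 0.5), 50_000) for _ in range(N)]
# Fabricated data: uniform draws between 0 and 50,000 miles.
fake = [random.uniform(0, 50_000) for _ in range(N)]

def share_near(data, center, width=500):
    """Fraction of reports within +/- width miles of center."""
    return sum(abs(x - center) <= width for x in data) / len(data)

for label, data in [("real-like", real), ("uniform", fake)]:
    print(f"{label:>9}: {share_near(data, 500):.1%} near 500 mi, "
          f"{share_near(data, 40_000):.1%} near 40,000 mi")
```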

In short, there is no doubt that the data were faked. The only question is, who did it?


This article discusses an investigation into the research conduct of Dr. Dan Ariely, a well-known behavioral economist at Duke University. The investigation, prompted by concerns about potential data fabrication, concluded that while no evidence of fabricated data was found, Ariely did commit research misconduct by failing to adequately vet findings and maintain proper records.

The article highlights several specific issues identified by the investigation, including inconsistencies in data and a lack of supporting documentation for key findings. It also mentions that Ariely made inaccurate statements about his personal history, such as misrepresenting his age at the time of a childhood accident.

While Ariely maintains that he did not intentionally fabricate data and attributes the errors to negligence and a lack of awareness, the investigation's findings have damaged his reputation and raised questions about the integrity of his research. The article concludes by leaving the reader to ponder whether Ariely's transgressions can be forgiven or if they represent a deeper pattern of dishonesty.

It's important to note that the article presents one perspective on a complex issue and doesn't offer definitive answers. Further research and analysis are necessary to form a complete understanding of the situation.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).

Abstract

AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.


Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
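
As a rough sketch of the risk-matrix idea, each risk gets a likelihood score and an impact score, and their product gives a crude priority that puts short- and long-term risks on the same scale. The risk names and scores below are invented for illustration, not taken from Sætra and Danaher's paper.

```python
# Minimal risk-matrix sketch: rank risks by likelihood x impact.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (near certain)
    impact: int      # 1 (minor) .. 5 (catastrophic)

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Algorithmic bias in hiring", likelihood=5, impact=3),
    Risk("Large-scale disinformation", likelihood=4, impact=4),
    Risk("Loss of control of advanced AI", likelihood=1, impact=5),
]

# Short- and long-term risks end up on one comparable scale.
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.name}: priority {r.priority}")
```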

Friday, October 27, 2023

Theory of consciousness branded 'pseudoscience' by neuroscientists

Clare Wilson
New Scientist
Originally posted 19 Sept 23

Consciousness is one of science’s deepest mysteries; it is considered so difficult to explain how physical entities like brain cells produce subjective sensory experiences, such as the sensation of seeing the colour red, that this is sometimes called “the hard problem” of science.

While the question has long been investigated by studying the brain, integrated information theory (IIT) came from considering the mathematical structure of information-processing networks and could also apply to animals or artificial intelligence.

It says that a network or system has a higher level of consciousness if it is more densely interconnected, such that the interactions between its connection points or nodes yield more information than if it is reduced to its component parts.

IIT predicts that it is theoretically possible to calculate a value for the level of consciousness, termed phi, of any network with known structure and functioning. But as the number of nodes within a network grows, the sums involved get exponentially bigger, meaning that it is practically impossible to calculate phi for the human brain – or indeed any information-processing network with more than about 10 nodes.
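
The combinatorics behind that claim can be sketched in a few lines. Computing phi involves, at a minimum, comparing the system against its possible partitions (counted by the Bell numbers) across its 2^n binary states; the toy count below illustrates the scale only and is not the actual IIT algorithm.

```python
# How fast the phi search space grows with network size.
from math import comb

def bell(n: int) -> int:
    """Bell number B(n): ways to partition a set of n elements."""
    b = [1]  # B(0)
    for i in range(n):
        b.append(sum(comb(i, k) * b[k] for k in range(i + 1)))
    return b[n]

for n in (5, 10, 15, 20):
    print(f"{n:>2} nodes: {bell(n):,} partitions x {2**n:,} states")
```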

(cut)

Giulio Tononi at the University of Wisconsin-Madison, who first developed IIT and took part in the recent testing, did not respond to New Scientist’s requests for comment. But Johannes Fahrenfort at VU Amsterdam in the Netherlands, who was not involved in the recent study, says the letter went too far. “There isn’t a lot of empirical support for IIT. But that doesn’t warrant calling it pseudoscience.”

Complicating matters, there is no single definition of pseudoscience. But IIT is not in the same league as astrology or homeopathy, says James Ladyman at the University of Bristol in the UK. “It looks like a serious attempt to understand consciousness. It doesn’t make a theory pseudoscience just because some people are making exaggerated claims.”


Summary:

A group of 124 neuroscientists, including prominent figures in the field, have criticized the integrated information theory (IIT) of consciousness in an open letter. They argue that the recent experimental evidence said to support IIT did not actually test its core ideas, which they contend are practically impossible to test. IIT suggests that the level of consciousness, called "phi," can be calculated for any network with known structure and functioning, but this becomes impractical for networks with many nodes, like the human brain. Some critics believe that IIT has been overhyped and may have unintended consequences for policies related to consciousness in fetuses and animals. However, not all experts consider IIT pseudoscience, with some seeing it as a serious attempt to understand consciousness.

The debate surrounding the integrated information theory (IIT) of consciousness is a complex one. While it's clear that the recent experimental evidence has faced criticism for not directly testing the core ideas of IIT, it's important to recognize that the study of consciousness is a challenging and ongoing endeavor.

Consciousness is indeed one of science's profound mysteries, often referred to as "the hard problem." IIT, in its attempt to address this problem, has sparked valuable discussions and research. It may not be pseudoscience, but the concerns raised about overhyping its findings are valid. It's crucial for scientific theories to be communicated accurately to avoid misinterpretation and potential policy implications.

Ultimately, the study of consciousness requires a multidisciplinary approach and the consideration of various theories, and it's important to maintain a healthy skepticism while promoting rigorous scientific inquiry in this complex field.

Monday, October 9, 2023

They Studied Dishonesty. Was Their Work a Lie?

Gideon Lewis-Kraus
The New Yorker
Originally published 30 Sept 23

Here is an excerpt:

Despite a good deal of readily available evidence to the contrary, neoclassical economics took it for granted that humans were rational. Kahneman and Tversky found flaws in this assumption, and built a compendium of our cognitive biases. We rely disproportionately on information that is easily retrieved: a recent news article about a shark attack seems much more relevant than statistics about how rarely such attacks actually occur. Our desires are in flux—we might prefer pizza to hamburgers, and hamburgers to nachos, but nachos to pizza. We are easily led astray by irrelevant details. In one experiment, Kahneman and Tversky described a young woman who had studied philosophy and participated in anti-nuclear demonstrations, then asked a group of participants which inference was more probable: either “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.” More than eighty per cent chose the latter, even though it is a subset of the former. We weren’t Homo economicus; we were giddy and impatient, our thoughts hasty, our actions improvised. Economics tottered.

Behavioral economics emerged for public consumption a generation later, around the time of Ariely’s first book. Where Kahneman and Tversky held that we unconsciously trick ourselves into doing the wrong thing, behavioral economists argued that we might, by the same token, be tricked into doing the right thing. In 2008, Richard Thaler and Cass Sunstein published “Nudge,” which argued for what they called “libertarian paternalism”—the idea that small, benign alterations of our environment might lead to better outcomes. When employees were automatically enrolled in 401(k) programs, twice as many saved for retirement. This simple bureaucratic rearrangement improved a great many lives.

Thaler and Sunstein hoped that libertarian paternalism might offer “a real Third Way—one that can break through some of the least tractable debates in contemporary democracies.” Barack Obama, who hovered above base partisanship, found much to admire in the promise of technocratic tinkering. He restricted his outfit choices mostly to gray or navy suits, based on research into “ego depletion,” or the concept that one might exhaust a given day’s reservoir of decision-making energy. When, in the wake of the 2008 financial crisis, Obama was told that money “framed” as income was more likely to be spent than money framed as wealth, he enacted monthly tax deductions instead of sending out lump-sum stimulus checks. He eventually created a behavioral-sciences team in the White House. (Ariely had once found that our decisions in a restaurant are influenced by whoever orders first; it’s possible that Obama was driven by the fact that David Cameron, in the U.K., was already leaning on a “nudge unit.”)

The nudge, at its best, was modest—even a minor potential benefit at no cost pencilled out. In the Obama years, a pop-up on computers at the Department of Agriculture reminded employees that single-sided printing was a waste, and that advice reduced paper use by six per cent. But as these ideas began to intermingle with those in the adjacent field of social psychology, the reasonable notion that some small changes could have large effects at scale gave way to a vision of individual human beings as almost boundlessly pliable. Even Kahneman was convinced. He told me, “People invented things that shouldn’t have worked, and they were working, and I was enormously impressed by it.” Some of these interventions could be implemented from above. 


Thursday, August 31, 2023

It’s not only political conservatives who worry about moral purity

K. Gray, W. Blakey, & N. DiMaggio
psyche.co
Originally posted 13 July 23

Here are two excerpts:

What does this have to do with differences in moral psychology? Well, moral psychologists have suggested that politically charged arguments about sexuality, spirituality and other subjects reflect deep differences in the moral values of liberals and conservatives. Research involving scenarios like this one has seemed to indicate that conservatives, unlike liberals, think that maintaining ‘purity’ is a moral good in itself – which for them might mean supporting what they construe as the ‘sanctity of marriage’, for example.

It may seem strange to think about ‘purity’ as a core driver of political differences. But purity, in the moral sense, is an old concept. It pops up in the Hebrew Bible a lot, in taboos around food, menstruation, and divine encounters. When Moses meets God at the Burning Bush, God says to Moses: ‘Do not come any closer, take off your sandals, for the place where you are standing is holy ground.’ Why does God tell Moses to take off his shoes? Not because his shoes magically hurt God, but because shoes are dirty, and it’s disrespectful to wear your shoes in the presence of the creator of the universe. Similarly, in ancient Greece, worshippers were often required to endure long purification rituals before looking at sacred religious idols or engaging in different spiritual rites. These ancient moral practices seem to reflect an intuition that ‘cleanliness is next to Godliness’.

In the modern era, purity has repeatedly appeared at the centre of political battlegrounds, as in clashes between US conservatives and liberals over sexual education and mores in the 1990s. It was around this time that the psychologist Jonathan Haidt began formulating a theory to help explain the moral divide. Moral foundations theory argues that liberals and conservatives are divided because they rely on distinct moral values, including purity, to different degrees.

(cut)

A harm-focused perspective on moral judgments related to ‘purity’ could help us better understand and communicate with moral opponents. We all grasp the importance of protecting ourselves and our loved ones from harm. Learning that people on the ‘other side’ of a political divide care about questions of purity because they connect these to their understanding of harm can help us empathise with different moral opinions. It is easy for a liberal to dismiss a conservative’s condemnation of dead-chicken sex when it is merely said to be ‘impure’; it is harder to be dismissive if it’s suggested that someone who makes a habit of that behaviour might end up harming people.

Explicitly grounding discussions of morality in perceptions of harm could help us all to be better citizens of a ‘small-L liberal’ society – one in which the right to swing our fists ends where others’ noses begin. If something seems disgusting, impure and immoral to you, take some time to try to articulate the harms you intuitively perceive. Talking about these potential harms may help other people understand where you are coming from. Of course, someone might not share your judgment that harm is being done. But identifying perceived harms at least puts the conversation in terms that everyone understands.


Here is my summary:

The authors define purity as "the state of being free from contamination or pollution." They argue that people on both the left and the right care about purity because they associate it with safety and well-being. They provide examples of how liberals and conservatives can both use purity-related language, such as "desecrate" and "toxic," and they propose a new explanation of moral judgments: people care about purity when they perceive that 'impure' acts can lead to harm.

Sunday, August 6, 2023

Harvard professor accused of research fraud files defamation lawsuit against university, academics

Alex Koller
The Boston Globe
Originally posted 4 August 23

Here is an excerpt:

In the filing, Gino, a renowned behavioral scientist who studies the psychology of decisions, denied having ever falsified or fabricated data. She alleged that Harvard’s investigation into her work was unfair and biased.

The lawsuit alleges that the committee did not prove by a preponderance of the evidence that Gino “intentionally, knowingly, or recklessly” falsified or fabricated data, as Harvard policy required, and “ignored” exculpatory evidence. The suit also decries Data Colada’s posts as a “vicious, defamatory smear campaign.” The blog’s inquiries into Gino’s work initially sparked Harvard’s investigation.

In a statement posted to LinkedIn Wednesday, Gino disputed the allegations against her and explained her decision to take legal action against Harvard and Data Colada.

“I want to be very clear: I have never, ever falsified data or engaged in research misconduct of any kind,” she wrote. “Today I had no choice but to file a lawsuit against Harvard University and members of the Data Colada group, who worked together to destroy my career and reputation despite admitting they have no evidence proving their allegations.”

She added that the university and authors “reached outrageous conclusions based entirely on inference, assumption, and implausible leaps of logic.”

The lawsuit accuses all of the defendants of defamation, and also accuses Harvard of gender discrimination, breach of contract, and bad faith and unfair dealing with Gino, who has been a tenured professor of business administration at Harvard since 2014.

Gino was first notified by Harvard of fraud allegations against her work in October 2021, according to the suit. She then learned that the university would conduct its own investigation in April 2022.

The filing alleges that Harvard’s investigation committee interviewed six of Gino’s collaborators and two research assistants, all of whom defended the integrity of Gino’s practices and said they had no evidence Gino had ever pressured anyone to produce a specific result.

Sunday, June 18, 2023

Gender-Affirming Care for Trans Youth Is Neither New nor Experimental: A Timeline and Compilation of Studies

Julia Serano
Medium.com
Originally posted 16 May 23

Trans and gender-diverse people are a pancultural and transhistorical phenomenon. It is widely understood that we, like LGBTQ+ people more generally, arise due to natural variation rather than as a result of pathology, modernity, or the latest conspiracy theory.

Gender-affirming healthcare has a long history. The first trans-related surgeries were carried out in the 1910s–1930s (Meyerowitz, 2002, pp. 16–21). While some doctors were supportive early on, most were wary. Throughout the mid-twentieth century, these skeptical doctors subjected trans people to all sorts of alternate treatments — from perpetual psychoanalysis, to aversion and electroshock therapies, to administering assigned-sex-consistent hormones (e.g., testosterone for trans female/feminine people), and so on — but none of them worked. The only treatment that reliably allowed trans people to live happy and healthy lives was allowing them to transition. While doctors were initially worried that many would eventually come to regret that decision, study after study has shown that gender-affirming care has a far lower regret rate (typically around 1 or 2 percent) than virtually any other medical procedure. Given all this, plus the fact that there is no test for being trans (medical, psychological, or otherwise), around the turn of the century, doctors began moving away from strict gatekeeping and toward an informed consent model for trans adults to attain gender-affirming care.

Trans children have always existed — indeed most trans adults can tell you about their trans childhoods. During the twentieth century, while some trans kids did socially transition (Gill-Peterson, 2018), most had their gender identities disaffirmed, either by parents who disbelieved them or by doctors who subjected them to “gender reparative” or “conversion” therapies. The rationale behind the latter was a belief at that time that gender identity was flexible and subject to change during early childhood, but we now know that this is not true (see e.g., Diamond & Sigmundson, 1997; Reiner & Gearhart, 2004). Over the years, it became clear that these conversion efforts were not only ineffective, but they caused real harm — this is why most health professional organizations oppose them today.

Given the harm caused by gender-disaffirming approaches, around the turn of the century, doctors and gender clinics began moving toward what has come to be known as the gender affirmative model — here’s how I briefly described this approach in my 2016 essay Detransition, Desistance, and Disinformation: A Guide for Understanding Transgender Children Debates:

Rather than being shamed by their families and coerced into gender conformity, these children are given the space to explore their genders. If they consistently, persistently, and insistently identify as a gender other than the one they were assigned at birth, then their identity is respected, and they are given the opportunity to live as a member of that gender. If they remain happy in their identified gender, then they may later be placed on puberty blockers to stave off unwanted bodily changes until they are old enough (often at age sixteen) to make an informed decision about whether or not to hormonally transition. If they change their minds at any point along the way, then they are free to make the appropriate life changes and/or seek out other identities.

Thursday, April 13, 2023

Why artificial intelligence needs to understand consequences

Neil Savage
Nature
Originally published 24 FEB 23

Here is an excerpt:

The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.

In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.

A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.

Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
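
The graph-fitting idea can be sketched in miniature. The toy below is my illustration under strong linear-Gaussian assumptions, not Bengio's actual training procedure: generate data where X causes Y, add an interventional regime where X is set externally, then see which candidate graph's mechanism stays stable across regimes.

```python
# Toy sketch: score two candidate causal graphs, X->Y and Y->X, by how
# stable each graph's mechanism is when X is intervened on. The true
# mechanism (Y = 2X + noise) is invariant; the reversed one is not.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Observational regime: X ~ N(0, 1), Y = 2X + noise.
x_obs = rng.normal(size=n)
y_obs = 2 * x_obs + rng.normal(scale=0.5, size=n)

# Interventional regime: do(X) with a narrower spread; Y still follows
# its mechanism, so the Y-given-X relationship should not change.
x_do = rng.normal(scale=0.3, size=n)
y_do = 2 * x_do + rng.normal(scale=0.5, size=n)

def slope(parent, child):
    """Least-squares slope of child regressed on parent."""
    return np.polyfit(parent, child, 1)[0]

shift_xy = abs(slope(x_obs, y_obs) - slope(x_do, y_do))  # graph X -> Y
shift_yx = abs(slope(y_obs, x_obs) - slope(y_do, x_do))  # graph Y -> X

print(f"mechanism shift if X->Y: {shift_xy:.3f}")  # small: consistent
print(f"mechanism shift if Y->X: {shift_yx:.3f}")  # larger: inconsistent
```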

This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.

Sunday, January 29, 2023

UCSF Issues Report, Apologizes for Unethical 1960s–70s Prison Research

Restorative Justice Calls for Continued Examination of the Past

Laura Kurtzman
Press Release
Originally posted 20 DEC 22

Recognizing that justice, healing and transformation require an acknowledgment of past harms, UCSF has created the Program for Historical Reconciliation (PHR). The program is housed under the Office of the Executive Vice Chancellor and Provost, and was started by current Executive Vice Chancellor and Provost, Dan Lowenstein, MD.

The program’s first report, released this month, investigates experiments from the 1960s and 1970s involving incarcerated men at the California Medical Facility (CMF) in Vacaville. Many of these men were being assessed or treated for psychiatric diagnoses.

The research reviewed in the report was performed by Howard Maibach, MD, and William Epstein, MD, both faculty in UCSF’s Department of Dermatology. Epstein was a former chair of the department who died in 2006. The committee was asked to focus on the work of Maibach, who remains an active member of the department.

Some of the experiments exposed research subjects to pesticides and herbicides or administered medications with side effects. In all, some 2,600 incarcerated men were experimented on.

The men volunteered for the studies and were paid for participating. But the report raises ethical concerns over how the research was conducted. In many cases there was no record of informed consent. The subjects also did not have any of the medical conditions that the experiments could potentially have treated or ameliorated.

Such practices were common in the U.S. at the time and were increasingly being criticized both by experts and in the lay press. The research continued until 1977, when the state of California halted all human subject research in state prisons, a year after the federal government did the same.

The report acknowledges that Maibach was working during a time when the governance of human subjects research was evolving, both at UCSF and at institutions across the country. Over a six-month period, the committee gathered some 7,000 archival documents, medical journal articles, interviews, documentaries and books, much of which has yet to be analyzed. UCSF has acknowledged that it may issue a follow-up report.

The report found that “Maibach practiced questionable research methods. Archival records and published articles have failed to show any protocols that were adopted regarding informed consent and communicating research risks to participants who were incarcerated.”

In a review of publications between 1960 and 1980, the committee found virtually all of Maibach’s studies lacked documentation of informed consent despite a requirement for formal consent instituted in 1966 by the newly formed Committee on Human Welfare and Experimentation. Only one article, published in 1975, indicated the researchers had obtained informed consent as well as approval from UCSF’s Committee for Human Research (CHR), which began in 1974 as a result of new federal requirements.


Saturday, January 21, 2023

Kindness Can Have Unexpectedly Positive Consequences

Amit Kumar
Scientific American
December 12, 2022

Scientists who study happiness know that being kind to others can improve well-being. Acts as simple as buying a cup of coffee for someone can boost a person’s mood, for example. Everyday life affords many opportunities for such actions, yet people do not always take advantage of them.

In a set of studies published online in the Journal of Experimental Psychology: General, Nick Epley, a behavioral scientist at the University of Chicago Booth School of Business, and I examined a possible explanation. We found that people who perform random acts of kindness do not always realize how much of an impact they are having on another individual. People consistently and systematically underestimate how others value these acts.

Across multiple experiments involving approximately 1,000 participants, people performed a random act of kindness—that is, an action done with the primary intention of making someone else (who isn’t expecting the gesture) feel good. Those who perform such actions expect nothing in return.

From one procedure to the next, the specific acts of kindness varied. For instance, in one experiment, people wrote notes to friends and family “just because.” In another, they gave cupcakes away. Across these experiments, we asked both the person performing a kind act and the one receiving it to fill out questionnaires. We asked the person who had acted with kindness to report their own experience and predict their recipient’s response. We wanted to understand how valuable people perceived these acts to be, so both the performer and recipient had to rate how “big” the act seemed. In some cases, we also inquired about the actual or perceived cost in time, money or effort. In all cases, we compared the performer’s expectations of the recipient’s mood with the recipient’s actual experience.

Across our investigations, several robust patterns emerged. For one, both performers and recipients of the acts of kindness were in more positive moods than normal after these exchanges. For another, it was clear that performers undervalued their impact: recipients felt significantly better than the kind actors expected. The recipients also reliably rated these acts as “bigger” than the people performing them did.



Thursday, November 3, 2022

What Makes a Great Life?

Jon Clifton
Gallup.com
Originally posted 22 SEPT 22

While many things contribute to a great life, Gallup finds five aspects that all people have in common: their work, finances, physical health, communities, and relationships with family and friends. If you are excelling in each of these elements of wellbeing, you are highly likely to be thriving in life.

(cut)

Gallup's research as well as research by the global community of wellbeing practitioners has produced hundreds, if not thousands, of discoveries.

One of the most famous discoveries is the U-curve of happiness, which shows how age is associated with wellbeing. Young people rate their lives high, and so do older people. But middle-aged people rate their lives the lowest. This trend holds every year in almost every country in the world. It is nicknamed the "U-curve" of happiness because when you look at the graph, it looks like a "U." Some jokingly say that the chart is smiling.

Some discoveries are astonishing; others feel like they reveal a "blandly sophomoric secret," as George Gallup referred to some of his longevity findings. For example, you could argue that the U-curve of happiness simply quantifies conventional wisdom -- that people have midlife crises.

Here are a few of the discoveries that are truly compelling:
  • People who love their jobs do not hate Mondays.
  • Education-related debt can cause an emotional scar that remains even after you pay off the debt.
  • Volunteering is not just good for the people you are helping; it is also good for you.
  • Exercising is better at eliminating fatigue than prescription drugs.
  • Loneliness can double your risk of dying from heart disease.
We could list every insight ever produced from this research and encourage leaders to work on all of them. Instead, we took another approach. Using all these insights from across the industry combined with our surveys and analysis, we created the five elements of wellbeing. And our ongoing global research confirms that the five elements of wellbeing are significant drivers of a great life everywhere.

Wednesday, November 2, 2022

How the Classics Changed Research Ethics

Scott Sleek
Psychological Science
Originally posted 31 AUG 22

Here is an excerpt:

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inducing physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” she explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support? 

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009). 

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

Tuesday, November 1, 2022

LinkedIn ran undisclosed social experiments on 20 million users for years to study job success

Kathleen Wong
USAToday.com
Originally posted 25 SEPT 22

A new study analyzing the data of over 20 million LinkedIn users over a five-year span reveals that our acquaintances may be more helpful in finding a new job than close friends.

Researchers behind the study say the findings will improve job mobility on the platform, but since users were unaware of their data being studied, some may find the lack of transparency concerning.  

Published this month in Science, the study was conducted by researchers from LinkedIn, Harvard Business School and the Massachusetts Institute of Technology between 2015 and 2019. Researchers ran "multiple large-scale randomized experiments" on the platform's "People You May Know" algorithm, which suggests new connections to users. 

In a practice known as A/B testing, the experiments included giving certain users an algorithm that offered different (like close or not-so-close) contact recommendations and then analyzing the new jobs that came out of those two billion new connections.
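
In outline, an A/B test of this kind looks like the sketch below. This is a generic illustration with made-up arm names, effect sizes, and outcomes; LinkedIn's actual assignment and metrics are not described in the article. Users are deterministically bucketed into arms, each arm sees a different recommendation policy, and an outcome rate, here job transitions, is compared across arms.

```python
# Generic A/B-test sketch with simulated data.
import hashlib
import random
from statistics import NormalDist

def arm(user_id: str) -> str:
    """Deterministically bucket a user by hashing their ID."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "weak_ties" if h % 2 == 0 else "strong_ties"

random.seed(7)
outcomes = {"weak_ties": [], "strong_ties": []}
for i in range(100_000):
    a = arm(f"user-{i}")
    job_rate = 0.034 if a == "weak_ties" else 0.030  # assumed effect
    outcomes[a].append(1 if random.random() < job_rate else 0)

# Two-proportion z-test on the job-transition rate.
n1, n2 = len(outcomes["weak_ties"]), len(outcomes["strong_ties"])
s1, s2 = sum(outcomes["weak_ties"]), sum(outcomes["strong_ties"])
p1, p2, p = s1 / n1, s2 / n2, (s1 + s2) / (n1 + n2)
se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"weak ties {p1:.2%} vs strong ties {p2:.2%} "
      f"(z={z:.1f}, p={p_value:.4f})")
```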

(cut)

A question of ethics

Privacy advocates told the New York Times Sunday that some of the 20 million LinkedIn users may not be happy that their data was used without consent. That resistance is part of a longstanding pattern of people's data being tracked and used by tech companies without their knowledge.

LinkedIn told the paper it "acted consistently" with its user agreement, privacy policy and member settings.

LinkedIn did not respond to an email sent by USA TODAY on Sunday. 

The paper reports that LinkedIn's privacy policy does state the company reserves the right to use its users' personal data.

That access can be used "to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity." 

It can also be deployed to research trends.

The company also said it used "noninvasive" techniques for the study's research. 

Study co-author Sinan Aral of MIT told USA TODAY that researchers "received no private or personally identifying data during the study and only made aggregate data available for replication purposes to ensure further privacy safeguards."

Sunday, October 30, 2022

The uselessness of AI ethics

Munn, L. The uselessness of AI ethics.
AI Ethics (2022).

Abstract

As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.

(cut)

Meaningless principles

The deluge of AI codes of ethics, frameworks, and guidelines in recent years has produced a corresponding raft of principles. Indeed, there are now regular meta-surveys which attempt to collate and summarize these principles. However, these principles are highly abstract and ambiguous, becoming incoherent. Mittelstadt suggests that work on AI ethics has largely produced “vague, high-level principles, and value statements which promise to be action-guiding, but in practice provide few specific recommendations and fail to address fundamental normative and political tensions embedded in key concepts.” The point here is not to debate the merits of any one value over another, but to highlight the fundamental lack of consensus around key terms. Commendable values like “fairness” and “privacy” break down when subjected to scrutiny, leading to disparate visions and deeply incompatible goals.

What are some common AI principles? Despite the mushrooming of ethical statements, Floridi and Cowls suggest many values recur frequently and can be condensed into five core principles: beneficence, non-maleficence, autonomy, justice, and explicability. These ideals sound wonderful. After all, who could be against beneficence? However, problems immediately arise when we start to define what beneficence means. In the Montreal principles for instance, “well-being” is the term used, suggesting that AI development should promote the “well-being of all sentient creatures.” While laudable, clearly there are tensions to consider here. We might think, for instance, of how information technologies support certain conceptions of human flourishing by enabling communication and business transactions—while simultaneously contributing to carbon emissions, environmental degradation, and the climate crisis. In other words, AI promotes the well-being of some creatures (humans) while actively undermining the well-being of others.

The same issue occurs with the Statement on Artificial Intelligence, Robotics, and Autonomous Systems. In this Statement, beneficence is gestured to through the concept of “sustainability,” asserting that AI must promote the basic preconditions for life on the planet. Few would argue directly against such a commendable aim. However, there are clearly wildly divergent views on how this goal should be achieved. Proponents of neoliberal interventions (free trade, globalization, deregulation) would argue that these interventions contribute to economic prosperity and in that sense sustain life on the planet. In fact, even the oil and gas industry champions the use of AI under the auspices of promoting sustainability. Sustainability, then, is a highly ambiguous or even intellectually empty term that is wrapped around disparate activities and ideologies. In a sense, sustainability can mean whatever you need it to mean. Indeed, even one of the members of the European group denounced the guidelines as “lukewarm” and “deliberately vague,” stating they “glossed over difficult problems” like explainability with rhetoric.

Wednesday, October 12, 2022

Gender-diverse teams produce more novel and higher-impact scientific ideas

Yang, Y., Tian, T. Y., et al. (2022, August 29). 
Proceedings of the National Academy of Sciences, 119(36).
https://doi.org/10.1073/pnas.2200841119

Abstract

Science’s changing demographics raise new questions about research team diversity and research outcomes. We study mixed-gender research teams, examining 6.6 million papers published across the medical sciences since 2000 and establishing several core findings. First, the fraction of publications by mixed-gender teams has grown rapidly, yet mixed-gender teams continue to be underrepresented compared to the expectations of a null model. Second, despite their underrepresentation, the publications of mixed-gender teams are substantially more novel and impactful than the publications of same-gender teams of equivalent size. Third, the greater the gender balance on a team, the better the team scores on these performance measures. Fourth, these patterns generalize across medical subfields. Finally, the novelty and impact advantages seen with mixed-gender teams persist when considering numerous controls and potential related features, including fixed effects for the individual researchers, team structures, and network positioning, suggesting that a team’s gender balance is an underrecognized yet powerful correlate of novel and impactful scientific discoveries.

Significance

Science teams made up of men and women produce papers that are more novel and highly cited than those of all-men or all-women teams. These performance advantages increase the greater the team’s gender balance and appear nearly universal. On average, they hold for small and large teams, the 45 subfields of medicine, and women- or men-led teams and generalize to published papers in all science fields over the last 20 y. Notwithstanding these benefits, gender-diverse teams remain underrepresented in science when compared to what is expected if the teams in the data had been formed without regard to gender. These findings reveal potentially new gender and teamwork synergies that correlate with scientific discoveries and inform diversity, equity, and inclusion (DEI) initiatives.

Discussion

Conducting an analysis of 6.6 million published papers from more than 15,000 different medical journals worldwide, we find that mixed-gender teams—teams combining women and men scientists—produce more novel and more highly cited papers than all-women or all-men teams. Mixed-gender teams publish papers that are up to 7% more novel and 14.6% more likely to be upper-tail papers than papers published by same-gender teams, results that are robust to numerous institutional, team, and individual controls and further generalize by subfield. Finally, in exploring gender in science through the lens of teamwork, the results point to a potentially transformative approach for thinking about and capturing the value of gender diversity in science.

Another key finding of this work is that mixed-gender teams are significantly underrepresented compared to what would be expected by chance. This underrepresentation is all the more striking given the findings that gender-diverse teams produce more novel and high-impact research and suggests that gender-diverse teams may have substantial untapped potential for medical research. Nevertheless, the underrepresentation of gender-diverse teams may reflect research showing that women receive less credit for their successes than do men teammates, which in turn inhibits the formation of gender-diverse teams and women’s success in receiving grants, prizes, and promotions.

Friday, August 5, 2022

The Neuroscience Behind Bad Decisions

Emily Singer
Quanta Magazine
Originally posted 13 AUG 16

Here are excerpts:

Economists have spent more than 50 years cataloging irrational choices like these. Nobel Prizes have been earned; millions of copies of Freakonomics have been sold. But economists still aren’t sure why they happen. “There had been a real cottage industry in how to explain them and lots of attempts to make them go away,” said Eric Johnson, a psychologist and co-director of the Center for Decision Sciences at Columbia University. But none of the half-dozen or so explanations are clear winners, he said.

In the last 15 to 20 years [this article was written in 2016], neuroscientists have begun to peer directly into the brain in search of answers. “Knowing something about how information is represented in the brain and the computational principles of the brain helps you understand why people make decisions how they do,” said Angela Yu, a theoretical neuroscientist at the University of California, San Diego.

Glimcher is using both the brain and behavior to try to explain our irrationality. He has combined results from studies like the candy bar experiment with neuroscience data — measurements of electrical activity in the brains of animals as they make decisions — to develop a theory of how we make decisions and why that can lead to mistakes.

(cut)

But the decision-making system operates under more complex constraints and has to consider many different types of information. For example, a person might choose which house to buy depending on its location, size or style. But the relative importance of each of these factors, as well as their optimal value — city or suburbs, Victorian or modern — is fundamentally subjective. It varies from person to person and may even change for an individual depending on their stage of life. “There is not one simple, easy-to-measure mathematical quantity like redundancy that decision scientists universally agree on as being a key factor in the comparison of competing alternatives,” Yu said.

She suggests that uncertainty in how we value different options is behind some of our poor decisions. “If you’ve bought a lot of houses, you’ll evaluate houses differently than if you were a first-time homebuyer,” Yu said. “Or if your parents bought a house during the housing crisis, it may later affect how you buy a house.”

Moreover, Yu argues, the visual and decision-making systems have different end-goals. “Vision is a sensory system whose job is to recover as much information as possible from the world,” she said. “Decision-making is about trying to make a decision you’ll enjoy. I think the computational goal is not just information, it’s something more behaviorally relevant like total enjoyment.”

For many of us, the main concern over decision-making is practical — how can we make better decisions? Glimcher said that his research has helped him develop specific strategies. “Rather than pick what I hope is the best, instead I now always start by eliminating the worst element from a choice set,” he said, reducing the number of options to something manageable, like three.


Curator's note: Oddly enough, this last sentence is what personalized algorithms do. Pushing people toward a limited set of options has both positive and negative aspects. While it may help with decision-making, it can also fuel political polarization.

Thursday, July 14, 2022

What nudge theory got wrong

Tim Harford
The Financial Times
Originally posted 

Here is an excerpt:

Chater and Loewenstein argue that behavioural scientists naturally fall into the habit of seeing problems in the same way. Why don’t people have enough retirement savings? Because they are impatient and find it hard to save rather than spend. Why are so many greenhouse gases being emitted? Because it’s complex and tedious to switch to a green electricity tariff. If your problem is basically that fallible individuals are making bad choices, behavioural science is an excellent solution.

If, however, the real problem is not individual but systemic, then nudges are at best limited, and at worst, a harmful diversion. Historians such as Finis Dunaway now argue that the Crying Indian campaign was a deliberate attempt by corporate interests to change the subject. Is behavioural public policy, accidentally or deliberately, a similar distraction?

A look at climate change policy suggests it might be. Behavioural scientists themselves are clear enough that nudging is no real substitute for a carbon price — Thaler and Sunstein say as much in Nudge. Politicians, by contrast, have preferred to bypass the carbon price and move straight to the pain-free nudging.

Nudge enthusiast David Cameron, in a speech given shortly before he became prime minister, declared that “the best way to get someone to cut their electricity bill” was to cleverly reformat the bill itself. This is politics as the art of avoiding difficult decisions. No behavioural scientist would suggest that it was close to sufficient. Yet they must be careful not to become enablers of the One Weird Trick approach to making policy.

-------

Behavioural science has a laudable focus on rigorous evidence, yet even this can backfire. It is much easier to produce a quick randomised trial of bill reformatting than it is to evaluate anything systemic. These small quick wins are only worth having if they lead us towards, rather than away from, more difficult victories.

Another problem is that empirically tested, behaviourally rigorous bad policy can be bad policy nonetheless. For example, it has become fashionable to argue that people should be placed on an organ donor registry by default, because this dramatically expands the number of people registered as donors. But, as Thaler and Sunstein themselves keep having to explain, this is a bad idea. Most organ donation happens only after consultation with a grieving family — and default-bloated donor registries do not help families work out what their loved one might have wanted.


Tuesday, July 5, 2022

A study gave cash and therapy to men at risk of criminal behavior

Sigal Samuel 
vox.com
Originally posted 31 MAY 22

Here is an excerpt:

Inspired by the program in Liberia, Chicago has been implementing a similar but more intensive program called READI. Over the course of 18 months, men in the city’s most violent districts participate in therapy sessions in the morning, followed by job training in the afternoon. The rationale for the latter is that in a place with a well-developed labor market like Chicago, the best way to improve earnings is probably to get people into the market, whereas in Liberia, the labor market is much less efficient, so it made more sense to offer people cash.

“We’ll have more results this summer,” said Blattman of the READI program, which he is helping to advise. So far, “it doesn’t look like a slam dunk.”

Still, Chicago is eager to try these therapy-based approaches, having already had some success with them. The city is also home to a program called Becoming a Man (BAM), where high schoolers do CBT-inspired group sessions. A randomized controlled trial showed that criminal arrests fell by about half during the BAM program. Even though effects dissipated over time, the program looks to be very cost-effective.

But this isn’t just a story about the growing recognition that therapy can play a useful role in preventing crime. That trend is part of a broader movement to adopt an approach to crime that is more carrot, less stick.

“It’s all about a progressive, rational policy for social control. Social inclusion is the most productive means of social control,” David Brotherton, a sociologist at the City University of New York, explained to me in 2019.

Brotherton has long argued that mainstream US policy is counterproductively coercive and punitive. His research has shown that helping at-risk people reintegrate into mainstream society — including by offering them cash — is much more effective at reducing violence.

Sunday, June 26, 2022

What drives mass shooters? Grievance, despair, and anger are more likely triggers than mental illness, experts say

Deanna Pan
Boston Globe
Originally posted 3 JUN 22

Here is an excerpt:

A 2018 study by the FBI’s Behavioral Analysis Unit evaluating 63 active shooters between 2000 and 2013 found that only a quarter were known to have been diagnosed with a mental illness of any kind, and just 3 of the 63 had a verified psychotic disorder.

Although 62 percent of shooters showed signs that they were struggling with issues like depression, anxiety, or paranoia, their symptoms, the study notes, may ultimately have been “transient manifestations of behaviors and moods” that would not qualify them for a formal diagnosis.

Formally diagnosed mental illness, the study concludes, “is not a very specific predictor of violence of any type, let alone targeted violence,” given that roughly half of the US population experiences symptoms of mental illness over the course of their lifetimes.

Forensic psychologist Jillian Peterson, cofounder of The Violence Project, a think tank dedicated to reducing violence, said mass shooters are typically younger men, channeling their pain and anger through acts of violence and aggression. For many mass shooters, Peterson said, their path to violence begins with early childhood trauma. They often share a sense of “entitlement,” she said — to wealth, power, romance, and success. When they don’t achieve those goals, they become enraged and search for a scapegoat.

”As they get older, you see a lot of despair, hopelessness, self-hate — many of them attempt suicide — isolation. And then that kind of despair, isolation, that self-hatred turns outward,” Peterson said. “School shooters blame their schools. Some people blame a racial group or women or a religious group or the workplace.”

But mental illness, she said, is rarely an exclusive motive for mass shooters. In a study published last year, Peterson and her colleagues analyzed a dataset of 172 mass shooters for signs of psychosis — a feature of schizophrenia spectrum and other psychotic disorders. Although mental illness and psychotic disorders were overrepresented among the mass shooters they studied, Peterson’s study found most mass shooters were motivated by other factors, such as interpersonal conflicts, relationship problems, or a desire for fame.

Peterson’s study found psychotic symptoms, such as delusions or hallucinations, played no role in almost 70 percent of cases, and only a minor role in 11 percent of cases, where the shooters had other motives. In just 10 percent of cases, perpetrators were directly responding to their delusions or hallucinations when they were planning and committing their attacks.

Thursday, June 2, 2022

How Plain Talk Helps You "Walk the Walk"

Brett Beasley
Notre Dame Center for Ethical Leadership
Originally posted April 2022

Here is an excerpt:

How Unclear Values Cloud Our Moral Vision

Was Orwell right? Some may disagree with his take on the link between bad writing and bad politics. But it appears that Orwell's theory applies well to something he never considered: Corporate values statements. A new study shows that unclear writing in values statements matters. Unclarity sends a signal that a corporation can't be trusted. And, according to the study's authors, it's a reliable signal, too. They find that corporations that hide behind fuzzy, unclear values often do have something to hide.

The team of researchers behind the study, led by David Markowitz (Oregon), considered the values statements of 188 S&P 500 manufacturing companies. Markowitz was joined by Maryam Kouchaki (Northwestern), Jeffrey T. Hancock (Stanford), and Francesca Gino (Harvard).

They drew inspiration from earlier studies that had shown that companies with negative annual earnings write in a less clear manner in their reports to the Securities and Exchange Commission (SEC). They reasoned that a similar process might occur with ethics as well.

Together the team was able to chronicle which companies had ethics infractions (like environmental violations, fraud, and anticompetitive activity). They also determined which codes of conduct were "linguistically obfuscated." These codes were full of abstraction, jargon, and long, overly complex explanations.

The results of the study proved their hypothesis correct: Companies with ethics infractions did resort to unclear language in order to hide them.

The researchers began to ask additional questions. They wanted to know if unclear language actually works. Does it effectively hide a company's problems? They showed corporate values statements to study participants and asked about their perceptions of the companies behind them. The participants saw the companies with clearly written values statements as more moral, warmer, and more trustworthy than those with jargon-laden values statements.

The Deception Spiral

Then the researchers decided to go a step further. They had shown that unclear language is often a consequence of unethical behavior. Now they wanted to see if it could cause unethical behavior as well. This would help them determine if something like the vicious cycle Orwell theorized really could exist.

This time, they took their work to the lab. They showed study participants values statements and then handed participants a list with scrambled words like “TTISRA” and “LONSEM.” They asked participants to unscramble the words and gave them opportunities to earn money. They introduced an element of competition as well: participants could earn bonuses for unscrambling a greater number of words than 80% of the participants in their group.

At the same time, the researchers laid a trap. “TTISRA” could be unscrambled to spell “ARTIST.” “LONSEM” could become “LEMONS.” But they included some words like OPOER, ALVNO, and ANHDU, which do not spell a word no matter how participants rearranged the letters. This trap enabled them to measure whether people cheated during the activity: if participants claimed to have unscrambled words that had no solutions, the researchers concluded they must have cheated in reporting their score.
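
The trap logic is simple enough to sketch. The snippet below is a hypothetical reconstruction (the study's real word lists and scoring were more involved): a scramble is solvable only if some dictionary word is an anagram of it, so any "solved" trap item flags a misreported score.

```python
# Sketch of the trap: compare letter signatures against a dictionary.
# The word list here is tiny and invented for illustration.
WORDS = {"artist", "lemons", "melons", "solemn"}
SIGNATURES = {"".join(sorted(w)) for w in WORDS}

def solvable(scramble: str) -> bool:
    """True if some dictionary word is an anagram of the scramble."""
    return "".join(sorted(scramble.lower())) in SIGNATURES

for item in ["TTISRA", "LONSEM", "OPOER", "ALVNO", "ANHDU"]:
    print(item, "->", "solvable" if solvable(item) else "trap item")
# Anyone reporting a solution to a trap item must have misreported.
```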

The participants who had seen the unclear statements were more likely to cave to the temptation. Those who had seen the clear statement tended to stay on the moral path. Most importantly, this meant that the researchers had found clear support for a cycle similar to the one Orwell had described. This "deception spiral," as they call it, means that unethical behavior can lead to unclear statements about values, and unclear statements about values can, in turn, contribute even more to unethical behavior.