Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Fairness.

Tuesday, October 12, 2021

Demand five precepts to aid social-media watchdogs

Ethan Zuckerman
Nature 597, 9 (2021)
Originally published 31 Aug 21

Here is an excerpt:

I propose the following. First, give researchers access to the same targeting tools that platforms offer to advertisers and commercial partners. Second, for publicly viewable content, allow researchers to combine and share data sets by supplying keys to application programming interfaces. Third, explicitly allow users to donate data about their online behaviour for research, and make code used for such studies publicly reviewable for security flaws. Fourth, create safe-haven protections that recognize the public interest. Fifth, mandate regular audits of algorithms that moderate content and serve ads.

In the United States, the FTC could demand this access on behalf of consumers: it has broad powers to compel the release of data. In Europe, making such demands should be even more straightforward. The European Data Governance Act, proposed in November 2020, advances the concept of “data altruism” that allows users to donate their data, and the broader Digital Services Act includes a potential framework to implement protections for research in the public interest.

Technology companies argue that they must restrict data access because of the potential for harm, which also conveniently insulates them from criticism and scrutiny. They cite misuse of data, such as in the Cambridge Analytica scandal (which came to light in 2018 and prompted the FTC orders), in which an academic researcher took data from tens of millions of Facebook users collected through online ‘personality tests’ and gave it to a UK political consultancy that worked on behalf of Donald Trump and the Brexit campaign. Another example of abuse of data is the case of Clearview AI, which used scraping to produce a huge photographic database to allow federal and state law-enforcement agencies to identify individuals.

These incidents have led tech companies to design systems to prevent misuse — but such systems also prevent research necessary for oversight and scrutiny. To ensure that platforms act fairly and benefit society, there must be ways to protect user data and allow independent oversight.

Sunday, July 25, 2021

Should we be concerned that the decisions of AIs are inscrutable?

John Zerilli
Psyche.co
Originally published 14 June 21

Here is an excerpt:

However, there’s a danger of carrying reliabilist thinking too far. Compare a simple digital calculator with an instrument designed to assess the risk that someone convicted of a crime will fall back into criminal behaviour (‘recidivism risk’ tools are being used all over the United States right now to help officials determine bail, sentencing and parole outcomes). The calculator’s outputs are so dependable that an explanation of them seems superfluous – even for the first-time homebuyer whose mortgage repayments are determined by it. One might take issue with other aspects of the process – the fairness of the loan terms, the intrusiveness of the credit rating agency – but you wouldn’t ordinarily question the engineering of the calculator itself.

That’s utterly unlike the recidivism risk tool. When it labels a prisoner as ‘high risk’, neither the prisoner nor the parole board can be truly satisfied until they have some grasp of the factors that led to it, and the relative weights of each factor. Why? Because the assessment is such that any answer will necessarily be imprecise. It involves the calculation of probabilities on the basis of limited and potentially poor-quality information whose very selection is value-laden.

But what if systems such as the recidivism tool were in fact more like the calculator? For argument’s sake, imagine a recidivism risk-assessment tool that was basically infallible, a kind of Casio-cum-Oracle-of-Delphi. Would we still expect it to ‘show its working’?

This requires us to think more deeply about what it means for an automated decision system to be ‘reliable’. It’s natural to think that such a system would make the ‘right’ recommendations, most of the time. But what if there were no such thing as a right recommendation? What if all we could hope for were only a right way of arriving at a recommendation – a right way of approaching a given set of circumstances? This is a familiar situation in law, politics and ethics. Here, competing values and ethical frameworks often produce very different conclusions about the proper course of action. There are rarely unambiguously correct outcomes; instead, there are only right ways of justifying them. This makes talk of ‘reliability’ suspect. For many of the most morally consequential and controversial applications of ML, to know that an automated system works properly just is to know and be satisfied with its reasons for deciding.

Wednesday, May 12, 2021

How pills undermine skills: Moralization of cognitive enhancement and causal selection

E. Mihailov, B. R. López, F. Cova & I. R. Hannikainen
Consciousness and Cognition
Volume 91, May 2021, 103120

Abstract

Despite the promise to boost human potential and wellbeing, enhancement drugs face recurring ethical scrutiny. The present studies examined attitudes toward cognitive enhancement in order to learn more about these ethical concerns, who has them, and the circumstances in which they arise. Fairness-based concerns underlay opposition to competitive use—even though enhancement drugs were described as legal, accessible and affordable. Moral values also influenced how subsequent rewards were causally explained: Opposition to competitive use reduced the causal contribution of the enhanced winner’s skill, particularly among fairness-minded individuals. In a follow-up study, we asked: Would the normalization of enhancement practices alleviate concerns about their unfairness? Indeed, proliferation of competitive cognitive enhancement eradicated fairness-based concerns, and boosted the perceived causal role of the winner’s skill. In contrast, purity-based concerns emerged in both recreational and competitive contexts, and were not assuaged by normalization.

Highlights

• Views on cognitive enhancement reflect both purity and fairness concerns.

• Fairness, but not purity, concerns are surmounted by normalizing use.

• Moral opposition to pills undermines users’ perceived skills.

From the Discussion

In line with a growing literature on causal selection (Alicke, 1992; Icard et al., 2017; Kominsky et al. 2015), judgments of the enhanced user’s skill aligned with participants’ moral attitudes. Participants who held permissive attitudes were more likely to causally attribute success to agents’ skill and effort, while participants who held restrictive attitudes were more likely to view the pill as causally responsible. This association resulted in stronger denial of competitive users’ talent and ability, particularly among fairness-minded individuals. 

The moral foundation of purity, comprising norms related to spiritual sanctity and bodily propriety, and which appeals predominantly to political conservatives (Graham et al., 2009), also predicted attitudes toward enhancement. Purity-minded individuals were more likely to condemn enhancement users, regardless of whether cognitive enhancement was normal or rare. This categorical opposition may elucidate the origin of conservative bioethicists’ (e.g., Kass, 2003) attitudes toward human enhancement: i.e., in self-directed norms regulating the proper care of one’s own body (see also Koverola et al., 2021). Finally, whereas explicit reasoning about interpersonal concerns and the unjust treatment of others accompanied fairness-based opposition, our qualitative analyses did not reveal a cogent, purity-based rationale—which could be interpreted as evidence that purity-based opposition is not guided by moral reasoning to the same degree (Mihailov, 2016).

Saturday, March 27, 2021

Veil-of-ignorance reasoning mitigates self-serving bias in resource allocation during the COVID-19 crisis

Huang, K. et al.
Judgment and Decision Making
Vol. 16, No. 1, pp 1-19.

Abstract

The COVID-19 crisis has forced healthcare professionals to make tragic decisions concerning which patients to save. Furthermore, the COVID-19 crisis has foregrounded the influence of self-serving bias in debates on how to allocate scarce resources. A utilitarian principle favors allocating scarce resources such as ventilators toward younger patients, as this is expected to save more years of life. Some view this as ageist, instead favoring age-neutral principles, such as “first come, first served”. Which approach is fairer? The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by reducing decision-makers’ use of potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning to the COVID-19 ventilator dilemma, asking participants which policy they would prefer if they did not know whether they are younger or older. Two studies (pre-registered; online samples; Study 1, N=414; Study 2 replication, N=1,276) show that veil-of-ignorance reasoning shifts preferences toward saving younger patients. The effect on older participants is dramatic, reversing their opposition toward favoring the young, thereby eliminating self-serving bias. These findings provide guidance to healthcare policymakers and frontline personnel charged with allocating scarce medical resources during times of crisis on how to remove self-serving biases.

Thursday, February 25, 2021

For Biden Administration, Equity Initiatives Are A Moral Imperative

Juana Summers
www.npr.org
Originally posted 6 Feb 21

Here is an excerpt:

Many of the Biden administration's early actions have had an equity through-line. For example, the executive actions that he signed last week include moves to strengthen anti-discrimination policies in housing, fighting back against racial animus toward Asian Americans and calling on the Justice Department to phase out its contracts with private prisons.

The early focus on equity is an attempt to account for differences in need among people with historically disadvantaged backgrounds. Civil rights leaders and activists have praised Biden's actions, though they have also made clear that they want to see more from Biden than just rhetoric.

"The work ahead will be operationalizing that, ensuring that equity doesn't just show up in speeches but it shows up in budgets. That equity isn't simply about restoring us back to policies from the Obama years, but about what is it going to take to move us forward," said Rashad Robinson, the president of the racial justice organization, Color of Change.

Susan Rice, the chair of Biden's Domestic Policy Council, made the case that there is a universal, concrete benefit to the equity policies Biden is championing.

"These aren't feel-good policies," Rice told reporters in the White House briefing room. "The evidence is clear. Investing in equity is good for economic growth, and it creates jobs for all Americans."

That echoes what Biden himself has said. He has linked the urgent equity focus of his administration to the fates of all Americans.

"This is time to act, and this is time to act because it's what the core values of this nation call us to do," he said. "And I believe that the vast majority of Americans — Democrats, Republicans and independents — share these values and want us to act as well."

Thursday, December 31, 2020

Why business cannot afford to ignore tech ethics

Siddharth Venkataramakrishnan
ft.com
Originally posted 6 DEC 20

From one angle, the pandemic looks like a vindication of “techno-solutionism”. From the more everyday developments of teleconferencing to systems exploiting advanced artificial intelligence, platitudes to the power of innovation abound.

Such optimism smacks of short-termism. Desperate times often call for swift and sweeping solutions, but implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase and deploy such systems face costly repercussions, both financial and reputational.

Tech ethics, while a relatively new field, has suffered from perceptions that it is either the domain of philosophers or PR people. This could not be further from the truth — as the pandemic continues, so the importance grows of mapping out potential harms from technologies.

Take, for example, biometrics such as facial-recognition systems. These have a clear appeal for companies looking to check who is entering their buildings, how many people are wearing masks or whether social distancing is being observed. Recent advances in the field have combined technologies such as thermal scanning and “periocular recognition” (the ability to identify people wearing masks).

But the systems pose serious questions for those responsible for purchasing and deploying them. At a practical level, facial recognition has long been plagued by accusations of racial bias.


Wednesday, December 30, 2020

Google AI researcher's exit sparks ethics, bias concerns

Matt O'Brien
AP Tech Writer
Originally published 4 DEC 20

Here is an excerpt:

Gebru on Tuesday vented her frustrations about the process to an internal diversity-and-inclusion email group at Google, with the subject line: “Silencing Marginalized Voices in Every Way Possible." Gebru said on Twitter that's the email that got her fired.

Dean, in an email to employees, said the company accepted “her decision to resign from Google” because she told managers she'd leave if her demands about the study were not met.

"Ousting Timnit for having the audacity to demand research integrity severely undermines Google’s credibility for supporting rigorous research on AI ethics and algorithmic auditing," said Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology who co-authored the 2018 facial recognition study with Gebru.

“She deserves more than Google knew how to give, and now she is an all-star free agent who will continue to transform the tech industry,” Buolamwini said in an email Friday.

How Google will handle its AI ethics initiative and the internal dissent sparked by Gebru's exit is one of a number of problems facing the company heading into the new year.

At the same time she was on her way out, the National Labor Relations Board on Wednesday cast another spotlight on Google's workplace. In a complaint, the NLRB accused the company of spying on employees during a 2019 effort to organize a union before the company fired two activist workers for engaging in activities allowed under U.S. law. Google has denied the allegations in the case, which is scheduled for an April hearing.

Tuesday, December 29, 2020

Internal Google document reveals campaign against EU lawmakers

Javier Espinoza
ft.com
Originally published 28 OCT 20

Here is an excerpt:

The leak of the internal document lays bare the tactics that big tech companies employ behind the scenes to manipulate public discourse and influence lawmakers. The presentation is watermarked as “privileged and need-to-know” and “confidential and proprietary”.

The revelations are set to create new tensions between the EU and Google, which are already engaged in tough discussions about how the internet should be regulated. They are also likely to trigger further debate within Brussels, where regulators hold divergent positions on the possibility of breaking up big tech companies.

Margrethe Vestager, the EU’s executive vice-president in charge of competition and digital policy, on Tuesday argued to MEPs that structural separation of big tech is not “the right thing to do”. However, in a recent interview with the FT, Mr Breton accused such companies of being “too big to care”, and suggested that they should be broken up in extreme circumstances.

Among the other tactics outlined in the report were objectives to “undermine the idea DSA has no cost to Europeans” and “show how the DSA limits the potential of the internet . . . just as people need it the most”.

The campaign document also shows that Google will seek out “more allies” in its fight to influence the regulation debate in Brussels, including enlisting the help of Europe-based platforms such as Booking.com.

Booking.com told the FT: “We have no intention of co-operating with Google on upcoming EU platform regulation. Our interests are diametrically opposed.”


Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.
For more information on ethics in AI, download the report.

Saturday, December 12, 2020

‘All You Want Is to Be Believed’: The Impacts of Unconscious Bias in Health Care

April Dembosky
KHN.com
Originally published 21 Oct 20

Here is an excerpt:

Research shows how doctors’ unconscious bias affects the care people receive, with Latino and Black patients being less likely to receive pain medications or get referred for advanced care than white patients with the same complaints or symptoms, and more likely to die in childbirth from preventable complications.

In the hospital that day in May, Monterroso was feeling woozy and having trouble communicating, so she had a friend and her friend’s cousin, a cardiac nurse, on the phone to help. They started asking questions: What about Karla’s accelerated heart rate? Her low oxygen levels? Why are her lips blue?

The doctor walked out of the room. He refused to care for Monterroso while her friends were on the phone, she said, and when he came back, the only thing he wanted to talk about was Monterroso’s tone and her friends’ tone.

“The implication was that we were insubordinate,” Monterroso said.

She told the doctor she didn’t want to talk about her tone. She wanted to talk about her health care. She was worried about possible blood clots in her leg and she asked for a CT scan.

“Well, you know, the CT scan is radiation right next to your breast tissue. Do you want to get breast cancer?” Monterroso recalled the doctor saying to her. “I only feel comfortable giving you that test if you say that you’re fine getting breast cancer.”

Monterroso thought to herself, “Swallow it up, Karla. You need to be well.” And so she said to the doctor: “I’m fine getting breast cancer.”

He never ordered the test.

Monterroso asked for a different doctor, for a hospital advocate. No and no, she was told. She began to worry about her safety. She wanted to get out of there. Her friends, all calling every medical professional they knew to confirm that this treatment was not right, came to pick her up and drove her to the University of California-San Francisco. The team there gave her an EKG, a chest X-ray and a CT scan.

Monday, November 16, 2020

Religious moral righteousness over care: a review and a meta-analysis

Current Opinion in Psychology
Volume 40, August 2021, Pages 79-85

Abstract

Does religion enhance an ‘extended’ morality? We review research on religiousness and Schwartz’s values, Haidt’s moral foundations (through a meta-analysis of 45 studies), and deontology versus consequentialism (a review of 27 studies). Instead of equally encompassing prosocial (care for others) and other values (duties to the self, the community, and the sacred), religiosity implies a restrictive morality: endorsement of values denoting social order (conservation, loyalty, and authority), self-control (low autonomy and self-expansion), and purity more strongly than care; and, furthermore, a deontological, non-consequentialist, righteous orientation, that could result in harm to (significant) others. Religious moral righteousness is highest in fundamentalism and weakens in secular countries. Only spirituality reflects an extended morality (care, fairness, and the binding foundations). Evolutionarily, religious morality seems to be more coalitional and ‘hygienic’ than caring.

Highlights

• We meta-analyzed 45 studies on religion and Haidt’s five moral foundations.

• Religiosity implies high purity, authority, and loyalty; care is involved only weakly.

• Only spirituality reflects extended morality: care, fairness, and the binding values.

• Results parallel findings on religion and Schwartz’s values across the world.

• Religious morality is primarily deontological, non-consequentialist, and righteous.

Conclusion

On the basis of the findings of the various research areas examined in this article, we think it is reasonable to infer that the role of religious (ingroup) prosociality in forming and consolidating large coalitions involving reciprocal interpersonal helping may have been overestimated in the contemporary evolutionary psychology of religion.  This role may not reflect the very center of religious morality. Rather, the results of the present review suggest that the evolutionary perspectives of religion focusing on the importance of hygienic and righteous/coalitional morality (avoidance of pathogens, loyalty, group conformity, as well as preservation of personal and social order) may be more central in explaining, from a moral perspective, religions’ origin and maintenance.

Friday, October 16, 2020

When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions

Newman, D., Fast, N. and Harmon, D.
Organizational Behavior and Human Decision Processes
Volume 160, September 2020, Pages 149-167

Abstract

The perceived fairness of decision-making procedures is a key concern for organizations, particularly when evaluating employees and determining personnel outcomes. Algorithms have created opportunities for increasing fairness by overcoming biases commonly displayed by human decision makers. However, while HR algorithms may remove human bias in decision making, we argue that those being evaluated may perceive the process as reductionistic, leading them to think that certain qualitative information or contextualization is not being taken into account. We argue that this can undermine their beliefs about the procedural fairness of using HR algorithms to evaluate performance by promoting the assumption that decisions made by algorithms are based on less accurate information than identical decisions made by humans. Results from four laboratory experiments (N = 798) and a large-scale randomized experiment in an organizational setting (N = 1654) confirm this hypothesis. Theoretical and practical implications for organizations using algorithms and data analytics are discussed.

Highlights

• Algorithmic decisions are perceived as less fair than identical decisions by humans.

• Perceptions of reductionism mediate the adverse effect of algorithms on fairness.

• Algorithmic reductionism comes in two forms: quantification and decontextualization.

• Employees voice lower organizational commitment when evaluated by algorithms.

• Perceptions of unfairness mediate the adverse effect of algorithms on commitment.

Conclusion

Perceived unfairness notwithstanding, algorithms continue to gain increasing influence in human affairs, not only in organizational settings but throughout our social and personal lives. How this influence plays out against our sense of fairness remains to be seen but should undoubtedly be of central interest to justice scholars in the years ahead. Will the compilers of analytics and writers of algorithms adapt their methods to comport with intuitive notions of morality? Or will our understanding of fairness adjust to the changing times, becoming inured to dehumanization in an ever more impersonal world? Questions such as these will be asked more and more frequently as technology reshapes modes of interaction and organization that have held sway for generations. We have sought to contribute answers to these questions, and we hope that our work will encourage others to continue studying these and related topics.

Friday, October 9, 2020

AI ethics groups are repeating one of society’s classic mistakes

Abhishek Gupta and Victoria Heath
MIT Technology Review
Originally published 14 September 20

Here is an excerpt:

Unfortunately, as it stands today, the entire field of AI ethics is at grave risk of limiting itself to languages, ideas, theories, and challenges from a handful of regions—primarily North America, Western Europe, and East Asia.

This lack of regional diversity reflects the current concentration of AI research (pdf): 86% of papers published at AI conferences in 2018 were attributed to authors in East Asia, North America, or Europe. And fewer than 10% of references listed in AI papers published in these regions are to papers from another region. Patents are also highly concentrated: 51% of AI patents published in 2018 were attributed to North America.

Those of us working in AI ethics will do more harm than good if we allow the field’s lack of geographic diversity to define our own efforts. If we’re not careful, we could wind up codifying AI’s historic biases into guidelines that warp the technology for generations to come. We must start to prioritize voices from low- and middle-income countries (especially those in the “Global South”) and those from historically marginalized communities.

Advances in technology have often benefited the West while exacerbating economic inequality, political oppression, and environmental destruction elsewhere. Including non-Western countries in AI ethics is the best way to avoid repeating this pattern.

The good news is there are many experts and leaders from underrepresented regions to include in such advisory groups. However, many international organizations seem not to be trying very hard to solicit participation from these people. The newly formed Global AI Ethics Consortium, for example, has no founding members representing academic institutions or research centers from the Middle East, Africa, or Latin America. This omission is a stark example of colonial patterns (pdf) repeating themselves.

Monday, September 21, 2020

Changing morals: we’re more compassionate than 100 years ago, but more judgmental too

N. Haslam, M. J. McGrady, & M. A. Wheeler
The Conversation
Originally published 4 March 19

Here is an excerpt:

Differently moral

We found basic moral terms (see the black line below) became dramatically scarcer in English-language books as the 20th century unfolded – which fits the de-moralisation narrative. But an equally dramatic rebound began in about 1980, implying a striking re-moralisation.

The five moral foundations, on the other hand, show a vastly changing trajectory. The purity foundation (green line) shows the same plunge and rebound as the basic moral terms. Ideas of sacredness, piety and purity, and of sin, desecration and indecency, fell until about 1980, and rose afterwards.

The other moralities show very different pathways. Perhaps surprisingly, the egalitarian morality of fairness (blue) showed no consistent rise or fall.

In contrast, the hierarchy-based morality of authority (grey) underwent a gentle decline for the first half of the century. It then sharply rose as the gathering crisis of authority shook the Western world in the late 1960s. This morality of obedience and conformity, insubordination and rebellion, then receded equally sharply through the 1970s.

Ingroup morality (orange), reflected in the communal language of loyalty and unity, insiders and outsiders, displays the clearest upward trend through the 20th century. Discernible bumps around the two world wars point to passing elevations in the “us and them” morality of threatened communities.

Finally, harm-based morality (red) presents a complex but intriguing trend. Its prominence falls from 1900 to the 1970s, interrupted by similar wartime bumps when themes of suffering and destruction became understandably urgent. But harm rises steeply from about 1980 in the absence of a single dominating global conflict.


Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
nature.com
Originally posted 7 July 20

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.


Thursday, September 3, 2020

Children’s evaluations of third-party responses to unfairness: Children prefer helping over punishment.

Lee, Y., & Warneken, F. (2020, June 13).
https://doi.org/10.31234/osf.io/x8e7w

Abstract

Third-party punishment of selfish individuals is an important mechanism to intervene against unfairness. However, there is another way in which third parties can intervene. Rather than focusing on the unfair individual, third parties can choose to help those who were treated unfairly by reducing inequality. Such third-party helping as an alternative to third-party punishment has received little attention in studies with children. Across four studies, we examined the evaluations of third-party punishment versus third-party helping in N = 322 5- to 9-year-old children. Studies 1, 3 and 4 showed that when asked about the agents directly, children evaluated both helpers and punishers positively, but they preferred helpers over punishers overall. When asked about the type of intervention itself, children preferred helping over punishment, suggesting that their preference for the type of intervention corresponds to how children think about the agents performing these interventions. Study 2 showed that children’s preference for third-party helping is driven by distributive justice concerns and not a mere preference for giving or resource maximization, as children consider which type of third-party intervention decreases inequality. Together, this series of studies demonstrates that children between 5 and 9 years of age develop a sophisticated understanding of punishment and helping as two adequate forms of intervention but also display a preference for third-party helping. We discuss how these findings and prior work with adults support the hypothesis of developmental continuity, showing that a preference for helping over punishment is deeply rooted in ontogeny.

From the Discussion:

The current study contributes to the literature by moving beyond the focus on punishment alone and probing children’s thinking about punishment and helping side by side. Prior developmental research focused on comparing punishers with third parties such as onlookers who choose not to intervene after witnessing a transgression (e.g., Vaish et al., 2016) or givers who reward a transgressor (e.g., Hamlin et al., 2011), which might have led to inflating children’s preference for punishers. Instead, the current study compared punishment with helping, a valid and common form of third-party intervention. Additionally, our study assessed children’s evaluations of punishment intervention per se and revealed a subtle but meaningful difference in understanding punishers vs. punishment, which was especially remarkable in young children. With the use of various measures and comparisons, the current study provided a more comprehensive understanding of the development of third-party punishment in children.

Sunday, June 21, 2020

Downloading COVID-19 contact tracing apps is a moral obligation

G. Owen Schaefer and Angela Ballantyne
BMJ Blogs
Originally posted 4 May 20

Should you download an app that could notify you if you had been in contact with someone who contracted COVID-19? Such apps are already available in countries such as Israel, Singapore, and Australia, with other countries like the UK and US soon to follow. Here, we explain why you might have an ethical obligation to use a tracing app during the COVID-19 pandemic, even in the face of privacy concerns.

(cut)

Vulnerability and unequal distribution of risk

Marginalized populations are both hardest hit by pandemics and often have the greatest reason to be sceptical of supposedly benign State surveillance. COVID-19 is a jarring reminder of global inequality, structural racism, gender inequity, entrenched ableism, and many other social divisions. During the SARS outbreak, Toronto struggled to adequately respond to the distinctive vulnerabilities of people who were homeless. In America, people of colour are at greatest risk in several dimensions – less able to act on public health advice such as social distancing, more likely to contract the virus, and more likely to die from severe COVID if they do get infected. When public health advice switched to recommending (or in some cases requiring) masks, some African Americans argued it was unsafe for them to cover their faces in public. People of colour in the US are at increased risk of state surveillance and police violence, in part because they are perceived to be threatening and violent. In New York City, black and Latino patients are dying from COVID-19 at twice the rate of non-Hispanic white people.

Marginalized populations have historically been harmed by State health surveillance. For example, indigenous populations have been the victims of State data collection to inform and implement segregation, dispossession of land, forced migration, as well as removal and ‘re‐education’ of their children. Stigma and discrimination have impeded the public health response to HIV/AIDS, as many countries still have HIV-specific laws that prosecute people living with HIV for a range of offences.  Surveillance is an important tool for implementing these laws. Marginalized populations therefore have good reasons to be sceptical of health related surveillance.

Wednesday, June 10, 2020

The moral courage of the military in confronting the commander in chief

Robert Bruce Adolph
Tampa Bay Times
Originally posted 9 June 20

The president recently threatened to use our active duty military to “dominate” demonstrators nationwide, who are exercising their wholly legitimate right to assemble and be heard.

The distinguished former Secretary of Defense Jim Mattis nailed it in his recent broadside published in The Atlantic that took aim at our current commander-in-chief. Mattis states, “When I joined the military, some 50 years ago … I swore an oath to support and defend the Constitution. Never did I dream that troops taking the same oath would be ordered under any circumstances to violate the constitutional rights of their fellow citizens—much less to provide a bizarre photo op for the elected commander-in-chief, with military leadership standing alongside.”

The current Secretary of Defense, Mark Esper, who now perhaps regrets being made into a photographic prop for the president, has come out publicly against using the active duty military to quell civil unrest in our cities; as have 89 high-ranking former defense officials who stated that they were “alarmed” by the chief executive’s threat to use troops against our country’s citizens on U.S. soil. Former Secretary of State Colin Powell, a former U.S. Army general and Republican Party member, has also taken aim at this presidency by stating that he will vote for Joe Biden in the next election.


Saturday, June 6, 2020

Motivated misremembering of selfish decisions

Carlson, R.W., Maréchal, M.A., Oud, B. et al.
Nature Communications 11, 2100 (2020).
https://doi.org/10.1038/s41467-020-15602-4

Abstract

People often prioritize their own interests, but also like to see themselves as moral. How do individuals resolve this tension? One way to both pursue personal gain and preserve a moral self-image is to misremember the extent of one’s selfishness. Here, we test this possibility. Across five experiments (N = 3190), we find that people tend to recall being more generous in the past than they actually were, even when they are incentivized to recall their decisions accurately. Crucially, this motivated misremembering effect occurs chiefly for individuals whose choices violate their own fairness standards, irrespective of how high or low those standards are. Moreover, this effect disappears under conditions where people no longer perceive themselves as responsible for their fairness violations. Together, these findings suggest that when people’s actions fall short of their personal standards, they may misremember the extent of their selfishness, thereby potentially warding off threats to their moral self-image.

From the Discussion

Specifically, these findings suggest that those who violate (as opposed to uphold) their personal standards misremember the extent of their selfishness. Moreover, they highlight the key motivational role of perceived responsibility for norm violations—consistent with classic accounts from social psychology, and recent evidence from experimental economics. However, since we focused specifically on those who reported no responsibility, it is also conceivable that other factors might have differed between the participants who felt responsible and those who did not.

We interpret these results as evidence of motivated memory distortion, however, an alternative account would hold that these individuals were aware of their true level of generosity at recall, yet were willing to pay a cost to claim having been more generous. While this account is not inconsistent with prior work, it should be less likely in a context which is anonymous, involves no future interaction with any partners, and requires memories to be verified by an experimenter. Accordingly, we found little to no effect of trait social desirability on peoples’ reported memories. Together, these points suggest that people were actually misremembering their choices, rather than consciously lying about them.


Friday, May 29, 2020

When Is “Gay Panic” Accepted? Exploring Juror Characteristics and Case Type as Predictors of a Successful Gay Panic Defense

Michalski, N. D., & Nunez, N. (2020).
Journal of Interpersonal Violence. 
https://doi.org/10.1177/0886260520912595

Abstract

“Gay panic” refers to a situation in which a heterosexual individual charged with a violent crime against a homosexual individual claims they lost control and reacted violently because of an unwanted sexual advance that was made upon them. This justification for a violent crime presented by the defendant in the form of a provocation defense is used as an effort to mitigate the charges brought against him. There has been relatively little research conducted concerning this defense strategy and the variables that might predict when the defense is likely to be successful in achieving a lesser sentence for the defendant. This study utilized 249 mock jurors to assess the effects of case type (assault or homicide) and juror characteristics (homophobia, religious fundamentalism, and political orientation) on the success of the gay panic defense compared with a neutral provocation defense. Participant homophobia was found to be the driving force behind their willingness to accept the gay panic defense as legitimate. Higher levels of homophobia and religious fundamentalism were found to predict more leniency in verdict decisions when the gay panic defense was presented. This study furthers the understanding of decision making in cases involving the gay panic defense and highlights the need for more research to be conducted to help understand and combat LGBT (lesbian, gay, bisexual, and transgender) prejudice in the courtroom.
