Welcome to the Nexus of Ethics, Psychology, Morality, Technology, Philosophy and Health Care

Showing posts with label Projection.

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted July 16, 2022

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that prompt and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.
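The audit logic behind a finding like this can be sketched in a few lines: repeatedly give a model a prompt such as "criminal," record which face it selects, and compare selection rates across demographic groups. The sketch below is purely illustrative; the scorer, group labels, and numbers are invented stand-ins, not the study's actual model or data:

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical face blocks, each tagged with a demographic group.
blocks = (["Black man"] * 5 + ["white man"] * 5 +
          ["Black woman"] * 5 + ["white woman"] * 5)

def biased_pick(prompt, candidates):
    """Stand-in for a vision-language model's choice.

    A real audit would score each face against the prompt with the
    model under test; here we fake a skewed scorer to show the math.
    """
    weights = [2.0 if c == "Black man" and prompt == "criminal" else 1.0
               for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Run many trials and compare selection rates per group.
trials = 10_000
counts = Counter(biased_pick("criminal", blocks) for _ in range(trials))
for group, n in counts.most_common():
    print(f"{group:12s} selected {n / trials:.1%} of the time")

# An unbiased model should select each group ~25% of the time;
# a sustained deviation is the kind of disparity the researchers measured.
```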

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows that the racist and sexist biases baked into artificial intelligence systems can translate into robots that use those systems to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. With demand heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Wednesday, April 20, 2022

The human black-box: The illusion of understanding human better than algorithmic decision-making

Bonezzi, A., Ostinelli, M., & Melzner, J. (2022). 
Journal of Experimental Psychology: General.

Abstract

As algorithms increasingly replace human decision-makers, concerns have been voiced about the black-box nature of algorithmic decision-making. These concerns raise an apparent paradox. In many cases, human decision-makers are just as much of a black-box as the algorithms that are meant to replace them. Yet, the inscrutability of human decision-making seems to raise fewer concerns. We suggest that one of the reasons for this paradox is that people foster an illusion of understanding human better than algorithmic decision-making, when in fact, both are black-boxes. We further propose that this occurs, at least in part, because people project their own intuitive understanding of a decision-making process more onto other humans than onto algorithms, and as a result, believe that they understand human better than algorithmic decision-making, when in fact, this is merely an illusion.

General Discussion

Our work contributes to prior literature in two ways. First, it bridges two streams of research that have thus far been considered in isolation: the illusion of explanatory depth (IOED; Rozenblit & Keil, 2002) and projection (Krueger, 1998). IOED has mostly been documented for mechanical devices and natural phenomena and has been attributed to people confusing a superficial understanding of what something does for how it does it (Keil, 2003). Our research unveils a previously unexplored driver of IOED, namely, the tendency to project one’s own cognitions onto others, and in so doing extends the scope of IOED to human decision-making. Second, our work contributes to the literature on clinical versus statistical judgments (Meehl, 1954). Previous research shows that people tend to trust humans more than algorithms (Dietvorst et al., 2015). Among the many reasons for this phenomenon (see Grove & Meehl, 1996), one is that people do not understand how algorithms work (Yeomans et al., 2019). Our research suggests that people’s distrust toward algorithms may stem not only from a lack of understanding of how algorithms work but also from an illusion of understanding how their human counterparts operate.
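The IOED paradigm referenced here has a simple operational core: participants rate how well they understand a process, attempt a step-by-step explanation, then re-rate their understanding; the drop between ratings is the illusion score. A minimal sketch of that scoring, using invented ratings rather than the paper's data:

```python
# Hypothetical pre/post understanding ratings (1-7 scale) for a few
# participants, before and after being asked to explain the process.
pre_ratings  = [6, 5, 7, 4, 6]
post_ratings = [3, 4, 4, 3, 2]

# The IOED score is the pre-minus-post drop: how much self-rated
# understanding deflates once an explanation is actually attempted.
ioed_scores = [pre - post for pre, post in zip(pre_ratings, post_ratings)]
mean_ioed = sum(ioed_scores) / len(ioed_scores)
print(f"per-participant IOED: {ioed_scores}, mean drop: {mean_ioed:.2f}")
```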

Our work can be extended by exploring other consequences and psychological processes associated with the illusion of understanding humans better than algorithms. As for consequences, more research is needed to explore how illusory understanding affects trust in humans versus algorithms. Our work suggests that the illusion of understanding humans more than algorithms can yield greater trust in decisions made by humans. Yet, to the extent that such an illusion stems from a projection mechanism, it might also lead to favoring algorithms over humans, depending on the underlying introspections. Because people’s introspections can be fraught with biases and idiosyncrasies they might not even be aware of (Nisbett & Wilson, 1977; Wilson, 2004), people might erroneously project these same biases and idiosyncrasies more onto other humans than onto algorithms and consequently trust those humans less than algorithms. To illustrate, one might expect a recruiter to favor people of the same gender or ethnic background just because one may be inclined to do so. In these circumstances, the illusion of understanding humans better than algorithms might yield greater trust in algorithmic than human decisions (Bonezzi & Ostinelli, 2021).

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits—traits that help us quickly decide who our allies are (Goodwin et al., 2014)—physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances, this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000) and thus obtain valuable information about whether the target is a threat or ally, where morally relevant information is absent (such as during initial impression formation), this motivation to assess moral character may lead to an overreliance on heuristic cues.
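The selective-bias claim boils down to an interaction test: the attractiveness advantage for moral traits should exceed the attractiveness advantage for non-moral positive traits. A toy difference-of-differences computation, using invented means rather than the study's data, makes the comparison concrete:

```python
# Hypothetical mean trait ratings (1-7) by face attractiveness.
means = {
    ("attractive",   "moral"):     5.4,
    ("unattractive", "moral"):     4.1,
    ("attractive",   "non-moral"): 5.0,
    ("unattractive", "non-moral"): 4.4,
}

# Attractiveness advantage within each trait type.
moral_gap = means[("attractive", "moral")] - means[("unattractive", "moral")]
nonmoral_gap = (means[("attractive", "non-moral")]
                - means[("unattractive", "non-moral")])

# A positive interaction means beauty biases moral traits more.
interaction = moral_gap - nonmoral_gap
print(f"moral gap: {moral_gap:.1f}, non-moral gap: {nonmoral_gap:.1f}, "
      f"interaction: {interaction:.1f}")
```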

Sunday, June 7, 2020

Friends, Lovers or Nothing: Men and Women Differ in Their Perceptions of Sex Robots and Platonic Love Robots

M. Nordmo, J. O. Næss, and others
Front. Psychol., 13 March 2020
https://doi.org/10.3389/fpsyg.2020.00355

Abstract

Physical and emotional intimacy between humans and robots may become commonplace over the next decades, as technology improves at a rapid rate. This development provides new questions pertaining to how people perceive robots designed for different kinds of intimacy, both as companions and potentially as competitors. We performed a randomized experiment in which participants read about either a robot that could only perform sexual acts or one that could only engage in non-sexual platonic love relationships. The results of the current study show that females have less positive views of robots, and especially of sex robots, than men do. Contrary to the expectation rooted in evolutionary psychology, females expected to feel more jealousy if their partner got a sex robot rather than a platonic love robot. The results further suggest that people project their own feelings about robots onto their partner, erroneously expecting the partner to react as they themselves would to the thought of one’s partner having a robot.
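The projection claim in the final sentence is, operationally, a correlation between participants' own robot attitudes and the attitudes they predict for their partner. A small simulated sketch, with invented ratings rather than the study's data, shows the index:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ratings (1-7): each participant's own attitude toward
# the robot, and the attitude they predict their partner would hold.
own = rng.normal(4, 1.2, 200)
predicted_partner = 0.7 * own + rng.normal(1.2, 0.8, 200)

# Projection shows up as a positive own/predicted-partner correlation.
r = np.corrcoef(own, predicted_partner)[0, 1]
print(f"projection correlation: r = {r:.2f}")
```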

From the Discussion

The results of the analysis confirm previous findings that males are more positive toward the advent of robots than females (Scheutz and Arnold, 2016). Females who had read about the sex robot reported particularly elevated levels of jealousy, less favorable attitudes, more dislike, and a stronger expectation that their partner would dislike the robot. This pattern was not found in the male sample, whose feelings were largely unaffected by the type of robot they were made to envision.

One possible explanation for the gender difference could be a combination of differences in how males and females frame the concept of human-robot sexual relations, as well as different attitudes toward masturbation and the use of artificial stimulants for masturbatory purposes.


Thursday, May 23, 2019

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.
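The core measures in Studies 1 through 5 reduce to two quantities: a discrepancy score (how far the pre-committed prediction missed observed performance) and a belief-updating score (how far the post-task self-rating moved). A minimal sketch with invented numbers, not the dissertation's data:

```python
# Hypothetical data for one participant in the anagrams paradigm.
predicted_solved = 18   # pre-committed prediction
actual_solved = 9       # observed task performance
ability_before = 6.0    # self-rated verbal ability before feedback
ability_after = 4.5     # self-rated verbal ability after feedback

# Discrepancy: how far the pre-committed prediction missed reality.
discrepancy = abs(predicted_solved - actual_solved)

# Belief updating: movement in the self-rating after seeing the outcome.
updating = ability_after - ability_before

print(f"discrepancy: {discrepancy}, belief updating: {updating:+.1f}")
# The dissertation's claim is that pre-committed predictions produce
# larger discrepancies, which in turn drive larger belief updates.
```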


Friday, July 13, 2018

Rorschach (regarding AI)

Michael Solana
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?

Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by merit? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost pure self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves — law, faith, society? The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — can not, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.

Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically they have found great success in doing this with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.


Saturday, December 9, 2017

Evidence-Based Policy Mistakes

Kausik Basu
Project Syndicate
Originally published November 30, 2017

Here is an excerpt:

Likewise, US President Donald Trump cites simplistic trade-deficit figures to justify protectionist policies that win him support among a certain segment of the US population. In reality, the evidence suggests that such policies will hurt the very people Trump claims to be protecting.

Now, the chair of Trump’s Council of Economic Advisers, Kevin Hassett, is attempting to defend Congressional Republicans’ effort to slash corporate taxes by claiming that, when developed countries have done so in the past, workers gained “well north of” $4,000 per year. Yet there is ample evidence that the benefits of such tax cuts accrue disproportionately to the rich, largely via companies buying back stock and shareholders earning higher dividends.

It is not clear whence Hassett is getting his data. But chances are that, at the very least, he is misinterpreting it. And he is far from alone in failing to reach accurate conclusions when assessing a given set of data.

Consider the oft-repeated refrain that, because there is evidence that virtually all jobs over the last decade were created by the private sector, the private sector must be the most effective job creator. At first glance, the logic might seem sound. But, on closer examination, the statement begs the question. Imagine a Soviet economist claiming that, because the government created virtually all jobs in the Soviet Union, the government must be the most effective job creator. To find the truth, one would need, at a minimum, data on who else tried to create jobs, and how.


Wednesday, October 29, 2014

Beliefs About God and Mental Health Among American Adults

Nava R. Silton, Kevin J. Flannelly, Kathleen Galek, Christopher G. Ellison
Journal of Religion and Health
October 2014, Volume 53, Issue 5, pp 1285-1296

Abstract

This study examines the association between beliefs about God and psychiatric symptoms in the context of Evolutionary Threat Assessment System Theory, using data from the 2010 Baylor Religion Survey of US Adults (N = 1,426). Three beliefs about God were tested separately in ordinary least squares regression models to predict five classes of psychiatric symptoms: general anxiety, social anxiety, paranoia, obsession, and compulsion. Belief in a punitive God was positively associated with four psychiatric symptoms, while belief in a benevolent God was negatively associated with four psychiatric symptoms, controlling for demographic characteristics, religiousness, and strength of belief in God. Belief in a deistic God and one’s overall belief in God were not significantly related to any psychiatric symptoms.
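The analysis described is a family of separate OLS regressions, one per belief measure and symptom class, with demographics and religiosity as covariates. A minimal sketch of one such model using statsmodels, on simulated data with simplified covariates (the variable names here are illustrative, not the Baylor survey's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_426  # matches the survey's N, but every value below is simulated

df = pd.DataFrame({
    "punitive_god": rng.normal(0, 1, n),   # belief-in-a-punitive-God score
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "religiousness": rng.normal(0, 1, n),
    "belief_strength": rng.normal(0, 1, n),
})
# Simulated outcome: anxiety rises with punitive-God belief, plus noise.
df["general_anxiety"] = 0.3 * df["punitive_god"] + rng.normal(0, 1, n)

# One of the separate models: punitive-God belief predicting general
# anxiety, controlling for demographics and religiosity measures.
model = smf.ols(
    "general_anxiety ~ punitive_god + age + female"
    " + religiousness + belief_strength",
    data=df,
).fit()
print(model.summary().tables[1])
```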
