Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Humans.

Thursday, May 27, 2021

How Adobe’s Ethics Committee Helps Manage AI Bias

Jared Council
The Wall Street Journal
Originally posted 5 May 21

Review boards can help companies mitigate some of the risks associated with using artificial intelligence, according to Adobe Inc. executive Dana Rao.

Mr. Rao, Adobe’s general counsel, said one of the top risks in using AI systems is that the technology can perpetuate harmful bias against certain demographics, based on what it learns from data. Ethics committees can be one way of managing those risks and putting organizational values into practice.

Adobe’s AI ethics committee, launched two years ago, has been able to review new features for potential bias before those features are deployed, Mr. Rao said Wednesday at The Wall Street Journal’s Risk & Compliance Forum. The committee is made up of employees of various ethnicities and genders from different parts of the company, including legal, government relations and marketing.

“It takes a lot of people across your company to help figure this out,” he said. “Sometimes we might look at it and say there’s not an issue here,” he added, but getting a diverse group of people together can help identify issues product developers might miss.
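The article describes the review process rather than its tooling, but one concrete check a bias review like this could run is comparing a model’s positive-prediction rates across demographic groups. Below is a minimal Python sketch; the data, group labels, and gap metric are illustrative assumptions, not Adobe’s actual process.

# A toy check a review board might request before a feature ships:
# compare a model's positive-prediction rate across demographic groups.
# All names and data here are hypothetical illustrations.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return per-group positive-prediction rates and the largest gap between them."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for members of two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates, gap = demographic_parity_gap(preds, groups)
print(rates, "gap =", gap)  # a large gap would prompt further human review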

Monday, May 24, 2021

The evolutionary origin of human hyper-cooperation

Burkart, J., Allon, O., Amici, F. et al. 
Nat Commun 5, 4747 (2014). 
https://doi.org/10.1038/ncomms5747

Abstract

Proactive, that is, unsolicited, prosociality is a key component of our hyper-cooperation, which in turn has enabled the emergence of various uniquely human traits, including complex cognition, morality and cumulative culture and technology. However, the evolutionary foundation of the human prosocial sentiment remains poorly understood, largely because primate data from numerous, often incommensurable testing paradigms do not provide an adequate basis for formal tests of the various functional hypotheses. We therefore present the results of standardized prosociality experiments in 24 groups of 15 primate species, including humans. Extensive allomaternal care is by far the best predictor of interspecific variation in proactive prosociality. Proactive prosocial motivations therefore systematically arise whenever selection favours the evolution of cooperative breeding. Because the human data fit this general primate pattern, the adoption of cooperative breeding by our hominin ancestors also provides the most parsimonious explanation for the origin of human hyper-cooperation.

(cut)

Our results demonstrate that the extent of allomaternal care provides the best explanation for the distribution of proactive prosociality among primate species, including humans. This conclusion is not affected when using different ways of quantifying allomaternal care. Importantly, we find no support for any of the other hypotheses, even when more refined analyses of within-species, dyad-level variation are conducted. The adoption of extensive allomaternal care by our hominin ancestors thus provides the most parsimonious explanation for the origin of human hyper-cooperation.

Tuesday, May 4, 2021

Human cells grown in monkey embryos reignite ethics debate

Nicola Davis
The Guardian
Originally published 15 Apr 21

Monkey embryos containing human cells have been produced in a laboratory, a study has confirmed, spurring fresh debate over the ethics of such experiments.

The embryos are known as chimeras, organisms whose cells come from two or more “individuals”, and in this case, different species: a long-tailed macaque and a human.

In recent years researchers have produced pig embryos and sheep embryos that contain human cells – research they say is important as it could one day allow them to grow human organs inside other animals, increasing the number of organs available for transplant.

Now scientists have confirmed they have produced macaque embryos that contain human cells, revealing the cells could survive and even multiply.

In addition, the researchers, led by Prof Juan Carlos Izpisua Belmonte from the Salk Institute in the US, said the results offer new insight into communications pathways between cells of different species: work that could help them with their efforts to make chimeras with species that are less closely related to our own.

“These results may help to better understand early human development and primate evolution and develop effective strategies to improve human chimerism in evolutionarily distant species,” the authors wrote.

The study confirms rumours reported in the Spanish newspaper El País in 2019 that a team of researchers led by Belmonte had produced monkey-human chimeras. The word chimera comes from a beast in Greek mythology that was said to be part lion, part goat and part snake.

Friday, February 28, 2020

Slow response times undermine trust in algorithmic (but not human) predictions

E Efendic, P van de Calseyde, & A Evans
PsyArXiv PrePrints
Last edited 22 Jan 20

Abstract

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one component that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and they are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy and effort is therefore uncorrelated to the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.

General discussion 

When are people reluctant to trust algorithm-generated advice? Here, we demonstrate that it depends on the algorithm’s response time. People judged slowly (vs. quickly) generated predictions by algorithms as being of lower quality. Further, people were less willing to use slowly generated algorithmic predictions. For human predictions, we found the opposite: people judged slow human-generated predictions as being of higher quality. Similarly, they were more likely to use slowly generated human predictions. 

We find that the asymmetric effects of response time can be explained by different expectations of task difficulty for humans vs. algorithms. For humans, slower responses were congruent with expectations; the prediction task was presumably difficult so slower responses, and more effort, led people to conclude that the predictions were high quality. For algorithms, slower responses were incongruent with expectations; the prediction task was presumably easy so slower speeds, and more effort, were unrelated to prediction quality. 

The research is here.

Monday, February 24, 2020

An emotionally intelligent AI could support astronauts on a trip to Mars

Neel Patel
MIT Technology Review
Originally published 14 Jan 20

Here are two excerpts:

Keeping track of a crew’s mental and emotional health isn’t really a problem for NASA today. Astronauts on the ISS regularly talk to psychiatrists on the ground. NASA ensures that doctors are readily available to address any serious signs of distress. But much of this system is possible only because the astronauts are in low Earth orbit, easily accessible to mission control. In deep space, you would have to deal with lags in communication that could stretch for hours. Smaller agencies or private companies might not have mental health experts on call to deal with emergencies. An onboard emotional AI might be better equipped to spot problems and triage them as soon as they come up.

(cut)

Akin’s biggest obstacles are those that plague the entire field of emotional AI. Lisa Feldman Barrett, a psychologist at Northeastern University who specializes in human emotion, has previously pointed out that the way most tech firms train AI to recognize human emotions is deeply flawed. “Systems don’t recognize psychological meaning,” she says. “They recognize physical movements and changes, and they infer psychological meaning.” Those are certainly not the same thing.

But a spacecraft, it turns out, might actually be an ideal environment for training and deploying an emotionally intelligent AI. Since the technology would be interacting with just the small group of people onboard, says Barrett, it would be able to learn each individual’s “vocabulary of facial expressions” and how they manifest in the face, body, and voice.

The info is here.

Tuesday, July 3, 2018

What does a portrait of Erica the android tell us about being human?

Nigel Warburton
The Guardian
Originally posted September 9, 2017

Here are two excerpts:

Another traditional answer to the question of what makes us so different, popular for millennia, has been that humans have a non-physical soul, one that inhabits the body but is distinct from it, an ethereal ghostly wisp that floats free at death to enjoy an after-life which may include reunion with other souls, or perhaps a new body to inhabit. To many of us, this is wishful thinking on an industrial scale. It is no surprise that survey results published last week indicate that a clear majority of Britons (53%) describe themselves as non-religious, with a higher percentage of younger people taking this enlightened attitude. In contrast, 70% of Americans still describe themselves as Christians, and a significant number of those have decidedly unscientific views about human origins. Many, along with St Augustine, believe that Adam and Eve were literally the first humans, and that everything was created in seven days.

(cut)

Today a combination of evolutionary biology and neuroscience gives us more plausible accounts of what we are than Descartes did. These accounts are not comforting. They reverse the priority and emphasise that we are animals and provide no evidence for our non-physical existence. Far from it. Nor are they in any sense complete, though there has been great progress. Since Charles Darwin disabused us of the notion that human beings are radically different in kind from other apes by outlining in broad terms the probable mechanics of evolution, evolutionary psychologists have been refining their hypotheses about how we became this kind of animal and not another, why we were able to surpass other species in our use of tools, communication through language and images, and ability to pass on our cultural discoveries from generation to generation.

The article is here.

Tuesday, April 24, 2018

The Next Best Version of Me: How to Live Forever

David Ewing Duncan
Wired.com
Originally published March 27, 2018

Here is an excerpt:

There are also the ethics of using a powerful new technology to muck around with life’s basic coding. Theoretically, scientists could one day manufacture genomes, human or otherwise, almost as easily as writing code on a computer, transforming digital DNA on someone’s laptop into living cells of, say, Homo sapiens. Mindful of the controversy, Church and his HGP-Write colleagues insist that minting people is not their goal, though the sheer audacity of making genome-scale changes to human DNA is enough to cause controversy. “People get upset if you put a gene from another species into something you eat,” says Stanford bioethicist and legal scholar Henry Greely. “Now we’re talking about a thorough rewriting of life? Hairs will stand on end. Hackles will be raised.”

Raised hackles or not, Church and his team are forging ahead. “We want to start with a human Y,” he says, referring to the male sex chromosome, which he explains has the fewest genes of a person’s 23 chromosomes and is thus easier to build. And he doesn’t want to synthesize just any Y chromosome. He and his team want to use the Y chromosome sequence from an actual person’s genome: mine.

“Can you do that?” I stammer.

“Of course we can—with your permission,” he says, reminding me that it would be easy to tap into my genome, since it was stored digitally in his lab’s computers as part of an effort he launched in 2005 called the Personal Genome Project.

The article is here.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Wednesday, October 18, 2017

Danny Kahneman on AI versus Humans


NBER Economics of AI Workshop 2017

Here is a rough transcription of an excerpt:

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called “judgment”. Here in my noggin it’s “evaluation of outcomes”: the utility side of the decision function. I really don’t see why that should be reserved to humans.

I’d like to make the following argument:
  1. The main characteristic of people is that they’re very “noisy”.
  2. You show them the same stimulus twice, they don’t give you the same response twice.
  3. You show them the same choice twice, I mean—that’s why we have stochastic choice theory: because there is so much variability in people’s choices given the same stimuli.
  4. Now what can be done even without AI is a program that observes an individual, one that will be better than the individual and will make better choices for the individual because it will be noise-free.
  5. We know an interesting tidbit from the literature on predictions that Colin cited:
  6. If you take clinicians and you have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician’s judgment, that model does better in predicting the outcome than the clinician does. [A minimal simulation of this result appears below.]
  7. That is fundamental.
This is telling you that one of the major limitations on human performance is not bias; it is just noise.
I’m maybe partly responsible for this, but when people now talk about error, they tend to think of bias as an explanation: the first thing that comes to mind. Well, there is bias, and it is an error. But in fact most of the errors people make are better viewed as random noise. And there’s an awful lot of it.
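Point 6 is the classic “model of the judge” finding from the clinical-judgment literature. The sketch below is not from the talk; it is a minimal simulation with synthetic data and assumed cue weights and noise levels, showing why a noise-free linear equation fit to a clinician’s own judgments can predict the outcome better than the clinician does.

# Minimal simulation of the "model of the judge" result described in point 6.
# The cue weights and noise levels below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 1000, 3

cues = rng.normal(size=(n_cases, n_cues))
true_weights = np.array([0.5, 0.3, 0.2])

# The outcome depends on the cues plus irreducible uncertainty.
outcome = cues @ true_weights + rng.normal(scale=0.5, size=n_cases)

# The clinician weighs the cues roughly correctly but adds trial-to-trial noise.
judgment = cues @ true_weights + rng.normal(scale=0.8, size=n_cases)

# Fit a simple linear equation to the clinician's judgments, not the outcome.
coef, *_ = np.linalg.lstsq(cues, judgment, rcond=None)
model_of_judge = cues @ coef

print("clinician vs. outcome:      r =", round(float(np.corrcoef(judgment, outcome)[0, 1]), 3))
print("model of judge vs. outcome: r =", round(float(np.corrcoef(model_of_judge, outcome)[0, 1]), 3))
# The noise-free model of the judge correlates more strongly with the outcome
# than the judge's own noisy judgments do: averaging out the judge's noise is
# exactly what makes the simple equation win.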

The entire transcript and target article are here.

Monday, May 8, 2017

Raising good robots

Regina Rini
aeon.com
Originally published April 18, 2017

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

The article is here.

Friday, March 6, 2015

The Evolution of Altruism

By Oren Harman
The Chronicle of Higher Education
Originally published February 9, 2015

Here is an excerpt:

But if Wilson pulls back from entering the mind, focusing instead on evolutionary dynamics, a cottage industry has grown in recent years around theories purporting to explain how our brains produce empathy, morality, and good will. One recent example comes from Donald W. Pfaff, a professor of neurobiology at Rockefeller University. Stepping, as he says, out of his "comfort zone" studying steroid hormones’ effects on nerve cells in mice, Pfaff argues that recognizing our inborn goodness can add to our capacity for benevolence. "If a person simply realizes that he is wired for good, altruistic behavior and behaves accordingly," he promises, "and if the person toward whom he is about to behave does the same thing, then everything is likely to come out OK." Happily, "science now knows that we are wired to empathize." Really, it isn’t all that complicated.

The entire article is here.