Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Facial Recognition.

Monday, July 13, 2020

Amazon Halts Police Use Of Its Facial Recognition Technology

Bobby Allyn
www.npr.org
Originally posted June 10, 2020

Amazon announced on Wednesday a one-year moratorium on police use of its facial-recognition technology, yielding to pressure from police-reform advocates and civil rights groups.

It is unclear how many law enforcement agencies in the U.S. deploy Amazon's artificial intelligence tool, but an official with the Washington County Sheriff's Office in Oregon confirmed that it will be suspending its use of Amazon's facial recognition technology.

Researchers have long criticized the technology for producing inaccurate results for people with darker skin. Studies have also shown that the technology can be biased against women and younger people.

IBM said earlier this week that it would quit the facial-recognition business altogether. In a letter to Congress, chief executive Arvind Krishna condemned software that is used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."

And Microsoft President Brad Smith told The Washington Post during a livestream Thursday morning that his company has not been selling its technology to law enforcement. Smith said the company has no plans to do so until there is a national law regulating the technology.

The info is here.

Tuesday, October 22, 2019

AI used for first time in job interviews in UK to find best applicants

Charles Hymas
The Telegraph
Originally posted September 27, 2019

Artificial intelligence (AI) and facial expression technology are being used for the first time in job interviews in the UK to identify the best candidates.

Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop.

The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of facial and linguistic information compiled from previous interviews with people who went on to perform well in the job.

HireVue, the US company that developed the interview technology, claims it enables hiring firms to interview more candidates at the initial stage, rather than relying on CVs alone, and that it provides a more reliable and objective indicator of future performance, free of human bias.

However, academics and campaigners warned that any AI or facial recognition technology would inevitably have in-built biases in its databases that could discriminate against some candidates and exclude talented applicants who might not conform to the norm.

The info is here.

Saturday, July 20, 2019

Microsoft Reconsidering AI Ethics Review Plan

Microsoft executives are reconsidering plans to add AI ethics reviews to audits for products to be released.

Deborah Todd
Forbes.com
Originally posted June 24, 2019


Here is an excerpt:

In March, Microsoft executive vice president of AI and Research Harry Shum told the crowd at MIT Technology Review's EmTech Digital Conference that the company would someday add AI ethics reviews to a standard checklist of audits for products to be released. However, a Microsoft spokesperson said in an interview that the plan was only one of "a number of options being discussed," and its implementation isn't guaranteed. He said efforts are underway for an AI strategy that will influence operations companywide, in addition to the product stage.

“Microsoft has implemented its internal facial recognition principles and is continuing work to operationalize its broader AI principles across the company,” said the spokesperson.

The adjustment comes at a time when executives across Silicon Valley are grappling with the best ways to ensure the implicit biases of human programmers don’t make their way into machine learning and artificial intelligence architecture. It also comes as the industry works to address issues where bias may have already crept in, including facial recognition systems that misidentify individuals with dark skin tones, autonomous vehicles whose detection systems fail to recognize dark-skinned pedestrians more often than any other group, and voice recognition systems that struggle to understand non-native English speakers.

The info is here.

Tuesday, November 14, 2017

Facial recognition may reveal things we’d rather not tell the world. Are we ready?

Amitha Kalaichandran
The Boston Globe
Originally published October 27, 2017

Here is an excerpt:

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily manipulate AI programming to mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition ... which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

The article is here.

Wednesday, February 24, 2016

Ethical aspects of facial recognition systems in public places

Philip Brey
Journal of Information, Communication and Ethics in Society
Vol. 2, Iss. 2, pp. 97-109

This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix Corporation will be analyzed, as well as its use in “Smart” video surveillance (CCTV) systems in city centers and airports. The ethical analysis will be based on a careful analysis of current facial recognition technology, of its use in Smart CCTV systems, and of the arguments used by proponents and opponents of such systems. It will be argued that Smart CCTV, which integrates video surveillance technology and biometric technology, faces ethical problems of error, function creep and privacy. In a concluding section on policy, it will be discussed whether such problems outweigh the security value of Smart CCTV in public places.

The article is here.