Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Facial Recognition Software.

Wednesday, September 16, 2020

The Panopticon Is Already Here

Ross Andersen
The Atlantic
Originally published September 2020

Here is an excerpt:

China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.

All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.

Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.

City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.

The info is here.

Saturday, December 28, 2019

Chinese residents worry about rise of facial recognition

Sam Shead
bbc.com
Originally posted December 5, 2019

Here is an excerpt:

China has more facial recognition cameras than any other country and they are often hard to avoid.

Earlier this week, local reports said that Zhengzhou, the capital of central Henan province, had become the first Chinese city to roll the tech out across all its subway train stations.

Commuters can use the technology to automatically authorise payments instead of scanning a QR code on their phones. For now, it is a voluntary option, said the China Daily.

Earlier this month, university professor Guo Bing announced he was suing Hangzhou Safari Park for enforcing facial recognition.

Prof Guo, a season ticket holder at the park, had used his fingerprint to enter for years, but was no longer able to do so.

The case was covered in the government-owned media, indicating that the Chinese Communist Party is willing to let private use of the technology be discussed and debated by the public.

The info is here.

Friday, December 13, 2019

Conference warned of dangers of facial recognition technology

Colm Keena
The Irish Times
Originally posted November 13, 2019

Here is an excerpt:

Because of new technologies, “we are all monitored and recorded every minute of every day of our lives”, a conference has heard.

The potential for facial recognition technology to be used by oppressive governments and manipulative corporations is such that some observers have called for it to be banned. The suggestion should be taken seriously, Dr Danaher said.

The technology is “like a fingerprint of your face”, is cheap, and “normalises blanket surveillance”. This makes it “perfect” for oppressive governments and for manipulative corporations.

While the EU’s GDPR laws on the use of data apply in Ireland, Dr Danaher said the country should also introduce domestic law “to save us from the depredations of facial recognition technology”.

Alongside facial recognition, Dr Danaher also addressed the conference about “deepfake” technology, which allows for the creation of highly convincing fake video content, and about risk-assessment algorithms, two other technologies that are creating challenges for the law.

In the US, the use of algorithms to predict a person’s likelihood of re-offending has raised significant concerns.

The info is here.

Tuesday, November 26, 2019

Engineers need a required course in ethics

Kush Saxena
qz.com
Originally posted November 8, 2019

Here is an excerpt:

Typically, engineers are trained to be laser-focused on solving problems in the most effective and efficient way. But those solutions often have ripple effects in society and create externalities that must be carefully considered.

Given the pace with which we can deploy technology at scale, the decisions of just a few people can have deep and far-reaching impact.

But in spite of the fact that they build potentially society-altering technologies—such as artificial intelligence—engineers often have no training or exposure to ethics. Many don’t even consider it part of their remit.

But it is. In a world where a few lines of code can impact whether a woman lands a job in tech, or how a criminal is sentenced in court, everyone who touches technology must be qualified to make ethical decisions, however insignificant they may seem at the time.

Engineers need to understand that their work may be used in ways that they never intended and consider the broader impact it can have on the world.

How can tech leaders not only create strong ethical frameworks, but also ensure their employees act with “decency” and abide by the ideals and values they’ve set out? And how can leaders in business, government, and education better equip the tech workforce to consider the broader ethical implications of what they build?

The info is here.

Wednesday, August 21, 2019

Tech Is Already Reading Your Emotions - But Do Algorithms Get It Right?

Jessica Baron
Forbes.com
Originally published July 18, 2019

From measuring shopper satisfaction to detecting signs of depression, companies are employing emotion-sensing facial recognition technology that is based on flawed science, according to a new study.

If the idea of having your face recorded and then analyzed for mood so that someone can intervene in your life sounds creepy, that’s because it is. But that hasn’t stopped companies like Walmart from promising to implement the technology to improve customer satisfaction, despite numerous challenges from ethicists and other consumer advocates.

At the end of the day, this flavor of facial recognition software probably is all about making you safer and happier – it wants to let you know if you’re angry or depressed so you can calm down or get help; it wants to see what kind of mood you’re in when you shop so it can help stores keep you as a customer; it wants to measure your mood while driving, playing video games, or just browsing the Internet to see what goods and services you might like to buy to improve your life.

The problem is – well, aside from the obvious privacy issues and general creep factor – that computers aren’t really that good at judging our moods based on the information they get from facial recognition technology. To top it off, this technology exhibits the same kind of racial bias that other AI programs do, assigning more negative emotions to black faces, for example. That’s probably because it’s based on flawed science.

The info is here.

Saturday, June 15, 2019

Legal questions surround police use of facial recognition tech

Alexander J Martin, Technology Reporter, and Tom Cheshire
news.sky.com
Originally posted August 23, 2017

Here is an excerpt:

He noted that, despite this threat to privacy, "this new database is subject to none of the governance controls or other protections which apply as regards the DNA and fingerprint databases" - and that it "has been put into operation without public or parliamentary consultation or debate."

Similar concerns were raised by Parliament's Science and Technology Committee, which also complained that the Government was running two years late on its planned publication date for the joint forensics and biometrics strategy.

Although a separate forensics strategy has since been published, the biometrics strategy - which will set out how police can use technologies such as facial recognition - has still not been released by the Home Office, and it is now four years overdue.

The committee also noted that facial biometrics were currently not covered by strict rules that govern the police's collection of DNA profiles and fingerprints, and recommended the biometrics commissioner's role be expanded to include them.

The info is here.

Friday, June 7, 2019

Cameras Everywhere: The Ethics Of Eyes In The Sky

Tom Vander Ark
Forbes.com
Originally posted May 8, 2019

Pictures of people's houses can help predict the chances of those people getting into a car accident. The researchers who created the system acknowledged that "modern data collection and computational techniques...allow for unprecedented exploitation of personal data, can outpace development of legislation and raise privacy threats."

Hong Kong researchers created a drone system that can automatically analyze a road surface. This suggests that we’re approaching the era of automated surveillance for civil and military purposes.

In lower Manhattan, police are planning a surveillance center where officers can view feeds from thousands of video cameras around the downtown area.

Microsoft turned down the sale of facial recognition software to California law enforcement, arguing that innocent women and minorities would be disproportionately held for questioning. This suggests that the technology is running ahead of public policy and is not yet ready for equitable use.

And speaking of facial recognition, JetBlue has begun using it in lieu of boarding passes on some flights, much to the chagrin of passengers who wonder when they gave consent for this application and who has access to what biometric data.

The info is here.

Wednesday, January 30, 2019

Experts Reveal Their Tech Ethics Wishes For The New Year

Jessica Baron
Forbes.com
Originally published December 30, 2018

Here is an excerpt:

"Face recognition technology is the technology to keep our eyes on in 2019.

The debates surrounding it have expressed our worst fears about surveillance and injustice, and the tightly coupled links between corporate and state power. They’ve also triggered a battle amongst big tech companies, including Amazon, Microsoft, and Google, over how to define the parameters of corporate social responsibility at a time when external calls for greater accountability from civil rights groups, privacy activists, and scholars, along with internal demands for greater moral leadership from employees and shareholders, express concern that face surveillance governance has the potential to erode the basic fabric of democracy.

With aggressive competition fueling the global artificial intelligence race, it remains to be seen which values will guide innovation."

The info is here.

Saturday, November 24, 2018

Establishing an AI code of ethics will be harder than people think

Karen Hao
www.technologyreview.com
Originally posted October 21, 2018

Over the past six years, the New York City Police Department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.

"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them?

Not only is facial recognition imperfect, but studies have shown that the leading software is less accurate for dark-skinned individuals and women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

The info is here.

Tuesday, November 20, 2018

How tech employees are pushing Silicon Valley to put ethics before profit

Alexia Fernández Campbell
vox.com
Originally published October 18, 2018

The chorus of tech workers demanding American tech companies put ethics before profit is growing louder.

In recent days, employees at Google and Microsoft have been pressuring company executives to drop bids for a $10 billion contract to provide cloud computing services to the Department of Defense.

As part of the contract, known as JEDI, engineers would build cloud storage for military data; there are few public details about what else it would entail. But one thing is clear: The project would involve using artificial intelligence to make the US military a lot deadlier.

“This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said at a March industry event about JEDI.

Thousands of Google employees reportedly pressured the company to drop its bid for the project, and many had said they would refuse to work on it. They pointed out that such work may violate the company’s new ethics policy on the use of artificial intelligence. Google has pledged not to use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” a policy company employees had pushed for.

The info is here.