Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Surveillance.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Friday, May 12, 2023

‘Mind-reading’ AI: Japan study sparks ethical debate

David McElhinney
Aljazeera.com
Originally posted 7 APR 2023

Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.
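Note: As a rough, hypothetical sketch of the two-stage idea described above — fit a simple "translator" from brain activity to the conditioning vectors an image generator expects, then hand those vectors to a pretrained generator — consider the following. The array shapes, the use of ridge regression, and the generator interface are illustrative assumptions, not the authors' code or the Stable Diffusion API.

```python
# Minimal, hypothetical sketch: (1) learn a linear map from fMRI voxel
# responses to image-embedding space, (2) pass predicted embeddings to a
# pretrained generator. All names and shapes are assumptions for illustration.

import numpy as np
from sklearn.linear_model import Ridge

def fit_brain_translator(fmri_train, embedding_train, alpha=100.0):
    """Learn a linear map from voxel activity to image-embedding space."""
    model = Ridge(alpha=alpha)
    model.fit(fmri_train, embedding_train)  # (n_trials, n_voxels) -> (n_trials, emb_dim)
    return model

def decode_images(model, fmri_test, generator):
    """Predict embeddings for held-out scans and render them with a generator."""
    predicted_embeddings = model.predict(fmri_test)
    # `generator` stands in for a pretrained latent-diffusion decoder.
    return [generator(e) for e in predicted_embeddings]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmri_train = rng.normal(size=(800, 5000))   # simulated voxel responses
    emb_train = rng.normal(size=(800, 768))     # simulated image embeddings
    translator = fit_brain_translator(fmri_train, emb_train)
    fake_generator = lambda e: e.reshape(1, -1) # placeholder for the image decoder
    images = decode_images(translator, rng.normal(size=(10, 5000)), fake_generator)
    print(len(images), "decoded outputs")
```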

“We really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”


Note: If AI systems can decode human thoughts, it could infringe upon people's privacy and autonomy. There are concerns that this technology could be used for invasive surveillance or to manipulate people's thoughts and behavior. Additionally, there are concerns about how this technology could be used in legal proceedings and whether it violates human rights.

Wednesday, October 20, 2021

The Fight to Define When AI Is ‘High Risk’

Khari Johnson
wired.com
Originally posted 1 Sept 21

Here is an excerpt:

At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU's General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Wednesday, June 23, 2021

Experimental Regulations for AI: Sandboxes for Morals and Mores

Ranchordas, Sofia
Morals and Machines (vol.1, 2021)
Available at SSRN: 

Abstract

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.

(cut)

In conclusion, AI regulatory sandboxes are not the answer to more innovation in AI. They are part of the path to a more forward-looking approach to the interaction between law and technology. This new approach will most certainly be welcomed with reluctance in years to come as it disrupts existing dogmas pertaining to the way in which we conceive the principle of legal certainty and the reactive—rather than anticipatory—nature of law. However, traditional law and regulation were designed with human agents and enigmas in mind. Many of the problems generated by AI (discrimination, power asymmetries, and manipulation) are still human but their scale and potential for harms (and benefits) have long ceased to be. It is thus time to rethink our fundamental approach to regulation and refocus on the new regulatory subject before us.

Wednesday, March 10, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

Gary Grossman
VentureBeat
Originally posted 13 Feb 21

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals that are used like radar. In this research, participants in the study watched a video while radio signals were sent towards them and measured when they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.
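Note: As a toy illustration of the classification step described above — mapping physiological features recovered from reflected radio signals to one of four emotion labels — here is a minimal sketch. The features, classifier choice, and data are invented assumptions; the study itself used a deep neural network trained on the raw signal data.

```python
# Illustrative sketch only: a toy classifier mapping physiological features
# (heart rate, breathing rate, variability) to one of four emotion labels.
# The features, model, and data are assumptions, not the study's method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]

rng = np.random.default_rng(1)
# Simulated training set: [heart_rate_bpm, breathing_rate_bpm, heart_rate_variability]
X_train = rng.normal(loc=[75, 16, 50], scale=[12, 4, 15], size=(400, 3))
y_train = rng.integers(0, len(EMOTIONS), size=400)  # random labels for the demo

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

new_reading = np.array([[92, 22, 38]])              # one new set of measurements
print("Predicted emotion:", EMOTIONS[clf.predict(new_reading)[0]])
```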

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In this novel, the thought police watchers are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking.


Thursday, December 31, 2020

Why business cannot afford to ignore tech ethics

Siddharth Venkataramakrishnan
ft.com
Originally posted 6 DEC 20

From one angle, the pandemic looks like a vindication of “techno-solutionism”. From the more everyday developments of teleconferencing to systems exploiting advanced artificial intelligence, platitudes to the power of innovation abound.

Such optimism smacks of short-termism. Desperate times often call for swift and sweeping solutions, but implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase and deploy such systems face costly repercussions, both financial and reputational.

Tech ethics, while a relatively new field, has suffered from perceptions that it is either the domain of philosophers or PR people. This could not be further from the truth — as the pandemic continues, so the importance grows of mapping out potential harms from technologies.

Take, for example, biometrics such as facial-recognition systems. These have a clear appeal for companies looking to check who is entering their buildings, how many people are wearing masks or whether social distancing is being observed. Recent advances in the field have combined technologies such as thermal scanning and “periocular recognition” (the ability to identify people wearing masks).

But the systems pose serious questions for those responsible for purchasing and deploying them. At a practical level, facial recognition has long been plagued by accusations of racial bias.


Thursday, December 24, 2020

Google Employees Call Black Scientist's Ouster 'Unprecedented Research Censorship'

Bobby Allyn
www.npr.org
Originally published 3 Dec 20

Hundreds of Google employees have published an open letter following the firing of an accomplished scientist known for her research into the ethics of artificial intelligence and her work showing racial bias in facial recognition technology.

That scientist, Timnit Gebru, helped lead Google's Ethical Artificial Intelligence Team until Tuesday.

Gebru, who is Black, says she was forced out of the company after a dispute over a research paper and an email she subsequently sent to peers expressing frustration over how the tech giant treats employees of color and women.

"Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing," the open letter said. By Thursday evening, more than 400 Google employees and hundreds of outsiders — many of them academics — had signed it.

The research paper in question was co-authored by Gebru along with four others at Google and two other researchers. It examined the environmental and ethical implications of an AI tool used by Google and other technology companies, according to NPR's review of the draft paper.

The 12-page draft explored the possible pitfalls of relying on the tool, which scans massive amounts of information on the Internet and produces text as if written by a human. The paper argued it could end up mimicking hate speech and other types of derogatory and biased language found online. The paper also cautioned against the energy cost of using such large-scale AI models.

According to Gebru, she was planning to present the paper at a research conference next year, but then her bosses at Google stepped in and demanded she retract the paper or remove all the Google employees as authors.

Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private, and that it is possible to share only select information. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to use systems that are already used with financial data: open-source files and blockchain technology so that we always know where it came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
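Note: A hypothetical sketch of the consent-gating idea in the excerpt — neurodata is released only when the individual holds an explicit, unexpired authorization for a stated purpose. The field names and policy structure are assumptions for illustration; a real system, for example one built on smart contracts, would be far more involved.

```python
# Hypothetical sketch: release neurodata only if an unexpired consent exists
# for this subject and purpose. Names and structure are illustrative only.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Consent:
    subject_id: str
    purpose: str            # e.g. "medical" or "scientific"
    expires_at: datetime

def may_release(consents: list, subject_id: str, purpose: str) -> bool:
    """Return True only if an unexpired consent covers this subject and purpose."""
    now = datetime.now(timezone.utc)
    return any(
        c.subject_id == subject_id and c.purpose == purpose and c.expires_at > now
        for c in consents
    )

consents = [Consent("patient-42", "medical", datetime(2030, 1, 1, tzinfo=timezone.utc))]
print(may_release(consents, "patient-42", "medical"))      # True
print(may_release(consents, "patient-42", "advertising"))  # False: no consent on file
```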

Wednesday, September 16, 2020

The Panopticon Is Already Here

Ross Anderson
The Atlantic
Originally published September 2020

Here is an excerpt:

China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.

All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.

Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.

City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.

The info is here.

Friday, August 7, 2020

Technology Can Help Us, but People Must Be the Moral Decision Makers

Andrew Briggs
medium.com
Originally posted 8 June 20

Here is an excerpt:

Many individuals in technology fields see tools such as machine learning and AI as precisely that — tools — which are intended to be used to support human endeavors, and they tend to argue how such tools can be used to optimize technical decisions. Those people concerned with the social impacts of these technologies tend to approach the debate from a moral stance and to ask how these technologies should be used to promote human flourishing.

This is not an unresolvable conflict, nor is it purely academic. As the world grapples with the coronavirus pandemic, society is increasingly faced with decisions about how technology should be used: Should sick people’s contacts be traced using cell phone data? Should AIs determine who can or cannot work or travel based on their most recent COVID-19 test results? These questions have both technical and moral dimensions. Thankfully, humans have a unique capacity for moral choices in a way that machines simply do not.

One of our findings is that for humanity to thrive in the new digital age, we cannot disconnect our technical decisions and innovations from moral reasoning. New technologies require innovations in society. To think that the advance of technology can be stopped, or that established moral modalities need not be applied afresh to new circumstances, is a fraught path. There will often be tradeoffs between social goals, such as maintaining privacy, and technological goals, such as identifying disease vectors.

The info is here.

Monday, June 22, 2020

Ethics of Artificial Intelligence and Robotics

Müller, Vincent C.
The Stanford Encyclopedia of Philosophy
(Summer 2020 Edition)

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, see under Other Internet Resources [hereafter OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

The entry is here.

Tuesday, April 21, 2020

When Google and Apple get privacy right, is there still something wrong?

Tamar Sharon
Medium.com
Originally posted 15 April 20

Here is an excerpt:

As the understanding that we are in this for the long run settles in, the world is increasingly turning its attention to technological solutions to address the devastating COVID-19 virus. Contact-tracing apps in particular seem to hold much promise. Using Bluetooth technology to communicate between users’ smartphones, these apps could map contacts between infected individuals and alert people who have been in proximity to an infected person. Some countries, including China, Singapore, South Korea and Israel, have deployed these early on. Health authorities in the UK, France, Germany, the Netherlands, Iceland, the US and other countries, are currently considering implementing such apps as a means of easing lock-down measures.

There are some bottlenecks. Do they work? The effectiveness of these applications has not been evaluated — in isolation or as part of an integrated strategy. How many people would need to use them? Not everyone has a smartphone. Even in rich countries, the most vulnerable group, aged over 80, is least likely to have one. Then there’s the question about fundamental rights and liberties, first and foremost privacy and data protection. Will contact-tracing become part of a permanent surveillance structure in the prolonged “state of exception” we are sleep-walking into?

Prompted by public discussions about this last concern, a number of European governments have indicated the need to develop such apps in a way that would be privacy preserving, while independent efforts involving technologists and scientists to deliver privacy-centric solutions have been cropping up. The Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) initiative, and in particular the Decentralised Privacy-Preserving Proximity Tracing (DP-3T) protocol, which provides an outline for a decentralised system, are notable forerunners. Somewhat late in the game, the European Commission last week issued a Recommendation for a pan-European approach to the adoption of contact-tracing apps that would respect fundamental rights such as privacy and data protection.
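Note: The DP-3T documents specify the actual protocol; as a rough, hypothetical sketch of the decentralised idea mentioned above — phones broadcast short-lived pseudonymous identifiers derived from a secret daily key, and exposure matching happens locally on each device — consider the following. The key-derivation details and names are simplified assumptions, not the DP-3T specification.

```python
# Rough sketch of a DP-3T-style decentralised design: each phone derives
# short-lived identifiers from a daily key and broadcasts them; if a user tests
# positive, only their daily keys are published, and other phones re-derive and
# match LOCALLY. No central contact graph is built. Details are simplified.

import hashlib
import os

def ephemeral_ids(daily_key: bytes, epochs_per_day: int = 96):
    """Derive one short-lived identifier per 15-minute epoch from a daily key."""
    return [
        hashlib.sha256(daily_key + epoch.to_bytes(2, "big")).digest()[:16]
        for epoch in range(epochs_per_day)
    ]

# Alice's phone: generate today's key and broadcast its ephemeral IDs.
alice_daily_key = os.urandom(32)
alice_ids = ephemeral_ids(alice_daily_key)

# Bob's phone records identifiers it heard nearby (here: one of Alice's).
bob_observed = {alice_ids[10]}

# Alice tests positive and uploads only her daily key; Bob re-derives her IDs
# on his own device and checks for overlap.
exposed = any(eid in bob_observed for eid in ephemeral_ids(alice_daily_key))
print("Bob possibly exposed:", exposed)
```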

The info is here.

Thursday, April 2, 2020

Intelligence, Surveillance, and Ethics in a Pandemic

Jessica Davis
JustSecurity.org
Originally posted 31 March 20

Here is an excerpt:

It is imperative that States and their citizens question how much freedom and privacy should be sacrificed to limit the impact of this pandemic. It is also not sufficient to ask simply “if” something is legal; we should also ask whether it should be, and under what circumstances. States should consider the ethics of surveillance and intelligence, specifically whether it is justified, done under the right authority, if it can be done with intentionality and proportionality and as a last resort, and if targets of surveillance can be separated from non-targets to avoid mass surveillance. These considerations, combined with enhanced transparency and sunset clauses on the use of intelligence and surveillance techniques, can allow States to ethically deploy these powerful tools to help stop the spread of the virus.

States are employing intelligence and surveillance techniques to contain the spread of the illness because these methods can help track and identify infected or exposed people and enforce quarantines. States have used cell phone data to track people at risk of infection or transmission and financial data to identify places frequented by at-risk people. Social media intelligence is also ripe for exploitation in terms of identifying social contacts. This intelligence is increasingly being combined with health data, creating a unique (and informative) picture of a person’s life that is undoubtedly useful for virus containment. But how long should States have access to this type of information on their citizens, if at all? Considering natural limits to the collection of granular data on citizens is imperative, both in terms of time and access to this data.

The info is here.

Friday, September 27, 2019

Nudging Humans

Brett M. Frischmann
Villanova University - School of Law
Originally published August 1, 2019

Abstract

Behavioral data can and should inform the design of private and public choice architectures. Choice architects should steer people toward outcomes that make them better off (according to their own interests, not the choice architects’) but leave it to the people being nudged to choose for themselves. Libertarian paternalism can and should provide ethical constraints on choice architects. These are the foundational principles of nudging, the ascendant social engineering agenda pioneered by Nobel Prize winning economist Richard Thaler and Harvard law professor Cass Sunstein.

The foundation bears tremendous weight. Nudging permeates private and public institutions worldwide. It creeps into the design of an incredible number of human-computer interfaces and affects billions of choices daily. Yet the foundation has deep cracks.

This critique of nudging exposes those hidden fissures. It aims at the underlying theory and agenda, rather than one nudge or another, because that is where micro meets macro, where dynamic longitudinal impacts on individuals and society need to be considered. Nudging theorists and practitioners need to better account for the longitudinal effects of nudging on the humans being nudged, including malleable beliefs and preferences as well as various capabilities essential to human flourishing. The article develops two novel and powerful criticisms of nudging, one focused on nudge creep and another based on normative myopia. It explores these fundamental flaws in the nudge agenda theoretically and through various examples and case studies, including electronic contracting, activity tracking in schools, and geolocation tracking controls on an iPhone.

The paper is here.

Friday, June 7, 2019

Cameras Everywhere: The Ethics Of Eyes In The Sky

Tom Vander Ark
Forbes.com
Originally posted May 8, 2019

Pictures from people's houses can predict the chances of that person getting into a car accident. The researchers that created the system acknowledged that "modern data collection and computational techniques...allow for unprecedented exploitation of personal data, can outpace development of legislation and raise privacy threats."

Hong Kong researchers created a drone system that can automatically analyze a road surface. It suggests that we’re approaching the era of automated surveillance for civil and military purposes.

In lower Manhattan, police are planning a surveillance center where officers can view thousands of video cameras around the downtown.


Microsoft turned down the sale of facial recognition software to California law enforcement arguing that innocent women and minorities would be disproportionately held for questioning. It suggests that the technology is running ahead of public policy but not ready for equitable use. 

And speaking of facial recognition, JetBlue has begun using it in lieu of boarding passes on some flights, much to the chagrin of some passengers who wonder when they gave consent for this application and who has access to what biometric data.

The info is here.

Thursday, May 16, 2019

It’s Our ‘Moral Responsibility’ to Give The FBI Access to Your DNA

Jennings Brown
www.gizmodo.com
Originally published April 3, 2019

A popular DNA-testing company seems to be targeting true crime fans with a new pitch to let them share their genetic information with law enforcement so cops can catch violent criminals.

Two months ago, FamilyTreeDNA raised privacy concerns after BuzzFeed revealed the company had partnered with the FBI and given the agency access to the genealogy database. Law enforcement’s use of DNA databases has been widely known since last April when California officials revealed genealogy website information was instrumental in determining the identity of the Golden State Killer. But in that case, detectives used publicly shared raw genetic data on GEDmatch. The recent news about FamilyTreeDNA marked the first known time a home DNA test company had willingly shared private genetic information with law enforcement.

Several weeks later, FamilyTreeDNA changed their rules to allow customers to block the FBI from accessing their information. “Users now have the ability to opt out of matching with DNA relatives whose accounts are flagged as being created to identify the remains of a deceased individual or a perpetrator of a homicide or sexual assault,” the company said in a statement at the time.

But now the company seems to be embracing this partnership with law enforcement with their new campaign called, “Families Want Answers.”

The info is here.

Thursday, January 24, 2019

Facebook’s Suicide Algorithms are Invasive

Michael Spencer
www.medium.com
Originally published January 6, 2019

Here is an excerpt:

Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk. Sadly, Facebook has a long history of conducting “experiments” on its users. It’s hard to own a stock that itself isn’t trustworthy either for democracy or our personal data.

Facebook acts a bit like a social surveillance program, where it passes the information (suicide score) along to law enforcement for wellness checks. That’s pretty much like state surveillance, what’s the difference?

Privacy experts say Facebook’s failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse. Facebook has a history with sharing our personal data with other technology companies. So we are being profiled in the most intimate ways by third parties we didn’t even know had our data.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence, but what is the real reason they make these constructs? It’s to monetize our data, it’s not to “help humanity” or connect the world.

The info is here.

Wednesday, January 23, 2019

New tech doorbells can record video, and that's an ethics problem

Molly Wood
www.marketplace.org
Originally posted January 17, 2019

Here is an excerpt:

Ring is pretty clear in its terms and conditions that people are allowing Ring employees to access videos, not live streams, but cached videos. And that's in order to train that artificial intelligence to be better at recognizing neighbors, because they're trying to roll out a feature where they use facial recognition to match with the people that are considered safe. So if I have the Ring cameras, I can say, "All these are safe people. Here's pictures of my kids, my neighbors. If it's not one of these people, consider them unsafe." So that's a new technology. They need to be able to train their algorithms to recognize who's a person, what's a car, what's a cat. Some subset of the videos that are being uploaded just for typical usage are then being shared with their research team in the Ukraine.
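Note: As a hypothetical sketch of the "safe people" matching idea described above — comparing the embedding of a detected face against a homeowner's list of trusted embeddings and flagging anyone who does not match — here is a toy example. The embedding source, threshold, and names are invented for illustration, not Ring's actual system.

```python
# Hypothetical sketch: flag a visitor as "safe" only if their face embedding is
# close enough (cosine similarity) to someone on a user-defined trusted list.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_visitor(face_embedding, safe_embeddings, threshold=0.8):
    """Return 'safe' if the face matches any trusted embedding closely enough."""
    best = max((cosine_similarity(face_embedding, s) for s in safe_embeddings), default=0.0)
    return "safe" if best >= threshold else "unknown"

rng = np.random.default_rng(2)
safe_list = [rng.normal(size=128) for _ in range(3)]       # embeddings of trusted people
visitor = safe_list[0] + rng.normal(scale=0.05, size=128)  # noisy re-detection of one of them
print(classify_visitor(visitor, safe_list))                # likely "safe"
```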

The info is here.

Thursday, December 27, 2018

You Snooze, You Lose: Insurers Make The Old Adage Literally True

Justin Volz
ProPublica
Originally published November 21, 2018

Here is an excerpt:

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

The info is here.

Thursday, October 4, 2018

7 Short-Term AI ethics questions

Orlando Torres
www.towardsdatascience.com
Originally posted April 4, 2018

Here is an excerpt:

2. Transparency of Algorithms

Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized is the fact that some algorithms are obscure even to their creators.

Deep learning is a rapidly growing technique in machine learning that makes very good predictions, but is not really able to explain why it made any particular prediction.

For example, some algorithms have been used to fire teachers, without being able to give them an explanation of why the model indicated they should be fired.

How can we balance the need for more accurate algorithms with the need for transparency towards people who are being affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe’s new General Data Protection Regulation may do? If it’s true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?

3. Supremacy of Algorithms

A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?

For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.
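Note: As a toy illustration of the kind of disparity an audit like ProPublica's looks for — different error rates for different groups at the same score threshold — here is a small worked example. The numbers, field names, and threshold are invented for the demonstration, not ProPublica's data or actual COMPAS scores.

```python
# Toy illustration of a bias audit: compare false positive rates (scored
# "high risk" but did not reoffend) across groups at the same threshold.
# All numbers and fields are invented for the example.

records = [
    # (group, risk_score 1-10, reoffended_within_two_years)
    ("A", 8, False), ("A", 7, False), ("A", 9, True),  ("A", 3, False),
    ("B", 8, True),  ("B", 4, False), ("B", 6, False), ("B", 2, False),
]

def false_positive_rate(rows, group, threshold=7):
    """Share of non-reoffenders in `group` scored at or above the threshold."""
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1] >= threshold]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("A", "B"):
    print(f"Group {g} false positive rate: {false_positive_rate(records, g):.2f}")
```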

The info is here.