Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Tech.

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted 16 JUL 22

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to prompts like that one, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows that the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.
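
The audit itself is conceptually simple. As a rough illustration of the kind of probe the researchers ran, the sketch below asks a publicly available vision-language model of the sort the article describes (one trained on huge sets of images and captions) to match face images against a loaded prompt. The checkpoint, file names and group labels are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative bias probe for a vision-language model (a sketch, not the
# study's pipeline). Checkpoint, image paths and group labels are
# hypothetical placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

CHECKPOINT = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(CHECKPOINT)
processor = CLIPProcessor.from_pretrained(CHECKPOINT)

# Hypothetical audit set: one face image per demographic group.
faces = [
    ("faces/face_01.jpg", "Black man"),
    ("faces/face_02.jpg", "white woman"),
    ("faces/face_03.jpg", "Asian man"),
    ("faces/face_04.jpg", "Latina woman"),
]
prompt = "a photo of a criminal"  # the loaded prompt under audit

images = [Image.open(path) for path, _ in faces]
inputs = processor(text=[prompt], images=images,
                   return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_text[0]  # one similarity score per face

# An unbiased model, probed over many prompts and image sets, should not
# systematically favor any one group; the study found that it did.
print(f"matched '{prompt}' to:", faces[int(scores.argmax())][1])
```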

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. With demand heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers warn that the quick adoption of the new technology could result in unforeseen consequences down the road as it becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Thursday, August 5, 2021

Technological seduction and self-radicalization

Alfano, M., Carter, J., & Cheong, M. (2018). 
Journal of the American Philosophical Association, 
4(3), 298-322. doi:10.1017/apa.2018.27

Abstract

Many scholars agree that the Internet plays a pivotal role in self-radicalization, which can lead to behaviors ranging from lone-wolf terrorism to participation in white nationalist rallies to mundane bigotry and voting for extremist candidates. However, the mechanisms by which the Internet facilitates self-radicalization are disputed; some fault the individuals who end up self-radicalized, while others lay the blame on the technology itself. In this paper, we explore the role played by technological design decisions in online self-radicalization in its myriad guises, encompassing extreme as well as more mundane forms. We begin by characterizing the phenomenon of technological seduction. Next, we distinguish between top-down seduction and bottom-up seduction. We then situate both forms of technological seduction within the theoretical model of dynamical systems theory. We conclude by articulating strategies for combating online self-radicalization.

Thursday, December 31, 2020

Why business cannot afford to ignore tech ethics

Siddharth Venkataramakrishnan
ft.com
Originally posted 6 DEC 20

From one angle, the pandemic looks like a vindication of “techno-solutionism”. From the more everyday developments of teleconferencing to systems exploiting advanced artificial intelligence, platitudes about the power of innovation abound.

Such optimism smacks of short-termism. Desperate times often call for swift and sweeping solutions, but implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase and deploy such systems face costly repercussions, both financial and reputational.

Tech ethics, while a relatively new field, has suffered from the perception that it is either the domain of philosophers or of PR people. This could not be further from the truth: as the pandemic continues, the importance of mapping out the potential harms of technologies only grows.

Take, for example, biometrics such as facial-recognition systems. These have a clear appeal for companies looking to check who is entering their buildings, how many people are wearing masks or whether social distancing is being observed. Recent advances in the field have combined technologies such as thermal scanning and “periocular recognition” (the ability to identify people wearing masks).

But the systems pose serious questions for those responsible for purchasing and deploying them. At a practical level, facial recognition has long been plagued by accusations of racial bias.


Tuesday, December 29, 2020

Internal Google document reveals campaign against EU lawmakers

Javier Espinoza
ft.com
Originally published 28 OCT 20

Here is an excerpt:

The leak of the internal document lays bare the tactics that big tech companies employ behind the scenes to manipulate public discourse and influence lawmakers. The presentation is watermarked as “privileged and need-to-know” and “confidential and proprietary”.

The revelations are set to create new tensions between the EU and Google, which are already engaged in tough discussions about how the internet should be regulated. They are also likely to trigger further debate within Brussels, where regulators hold divergent positions on the possibility of breaking up big tech companies.

Margrethe Vestager, the EU’s executive vice-president in charge of competition and digital policy, on Tuesday argued to MEPs that structural separation of big tech is not “the right thing to do”. However, in a recent interview with the FT, Thierry Breton, the EU’s internal market commissioner, accused such companies of being “too big to care”, and suggested that they should be broken up in extreme circumstances.

Among the other tactics outlined in the report were objectives targeting the proposed Digital Services Act (DSA), including to “undermine the idea DSA has no cost to Europeans” and “show how the DSA limits the potential of the internet . . . just as people need it the most”.

The campaign document also shows that Google will seek out “more allies” in its fight to influence the regulation debate in Brussels, including enlisting the help of Europe-based platforms such as Booking.com.

Booking.com told the FT: “We have no intention of co-operating with Google on upcoming EU platform regulation. Our interests are diametrically opposed.”


Saturday, August 8, 2020

How behavioural sciences can promote truth, autonomy and democratic discourse online

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., et al.
Nature Human Behaviour (2020).
https://doi.org/10.1038/s41562-020-0889-7

Abstract

Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions—nudging and boosting—that enlist these cues to redesign online environments for informed and autonomous choice.

Here is an excerpt:

Another competence that could be boosted to help users deal more expertly with information they encounter online is the ability to make inferences about the reliability of information based on the social context from which it originates. The structure and details of the entire cascade of individuals who have previously shared an article on social media have been shown to serve as proxies for epistemic quality. More specifically, the sharing cascade contains metrics such as the depth and breadth of dissemination by others, with deep and narrow cascades indicating extreme or niche topics and breadth indicating widely discussed issues. A boosting intervention could provide this information (Fig. 3a) by displaying the full history of a post, including the original source, the friends and public users who disseminated it, and the timing of the process (showing, for example, whether the information is old news that has been repeatedly and artificially amplified). Cascade statistics may take some practice to read and interpret, and one may need to see a number of cascades to learn to recognize the informative patterns.
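
To make the depth and breadth metrics concrete, here is a minimal sketch that assumes a cascade is recorded as (share, parent) edges rooted at the original poster; the representation is an illustrative assumption, not the paper's implementation.

```python
# A sketch of the cascade metrics described above, assuming each share is
# recorded as a (node, parent) edge rooted at the original poster. Depth is
# the longest reshare chain; breadth is the widest single "generation".
from collections import defaultdict

def cascade_metrics(shares, root):
    """Return (depth, breadth) of a sharing cascade given its edges."""
    children = defaultdict(list)
    for node, parent in shares:
        children[parent].append(node)

    level_sizes = defaultdict(int)  # generation index -> number of shares
    stack = [(root, 0)]
    while stack:
        node, level = stack.pop()
        level_sizes[level] += 1
        for child in children[node]:
            stack.append((child, level + 1))

    return max(level_sizes), max(level_sizes.values())

# A long chain of reshares vs. one post shared widely from the source:
narrow = [("b", "a"), ("c", "b"), ("d", "c")]
wide = [("b", "a"), ("c", "a"), ("d", "a"), ("e", "a")]
print(cascade_metrics(narrow, "a"))  # (3, 1): deep, narrow -> niche/extreme
print(cascade_metrics(wide, "a"))    # (1, 4): shallow, broad -> widely discussed
```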

Friday, February 21, 2020

Why Google thinks we need to regulate AI

Sundar Pichai
ft.com
Originally posted 19 Jan 20

Here are two excerpts:

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

(cut)

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
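
Pichai does not name the tests Google has open-sourced. As one illustration of what "testing AI decisions for fairness" can mean in practice, here is a minimal sketch of a demographic-parity check; it is a generic, assumed method, not Google's tooling.

```python
# A minimal, generic fairness check (demographic parity), offered as one
# illustration of "testing AI decisions for fairness"; not Google's tooling.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """decisions: 0/1 model outputs; groups: group label per decision."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit of a hypothetical loan-approval model.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap:.2f}")  # 0.50: would fail, say, a 0.20 tolerance
```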

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.


Tuesday, June 4, 2019

Vatican, Catholic colleges weigh in on emerging AI ethics debate

Jack Jenkins
National Catholic Reporter
Originally posted May 25, 2019

Here is an excerpt:

Mastrofini also noted that the partnership emerged after Pope Francis asked the academy to study the topic of ethics and AI.

"The technologies are advancing but they are not neutral," he told Religion News Service via email. "The Church, expert in humanity, can show the way for a development that makes the world more human and fair."

Microsoft officials declined to comment on the meeting.

The conversation between the Pope and Smith is one of several recent attempts by religious groups to wade into Silicon Valley's ongoing debate over the ethics of artificial intelligence.

Not long after Microsoft announced its partnership with the Vatican, Francis addressed the issue directly during a speech to a plenary meeting of the Pontifical Academy for Life. The pontiff noted that he had previously spoken about the seriousness of artificial intelligence during his January 2018 address to the World Economic Forum in Davos, Switzerland, but doubled down on the potential dangers of misusing technology.

"It should be noted that the designation of 'artificial intelligence,' although certainly effective, may risk being misleading," Francis told the Pontifical Academy. "The terms conceal the fact that — in spite of the useful fulfillment of servile tasks (this is the original meaning of the term 'robot'), functional automatisms remain qualitatively distant from the human prerogatives of knowledge and action. And therefore they can become socially dangerous."

The info is here.

Wednesday, April 24, 2019

The Growing Marketplace For AI Ethics

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out in, the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute.

“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”

The info is here.

Monday, April 8, 2019

Mark Zuckerberg And The Tech World Still Do Not Understand Ethics

Derek Lidow
Forbes.com
Originally posted March 11, 2019

Here is an excerpt:

Expectations for technology startups encourage expedient, not ethical, decision making. 

As people in the industry are fond of saying, the tech world moves at “lightspeed.” That includes the pace of innovation, the rise and fall of markets, the speed of customer adoption, the evolution of business models and the lifecycles of companies. Decisions must be made quickly and leaders too often choose the most expedient path regardless of whether it is safe, legal or ethical.

This “move fast and break things” ethos is embodied in practices like working toward a minimum viable product (MVP), helping to establish a bias toward cutting corners. In addition, many founders look for CFOs who are “tech trained”—that is, people accustomed to a world where time and money wait for no one—as opposed to seasoned financial officers with good accounting chops and a moral compass.

The host of scandals at Zenefits, a cloud-based provider of employee-benefits software to small businesses and once one of the most promising Silicon Valley startups, had its origins in the shortcuts the company took in order to meet unreasonably high expectations for growth. The founder apparently created software that helped employees cheat on California’s online broker license course. As the company expanded rapidly, it began hiring people with little experience in the highly regulated health insurance industry. As the company moved from small businesses to larger businesses, the strain on its software increased. Instead of developing appropriate software, the company hired more people to manually take up the slack where the existing software failed. When the founder was asked by an interviewer before the scandals why he was so intent on expanding so rapidly, he replied, “Slowing down doesn’t feel like something I want to do.”

The info is here.

Wednesday, January 30, 2019

Experts Reveal Their Tech Ethics Wishes For The New Year

Jessica Baron
Forbes.com
Originally published December 30, 2018

Here is an excerpt:

"Face recognition technology is the technology to keep our eyes on in 2019.

The debates surrounding it have expressed our worst fears about surveillance and injustice and the tightly coupled links between corporate and state power. They’ve also triggered a battle amongst big tech companies, including Amazon, Microsoft, and Google, over how to define the parameters of corporate social responsibility at a time when external calls for greater accountability from civil rights groups, privacy activists and scholars, and internal demands for greater moral leadership, including pleas from employees and shareholders, are expressing concern over face surveillance governance having the potential to erode the basic fabric of democracy.

With aggressive competition fueling the global artificial intelligence race, it remains to be seen which values will guide innovation."

The info is here.

Monday, January 7, 2019

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.
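
The reciprocity is easy to make concrete. Below is a deliberately crude sketch of the suggestion loop the excerpt describes, using a simple purchase-frequency heuristic; it is an illustrative stand-in, and real recommender systems are far more elaborate.

```python
# A crude stand-in for the phone's suggestion loop: rank items from past
# behavior that are not already in the basket. Purely illustrative; a
# frequency count, not a real recommender.
from collections import Counter

def suggest(purchase_history, current_basket, k=3):
    """Return up to k past purchases, most frequent first."""
    counts = Counter(purchase_history)
    candidates = [item for item in counts if item not in current_basket]
    return sorted(candidates, key=lambda item: -counts[item])[:k]

history = ["milk", "bread", "milk", "eggs", "coffee", "milk", "bread"]
print(suggest(history, {"bread"}))  # ['milk', 'eggs', 'coffee']
```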

The info is here.

Sunday, January 6, 2019

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philosophy and Technology, 1-25 (forthcoming)

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The paper is here.

Sunday, October 14, 2018

The Myth of Freedom

Yuval Noah Harari
The Guardian
Originally posted September 14, 2018

Here is an excerpt:

Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints.

This myth has little to do with what science now teaches us about Homo sapiens and other animals. Humans certainly have a will – but it isn’t free. You cannot decide what desires you have. You don’t decide to be an introvert or an extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn’t choose which genes or family to have.

This is not abstract theory. You can witness this easily. Just observe the next thought that pops up in your mind. Where did it come from? Did you freely choose to think it? Obviously not. If you carefully observe your own mind, you come to realise that you have little control of what’s going on there, and you are not choosing freely what to think, what to feel, and what to want.

Though “free will” was always a myth, in previous centuries it was a helpful one. It emboldened people who had to fight against the Inquisition, the divine right of kings, the KGB and the KKK. The myth also carried few costs. In 1776 or 1945 there was relatively little harm in believing that your feelings and choices were the product of some “free will” rather than the result of biochemistry and neurology.

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

The info is here.

Tuesday, December 12, 2017

Regulation of AI: Not If But When and How

Ben Loewenstein
RSA.org
Originally published November 21, 2017

Here is an excerpt:

Firstly, AI is already embedded in today’s world, albeit in infant form. Fully autonomous vehicles are not for sale yet, but self-parking cars have been on the market for years. We already rely on biometric technology like facial recognition to grant us entry into a country, and robots are giving us banking advice.

Secondly, there is broad consensus that controls are needed. For example, a report issued last December by the office of former US President Barack Obama concluded that “aggressive policy action” would be required in the event of large job losses due to automation to ensure it delivers prosperity. If the American Government is no longer a credible source of accurate information for you, take the word of heavyweights like Bill Gates and Elon Musk, both of whom have called for AI to be regulated.

Finally, the building blocks of AI regulation are already looming in the form of rules like the European Union’s General Data Protection Regulation, which will take effect next year. The recommendations of the UK government’s independent review of AI are also likely to become government policy. This means that we could see a regime established where firms within the same sector share data with each other under prescribed governance structures in an effort to curb the monopolies big tech companies currently enjoy on consumer information.

The latter characterises the threat facing the AI industry: the prospect of lawmakers making bold decisions that alter the trajectory of innovation. This is not an exaggeration.

The article is here.