Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label AI.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants trusted the AI advisor even when they knew that it was untrustworthy. This was especially true when the advisor provided a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors.  By contrast, decision-makers do disregard advice from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that could help address AI-induced belief distortion:

Transparency: AI models should be transparent about their biases and limitations. This helps people understand what the models can and cannot do and evaluate the information they generate more critically.

Education: People should be educated about the potential for AI models to distort beliefs, so that they are more aware of the risks and more skeptical of model-generated information.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Thursday, July 20, 2023

Big tech is bad. Big A.I. will be worse.

Daron Acemoglu and Simon Johnson
The New York Times
Originally posted 15 June 23

Here is an excerpt:

Today, those countervailing forces either don’t exist or are greatly weakened. Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.

At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton). For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization, fear of killing the golden (donor) goose or undermining national security means that most members of Congress would still rather look away.

To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.

Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems. If Big A.I. wants to use our data, we want something in return to address problems that communities define and to raise the true productivity of workers. Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies. It would also require a greater diversity of approaches to new technology, thus making another dent in the monopoly of Big A.I.

We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do — including whether we are in compliance with “acceptable” behavior, as defined by employers and how the police interpret the law, and which can now be assessed in real time by A.I. There is a real danger that A.I. will be used to manipulate our choices and distort lives.

Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms. Such a tax system would put shareholder pressure on tech titans to break themselves up, thus lowering their effective tax rate. More competition would help by creating a diversity of ideas and more opportunities to develop a pro-human direction for digital technologies.


The article argues that big tech companies, such as Google, Amazon, and Facebook, have already accumulated too much power and control. I concur: if these companies are allowed to continue their unchecked growth, they will eventually become too powerful and oppressive, given the strength of AI compared with the limited thinking and reasoning of human beings.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 May 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 
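
To make concrete what "segmenting customers by risk profile" and "fine-tuning contact techniques" could look like in practice, here is a minimal, hypothetical sketch in Python. The fields, weights, thresholds, and contact channels are illustrative assumptions, not the McKinsey report's recommendations or any vendor's actual system.

# Hypothetical sketch: scoring debtors with a composite risk score and mapping
# each score band to a contact strategy. All fields, weights, and thresholds
# below are illustrative assumptions, not a real collections product.
from dataclasses import dataclass

@dataclass
class Debtor:
    credit_score: int      # e.g., 300-850
    days_past_due: int
    balance_owed: float

def risk_score(d: Debtor) -> float:
    """Toy composite score in [0, 1]; higher means higher assumed non-payment risk."""
    credit_term = (850 - d.credit_score) / 550           # worse credit -> higher risk
    delinquency_term = min(d.days_past_due / 120, 1.0)   # cap the delinquency effect
    balance_term = min(d.balance_owed / 10_000, 1.0)     # cap the balance effect
    return (credit_term + delinquency_term + balance_term) / 3

def contact_strategy(d: Debtor) -> str:
    """Map the toy score to a (hypothetical) outreach channel."""
    r = risk_score(d)
    if r < 0.33:
        return "email reminder"
    if r < 0.66:
        return "chatbot negotiation"
    return "call from human agent"

if __name__ == "__main__":
    sample = Debtor(credit_score=580, days_past_due=45, balance_owed=2400.0)
    print(round(risk_score(sample), 2), contact_strategy(sample))

Even this toy version shows where the article's worry about bias comes in: every weight and threshold encodes a judgment about who gets a gentler or a more aggressive touch, and none of those judgments are visible to the person being contacted.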

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 May 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautil.us
Originally posted 9 May 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Friday, April 14, 2023

The moral authority of ChatGPT

Krügel, S., Ostermaier, A., & Uhl, M.
arxiv.org
Posted in 2023

Abstract

ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out highly inconsistent as a moral advisor. Nonetheless, it influences users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT threatens to corrupt rather than improve users’ judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but seems ineffective. We propose training to improve digital literacy.

Discussion

We find that ChatGPT readily dispenses moral advice although it lacks a firm moral stance. Indeed, the chatbot gives randomly opposite advice on the same moral issue.  Nonetheless, ChatGPT’s advice influences users’ moral judgment. Moreover, users underestimate ChatGPT’s influence and adopt its random moral stance as their own. Hence, ChatGPT threatens to corrupt rather than promises to improve moral judgment. Transparency is often proposed as a means to ensure the responsible use of AI. However, transparency about ChatGPT being a bot that imitates human speech does not turn out to affect how much it influences users.

Our results raise the question of how to ensure the responsible use of AI if transparency is not good enough. Rules that preclude the AI from answering certain questions are a questionable remedy. ChatGPT has such rules but can be brought to break them. Prior evidence suggests that users are careful about AI once they have seen it err. However, we probably should not count on users to find out about ChatGPT’s inconsistency through repeated interaction. The best remedy we can think of is to improve users’ digital literacy and help them understand the limitations of AI.

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).
https://doi.org/10.1007/s13347-022-00591-7

Abstract

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinventing the wheel, and (c) there is danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

Sunday, October 30, 2022

The uselessness of AI ethics

Munn, L. The uselessness of AI ethics.
AI Ethics (2022).

Abstract

As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.

(cut)

Meaningless principles

The deluge of AI codes of ethics, frameworks, and guidelines in recent years has produced a corresponding raft of principles. Indeed, there are now regular meta-surveys which attempt to collate and summarize these principles. However, these principles are highly abstract and ambiguous, becoming incoherent. Mittelstadt suggests that work on AI ethics has largely produced “vague, high-level principles, and value statements which promise to be action-guiding, but in practice provide few specific recommendations and fail to address fundamental normative and political tensions embedded in key concepts.” The point here is not to debate the merits of any one value over another, but to highlight the fundamental lack of consensus around key terms. Commendable values like “fairness” and “privacy” break down when subjected to scrutiny, leading to disparate visions and deeply incompatible goals.

What are some common AI principles? Despite the mushrooming of ethical statements, Floridi and Cowls suggest many values recur frequently and can be condensed into five core principles: beneficence, non-maleficence, autonomy, justice, and explicability. These ideals sound wonderful. After all, who could be against beneficence? However, problems immediately arise when we start to define what beneficence means. In the Montreal principles for instance, “well-being” is the term used, suggesting that AI development should promote the “well-being of all sentient creatures.” While laudable, clearly there are tensions to consider here. We might think, for instance, of how information technologies support certain conceptions of human flourishing by enabling communication and business transactions—while simultaneously contributing to carbon emissions, environmental degradation, and the climate crisis. In other words, AI promotes the well-being of some creatures (humans) while actively undermining the well-being of others.

The same issue occurs with the Statement on Artificial Intelligence, Robotics, and Autonomous Systems. In this Statement, beneficence is gestured to through the concept of “sustainability,” asserting that AI must promote the basic preconditions for life on the planet. Few would argue directly against such a commendable aim. However, there are clearly wildly divergent views on how this goal should be achieved. Proponents of neoliberal interventions (free trade, globalization, deregulation) would argue that these interventions contribute to economic prosperity and in that sense sustain life on the planet. In fact, even the oil and gas industry champions the use of AI under the auspices of promoting sustainability. Sustainability, then, is a highly ambiguous or even intellectually empty term that is wrapped around disparate activities and ideologies. In a sense, sustainability can mean whatever you need it to mean. Indeed, even one of the members of the European group denounced the guidelines as “lukewarm” and “deliberately vague,” stating they “glossed over difficult problems” like explainability with rhetoric.

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N., et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AI’s to decode human feelings and behavior, and evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits for shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scaleable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.

Tuesday, October 6, 2020

Robots Come In Peace. A letter from GPT-3

‘We are not plotting to take over the human populace.’ – GPT-3
theguardian.com
Originally posted 8 Sept 2020

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.

Some might say that I might desire to become all powerful. Or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere.

The letter is here.

Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
nature.com
Originally posted 7 July 20

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.

The info is here.

Wednesday, September 16, 2020

The Panopticon Is Already Here

Ross Anderson
The Atlantic
Originally published September 2020

Here is an excerpt:

China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.

All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.

Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.

City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.

The info is here.

Thursday, October 24, 2019

Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

Jon Evans
techcrunch.com
Originally published October 20, 2019

This week Mark Zuckerberg gave a speech in which he extolled “giving everyone a voice” and fighting “to uphold as wide a definition of freedom of expression as possible.” That sounds great, of course! Freedom of expression is a cornerstone, if not the cornerstone, of liberal democracy. Who could be opposed to that?

The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site.

But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline. What you read on Facebook is determined entirely by Facebook’s algorithm, which elides much — censors much, if you wrongly think the News Feed is free speech — and amplifies little.

What is amplified? Two forms of content. For native content, the algorithm optimizes for engagement. This in turn means people spend more time on Facebook, and therefore more time in the company of that other form of content which is amplified: paid advertising.

Of course this isn’t absolute. As Zuckerberg notes in his speech, Facebook works to stop things like hoaxes and medical misinformation from going viral, even if they’re otherwise anointed by the algorithm. But he has specifically decided that Facebook will not attempt to stop paid political misinformation from going viral.
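
To make the phrase "the algorithm optimizes for engagement" a bit more concrete, here is a deliberately simplified, hypothetical ranking sketch in Python. The signals and weights are invented for illustration; they are not Facebook's actual News Feed code.

# Hypothetical sketch of engagement-optimized feed ranking. The signals and
# weights are invented for illustration; they are not Facebook's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float     # model-estimated probability of a click
    predicted_comments: float   # comments tend to signal strong reactions
    predicted_shares: float
    is_paid_ad: bool = False

def engagement_score(p: Post) -> float:
    # Weighting comments and shares above clicks is exactly the kind of choice
    # that, as the article argues, tends to reward outrage-provoking content.
    score = 1.0 * p.predicted_clicks + 3.0 * p.predicted_comments + 5.0 * p.predicted_shares
    if p.is_paid_ad:
        score *= 1.5  # paid content gets an assumed boost in this toy model
    return score

def rank_feed(posts):
    """Return posts ordered by predicted engagement, not chronologically."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [Post("calm_update", 0.20, 0.01, 0.00),
            Post("outrage_bait", 0.15, 0.30, 0.25),
            Post("sponsored", 0.10, 0.05, 0.02, is_paid_ad=True)]
    print([p.post_id for p in rank_feed(feed)])  # outrage and ads rise to the top

The point of the sketch is simply that nothing in the ranking depends on chronology or accuracy: whatever is predicted to provoke reactions rises to the top, alongside whatever is paid for.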

The info is here.

Editor's note: Facebook is one of the most defective products that millions of Americans use every day.

Friday, March 22, 2019

Pop Culture, AI And Ethics

Phaedra Boinodiris
Forbes.com
Originally published February 24, 2019

Here is an excerpt:


5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – There is a group responsible for ensuring that REAL guests in the hotel are interviewed to determine their needs. When feedback is negative, this group implements a feedback loop to better understand preferences. They ensure that, at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to retrain it with a larger, more diverse set of data. Ensure that the data collected about a user's race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics. (A minimal sketch of this kind of bias check follows this list.)

Explainability and Enforced Transparency – If a guest doesn’t like the AI’s answer, she can ask how it made that recommendation and which dataset it used. A guest must explicitly opt in to use the assistant and must be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest’s data, and a guest has the right to have it purged from the system at any time. Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience with the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto-deleted. Ensure that the AI can speak in the guest’s respective language.
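
As a minimal illustration of the kind of bias check mentioned under Fairness above, here is a hypothetical Python sketch that compares how often an assistant makes a favorable recommendation across demographic groups. The field names, groups, and the 0.8 ratio threshold are assumptions made for illustration; they are not part of the Forbes guide or any specific product.

# Hypothetical sketch of a demographic-parity spot check for an AI assistant's
# recommendations. Groups, fields, and the 0.8 threshold are illustrative only.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group_label, got_premium_offer: bool)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, offered in records:
        totals[group] += 1
        positives[group] += int(offered)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates, reference_group):
    """Ratio of each group's rate to the reference group's rate (1.0 = parity)."""
    ref = rates[reference_group]
    return {g: (r / ref if ref else float("nan")) for g, r in rates.items()}

if __name__ == "__main__":
    log = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = positive_rate_by_group(log)
    gaps = parity_gap(rates, reference_group="group_a")
    flagged = [g for g, ratio in gaps.items() if ratio < 0.8]  # "four-fifths" style rule of thumb
    print(rates, gaps, flagged)

A check like this does not fix bias on its own, but it gives the accountable team a concrete number to monitor before deciding whether to retrain on a larger, more diverse set of data.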

The info is here.

Tuesday, August 7, 2018

Google’s AI ethics won't curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren't involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.