Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Automation.

Sunday, May 21, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., & Rüther, M.
AI Ethics (2023).
https://doi.org/10.1007/s43681-023-00273-w

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative, which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we consider the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

From Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea, namely, that we do not have to alter our human lifeform in an extensive way and can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, including the following three, which we cannot deal with extensively but at least want to comment on briefly. First, we assumed that certain professional fields, especially in the meaning-conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently no genuine empathy. This assumption needs to be elaborated further, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to automation by superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life, according to which meaning-conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argumentation would have to work out whether this triad is really exhaustive and, if so, which more general underlying principle accounts for it. Third, the receptive side of finding meaning in the realm of the true and the beautiful was emphasized and opposed to the active striving towards meaningful aims. Here, we have to clarify more precisely what axiological status reception has in contrast to active production: whether it confers meaning to a comparable extent or whether it is actually a less meaningful form. This is particularly important in order to better assess the appeal of our proposal, which depends heavily on the attractiveness of the vita contemplativa.

Sunday, April 24, 2022

Individual vulnerability to industrial robot adoption increases support for the radical right

Anelli, M., Colantone, I., & Stanig, P. 
(2021). PNAS, 118(47), e2111611118.
https://doi.org/10.1073/pnas.2111611118

Significance

The success of radical-right parties across western Europe has generated much concern. These parties propose making borders less permeable, oppose ethnic diversity, and often express impatience with the institutions of representative democracy. Part of their recent success has been shown to be driven by structural economic changes, such as globalization, which trigger distributional consequences that, in turn, translate into voting behavior. We ask what the political consequences are of a different structural change: the robotization of manufacturing. We propose a measure of individual exposure to automation and show that individuals more vulnerable to the negative consequences of automation tend to display more support for the radical right. Automation exposure raises support for the radical left too, but to a significantly lower extent.

Abstract

The increasing success of populist and radical-right parties is one of the most remarkable developments in the politics of advanced democracies. We investigate the impact of industrial robot adoption on individual voting behavior in 13 western European countries between 1999 and 2015. We argue for the importance of the distributional consequences triggered by automation, which generates winners and losers also within a given geographic area. Analysis that exploits only cross-regional variation in the incidence of robot adoption might miss important facets of this process. In fact, patterns in individual indicators of economic distress and political dissatisfaction are masked in regional-level analysis, but can be clearly detected by exploiting individual-level variation. We argue that traditional measures of individual exposure to automation based on the current occupation of respondents are potentially contaminated by the consequences of automation itself, due to direct and indirect occupational displacement. We introduce a measure of individual exposure to automation that combines three elements: 1) estimates of occupational probabilities based on employment patterns prevailing in the preautomation historical labor market, 2) occupation-specific automatability scores, and 3) the pace of robot adoption in a given country and year. We find that individuals more exposed to automation tend to display higher support for the radical right. This result is robust to controlling for several other drivers of radical-right support identified by earlier literature: nativism, status threat, cultural traditionalism, and globalization. We also find evidence of significant interplay between automation and these other drivers.
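The proposed exposure measure is essentially multiplicative: pre-automation occupational probabilities, weighted by occupation-specific automatability scores, scaled by the country-year pace of robot adoption. A minimal sketch of how such a score might be computed (the occupation names, scores, and adoption figure below are invented for illustration; this is not the authors' data or code):

```python
# Hypothetical occupational probabilities for one respondent,
# estimated from the pre-automation historical labor market.
occupation_probs = {"machine_operator": 0.5, "clerk": 0.3, "teacher": 0.2}

# Hypothetical occupation-specific automatability scores (0 to 1).
automatability = {"machine_operator": 0.9, "clerk": 0.6, "teacher": 0.1}

def individual_exposure(probs, scores, robot_adoption_pace):
    """Combine the three elements of the measure into one exposure score."""
    return robot_adoption_pace * sum(
        p * scores[occ] for occ, p in probs.items()
    )

# Robot adoption pace in the country-year (made-up figure).
print(individual_exposure(occupation_probs, automatability, 1.4))  # 0.91
```

Because the occupational probabilities come from the pre-automation labor market rather than the respondent's current job, the score avoids the contamination problem the abstract describes.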

Conclusion

We study the effects of robot adoption on voting behavior in western Europe. We find that higher exposure to automation increases support for radical-right parties. We argue that an individual-level analysis of vulnerability to automation is required, given the prominent role played by the distributional effects of automation unfolding within geographic areas. We also argue that measures of automation exposure based on an individual’s current occupation, as used in previous studies, are potentially problematic, due to direct and indirect displacement induced by automation. We then propose an approach that combines individual observable features with historical labor-market data. Our paper provides further evidence on the material drivers behind the increasing support for the radical right. At the same time, it takes into account the role of cultural factors and shows evidence of their interplay with automation in explaining the political realignment witnessed by advanced Western democracies.

Friday, October 29, 2021

Harms of AI

Daron Acemoglu
NBER Working Paper No. 29247
September 2021

Abstract

This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy's most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI's promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment - to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient.

Conclusion

In this essay, I explored several potential economic, political and social costs of the current path of AI technologies. I suggested that if AI continues to be deployed along its current trajectory and remains unregulated, then it can harm competition, consumer privacy and consumer choice, it may excessively automate work, fuel inequality, inefficiently push down wages, and fail to improve productivity. It may also make political discourse increasingly distorted, cutting one of the lifelines of democracy. I also mentioned several other potential social costs from the current path of AI research.

I should emphasize again that all of these potential harms are theoretical. Although there is much evidence indicating that not all is well with the deployment of AI technologies, and the problems of increasing market power, disappearance of work, inequality, low wages, and meaningful challenges to democratic discourse and practice are all real, we do not have sufficient evidence to be sure that AI has been a serious contributor to these troubling trends. Nevertheless, precisely because AI is a promising technological platform, aiming to transform every sector of the economy and every aspect of our social lives, it is imperative for us to study what its downsides are, especially on its current trajectory. It is in this spirit that I discussed the potential costs of AI in this paper.

Monday, January 4, 2021

Is AI Making People Lose A Sense Of Achievement?

Kashyap Raibagi
Analytics India Magazine
Originally published 27 Nov 20

Here is an excerpt:

The achievement gaps

In the case of ‘total displacement’ of jobs due to automation, there is nothing to consider in terms of achievement. But if there is a ‘collaborative replacement’, then it has the potential to create achievement gaps, as the study notes.

Where once workers used their cognitive and physical abilities to be creative, efficient and hard-working in producing a commodifiable output, automation has reduced their roles to merely maintaining, taking orders, or supervising.

For instance, since an AI-based tool can help find the perfect acoustics in a room, the job of a musician’s road crew, once considered very significant, is reduced to a mere ‘maintenance’ role. An Amazon worker has only to ‘take orders’ about where to place packages to keep the storage organised. Even the coders who created the best AI chess players are only ‘supervising’ the AI player as it beats other players, not playing chess themselves. This reduces the value of their role in the output produced.

Also, in terms of a worker’s commitments, while one effort may substitute for another, the substitution does not necessarily ensure a sense of achievement. An Uber driver’s effort to find customers may have been reduced, but the effort that replaces it, completing more rides, which is a more physical task, does not necessarily give him a better sense of achievement.

Sunday, January 19, 2020

A Right to a Human Decision

Aziz Z. Huq
Virginia Law Review, Vol. 105
U of Chicago, Public Law Working Paper No. 713

Abstract

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.


This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing “right to a well-calibrated machine decision” as ultimately more normatively well-grounded.
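"Well-calibrated" has a precise statistical reading in machine learning: among cases assigned a risk score p, roughly a fraction p should turn out positive. A minimal sketch of one common diagnostic, expected calibration error, on simulated scores (the binning scheme and toy data are illustrative assumptions, not anything drawn from the Article):

```python
import numpy as np

def expected_calibration_error(probs, outcomes, n_bins=10):
    """Average gap between predicted probability and observed frequency."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # Weight each bin's gap by the share of cases it contains.
            ece += mask.mean() * abs(probs[mask].mean() - outcomes[mask].mean())
    return ece

rng = np.random.default_rng(1)
scores = rng.uniform(size=1000)              # model's predicted probabilities
outcomes = rng.uniform(size=1000) < scores   # outcomes drawn to match them
print(expected_calibration_error(scores, outcomes))  # close to 0 when calibrated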

The paper can be downloaded here.

Saturday, January 18, 2020

Could a Rising Robot Workforce Make Humans Less Prejudiced?

Jackson, J., Castelo, N., & Gray, K. (2019).
American Psychologist.

Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (total N = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5-6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and at other times reduce them.
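A mediation claim of this kind is commonly checked by regressing the mediator on the treatment and the outcome on both. A minimal sketch of that logic on simulated data (the variable names, effect sizes, and regression setup are assumptions for illustration, not the authors' analysis code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
robot_salience = rng.integers(0, 2, n)        # 0 = control, 1 = robot prime
panhumanism = 0.5 * robot_salience + rng.normal(size=n)
prejudice = -0.4 * panhumanism + rng.normal(size=n)
df = pd.DataFrame({"robot_salience": robot_salience,
                   "panhumanism": panhumanism,
                   "prejudice": prejudice})

# Path a: does the prime raise panhumanism?
a = smf.ols("panhumanism ~ robot_salience", df).fit()
# Path b and direct effect c': does panhumanism carry the effect on prejudice?
b = smf.ols("prejudice ~ robot_salience + panhumanism", df).fit()

print("path a (prime -> panhumanism):", round(a.params["robot_salience"], 3))
print("path b (panhumanism -> prejudice):", round(b.params["panhumanism"], 3))
print("direct effect c':", round(b.params["robot_salience"], 3))
```

In practice the indirect effect (a × b) would be tested with bootstrapped confidence intervals rather than read off the point estimates.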

From the General Discussion

An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also increased more in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.

The research is here.

Sunday, December 22, 2019

What jobs are affected by AI? Better-paid, better-educated workers face the most exposure

M. Muro, J. Whiton, & R. Maxim
Brookings
Originally posted 20 Nov 19

Here is an excerpt:

AI could affect work in virtually every occupational group. However, whereas research on robotics- and software-driven automation continues to show that less-educated, lower-wage workers may be most exposed to displacement, the present analysis suggests that better-educated, better-paid workers (along with manufacturing and production workers) will be the most affected by the new AI technologies, with some exceptions.

Our analysis shows that workers with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree. Holders of bachelor’s degrees will be the most exposed by education level, more than five times as exposed to AI as workers with just a high school degree.

Our analysis shows that AI will be a significant factor in the future work lives of relatively well-paid managers, supervisors, and analysts. Also exposed are factory workers, who are increasingly well-educated in many occupations as well as heavily involved with AI on the shop floor. AI may be much less of a factor in the work of most lower-paid service workers.

Men, who are overrepresented in both analytic-technical and professional roles (as well as production), work in occupations with much higher AI exposure scores. Meanwhile, women’s heavy involvement in “interpersonal” education, health care support, and personal care services appears to shelter them. This both tracks with and accentuates the finding from our earlier automation analysis.

The info is here.

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and
D. M. Søndergaard (Eds.)
Designing Robots.

Abstract
This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. More particularly, in the first movement, we bring to the fore three features of moral life, to wit: (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we would need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened, i.e., to do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.

The book chapter is here.

Tuesday, August 6, 2019

Ethics and automation: What to do when workers are displaced

Tracy Mayor
MIT School of Management
Originally published July 8, 2019

As companies embrace automation and artificial intelligence, some jobs will be created or enhanced, but many more are likely to go away. What obligation do organizations have to displaced workers in such situations? Is there an ethical way for business leaders to usher their workforces through digital disruption?

Researchers wrestled with those questions recently at MIT Technology Review’s EmTech Next conference. Their conclusion: Company leaders need to better understand the negative repercussions of the technologies they adopt and commit to building systems that drive economic growth and social cohesion.

Pramod Khargonekar, vice chancellor for research at University of California, Irvine, and Meera Sampath, associate vice chancellor for research at the State University of New York, presented findings from their paper, “Socially Responsible Automation: A Framework for Shaping the Future.”

The research makes the case that “humans will and should remain critical and central to the workplace of the future, controlling, complementing and augmenting the strengths of technological solutions.” In this scenario, automation, artificial intelligence, and related technologies are tools that should be used to enrich human lives and livelihoods.

Aspirational, yes, but how do we get there?

The info is here.

Monday, March 25, 2019

U.S. companies put record number of robots to work in 2018

Reuters
Originally published February 28, 2019


U.S. companies installed more robots last year than ever before, as cheaper and more flexible machines put them within reach of businesses of all sizes and in more corners of the economy beyond their traditional foothold in car plants.

Shipments hit 28,478, nearly 16 percent more than in 2017, according to data seen by Reuters that was set for release on Thursday by the Association for Advancing Automation, an industry group based in Ann Arbor, Michigan.

Shipments increased in every sector the group tracks, except automotive, where carmakers cut back after finishing a major round of tooling up for new truck models.

The info is here.

Wednesday, June 20, 2018

How the Enlightenment Ends

Henry A. Kissinger
The Atlantic
Posted in the June 2018 Issue

Here are two excerpts:

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

(cut)

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

The article is here.

Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable. But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions. And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision. It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system? The best answer, I believe, is that the system is shown to be sufficiently accurate. What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human. It also means that there are no other readily available systems that produce more accurate predictions.
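Leonard's criterion can be read as a straightforward benchmark comparison: the system's accuracy on held-out cases must meet or exceed a trained human's. A toy sketch of that check (the labels, error rates, and threshold below are invented for illustration):

```python
from random import Random

rng = Random(42)
truth = [rng.randint(0, 1) for _ in range(1000)]
# Hypothetical predictions: the system is right ~85% of the time,
# the human ~80% (numbers invented for illustration).
system = [t if rng.random() < 0.85 else 1 - t for t in truth]
human = [t if rng.random() < 0.80 else 1 - t for t in truth]

def accuracy(preds, labels):
    """Fraction of predictions that match the true outcomes."""
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

sys_acc, hum_acc = accuracy(system, truth), accuracy(human, truth)
print(f"system {sys_acc:.3f} vs human {hum_acc:.3f}")
# On this criterion, use of the system is justified only if it is
# at least as accurate as the trained human baseline.
print("justified:", sys_acc >= hum_acc)
```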

The article is here.

Sunday, April 15, 2018

What’s Next for Humanity: Automation, New Morality and a ‘Global Useless Class’

Kimiko de Freytas-Tamura
The New York Times
Originally published March 19, 2018

What will our future look like — not in a century but in a mere two decades?

Terrifying, if you’re to believe Yuval Noah Harari, the Israeli historian and author of “Sapiens” and “Homo Deus,” a pair of audacious books that offer a sweeping history of humankind and a forecast of what lies ahead: an age of algorithms and technology that could see us transformed into “super-humans” with godlike qualities.

In an event organized by The New York Times and How To Academy, Mr. Harari gave his predictions to the Times columnist Thomas L. Friedman. Humans, he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening.” Here are highlights of the interview.

Artificial intelligence and automation will create a ‘global useless class.’

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Mr. Harari said, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater, because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

The article is here.

The video is worth watching.

Please read Sapiens and Homo Deus by Yuval Harari.

Monday, November 13, 2017

Will life be worth living in a world without work? Technological Unemployment and the Meaning of Life

John Danaher
forthcoming in Science and Engineering Ethics

Abstract

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (i) the literature on technological unemployment and workplace automation; (ii) the antiwork critique — which I argue gives reasons to embrace technological unemployment; and (iii) the philosophical debate about the conditions for meaning in life — which I argue gives reasons for concern.

The article is here.

Thursday, October 26, 2017

DeepMind launches new research team to investigate AI ethics

James Vincent
The Verge
Originally posted October 4, 2017

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting in early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including the AI Now Institute at NYU and the Leverhulme Centre for the Future of Intelligence.

The article is here.

Friday, October 13, 2017

Automation on our own terms

Benedict Dellot and Fabian Wallace-Stephens
Medium.com
Originally published September 17, 2017

Here is an excerpt:

There are three main risks of embracing AI and robotics unreservedly:
  1. A rise in economic inequality — To the extent that technology deskills jobs, it will put downward pressure on earnings. If jobs are removed altogether as a result of automation, the result will be greater returns for those who make and deploy the technology, as well as the elite workers left behind in firms. The median OECD country has already seen a decrease in its labour share of income of about 5 percentage points since the early 1990s, with capital’s share swallowing the difference. Another risk here is market concentration. If large firms continue to adopt AI and robotics at a faster rate than small firms, they will gain enormous efficiency advantages and as a result could take an excessive share of markets. Automation could lead to oligopolistic markets, where a handful of firms dominate at the expense of others.
  2. A deepening of geographic disparities — Since the computer revolution of the 1980s, cities that specialise in cognitive work have gained a comparative advantage in job creation. In 2014, 5.5 percent of all UK workers operated in new job types that emerged after 1990, but the figure for workers in London was almost double that at 9.8 percent. The ability of cities to attract skilled workers, as well as the diverse nature of their economies, makes them better placed than rural areas to grasp the opportunities of AI and robotics. The most vulnerable locations will be those that are heavily reliant on a single automatable industry, such as parts of the North East that have a large stock of call centre jobs.
  3. An entrenchment of demographic biases — If left untamed, automation could disadvantage some demographic groups. Recall our case study analysis of the retail sector, which suggested that AI and robotics might lead to fewer workers being required in bricks and mortar shops, but more workers being deployed in warehouse operative roles. Given women are more likely to make up the former and men the latter, automation in this case could exacerbate gender pay and job differences. It is also possible that the use of AI in recruitment (e.g. algorithms that screen CVs) could amplify workplace biases and block people from employment based on their age, ethnicity or gender.
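The last point about CV screening lends itself to a simple audit: compare selection rates across demographic groups, as in the common "four-fifths" disparate-impact heuristic. A minimal sketch on hypothetical screening outcomes (the data and the 0.8 threshold are illustrative conventions, not drawn from the article):

```python
screened = [
    # (group, passed_screening) -- hypothetical screening outcomes
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def selection_rate(records, group):
    """Fraction of applicants in a group who pass the screen."""
    outcomes = [passed for g, passed in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_men = selection_rate(screened, "men")      # 0.75
rate_women = selection_rate(screened, "women")  # 0.25
ratio = rate_women / rate_men
# Flag if the disadvantaged group's rate falls below 80% of the other's.
print(f"selection-rate ratio: {ratio:.2f}; flagged: {ratio < 0.8}")
```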

Saturday, March 25, 2017

Will Democracy Survive Big Data and Artificial Intelligence?

Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, and others
Scientific American
Originally posted February 25, 2017

Here is an excerpt:

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.

(cut)

These technologies are also becoming increasingly popular in the world of politics. Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging,” which is the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered “wise king,” who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.

The article is here.

Friday, December 30, 2016

The ethics of algorithms: Mapping the debate

Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi
Big Data and Society
DOI: 10.1177/2053951716679679, Dec 2016

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Monday, November 14, 2016

A Bright Robot Future Awaits, Once This Downer Election Is Over

By Andrew Mayeda
Bloomberg
Originally published October 24, 2016

Here is an excerpt:

‘Singularity Is Near’

An hour’s drive away, in San Francisco, the influx of tech workers has helped push the median single-family home price to $1.26 million. Private buses carry them to jobs at Apple Inc., Alphabet Inc.’s Google, or Facebook. Meanwhile, one former mayor has proposed using a decommissioned aircraft carrier to house the city’s homeless, who throng the sidewalks along Market Street, home to Uber and Twitter Inc.

How much will the “second machine age” deepen such divisions? Last month, a trio of International Monetary Fund economists came up with some chilling answers. Even if humans retain their creative edge over robots, they found, it will likely take two decades before productivity gains outweigh the downward pressure on wages from automation; meanwhile, “inequality will be worse, possibly dramatically so.”

And if the robots become perfect substitutes, the paper envisages an extreme scenario in which labor becomes wholly redundant as “capital takes over the entire economy.” The IMF economists even invoke futurist Ray Kurzweil’s 2006 bestseller, “The Singularity Is Near.”

Silicon Valley executives say alarm bells have been ringing for decades about job-killing technology, and they’re usually false alarms.

The article is here.
