Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Work.

Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023). Academy of Management Journal, 66(6), 1625–1650.

Abstract

Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled.  Here are some thoughts:

This study examines how employees with high intrinsic motivation (those who enjoy their work) treat other employees differently depending on those colleagues' level of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to associate higher morality with others who also have high intrinsic motivation. This leads them to offer more help and support to those similar colleagues, while judging colleagues with lower intrinsic motivation as less moral and helping them less.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting the potential for judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.
Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and ethical concerns.

Thursday, September 21, 2023

The Myth of the Secret Genius

Brian Klaas
The Garden of Forking Paths
Originally posted 30 Nov 22

Here are two excerpts:

A recent research study, involving a collaboration between physicists who model complex systems and an economist, however, has revealed why billionaires are so often mediocre people masquerading as geniuses. Using computer modelling, they developed a fake society in which there is a realistic distribution of talent among competing agents in the simulation. They then applied some pretty simple rules for their model: talent helps, but luck also plays a role.

Then, they tried to see what would happen if they ran and re-ran the simulation over and over.

What did they find? The most talented people in society almost never became extremely rich. As they put it, “the most successful individuals are not the most talented ones and, on the other hand, the most talented individuals are not the most successful ones.”

Why? The answer is simple. If you’ve got a society of, say, 8 billion people, there are literally billions of humans who are in the middle distribution of talent, the largest area of the Bell curve. That means that in a world that is partly defined by random chance, or luck, the odds that someone from the middle levels of talent will end up as the richest person in the society are extremely high.

Look at this first plot, in which the researchers show capital/success (being rich) on the vertical/Y-axis, and talent on the horizontal/X-axis. What’s clear is that society’s richest person is only marginally more talented than average, and there are a lot of people who are extremely talented that are not rich.

Then, they tried to figure out why this was happening. In their simulated world, lucky and unlucky events would affect agents every so often, in a largely random pattern. When they measured the frequency of luck or misfortune for any individual in the simulation, and then plotted it against becoming rich or poor, they found a strong relationship.

(cut)

The authors conclude by stating “Our results highlight the risks of the paradigm that we call ‘naive meritocracy’, which fails to give honors and rewards to the most competent people, because it underestimates the role of randomness among the determinants of success.”

Indeed.
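
The simulation described above is simple enough to sketch in code. Below is a minimal Python version of the same idea; the population size, number of rounds, event probability, and update rules are illustrative assumptions, not the researchers' exact specification. Talent is drawn from a bell curve, chance events arrive at random, and a lucky break pays off only in proportion to talent.

```python
import numpy as np

rng = np.random.default_rng(42)

N_AGENTS = 1000   # population size (assumption, not from the article)
N_STEPS = 80      # number of rounds (assumption)
P_EVENT = 0.1     # chance an agent is hit by a random event each round (assumption)

# A realistic, bell-shaped distribution of talent, clipped to [0, 1].
talent = np.clip(rng.normal(loc=0.6, scale=0.1, size=N_AGENTS), 0.0, 1.0)
capital = np.full(N_AGENTS, 10.0)   # everyone starts with equal capital

for _ in range(N_STEPS):
    hit = rng.random(N_AGENTS) < P_EVENT    # who experiences an event this round
    lucky = rng.random(N_AGENTS) < 0.5      # each event is good or bad at random
    # Talent helps: a lucky break pays off only with probability equal to talent.
    cashes_in = hit & lucky & (rng.random(N_AGENTS) < talent)
    capital[cashes_in] *= 2.0
    capital[hit & ~lucky] /= 2.0            # bad luck halves capital

richest = int(np.argmax(capital))
print(f"Richest agent's talent: {talent[richest]:.2f}")
print(f"Mean talent: {talent.mean():.2f}, max talent: {talent.max():.2f}")
```

Re-running this with different seeds reproduces the qualitative finding: the richest agent is almost never among the most talented, simply because middling talent is vastly more common than exceptional talent.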


Here is my summary:

The myth of the secret genius: The belief that some people are just born with natural talent and that there is nothing we can do to achieve the same level of success.

The importance of hard work: The vast majority of successful people are not geniuses. They are simply people who have worked hard and persevered in the face of setbacks.

The power of luck: Luck plays a role in everyone's success. Some people are luckier than others, and most people fail to factor luck and other external variables into their assessments of success. This bias is a form of the Fundamental Attribution Error.

The importance of networks: Our networks play a big role in our success. We need to be proactive in building relationships with people who can help us achieve our goals.

Tuesday, September 19, 2023

‘Bullshit’ After All? Why People Consider Their Jobs Socially Useless

Walo, S. (2023).
Work, Employment and Society, 0(0).

Abstract

Recent studies show that many workers consider their jobs socially useless. Thus, several explanations for this phenomenon have been proposed. David Graeber’s ‘bullshit jobs theory’, for example, claims that some jobs are in fact objectively useless, and that these are found more often in certain occupations than in others. Quantitative research on Europe, however, finds little support for Graeber’s theory and claims that alienation may be better suited to explain why people consider their jobs socially useless. This study extends previous analyses by drawing on a rich, under-utilized dataset and provides new evidence for the United States specifically. Contrary to previous studies, it thus finds robust support for Graeber’s theory on bullshit jobs. At the same time, it also confirms existing evidence on the effects of various other factors, including alienation. Work perceived as socially useless is therefore a multifaceted issue that must be addressed from different angles.

Discussion and conclusion

Using survey data from the US, this article tests Graeber’s (2018) argument that socially useless jobs are primarily found in specific occupations. Doing so, it finds that working in one of Graeber’s occupations significantly increases the probability that workers perceive their job as socially useless (compared with all others). This is true for administrative support occupations, sales occupations, business and finance occupations, and managers. Only legal occupations did not show a significant effect as predicted by Graeber’s theory. More detailed analyses even reveal that, of all 21 occupations, Graeber’s occupations are the ones that are most strongly associated with socially useless jobs when other factors are controlled for. This article is therefore the first to find quantitative evidence supporting Graeber’s argument. In addition, this article also confirms existing evidence on various other factors that can explain why people consider their jobs socially useless, including alienation, social interaction and public service motivation.

These findings may seem somewhat contradictory to the results of Soffia et al. (2022) who find that Graeber’s theory is not supported by their data. This can be explained by several differences between their study and this one. First, Soffia et al. ask people whether they ‘have the feeling of doing useful work’, while this study asks them whether they think they are making a ‘positive impact on [their] community and society’. These differently worded questions may elicit different responses. However, additional analyses show that results do not differ much between these questions (see online supplementary appendix C). Second, Soffia et al. examine data from Europe, while this study uses data from the US. This supports the notion that Graeber’s theory may only apply to heavily financialized Anglo-Saxon countries. Third, the results of Soffia et al. are based on raw distributions over occupations, while the findings presented here are mainly based on regression models that control for various other factors. If only raw distributions are analysed, however, this article also finds only limited support for Graeber’s theory.
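
The methodological contrast Walo draws, raw occupational distributions versus regression models with controls, is easy to illustrate. The sketch below is a generic Python version of that approach, not the paper's actual model specification; the dataset and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey file: one row per worker, with a binary indicator
# `useless` (1 = considers own job socially useless) plus covariates.
df = pd.read_csv("survey.csv")

# Raw distribution: the share of workers calling their job useless,
# broken down by occupation, with nothing controlled for.
raw = df.groupby("occupation")["useless"].mean().sort_values(ascending=False)
print(raw)

# Regression with controls: does occupation still predict perceived
# uselessness once alienation, social interaction, and public service
# motivation are held constant?
model = smf.logit(
    "useless ~ C(occupation) + alienation + social_interaction"
    " + public_service_motivation",
    data=df,
).fit()
print(model.summary())
```

If the occupation terms stay significant once the controls are in place, that supports an occupational (Graeber-style) account; if they wash out while alienation dominates, the alienation account fares better. Which pattern appears can hinge on exactly this modelling choice, which is the point of the third difference noted above.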


My take for clinical psychologists:

Bullshit jobs are not just a problem for the people who do them. They also have a negative impact on society as a whole. For example, they can lead to a decline in productivity, a decrease in innovation, and an increase in inequality.

Bullshit jobs are often created by the powerful in society in order to maintain their own power and privilege. For example, managers may create bullshit jobs in order to justify their own positions or to make themselves look more important.

There is a growing awareness of the problem of bullshit jobs, and there are a number of initiatives underway to address it. For example, some organizations are now hiring "bullshit detectives" to identify and eliminate bullshit jobs.

Thursday, September 7, 2023

AI Should Be Terrified of Humans

Brian Kateman
Time.com
Originally posted 24 July 23

Here are two excerpts:

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we take for granted starting right now that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work with the intention to build a world where they don’t have to.

(cut)

Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being-uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the possibility that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.


Wednesday, August 2, 2023

The dark side of generosity: Employees with a reputation for giving are selectively targeted for exploitation


Stanley, M. L., Neck, C. P., & Neck, C. B. (2023). Journal of Experimental Social Psychology, 108, 104503.

Abstract

People endorse generosity as a moral virtue worth exemplifying, and those who acquire reputations for generosity are admired and publicly celebrated. In an organizational context, hiring, retaining, and promoting generous employees can make organizations more appealing to customers, suppliers, and top talent. However, using complementary methods and experimental designs with large samples of full-time managers, we find consistent evidence that managers are inclined to take unfair advantage of employees with reputations for generosity, selectively targeting them for exploitation in ways that likely, and ironically, hamper long-term organizational success. This selective targeting of generous employees for exploitation was statistically explained by a problematic assumption: Since they have reputations for generosity, managers assume that, if they had the opportunity, they would have freely volunteered for their own exploitation. We also investigate a possible solution to the targeting of more generous employees for exploitative practices. Merely asking managers to make a judgment about the ethics of an exploitative request eliminates their propensity to target generous employees over other employees for exploitation.

The article is behind a paywall.

Here is a summary:

The researchers suggest that organizations should be aware of the potential for managers to exploit employees with a reputation for generosity, and that they should implement policies and procedures to protect those employees from exploitation.

Here are some of the key takeaways from the study:
  • Employees with a reputation for generosity are more likely to be targeted for exploitation by managers.
  • Managers are more likely to make exploitative requests of employees with whom they have a personal relationship.
  • Organizations should be aware of the potential for managers to exploit employees with a reputation for generosity and implement policies and procedures to protect employees from exploitation.
The study also suggests that there are a number of factors that may contribute to the exploitation of generous employees, including:
  • The manager's perception of the employee's willingness to comply with exploitative requests.
  • The manager's personal relationship with the employee.
  • The organization's culture and policies.
It is important to note that the study did not find that all managers exploit generous employees. However, the study does suggest that it is a phenomenon that organizations should be aware of and take steps to prevent.

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 MAY 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 
https://doi.org/10.1007/s13347-023-00620-z

Abstract

In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.

(cut)

Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is led by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming on its own, but has some effect on how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions which can shed light on the presupposed underlying normative claims that are involved in the field. Here, further exploration at deeper levels could help us to see more clearly which things are important and which are not, and in which directions the field should develop.

Wednesday, September 7, 2022

The moralization of effort

Celniker, J. B., et al. (2022).
Journal of Experimental Psychology:
General. Advance online publication.
https://doi.org/10.1037/xge0001259

Abstract

People believe that effort is valuable, but what kind of value does it confer? We find that displays of effort signal moral character. Eight studies (N = 5,502) demonstrate the nature of these effects in the domains of paid employment, personal fitness, and charitable fundraising. The exertion of effort is deemed morally admirable (Studies 1–6) and is monetarily rewarded (Studies 2–6), even in situations where effort does not directly generate additional product, quality, or economic value. Convergent patterns of results emerged in South Korean and French cross-cultural replications (Studies 2b and 2c). We contend that the seeming irrationality of valuing effort for its own sake, such as in situations where one’s efforts do not directly increase economic output (Studies 3–6), reveals a “deeply rational” social heuristic for evaluating potential cooperation partners. Specifically, effort cues engender broad moral trait ascriptions, and this moralization of effort influences donation behaviors (Study 5) and cooperative partner choice decision-making (Studies 4 and 6). In situating our account of effort moralization into past research and theorizing, we also consider the implications of these effects for social welfare policy and the future of work.

General Discussion

Is effort deemed socially valuable, even in situations where one’s efforts have no direct economic utility? Eight studies using multiple methodologies and cross-cultural samples indicate that it is. We provided evidence of effort moralization—displays of effort increased the moral qualities ascribed to individuals (we did not, we should note, provide evidence of the specific process by which effort cues shift from having a nonmoral to moral status, a more limited definition of moralization; Rhee et al., 2019). Moreover, the moralization of effort guided participants’ allocations of monetary resources and selections of cooperation partners. These data support our argument that effort moralization is a “deeply rational” social heuristic for navigating cooperation markets (Barclay, 2013; Kenrick et al., 2009). Even in circumstances where effort was economically unnecessary, people believed such efforts reflected others’ inner virtues.

(cut)

This evolutionary perspective may provide a more parsimonious framework for integrating research on effort evaluations: the “effort heuristic” (Kruger et al., 2004) may be more functionally dynamic than previously recognized, with effort moralization constituting one of its social functions. Thus, rather than directly causing people to moralize effort, cultural beliefs like the PWE may be scaffolded on evolved psychological mechanisms such as shared intuitions about the value of effort. The PWE (and similar work ethics among other populations) may have emerged, then, because it benefited from a combination of being well fit to our psychology (in appealing to an underlying tendency for effort moralization) and culturally useful (in promoting cooperation and industriousness; Henrich, 2020; Henrich & Boyd, 2016).


Note: Hardworking people are often seen as more moral than those perceived as lazy, yet people who work harder are not always more economically productive. Capitalist fantasies play into these moral stereotypes. Effort moralization reinforces the misconceptions that poor people are lazy and rich people are hard workers. Neither stereotype is accurate.

Thursday, April 14, 2022

AI won’t steal your job, just make it meaningless

John Danaher
iainews.com
Originally published 18 MAR 22

New technologies are often said to be in danger of making humans redundant, replacing them with robots and AI, and making work disappear altogether. A crisis of identity and purpose might result from that, but Silicon Valley tycoons assure us that a universal basic income could at least take care of people’s material needs, leaving them with plenty of leisure time in which to forge new identities and find new sources of purpose.

This, however, paints an overly simplistic picture. What seems more likely to happen is that new technologies will not make humans redundant at a mass scale, but will change the nature of work, making it worse for many, and sapping the elements that give it some meaning and purpose. It’s the worst of both worlds: we’ll continue working, but our jobs will become increasingly meaningless. 

History has some lessons to teach us here. Technology has had a profound effect on work in the past, not just on the ways in which we carry out our day-to-day labour, but also on how we understand its value. Consider the humble plough. In its most basic form, it is a hand-operated tool, consisting of little more than a pointed stick that scratches a furrow through the soil. This helps a farmer to sow seeds but does little else. Starting in the middle ages, however, more complex, ‘heavy’ ploughs began to be used by farmers in Northern Europe. These heavy ploughs rotated and turned the earth, bringing nutrient rich soils to the surface, and radically altering the productivity of farming. Farming ceased being solely about subsistence. It started to be about generating wealth.

The argument about how the heavy plough transformed the nature of work was advanced by historian Lynn White Jr in his classic study Medieval Technology and Social Change. Writing in the idiom of the early 1960s, he argued that “No more fundamental change in the idea of man’s relation to the soil can be imagined: once man had been part of nature; now he became her exploiter.”

It is easy to trace a line – albeit one that takes a detour through Renaissance mercantilism and the Industrial revolution – from the development of the heavy plough to our modern conception of work. Although work is still an economic necessity for many people, it is not just that. It is something more. We don’t just do it to survive; we do it to thrive. Through our work we can buy into a certain lifestyle and affirm a certain identity. We can develop mastery and cultivate self-esteem; we make a contribution to our societies and a name for ourselves. 

Wednesday, September 8, 2021

America Runs on ‘Dirty Work’ and Moral Inequality

Eyal Press
The New York Times
Originally posted 13 Aug 21

Here is an excerpt:

“Dirty work” can refer to any unpleasant job, but among social scientists, the term has a more pointed meaning. In 1962, Everett Hughes, an American sociologist, published an essay titled “Good People and Dirty Work” that drew on conversations he’d had in postwar Germany about the mass atrocities of the Nazi era. Mr. Hughes argued that the persecution of Jews proceeded with the unspoken assent of many supposedly enlightened Germans, who refrained from asking too many questions because, on some level, they were not entirely displeased.

This was the nature of dirty work as Mr. Hughes conceived of it: unethical activity that was delegated to certain agents and then disavowed by society, even though the perpetrators had an “unconscious mandate” from their fellow citizens. As extreme as the Nazi example was, this dynamic existed in every society, Mr. Hughes wrote, enabling respectable citizens to distance themselves from the morally troubling things being done in their name. The dirty workers were not rogue actors but “agents” of “good people” who passively stood by.

Contemporary America runs on dirty work. Some of the people who do this work are our agents by virtue of the fact that they perform public functions, such as running the world’s largest penal system. Others qualify as such by catering to our consumption habits — the food we eat, the fossil fuels we burn, which are drilled and fracked by dirty workers in places like the Gulf of Mexico. The high-tech gadgets in our pockets rely on yet another form of dirty work — the mining of cobalt — that has been outsourced to workers in Africa and to foreign subcontractors that often brutally exploit them.

Like the essential jobs performed by grocery clerks and other low-wage workers during the Covid-19 pandemic, this work sustains our lifestyles and undergirds the prevailing social order, but privileged people are generally spared from having to think about it. One reason is that the dirty work occurs far away from them, in isolated institutions — prisons, slaughterhouses — that are closed to the public. Another reason is that the privileged rarely have to do it. Although there is no shortage of it to go around, dirty work in America is not randomly distributed. 

Thursday, January 28, 2021

Automation, work and the achievement gap

Danaher, J., Nyholm, S. 
AI Ethics (2020). 

Abstract

Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency, they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.

(cut)

Conclusion

Achievement is an important part of the well-lived life. It is the positive side of responsibility. Where we blame ourselves and others for doing bad things, we also praise ourselves for achieving positive (or value neutral) things. Achievement is particularly important when it comes to meaningful work. One of the problems with widespread automation is that it threatens to undermine at least three of the four main conditions for achievement in the workplace: it can reduce the value of work tasks; reduce the cost of committing to those work tasks; and sever the causal connection between human effort and workplace outcome. This opens up ‘achievement gaps’ in the workplace. There are, however, some potential ways to manage the threat of achievement gaps: we can focus on other aspects of meaningful work; we can find some ways to retain the human touch in the production of workplace outputs; we can emphasise the importance of teamwork in producing valuable outputs; and we can find outlets for achievement outside of the workplace.

Monday, January 4, 2021

Is AI Making People Lose A Sense Of Achievement?

Kashyap Raibagi
Analytics India Magazine
Originally published 27 Nov 20

Here is an excerpt:

The achievement gaps

In the case of ‘total displacement’ of jobs due to automation, there is nothing to consider in terms of achievement. But if there is a ‘collaborative replacement’, then it has the potential to create achievement gaps, as the study notes.

Where once workers used their cognitive and physical abilities to be creative, efficient and hard-working in producing a commodifiable output, automation has reduced their roles to merely maintaining, taking orders, or supervising.

For instance, since an AI-based tool can help find the perfect acoustics in a room, a musician’s road crew’s job that was once considered very significant is reduced to a mere ‘maintenance’ role. Or an Amazon worker has to only ‘take orders’ to place packages to keep the storage organised. Even coders who created the best AI chess players are only ‘supervising’ the AI-player to beat other players, not playing chess themselves. This reduces the value of their role in the output produced.

Also, in terms of a worker’s commitments, while one effort may substitute for another, the substitution does not necessarily ensure a sense of achievement. An Uber driver’s effort to find customers may have been reduced, but his substituted effort of doing more rides, which is a more physical effort, does not necessarily give him a better sense of achievement.

Thursday, January 9, 2020

Artificial Intelligence Is Superseding Well-Paying Wall Street Jobs

Jack Kelly
forbes.com
Originally posted 10 Dec 19

Here is an excerpt:

Compliance people run the risk of being replaced too. “As bad actors become more sophisticated, it is vital that financial regulators have the funding resources, technological capacity and access to AI and automated technologies to be a strong and effective cop on the beat,” said Martina Rejsjö, head of Nasdaq Surveillance North America Equities.

Nasdaq, a tech-driven trading platform, has an associated regulatory body that offers over 40 different algorithms, using 35,000 parameters, to spot possible market abuse and manipulation in real time. “The massive and, in many cases, exponential growth in market data is a significant challenge for surveillance professionals,” Rejsjö said. “Market abuse attempts have become more sophisticated, putting more pressure on surveillance teams to find the proverbial needle in the data haystack.” In layman’s terms, she believes that the future is in tech overseeing trading activities, as the human eye is unable to keep up with the rapid-fire, sophisticated global trading dominated by algorithms.

When people say not to worry, that’s the precise time to worry. Companies—whether they are McDonald’s, introducing self-serve kiosks and firing hourly workers to cut costs, or top-tier investment banks that rely on software instead of traders to make million-dollar bets on the stock market—will continue to implement technology and downsize people in an effort to enhance profits and cut down on expenses. This trend will be hard to stop and have serious future consequences for the workers at all levels and salaries. 

The info is here.

Saturday, January 4, 2020

Robots in Finance Could Wipe Out Some of Its Highest-Paying Jobs

Lananh Nguyen
Bloomberg.com
Originally posted 6 Dec 19

Robots have replaced thousands of routine jobs on Wall Street. Now, they’re coming for higher-ups.

That’s the contention of Marcos Lopez de Prado, a Cornell University professor and the former head of machine learning at AQR Capital Management LLC, who testified in Washington on Friday about the impact of artificial intelligence on capital markets and jobs. The use of algorithms in electronic markets has automated the jobs of tens of thousands of execution traders worldwide, and it’s also displaced people who model prices and risk or build investment portfolios, he said.

“Financial machine learning creates a number of challenges for the 6.14 million people employed in the finance and insurance industry, many of whom will lose their jobs -- not necessarily because they are replaced by machines, but because they are not trained to work alongside algorithms,” Lopez de Prado told the U.S. House Committee on Financial Services.

During the almost two-hour hearing, lawmakers asked experts about racial and gender bias in AI, competition for highly skilled technology workers, and the challenges of regulating increasingly complex, data-driven financial markets.

The info is here.

Monday, December 23, 2019

Will The Future of Work Be Ethical?

Greg Epstein
Interview at TechCrunch.com
Originally posted 28 Nov 19

Here is an excerpt:

AI and climate: in a sense, you’ve already dealt with this new field people are calling the ethics of technology. When you hear that term, what comes to mind?

As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers.

About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good.

What are your biggest concerns about AI? What do you think needs to be addressed in order for us to feel more comfortable as a society with increased use of AI?

That’s not an easy answer; it’s something our society is going to be grappling with for years. From what I’ve learned at this conference, from what I’ve read and tried to understand, it’s a multidimensional solution. You’re going to need computer programmers to learn the technical skills to make their algorithms less biased. You’re going to need companies to hire those people and say, “This is our goal; we want to create an algorithm that’s fair and can do good.” You’re going to need the general society to ask for that standard. That’s my generation’s job, too. WikiLeaks, a couple of years ago, sparked the conversation about personal privacy and I think there’s going to be more sparks.

The info is here.

Monday, December 9, 2019

Escaping Skinner's Box: AI and the New Era of Techno-Superstition

John Danaher
Philosophical Disquisitions
Originally posted October 10, 2019

Here is an excerpt:

The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the worker, the more the workers either made outcomes worse or limited their own agency.

It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI; fair AI etc) to create an illusion of control.

(cut)

These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.

The info is here.

Monday, August 26, 2019

Psychological reactions to human versus robotic job replacement

Armin Granulo, Christoph Fuchs & Stefano Puntoni
Nature.com
Originally posted August 5, 2019

Abstract

Advances in robotics and artificial intelligence are increasingly enabling organizations to replace humans with intelligent machines and algorithms. Forecasts predict that, in the coming years, these new technologies will affect millions of workers in a wide range of occupations, replacing human workers in numerous tasks, but potentially also in whole occupations. Despite the intense debate about these developments in economics, sociology and other social sciences, research has not examined how people react to the technological replacement of human labour. We begin to address this gap by examining the psychology of technological replacement. Our investigation reveals that people tend to prefer workers to be replaced by other human workers (versus robots); however, paradoxically, this preference reverses when people consider the prospect of their own job loss. We further demonstrate that this preference reversal occurs because being replaced by machines, robots or software (versus other humans) is associated with reduced self-threat. In contrast, being replaced by robots is associated with a greater perceived threat to one’s economic future. These findings suggest that technological replacement of human labour has unique psychological consequences that should be taken into account by policy measures (for example, appropriately tailoring support programmes for the unemployed).

The info is here.

Thursday, August 8, 2019

Microsoft wants to build artificial general intelligence: an AI better than humans at everything

Kelsey Piper
www.vox.com
Originally published July 22, 2019

Here is an excerpt:

Existing AI systems beat humans at lots of narrow tasks — chess, Go, Starcraft, image generation — and they’re catching up to humans at others, like translation and news reporting. But an artificial general intelligence would be one system with the capacity to surpass us at all of those things. Enthusiasts argue that it would enable centuries of technological advances to arrive, effectively, all at once — transforming medicine, food production, green technologies, and everything else in sight.

Others warn that, if poorly designed, it could be a catastrophe for humans in a few different ways. A sufficiently advanced AI could pursue a goal that we hadn’t intended — a recipe for catastrophe. It could turn out unexpectedly impossible to correct once running. Or it could be maliciously used by a small group of people to harm others. Or it could just make the rich richer and leave the rest of humanity even further in the dust.

Getting AGI right may be one of the most important challenges ahead for humanity. Microsoft’s billion dollar investment has the potential to push the frontiers forward for AI development, but to get AGI right, investors have to be willing to prioritize safety concerns that might slow commercial development.

The info is here.

Tuesday, August 6, 2019

Ethics and automation: What to do when workers are displaced

Tracy Mayor
MIT Sloan School of Management
Originally published July 8, 2019

As companies embrace automation and artificial intelligence, some jobs will be created or enhanced, but many more are likely to go away. What obligation do organizations have to displaced workers in such situations? Is there an ethical way for business leaders to usher their workforces through digital disruption?

Researchers wrestled with those questions recently at MIT Technology Review’s EmTech Next conference. Their conclusion: Company leaders need to better understand the negative repercussions of the technologies they adopt and commit to building systems that drive economic growth and social cohesion.

Pramod Khargonekar, vice chancellor for research at University of California, Irvine, and Meera Sampath, associate vice chancellor for research at the State University of New York, presented findings from their paper, “Socially Responsible Automation: A Framework for Shaping the Future.”

The research makes the case that “humans will and should remain critical and central to the workplace of the future, controlling, complementing and augmenting the strengths of technological solutions.” In this scenario, automation, artificial intelligence, and related technologies are tools that should be used to enrich human lives and livelihoods.

Aspirational, yes, but how do we get there?

The info is here.

Monday, March 25, 2019

U.S. companies put record number of robots to work in 2018

Reuters
Originally published February 28, 2019


U.S. companies installed more robots last year than ever before, as cheaper and more flexible machines put them within reach of businesses of all sizes and in more corners of the economy beyond their traditional foothold in car plants.

Shipments hit 28,478, nearly 16 percent more than in 2017, according to data seen by Reuters that was set for release on Thursday by the Association for Advancing Automation, an industry group based in Ann Arbor, Michigan.

Shipments increased in every sector the group tracks, except automotive, where carmakers cut back after finishing a major round of tooling up for new truck models.

The info is here.