Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Disruption.

Wednesday, October 19, 2022

Technology and moral change: the transformation of truth and trust

Danaher, J., Sætra, H.S. 
Ethics Inf Technol 24, 35 (2022).
https://doi.org/10.1007/s10676-022-09661-y

Abstract

Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely-related ones; increasing their perceived scarcity/abundance; and disrupting traditional value-gatekeepers. This has implications for how we study other, technologically-mediated, value changes.

(cut)

Conclusion: lessons learned

Having examined our two case studies, it remains to consider whether or not there are similarities in how technology affects trust and truth, and if there are general lessons to be learned here about how technology may impact values in the future.

The two values we have considered are structurally similar and interrelated. They are both intrinsically and instrumentally valuable. They are both epistemic and practical in nature: we value truth and trust (at least in part) because they give us access to knowledge and help us to resolve the decision problems we face on a daily basis. We also see, in both case studies, similar mechanisms of value change at work. The most interesting, to our minds, are the following:
  • Technology changes the costs associated with accessing certain values, making them less or more important as a result. Digital disinformation technology increases the cost of finding out the truth, but reduces the cost of finding and reinforcing a shared identity community; reliable AI and robotics give us an (often cheaper and more efficient) substitute for trust in humans, while still giving us access to useful cognitive, emotional and physical assistance.
  • Technology makes it easier, or more attractive, to trade off or substitute some values against others. Digital disinformation technology allows us to obviate the need for finding out the truth and focus on other values instead; reliable machines allow us to substitute the value of reliability for the value of trust. This is a function of the plural nature of values, their scarcity, and the changing cost structure of values caused by technology.
  • Technology can make some values seem more scarce (rare, difficult to obtain), thereby increasing their perceived intrinsic value. Digital disinformation makes truth more elusive, thereby increasing its perceived value which, in turn, encourages some moral communities to increase their fixation on it; robots and AI make trust in humans less instrumentally necessary, thereby increasing the expressive value of trust in others.
  • Technology can disrupt power networks, thereby altering the social gatekeepers to value. To the extent that we still care about truth, digital disinformation increases the power of the epistemic elites that can help us to access the truth; trust-free or trust-alternative technologies can disrupt the power of traditional trusted third parties (professionals, experts, etc.) and redistribute power onto technology or a technological elite.

Friday, February 5, 2021

Shaking Things Up: Unintended Consequences of Firm Acquisitions on Racial and Gender Inequality

Letian Zhang
Harvard Business School
Originally published January 23, 2020

Abstract

This paper develops a theory of how disruptive events shape organizational inequality. Despite various organizational efforts, racial and gender inequality in the workplace remains high. I theorize that because the persistence of such inequality is reinforced by organizational structures and practices, disruptive events that shake up old hierarchies and break down routines and culture should give racial minority and women workers more opportunities to advance. To examine this theory, I explore a critical but seldom analyzed organizational event in the inequality literature: mergers and acquisitions. I propose that post-acquisition restructuring could offer an opportunity for firms to advance diversity initiatives and to objectively re-evaluate workers. Using a difference-in-differences design on a nationally representative sample covering 37,343 acquisitions from 1971 to 2015, I find that although acquisitions lead to occupational reconfiguration that favors higher-skilled workers, they also reduce racial and gender inequality. In particular, I find improved managerial representation of racial minorities and women and reduced racial and gender segregation in the acquired workplace. This post-acquisition effect is stronger when (a) the acquiring firm values race and gender equality more and (b) the acquired workplace had higher racial and gender inequality. These findings suggest that disruptive events could produce an unintended consequence of increasing racial and gender equality in the workplace.
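
For readers unfamiliar with the method named in the abstract, here is a minimal, purely illustrative sketch of a two-way fixed-effects difference-in-differences regression in Python. It is not the author's code or data; the file name, column names (treated, post, minority_mgr_share, workplace_id, year), and the chosen outcome are hypothetical stand-ins.

    # Illustrative sketch only: a generic two-way fixed-effects difference-in-differences
    # regression of the kind described in the abstract. All names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per workplace-year.
    df = pd.read_csv("establishment_panel.csv")

    # treated = 1 for workplaces that are ever acquired; post = 1 in years after the acquisition.
    df["did"] = df["treated"] * df["post"]

    # Workplace and year fixed effects absorb level differences, so the coefficient on
    # `did` estimates the post-acquisition change in the outcome for acquired workplaces
    # relative to non-acquired ones, with standard errors clustered by workplace.
    model = smf.ols(
        "minority_mgr_share ~ did + C(workplace_id) + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["workplace_id"]})

    print(model.params["did"], model.bse["did"])

A positive and significant coefficient on did would correspond to the pattern the abstract reports, namely improved managerial representation of racial minorities and women after acquisition.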

Managerial Implications

From a managerial perspective, disruptive events offer an opportunity to advance diversity or equality-related goals that might be difficult to pursue during normal times. As my analyses show, acquisition amplifies the race and gender differences between those acquiring firms that value diversity and those that do not. For managers concerned about race and gender issues, acquisitions and other disruptive events might serve as suitable moments to narrow race and gender gaps effectively and at relatively low cost. Thus, despite the disruption and uncertainty during these periods, managers should see disruptive events as prime opportunities to make positive changes.

Saturday, August 15, 2020

Disrupting the System Constructively: Testing the Effectiveness of Nonnormative Nonviolent Collective Action

Shuman, E. (2020, June 21).
https://doi.org/10.31234/osf.io/rvgup

Abstract

Collective action research tends to focus on motivations of the disadvantaged group, rather than on which tactics are effective at driving the advantaged group to make concessions to the disadvantaged. We focused on the potential of nonnormative nonviolent action as a tactic to generate support for concessions among advantaged group members who are resistant to social change. We propose that this tactic, relative to normative nonviolent and to violent action, is particularly effective because it reflects constructive disruption: a delicate balance between disruption (which can put pressure on the advantaged group to respond), and perceived constructive intentions (which can help ensure that the response to action is a conciliatory one). We test these hypotheses across four contexts (total N = 3650). Studies 1-3 demonstrate that nonnormative nonviolent action (compared to inaction, normative nonviolent action, and violent action) is uniquely effective at increasing support for concessions to the disadvantaged among resistant advantaged group members (compared to advantaged group members more open to social change). Study 3 shows that constructive disruption mediates this effect. Study 4 shows that perceiving a real-world ongoing protest as constructively disruptive predicts support for the disadvantaged, while Study 5 examines these processes longitudinally over 2 months in the context of an ongoing social movement. Taken together, we show that nonnormative nonviolent action can be an effective tactic for generating support for concessions to the disadvantaged among those who are most resistant because it generates constructive disruption.

From the General Discussion

Based on this research, which collective action tactic should disadvantaged groups choose to advance their status? While a simple reading of these findings might suggest that nonnormative nonviolent action is the “most effective” form of action, a closer reading of these findings and other research (Saguy & Szekeres, 2018; Teixeira et al., 2020; Thomas & Louis, 2014) would suggest that what type of action is most effective depends on the goal. We demonstrate that nonnormative nonviolent action is effective at generating support, among those who were more resistant, for concessions that would advance the protest's policy goals. On the other hand, other prior research has found that normative nonviolent action was more effective at turning sympathizers into active supporters (Teixeira et al., 2020; Thomas & Louis, 2014). Thus, which action tactic will be most useful to the disadvantaged may depend on the goal: If they are facing resistance from the advantaged blocking the achievement of their goals, nonnormative nonviolent action may be more effective. However, if the disadvantaged are seeking to build a movement that includes members of the advantaged group, normative nonviolent action will likely be more effective. The question is thus not which tactic is “most effective”, but which tactic is most effective to achieve which goal for what audience.

Monday, September 2, 2019

The Robotic Disruption of Morality

John Danaher
Philosophical Disquisitions
Originally published August 2, 2019

Here is an excerpt:

2. The Robotic Disruption of Human Morality

From my perspective, the most interesting aspect of Tomasello’s theory is the importance he places on the second personal psychology (an idea he takes from the philosopher Stephen Darwall). In essence, what he is arguing is that all of human morality — particularly the institutional superstructure that reinforces it — is premised on how we understand those with whom we interact. It is because we see them as intentional agents, who experience and understand the world in much the same way as we do, that we start to sympathise with them and develop complex beliefs about what we owe each other. This, in turn, was made possible by the fact that humans rely so much on each other to get things done.

This raises the intriguing question: what happens if we no longer rely on each other to get things done? What if our primary collaborative and cooperative partners are machines and not our fellow human beings? Will this have some disruptive impact on our moral systems?

The answer to this depends on what these machines are or, more accurately, what we perceive them to be. Do we perceive them to be intentional agents just like other human beings or are they perceived as something else — something different from what we are used to? There are several possibilities worth considering. I like to think of these possibilities as being arranged along a spectrum that classifies robots/AIs according to how autonomous or tool-like they are perceived to be.

At one extreme end of the spectrum we have the perception of robots/AIs as tools, i.e. as essentially equivalent to hammers and wheelbarrows. If we perceive them to be tools, then the disruption to human morality is minimal, perhaps non-existent. After all, if they are tools then they are not really our collaborative partners; they are just things we use. Human actors remain in control and they are still our primary collaborative partners. We can sustain our second personal morality by focusing on the tool users and not the tools.

The blog post is here.

Friday, August 30, 2019

The Technology of Kindness—How social media can rebuild our empathy—and why it must.

Jamil Zaki
Scientific American
Originally posted August 6, 2019

Here is an excerpt:

Technology also builds new communities around kindness. Consider the paradox of rare illnesses such as cystic fibrosis or myasthenia gravis. Each affects fewer than one in 1,000 people but there are many such conditions, meaning there are many people who suffer in ways their friends and neighbors don’t understand. Millions have turned to online forums, such as Facebook groups or the site RareConnect. In 2011 Priya Nambisan, a health policy expert, surveyed about 800 members of online health forums. Users reported that these groups offer helpful tips and information but also described them as heartfelt communities, full of compassion and commiseration.

Sites such as Koko and 7cups go even further, allowing anyone to count on the kindness of strangers. These sites train users to provide empathetic social support and then unleash their goodwill on one another. Some express their struggles; others step in to provide support. Users find these platforms deeply soothing. In a 2015 survey, 7cups users described the kindness they received on the site as being as helpful as professional psychotherapy. Users on these sites also benefit from helping others. In a 2017 study, psychologist Bruce Doré and his colleagues assigned people to use either Koko or another Web site and tested their subsequent well-being. Koko users’ levels of depression dropped after spending time on the site, especially when they used it to support others.

The info is here.

Thursday, August 29, 2019

Why Businesses Need Ethics to Survive Disruption

Mathew Donald
HR Technologist
Originally posted July 29, 2019

Here is an excerpt:

Using Ethics as the Guideline

An alternative model for an organization in disruption may be to connect staff and their organization to societal values. Whilst these standards may not all be written down, staff will generally know right from wrong and live in harmony with the broad rules of society. People do not normally steal, drive on the wrong side of the road or take advantage of the poor. Whilst written laws may prevail and guide society, it is clear that most people follow unwritten societal values. People make decisions on moral grounds daily, each based on their beliefs, refraining from actions that may be frowned upon by their friends and neighbors.

Ethics may be a key ingredient to add to your organization in a disruptive environment, as it may guide your staff through new situations without the need for a written rule or government law. Ethics based on a sense of fair play, not taking undue advantage, not overusing power and control, and alignment with everyday societal values may address some of this heightened risk during disruption. Once the set of ethics is agreed upon and imbibed by the staff, it may be possible for them to review new transactions, new situations, and potential opportunities without necessarily needing written guidelines.

The info is here.

Tuesday, July 16, 2019

The possibility and risks of artificial general intelligence

Phil Torres
Bulletin of the Atomic Scientists 
Volume 75, 2019 - Issue 3: Special issue: The global competition for AI dominance

Abstract

This article offers a survey of why artificial general intelligence (AGI) could pose an unprecedented threat to human survival on Earth. If we fail to get the “control problem” right before the first AGI is created, the default outcome could be total human annihilation. It follows that since an AI arms race would almost certainly compromise safety precautions during the AGI research and development phase, an arms race could prove fatal not just to states but to the entire human species. In a phrase, an AI arms race would be profoundly foolish. It could compromise the entire future of humanity.

Here is part of the paper:

AGI arms races

An AGI arms race could be extremely dangerous, perhaps far more dangerous than any previous arms race, including the one that lasted from 1947 to 1991. The Cold War race was kept in check by the logic of mutually-assured destruction, whereby preemptive first strikes would be met with a retaliatory strike that would leave the first state as wounded as its rival. In an AGI arms race, however, if the AGI’s goal system is aligned with the interests of a particular state, the result could be a winner-take-all scenario.

The info is here.


Monday, April 8, 2019

Mark Zuckerberg And The Tech World Still Do Not Understand Ethics

Derek Lidow
Forbes.com
Originally posted March 11, 2018

Here is an excerpt:

Expectations for technology startups encourage expedient, not ethical, decision making. 

As people in the industry are fond of saying, the tech world moves at “lightspeed.” That includes the pace of innovation, the rise and fall of markets, the speed of customer adoption, the evolution of business models and the lifecycles of companies. Decisions must be made quickly and leaders too often choose the most expedient path regardless of whether it is safe, legal or ethical.

This “move fast and break things” ethos is embodied in practices like working toward a minimum viable product (MVP), helping to establish a bias toward cutting corners. In addition, many founders look for CFOs who are “tech trained”—that is, people accustomed to a world where time and money wait for no one—as opposed to a seasoned financial officer with good accounting chops and a moral compass.

The host of scandals at Zenefits, a cloud-based provider of employee-benefits software to small businesses and once one of the most promising of Silicon Valley startups, had its origins in the shortcuts the company took in order to meet unreasonably high expectations for growth. The founder apparently created software that helped employees cheat on California’s online broker license course. As the company expanded rapidly, it began hiring people with little experience in the highly regulated health insurance industry. As the company moved from small businesses to larger businesses, the strain on its software increased. Instead of developing appropriate software, the company hired more people to manually take up the slack where the existing software failed. When the founder was asked by an interviewer before the scandals why he was so intent on expanding so rapidly, he replied, “Slowing down doesn’t feel like something I want to do.”

The info is here.

Friday, October 9, 2015

'Disruptive' doctors rattle nurses, increase safety risks

Jayne O'Donnell and Laura Ungar
USA Today
Originally published September 20, 2015

Here are two excerpts:

Disruptive behavior leads to increased medication errors, more infections and other bad patient outcomes — partly because staff members are often afraid to speak up in the face of bullying by a physician, Wyatt says. That "hidden code of silence" keeps many incidents from being reported or adequately addressed, says physician Alan Rosenstein, an expert in disruptive behavior.

(cut)

Most experts estimate that up to 5% of physicians exhibit disruptive behavior, although fear of retaliation and other factors make it difficult to determine the extent of the problem. A 2008 survey of nurses and doctors at more than 100 hospitals showed that 77% of respondents said they witnessed physicians engaging in disruptive behavior, which often meant the verbal abuse of another staff member. Sixty-five percent said they saw nurses exhibit such behavior.

Most said such actions raise the risk of errors and deaths.

About two-thirds of the most serious medical incidents — those involving death or serious physical or psychological injury — can be traced back to communication errors, according to a health care accrediting organization called the Joint Commission. Getting nurses and other medical assistants rattled during surgery can be a big safety risk, Bartholomew says.

The entire article is here.