Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Hierarchy.

Monday, September 25, 2023

The Young Conservatives Trying to Make Eugenics Respectable Again

Adam Serwer
The Atlantic
Originally posted September 15, 2023

Here are two excerpts:

One explanation for the resurgence of scientific racism—what the psychologist Andrew S. Winston defines as the use of data to promote the idea of an “enduring racial hierarchy”—is that some very rich people are underwriting it. Mathias notes that “rich benefactors, some of whose identities are unknown, have funneled hundreds of thousands of dollars into a think tank run by Hanania.” As the biological anthropologist Jonathan Marks tells the science reporter Angela Saini in her book Superior, “There are powerful forces on the right that fund research into studying human differences with the goal of establishing those differences as a basis of inequalities.”

There is no great mystery as to why eugenics has exerted such a magnetic attraction on the wealthy. From god emperors, through the divine right of kings, to social Darwinism, the rich have always sought an uncontestable explanation for why they have so much more money and power than everyone else. In a modern, relatively secular nation whose inequalities of race and class have been shaped by slavery and its legacies, the justifications tend toward the pseudoscience of an unalterable genetic aristocracy with white people at the top and Black people at the bottom.

“The lay concept of race does not correspond to the variation that exists in nature,” the geneticist Joseph L. Graves wrote in The Emperor’s New Clothes: Biological Theories of Race at the Millennium. “Instead, the American concept of race is a social construction, resulting from the unique political and cultural history of the United States.”

Because race is a social reality, genuine disparities among ethnic groups persist in measures such as education and wealth. Contemporary believers in racial pseudoscience insist these disparities must necessarily have a genetic explanation, one that happens to correspond to shifting folk categories of race solidified in the 18th century to justify colonialism and enslavement. They point to the external effects of things like war, poverty, public policy, and discrimination and present them as caused by genetics. For people who have internalized the logic of race, the argument may seem intuitive. But it is just astrology for racists.


Race is a sociopolitical category, not a biological one. There is no genetic support for the idea that humans are divided into distinct races with immutable traits shared by others who have the same skin color. Although qualified geneticists have debunked the shoddy arguments of race scientists over and over, the latter maintain their relevance in part by casting substantive objections to their assumptions, methods, and conclusions as liberal censorship. There are few more foolproof ways to get Trump-era conservatives to believe falsehoods than to insist that liberals are suppressing them. Race scientists also understand that most people can evaluate neither the pseudoscience they offer as proof of racial differences nor the actual science that refutes it, and will default to their political sympathies.

Three political developments helped renew this pseudoscience’s appeal. The first was the election of Barack Obama, an emotional blow to those adhering to the concept of racial hierarchy from which they have yet to recover. Then came the rise of Bernie Sanders, whose left-wing populism blamed the greed of the ultra-wealthy for the economic struggles of both the American working class and everyone in between. Both men—one a symbol of racial equality, the other of economic justice—drew broad support within the increasingly liberal white-collar workforce from which the phrenologist billionaires of Big Tech draw their employees. The third was the election of Donald Trump, itself a reaction to Obama and an inspiration to those dreaming of a world where overt bigotry does not carry social consequences.

Here is my brief synopsis:

Young conservatives are often influenced by far-right ideologues who believe in the superiority of the white race and the need to improve the human gene pool. Serwer argues that the resurgence of interest in eugenics is part of a broader trend on the right towards embracing racist and white supremacist ideas. He also notes that the pseudoscience of race is being used to justify hierarchies and provide an enemy to rail against.

It is important to note that eugenics is a dangerous and discredited ideology. It has been used to justify forced sterilization, genocide, and other atrocities. The resurgence of interest in eugenics is a threat to all people, especially those who are already marginalized and disadvantaged.

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).


Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.
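
One way to picture the authors' positioning principle is as a lookup against an ordered hierarchy of domains. The levels and example issues below are inferred from this excerpt (e.g. privacy belonging to computer ethics, robot ethics covering only questions of embodiment), not taken from the paper's own figure, so treat this as an illustrative sketch:

```python
# Hypothetical sketch of a technology-ethics hierarchy in the spirit of the
# article. The helper places a question at the *most general* level that
# already covers it, which is the anti-proliferation principle the authors
# advocate. Levels and issue sets are invented for illustration.

HIERARCHY = [                  # ordered from most to least general
    "ethics",
    "technology ethics",
    "computer ethics",
    "AI ethics",
    "robot ethics",            # only questions arising when AI is embodied
]

ISSUES_COVERED = {
    "ethics": {"fairness", "harm"},
    "technology ethics": {"dual use"},
    "computer ethics": {"privacy", "surveillance"},
    "AI ethics": {"opacity"},
    "robot ethics": {"physical embodiment"},
}

def position(issue):
    """Return the most general level of the hierarchy that covers the issue."""
    for level in HIERARCHY:
        if issue in ISSUES_COVERED[level]:
            return level
    return None

# Privacy questions about AI systems land in computer ethics rather than
# spawning a new "AI ethics" silo:
print(position("privacy"))  # computer ethics
```

The design choice mirrors the paper's argument: only issues exclusive to a lower-level domain (like physical embodiment for robot ethics) justify analysis at that level.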

Friday, December 10, 2021

How social relationships shape moral wrongness judgments

Earp, B.D., McLoughlin, K.L., Monrad, J.T. et al. 
Nat Commun 12, 5776 (2021).


Judgments of whether an action is morally wrong depend on who is involved and the nature of their relationship. But how, when, and why social relationships shape moral judgments is not well understood. We provide evidence to address these questions, measuring cooperative expectations and moral wrongness judgments in the context of common social relationships such as romantic partners, housemates, and siblings. In a pre-registered study of 423 U.S. participants nationally representative for age, race, and gender, we show that people normatively expect different relationships to serve cooperative functions of care, hierarchy, reciprocity, and mating to varying degrees. In a second pre-registered study of 1,320 U.S. participants, these relationship-specific cooperative expectations (i.e., relational norms) enable highly precise out-of-sample predictions about the perceived moral wrongness of actions in the context of particular relationships. In this work, we show that this ‘relational norms’ model better predicts patterns of moral wrongness judgments across relationships than alternative models based on genetic relatedness, social closeness, or interdependence, demonstrating how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.
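
The core of the relational-norms model can be sketched in a few lines: the judged wrongness of violating a cooperative function scales with how strongly that function is normatively prescribed for the relationship. The norm values below are invented for illustration; they are not the authors' fitted parameters:

```python
# Illustrative sketch (not the authors' code or data) of the relational-norms
# idea: violating a function that a relationship strongly prescribes is judged
# more wrong than violating one it prescribes weakly. Numbers are made up.

NORMS = {  # relationship -> prescribed strength of each cooperative function (0..1)
    "romantic_partner": {"care": 0.9, "hierarchy": 0.1, "reciprocity": 0.5, "mating": 0.9},
    "sibling":          {"care": 0.8, "hierarchy": 0.3, "reciprocity": 0.6, "mating": 0.0},
    "housemate":        {"care": 0.4, "hierarchy": 0.2, "reciprocity": 0.8, "mating": 0.0},
}

def predicted_wrongness(relationship, violated_function):
    """Higher prescribed norm -> a violation of it is predicted to be judged more wrong."""
    return NORMS[relationship][violated_function]

# Neglecting a partner's needs (a care violation) is predicted to be judged
# worse than the same neglect between housemates:
print(predicted_wrongness("romantic_partner", "care") >
      predicted_wrongness("housemate", "care"))  # True
```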

From the General Discussion

From a theoretical perspective, one aspect of our current account that requires further attention is the reciprocity function. In contrast with the other three functions considered, relationship-specific prescriptions for reciprocity did not significantly predict moral judgments for reciprocity violations. Why might this be so? One possibility is that the model we tested did not distinguish between two different types of reciprocity. In some relationships, such as those between strangers, acquaintances, or individuals doing business with one another, each party tracks the specific benefits contributed to, and received from, the other. In these relationships, reciprocity thus takes a tit-for-tat form in which benefits are offered and accepted on a highly contingent basis. This type of reciprocity is transactional, in that resources are provided, not in response to a real or perceived need on the part of the other, but rather, in response to the past or expected future provision of a similarly valued resource from the cooperation partner. In this, it relies on an explicit accounting of who owes what to whom, and is thus characteristic of so-called “exchange” relationships.

In other relationships, by contrast, such as those between friends, family members, or romantic partners – so-called “communal” relationships – reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work on moral judgments in relational context should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.

Wednesday, November 11, 2020

How social relationships shape moral judgment

Earp, B. D., et al. (2020, September 18).


Our judgments of whether an action is morally wrong depend on who is involved and their relationship to one another. But how, when, and why do social relationships shape such judgments? Here we provide new theory and evidence to address this question. In a pre-registered study of U.S. participants (n = 423, nationally representative for age, race and gender), we show that particular social relationships (like those between romantic partners, housemates, or siblings) are normatively expected to serve distinct cooperative functions – including care, reciprocity, hierarchy, and mating – to different degrees. In a second pre-registered study (n = 1,320) we show that these relationship-specific norms, in turn, influence the severity of moral judgments concerning the wrongness of actions that violate cooperative expectations. These data provide evidence for a unifying theory of relational morality that makes highly precise out-of-sample predictions about specific patterns of moral judgments across relationships. Our findings show how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.

From the Discussion

In other relationships, by contrast, such as those between friends, family members, or romantic partners – so-called “communal” relationships – reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.

Wednesday, October 2, 2019

Evolutionary Thinking Can Help Companies Foster More Ethical Culture

Brian Gallagher
Originally published August 20, 2019

Here are two excerpts:

How might human beings be mismatched to the modern business environment?

Many problems of the modern workplace have not been viewed through a mismatch lens, so at this point these are still hypotheses. But let’s take the role of managers, for example. Humans have a strong aversion to dominance, a result of the egalitarian nature that served us well in the small-scale societies in which we evolved. One of the biggest causes of job dissatisfaction, people report, is the interaction with their line manager. Many people find this relationship extremely stressful: being dominated by someone who controls them and gives them orders infringes on their sense of autonomy. Or take the physical work environment, which looks nothing like our ancestral environment—our ancestors were always outside, socializing as they worked and getting plenty of physical exercise while they hunted and gathered in tight social groups. Now we are forced to spend much of our daytime in tall buildings with small offices, surrounded by genetic strangers and no natural scenes to speak of.


What can business leaders learn from evolutionary psychology about how to structure relationships between bosses and employees?

One of the most important lessons from our research is that leaders are effective to the extent that they enable their teams to be effective. This sounds obvious, but leadership is really about the team and the followers. Individuals gladly follow leaders who they respect because of their skills and competence, and they have a hard time, by contrast, following a leader who is dominant and threatening. Yet human nature is also such that if you give someone power, they will use it—there is a fundamental leader-follower conflict. To keep managers from following the easy route of threat and dominance, every healthy organization should have mechanisms in place to curtail their power. In small-scale societies, as the anthropological literature makes clear, leaders are kept in check because they can only exercise influence in their domain of expertise, nothing else. What’s more, there should be room to gossip about and ridicule leaders, and leaders should be regularly replaced in order to prevent them building up a power base. Why not have feedback sessions where employees can provide regular inputs in the assessment of their bosses? Why not include workers in hiring board members? Many public and private organizations in Europe are currently experimenting with these power-leveling mechanisms.


Wednesday, July 17, 2019

Responsibility for Killer Robots

Johannes Himmelreich
Ethic Theory Moral Prac (2019).


Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.

Wednesday, June 20, 2018

Can a machine be ethical? Why teaching AI ethics is a minefield.

Scotty Hendricks
Originally published May 31, 2018

Here is an excerpt:

Dr. Moor gives the example of Isaac Asimov’s Three Laws of Robotics. For those who need a refresher, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The rules are hierarchical, and the robots in Asimov’s books are all obligated to follow them.

Dr. Moor suggests that the problems with these rules are obvious. The first rule is so general that an artificial intelligence following it “might be obliged by the First Law to roam the world attempting to prevent harm from befalling human beings” and would therefore be useless for its original function!

Such problems can be common in deontological systems, where following good rules can lead to funny results. Asimov himself wrote several stories about potential problems with the laws. Attempts to solve this issue abound, but the challenge of making enough rules to cover all possibilities remains. 
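
The strict priority ordering of the laws can be sketched as a rule filter in which each law is consulted only if every higher law is silent. This is a toy illustration (the predicates are hypothetical inputs; a real robot would have to perceive them, which is exactly where the objection above bites):

```python
# Toy sketch (not from the article) of Asimov's laws as a strict priority
# ordering: each law applies only if no higher-priority law objects.

def evaluate(action):
    """Return True if the action is permitted under the three-law hierarchy.

    `action` is a dict of hypothetical predicates about the action.
    """
    # First Law: a robot may not injure a human or, through inaction,
    # allow a human to come to harm.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: obey human orders (conflicts with the First Law were
    # already screened out above).
    if action["ordered_by_human"]:
        return True
    # Third Law: protect own existence, subordinate to the laws above.
    if action["self_destructive"]:
        return False
    return True

# The "roaming do-gooder" problem: even doing nothing can violate the
# First Law, because inaction that allows harm is forbidden.
idle = {"harms_human": False, "allows_harm_by_inaction": True,
        "ordered_by_human": False, "self_destructive": False}
print(evaluate(idle))  # False — inaction itself is forbidden
```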

On the other hand, a machine could be programmed to stick to a utilitarian calculus when facing an ethical problem. This would be simple to do, as the computer would only have to be given a variable and told to make choices that maximize it. While human happiness is a common choice, wealth, well-being, or security are also possibilities.
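
That utilitarian approach reduces to a one-line maximization: score each option on the chosen variable and pick the highest. The option names and happiness values below are invented for illustration:

```python
# Minimal sketch of the utilitarian calculus described above: given estimates
# of a single target variable for each option, choose the option that
# maximizes it. All numbers are illustrative, not real data.

def choose(options):
    """Pick the option with the highest expected value of the target variable."""
    return max(options, key=lambda o: o["expected_happiness"])

options = [
    {"name": "option_a", "expected_happiness": 0.4},
    {"name": "option_b", "expected_happiness": 0.9},
    {"name": "option_c", "expected_happiness": 0.7},
]
print(choose(options)["name"])  # option_b
```

Of course, the hard part is not the maximization but producing the happiness estimates in the first place, which is where the article's skepticism lies.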


Sunday, January 21, 2018

Cognitive Economics: How Self-Organization and Collective Intelligence Works

Geoff Mulgan
Originally published December 22, 2017

Here are two excerpts:

But self-organization is not an altogether-coherent concept and has often turned out to be misleading as a guide to collective intelligence. It obscures the work involved in organization and in particular the hard work involved in high-dimensional choices. If you look in detail at any real example—from the family camping trip to the operation of the Internet, open-source software to everyday markets, these are only self-organizing if you look from far away. Look more closely and different patterns emerge. You quickly find some key shapers—like the designers of underlying protocols, or the people setting the rules for trading. There are certainly some patterns of emergence. Many ideas may be tried and tested before only a few successful ones survive and spread. To put it in the terms of network science, the most useful links survive and are reinforced; the less useful ones wither. The community decides collectively which ones are useful. Yet on closer inspection, there turn out to be concentrations of power and influence even in the most decentralized communities, and when there’s a crisis, networks tend to create temporary hierarchies—or at least the successful ones do—to speed up decision making. As I will show, almost all lasting examples of social coordination combine some elements of hierarchy, solidarity, and individualism.


Here we see a more common pattern. The more dimensional any choice is, the more work is needed to think it through. If it is cognitively multidimensional, we may need many people and more disciplines to help us toward a viable solution. If it is socially dimensional, then there is no avoiding a good deal of talk, debate, and argument on the way to a solution that will be supported. And if the choice involves long feedback loops, where results come long after actions have been taken, there is the hard labor of observing what actually happens and distilling conclusions. The more dimensional the choice in these senses, the greater the investment of time and cognitive energy needed to make successful decisions.

Again, it is possible to overshoot: to analyze a problem too much or from too many angles, bring too many people into the conversation, or wait too long for perfect data and feedback rather than relying on rough-and-ready quicker proxies. All organizations struggle to find a good enough balance between their allocation of cognitive resources and the pressures of the environment they’re in. But the long-term trend of more complex societies is to require ever more mediation and intellectual labor of this kind.
