Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Ethics Codes. Show all posts

Saturday, March 4, 2023

Divide and Rule? Why Ethical Proliferation is not so Wrong for Technology Ethics.

Llorca Albareda, J., Rueda, J.
Philos. Technol. 36, 10 (2023).
https://doi.org/10.1007/s13347-023-00609-8

Abstract

Although the map of technology ethics is expanding, the growing subdomains within it may raise misgivings. In a recent and very interesting article, Sætra and Danaher have argued that the current dynamic of sub-specialization is harmful to the ethics of technology. In this commentary, we offer three reasons to diminish their concern about ethical proliferation. We argue first that the problem of demarcation is weakened if we attend to other sub-disciplines of technology ethics not mentioned by these authors. We claim secondly that the logic of sub-specialization is less problematic if one adopts mixed models (combining internalist and externalist approaches) in applied ethics. We finally reject the idea that clarity and distinction are necessary conditions for defining sub-fields within the ethics of technology, defending instead the porosity and constructive nature of ethical disciplines.

Conclusion

Sætra and Danaher have initiated a necessary discussion about the increasing proliferation of neighboring sub-disciplines in technology ethics. Although we do not share their concern, we believe that this debate should continue in the future. Just as some subfields have recently been consolidated, others may do the same in the coming decades. The possible emergence of novel domain-specific technology ethics (say Virtual Reality Ethics) suggests that future proposals will point to as yet unknown positive and negative aspects of this ethical proliferation. In part, the creation of new sub-disciplines will depend on the increasing social prominence of other emerging and future technologies. The map of technology ethics thus includes uncharted waters and new subdomains to discover. This makes ethics of technology a fascinatingly lively and constantly evolving field of knowledge.

Tuesday, August 31, 2021

What Causes Unethical Behavior? A Meta-Analysis to Set an Agenda for Public Administration Research

Nicola Belle & Paola Cantarelli
(2017)
Public Administration Review,
Vol. 77, Iss. 3, pp. 327–339

Abstract

This article uses meta-analysis to synthesize 137 experiments in 73 articles on the causes of unethical behavior. Results show that exposure to in-group members who misbehave or to others who benefit from unethical actions, greed, egocentrism, self-justification, exposure to incremental dishonesty, loss aversion, challenging performance goals, or time pressure increase unethical behavior. In contrast, monitoring of employees, moral reminders, and individuals’ willingness to maintain a positive self-view decrease unethical conduct. Findings on the effect of self-control depletion on unethical behavior are mixed. Results also present subgroup analyses and several measures of study heterogeneity and likelihood of publication bias. The implications are of interest to both scholars and practitioners. The article concludes by discussing which of the factors analyzed should gain prominence in public administration research and uncovering several unexplored causes of unethical behavior.

From the Discussion

Among the factors that our meta-analyses identified as determinants of unethical behavior, the following may be elevated to prominence for public administration research and practice. First, results from the meta-analyses on social influences suggest that being exposed to corrupted colleagues may enhance the likelihood that one engages in unethical conduct. These findings are particularly relevant because "[c]orruption in the public sector hampers the efficiency of public services, undermines confidence in public institutions and increases the cost of public transactions" (OECD 2015). Moreover, corruption "may distort government's public resource allocations" (Liu and Mikesell 2014, 346).

Thursday, October 17, 2019

AI ethics and the limits of code(s)

Geoff Mulgan
nesta.org.uk
Originally published September 16, 2019

Here is an excerpt:

1. Ethics involve context and interpretation - not just deduction from codes.

Too much writing about AI ethics uses a misleading model of what ethics means in practice. It assumes that ethics can be distilled into principles from which conclusions can then be deduced, like a code. The last few years have brought a glut of lists of principles (including some produced by colleagues at Nesta). Various overviews have been attempted in recent years. A recent AI Ethics Guidelines Global Inventory collects over 80 different ethical frameworks. There’s nothing wrong with any of them and all are perfectly sensible and reasonable. But this isn’t how most ethical reasoning happens. The lists assume that ethics is largely deductive, when in fact it is interpretive and context specific, as is wisdom. One basic reason is that the principles often point in opposite directions - for example, autonomy, justice and transparency. Indeed, this is also the lesson of medical ethics over many decades. Intense conversation about specific examples, working through difficult ambiguities and contradictions, counts for a lot more than generic principles.

The info is here.

Monday, September 23, 2019

Three things digital ethics can learn from medical ethics

Carissa Véliz
Nature Electronics 2:316-318 (2019)

Here is an excerpt:

Similarly, technological decisions are not only about facts (for example, about what is more efficient), but also about the kind of life we want and the kind of society we strive to build. The beginning of the digital age has been plagued by impositions, with technology companies often including a disclaimer in their terms and conditions that "they can unilaterally change their terms of service agreement without any notice of changes to the users". Changes towards more respect for autonomy, however, can already be seen. With the implementation of the GDPR in Europe, for instance, tech companies are being urged to accept that people may prefer services that are less efficient or possess less functionality if that means they get to keep their privacy.

One of the ways in which technology has failed to respect autonomy is through the use of persuasive technologies. Digital technologies that are designed to chronically distract us not only jeopardize our attention, but also our will, both individually and collectively. Technologies that constantly hijack our attention threaten the resources we need to exercise our autonomy.  If one were to ask people about their goals in life, most people would likely mention things such as “spending more time with family” — not many people would suggest “spending more time on Facebook”.  Yet most people do not accomplish their goals — we get distracted.

The info is here.

Monday, August 12, 2019

Why it now pays for businesses to put ethics before economics

John Drummond
The National
Originally published July 14, 2019

Here is an excerpt:

All major companies today have an ethics code or a statement of business principles. I know this because at one time my company designed such codes for many FTSE companies. And all of these codes enshrine a commitment to moral standards. And these standards are often higher than those required by law.

When the boards of companies agree to these principles they largely do so because they believe in them – at the time. However, time moves on. People move on. The business changes. Along the way, company people forget.

So how can you tell if a business still believes in its stated principles? Actually, it is very simple. When an ethical problem, such as Mossmorran, happens, look to see who turns up to answer concerns. If it is a public relations man or woman, the company has lost the plot. By contrast, if it is the executive who runs the business, then the company is likely still in close touch with its ethical standards.

Economics and ethics can be seen as a spectrum. Ethics is at one side of the spectrum and economics at the other. Few organisations, or individuals for that matter, can operate on purely ethical lines alone, and few operate on solely economic considerations. Most organisations can be placed somewhere along this spectrum.

So, if a business uses public relations to shield top management from a problem, it occupies a position closer to economics than to ethics. On the other hand, where corporate executives face their critics directly, then the company would be located nearer to ethics.

The info is here.

Wednesday, September 20, 2017

Companies should treat cybersecurity as a matter of ethics

Thomas Lee
The San Francisco Chronicle
Originally posted September 2, 2017

Here is an excerpt:

An ethical code will force companies to rethink how they approach research and development. Instead of making stuff first and then worrying about data security later, companies will start from the premise that they need to protect consumer privacy before they start designing new products and services, Harkins said.

There is precedent for this. Many professional organizations like the American Medical Association and American Bar Association require members to follow a code of ethics. For example, doctors must pledge above all else not to harm a patient.

A code of ethics for cybersecurity will no doubt slow the pace of innovation, said Maurice Schweitzer, a professor of operations, information and decisions at the University of Pennsylvania’s Wharton School.

Ultimately, though, following such a code could boost companies’ reputations, Schweitzer said. Given the increasing number and severity of hacks, consumers will pay a premium for companies dedicated to security and privacy from the get-go, he said.

In any case, what’s wrong with taking a pause so we can catch our breath? The ethical quandaries technology poses to mankind are only going to get more complex as we increasingly outsource our lives to thinking machines.

That’s why a code of ethics is so important. Technology may come and go, but right and wrong never changes.

The article is here.

Wednesday, August 2, 2017

Ships in the Rising Sea? Changes Over Time in Psychologists’ Ethical Beliefs and Behaviors

Rebecca A. Schwartz-Mette & David S. Shen-Miller
Ethics & Behavior 

Abstract

Beliefs about the importance of ethical behavior to competent practice have prompted major shifts in psychology ethics over time. Yet few studies examine ethical beliefs and behavior after training, and most comprehensive research is now 30 years old. As such, it is unclear whether shifts in the field have resulted in general improvements in ethical practice: Are we psychologists “ships in the rising sea,” lifted by changes in ethical codes and training over time? Participants (N = 325) completed a survey of ethical beliefs and behaviors (Pope, Tabachnick, & Keith-Spiegel, 1987). Analyses examined group differences, consistency of frequency and ethicality ratings, and comparisons with past data. More than half of behaviors were rated as less ethical and occurring less frequently than in 1987, with early career psychologists generally reporting less ethically questionable behavior. Recommendations for enhancing ethics education are discussed.

The article is here.

Monday, June 26, 2017

What’s the Point of Professional Ethical Codes?

Iain Brassington
BMJ Blogs
June 13, 2017

Here is an excerpt:

They can’t be meant as a particularly useful tool for solving deep moral dilemmas: they’re much too blunt for that, often presuppose too much, and tend to bend to suit the law.  To think that because the relevant professional code enjoins x it follows that x is permissible or right smacks of a simple appeal to authority, and this flies in the face of what it is to be a moral agent in the first place.  But what a professional code of ethics may do is to provide a certain kind of Bolamesque legal defence: if your having done φ attracts a claim that it’s negligent or unreasonable or something like that, being able to point out that your professional body endorses φ-ing will help you out.  But professional ethics, and what counts as professional discipline, stretches way beyond that.  For example, instances of workplace bullying can be matters of great professional and ethical import, but it’s not at all obvious that the law should be involved.

There’s a range of reasons why someone’s behaviour might be of professional ethical concern.  Perhaps the most obvious is a concern for public protection.  If someone has been found to have behaved in a way that endangers third parties, then the profession may well want to intervene.

The blog post is here.

Saturday, November 12, 2016

Moral Dilemmas and Guilt

Patricia S. Greenspan
Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition
Vol. 43, No. 1 (Jan., 1983), pp. 117-125

In 'Moral dilemmas and ethical consistency', Ruth Marcus argues that moral dilemmas are 'real': there are cases where an agent ought to perform each of two incompatible actions. Thus, a doctor with two patients equally in need of his attention ought to save each, even though he cannot save both. By claiming that his dilemma is real, I take Marcus to be denying (rightly) that it is merely epistemic - a matter of uncertainty as to which patient to save. Rather, she wants to say, the moral code yields two opposing recommendations, both telling him what he ought to do. The code is not inconsistent, however, as long as its rules are all obeyable in some possible world; and it is not deficient as a guide to action, as long as it contains a second order principle, directing an agent to avoid situations of conflict. Where a dilemma does arise, though, the agent is guilty no matter what he does.

This last point seems implausible for the doctor's case; but here I shall consider a case which does fit Marcus's comments on guilt - if not all her views on the nature of moral dilemma. I think that she errs, first of all, in counting as a dilemma any case where there are some considerations favoring each of two incompatible actions, even if it is clear that one of them is right. For instance, in the case of withholding weapons from someone who has gone mad, it would be unreasonable for the agent to feel guilty about breaking his promise, since he has done exactly as he should. But secondly, even in Marcus's 'strong' cases, I do not think that dilemmas must be taken as yielding opposing all-things-considered ought-judgments, viewed as recommendations for action, rather than stopping with judgments of obligation, or reports of commitments. The latter do not imply 'can' (in the sense of physical possibility); and where they are jointly unsatisfiable, and supported by reasons of equal weight, I think we should say that the moral code yields no particular recommendations, rather than two which conflict.

The article is here.