Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, May 21, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., Rüther, M.
AI Ethics (2023).
https://doi.org/10.1007/s43681-023-00273-w

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

From Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea, namely to the idea that we do not have to alter our human lifeform in an extensive way and also can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, which include the following three that we cannot deal with extensively but at least want to comment on briefly.

First, we assumed that certain professional fields, especially in the meaning-conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently no genuine empathy either. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to the automation of superefficient systems.

Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life, according to which meaning-conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argumentation would have to figure out whether this triad is really exhaustive and, if so, due to which underlying, more general principle.

Third, the receptive side of finding meaning in the realm of the true and the beautiful was emphasized and opposed to the active striving towards meaningful aims. Here, we have to clarify more precisely what axiological status reception has in contrast to active production—whether it is possibly meaning-conferring to a comparable extent or whether it is actually just a less meaningful form. This is particularly important to be able to better assess the appeal of our proposal, which depends heavily on the attractiveness of the vita contemplativa.

Saturday, May 13, 2023

Doctors are drowning in paperwork. Some companies claim AI can help

Geoff Brumfiel
NPR.org - Health Shots
Originally posted 5 APR 23

Here are two excerpts:

But Paul kept getting pinged by younger doctors and medical students. They were using ChatGPT, and saying it was pretty good at answering clinical questions. Then the users of his software started asking about it.

In general, doctors should not be using ChatGPT by itself to practice medicine, warns Marc Succi, a doctor at Massachusetts General Hospital who has conducted evaluations of how the chatbot performs at diagnosing patients. When presented with hypothetical cases, he says, ChatGPT could produce a correct diagnosis at close to the level of a third- or fourth-year medical student. Still, he adds, the program can also hallucinate findings and fabricate sources.

"I would express considerable caution using this in a clinical scenario for any reason, at the current stage," he says.

But Paul believes the underlying technology can be turned into a powerful engine for medicine. Paul and his colleagues have created a program called "Glass AI" based on ChatGPT. A doctor tells the Glass AI chatbot about a patient, and it can suggest a list of possible diagnoses and a treatment plan. Rather than working from the raw ChatGPT information base, the Glass AI system uses a virtual medical textbook written by humans as its main source of facts – something Paul says makes the system safer and more reliable.
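The article does not describe Glass AI's internals, but the general pattern Paul describes, retrieving passages from a curated, human-written reference and instructing the model to answer only from them, can be sketched roughly as below. The textbook entries, retrieval heuristic, and the call_llm helper are illustrative assumptions, not the company's implementation.

    # Minimal sketch (assumed, not Glass AI's code): ground an LLM's answer in a
    # curated, clinician-written reference instead of the model's raw knowledge.
    import re
    from collections import Counter

    TEXTBOOK = {  # toy stand-in for a human-written medical reference
        "community-acquired pneumonia": "Typical findings are fever, productive cough, ...",
        "acute appendicitis": "Periumbilical pain migrating to the right lower quadrant, ...",
    }

    def retrieve(query, k=2):
        """Return the k reference entries sharing the most words with the query."""
        q_words = Counter(re.findall(r"[a-z]+", query.lower()))
        def overlap(item):
            title, body = item
            words = Counter(re.findall(r"[a-z]+", (title + " " + body).lower()))
            return sum((q_words & words).values())
        ranked = sorted(TEXTBOOK.items(), key=overlap, reverse=True)
        return [f"{title}: {body}" for title, body in ranked[:k]]

    def grounded_prompt(case_description):
        """Build a prompt that tells the model to answer only from the retrieved passages."""
        passages = "\n".join(retrieve(case_description))
        return (
            "Using ONLY the reference passages below, suggest possible diagnoses and a plan. "
            "If the passages are insufficient, say so.\n\n"
            f"Reference passages:\n{passages}\n\nCase: {case_description}"
        )

    # answer = call_llm(grounded_prompt("55-year-old with fever and productive cough"))  # hypothetical API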

(cut)

Nabla, which he co-founded, is now testing a system that can, in real time, listen to a conversation between a doctor and a patient and provide a summary of what the two said to one another. Doctors inform their patients in advance that the system is being used, and as a privacy measure, it doesn't actually record the conversation.

"It shows a report, and then the doctor will validate with one click, and 99% of the time it's right and it works," he says.

The summary can be uploaded to a hospital records system, saving the doctor valuable time.

Other companies are pursuing a similar approach. In late March, Nuance Communications, a subsidiary of Microsoft, announced that it would be rolling out its own AI service designed to streamline note-taking using the latest version of ChatGPT, GPT-4. The company says it will showcase its software later this month.

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 
https://doi.org/10.1007/s13347-023-00620-z

Abstract

In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.

(cut)

Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is prompted by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming in its own right, but also affects how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few of the questions that can shed light on the presupposed normative claims at work in the field. Here, further exploration at deeper levels could help us to see more clearly which things are important and which are not, and, finally, in which directions the field should develop.

Sunday, March 5, 2023

Four Recommendations for Ethical AI in Healthcare

Lindsey Jarrett
Center for Practical Bioethics

For several decades now, we have been having conversations about the impact that technology, from the voyage into space to the devices in our pockets, will have on society. The force with which technology alters our lives at times feels abrupt. It has us feeling excited one day and fearful the next.

If your experiences in life are not dependent on the use of technology – especially if your work still allows you to disconnect from the virtual world – it may feel like technology is working at a decent pace. However, many of us require some sort of technology to work, to communicate with others, to develop relationships, and to disseminate ideas into the world. Further, we also increasingly need technology to help us make decisions. These decisions vary in complexity from auto-correcting our messages to connecting to someone on a dating app, and without access to a piece of technology, it is increasingly challenging to rely on anything but technology.

Is the use of technology for decision making a problem in and of itself due to its entrenched use across our lives, or are there particular components and contexts that need attention? Your answer may depend on what you want to use it for, how you want others to use it to know you, and why the technology is needed over other tools. These considerations are widely discussed in the areas of criminal justice, finance, security, and hiring practices, and conversations are developing in other sectors as issues of inequity, injustice and power differentials begin to emerge.

Issues emerging in the healthcare sector are of particular interest to many, especially since the coronavirus pandemic. As these conversations unfold, people start to unpack the various dilemmas that exist within the intersection of technology and healthcare. Scholars have engaged in theoretical rhetoric to examine ethical implications, researchers have worked to evaluate the decision-making processes of data scientists who build clinical algorithms, and healthcare executives have tried to stay ahead of regulation that is looming over their hospital systems.

However, recommendations tend to focus exclusively on those involved with algorithm creation and offer little support to other stakeholders across the healthcare industry. While this guidance turns into practice across data science teams building algorithms, especially those building machine learning based tools, the Ethical AI Initiative sees opportunities to examine decisions that are made regarding these tools before they get to a data scientist’s queue and after they are ready for production. These opportunities are where systemic change can occur, and without that level of change, we will continue to build products to put on the shelf and more products to fill the shelf when those fail.

Healthcare is not unique in facing these types of challenges, and I will outline a few recommendations on how an adapted, augmented system of healthcare technology can operate, as the industry prepares for more forceful regulation of the use of machine learning-based tools in healthcare practice.

Friday, March 3, 2023

The Petrov dilemma: Moral responsibility in the age of ChatGPT

Julian Savulescu
NUS News
Originally published 13 FEB 23

Here is an excerpt:

3. Act ethically

What is to act ethically has been debated for thousands of years. But at its heart is a simple answer: add value to the lives of other people. For Lt-Col Petrov, doing nothing was the hardest choice: “All I had to do was to reach for the phone; to raise the direct line to our top commanders – but I couldn’t move. I felt like I was sitting on a hot frying pan.”

ChatGPT is an extremely powerful technology with enormous potential for human benefit. It can distill human knowledge and wisdom, quickly create factual and normative content, enable communication between speakers of different languages, and ultimately enhance or even surpass human creativity.

As with any powerful technology, there are significant risks. At present, there are numerous mistakes and false claims, but these presumably will be ironed out. Longer term, risks include threats to privacy; large-scale perpetuation of biases; plagiarism; displacement of human workers and creativity; and deception and malicious use.

Acting ethically requires realising the value of technology while ensuring the risks are reasonable. Institutional or structural changes may be necessary, for example, labelling and creation of watermarks to prevent plagiarism.

To be human is to make your own ethical decisions

One ethical question that confronts regulators and industry is: should a driverless car swerve and sacrifice an innocent bystander (pedestrian or cyclist) to save a greater number of lives in the vehicle? I asked ChatGPT whether it is right to kill one to save five. Here is the answer:

“Whether or not it is right to kill one person to save five people is a complex ethical question. Some people may argue that taking one life to save five is the more logical option, and is therefore justified. However, it is also important to consider the ethical principle of respect for autonomy.

“This principle states that all individuals have an inherent right to make autonomous and informed decisions about their lives, including their own death. Taking this into consideration, it may not be morally right to force someone to accept the sacrifice of their life in order to save the lives of five other people. Ultimately, the answer to this question is subjective and depends on one’s own ethical and moral beliefs.”

But as driverless cars are rolled out, we won’t have the luxury of remaining on the fence, or relying on drivers’ reflexes to get us out of a decision. We will have to make moral choices and give reasons, and apply them consistently to programming the vehicles. This requires deep – and human – ethical thinking and expertise.

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).
https://doi.org/10.1007/s13347-022-00591-7

Abstract

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

Thursday, January 26, 2023

The AI Ethicist's Dirty Hands Problem

H. S. Sætra, M. Coeckelbergh, & J. Danaher
Communications of the ACM, January 2023, 
Vol. 66 No. 1, Pages 39-41

Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoids reliance on Big Tech.

The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?

The AI ethicist’s dilemma emerges when an ethicist must consider how their success in communicating an identified challenge is associated with a high risk of decreasing the chances of successfully addressing the challenge. This dilemma occurs in situations in which the means to achieve one’s goals are seemingly best achieved by supporting that which one wishes to correct and/or practicing the opposite of that which one preaches.

(cut)

The Need for More than AI Ethics

Our analysis of the ethicist’s dilemma shows why close ties with Big Tech can be detrimental for the ethicist seeking remedies for AI-related problems. It is important for ethicists, and computer scientists in general, to be aware of their links to the sources of ethical challenges related to AI. One useful exercise would be to carefully examine what could happen if they attempted to challenge the actors with whom they are aligned. Such actions could include attempts to report unfortunate implications of the company’s activities internally, but also publicly, as Gebru did. Would such actions be met with active resistance, with inaction, or even straightforward sanctions? Such an exercise will reveal whether or not the ethicist feels free to openly and honestly express concerns about the technology with which they work. Such an exercise could be important, but as we have argued, these individuals are not necessarily positioned to achieve fundamental change in this system.

In response, we suggest the role of government is key to balancing the power the tech companies have through employment, funding, and their control of modern digital infrastructure. Some will rightly argue that political power is also dangerous. But so are the dangers of technology and unbridled innovation, and private corporations are central sources of these dangers. We therefore argue that private power must be effectively bridled by the power of government. This is not a new argument, and is in fact widely accepted.

Wednesday, October 19, 2022

Technology and moral change: the transformation of truth and trust

Danaher, J., Sætra, H.S. 
Ethics Inf Technol 24, 35 (2022).
https://doi.org/10.1007/s10676-022-09661-y

Abstract

Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely-related ones; increasing their perceived scarcity/abundance; and disrupting traditional value-gatekeepers. This has implications for how we study other, technologically-mediated, value changes.

(cut)

Conclusion: lessons learned

Having examined our two case studies, it remains to consider whether or not there are similarities in how technology affects trust and truth, and if there are general lessons to be learned here about how technology may impact values in the future.

The two values we have considered are structurally similar and interrelated. They are both intrinsically and instrumentally valuable. They are both epistemic and practical in nature: we value truth and trust (at least in part) because they give us access to knowledge and help us to resolve the decision problems we face on a daily basis. We also see, in both case studies, similar mechanisms of value change at work. The most interesting, to our minds, are the following:
  • Technology changes the costs associated with accessing certain values, making them less or more important as a result. Digital disinformation technology increases the cost of finding out the truth, but reduces the cost of finding and reinforcing a shared identity community; reliable AI and robotics gives us an (often cheaper and more efficient) substitute for trust in humans, while still giving us access to useful cognitive, emotional and physical assistance.
  • Technology makes it easier, or more attractive, to trade off or substitute some values against others. Digital disinformation technology allows us to obviate the need for finding out the truth and focus on other values instead; reliable machines allow us to substitute the value of reliability for the value of trust. This is a function of the plural nature of values, their scarcity, and the changing cost structure of values caused by technology.
  • Technology can make some values seem more scarce (rare, difficult to obtain), thereby increasing their perceived intrinsic value. Digital disinformation makes truth more elusive, thereby increasing its perceived value which, in turn, encourages some moral communities to increase their fixation on it; robots and AI make trust in humans less instrumentally necessary, thereby increasing the expressive value of trust in others.
  • Technology can disrupt power networks, thereby altering the social gatekeepers to value. To the extent that we still care about truth, digital disinformation increases the power of the epistemic elites that can help us to access the truth; trust-free or trust-alternative technologies can disrupt the power of traditional trusted third parties (professionals, experts etc.) and redistribute power onto technology or a technological elite.

Wednesday, August 24, 2022

Dual use of artificial-intelligence-powered drug discovery

Urbina, F., Lentzos, F., Invernizzi, C. et al. 
Nat Mach Intell 4, 189–191 (2022). 
https://doi.org/10.1038/s42256-022-00465-9

The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection —Spiez Laboratory— convenes the ‘convergence’ conference series set up by the Swiss government to identify developments in chemistry, biology and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions. Meeting every two years, the conferences bring together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories, to think through potential security implications and to consider how these implications can most effectively be managed internationally.  The meeting convenes for three days of discussion on the possibilities of harm, should the intent be there, from cutting-edge chemical and biological technologies.  Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.

Risk of misuse

The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting.  Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.

(cut)

Broader effects on society

There is a need for discussions across traditional boundaries and multiple disciplines to allow for a fresh look at AI for de novo design and related technologies from different perspectives and with a wide variety of mindsets. Here, we give some recommendations that we believe will reduce potential dual-use concerns for AI in drug discovery. Scientific conferences, such as those of the Society of Toxicology and the American Chemical Society, should actively foster a dialogue among experts from industry, academia and policy making on the implications of our computational tools.

Friday, July 8, 2022

AI bias can arise from annotation instructions

K. Wiggers & D. Coldewey
TechCrunch
Originally posted 8 MAY 22

Here is an excerpt:

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all birds in these photos”) along with several examples.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate and otherwise analyze or manipulate text. In studying the task instructions provided to annotators that worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
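To make the finding concrete, here is a small illustrative check (not the authors' actual analysis, and with invented toy data) of how one might measure what fraction of a dataset's annotations open with a phrase that also appears in the annotator instructions.

    # Illustrative sketch with made-up data: how often do annotations reuse an
    # opening phrase taken from the annotator instructions?
    def starts_with_any(text, phrases):
        t = text.lower().strip()
        return any(t.startswith(p) for p in phrases)

    instruction_phrases = [          # phrases assumed to appear in the written guidelines
        "what is the name",
        "who is the person",
    ]

    annotations = [                  # invented example questions
        "What is the name of the dog that chased the mailman?",
        "What is the name of Maria's brother?",
        "Where did the ceremony take place?",
        "Who is the person referred to as 'the captain'?",
    ]

    hits = sum(starts_with_any(a, instruction_phrases) for a in annotations)
    print(f"{hits}/{len(annotations)} annotations "
          f"({100 * hits / len(annotations):.0f}%) echo an instruction phrase")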

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of systems and that these systems often fail to generalize beyond instruction patterns.

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating the downstream impact.



Thursday, April 14, 2022

AI won’t steal your job, just make it meaningless

John Danaher
iainews.com
Originally published 18 MAR 22

New technologies are often said to be in danger of making humans redundant, replacing them with robots and AI, and making work disappear altogether. A crisis of identity and purpose might result from that, but Silicon Valley tycoons assure us that a universal basic income could at least take care of people’s material needs, leaving them with plenty of leisure time in which to forge new identities and find new sources of purpose.

This, however, paints an overly simplistic picture. What seems more likely to happen is that new technologies will not make humans redundant at a mass scale, but will change the nature of work, making it worse for many, and sapping the elements that give it some meaning and purpose. It’s the worst of both worlds: we’ll continue working, but our jobs will become increasingly meaningless. 

History has some lessons to teach us here. Technology has had a profound effect on work in the past, not just on the ways in which we carry out our day-to-day labour, but also on how we understand its value. Consider the humble plough. In its most basic form, it is a hand-operated tool, consisting of little more than a pointed stick that scratches a furrow through the soil. This helps a farmer to sow seeds but does little else. Starting in the middle ages, however, more complex, ‘heavy’ ploughs began to be used by farmers in Northern Europe. These heavy ploughs rotated and turned the earth, bringing nutrient rich soils to the surface, and radically altering the productivity of farming. Farming ceased being solely about subsistence. It started to be about generating wealth.

The argument about how the heavy plough transformed the nature of work was advanced by historian Lynn White Jr in his classic study Medieval Technology and Social Change. Writing in the idiom of the early 1960s, he argued that “No more fundamental change in the idea of man’s relation to the soil can be imagined: once man had been part of nature; now he became her exploiter.”

It is easy to trace a line – albeit one that takes a detour through Renaissance mercantilism and the Industrial revolution – from the development of the heavy plough to our modern conception of work. Although work is still an economic necessity for many people, it is not just that. It is something more. We don’t just do it to survive; we do it to thrive. Through our work we can buy into a certain lifestyle and affirm a certain identity. We can develop mastery and cultivate self-esteem; we make a contribution to our societies and a name for ourselves. 

Tuesday, March 29, 2022

Gene editing gets safer thanks to redesigned Cas9 protein

Science Daily
Originally posted 2 MAR 22

Summary:

Scientists have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer.


One of the grand challenges with using CRISPR-based gene editing on humans is that the molecular machinery sometimes makes changes to the wrong section of a host's genome, creating the possibility that an attempt to repair a genetic mutation in one spot in the genome could accidentally create a dangerous new mutation in another.

But now, scientists at The University of Texas at Austin have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer. The work is described in a paper published today in the journal Nature.

"This really could be a game changer in terms of a wider application of the CRISPR Cas systems in gene editing," said Kenneth Johnson, a professor of molecular biosciences and co-senior author of the study with David Taylor, an assistant professor of molecular biosciences. The paper's co-first authors are postdoctoral fellows Jack Bravo and Mu-Sen Liu.


Journal Reference:

Jack P. K. Bravo, Mu-Sen Liu, et al. Structural basis for mismatch surveillance by CRISPR–Cas9. Nature, 2022; DOI: 10.1038/s41586-022-04470-1

Monday, January 31, 2022

The future of work: freedom, justice and capital in the age of artificial intelligence

F. S. de Sio, T. Almeida & J. van den Hoven
(2021) Critical Review of International Social and Political Philosophy
DOI: 10.1080/13698230.2021.2008204

Abstract

Artificial Intelligence (AI) is predicted to have a deep impact on the future of work and employment. The paper outlines a normative framework to understand and protect human freedom and justice in this transition. The proposed framework is based on four main ideas: going beyond the idea of a Basic Income to compensate the losers in the transition towards AI-driven work, towards a Responsible Innovation approach, in which the development of AI technologies is governed by an inclusive and deliberate societal judgment; going beyond a philosophical conceptualisation of social justice only focused on the distribution of ‘primary goods’, towards one focused on the different goals, values, and virtues of various social practices (Walzer’s ‘spheres of justice’) and the different individual capabilities of persons (Sen’s ‘capabilities’); going beyond a classical understanding of capital, towards one explicitly including mental capacities as a source of value for AI-driven activities. In an effort to promote an interdisciplinary approach, the paper combines political and economic theories of freedom, justice and capital with recent approaches in applied ethics of technology, and starts applying its normative framework to some concrete examples of AI-based systems: healthcare robotics, ‘citizen science’, social media and platform economy.

From the Conclusion

Whether or not it will create a net job loss (aka technological unemployment), Artificial Intelligence and digital technologies will change the nature of work, and will have a deep impact on people’s work lives. New political action is needed to govern this transition. In this paper we have claimed that new philosophical concepts are also needed, if the transition is to be governed responsibly and in the interest of everybody. The paper has outlined a general normative framework to make sense of, and address, the issue of human freedom and justice in the age of AI at work. The framework is based on four ideas. First, in general, freedom and justice cannot be achieved by only protecting existing jobs as a goal in itself, inviting persons to find ways to remain relevant in a new machine-driven world, or offering financial compensation to those who are (permanently) left unemployed, for instance, via a Universal Basic Income. We should rather prevent technological unemployment and the worsening of working conditions from happening in the first place, through a Responsible Innovation approach to technology, where freedom and justice are built into the technical and institutional structures of the work of the future. Second, more in particular, we have argued, freedom and justice may be best promoted by a politics and an economics of technology informed by the recognition of different virtues and values as constitutive of different activities, following a Walzerian (‘spheres of justice’) approach to technological and institutional design, possibly integrated by a virtue ethics component.

Sunday, November 14, 2021

A brain implant that zaps away negative thoughts

Nicole Karlis
Salon.com
Originally published 14 OCT 21

Here is an excerpt:

Still, the prospect of clinicians manipulating and redirecting one's thoughts, using electricity, raises potential ethical conundrums for researchers — and philosophical conundrums for patients. 

"A person implanted with a closed-loop system to target their depressive episodes could find themselves unable to experience some depressive phenomenology when it is perfectly normal to experience this outcome, such as a funeral," said Frederic Gilbert, PhD, Senior Lecturer in Ethics at the University of Tasmania, in an email to Salon. "A system programmed to administer a therapeutic response when detecting a specific biomarker will not capture faithfully the appropriateness of some context; automated invasive systems implanted in the brain might constantly step up in your decision-making . . . as a result, it might compromise you as a freely thinking agent."

Gilbert added there is the potential for misuse — and that raises novel moral questions. 

"There are potential degrees of misuse of some of the neuro-data pumping out of the brain (some believe these neuro-data may be our hidden and secretive thoughts)," Gilbert said. "The possibility of biomarking neuronal activities with AI introduces the plausibility to identify a large range of future applications (e.g. predicting aggressive outburst, addictive impulse, etc). It raises questions about the moral, legal and medical obligations to prevent foreseeable and harmful behaviour."

For these reasons, Gilbert added, it's important "at all costs" to "keep human control in the loop," in both activation and control of one's own neuro-data. 

Tuesday, November 2, 2021

Our evolved intuitions about privacy aren’t made for this era

Joe Green & Azim Shariff
psyche.co
Originally published September 16, 2021

Here is an excerpt:

Our concern for privacy has its evolutionary roots in the need to maintain boundaries between the self and others, for safety and security. The motivation for personal space and territoriality is a common phenomenon within the animal kingdom. Among humans, this concern about regulating physical access is complemented by one about regulating informational access. The language abilities, complex social lives and long memories of human beings made protecting our social reputations almost as important as protecting our physical bodies. Norms about sexual privacy, for instance, are common across cultures and time periods. Establishing basic seclusion for secret trysts would have allowed for all the carnal benefits without the unwelcome reputational scrutiny.

Since protection and seclusion must be balanced with interaction, our privacy concern is tuned to flexibly respond to cues in our environment, helping to determine when and what and with whom we share our physical space and personal information. We reflexively lower our voices when strange or hostile interlopers come within earshot. We experience an uneasy creepiness when someone peers over our shoulder. We viscerally feel the presence of a crowd and the public scrutiny that comes with it.

However, just as the turtles’ light-orienting reflex was confounded by the glow of urban settlements, so too have our privacy reactions been confounded by technology. Cameras and microphones – with their superhuman sensory abilities – were challenging enough. But the migration of so much of our lives online is arguably the largest environmental shift in our species’ history with regard to privacy. And our evolved privacy psychology has not caught up. Consider how most people respond to the presence of others when they are in a crowd. Humans use a host of social cues to regulate how much distance they keep between themselves and others. These include facial expression, gaze, vocal quality, posture and hand gestures. In a crowd, such cues can produce an anxiety-inducing cacophony. Moreover, our hair-trigger reputation-management system – critical to keeping us in good moral standing within our group – can drive us into a delirium of self-consciousness.

However, there is some wisdom in this anxiety. Looking into the whites of another’s eyes anchors us within the social milieu, along with all of its attendant norms and expectations. As a result, we tread carefully. Our private thoughts generally remain just that – private, conveyed only to small, trusted groups or confined to our own minds. But as ‘social networks’ suddenly switched from being small, familiar, in-person groupings to online social media platforms connecting millions of users, things changed. Untethered from recognisable social cues such as crowding and proximity, thoughts better left for a select few found their way in front of a much wider array of people, many of whom do not have our best interests at heart. Online we can feel alone and untouchable when we are neither.

Consider, too, our intuitions about what belongs to whom. Ownership can be complicated from a legal perspective but, psychologically, it is readily inferred from an early age (as anyone with young children will have realised). This is achieved through a set of heuristics that provide an intuitive ‘folk psychology’ of ownership. First possession (who first possessed an object), labour investment (who made or modified an object), and object history (information about past transfer of ownership) are all cues that people reflexively use in attributing the ownership of physical things – and consequently, the right to open, inspect or enter them.

Monday, November 1, 2021

Social Media and Mental Health

Luca Braghieri, Ro’ee Levy, and Alexey Makarin
Independent Research
August 21

Abstract 

The diffusion of social media coincided with a worsening of mental health conditions among adolescents and young adults in the United States, giving rise to speculation that social media might be detrimental to mental health. In this paper, we provide the first quasi-experimental estimates of the impact of social media on mental health by leveraging a unique natural experiment: the staggered introduction of Facebook across U.S. colleges. Our analysis couples data on student mental health around the years of Facebook’s expansion with a generalized difference-in-differences empirical strategy. We find that the roll-out of Facebook at a college increased symptoms of poor mental health, especially depression, and led to increased utilization of mental healthcare services. We also find that, according to the students’ reports, the decline in mental health translated into worse academic performance. Additional evidence on mechanisms suggests the results are due to Facebook fostering unfavorable social comparisons. 
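For readers unfamiliar with the method mentioned in the abstract: a generalized difference-in-differences design with a staggered roll-out is commonly estimated with a two-way fixed-effects regression along these lines. This is an illustrative specification under standard assumptions, not necessarily the paper's exact estimating equation:

    MentalHealth_{ict} = \alpha_c + \gamma_t + \beta \cdot FacebookAccess_{ct} + X_{ict}'\delta + \varepsilon_{ict}

where \alpha_c are college fixed effects, \gamma_t are survey-wave (time) fixed effects, FacebookAccess_{ct} equals 1 if Facebook had been introduced at college c by time t, X_{ict} are individual controls, and \beta captures the effect of Facebook access on the mental-health outcome of student i.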

Discussion 

Implications for social media today 

Our estimates of the effects of social media on mental health rely on quasi-experimental variation in Facebook access among college students around the years 2004 to 2006. Such population and time window are directly relevant to the discussion about the severe worsening of mental health conditions among adolescents and young adults over the last two decades. In this section, we elaborate on the extent to which our findings have the potential to inform our understanding of the effects of social media on mental health today. 

Over the last two decades, Facebook underwent a host of important changes. Such changes include: i) the introduction of a personalized feed where posts are ranked by an algorithm; ii) the growth of Facebook’s user base from U.S. college students to almost three billion active users around the globe (Facebook, 2021); iii) video often replacing images and text; iv) increased usage of Facebook on mobile phones instead of computers; and v) the introduction of Facebook pages for brands, businesses, and organizations. 

The nature of the variation we are exploiting in this paper does not allow us to identify the impact of these features of social media. For example, the introduction of pages, along with other changes, made news consumption on Facebook more common over the last decade than it was at inception. Our estimates cannot shed light on whether the increased reliance on Facebook for news consumption has exacerbated or mitigated the effects of Facebook on mental health. 

Despite these caveats, we believe the estimates presented in this paper are still highly relevant today for two main reasons: first, the mechanisms whereby social media use might affect mental health arguably relate to core features of social media platforms that have been present since inception and that remain integral parts of those platforms today; second, the technological changes undergone by Facebook and related platforms might have amplified rather than mitigated the effect of those mechanisms. 

Tuesday, October 26, 2021

The Fragility of Moral Traits to Technological Interventions

J. Fabiano
Neuroethics 14, 269–281 (2021). 
https://doi.org/10.1007/s12152-020-09452-6

Abstract

I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.

Conclusion: Fragility Leads to Increased Risks 

Any substantial technological modification of moral traits would be more likely to cause harm than benefit. Moral traits have a particularly high proclivity to unexpected disturbances, as exemplified by the co-operation case, amplified by its self-reinforcing and irreversible nature, and finally as their complex aetiology would lead one to suspect. Even the most seemingly simple improvement, if only slightly mistaken, is likely to lead to significant negative outcomes. Unless we produce an almost perfectly calibrated deep moral enhancement, its implementation will carry large risks. Deep moral enhancement is likely to be hard to develop safely, but not necessarily impossible or undesirable. Given that deep moral enhancement could prevent extreme risks for humanity, in particular decreasing the risk of human extinction, it may well be the case that we should still attempt to develop it. I am not claiming that our current traits are well suited to dealing with global problems. On the contrary, there are certainly reasons to expect that there are better traits that could be brought about by enhancement technologies. However, I believe my arguments indicate there are also much worse, more socially disruptive, traits accessible through technological intervention.

Thursday, October 7, 2021

Axiological futurism: The systematic study of the future of values

J. Danaher
Futures
Volume 132, September 2021

Abstract

Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological futurism’ might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades.


Highlights

• Outlines a new field of inquiry: axiological futurism.

• Defends the role of axiological futurism in understanding technology in society.

• Develops a set of methods for undertaking this inquiry into the axiological future.

• Presents a model for understanding the impact of AI, robotics and ICTs on human values.


From the Conclusion

In conclusion, axiological futurism is the systematic and explicit inquiry into the axiological possibility space for future human (and post-human) civilisations. Axiological futurism is necessary because, given the history of axiological change and variation, it is very unlikely that our current axiological systems will remain static and unchanging in the future. Axiological futurism is also important because it is complementary to other futurological inquiries. While it might initially seem that axiological futurism cannot be a systematic inquiry, this is not the case. Axiological futurism is an exercise in informed speculation.

Saturday, July 24, 2021

Freezing Eggs and Creating Patients: Moral Risks of Commercialized Fertility

E. Reis & S. Reis-Dennis
The Hastings Center Report
Originally published 24 Nov 17

Abstract

There's no doubt that reproductive technologies can transform lives for the better. Infertile couples and single, lesbian, gay, intersex, and transgender people have the potential to form families in ways that would have been inconceivable years ago. Yet we are concerned about the widespread commercialization of certain egg-freezing programs, the messages they propagate about motherhood, the way they blur the line between care and experimentation, and the manipulative and exaggerated marketing that stretches the truth and inspires false hope in women of various ages. We argue that although reproductive technology, and egg freezing in particular, promise to improve women's care by offering more choices to achieve pregnancy and childbearing, they actually have the potential to be disempowering. First, commercial motives in the fertility industry distort women's medical deliberations, thereby restricting their autonomy; second, having the option to freeze their eggs can change the meaning of women's reproductive choices in a way that is limiting rather than liberating.

Here is an excerpt:

Egg banks are offering presumably fertile women a solution for potential infertility that they may never face. These women might pay annual egg-freezing storage rates but never use their eggs. In fact, even if a woman who froze eggs in her early twenties waited until her late thirties to use them, there can be no guarantee that those eggs would produce a viable pregnancy. James A. Grifo, program director of NYU Langone Health Fertility Center, has speculated, “[T]here have been reports of embryos that have been frozen for over 15 years making babies, and we think the same thing is going to be true of eggs.” But the truth is that the technology is so new that neither he nor we know how frozen eggs will hold up over a long period of time.

Some women in their twenties might want to hedge their bets against future infertility by freezing their eggs as a part of an egg-sharing program; others might hope to learn from a simple home test of hormone levels whether their egg supply (ovarian reserve) is low—a relatively rare condition. However, these tests are not foolproof. The ASRM has cautioned against home tests of ovarian reserve for women in their twenties because it may lead to “false reassurance or unnecessary anxiety and concern.” This kind of medicalization of fertility may not be liberating; instead, it will exert undue pressure on women and encourage them to rely on egg freezing over other reproductive options when it is far from guaranteed that those frozen eggs (particularly if the women have the condition known as premature ovarian aging) will ultimately lead to successful pregnancies and births.

Friday, July 9, 2021

Why It’s Time To Modernize Your Ethics Hotline

Claire Schmidt
Forbes.com
Originally posted 18 Jun 21

Traditional whistleblower hotlines are going to be a thing of the past.

They certainly served a purpose and pioneered a way for employees to report wrongdoing at their companies confidentially. But the reasons why they’re no longer serving companies and employees in 2021 are stacking up. And if companies continue to use them, they need to realize that issues or concerns may go unreported because employees don’t want to use that channel to report.

After all, the function of a whistleblower hotline is to encourage employees to report any wrongdoing they see in the workplace through a confidential channel, which means that the channels for reporting should get an upgrade.

But there are deeper reasons why issues remain unreported — and it goes beyond just offering a hotline to use. Today, companies need to give their employees better ways to report wrongdoing, as well as tell them the value of why they should do so. Otherwise, companies won’t hear about the full extent of wrongdoing happening in the workplace, whatever channel they provide.

The Evolution Of Workplace Reporting Channels

Whistleblower or ethics hotlines were initially that: a phone number — because that was the technology at the time — that employees could anonymously call to report wrongdoings at a company. The Sarbanes-Oxley Act of 2002 mandated that companies set up a method for “the confidential, anonymous submission by employees of the issuer of concerns regarding questionable accounting or auditing matters.”