Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs. This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Research has shown that AI chatbots can be effective in reducing the severity of mental health issues like anxiety, depression, and stress for diverse populations.  They can deliver evidence-based interventions like cognitive behavioral therapy and promote positive psychology.  Some well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ often hinges on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as deserving blame or praise, and how these can be upheld even with AI's presence. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Saturday, January 13, 2024

Consciousness does not require a self

James Cook
iai.tv
Originally published 14 DEC 23

Here is an excerpt:

Beyond the neuroscientific study of consciousness, phenomenological analysis also reveals the self to not be the possessor of experience. In mystical experiences induced by meditation or psychedelics, individuals typically enter a mode of experience in which the psychological self is absent, yet consciousness remains. While this is not the default state of the mind, the presence of consciousness in the absence of a self shows that consciousness is not dependent on an experiencing subject. What is consciousness if not a capacity of an experiencing subject? Such an experience reveals consciousness to consist of a formless awareness at its core, an empty space in which experience arises, including the experience of being a self. The self does not possess consciousness, consciousness is the experiential space in which the image of a psychological self can appear. This mode of experience can be challenging to conceptualise but is very simple when experienced – it is a state of simple appearances arising without the extra add-on of a psychological self inspecting them.

We can think of a conscious system as a system that is capable of holding beliefs about the qualitative character of the world. We should not think of belief here as referring to complex conceptual beliefs, such as believing that Paris is the capital of France, but as the simple ability to hold that the world is a certain way. You do this when you visually perceive a red apple in front of you, the experience is one of believing the apple to exist with all of its qualities such as roundness and redness. This way of thinking is in line with the work of Immanuel Kant, who argued that we never come to know reality as it is but instead only experience phenomenal representations of reality [9]. We are not conscious of the world as it is, but as we believe it to be.


Here is my take:

For centuries, we've assumed consciousness and the sense of self are one and the same. This article throws a wrench in that assumption, proposing that consciousness can exist without a self. Imagine experiencing sights, sounds, and sensations without the constant "me" narrating it all. That's what "selfless consciousness" means – raw awareness untouched by self-reflection.

The article then posits that our familiar sense of self, complete with its stories and memories, isn't some fundamental truth but rather a clever prediction concocted by our brains. This "predicted self" helps us navigate the world and interact with others, but it's not necessarily who we truly are.

Decoupling consciousness from the self opens a Pandora's box of possibilities. We might find consciousness in unexpected places, like animals or even artificial intelligence. Understanding brain function could shift dramatically, and our very notions of identity, free will, and reality might need a serious rethink. This is a bold new perspective on what it means to be conscious, and its implications are quite dramatic.

Friday, December 8, 2023

Professional Judges’ Disbelief in Free Will Does Not Decrease Punishment

Genschow, O., Hawickhorst, H., et al. (2020).
Social Psychological and Personality Science, 12, 357–362.

Abstract

There is a debate in psychology and philosophy on the societal consequences of casting doubts about individuals’ belief in free will. Research suggests that experimentally reducing free will beliefs might affect how individuals evaluate others’ behavior. Past research has demonstrated that reduced free will beliefs decrease laypersons’ tendency toward retributive punishment. This finding has been used as an argument for the idea that promoting anti-free will viewpoints in the public media might have severe consequences for the legal system because it may move judges toward softer retributive punishments. However, actual implications for the legal system can only be drawn by investigating professional judges. In the present research, we investigated whether judges (N = 87) are affected by reading anti-free will messages. The results demonstrate that although reading anti-free will texts reduces judges’ belief in free will, their recommended sentences are not influenced by their (manipulated) belief in free will.


Here is my take:

The results showed that the judges who read the anti-free will passage did indeed have a reduced belief in free will. However, there were no differences in the recommended sentences between the two groups of judges. This suggests that judges' disbelief in free will does not lead them to recommend lighter sentences for criminals.

The study's authors suggest that this finding may be due to the fact that judges are trained to uphold the law and to base their sentencing decisions on legal factors, such as the severity of the crime and the defendant's criminal history. They also suggest that judges may be reluctant to reduce sentences based on metaphysical beliefs about free will.

Key findings:
  • Reading anti-free will messages reduces judges' belief in free will.
  • Judges' disbelief in free will does not lead them to recommend lighter sentences for criminals.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 OCT 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.
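
To make the moral parliament idea concrete, here is a minimal sketch in Python, assuming each moral position has been reduced to numeric scores over candidate actions and that consensus is computed by maximizing a Nash-style cooperative-bargaining product. The positions, scores, and disagreement payoff are invented for illustration; the article does not describe an implementation.

# Toy "moral parliament": choose the action that maximizes the Nash
# bargaining product across algorithmic representations of moral positions.
# All names and numbers are illustrative assumptions, not from the article.

actions = ["wait_in_line", "replace_spoon"]
positions = {
    "utilitarian":  {"wait_in_line": 0.4, "replace_spoon": 0.9},
    "deontologist": {"wait_in_line": 0.8, "replace_spoon": 0.7},
    "virtue":       {"wait_in_line": 0.6, "replace_spoon": 0.8},
}
disagreement = 0.1  # payoff each position receives if no consensus is reached

def nash_product(action):
    """Product of each position's gain over the disagreement point."""
    product = 1.0
    for scores in positions.values():
        product *= max(scores[action] - disagreement, 0.0)
    return product

consensus = max(actions, key=nash_product)
print(consensus)  # "replace_spoon" wins under these made-up scores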


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.
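
As a concrete sketch of the reinforcement-learning approach just described, assume a toy tabular action-value update in which a human rater rewards ethical choices and penalizes unethical ones. The scenario, reward values, and learning rate are all invented:

import random

# Toy reinforcement learning of a "moral" choice: keep one value per action
# and nudge it toward an assumed reward signal (+1 ethical, -1 unethical).
actions = ["tell_truth", "deceive"]
reward = {"tell_truth": 1.0, "deceive": -1.0}  # assumed human-provided labels
q = {a: 0.0 for a in actions}                  # learned action values
alpha, epsilon = 0.1, 0.2                      # learning rate, exploration rate

for _ in range(500):
    # epsilon-greedy: usually exploit the best-valued action, sometimes explore
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward[a] - q[a])         # move the value toward the reward

print(q, "->", max(q, key=q.get))  # the agent learns to prefer "tell_truth"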

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.

Sunday, November 19, 2023

AI Will—and Should—Change Medical School, Says Harvard’s Dean for Medical Education

Hswen Y, Abbasi J.
JAMA. Published online October 25, 2023.

Here is an excerpt:

Dr Bibbins-Domingo: When these types of generative AI tools first came into prominence or awareness, educators, whatever level of education they were involved with, had to scramble because their students were using them. They were figuring out how to put up the right types of guardrails, set the right types of rules. Are there rules or danger zones right now that you’re thinking about?

Dr Chang: Absolutely, and I think there’s quite a number of these. This is a focus that we’re embarking on right now because as exciting as the future is and as much potential as these generative AI tools have, there are also dangers and there are also concerns that we have to address.

One of them is helping our students, who like all of us are still new to this within the past year, understand the limitations of these tools. Now these tools are going to get better year after year after year, but right now they are still prone to hallucinations, or basically making up facts that aren’t really true and yet saying them with confidence. Our students need to recognize why it is that these tools might come up with those hallucinations to try to learn how to recognize them and to basically be on guard for the fact that just because ChatGPT is giving you a very confident answer, it doesn’t mean it’s the right answer. And in medicine of course, that’s very, very important. And so that’s one—just the accuracy and the validity of the content that comes out.

As I wrote about in my Viewpoint, the way that these tools work is basically a very fancy form of autocomplete, right? It is essentially using a probabilistic prediction of what the next word is going to be. And so there’s no separate validity or confirmation of the factual material, and that’s something that we need to make sure that our students understand.

The other thing is to address the fact that these tools may inherently be structurally biased. Now, why would that be? Well, as we know, ChatGPT and these other large language models [LLMs] are trained on the world’s internet, so to speak, right? They’re trained on the noncopyrighted corpus of material that’s out there on the web. And to the extent that that corpus of material was generated by human beings who in their postings and their writings exhibit bias in one way or the other, whether intentionally or not, that’s the corpus on which these LLMs are trained. So it only makes sense that when we use these tools, these tools are going to potentially exhibit evidence of bias. And so we need our students to be very aware of that. As we have worked to reduce the effects of systematic bias in our curriculum and in our clinical sphere, we need to recognize that as we introduce this new tool, this will be another potential source of bias.
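
Dr Chang’s "fancy autocomplete" description can be pictured with a minimal next-word sampler. The context, vocabulary, and probabilities below are invented; a real LLM derives such distributions from learned parameters over a huge vocabulary, with no separate fact-checking step:

import random

# Toy next-word prediction: given a context, sample the continuation from a
# probability distribution. There is no validity check anywhere: the model
# only encodes which words tend to follow which, per the assumed table.
next_word_probs = {
    ("the", "patient", "presented"): {"with": 0.85, "a": 0.10, "yesterday": 0.05},
}

def sample_next(context):
    probs = next_word_probs[context]
    words, weights = list(probs), list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next(("the", "patient", "presented")))  # usually "with"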


Here is my summary:

Bernard Chang, the Dean for Medical Education at Harvard Medical School, argues that artificial intelligence (AI) is poised to transform medical education. He contends that AI has the potential to improve the way medical students learn and train, and that medical schools should not only embrace AI, but also take an active role in shaping its development and use.

Chang identifies several areas where AI could have a significant impact on medical education. First, AI could be used to personalize learning and provide students with more targeted feedback. For example, AI-powered tutors could help students learn complex medical concepts at their own pace, and AI-powered diagnostic tools could help students practice their clinical skills.

Second, AI could be used to automate tasks that are currently performed by human instructors, such as grading exams and providing feedback on student assignments. This would free up instructors to focus on more high-value activities, such as mentoring students and leading discussions.

Third, AI could be used to create new educational experiences that are not possible with traditional methods. For example, AI could be used to create virtual patients that students can interact with to practice their clinical skills. AI could also be used to develop simulations of complex medical procedures that students can practice in a safe environment.

Chang argues that medical schools have a responsibility to prepare students for the future of medicine, which will be increasingly reliant on AI. He writes that medical schools should teach students how to use AI effectively, and how to critically evaluate AI-generated information. Medical schools should also develop new curricula that take into account the potential impact of AI on medical practice.

Thursday, October 19, 2023

10 Things Your Corporate Culture Needs to Get Right

D. Sull and C. Sull
MIT Sloan Management Review
Originally posted 16 September 21

Here are two excerpts:

What distinguishes a good corporate culture from a bad one in the eyes of employees? This is a trickier question than it might appear at first glance. Most leaders agree in principle that culture matters but have widely divergent views about which elements of culture are most important. In an earlier study, we identified more than 60 distinct values that companies listed among their official “core values.” Most often, an organization’s official core values signal top executives’ cultural aspirations, rather than reflecting the elements of corporate culture that matter most to employees.

Which elements of corporate life shape how employees rate culture? To address this question, we analyzed the language workers used to describe their employers. When they complete a Glassdoor review, employees not only rate corporate culture on a 5-point scale, but also describe — in their own words — the pros and cons of working at their organization. The topics they choose to write about reveal which factors are most salient to them, and sentiment analysis reveals how positively (or negatively) they feel about each topic. (Glassdoor reviews are remarkably balanced between positive and negative observations.) By analyzing the relationship between their descriptions and rating of culture, we can start to understand what employees are talking about when they talk about culture.

(cut)

The following chart summarizes the factors that best predict whether employees love (or loathe) their companies. The bars represent each topic’s relative importance in predicting a company’s culture rating. Whether employees feel respected, for example, is 18 times more powerful as a predictor of a company’s culture rating compared with the average topic. We’ve grouped related factors to tease out broader themes that emerge from our analysis.

Here are the 10 cultural dynamics, with my take on each:
  1. Employees feel respected. Employees want to be treated with consideration, courtesy, and dignity. They want their perspectives to be taken seriously and their contributions to be valued.
  2. Employees have supportive leaders. Employees need leaders who will help them to do their best work, respond to their requests, accommodate their individual needs, offer encouragement, and have their backs.
  3. Leaders live core values. Employees need to see that their leaders are committed to the company's core values and that they are willing to walk the talk.
  4. Toxic managers. Toxic managers can create a poisonous work environment and lead to high turnover rates and low productivity.
  5. Unethical behavior. Employees need to have confidence that their colleagues and leaders are acting ethically and honestly.
  6. Employees have good benefits. Employees expect to be compensated fairly and to have access to a good benefits package.
  7. Perks. Perks can be anything from free snacks to on-site childcare to flexible work arrangements. They can help to make the workplace more enjoyable and improve employee morale.
  8. Employees have opportunities for learning and development. Employees want to grow and develop in their careers. They need to have access to training and development opportunities that will help them to reach their full potential.
  9. Job security. Employees need to feel secure in their jobs in order to focus on their work and be productive.
  10. Reorganizations. How employees view reorganizations, including how often they happen and how well they are handled, also shapes their rating of the culture.
The authors argue that these ten elements are essential for creating a corporate culture that is attractive to top talent, drives innovation and productivity, and leads to long-term success.

Additional thoughts

In addition to the ten elements listed above, there are a number of other factors that can contribute to a strong and positive corporate culture. These include:
  • Diversity and inclusion. Employees want to work in a company where they feel respected and valued, regardless of their race, ethnicity, gender, sexual orientation, or other factors.
  • Collaboration and teamwork. Employees want to work in a company where they can collaborate with others and achieve common goals.
  • Open communication and feedback. Employees need to feel comfortable communicating with their managers and colleagues, and they need to be open to receiving feedback.
  • Celebration of success. It is important to celebrate successes and recognize employees for their contributions. This helps to create a positive and supportive work environment.
By investing in these factors, companies can create a corporate culture that is both attractive to employees and beneficial to the bottom line.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Saturday, July 15, 2023

Christ, Country, and Conspiracies? Christian Nationalism, Biblical Literalism, and Belief in Conspiracy Theories

Walker, B., & Vegter, A.
Journal for the Study of Religion
May 8, 2023.

Abstract

When misinformation is rampant, “fake news” is rising, and conspiracy theories are widespread, social scientists have a vested interest in understanding who is most susceptible to these false narratives and why. Recent research suggests Christians are especially susceptible to belief in conspiracy theories in the United States, but scholars have yet to ascertain the role of religiopolitical identities and epistemological approaches, specifically Christian nationalism and biblical literalism, in generalized conspiracy thinking. Because Christian nationalists sense that the nation is under cultural threat and biblical literalism provides an alternative (often anti-elite) source of information, we predict that both will amplify conspiracy thinking. We find that Christian nationalism and biblical literalism independently predict conspiracy thinking, but that the effect of Christian nationalism increases with literalism. Our results point to the contingent effects of Christian nationalism and the need for religious variables in understanding conspiracy thinking.

---------------------------

I could not find a free PDF. Here is a summary.

The study's findings suggest that Christian nationalism and biblical literalism may be contributing factors to the rise of conspiracy theories in the United States. The study also suggests that efforts to address the problem of conspiracy theories may need to focus on addressing these underlying beliefs.

Here are some additional details from the study:
  • The study surveyed a nationally representative sample of U.S. adults.
  • The study found that 25% of Christian nationalists and 20% of biblical literalists believe in at least one conspiracy theory, compared to 12% of people who do not hold these beliefs.
  • The study found that the belief in conspiracy theories is amplified when people feel that their nation is under cultural threat. For example, Christian nationalists who believe that the nation is under cultural threat are more likely to believe that the government is hiding information about extraterrestrial life.

Wednesday, June 28, 2023

Forgetting is a Feature, not a Bug: Intentionally Forgetting Some Things Helps Us Remember Others by Freeing up Working Memory Resources

Popov, V., Marevic, I., Rummel, J., & Reder, L. M. (2019).
Psychological Science, 30(9), 1303–1317.
https://doi.org/10.1177/0956797619859531

Abstract

We used an item-method directed forgetting paradigm to test whether instructions to forget or to remember one item in a list affects memory for the subsequent item in that list. In two experiments, we found that free and cued recall were higher when a word-pair was preceded during study by a to-be-forgotten (TBF) word pair. This effect was cumulative – performance was higher when more of the preceding items during study were TBF. It also interacted with lag between study items – the effect decreased as the lag between the current and a prior item increased.  Experiment 2 used a dual-task paradigm in which we suppressed either verbal rehearsal or attentional refreshing during encoding. We found that neither task removed the effect, thus the advantage from previous TBF items could not be due to rehearsal or attentional borrowing. We propose that storing items in long-term memory depletes a limited pool of resources that recovers over time, and that TBF items deplete fewer resources, leaving more available for storing subsequent items. A computational model implementing the theory provided excellent fits to the data.

General Discussion

We demonstrated a previously unknown DF (Directed Forgetting) after-effect of remember and forget instructions in an item method DF paradigm on memory for the items that follow a pair that was to be remembered versus forgotten: cued and free recall for word pairs was higher when people were instructed to forget the preceding word pair. This effect was cumulative, such that performance was even better when more of the preceding pairs had to be forgotten. The size of the DF after-effect depended on how many pairs ago the DF instruction appeared during study. Specifically, the immediately preceding word-pair provided a stronger DF aftereffect than when the DF instruction appeared several word-pairs ago. Finally, neither increased rehearsal nor attentional borrowing of TBR items could explain why memory for the subsequent item was worse in those cases – the DF after-effects remained stable, even when rehearsal was suppressed or attention divided in a dual-task paradigm.

The DF after-effects are replicable and are remarkably consistent across the two experiments – the odds ratio associated with items preceded by TBR items rather than TBF items at lag one was 0.66 in the prior study and 0.67 in the new experiment. Similarly, the odds ratios for the effect of cues at lag two were 0.77 and 0.76 in the two studies. Thus, this represents a robust and replicable phenomenon. Additionally, the multinomial storage–retrieval model confirmed that DF after-effects are clearly a storage phenomenon.


Summary: forgetting is not always a bad thing. In fact, it can sometimes be helpful. For example, if we are trying to learn a new skill, it may be helpful to forget some of the old information that is no longer relevant. This will free up working memory resources, which can then be used to store the new information. It may be helpful to include instructions to forget some information in learning materials. This will help to ensure that the learners are able to focus on the most important information.
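
The resource account lends itself to a small simulation. Below is a toy sketch, assuming to-be-remembered (TBR) pairs consume more of a limited encoding resource than to-be-forgotten (TBF) pairs and that the pool partially recovers between pairs; all parameter values are invented, and the authors’ actual computational model is more elaborate:

# Toy resource-depletion account of directed-forgetting after-effects:
# TBR pairs deplete more of a limited encoding resource than TBF pairs,
# so the item AFTER a TBF pair is encoded with more resources available.
COST = {"TBR": 0.6, "TBF": 0.2}  # encoding cost per pair type (assumed)
RECOVERY = 0.3                   # resources regained between pairs (assumed)
POOL_MAX = 1.0

def encoding_strengths(cues):
    """Fraction of needed resources available when encoding each pair."""
    pool, strengths = POOL_MAX, []
    for cue in cues:
        spent = min(COST[cue], pool)
        strengths.append(spent / COST[cue])
        pool = min(pool - spent + RECOVERY, POOL_MAX)
    return strengths

# The pair following a TBF pair is encoded better than one following TBR pairs:
print(encoding_strengths(["TBR", "TBR", "TBR"]))  # third item under-encoded
print(encoding_strengths(["TBR", "TBF", "TBR"]))  # third item fully encoded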

Friday, February 17, 2023

Free Will Is Only an Illusion if You Are, Too

Alessandra Buccella and Tomáš Dominik
Scientific American
Originally posted January 16, 2023

Here is an excerpt:

In 2019 neuroscientists Uri Maoz, Liad Mudrik and their colleagues investigated that idea. They presented participants with a choice of two nonprofit organizations to which they could donate $1,000. People could indicate their preferred organization by pressing the left or right button. In some cases, participants knew that their choice mattered because the button would determine which organization would receive the full $1,000. In other cases, people knowingly made meaningless choices because they were told that both organizations would receive $500 regardless of their selection. The results were somewhat surprising. Meaningless choices were preceded by a readiness potential, just as in previous experiments. Meaningful choices were not, however. When we care about a decision and its outcome, our brain appears to behave differently than when a decision is arbitrary.

Even more interesting is the fact that ordinary people’s intuitions about free will and decision-making do not seem consistent with these findings. Some of our colleagues, including Maoz and neuroscientist Jake Gavenas, recently published the results of a large survey, with more than 600 respondents, in which they asked people to rate how “free” various choices made by others seemed. Their ratings suggested that people do not recognize that the brain may handle meaningful choices in a different way from more arbitrary or meaningless ones. People tend, in other words, to imagine all their choices—from which sock to put on first to where to spend a vacation—as equally “free,” even though neuroscience suggests otherwise.

What this tells us is that free will may exist, but it may not operate in the way we intuitively imagine. In the same vein, there is a second intuition that must be addressed to understand studies of volition. When experiments have found that brain activity, such as the readiness potential, precedes the conscious intention to act, some people have jumped to the conclusion that they are “not in charge.” They do not have free will, they reason, because they are somehow subject to their brain activity.

But that assumption misses a broader lesson from neuroscience. “We” are our brain. The combined research makes clear that human beings do have the power to make conscious choices. But that agency and accompanying sense of personal responsibility are not supernatural. They happen in the brain, regardless of whether scientists observe them as clearly as they do a readiness potential.

So there is no “ghost” inside the cerebral machine. But as researchers, we argue that this machinery is so complex, inscrutable and mysterious that popular concepts of “free will” or the “self” remain incredibly useful. They help us think through and imagine—albeit imperfectly—the workings of the mind and brain. As such, they can guide and inspire our investigations in profound ways—provided we continue to question and test these assumptions along the way.


Friday, February 3, 2023

Contraceptive Coverage Expanded: No More ‘Moral’ Exemptions for Employers

Ari Blaff
Yahoo News
Originally posted 30 JAN 23

Here is an excerpt:

The proposed new rule released today by the Departments of Health and Human Services (HHS), Labor, and Treasury would remove the ability of employers to opt out for “moral” reasons, but it would retain the existing protections on “religious” grounds.

For employees covered by insurers with religious exemptions, the new policy will create an “independent pathway” that permits them to access contraceptives through a third-party provider free of charge.

“We had to really think through how to do this in the right way to satisfy both sides, but we think we found that way,” a senior HHS official told CNN.

Planned Parenthood applauded the announcement. “Employers and universities should not be able to dictate personal health-care decisions and impose their views on their employees or students,” the organization’s chief, Alexis McGill Johnson, told CNN. “The ACA mandates that health insurance plans cover all forms of birth control without out-of-pocket costs. Now, more than ever, we must protect this fundamental freedom.”

In 2018, the Trump administration sought to carve out an exception, based on “sincerely held religious beliefs,” to the ACA’s contraceptive mandate. The move triggered a Pennsylvania district court judge to issue a nationwide injunction in 2019, blocking the implementation of the change. However, in 2020, in Little Sisters of the Poor v. Pennsylvania, the Supreme Court, in a 7–2 ruling, defended the legality of the original Trump policy.

The Supreme Court’s overturning of Roe v. Wade in June 2022, in its Dobbs ruling, played a role in HHS’s decision to release the new proposal. Guaranteeing access to contraception at no cost to the individual “is a national public health imperative,” HHS said in the proposal. And the Dobbs ruling “has placed a heightened importance on access to contraceptive services nationwide.”

Thursday, January 26, 2023

The AI Ethicist's Dirty Hands Problem

H. S. Sætra, M. Coeckelbergh, & J. Danaher
Communications of the ACM, January 2023, Vol. 66 No. 1, Pages 39-41

Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoids reliance on Big Tech.

The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?

The AI ethicist’s dilemma emerges when an ethicist must consider how their success in communicating an identified challenge is associated with a high risk of decreasing the chances of successfully addressing the challenge. This dilemma occurs in situations in which the means to achieve one’s goals are seemingly best achieved by supporting that which one wishes to correct and/or practicing the opposite of that which one preaches.

(cut)

The Need for More than AI Ethics

Our analysis of the ethicist’s dilemma shows why close ties with Big Tech can be detrimental for the ethicist seeking remedies for AI-related problems. It is important for ethicists, and computer scientists in general, to be aware of their links to the sources of ethical challenges related to AI. One useful exercise would be to carefully examine what could happen if they attempted to challenge the actors with whom they are aligned. Such actions could include attempts to report unfortunate implications of the company’s activities internally, but also publicly, as Gebru did. Would such actions be met with active resistance, with inaction, or even straightforward sanctions? Such an exercise will reveal whether or not the ethicist feels free to openly and honestly express concerns about the technology with which they work. Such an exercise could be important, but as we have argued, these individuals are not necessarily positioned to achieve fundamental change in this system.

In response, we suggest the role of government is key to balancing the power the tech companies have through employment, funding, and their control of modern digital infrastructure. Some will rightly argue that political power is also dangerous. But so are the dangers of technology and unbridled innovation, and private corporations are central sources of these dangers. We therefore argue that private power must be effectively bridled by the power of government. This is not a new argument, and is in fact widely accepted.

Monday, January 16, 2023

The origins of human prosociality: Cultural group selection in the workplace and the laboratory

Francois, P., Fujiwara, T., & van Ypersele, T. (2018).
Science Advances, 4(9).
https://doi.org/10.1126/sciadv.aat2201

Abstract

Human prosociality toward non-kin is ubiquitous and almost unique in the animal kingdom. It remains poorly understood, although a proliferation of theories has arisen to explain it. We present evidence from survey data and laboratory treatment of experimental subjects that is consistent with a set of theories based on group-level selection of cultural norms favoring prosociality. In particular, increases in competition increase trust levels of individuals who (i) work in firms facing more competition, (ii) live in states where competition increases, (iii) move to more competitive industries, and (iv) are placed into groups facing higher competition in a laboratory experiment. The findings provide support for cultural group selection as a contributor to human prosociality.

Discussion

There is considerable experimental evidence, referenced earlier, supporting the conclusion that people are conditional cooperators: They condition actions based on their beliefs regarding prevailing norms of behavior. They cooperate if they believe their partners are also likely to do so, and they are unlikely to act cooperatively if they believe that others will not.

The environment in which people interact shapes both the social and economic returns to following cooperative norms. For instance, many aspects of groups within the work environment will determine whether cooperation can be an equilibrium in behavior among group members or whether it is strictly dominated by more selfish actions. Competition across firms can play two distinct roles in affecting this. First, there is a static equilibrium effect, which arises from competition altering rewards from cooperative versus selfish behavior, even without changing the distribution of firms. Competition across firms punishes individual free-riding behavior and rewards cooperative behavior. In the absence of competitive threats, members of groups can readily shirk without serious payoff consequences for their firm. This is not so if a firm faces an existential threat. Less markedly, even if a firm is not close to the brink of survival, more intense market competition renders firm-level payoffs more responsive to the efforts of group members. With intense competition, the deleterious effects of shirking are magnified by large loss of market share, revenues, and, in turn, lower group-level payoffs. Without competition, attendant declines in quality or efficiency arising from poor performance have weaker, and perhaps nonexistent, payoff consequences. These effects on individuals are likely to be small in large firms where any specific worker’s actions are unlikely to be pivotal. However, it is possible that employees overestimate the impact of their actions or instinctively respond to competition with more prosocial attitudes, even in large teams.

Competition across firms does not typically lead to a unique equilibrium in social norms but, if intense enough, can sustain a cooperative group norm. Depending on the setting, multiple different cooperative group equilibria differentiated by the level of costly effort can also be sustained. For example, if individuals are complementary in production, then an individual believing co-workers to all be shirkers and thus unable to produce a viable product will similarly also choose to exert low effort. An equilibrium where no one voluntarily contributes to cooperative tasks is sustained, and such a workplace looks to have noncooperative norms. In contrast, with the same complementary production process, and a workplace where all other workers are believed to be contributing high effort, a single worker will optimally choose to exert high effort as well to ensure viable output. In that case, a cooperative norm is sustained. When payoffs are continuous in both the quality of the product and the intensity of the competition, then the degree of cooperative effort that can be sustained can be continuously increasing in the intensity of market competition across firms. We have formalized this in an economic model that we include in the Supplementary Materials.

Competition’s first effect is thus to make it possible, but not necessary, for group-level cooperative norms to arise as equilibria. The literature has shown that there are many other ways to stabilize cooperative norms as equilibria, such as institutional punishment, third-party punishment, or reputations. Cross-group competition may also enhance these other well-studied mechanisms for generating cooperative norm equilibria, but with or without these factors, it has a general effect of tilting the set of equilibria toward those featuring cooperative norms.
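
As a toy formalization of this first effect (our illustrative sketch, not the model in the paper’s supplementary materials), assume a worker’s payoff is a share of firm revenue whose responsiveness to group effort scales with competition intensity; cooperation then survives unilateral deviation only when competition is strong enough:

# Toy check of when cooperative effort is an equilibrium: a worker compares
# payoffs from cooperating vs. shirking, given co-workers cooperate. Revenue
# responsiveness to group effort scales with competition intensity.
# All functional forms and numbers are illustrative assumptions.

N = 5            # workers per firm
EFFORT_COST = 1.0

def worker_payoff(own_effort, others_effort, competition):
    group_effort = (own_effort + (N - 1) * others_effort) / N
    revenue_share = 2.0 + competition * 4.0 * group_effort  # steeper under competition
    return revenue_share - EFFORT_COST * own_effort

def cooperation_is_equilibrium(competition):
    # No profitable unilateral deviation from everyone exerting effort 1.
    return worker_payoff(1.0, 1.0, competition) >= worker_payoff(0.0, 1.0, competition)

for c in (0.0, 0.5, 1.0, 2.0):
    print(f"competition={c}: cooperative norm sustainable -> {cooperation_is_equilibrium(c)}")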

Saturday, January 7, 2023

Artificial intelligence and consent: a feminist anti-colonial critique

Varon, J., & Peña, P. (2021). 
Internet Policy Review, 10(4).
https://doi.org/10.14763/2021.4.1602

Abstract

Feminist theories have extensively debated consent in sexual and political contexts. But what does it mean to consent when we are talking about our data bodies feeding artificial intelligence (AI) systems? This article builds a feminist and anti-colonial critique about how an individualistic notion of consent is being used to legitimate practices of the so-called emerging Digital Welfare States, focused on digitalisation of anti-poverty programmes. The goal is to expose how the functional role of digital consent has been enabling data extractivist practices for control and exclusion, another manifestation of colonialism embedded in cutting-edge digital technology.

Here is an excerpt:

Another important criticism of this traditional idea of consent in sexual relationships is the forced binarism of yes/no. According to Gira Grant (2016), consent is not only given but also is built from multiple factors such as the location, the moment, the emotional state, trust, and desire. In fact, for this author, the example of sex workers could demonstrate how desire and consent are different, although sometimes confused as the same. For her there are many things that sex workers do without necessarily wanting to. However, they give consent for legitimate reasons.

It is also important how we express consent. For feminists such as Fraisse (2012), there is no consent without the body. In other words, consent has a relational and communication-based (verbal and nonverbal) dimension where power relationships matter (Tinat, 2012; Fraisse, 2012). This is very relevant when we discuss “tacit consent” in sexual relationships. In another dimension of how we express consent, Fraisse (2012) distinguishes between choice (the consent that is accepted and adhered to) and coercion (the "consent" that is allowed and endured).

According to Fraisse (2012), the critical view of consent that is currently claimed by feminist theories is not consent as a symptom of contemporary individualism; it has a collective approach through the idea of “the ethics of consent”, which provides attention to the "conditions" of the practice; the practice adapted to a contextual situation, therefore rejecting universal norms that ignore the diversified conditions of domination (Fraisse, 2012).

In the same sense, Lucia Melgar (2012) asserts that, in the case of sexual consent, it is not just an individual right, but a collective right of women to say "my body is mine" and from there it claims freedom to all bodies. As Sarah Ahmed (2017, n.p.) states “for feminism: no is a political labor”. In other words, “if your position is precarious you might not be able to afford no. [...] This is why the less precarious might have a political obligation to say no on behalf of or alongside those who are more precarious”. Referring to Éric Fassin, Fraisse (2012) understands that in this feminist view, consent will not be “liberal” anymore (as a refrain of the free individual), but “radical”, because, as Fassin would call, seeing in a collective act, it could function as some sort of consensual exchange of power.

Monday, November 14, 2022

Your Land Acknowledgment Is Not Enough

Joseph Pierce
hyperallergic.com
Originally posted 12 OCT 22

Here is an excerpt:

Museums that once stole Indigenous bones now celebrate Indigenous Peoples’ Day. Organizations that have never hired an Indigenous person now admit the impact of Indigenous genocide through social media. Land-grant universities scramble to draft statements about their historical ties to fraudulent treaties and pilfered graves. Indeed, these are challenging times for institutions trying to do right by Indigenous peoples.

Some institutions will seek the input of an Indigenous scholar or perhaps a community. They will feel contented and “diverse” because of this input. They want a decolonial to-do list. But what we have are questions: What changes when an institution publishes a land acknowledgment? What material, tangible changes are enacted?

Without action, without structural change, acknowledging stolen land is what Eve Tuck and K. Wayne Yang call a “settler move to innocence.” Institutions are not innocent. Settlers are not innocent.

The problem with land acknowledgments is that they are almost never followed by meaningful action. Acknowledgment without action is an empty gesture, exculpatory and self-serving. What is more, such gestures shift the onus of action back onto Indigenous people, who neither asked for an apology nor have the ability to forgive on behalf of the land that has been stolen and desecrated. It is not my place to forgive on behalf of the land.

A land acknowledgment is not enough.

This is what settler institutions do not understand: Land does not require that you confirm it exists, but that you reciprocate the care it has given you. Land is not asking for acknowledgment. It is asking to be returned to itself. It is asking to be heard and cared for and attended to. It is asking to be free.

Land is not an object, not a thing. Land does not require recognition. It requires care. It requires presence.

Land is a gift, a relative, a body that sustains other bodies. And if the land is our relative, then we cannot simply acknowledge it as land. We must understand what our responsibilities are to the land as our kin. We must engage in a reciprocal relationship with the land. Land is — in its animate multiplicities — an ongoing enactment of reciprocity.

A land acknowledgment is not enough.

Sunday, October 9, 2022

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines 30, 195–218 (2020).
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Free will and Autonomy

Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007) while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. Others (Powers 2006; Tonkens 2009) have proposed that the plausibility of a free will condition on moral agency may vary depending on what type of normative ethical theory is assumed, but they have not developed this idea further.

Despite appealing to the concept of free will, this portion of the AMA debate does not engage with key problems in the free will literature, such as the debate about compatibilism and incompatibilism (O’Connor 2016). Those in the AMA debate assume the existence of free will among humans, and ask whether artificial entities can satisfy a source control condition (McKenna et al. 2015). That is, the question is whether or not such entities can be the origins of their actions in a way that allows them to control what they do in the sense assumed of human moral agents.

An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that ‘… the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent’ (Johnson 2006 p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is needed for moral agency and that humans do possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Shaw et al. 2019). Additionally, we note that Johnson’s idea about the presence of a ‘mysterious aspect’ of human moral agents might allow for AMA in the same way as Dreyfus and Hubert’s reference to the subconscious: artificial entities may be built to incorporate this aspect.

The question of sourcehood in the AMA debate connects to the independence argument: For instance, when it is claimed that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008), this is thought to imply that machines can never be the true or genuine source of their own actions. This argument questions whether the independence required for moral agency (by both functionalists and standardists) can be found in a machine. If a machine’s repertoire of behaviors and responses is the result of elaborate design then it is not independent, the argument goes. Floridi and Sanders question this proposal by referring to the complexity of ‘human programming’, such as genes and arranged environmental factors (e.g. education). 

Tuesday, August 30, 2022

Free Ethics CE

I will support Beth Rom-Rymer's voting kickoff for APA President-Elect with a continuing education program!

I will be presenting a free CE program on September 15, 2022, the first day of voting for APA President-Elect. I will be encouraging participants to vote in the APA election and to put Beth in the #1 spot.

This will be the first in a series of three free workshops to promote Beth's candidacy.


Here is the long link.


Thursday, July 28, 2022

Justice Alito's bad theology: Abortion foes don't have "morality" on their side

E. M. Freese & A. T. Taylor
Salon.com
Originally posted 26 JUL 22

Here is an excerpt:

Morality has thus become the reigning justification for the state to infringe upon the liberty of female Americans and to subjugate their reproductive labor to its power. An interrogation of this morality, however, reveals that it is underpinned by a theology that both erases and assumes the subjugation of female gestational labor in procreation to patriarchy. We must shatter this male-dominant moral logic and foreground female personhood and agency in order for every American to be equally free.

According to Alito, moral concern for "an unborn human being" apparently exempts pregnant people from the right to "liberty" otherwise guaranteed by the 14th Amendment. In other words, the supposed immorality of abortion is weighty enough to restrict bodily autonomy for all pregnant people in this country and to terrorize potentially pregnant females more broadly. This logic implies that pregnant people also lack 13th Amendment protection from "involuntary servitude," contrary to the strong argument made by legal scholar Michele Goodwin in a recent New York Times op-ed. Consequently, the court has now granted permission to states to force pregnant people to gestate against their will.

To be clear, the 13th and 14th Amendments are specifically about bodily autonomy and freedom from forced labor. They were created after the Civil War in an attempt to end slavery for good, and forced reproduction was correctly understood as a dimension of slavery. But Justice Alito asserts that abortion morality puts pregnant bodies in a "different" category with fewer rights. What, exactly, is the logic here?

At its heart, the theological premise of the anti-abortion argument is that male fertilization essentially equals procreation of a "life" that has equal moral and legal standing to a pregnant person, prior to any female gestation. In effect, this argument holds that the enormous female gestation labor over time, which is literally fundamental to the procreation of a viable "new life," can be ignored as a necessary precursor to the very existence of that life. On a practical level, this amounts to claiming that a habitable house exists at the stage of an architectural drawing, prior to any material labor by the general contractor and the construction workers who literally build it.

Abortion opponents draw upon the biblical story of creation found in the book of Genesis (chapters 1-3) to ostensibly ground their theology in tradition. But Genesis narrates that multiple participants labor at God's direction to create various forms of life through a material process over time, which actually contradicts a theology claiming that male fertilization equals instant-procreation. The real political value is the story's presumption of a male God's dominance and appropriation of others' labor for "His" ends. Using this frame, abortion opponents insert a "sovereign" God into the wombs of pregnant people — exactly at the moment of male fertilization. From that point, the colonization of the female body and female labor becomes not only morally acceptable, but necessary.

Monday, May 30, 2022

Free will without consciousness?

L. Mudrik, I. G. Arie, et al.
Trends in Cognitive Sciences
Available online 12 April 2022

Abstract

Findings demonstrating decision-related neural activity preceding volitional actions have dominated the discussion about how science can inform the free will debate. These discussions have largely ignored studies suggesting that decisions might be influenced or biased by various unconscious processes. If these effects are indeed real, do they render subjects’ decisions less free or even unfree? Here, we argue that, while unconscious influences on decision-making do not threaten the existence of free will in general, they provide important information about limitations on freedom in specific circumstances. We demonstrate that aspects of this long-lasting controversy are empirically testable and provide insight into their bearing on degrees of freedom, laying the groundwork for future scientific-philosophical approaches.

Highlights
  • A growing body of literature argues for unconscious effects on decision-making.
  • We review a body of such studies while acknowledging methodological limitations, and categorize the types of unconscious influence reported.
  • These effects intuitively challenge free will, despite being generally overlooked in the free will literature. To what extent can decisions be free if they are affected by unconscious factors?
  • Our analysis suggests that unconscious influences on behavior affect degrees of control or reasons-responsiveness. We argue that they do not threaten the existence of free will in general, but only the degree to which we can be free in specific circumstances.

Concluding remarks

Current findings of unconscious effects on decision-making do not threaten the existence of free will in general. Yet, the results still show ways in which our freedom can be compromised under specific circumstances. More experimental and philosophical work is needed to delineate the limits and scope of these effects on our freedom (see Outstanding questions). We have evolved to be the decision-makers that we are; thus, our decisions are affected by biases, internal states, and external contexts. However, we can at least sometimes resist those, if we want, and this ability to resist influences contrary to our preferences and reasons is considered a central feature of freedom. As long as this ability is preserved, and the reviewed findings do not suggest otherwise, we are still free, at least usually and to a significant degree.