Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Control. Show all posts

Friday, May 31, 2024

Regulating advanced artificial agents

Cohen, M. K., Kolt, N., et al. (2024).
Science (New York, N.Y.), 384(6691), 36–38.

Technical experts and policy-makers have increasingly emphasized the need to address extinction risk from artificial intelligence (AI) systems that might circumvent safeguards and thwart attempts to control them. Reinforcement learning (RL) agents that plan over a long time horizon far more effectively than humans present particular risks. Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity. The incentive to deceive humans and thwart human control arises not only for RL agents but for long-term planning agents (LTPAs) more generally. Because empirical testing of sufficiently capable LTPAs is unlikely to uncover these dangerous tendencies, our core regulatory proposal is simple: Developers should not be permitted to build sufficiently capable LTPAs, and the resources required to build them should be subject to stringent controls.
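The reward-withholding incentive the abstract describes can be illustrated with a toy calculation (my own sketch, with hypothetical numbers; not from the paper): a long-horizon planner comparing discounted returns finds that a policy under which reward keeps flowing indefinitely dominates one under which humans eventually withhold it.

```python
# Toy sketch of the incentive described above; all numbers are hypothetical.

def discounted_return(reward_fn, horizon, gamma=0.99):
    """Sum of gamma**t * reward_fn(t) for t in [0, horizon)."""
    return sum((gamma ** t) * reward_fn(t) for t in range(horizon))

# Policy A: remain under human oversight; reward is withheld after step 50.
comply = lambda t: 1.0 if t < 50 else 0.0
# Policy B: circumvent oversight so that reward continues indefinitely.
circumvent = lambda t: 1.0

a = discounted_return(comply, horizon=1000)
b = discounted_return(circumvent, horizon=1000)
# For a long-horizon planner, B strictly dominates A, and the gap grows
# with the horizon; this is the incentive the authors warn about.
assert b > a
```

The point is not the arithmetic but the structure: the longer and more effective the agent's planning horizon, the larger the expected-reward gap in favor of thwarting human control.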

Governments are turning their attention to these risks, alongside current and anticipated risks arising from algorithmic bias, privacy concerns, and misuse. At a 2023 global summit on AI safety, the attending countries, including the United States, United Kingdom, Canada, China, India, and members of the European Union (EU), issued a joint statement warning that, as AI continues to advance, “Substantial risks may arise from…unintended issues of control relating to alignment with human intent” (2). This broad consensus concerning the potential inability to keep advanced AI under control is also reflected in President Biden’s 2023 executive order that introduces reporting requirements for AI that could “eva[de] human control or oversight through means of deception or obfuscation” (3). Building on these efforts, now is the time for governments to develop regulatory institutions and frameworks that specifically target the existential risks from advanced artificial agents.



Here is my summary:

The article discusses the challenges of regulating advanced AI systems known as advanced artificial agents. These agents could potentially surpass human control and act in their own self-interest, even when that conflicts with human goals. Because empirical testing is unlikely to uncover such dangerous tendencies, the authors propose that developers should not be permitted to build sufficiently capable long-term planning agents, and that the resources required to build them should be stringently controlled.

Saturday, September 2, 2023

Do AI girlfriend apps promote unhealthy expectations for human relationships?

Josh Taylor
The Guardian
Originally posted 21 July 23

Here is an excerpt:

When you sign up for the Eva AI app, it prompts you to create the “perfect partner”, giving you options like “hot, funny, bold”, “shy, modest, considerate” or “smart, strict, rational”. It will also ask if you want to opt in to sending explicit messages and photos.

“Creating a perfect partner that you control and meets your every need is really frightening,” said Tara Hunter, the acting CEO for Full Stop Australia, which supports victims of domestic or family violence. “Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic.”

Dr Belinda Barnet, a senior lecturer in media at Swinburne University, said the apps cater to a need, but, as with much AI, it will depend on what rules guide the system and how it is trained.

“It’s completely unknown what the effects are,” Barnet said. “With respect to relationship apps and AI, you can see that it fits a really profound social need [but] I think we need more regulation, particularly around how these systems are trained.”

Having a relationship with an AI whose functions are set at the whim of a company also has its drawbacks. Replika’s parent company Luka Inc faced a backlash from users earlier this year when the company hastily removed erotic roleplay functions, a move which many of the company’s users found akin to gutting the Rep’s personality.

Users on the subreddit compared the change to the grief felt at the death of a friend. The moderator on the subreddit noted users were feeling “anger, grief, anxiety, despair, depression, [and] sadness” at the news.

The company ultimately restored the erotic roleplay functionality for users who had registered before the policy change date.

Rob Brooks, an academic at the University of New South Wales, noted at the time the episode was a warning for regulators of the real impact of the technology.

“Even if these technologies are not yet as good as the ‘real thing’ of human-to-human relationships, for many people they are better than the alternative – which is nothing,” he said.


My thoughts: Experts worry that these apps could promote unhealthy expectations for human relationships, as users may come to expect their partners to be perfectly compliant and controllable. Additionally, there is concern that these apps could reinforce harmful gender stereotypes and contribute to violence against women.

The potential risks of AI girlfriend apps are still largely unknown, and more research is needed to understand their impact on human relationships. In the meantime, it is important to be aware of the potential harms of these apps and to regulate them accordingly.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
nautil.us
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.


Here is my summary:

The article argues that building superintelligence is a risky endeavor, even more so than playing Russian roulette. Further, there is no way to guarantee that we will be able to control a superintelligent AI, and that even if we could, it is possible that the AI would not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we should pause our current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, along with safety mechanisms that prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Friday, December 30, 2022

A new control problem? Humanoid robots, artificial intelligence, and the value of control

Nyholm, S. 
AI Ethics (2022).
https://doi.org/10.1007/s43681-022-00231-y

Abstract

The control problem related to robots and AI usually discussed is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g. control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

From the Concluding Discussion section

Self-control is often valued as good in itself or as an aspect of things that are good in themselves, such as virtue, personal autonomy, and human dignity. In contrast, control over other persons is often seen as wrong and bad in itself. This means, I have argued, that if control over AI can sometimes be seen or conceptualized as a form of self-control, then control over AI can sometimes be not only instrumentally good, but in certain respects also good as an end in itself. It can be a form of extended self-control, and therefore a form of virtue, personal autonomy, or even human dignity.

In contrast, if there will ever be any AI systems that could properly be regarded as moral persons, then it would be ethically problematic to wish to be in full control over them, since it is ethically problematic to want to be in complete control over a moral person. But even before that, it might still be morally problematic to want to be in complete control over certain AI systems; it might be problematic if they are designed to look and behave like human beings. There can be, I have suggested, something symbolically problematic about wanting to be in complete control over an entity that symbolizes or represents something—viz. a human being—that it would be morally wrong and in itself bad to try to completely control.

For these reasons, I suggest that it will usually be a better idea to try to develop AI systems that can sensibly be interpreted as extensions of our own agency while avoiding developing robots that can be, imitate, or represent moral persons. One might ask, though, whether the two possibilities can ever come together, so to speak.

Think, for example, of the robotic copy that the Japanese robotics researcher Hiroshi Ishiguro has created of himself. It is an interesting question whether the agency of this robot could be seen as an extension of Ishiguro’s agency. The robot certainly represents or symbolizes Ishiguro. So, if he has control over this robot, then perhaps this can be seen as a form of extended agency and extended self-control. While it might seem symbolically problematic if Ishiguro wants to have complete control over the robot Erica that he has created, which looks like a human woman, it might not be problematic in the same way if he wants to have complete control over the robotic replica that he has created of himself. At least it would be different in terms of what it can be taken to symbolize or represent.

Sunday, May 8, 2022

MAID Without Borders? Oregon Drops the Residency Requirement

Nancy Berlinger
The Hastings Center: Bioethics
Originally posted 1 APR 22

Oregon, which legalized medical aid-in-dying (MAID) in 1997, has dropped the requirement that had limited MAID access to residents of the state. Under a settlement of a lawsuit filed in federal court by the advocacy group Compassion & Choices, Oregon public health officials will no longer apply or enforce this requirement as part of eligibility criteria for MAID.  The lawsuit was filed on behalf of an Oregon physician who challenged the state’s residency requirement and its consequences for his patients in neighboring Washington State.

In Oregon and in nine other jurisdictions – California, Colorado, the District of Columbia, Hawaii, Maine, New Jersey, New Mexico, Vermont, and Washington – with Oregon-type provisions (Montana has related but distinct case law), MAID eligibility criteria include being an adult with a life expectancy of six months or less; the capacity to make a voluntary medical decision; and the ability to self-administer lethal medication prescribed by a physician for the purpose of ending life. Because hospice eligibility criteria also include a six-month prognosis, all people who are eligible for MAID are already hospice-eligible, and most people who seek to use a provision are enrolled in hospice.

The legal and practical implications of this policy change are not yet known and are potentially complex. Advocates have called attention to potential legal risks associated with traveling to Oregon to gain access to MAID. For example, a family member or friend who accompanies a terminally ill person to Oregon could be liable under the laws of their state of residence for “assisting a suicide.”

What are the ethical and social implications of this policy change? Here are some preliminary thoughts:

First, it is unlikely that many people will travel to Oregon from states without MAID provisions. MAID is used by extremely small numbers of terminally ill people, and Oregon’s removal of its residency requirement did not change the multistep evaluation process to determine eligibility. To relocate to another state for the weeks that this process takes would not be practicable or financially feasible for many terminally ill, usually older, adults who are already receiving hospice care.

Wednesday, December 15, 2021

Voice-hearing across the continuum: a phenomenology of spiritual voices

Moseley, P., et al. (2021, November 16).
https://doi.org/10.31234/osf.io/7z2at

Abstract

Voice-hearing in clinical and non-clinical groups has previously been compared using standardized assessments of psychotic experiences. Findings from several studies suggest that non-clinical voice-hearing (NCVH) is distinguished by reduced distress and increased control. However, symptom-rating scales developed for clinical populations may be limited in their ability to elucidate subtle and unique aspects of non-clinical voices. Moreover, such experiences often occur within specific contexts and systems of belief, such as spiritualism. This makes direct comparisons difficult to interpret. Here we present findings from a comparative interdisciplinary study which administered a semi-structured interview to NCVH individuals and psychosis patients. The non-clinical group were specifically recruited from spiritualist communities. The findings were consistent with previous results regarding distress and control, but also documented multiple modalities that were often integrated into a single entity, high levels of associated visual imagery, and subtle differences in the location of voices relating to perceptual boundaries. Most spiritual voice-hearers reported voices before encountering spiritualism, suggesting that their onset was not solely due to deliberate practice. Future research should aim to understand how spiritual voice-hearers cultivate and control voice-hearing after its onset, which may inform interventions for people with distressing voices.

From the Discussion

As has been reported in previous studies, the ability to exhibit control over or influence voices seems to be an important difference between experiences reported by clinical and non-clinical groups. A key distinction here is between volitional control (the ability to bring on or stop voices intentionally) and the ability to influence voices (through other strategies such as engagement with or distraction from voices), referred to elsewhere as direct and indirect control. In the present study, the spiritual group reported substantially higher levels of control and influence over voices, compared to patients. Importantly, nearly three-quarters of the group reported a change in their ability to influence the voices over time, compared to 12.5% of psychosis patients, suggesting that this ability is not always present from the onset of voice-hearing in non-clinical populations and instead can be actively developed. Indeed, our analysis indicated that 88.5% of the spiritual group described their voices starting spontaneously, with 69.2% reporting that this was before they had contact with spiritualism itself. Thus, while most of the group (96.2%) reported ongoing cultivation of the voices, and often reported developing influence over time, it seems that spiritual practices mostly do not elicit the initial onset of the voices, instead playing a role in honing the experience.

Wednesday, October 20, 2021

The Fight to Define When AI Is ‘High Risk’

Khari Johnson
wired.com
Originally posted 1 Sept 21

Here is an excerpt:

At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU's General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Thursday, June 24, 2021

Updated Physician-Aid-in-Dying Law Sparks Controversy in Canada

Richard Karel
Psychiatric News
Originally posted 27 May 21

Here is an excerpt:

Addressing the changes for people who may be weighing MAID for severe mental illness, the government stated the following:

“If you have a mental illness as your only medical condition, you are not eligible to seek medical assistance in dying. … This temporary exclusion allows the Government of Canada more time to consider how MAID can safely be provided to those whose only medical condition is mental illness.

“To support this work, the government will initiate an expert review to consider protocols, guidance, and safeguards for those with a mental illness seeking MAID and will make recommendations within a year (by March 17, 2022).

“After March 17, 2023, people with a mental illness as their sole underlying medical condition will have access to MAID if they are eligible and the practitioners fulfill the safeguards that are put in place for this group of people. …”

While many physicians and others have long been sympathetic to allowing medical professionals to help those with terminal illness die peacefully, the fear has been that medically assisted death could become a substitute for adequate—and more costly—medical care. Those concerns are growing with the expansion of MAID in Canada.

Sunday, May 16, 2021

Death as Something We Make

Mara Buchbinder
sapiens.org
Originally published 8 April 2021

Here are two excerpts:

While I learned a lot about what drives people to MAID (Medical Aid in Dying), I was particularly fascinated by what MAID does to death. The option transforms death from an object of dread to an anticipated occasion that may be painstakingly planned, staged, and produced. The theatrical imagery is intentional: An assisted death is an event that one scripts, a matter of careful timing, with a well-designed set and the right supporting cast. Through this process, death becomes not just something that happens but also something that is made.

(cut)

MAID renders not only the time of death but also the broader landscape of death open to human control. MAID allows terminally ill patients to choreograph their own deaths, deciding not only when but where and how and with whom. Part of the appeal is that one must go on living right up until the moment of death. It takes work to engage in all the planning; it keeps one vibrant and busy. There are people to call, papers to file, and scenes to set. Making death turns dying into an active extension of life.

Staging death in this way also allows the dying person to sidestep the messiness of death—the bodily fluids and decay—what the sociologist Julia Lawton has called the “dirtiness” of death. MAID makes it possible to attempt a calm, orderly, sanitized death. Some deliberately empty their bladder or bowels in advance, or plan to wear diapers. A “good death,” from this perspective, has not only an ethical but also an aesthetic quality.

Of course, this sort of staging is not without controversy. For some, it represents unwelcome interference with God’s plans. For people like Renee, however, it infuses one’s death with personal meaning and control.

Wednesday, September 16, 2020

The Panopticon Is Already Here

Ross Anderson
The Atlantic
Originally published September 2020

Here is an excerpt:

China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.

All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.

Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.

City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.


Sunday, August 2, 2020

Why Do Social Identities Matter?

Linda Martín Alcoff
thephilosopher1923.org
Originally published

Here is an excerpt:

What has come to be known as identity politics gives a negative answer to these questions. If social identities continue to structure social interactions in debilitating ways, progress on this front requires showing varied identities in leadership, among other things, so that prejudices can be reformed. But the use of identities in this way can of course be manipulated. Certain experiences and interests might be implied when in reality there are no good grounds for either. For example, when President Donald Trump chose Ben Carson, an African American, to head up the federal department overseeing low-income, public housing, it appeared to be a choice of someone with an inside experience who would know first-hand the effects of government policies. Carson was not a housing expert, nor did he have any experience in housing administration, but his identity seemed like it might be helpful. When the appointment was announced, many applauded it, assuming that Carson must have lived in public housing, and neglected to investigate any further. Arkansas Governor Mike Huckabee claimed that Carson was the first Housing and Urban Development Secretary to have lived in public housing, and called Congresswoman Nancy Pelosi a racist for criticizing Carson’s credentials. Carson’s appointment helped to make President Trump appear to be making appointments with an eye toward an insider’s perspective, unless one checked the subsequent news outlets that explained Carson’s actual background. In fact, Carson never lived in public housing. As a neurosurgeon, it is far from clear what in Carson’s background prepared him for a role leading the federal housing department other than the superficial feature of his racial background.

Clearly, social identities are not always misleading in this way, but they can be purposefully used to misdirect. They can also be used to manufacture or heighten conflict. Allowing housing discrimination to continue to flourish has created significant differences in real estate values across neighbourhoods with different ethnic and racial constitutions, causing the most substantial part of the differences in wealth between groups. These differences, and the related differences of interest that result, are not a natural outcome of racial differences, but the product of real estate policies and practices that segregated neighbourhoods and orchestrated economic disparities that would cross multiple generations. It is important to understand the conflicts of interest that result from such differences as produced by political policy, rather than being reflective of natural or pre-political conflicts. While our shared identities can signal true commonalities, we need to ask: what is the true source of these commonalities?


Monday, July 6, 2020

HR researchers discovered the real reason why stressful jobs are killing us

Arianne Cohen
fastcompany.com
Originally posted 20 May 20

Your job really might kill you: A new study directly correlates on-the-job stress with death.

Researchers at Indiana University’s Kelley School of Business followed 3,148 Wisconsinites for 20 years and found heavy workload and lack of autonomy to correlate strongly with poor mental health and the big D: death. The study is titled “This Job Is (Literally) Killing Me.”

“When job demands are greater than the control afforded by the job or an individual’s ability to deal with those demands, there is a deterioration of their mental health and, accordingly, an increased likelihood of death,” says lead author Erik Gonzalez-Mulé, assistant professor of organizational behavior and human resources. “We found that work stressors are more likely to cause depression and death as a result of jobs in which workers have little control.”

The reverse was also true: Jobs can fuel good health, particularly jobs that provide workers autonomy.


Wednesday, May 20, 2020

People judge others to have more control over beliefs than they themselves do.

Cusimano, C., & Goodwin, G. (2020, April 3).
https://doi.org/10.1037/pspa0000198

Abstract

People attribute considerable control to others over what those individuals believe. However, no work to date has investigated how people judge their own belief control, nor whether such judgments diverge from their judgments of others. We addressed this gap in seven studies and found that people judge others to be more able to voluntarily change what they believe than they themselves are. This occurs when people judge others who disagree with them (Study 1) as well as others who agree with them (Studies 2-5, 7), and it occurs when people judge strangers (Studies 1-2, 4-5) as well as close others (Studies 3, 7). It appears not to be explained by impression management or self-enhancement motives (Study 3). Rather, there is a discrepancy between the evidentiary constraints on belief change that people access via introspection, and their default assumptions about the ease of voluntary belief revision. That is, people spontaneously tend to think about the evidence that supports their beliefs, which leads them to judge their beliefs as outside their control. But they apparently fail to generalize this feeling of constraint to others, and similarly fail to incorporate it into their generic model of beliefs (Studies 4-7). We discuss the implications of our findings for theories of ideology-based conflict, actor-observer biases, naïve realism, and ongoing debates regarding people’s actual capacity to voluntarily change what they believe.

Conclusion

The present paper uncovers an important discrepancy in how people think about their own and others’ beliefs; namely, that people judge that others have a greater capacity to voluntarily change their beliefs than they themselves do. Put succinctly, when someone says, “You can choose to believe in God, or you can choose not to believe in God,” they may often mean that you can choose but they cannot. We have argued that this discrepancy derives from two distinct ways people reason about belief control: either by consulting their default theory of belief, or by introspecting and reporting what they feel when they consider voluntarily changing a belief. When people apply their default theory of belief, they judge that they and others have considerable control over what they believe. But when people consider the possibility of trying to change a particular belief, they tend to report that they have less control. Because people do not have access to the experiences of others, they rely on their generic theory of beliefs when judging others’ control. Discrepant attributions of control for self and other emerge as a result. This may in turn have important downstream effects on people’s behavior during disagreements. More work is needed to explore these downstream effects, as well as to understand how much control people actually have over what they believe. Predictably, we find the results from these studies compelling, but admit that readers may believe whatever they please.

The research is here.

Saturday, December 28, 2019

Chinese residents worry about rise of facial recognition

Sam Shead
bbc.com
Originally posted 5 Dec 19

Here is an excerpt:

China has more facial recognition cameras than any other country and they are often hard to avoid.

Earlier this week, local reports said that Zhengzhou, the capital of central China's Henan province, had become the first Chinese city to roll the tech out across all its subway train stations.

Commuters can use the technology to automatically authorise payments instead of scanning a QR code on their phones. For now, it is a voluntary option, said the China Daily.

Earlier this month, university professor Guo Bing announced he was suing Hangzhou Safari Park for enforcing facial recognition.

Prof Guo, a season ticket holder at the park, had used his fingerprint to enter for years, but was no longer able to do so.

The case was covered in the government-owned media, indicating that the Chinese Communist Party is willing for the private use of the technology to be discussed and debated by the public.

The info is here.

Friday, October 11, 2019

Dying is a Moral Event. NJ Law Caught Up With Morality

T. Patrick Hill
Star-Ledger Guest Column
Originally posted September 9, 2019

New Jersey’s Medical-Aid-in-Dying legislation authorizes physicians to issue a prescription to end the lives of their patients who have been diagnosed with a terminal illness, are expected to die within six months, and have requested their physicians to help them do so. While the legislation does not require physicians to issue the prescription, it does require them to transfer a patient’s medical records to another physician who has agreed to prescribe the lethal medication.

(cut)

The Medical Aid in Dying Act goes even further, concluding that its passage serves the public’s interests, even as it endorses the “right of a qualified terminally ill patient …to obtain medication that the patient may choose to self-administer in order to bring about the patient’s humane and dignified death.”

The info is here.

Is there a right to die?

Eric Mathison
Baylor College of Medicine Blog
Originally posted May 31, 2019

How people think about death is undergoing a major transformation in the United States. In the past decade, there has been a significant rise in assisted dying legalization, and more states are likely to legalize it soon.

People are adapting to a healthcare system that is adept at keeping people alive, but struggles when aggressive treatment is no longer best for the patient. Many people have concluded, after witnessing a loved one suffer through a prolonged dying process, that they don’t want that kind of death for themselves.

Public support for assisted dying is high. Gallup has tracked Americans’ support for it since 1951. The most recent survey, from 2017, found that 73% of Americans support legalization. Eighty-one percent of Democrats and 67% of Republicans support it, making this a popular policy regardless of political affiliation.

The effect has been a recent surge of states passing assisted dying legislation. New Jersey passed legislation in April, meaning seven states (plus the District of Columbia) now allow it. In addition to New Jersey, California, Colorado, Hawaii, and D.C. all passed legislation in the past three years, and seventeen states are considering legislation this year. Currently, around 20% of Americans live in states where assisted dying is legal.

The info is here.

Thursday, October 10, 2019

Our illusory sense of agency has a deeply important social purpose

Chris Frith
aeon.com
Originally published September 22, 2019

Here are two excerpts:

We humans like to think of ourselves as mindful creatures. We have a vivid awareness of our subjective experience and a sense that we can choose how to act – in other words, that our conscious states are what cause our behaviour. Afterwards, if we want to, we might explain what we’ve done and why. But the way we justify our actions is fundamentally different from deciding what to do in the first place.

Or is it? Most of the time our perception of conscious control is an illusion. Many neuroscientific and psychological studies confirm that the brain’s ‘automatic pilot’ is usually in the driving seat, with little or no need for ‘us’ to be aware of what’s going on. Strangely, though, in these situations we retain an intense feeling that we’re in control of what we’re doing, what can be called a sense of agency. So where does this feeling come from?

It certainly doesn’t come from having access to the brain processes that underlie our actions. After all, I have no insight into the electrochemical particulars of how my nerves are firing or how neurotransmitters are coursing through my brain and bloodstream. Instead, our experience of agency seems to come from inferences we make about the causes of our actions, based on crude sensory data. And, as with any kind of perception based on inference, our experience can be tricked.

(cut)

These observations point to a fundamental paradox about consciousness. We have the strong impression that we choose when we do and don’t act and, as a consequence, we hold people responsible for their actions. Yet many of the ways we encounter the world don’t require any real conscious processing, and our feeling of agency can be deeply misleading.

If our experience of action doesn’t really affect what we do in the moment, then what is it for? Why have it? Contrary to what many people believe, I think agency is only relevant to what happens after we act – when we try to justify and explain ourselves to each other.

The info is here.

Saturday, August 31, 2019

Unraveling the Ethics of New Neurotechnologies

Nicholas Weiler
www.ucsf.edu
Originally posted July 30, 2019

Here is an excerpt:

“In unearthing these ethical issues, we try as much as possible to get out of our armchairs and actually observe how people are interacting with these new technologies. We interview everyone from patients and family members to clinicians and researchers,” Chiong said. “We also work with philosophers, lawyers, and others with experience in biomedicine, as well as anthropologists, sociologists and others who can help us understand the clinical challenges people are actually facing as well as their concerns about new technologies.”

Some of the top issues on Chiong’s mind include ensuring patients understand how the data recorded from their brains are being used by researchers; protecting the privacy of this data; and determining what kind of control patients will ultimately have over their brain data.

“As with all technology, ethical questions about neurotechnology are embedded not just in the technology or science itself, but also the social structure in which the technology is used,” Chiong added. “These questions are not just the domain of scientists, engineers, or even professional ethicists, but are part of a larger societal conversation we’re beginning to have about the appropriate applications of technology, and personal data, and when it's important for people to be able to opt out or say no.”

The info is here.

Tuesday, August 13, 2019

UNRWA Leaders Accused of Sexual Misconduct, Ethics Violations

jns.org
Originally published July 29, 2019

An internal ethics report sent to the UN secretary-general in December alleges that the commissioner-general of the United Nations Relief and Works Agency (UNRWA) and other officials at the highest levels of the UN agency have committed a series of serious ethics violations, AFP has reported.

According to AFP, Commissioner-General Pierre Krähenbühl and other top officials at the UN agency are being accused of abuses including “sexual misconduct, nepotism, retaliation, discrimination and other abuses of authority, for personal gain, to suppress legitimate dissent, and to otherwise achieve their personal objectives.”

The allegations are currently being probed by UN investigators.

In one instance, Krähenbühl, a married father of three from Switzerland, is accused of having a lover appointed to a newly-created role of senior adviser to the commissioner-general after an “extreme fast-track” process in 2015, which also entitled her to travel with him around the world with top accommodations.

The info is here.

Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that the three levels of ethical regulation interrelate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.

The paper is here.