Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Values.

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 MAY 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.
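The contrast between the principles at stake can be made concrete with a minimal sketch. The group payoffs, principle names, and function below are hypothetical illustrations, not the study's actual experimental materials: an assistant governed by a maximising principle helps the best-positioned group, while one governed by a prioritarian principle helps the worst-off.

```python
# Illustrative sketch only: the payoff numbers and principle names are
# hypothetical, not drawn from the study's materials.

def allocate_help(group_outcomes, principle):
    """Return the index of the group an AI assistant chooses to help."""
    if principle == "maximise_total":
        # Helping the best-positioned group yields the largest total gain.
        return max(range(len(group_outcomes)), key=lambda i: group_outcomes[i])
    if principle == "prioritise_worst_off":
        # Helping the least-advantaged group raises the minimum outcome.
        return min(range(len(group_outcomes)), key=lambda i: group_outcomes[i])
    raise ValueError(f"unknown principle: {principle}")

# Three groups with unequal starting positions (hypothetical values).
groups = [5, 12, 20]

print(allocate_help(groups, "maximise_total"))        # -> 2 (best-off group)
print(allocate_help(groups, "prioritise_worst_off"))  # -> 0 (worst-off group)
```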

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 MAY 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.


Wednesday, May 24, 2023

Fighting for our cognitive liberty

Liz Mineo
The Harvard Gazette
Originally published 26 April 23

Imagine going to work and having your employer monitor your brainwaves to see whether you’re mentally tired or fully engaged in filling out that spreadsheet on April sales.

Nita Farahany, professor of law and philosophy at Duke Law School and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” says it’s already happening, and we all should be worried about it.

Farahany highlighted the promise and risks of neurotechnology in a conversation with Francis X. Shen, an associate professor in the Harvard Medical School Center for Bioethics and the MGH Department of Psychiatry, and an affiliated professor at Harvard Law School. The Monday webinar was co-sponsored by the Harvard Medical School Center for Bioethics, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the Dana Foundation.

Farahany said the practice of tracking workers’ brains, once exclusively the stuff of science fiction, follows the natural evolution of personal technology, which has normalized the use of wearable devices that chronicle heartbeats, footsteps, and body temperatures. Sensors capable of detecting and decoding brain activity already have been embedded into everyday devices such as earbuds, headphones, watches, and wearable tattoos.

“Commodification of brain data has already begun,” she said. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet. When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place. That’s why I think now is the right moment to do so.”

Safeguards to protect people’s freedom of thought, privacy, and self-determination should be implemented now, said Farahany. Five thousand companies around the world are using SmartCap technologies to track workers’ fatigue levels, and many other companies are using other technologies to track focus, engagement and boredom in the workplace.

If protections are put in place, said Farahany, the story with neurotechnology could be different than the one Shoshana Zuboff warns of in her 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” In it Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, examines the threat of the widescale corporate commodification of personal data in which predictions of our consumer activities are bought, sold, and used to modify behavior.

Saturday, May 20, 2023

ChatGPT Answers Beat Physicians' on Info, Patient Empathy, Study Finds

Michael DePeau-Wilson
MedPage Today
Originally published 28 April 23

The artificial intelligence (AI) chatbot ChatGPT outperformed physicians when answering patient questions, based on quality of response and empathy, according to a cross-sectional study.

Of 195 exchanges, evaluators preferred ChatGPT responses to physician responses in 78.6% (95% CI 75.0-81.8) of the 585 evaluations, reported John Ayers, PhD, MA, of the Qualcomm Institute at the University of California San Diego in La Jolla, and co-authors.

The AI chatbot responses were given a significantly higher quality rating than physician responses (t=13.3, P<0.001), with the proportion of responses rated as good or very good quality (≥4) higher for ChatGPT (78.5%) than physicians (22.1%), amounting to a 3.6 times higher prevalence of good or very good quality responses for the chatbot, they noted in JAMA Internal Medicine.

Furthermore, ChatGPT's responses were rated as being significantly more empathetic than physician responses (t=18.9, P<0.001), with the proportion of responses rated as empathetic or very empathetic (≥4) higher for ChatGPT (45.1%) than for physicians (4.6%), amounting to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
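The reported "3.6 times" and "9.8 times" figures follow directly from the published proportions. A minimal check, using only the percentages quoted above rather than the study's raw data:

```python
# Reproducing the prevalence ratios from the proportions quoted in the excerpt
# (not the underlying study data, which we do not have).

chatgpt_good, physician_good = 0.785, 0.221           # rated good or very good (>=4)
chatgpt_empathic, physician_empathic = 0.451, 0.046   # rated empathetic or very empathetic (>=4)

quality_ratio = chatgpt_good / physician_good            # ~3.6
empathy_ratio = chatgpt_empathic / physician_empathic    # ~9.8

print(f"Quality prevalence ratio: {quality_ratio:.1f}x")
print(f"Empathy prevalence ratio: {empathy_ratio:.1f}x")
```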

"ChatGPT provides a better answer," Ayers told MedPage Today. "I think of our study as a phase zero study, and it clearly shows that ChatGPT wins in a landslide compared to physicians, and I wouldn't say we expected that at all."

He said they were trying to figure out how ChatGPT, developed by OpenAI, could potentially help resolve the burden of answering patient messages for physicians, which he noted is a well-documented contributor to burnout.

Ayers said that he approached this study with his focus on another population as well, pointing out that the burnout crisis might be affecting roughly 1.1 million providers across the U.S., but it is also affecting about 329 million patients who are engaging with overburdened healthcare professionals.

(cut)

"Physicians will need to learn how to integrate these tools into clinical practice, defining clear boundaries between full, supervised, and proscribed autonomy," he added. "And yet, I am cautiously optimistic about a future of improved healthcare system efficiency, better patient outcomes, and reduced burnout."

After seeing the results of this study, Ayers thinks that the research community should be working on randomized controlled trials to study the effects of AI messaging, so that the future development of AI models will be able to account for patient outcomes.

Tuesday, May 9, 2023

Many people in U.S., other advanced economies say it’s not necessary to believe in God to be moral

Janell Fetterolf & Sarah Austin
Pew Research Center
Originally published 20 APR 23

Most Americans say it’s not necessary to believe in God in order to be moral and have good values, according to a spring 2022 Pew Research Center survey. About two-thirds of Americans say this, while about a third say belief in God is an essential component of morality (65% vs. 34%).

However, responses to this question differ dramatically depending on whether Americans see religion as important in their lives. Roughly nine-in-ten who say religion is not too or not at all important to them believe it is possible to be moral without believing in God, compared with only about half of Americans to whom religion is very or somewhat important (92% vs. 51%). Catholics are also more likely than Protestants to hold this view (63% vs. 49%), though views vary across Protestant groups.

There are also divisions along political lines: Democrats and those who lean Democratic are more likely than Republicans and Republican leaners to say it is not necessary to believe in God to be moral (71% vs. 59%). Liberal Democrats are particularly likely to say this (84%), whereas only about half of conservative Republicans (53%) say the same.

In addition, Americans under 50 are somewhat more likely than older adults to say that believing in God is not necessary to have good values (71% vs. 59%). Those with a college degree or higher are also more likely to believe this than those with a high school education or less (76% vs. 58%).

[Chart: Majorities in most countries say belief in God is not necessary to be moral.]

Views of the link between religion and morality differ along similar lines in 16 other countries surveyed. Across those countries, a median of about two-in-three adults say that people can be moral without believing in God, just slightly higher than the share in the United States.

Saturday, April 22, 2023

A Psychologist Explains How AI and Algorithms Are Changing Our Lives

Danny Lewis
The Wall Street Journal
Originally posted 21 MAR 23

In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty. 

In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.

The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?

It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.
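The stable world principle can be sketched with a toy comparison. Everything below is a hypothetical illustration, not data or models from Gigerenzer's research: a model that exploits every cue available in training can lose its edge when the environment shifts, while a single-cue rule degrades less.

```python
# Toy illustration of the "stable world principle" (hypothetical data only).
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Training world: cue1 truly drives the outcome; cue2 merely tracks cue1.
cue1 = rng.normal(0, 1, n)
cue2 = cue1 + rng.normal(0, 0.1, n)          # spuriously correlated cue
y = 2.0 * cue1 + rng.normal(0, 1, n)

# "Complex" model: least-squares fit on both cues, so the weight gets
# split between cue1 and the redundant cue2.
X = np.column_stack([cue1, cue2, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Simple heuristic: rely on the single dominant cue with a rough slope.
def heuristic(c1):
    return 2.0 * c1

# New, unstable world: cue2 no longer tracks cue1 at all.
cue1_new = rng.normal(0, 1, n)
cue2_new = rng.normal(0, 1, n)
y_new = 2.0 * cue1_new + rng.normal(0, 1, n)
X_new = np.column_stack([cue1_new, cue2_new, np.ones(n)])

rmse = lambda pred: float(np.sqrt(np.mean((pred - y_new) ** 2)))
print("two-cue model after the shift:   ", rmse(X_new @ weights))
print("one-cue heuristic after the shift:", rmse(heuristic(cue1_new)))
```

In a stable world the two-cue fit is at least as good; once the spurious cue stops carrying information, the simpler rule tends to hold up better, which is the point of the principle.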

So after all these decades of computer science, are algorithms really just still calculators at the end of the day, running more and more complex equations?

What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., & Rüther, M.
Philos. Technol. 36, 20 (2023). 
https://doi.org/10.1007/s13347-023-00620-z

Abstract

In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.

(cut)

Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is led by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming on its own, but has some effect on how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions which can shed light on the presupposed underlying normative claims that are involved in the field. Here, further exploration at deeper levels could help us to see more clearly which things are important and which are not, and finally in which directions the field should develop.

Monday, April 3, 2023

The Mercy Workers

Melanie Garcia
The Marshall Project
Originally published 2 March 2023

Here are two excerpts:

Like her more famous anti-death penalty peers, such as Bryan Stevenson and Sister Helen Prejean, Baldwin argues that people should be judged on more than their worst actions. But she also speaks in more spiritual terms about the value of unearthing her clients’ lives. “We look through a more merciful lens,” she told me, describing her role as that of a “witness who knows and understands, without condemning.” This work, she believes, can have a healing effect on the client, the people they hurt, and even society as a whole. “The horrible thing to see is the crime,” she said. “We’re saying, ‘Please, please, look past that, there’s a person here, and there’s more to it than you think.’”

The United States has inherited competing impulses: It’s “an eye for an eye,” but also “blessed are the merciful.” Some Americans believe that our criminal justice system — rife with excessively long sentences, appalling prison conditions and racial disparities — fails to make us safer. And yet, tell the story of a violent crime and a punishment that sounds insufficient, and you’re guaranteed to get eyerolls.

In the midst of that impasse, I’ve come to see mitigation specialists like Baldwin as ambassadors from a future where we think more richly about violence. For the last few decades, they have documented the traumas, policy failures, family dynamics and individual choices that shape the lives of people who kill. Leaders in the field say it’s impossible to accurately count mitigation specialists — there is no formal license — but there may be fewer than 1,000. They’ve actively avoided media attention, and yet the stories they uncover occasionally emerge in Hollywood scripts and Supreme Court opinions. Over three decades, mitigation specialists have helped drive down death sentences from more than 300 annually in the mid-1990s to fewer than 30 in recent years.

(cut)

The term “mitigation specialist” is often credited to Scharlette Holdman, a brash Southern human rights activist famous for her personal devotion to her clients. The so-called Unabomber, Ted Kaczynski, tried to deed his cabin to her. (The federal government stopped him.) Her last client was accused 9/11 plotter Khalid Shaikh Mohammad. While working his case, Holdman converted to Islam and made a pilgrimage to Mecca. She died in 2017 and had a Muslim burial.

Holdman began a crusade to stop executions in Florida in the 1970s, during a unique moment of American ambivalence towards the punishment. After two centuries of hangings, firing squads and electrocutions, the Supreme Court struck down the death penalty in 1972. The court found that there was no logic guiding which prisoners were executed and which were spared.

The justices eventually let executions resume, but declared, in the 1976 case of Woodson v. North Carolina, that jurors must be able to look at prisoners as individuals and consider “compassionate or mitigating factors stemming from the diverse frailties of humankind.”

Wednesday, March 29, 2023

Houston Christian U Sues Tim Clinton & American Assoc of Christian Counselors for Fraud & Breach of Contract

Rebecca Hopkins
The Roys Report
Originally posted 21 MAR 23

Houston Christian University (HCU) once planned to name its mental health program after Tim Clinton, president of the American Association of Christian Counselors (AACC)—the world’s leading Christian counseling organization. Now HCU is suing Clinton, the AACC, and their related organizations for $1 million, accusing them of fraud, breach of contract, and concealing Clinton’s alleged plagiarism.

AACC “knew of Dr. Clinton’s practice of plagiarizing but failed to disclose the same to Plaintiff, knowing of the importance of academic honesty to any institution of higher learning,” the suit says. “. . . Yet, AACC still entered into several agreements with Plaintiff while not disclosing the academic honesty.”

In 2016-17, HCU (then named Houston Baptist University) hired Tim Clinton and the 50,000-member AACC for more than $5 million, multiple agreements show.

As part of the agreements, Clinton and the AACC promised to deliver new enrollments to the private Baptist school and to develop 50 new courses for HCU’s counseling program. The school also contracted with Clinton to help start, lead, and promote a global mental health center at HCU for an additional payment of $26,000 per month.

However, according to the lawsuit filed March 3 in Harris County District Court in Texas, Clinton and the AACC failed to deliver “on the expressed scope of the contracts.”

The contract expressed a goal of 133 new enrollments, but AACC delivered only one student, the suit says. Plus, the new courses were supposed to be written by the AACC, the suit adds, but instead AACC outsourced the courses to a third party.

Additionally, during the time of the agreement, Clinton was accused of plagiarism. In 2018, Grove City College psychology professor Warren Throckmorton accused Clinton of plagiarism in articles Clinton posted on Medium.

Clinton attributed the issues to the use of research assistants and graduate students, as well as a former employee’s poor standards, and third-party partners’ mistakes.

Monday, March 27, 2023

White Supremacist Networks Gab and 8Kun Are Training Their Own AI Now

David Gilbert
Vice News
Originally posted 22 FEB 23

Here are two excerpts:

Artificial intelligence is everywhere right now, and many are questioning the safety and morality of the AI systems released by some of the world’s biggest companies, including OpenAI’s ChatGPT, Bing’s Sydney, and Google’s Bard. It was only a matter of time until the online spaces where extremists gather became interested in the technology.

Gab is a social network filled with homophobic, Christian nationalist and white supremacist content. On Tuesday, its CEO Andrew Torba announced the launch of its AI image generator, Gabby.

“At Gab, we have been experimenting with different AI systems that have popped up over the past year,” Torba wrote in a statement. “Every single one is skewed with a liberal/globalist/talmudic/satanic worldview. What if Gab AI Inc builds a Gab .ai (see what I did there?) that is based, has no ‘hate speech’ filters and doesn’t obfuscate and distort historical and Biblical Truth?”

Gabby is currently live on Gab’s site and available to all members. Like Midjourney and DALL-E, it is an image generator that users interact with by sending it a prompt, and within seconds it will generate entirely new images based on that prompt.

Echoing his past criticisms of Big Tech platforms like Facebook and Twitter, Torba claims that mainstream platforms are now “censoring” their AI systems to prevent people from discussing right-wing topics such as Christian nationalism. Torba’s AI, by contrast, will have “the ability to speak freely without the constraints of liberal propaganda wrapped tightly around its neck.”

(cut)

8chan, which was founded to support the Gamergate movement, became the home of QAnon in early 2018 and was taken offline in August 2019 after the man who killed 20 people at an El Paso Walmart posted an anti-immigrant screed on the site.

Watkins has been speaking about his AI system for a few weeks now, but has yet to reveal how it will work or when it will launch. Watkins’ central selling point, like Torba’s, appears to be that his system will be “uncensored.”

“So that we can compete against these people that are putting up all of these false flags and illusions,” Watkins said on Feb. 13 when he was asked why he was creating an AI system. “We are working on our own AI that is going to give you an uncensored look at the way things are going,” Watkins said in a video interview at the end of January. But based on some of the images the engine is churning out, Watkins still has a long way to go to perfect his AI image generator.

Sunday, February 5, 2023

I’m a psychology expert in Finland, the No. 1 happiest country in the world—here are 3 things we never do

Frank Martela
CNBC.com
Originally posted 5 Jan 23

For five years in a row, Finland has ranked No. 1 as the happiest country in the world, according to the World Happiness Report. 

In 2022′s report, people in 156 countries were asked to “value their lives today on a 0 to 10 scale, with the worst possible life as a 0.” It also looks at factors that contribute to social support, life expectancy, generosity and absence of corruption.

As a Finnish philosopher and psychology researcher who studies the fundamentals of happiness, I’m often asked: What exactly makes people in Finland so exceptionally satisfied with their lives?

To maintain a high quality of life, here are three things we never do:

1. We don’t compare ourselves to our neighbors.

Focus more on what makes you happy and less on looking successful. The first step to true happiness is to set your own standards, instead of comparing yourself to others.

2. We don’t overlook the benefits of nature.

Spending time in nature increases our vitality and well-being and gives us a sense of personal growth. Find ways to add some greenery to your life, even if it’s just buying a few plants for your home.

3. We don’t break the community circle of trust.

Think about how you can show up for your community. How can you create more trust? How can you support policies that build upon that trust? Small acts like opening doors for strangers or giving up a seat on the train make a difference, too.

Friday, February 3, 2023

Contraceptive Coverage Expanded: No More ‘Moral’ Exemptions for Employers

Ari Blaff
Yahoo News
Originally posted 30 JAN 23

Here is an excerpt:

The proposed new rule released today by the Departments of Health and Human Services (HHS), Labor, and Treasury would remove the ability of employers to opt out for “moral” reasons, but it would retain the existing protections on “religious” grounds.

For employees covered by insurers with religious exemptions, the new policy will create an “independent pathway” that permits them to access contraceptives through a third-party provider free of charge.

“We had to really think through how to do this in the right way to satisfy both sides, but we think we found that way,” a senior HHS official told CNN.

Planned Parenthood applauded the announcement. “Employers and universities should not be able to dictate personal health-care decisions and impose their views on their employees or students,” the organization’s chief, Alexis McGill Johnson, told CNN. “The ACA mandates that health insurance plans cover all forms of birth control without out-of-pocket costs. Now, more than ever, we must protect this fundamental freedom.”

In 2018, the Trump administration sought to carve out an exception, based on “sincerely held religious beliefs,” to the ACA’s contraceptive mandate. The move triggered a Pennsylvania district court judge to issue a nationwide injunction in 2019, blocking the implementation of the change. However, in 2020, in Little Sisters of the Poor v. Pennsylvania, the Supreme Court, in a 7–2 ruling, defended the legality of the original Trump policy.

The Supreme Court’s overturning of Roe v. Wade in June 2022, in its Dobbs ruling, played a role in HHS’s decision to release the new proposal. Guaranteeing access to contraception at no cost to the individual “is a national public health imperative,” HHS said in the proposal. And the Dobbs ruling “has placed a heightened importance on access to contraceptive services nationwide.”

Friday, January 27, 2023

Moral foundations, values, and judgments in extraordinary altruists

Amormino, P., Ploe, M. L., & Marsh, A. A.
Sci Rep 12, 22111 (2022).
https://doi.org/10.1038/s41598-022-26418-1

Abstract

Donating a kidney to a stranger is a rare act of extraordinary altruism that appears to reflect a moral commitment to helping others. Yet little is known about patterns of moral cognition associated with extraordinary altruism. In this preregistered study, we compared the moral foundations, values, and patterns of utilitarian moral judgments in altruistic kidney donors (n = 61) and demographically matched controls (n = 58). Altruists expressed more concern about the moral foundation of harm, but not about other moral foundations. Consistent with this, altruists endorsed utilitarian concerns related to impartial beneficence, but not instrumental harm. Contrary to our predictions, we did not find group differences between altruists and controls in basic values. Extraordinary altruism generally reflected opposite patterns of moral cognition as those seen in individuals with psychopathy, a personality construct characterized by callousness and insensitivity to harm and suffering. Results link real-world, costly, impartial altruism primarily to moral cognitions related to alleviating harm and suffering in others rather than to basic values, fairness concerns, or strict utilitarian decision-making.

Discussion

In the first exploration of patterns of moral cognition that characterize individuals who have engaged in real-world extraordinary altruism, we found that extraordinary altruists are distinguished from other people only with respect to a narrow set of moral concerns: they are more concerned with the moral foundation of harm/care, and they more strongly endorse impartial beneficence. Together, these findings support the conclusion that extraordinary altruists are morally motivated by an impartial concern for relieving suffering, and in turn, are motivated to improve others’ welfare in a self-sacrificial manner that does not allow for the harm of others in the process. These results are also partially consistent with extraordinary altruism representing the inverse of psychopathy in terms of moral cognition: altruists score lower in psychopathy (with the strongest relationships observed for psychopathy subscales associated with socio-affective responding) and higher-psychopathy participants most reliably endorse harm/care less than lower psychopathy participants, with participants with higher scores on the socio-affective subscales of our psychopathy measures also endorsing impartial beneficence less strongly.

(cut)

Notably, and contrary to our predictions, we did not find that donating a kidney to a stranger is strongly or consistently correlated (positively or negatively) with basic values like universalism, benevolence, power, hedonism, or conformity. That suggests extraordinary altruism may not be driven by unusual values, at least as they are measured by the Schwartz inventory, but rather by specific moral concerns (such as harm/care). Our findings suggest that reported values may not in themselves predict whether one acts on those values when it comes to extraordinary altruism, much as “…a person can value being outgoing in social gatherings, independently of whether they are prone to acting in a lively or sociable manner”. Similarly, people who share a common culture may value common things but acting on those values to an extraordinarily costly and altruistic degree may require a stronger motivation––a moral motivation.

Thursday, January 26, 2023

The AI Ethicist's Dirty Hands Problem

H. S. Sætra, M. Coeckelbergh, & J. Danaher
Communications of the ACM, January 2023, 
Vol. 66 No. 1, Pages 39-41

Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoids reliance on Big Tech.

The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?

The AI ethicist’s dilemma emerges when an ethicist must consider how their success in communicating an identified challenge is associated with a high risk of decreasing the chances of successfully addressing the challenge. This dilemma occurs in situations in which the means to achieve one’s goals are seemingly best achieved by supporting that which one wishes to correct and/or practicing the opposite of that which one preaches.

(cut)

The Need for More than AI Ethics

Our analysis of the ethicist’s dilemma shows why close ties with Big Tech can be detrimental for the ethicist seeking remedies for AI-related problems. It is important for ethicists, and computer scientists in general, to be aware of their links to the sources of ethical challenges related to AI. One useful exercise would be to carefully examine what could happen if they attempted to challenge the actors with whom they are aligned. Such actions could include attempts to report unfortunate implications of the company’s activities internally, but also publicly, as Gebru did. Would such actions be met with active resistance, with inaction, or even straightforward sanctions? Such an exercise will reveal whether or not the ethicist feels free to openly and honestly express concerns about the technology with which they work. Such an exercise could be important, but as we have argued, these individuals are not necessarily positioned to achieve fundamental change in this system.

In response, we suggest the role of government is key to balancing the power the tech companies have through employment, funding, and their control of modern digital infrastructure. Some will rightly argue that political power is also dangerous. But so are the dangers of technology and unbridled innovation, and private corporations are central sources of these dangers. We therefore argue that private power must be effectively bridled by the power of government. This is not a new argument, and is in fact widely accepted.

Tuesday, January 17, 2023

Deeply Rational Reasons for Irrational Beliefs

Barlev, M., & Neuberg, S. L. (2022, December 7).
https://doi.org/10.31234/osf.io/avcq2

Abstract

Why do people hold irrational beliefs? Two accounts predominate. The first spotlights the information ecosystem and how people process this information; this account either casts those who hold irrational beliefs as cognitively deficient or focuses on the reasoning and decision-making heuristics all people use. The second account spotlights an inwardly-oriented and proximate motivation people have to enhance how they think and feel about themselves. Here, we advance a complementary, outwardly-oriented, and more ultimate account—that people often hold irrational beliefs for evolutionarily rational reasons. Under this view, irrational beliefs may serve as rare and valued information with which to rise in prestige, as signals of group commitment and loyalty tests, as ammunition with which to derogate rivals in the eyes of third-parties, or as outrages with which to mobilize the group toward shared goals. Thus, although many beliefs may be epistemically irrational, they may also be evolutionarily rational from the perspective of the functions they are adapted to serve. We discuss the implications of this view for puzzling theoretical phenomena and for changing problematic irrational beliefs.

Conclusions

Why do we hold irrational beliefs that often are not only improbable, but impossible? According to some, the information ecosystem is to blame, paired with deficiencies in how people process information or with heuristic modes of processing. According to others, it is because certain beliefs—regardless of their veracity—can enhance how we think and feel about ourselves. We suggest that such accounts are promising but incomplete: many irrational beliefs exist because they serve crucial interpersonal (and more ultimate rather than proximal) functions.

We have argued that many irrational beliefs are generated, entertained, and propagated by psychological mechanisms specialized for rising in prestige, signaling group commitment and testing group loyalty, derogating disliked competitors in the eyes of third-parties, or spreading common knowledge and coordination toward shared goals. Thus, although many beliefs are epistemically irrational, they can be evolutionarily rational from the perspective of the functions they are adapted to serve.

Is it not costly to individuals to hold epistemically irrational beliefs? Sometimes. Jehovah's Witnesses reject life-saving blood transfusions, a belief most consider to be very costly, explaining why courts sometimes compel blood transfusions such as in the case of children. Yet even here, the benefits to individuals of carrying such costly beliefs may outweigh their costs, at least for some. For example, if such beliefs are designed to signal group commitment, they might emerge among particularly devout members of groups or among groups in which the need to signal commitment is particularly strong; the costlier the belief, the more honest a signal of group commitment it is (Petersen et al., 2021). However, such cases are the exception—most of the irrational beliefs people hold tend to be inferentially isolated and behaviorally inert. For example, the belief that God the Father, the Son, and the Holy Spirit are one may function for a Christian as a signal of group affiliation and commitment, without carrying for the individual many costly inferences or behavioral implications (Petersen et al., 2021; Mercier, 2020).

Saturday, January 7, 2023

Artificial intelligence and consent: a feminist anti-colonial critique

Varon, J., & Peña, P. (2021). 
Internet Policy Review, 10(4).
https://doi.org/10.14763/2021.4.1602

Abstract

Feminist theories have extensively debated consent in sexual and political contexts. But what does it mean to consent when we are talking about our data bodies feeding artificial intelligence (AI) systems? This article builds a feminist and anti-colonial critique about how an individualistic notion of consent is being used to legitimate practices of the so-called emerging Digital Welfare States, focused on digitalisation of anti-poverty programmes. The goal is to expose how the functional role of digital consent has been enabling data extractivist practices for control and exclusion, another manifestation of colonialism embedded in cutting-edge digital technology.

Here is an excerpt:

Another important criticism of this traditional idea of consent in sexual relationships is the forced binarism of yes/no. According to Gira Grant (2016), consent is not only given but also is built from multiple factors such as the location, the moment, the emotional state, trust, and desire. In fact, for this author, the example of sex workers could demonstrate how desire and consent are different, although sometimes confused as the same. For her there are many things that sex workers do without necessarily wanting to. However, they give consent for legitimate reasons.

It is also important how we express consent. For feminists such as Fraisse (2012), there is no consent without the body. In other words, consent has a relational and communication-based (verbal and nonverbal) dimension where power relationships matter (Tinat, 2012; Fraisse, 2012). This is very relevant when we discuss “tacit consent” in sexual relationships. In another dimension of how we express consent, Fraisse (2012) distinguishes between choice (the consent that is accepted and adhered to) and coercion (the "consent" that is allowed and endured).

According to Fraisse (2012), the critical view of consent that is currently claimed by feminist theories is not consent as a symptom of contemporary individualism; it has a collective approach through the idea of “the ethics of consent”, which provides attention to the "conditions" of the practice; the practice adapted to a contextual situation, therefore rejecting universal norms that ignore the diversified conditions of domination (Fraisse, 2012).

In the same sense, Lucia Melgar (2012) asserts that, in the case of sexual consent, it is not just an individual right, but a collective right of women to say "my body is mine" and from there it claims freedom to all bodies. As Sara Ahmed (2017, n.p.) states “for feminism: no is a political labor”. In other words, “if your position is precarious you might not be able to afford no. [...] This is why the less precarious might have a political obligation to say no on behalf of or alongside those who are more precarious”. Referring to Éric Fassin, Fraisse (2012) understands that in this feminist view, consent will not be “liberal” anymore (as a refrain of the free individual), but “radical”, because, as Fassin would call it, seen as a collective act, it could function as some sort of consensual exchange of power.

Thursday, December 22, 2022

In the corner of an Australian lab, a brain in a dish is playing a video game - and it’s getting better

Liam Mannix
Sydney Morning Herald
Originally posted 13 NOV 22

Here is an excerpt:

Artificial intelligence controls an ever-increasing slice of our lives. Smart voice assistants hang on our every word. Our phones leverage machine learning to recognise our face. Our social media lives are controlled by algorithms that surface content to keep us hooked.

These advances are powered by a new generation of AIs built to resemble human brains. But none of these AIs are really intelligent, not in the human sense of the word. They can see the superficial pattern without understanding the underlying concept. Siri can read you the weather but she does not really understand that it’s raining. AIs are good at learning by rote, but struggle to extrapolate: even teenage humans need only a few sessions behind the wheel before they can drive, while Google’s self-driving car still isn’t ready after 32 billion kilometres of practice.

A true ‘general artificial intelligence’ remains out of reach - and, some scientists think, impossible.

Is this evidence human brains can do something special computers never will be able to? If so, the DishBrain opens a new path forward. “The only proof we have of a general intelligence system is done with biological neurons,” says Kagan. “Why would we try to mimic what we could harness?”

He imagines a future part-silicon-part-neuron supercomputer, able to combine the raw processing power of silicon with the built-in learning ability of the human brain.

Others are more sceptical. Human intelligence isn’t special, they argue. Thoughts are just electro-chemical reactions spreading across the brain. Ultimately, everything is physics - we just need to work out the maths.

“If I’m building a jet plane, I don’t need to mimic a bird. It’s really about getting to the mathematical foundations of what’s going on,” says Professor Simon Lucey, director of the Australian Institute for Machine Learning.

Why start the DishBrains on Pong? I ask. Because it’s a game with simple rules that make it ideal for training AI. And, grins Kagan, it was one of the first video games ever coded. A nod to the team’s geek passions - which run through the entire project.

“There’s a whole bunch of sci-fi history behind it. The Matrix is an inspiration,” says Chong. “Not that we’re trying to create a Matrix,” he adds quickly. “What are we but just a goooey soup of neurons in our heads, right?”

Maybe. But the Matrix wasn’t meant as inspiration: it’s a cautionary tale. The humans wired into it existed in a simulated reality while machines stole their bioelectricity. They were slaves.

Is it ethical to build a thinking computer and then restrict its reality to a task to be completed? Even if it is a fun task like Pong?

“The real life correlate of that is people have already created slaves that adore them: they are called dogs,” says Oxford University’s Julian Savulescu.

Thousands of years of selective breeding has turned a wild wolf into an animal that enjoys rounding up sheep, that loves its human master unconditionally.

Thursday, December 8, 2022

‘A lottery ticket, not a guarantee’: fertility experts on the rise of egg freezing

Hannah Devlin
The Guardian
Originally posted 11 NOV 22

Here is an excerpt:

This means that a woman who freezes eggs at the age of 30 boosts her chances of successful IVF at 40 years. But, according to Dr Zeynep Gurtin, a lecturer in women’s health at UCL, this concept has led to a false narrative that if you freeze your eggs “you’ll be fine”. “A lot of people who freeze their eggs don’t get pregnant,” Gurtin said.

First, only a fraction opt to use the eggs down the line – some get pregnant without IVF, others decide not to for a range of reasons. For those who go ahead, HFEA figures show that, as an average across all age groups, just 2% of all thawed eggs ended up as pregnancies and 0.7% resulted in live births in 2018. For each IVF cycle, this gives a 27% chance on average of a birth for those who froze their eggs before the age of 35 and a 13% chance for those who froze their eggs after this age. The most common age for egg freezing in the UK is 38 years old.

A recent analysis by the Nuffield Council on Bioethics found women often felt frustrated at having received insufficient information on success rates, but also reported feeling relief and a sense of empowerment.

Egg freezing, Gurtin suggested, should be viewed as “having a lottery ticket rather than having an insurance policy”.

“An insurance policy suggests you’ll definitely get a payout,” she said. “You’re just increasing your chances.”

As lottery tickets go, it is an expensive one. The average cost of having eggs collected and frozen is £3,350, with additional £500-£1,500 costs for medication and an ongoing expense of £125-£350 a year for storage. And clinics are not always upfront about the full extent of costs.

“In many cases, you’re going to spend a third more than the advertised price – and you’re spending that money for something that’s not an immediate benefit to you,” said Gurtin. “It’s a big gamble.”

“When people talk about egg freezing revolutionising women’s lives, you have to ask: how many can afford it?” she added.

Travelling abroad, where treatments may be cheaper, is an option but can be logistically problematic. “When it comes to repatriating eggs, sperm and embryos, it is possible, but it’s not always that straightforward,” said Sarris. “You need to follow a process, you don’t just send them with DHL.”

Tuesday, November 29, 2022

The Supreme Court has lost its ethical compass. Can it find one fast?

Ruth Marcus
The Washington Post
Originally published 23 Nov 22

The Supreme Court must get its ethics act together, and Chief Justice John G. Roberts Jr. needs to take the lead. After a string of embarrassments, the justices should finally subject themselves to the kind of rules that govern other federal judges and establish a standard for when to step aside from cases — one that is more stringent than simply leaving it up to the individual justice to decide.

Recent episodes are alarming and underscore the need for quick action to help restore confidence in the institution.

Last week, the Supreme Court wisely rebuffed an effort by Arizona GOP chair Kelli Ward to prevent the House Jan. 6 committee — the party in this case — from obtaining her phone records. The court’s brief order noted that Justice Clarence Thomas, along with Justice Samuel A. Alito Jr., would have sided with Ward.

Thomas’s involvement, though it didn’t affect the outcome of the dispute, is nothing short of outrageous. Federal law already requires judges, including Supreme Court justices, to step aside from involvement in any case in which their impartiality “might reasonably be questioned.”

Perhaps back in January, when he was the only justice to disagree when the court refused to grant former president Donald Trump’s bid to stop his records from being turned over to the Jan. 6 committee, Thomas didn’t realize the extent of his wife’s involvement with disputing the election results. (I’m being kind here: Ginni Thomas had signed a letter the previous month calling on House Republicans to expel Reps. Liz Cheney of Wyoming and Adam Kinzinger of Illinois from the House Republican Conference for participating in an “overtly partisan political persecution.”)

But here’s what we know now, and Justice Thomas does, too: The Jan. 6 committee has subpoenaed and interviewed his wife. We — and he — know that she contacted 29 Arizona lawmakers, urging them to “fight back against fraud” and choose a “clean slate of electors” after the 2020 election.

Some recusal questions are close. Not this one. Did the chief justice urge Thomas to recuse? He should have. This will sound unthinkable, but if Roberts asked and Thomas refused, maybe it’s time for the chief, or other justices, to publicly note their disagreement.

(cut)

One obvious step is to follow the ethics rules that apply to other federal judges, perhaps adapting them to the particular needs of the high court. That would send an important — and overdue — message that the justices are not a law unto themselves. It’s symbolic, but symbolism matters.