Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Disinformation.

Thursday, March 21, 2024

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Nightingale, S. J., & Farid, H. (2022).
Proceedings of the National Academy of Sciences, 119(8).

Abstract

Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from—and more trustworthy than—real faces.

Here is part of the Discussion section:

Synthetically generated faces are not just highly photorealistic; they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces, which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media. Safeguards could include, for example, incorporating robust watermarks into the image and video synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.
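The watermarking safeguard the authors propose is worth making concrete. Below is a minimal, purely illustrative embed-then-detect sketch using least-significant-bit (LSB) encoding; production-grade "robust" watermarks are instead built into the synthesis network itself or use frequency-domain encodings that survive compression and cropping. The tag bits, function names, and stand-in image here are all hypothetical, not anything from the paper.

```python
# Toy illustration of the embed-then-detect idea behind synthetic-media
# watermarking. Real robust watermarks are baked into the synthesis network
# or use spread-spectrum/frequency-domain encodings; this LSB scheme is
# fragile and purely pedagogical.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed_watermark(image: np.ndarray, bits: np.ndarray = WATERMARK) -> np.ndarray:
    """Write the tag into the least-significant bits of the first pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, set tag bit
    return marked

def detect_watermark(image: np.ndarray, bits: np.ndarray = WATERMARK) -> bool:
    """Check whether the expected tag sits in the LSBs (the downstream ID step)."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: bits.size] & 1, bits))

if __name__ == "__main__":
    synthetic = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    tagged = embed_watermark(synthetic)
    print(detect_watermark(tagged))     # True: reliable downstream identification
    print(detect_watermark(synthetic))  # almost certainly False on an untagged image
```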

Here are some important points:

This research raises concerns about the potential for misuse of AI-generated faces in areas like deepfakes and disinformation campaigns.

It also opens up interesting questions about how we perceive trust and authenticity in our increasingly digital world.

Monday, December 20, 2021

Parents protesting 'critical race theory' identify another target: Mental health programs

Tyler Kingkade and Mike Hixenbaugh
NBC News
Originally posted November 15, 2021

At a September school board meeting in Southlake, Texas, a parent named Tara Eddins strode to the lectern during the public comment period and demanded to know why the Carroll Independent School District was paying counselors “at $90K a pop” to give students lessons on suicide prevention.

“At Carroll ISD, you are actually advertising suicide,” Eddins said, arguing that many parents in the affluent suburban school system have hired tutors because the district’s counselors are too focused on mental health instead of helping students prepare for college.

(cut)

In Carmel, Indiana, activists swarmed school board meetings this fall to demand that a district fire its mental health coordinator from what they said was a “dangerous, worthless” job. And in Fairfax County, Virginia, a national activist group condemned school officials for sending a survey to students that included questions like “During the past week, how often did you feel sad?”

Many of the school programs under attack fall under the umbrella of social emotional learning, or SEL, a teaching philosophy popularized in recent years that aims to help children manage their feelings and show empathy for others. Conservative groups argue that social emotional learning has become a “Trojan horse” for critical race theory, a separate academic concept that examines how systemic racism is embedded in society. They point to SEL lessons that encourage children to celebrate diversity, sometimes introducing students to conversations about race, gender and sexuality.

Activists have accused school districts of using the programs to ask children invasive questions — about their feelings, sexuality and the way race shapes their lives — as part of a ploy to “brainwash” them with liberal values and to trample parents’ rights. Groups across the country recently started circulating forms to get parents to opt their children out of surveys designed to measure whether students are struggling with their emotions or being bullied, describing the efforts as “data mining” and an invasion of privacy.

Monday, February 11, 2019

Escape the echo chamber

By C Thi Nguyen
aeon.co
Originally posted April 9, 2018

Here is an excerpt:

Epistemic bubbles also threaten us with a second danger: excessive self-confidence. In a bubble, we will encounter exaggerated amounts of agreement and suppressed levels of disagreement. We’re vulnerable because, in general, we actually have very good reason to pay attention to whether other people agree or disagree with us. Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly. This is why we might do our homework in study groups, and have different laboratories repeat experiments. But not all forms of corroboration are meaningful. Ludwig Wittgenstein says: imagine looking through a stack of identical newspapers and treating each next newspaper headline as yet another reason to increase your confidence. This is obviously a mistake. The fact that The New York Times reports something is a reason to believe it, but any extra copies of The New York Times that you encounter shouldn’t add any extra evidence.

But outright copies aren’t the only problem here. Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection. The group’s unanimity is simply an echo of my selection criterion. It’s easy to forget how carefully pre-screened the members are, how epistemically groomed social media circles might be.
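Nguyen's point about corroboration can be put in simple Bayesian terms: each genuinely independent report should update your odds once, while duplicate newspapers or pre-screened group members should add nothing beyond the first report. The toy sketch below illustrates this; the prior and the per-source likelihood ratio are illustrative assumptions, not figures from the essay.

```python
# Toy Bayesian sketch of the corroboration point: independent reports add
# evidence, duplicated or selection-filtered sources do not. All numbers
# here (prior, source reliability) are illustrative assumptions.

def posterior(prior: float, likelihood_ratio: float, independent_reports: int) -> float:
    """Update the odds once per genuinely independent report, then convert back."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** independent_reports
    return odds / (1 + odds)

PRIOR = 0.5   # initial credence in the claim
LR = 3.0      # each independent, reasonably reliable source triples the odds

# Five different outlets independently report the claim:
print(posterior(PRIOR, LR, independent_reports=5))  # ~0.996

# Five copies of the same newspaper (Wittgenstein's stack), or five members
# of a group selected *because* they already agree: one independent report.
print(posterior(PRIOR, LR, independent_reports=1))  # 0.75
```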


Tuesday, March 13, 2018

Cognitive Ability and Vulnerability to Fake News

David Z. Hambrick and Madeline Marquardt
Scientific American
Originally posted on February 6, 2018

“Fake news” is Donald Trump’s favorite catchphrase. Since the election, it has appeared in some 180 tweets by the President, decrying everything from accusations of sexual assault against him to the Russian collusion investigation to reports that he watches up to eight hours of television a day. Trump may just use “fake news” as a rhetorical device to discredit stories he doesn’t like, but there is evidence that real fake news is a serious problem. As one alarming example, an analysis by the internet media company BuzzFeed revealed that during the final three months of the 2016 U.S. presidential campaign, the 20 most popular false election stories generated around 1.3 million more Facebook engagements—shares, reactions, and comments—than did the 20 most popular legitimate stories. The most popular fake story was “Pope Francis Shocks World, Endorses Donald Trump for President.”

Fake news can distort people’s beliefs even after being debunked. For example, repeated over and over, a story such as the one about the Pope endorsing Trump can create a glow around a political candidate that persists long after the story is exposed as fake. A study recently published in the journal Intelligence suggests that some people may have an especially difficult time rejecting misinformation.
