Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Manipulation.

Sunday, May 7, 2023

Stolen elections: How conspiracy beliefs during the 2020 American presidential elections changed over time

Wang, H., & Van Prooijen, J. (2022).
Applied Cognitive Psychology.
https://doi.org/10.1002/acp.3996

Abstract

Conspiracy beliefs have been studied mostly through cross-sectional designs. We conducted a five-wave longitudinal study (N = 376; two waves before and three waves after the 2020 American presidential elections) to examine whether the election results influenced specific conspiracy beliefs and conspiracy mentality, and whether effects differ between election winners (i.e., Biden voters) and losers (i.e., Trump voters) at the individual level. Results revealed that conspiracy mentality remained unchanged over 2 months, providing the first evidence that this indeed is a relatively stable trait. Specific conspiracy beliefs (outgroup and ingroup conspiracy beliefs) did change over time, however. In terms of group-level change, outgroup conspiracy beliefs decreased over time for Biden voters but increased for Trump voters. Ingroup conspiracy beliefs decreased over time across all voters, although those of Trump voters decreased faster. These findings illuminate how specific conspiracy beliefs are, and conspiracy mentality is not, influenced by an election event.

From the General Discussion

Most studies on conspiracy beliefs provide correlational evidence through cross-sectional designs (van Prooijen & Douglas, 2018). The present research took full advantage of the 2020 American presidential elections through a five-wave longitudinal design, enabling three complementary contributions. First, the results provide evidence that conspiracy mentality is a relatively stable individual difference trait (Bruder et al., 2013; Imhoff & Bruder, 2014): While the election did influence specific conspiracy beliefs (i.e., that the elections were rigged), it did not influence conspiracy mentality. Second, the results provide evidence for the notion that conspiracy beliefs are for election losers (Uscinski & Parent, 2014), as reflected in the finding that Biden voters' outgroup conspiracy beliefs decreased at the individual level, while Trump voters' did not. The group-level effects on changes in outgroup conspiracy beliefs also underscored the role of intergroup conflict in conspiracy theories (van Prooijen & Song, 2021). And third, the present research examined conspiracy theories about one's own political ingroup, and found that such ingroup conspiracy beliefs decreased over time.

The decrease over time for ingroup conspiracy beliefs occurred among both Biden and Trump voters. We speculate that, given its polarized nature and contested result, this election increased intergroup conflict between Biden and Trump voters. Such intergroup conflict may have increased feelings of ingroup loyalty within both voter groups (Druckman, 1994), therefore decreasing beliefs that members of one's own group were conspiring. Moreover, ingroup conspiracy beliefs were higher for Trump than Biden voters (particularly at the first measurement point). This difference might expand previous findings that Republicans are more susceptible to conspiracy cues than Democrats (Enders & Smallpage, 2019), by suggesting that these effects generalize to conspiracy cues coming from their own ingroup.

Conclusion

The 2020 American presidential elections yielded many conspiracy beliefs that the elections were rigged, and conspiracy beliefs generally have negative consequences for societies. One key challenge for scientists and policymakers is to establish how conspiracy theories develop over time. In this research, we conducted a longitudinal study to provide empirical insights into the temporal dynamics underlying conspiracy beliefs, in the setting of a polarized election. We conclude that specific conspiracy beliefs that the elections were rigged—but not conspiracy mentality—are malleable over time, depending on political affiliations and election results.

Friday, October 29, 2021

Harms of AI

Daron Acemoglu
NBER Working Paper No. 29247
September 2021

Abstract

This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy's most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI's promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment - to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient.

Conclusion

In this essay, I explored several potential economic, political and social costs of the current path of AI technologies. I suggested that if AI continues to be deployed along its current trajectory and remains unregulated, then it can harm competition, consumer privacy, and consumer choice; it may excessively automate work, fuel inequality, inefficiently push down wages, and fail to improve productivity. It may also make political discourse increasingly distorted, cutting one of the lifelines of democracy. I also mentioned several other potential social costs from the current path of AI research.

I should emphasize again that all of these potential harms are theoretical. Although there is much evidence indicating that not all is well with the deployment of AI technologies, and the problems of increasing market power, disappearance of work, inequality, low wages, and meaningful challenges to democratic discourse and practice are all real, we do not have sufficient evidence to be sure that AI has been a serious contributor to these troubling trends. Nevertheless, precisely because AI is a promising technological platform, aiming to transform every sector of the economy and every aspect of our social lives, it is imperative for us to study what its downsides are, especially on its current trajectory. It is in this spirit that I discussed the potential costs of AI in this paper.

Friday, August 27, 2021

It’s hard to be a moral person. Technology is making it harder.

Sigal Samuel
vox.com
Originally posted 3 Aug 21

Here is an excerpt:

People who point out the dangers of digital tech are often met with a couple of common critiques. The first one goes like this: It’s not the tech companies’ fault. It’s users’ responsibility to manage their own intake. We need to stop being so paternalistic!

This would be a fair critique if there were symmetrical power between users and tech companies. But as the documentary The Social Dilemma illustrates, the companies understand us better than we understand them — or ourselves. They’ve got supercomputers testing precisely which colors, sounds, and other design elements are best at exploiting our psychological weaknesses (many of which we’re not even conscious of) in the name of holding our attention. Compared to their artificial intelligence, we’re all children, Harris says in the documentary. And children need protection.

Another critique suggests: Technology may have caused some problems — but it can also fix them. Why don’t we build tech that enhances moral attention?

“Thus far, much of the intervention in the digital sphere to enhance that has not worked out so well,” says Tenzin Priyadarshi, the director of the Dalai Lama Center for Ethics and Transformative Values at MIT.

It’s not for lack of trying. Priyadarshi and designers affiliated with the center have tried creating an app, 20 Day Stranger, that gives continuous updates on what another person is doing and feeling. You get to know where they are, but never find out who they are. The idea is that this anonymous yet intimate connection might make you more curious or empathetic toward the strangers you pass every day.

They also designed an app called Mitra. Inspired by Buddhist notions of a “virtuous friend” (kalyāṇa-mitra), it prompts you to identify your core values and track how much you acted in line with them each day. The goal is to heighten your self-awareness, transforming your mind into “a better friend and ally.”

I tried out this app, choosing family, kindness, and creativity as the three values I wanted to track. For a few days, it worked great. Being primed with a reminder that I value family gave me the extra nudge I needed to call my grandmother more often. But despite my initial excitement, I soon forgot all about the app.

Saturday, January 30, 2021

Checked by reality, some QAnon supporters seek a way out

David Klepper
Associated Press
Originally posted 28 January 21

Here are two excerpts:

It's not clear exactly how many people believe some or all of the narrative, but backers of the movement were vocal in their support for Trump and helped fuel the insurrectionists who overran the U.S. Capitol this month. QAnon is also growing in popularity overseas.

Former believers interviewed by The Associated Press liken the process of leaving QAnon to kicking a drug addiction. QAnon, they say, offers simple explanations for a complicated world and creates an online community that provides escape and even friendship.

Smith's then-boyfriend introduced her to QAnon. It was all he could talk about, she said. At first she was skeptical, but she became convinced after the death of financier Jeffrey Epstein while in federal custody facing pedophilia charges. Officials debunked theories that he was murdered, but to Smith and other QAnon supporters, his suicide while facing child sex charges was too much to accept.

Soon, Smith was spending more time on fringe websites and on social media, reading and posting about the conspiracy theory. She said she fell for QAnon content that presented no evidence, no counter arguments, and yet was all too convincing.

(cut)

“This isn't about critical thinking, of having a hypothesis and using facts to support it," Cohen said of QAnon believers. “They have a need for these beliefs, and if you take that away, because the storm did not happen, they could just move the goal posts.”

Some now say Trump's loss was always part of the plan, or that he secretly remains president, or even that Joe Biden's inauguration was created using special effects or body doubles. They insist that Trump will prevail, and powerful figures in politics, business and the media will be tried and possibly executed on live television, according to recent social media posts.

“Everyone will be arrested soon. Confirmed information,” read a post viewed 130,000 times this week on Great Awakening, a popular QAnon channel on Telegram. “From the very beginning I said it would happen.”

But a different tone is emerging in the spaces created for those who have heard enough.

“Hi my name is Joe,” one man wrote on a Q recovery channel in Telegram. “And I’m a recovering QAnoner.”

Saturday, December 26, 2020

Baby God: how DNA testing uncovered a shocking web of fertility fraud

Adrian Horton
The Guardian
Originally published 2 Dec 20

Here are two excerpts:

The database unmasked, with detached clarity, a dark secret hidden in plain sight for decades: the physician once named Nevada’s doctor of the year, who died in 2006 at age 94, had impregnated numerous patients with his own sperm, unbeknownst to the women or their families. The decades-long fertility fraud scheme, unspooled in the HBO documentary Baby God, left a swath of families – 26 children as of this writing, spanning 40 years of the doctor’s treatments – shocked at long-obscured medical betrayal, unmoored from assumptions of family history and stumbling over the most essential questions of identity. Who are you, when half your DNA is not what you thought?

(cut)

That reality – a once unknowable crime now made plainly knowable – has now come to pass, and the film features interviews with several of Fortier’s previously unknown children, each grappling with and tracing their way into a new web of half-siblings, questions of lineage and inheritance, and reframing of family history. Babst, who started as a cop at 19, dove into her own investigation, sourcing records on Dr Fortier that eventually revealed allegations of sexual abuse and molestation against his own stepchildren.

Brad Gulko, a human genomics scientist in San Francisco who bears a striking resemblance to the young Fortier, initially approached the revelation from the clinical perspective of biological motivations for procreation. “I feel like Dr Fortier found a way to justify in his own mind doing what he wanted to do that didn’t violate his ethical norms too much, even if he pushed them really hard,” he says in the film. “I’m still struggling with that. I don’t know where I’ll end up.”

The film quickly morphed, according to Olson, from an investigation of the Fortier case and his potential motivations to the larger, unresolvable questions of identity, nature versus nurture. “At first it was like ‘let’s get all the facts, we’re going to figure it out, what are his motivations, it will be super clear,’” said Olson. 

Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private, and that it is possible to share only select information. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to use systems that are already used with financial data: open-source files and blockchain technology so that we always know where the data came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
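As a rough illustration of the engineering approach Yuste describes, here is a minimal Python sketch of a hash-chained provenance log for "neurodata" exports. It is not the actual system his group is building; every field, device name, and value below is a hypothetical stand-in, and a real deployment would rest on proper blockchain and smart-contract infrastructure rather than an in-memory list.

```python
# Minimal sketch (hypothetical, not Yuste's actual system) of a hash-chained
# provenance log for "neurodata" exports: tampering with any past consent
# record becomes detectable because the hashes no longer line up.
import hashlib
import json
import time


def make_record(prev_hash: str, device_id: str, purpose: str, consented: bool) -> dict:
    """Create one provenance entry linked to the previous entry's hash."""
    body = {
        "timestamp": time.time(),
        "device_id": device_id,
        "purpose": purpose,          # e.g. "medical" or "research"
        "consented": consented,      # explicit, per-export consent
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body


def verify_chain(chain: list) -> bool:
    """Recompute every hash and check that each link points at its predecessor."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected:
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


# Usage: two consented exports; altering either record breaks verification.
chain = [make_record("GENESIS", "headset-01", "medical", True)]
chain.append(make_record(chain[-1]["hash"], "headset-01", "research", True))
print(verify_chain(chain))   # True
chain[0]["purpose"] = "advertising"
print(verify_chain(chain))   # False: the tampering is detectable
```

The only point of the sketch is the property the proposal relies on: once a consent record is written, it cannot be quietly rewritten later.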

Monday, June 22, 2020

Ethics of Artificial Intelligence and Robotics

Müller, Vincent C.
The Stanford Encyclopedia of Philosophy
(Summer 2020 Edition)

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, see under Other Internet Resources [hereafter OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

The entry is here.

Tuesday, May 12, 2020

Freedom in an Age of Algocracy

John Danaher
forthcoming in Oxford Handbook on the Philosophy of Technology
edited by Shannon Vallor

Abstract

There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology.

From the Conclusion:

Finally, I’ve outlined a framework for thinking about the likely impact of algocracy on freedom. Given the complexity of freedom and the complexity of algocracy, I’ve argued that there is unlikely to be a simple global assessment of the freedom-promoting or undermining power of algocracy. This is something that has to be assessed and determined on a case-by-case basis. Nevertheless, there are at least five interesting and relatively novel mechanisms through which algocratic systems can both promote and undermine freedom. We should pay attention to these different mechanisms, but do so in a properly contextualized manner, and not by ignoring the pre-existing mechanisms through which freedom is undermined and promoted.

The book chapter is here.

Monday, April 27, 2020

Experiments on Trial

Hannah Fry
The New Yorker
Originally posted 24 Feb 20

Here are two excerpts:

There are also times when manipulation leaves people feeling cheated. For instance, in 2018 the Wall Street Journal reported that Amazon had been inserting sponsored products in its consumers’ baby registries. “The ads look identical to the rest of the listed products in the registry, except for a small gray ‘Sponsored’ tag,” the Journal revealed. “Unsuspecting friends and family clicked on the ads and purchased the items,” assuming they’d been chosen by the expectant parents. Amazon’s explanation when confronted? “We’re constantly experimenting,” a spokesperson said. (The company has since ended the practice.)

But there are times when the experiments go further still, leaving some to question whether they should be allowed at all. There was a notorious experiment run by Facebook in 2012, in which the number of positive and negative posts in six hundred and eighty-nine thousand users’ news feeds was tweaked. The aim was to see how the unwitting participants would react. As it turned out, those who saw less negative content in their feeds went on to post more positive stuff themselves, while those who had positive posts hidden from their feeds used more negative words.

A public backlash followed; people were upset to discover that their emotions had been manipulated. Luca and Bazerman argue that this response was largely misguided. They point out that the effect was small. A person exposed to the negative news feed “ended up writing about four additional negative words out of every 10,000,” they note. Besides, they say, “advertisers and other groups manipulate consumers’ emotions all the time to suit their purposes. If you’ve ever read a Hallmark card, attended a football game or seen a commercial for the ASPCA, you’ve been exposed to the myriad ways in which products and services influence consumers’ emotions.”
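For a sense of scale, here is the arithmetic behind Luca and Bazerman's point as a short sketch; the monthly word count is an invented assumption, not a figure from the Facebook study.

```python
# Back-of-the-envelope illustration of the reported effect size:
# about four extra negative words per 10,000 words written.
effect_per_word = 4 / 10_000          # roughly 0.04% of words
words_in_typical_month = 2_000        # hypothetical posting volume for one user
extra_negative_words = effect_per_word * words_in_typical_month
print(f"{extra_negative_words:.1f} additional negative words")  # 0.8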

(cut)

Medicine has already been through this. In the early twentieth century, without a set of ground rules on how people should be studied, medical experimentation was like the Wild West. Alongside a great deal of good work, a number of deeply unethical studies took place—including the horrifying experiments conducted by the Nazis and the appalling Tuskegee syphilis trial, in which hundreds of African-American men were denied treatment by scientists who wanted to see how the lethal disease developed. As a result, there are now clear rules about seeking informed consent whenever medical experiments use human subjects, and institutional procedures for reviewing the design of such experiments in advance. We’ve learned that researchers aren’t always best placed to assess the potential harm of their work.

The info is here.

Saturday, February 8, 2020

Bursting the Filter Bubble: Democracy, Design, and Ethics

V. E. Bozdag
Book/Thesis
Originally published in 2015

Online web services such as Google and Facebook started using personalization algorithms. Because information is customized per user by the algorithms of these services, two users who use the same search query or have the same friend list may get different results. Online services argue that by using personalization algorithms, they may show the most relevant information for each user, hence increasing user satisfaction. However, critics argue that the opaque filters used by online services will only show agreeable political viewpoints to the users and the users never get challenged by opposing perspectives. Considering users are already biased in seeking like-minded perspectives, viewpoint diversity will diminish and the users may get trapped in a “filter bubble”. This is an undesired behavior for almost all democracy models. In this thesis we first analyzed the filter bubble phenomenon conceptually, by identifying internal processes and factors in online web services that might cause filter bubbles. Later, we analyzed this issue empirically. We first studied existing metrics in viewpoint diversity research of the computer science literature. We also extended these metrics by adding a new one, namely minority access from media and communication studies. After conducting an empirical study for Dutch and Turkish Twitter users, we showed that minorities cannot reach a large percentage of users in Turkish Twittersphere. We also analyzed software tools and design attempts to combat filter bubbles. We showed that almost all of the tools implement norms required by two popular democracy models. We argue that democracy is essentially a contested concept, and other less popular democracy models should be included in the design of such tools as well.
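To make the "minority access" idea concrete, here is a small Python sketch of one way such a metric could be computed: the share of users whose feed contained at least one minority-viewpoint source. The feeds, account names, and sets below are invented for illustration; the thesis defines and measures its metrics over real Dutch and Turkish Twitter data.

```python
# Sketch of a simple minority-access style metric (illustrative only).
from typing import Dict, Set


def minority_access(exposures: Dict[str, Set[str]], minority_sources: Set[str]) -> float:
    """Fraction of users whose feed contained at least one minority-viewpoint source."""
    if not exposures:
        return 0.0
    reached = sum(1 for sources in exposures.values() if sources & minority_sources)
    return reached / len(exposures)


# Hypothetical feeds: user -> set of accounts whose posts appeared in the feed.
feeds = {
    "user_a": {"major_outlet_1", "major_outlet_2"},
    "user_b": {"major_outlet_1", "minority_outlet_1"},
    "user_c": {"major_outlet_2"},
}
print(minority_access(feeds, {"minority_outlet_1", "minority_outlet_2"}))  # ~0.33
```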

The book/thesis can be downloaded here.

Sunday, January 12, 2020

Bias in algorithmic filtering and personalization

Engin Bozdag
Ethics Inf Technol (2013) 15: 209.

Abstract

Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.

From the Discussion:

Today information seeking services can use interpersonal contacts of users in order to tailor information and to increase relevancy. This not only introduces bias as our model shows, but it also has serious implications for other human values, including user autonomy, transparency, objectivity, serendipity, privacy and trust. These values introduce ethical questions. Do private companies that are offering information services have a social responsibility, and should they be regulated? Should they aim to promote values that the traditional media was adhering to, such as transparency, accountability and answerability? How can a value such as transparency be promoted in an algorithm? How should we balance between autonomy and serendipity and between explicit and implicit personalization? How should we define serendipity? Should relevancy be defined as what is popular in a given location or by what our primary groups find interesting? Can algorithms truly replace human filterers?

The info can be downloaded here.

Thursday, October 24, 2019

Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

Jon Evans
techcrunch.com
Originally published October 20, 2019

This week Mark Zuckerberg gave a speech in which he extolled “giving everyone a voice” and fighting “to uphold as wide a definition of freedom of expression as possible.” That sounds great, of course! Freedom of expression is a cornerstone, if not the cornerstone, of liberal democracy. Who could be opposed to that?

The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site.

But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline. What you read on Facebook is determined entirely by Facebook’s algorithm, which elides much — censors much, if you wrongly think the News Feed is free speech — and amplifies little.

What is amplified? Two forms of content. For native content, the algorithm optimizes for engagement. This in turn means people spend more time on Facebook, and therefore more time in the company of that other form of content which is amplified: paid advertising.
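As a toy illustration of the distinction Evans draws, the sketch below contrasts a chronological feed with one sorted by a predicted-engagement score. It is not Facebook's ranking code; the posts, scores, and field names are invented, and the real system weighs far more signals.

```python
# Toy contrast between a chronological feed and an engagement-ranked feed.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    posted_at: int               # minutes ago
    predicted_engagement: float  # model score: expected likes/comments/shares


posts = [
    Post("aunt", "Nice walk today", posted_at=5, predicted_engagement=0.02),
    Post("page", "OUTRAGEOUS claim you won't believe!", posted_at=240, predicted_engagement=0.91),
    Post("friend", "New job announcement", posted_at=60, predicted_engagement=0.35),
]

chronological = sorted(posts, key=lambda p: p.posted_at)                        # newest first
engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])      # ['aunt', 'friend', 'page']
print([p.author for p in engagement_ranked])  # ['page', 'friend', 'aunt']
```

The difference in ordering is the "free amplification" the article is pointing at: the provocative item wins the top slot not because anyone sought it out, but because the ranking objective rewards it.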

Of course this isn’t absolute. As Zuckerberg notes in his speech, Facebook works to stop things like hoaxes and medical misinformation from going viral, even if they’re otherwise anointed by the algorithm. But he has specifically decided that Facebook will not attempt to stop paid political misinformation from going viral.

The info is here.

Editor's note: Facebook is one of the most defective products that millions of Americans use every day.

Wednesday, August 21, 2019

Tech Is Already Reading Your Emotions - But Do Algorithms Get It Right?

Jessica Baron
Forbes.com
Originally published July 18, 2019

From measuring shopper satisfaction to detecting signs of depression, companies are employing emotion-sensing facial recognition technology that is based on flawed science, according to a new study.

If the idea of having your face recorded and then analyzed for mood so that someone can intervene in your life sounds creepy, that’s because it is. But that hasn’t stopped companies like Walmart from promising to implement the technology to improve customer satisfaction, despite numerous challenges from ethicists and other consumer advocates.

At the end of the day, this flavor of facial recognition software probably is all about making you safer and happier – it wants to let you know if you’re angry or depressed so you can calm down or get help; it wants to see what kind of mood you’re in when you shop so it can help stores keep you as a customer; it wants to measure your mood while driving, playing video games, or just browsing the Internet to see what goods and services you might like to buy to improve your life.


The problem is – well, aside from the obvious privacy issues and general creep factor – that computers aren’t really that good at judging our moods based on the information they get from facial recognition technology. To top it off, this technology exhibits the same kind of racial bias that other AI programs do, assigning more negative emotions to black faces, for example. That’s probably because it’s based on flawed science.

The info is here.

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.

The info is here.

Friday, March 22, 2019

Pop Culture, AI And Ethics

Phaedra Boinodiris
Forbes.com
Originally published February 24, 2019

Here is an excerpt:


5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – there is a group responsible for ensuring that REAL guests in the hotel are interviewed to determine their needs. When feedback is negative this group implements a feedback loop to better understand preferences. They ensure that at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to train with a larger, more diverse set of data. Ensure that the data collected about a user's race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics.

Explainability and Enforced Transparency – if a guest doesn’t like the AI’s answer, she can ask how it made that recommendation and which dataset it used. A user must explicitly opt in to use the assistant and be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest's data, and a guest has the right to have the system purged at any time. Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience to the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto deleted. Ensure that the AI can speak in the guest’s respective language.

The info is here.

Tuesday, February 12, 2019

How to tell the difference between persuasion and manipulation

Robert Noggle
aeon.co
Originally published August 1, 2018

Here is an excerpt:

It appears, then, that whether an influence is manipulative depends on how it is being used. Iago’s actions are manipulative and wrong because they are intended to get Othello to think and feel the wrong things. Iago knows that Othello has no reason to be jealous, but he gets Othello to feel jealous anyway. This is the emotional analogue to the deception that Iago also practises when he arranges matters (eg, the dropped handkerchief) to trick Othello into forming beliefs that Iago knows are false. Manipulative gaslighting occurs when the manipulator tricks another into distrusting what the manipulator recognises to be sound judgment. By contrast, advising an angry friend to avoid making snap judgments before cooling off is not acting manipulatively, if you know that your friend’s judgment really is temporarily unsound. When a conman tries to get you to feel empathy for a non-existent Nigerian prince, he acts manipulatively because he knows that it would be a mistake to feel empathy for someone who does not exist. Yet a sincere appeal to empathy for real people suffering undeserved misery is moral persuasion rather than manipulation. When an abusive partner tries to make you feel guilty for suspecting him of the infidelity that he just committed, he is acting manipulatively because he is trying to induce misplaced guilt. But when a friend makes you feel an appropriate amount of guilt over having deserted him in his hour of need, this does not seem manipulative.

The info is here.

Sunday, November 4, 2018

When Tech Knows You Better Than You Know Yourself

Nicholas Thompson
www.wired.com
Originally published October 4, 2018

Here is an excerpt:

Hacking a Human

NT: Explain what it means to hack a human being and why what can be done now is different from what could be done 100 years ago.

YNH: To hack a human being is to understand what's happening inside you on the level of the body, of the brain, of the mind, so that you can predict what people will do. You can understand how they feel and you can, of course, once you understand and predict, you can usually also manipulate and control and even replace. And of course it can't be done perfectly and it was possible to do it to some extent also a century ago. But the difference in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us, they will never be perfect. There is no such thing as understanding perfectly everything or predicting everything. You don't need perfect, you just need to be better than the average human being.

If you have an hour, please watch the video.

Saturday, September 15, 2018

Social Science One And How Top Journals View The Ethics Of Facebook Data Research

Kalev Leetaru
Forbes.com
Originally posted on August 13, 2018

Here is an excerpt:

At the same time, Social Science One’s decision to leave all ethical questions TBD and to eliminate the right to informed consent or the ability to opt out of research fundamentally redefines what it means to conduct research in the digital era, normalizing the removal of these once sacred ethical tenets. Given the refusal of one of its committee members to provide replication data for his own study and the statement by another committee member that “I have articulated the argument that ToS are not, and should not be considered, ironclad rules binding the activities of academic researchers. … I don't think researchers should reasonably be expected to adhere to such conditions, especially at a time when officially sanctioned options for collecting social media data are disappearing left and right,” the result is an ethically murky landscape in which it is unclear just where Social Science One draws the line at what it will or will not permit.

Given Facebook’s new focus on “privacy first” I asked the company whether it would commit to offering its two billion users a new profile setting allowing them to opt out of having their data made available to academic researchers such as Social Science One. As it has repeatedly done in the past, the company declined to comment.

The info is here.

Tuesday, July 24, 2018

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.

So it matters who is writing the code because the code defines the algorithm, which makes the judgement on the basis of the data.
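A deliberately crude sketch of that feedback loop: a model that simply picks whichever pronoun co-occurs most often with a profession in its training text will echo whatever bias that text contains. The co-occurrence counts below are invented; Google Translate's actual models are far more sophisticated, but the dynamic is the same in spirit.

```python
# Crude illustration of training-data bias feeding back into predictions.
from collections import Counter

# Hypothetical pronoun co-occurrence counts harvested from a (biased) corpus.
corpus_counts = {
    "doctor":     Counter({"he": 820, "she": 180}),
    "babysitter": Counter({"he": 90,  "she": 910}),
}


def choose_pronoun(profession: str) -> str:
    """Pick the pronoun most frequently seen with this profession in the corpus."""
    return corpus_counts[profession].most_common(1)[0][0]


print(choose_pronoun("doctor"))      # 'he'  -- learned from the data, not a fact about doctors
print(choose_pronoun("babysitter"))  # 'she'
```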

The information is here.

Sunday, May 13, 2018

Facebook Uses AI To Predict Your Future Actions for Advertisers

Sam Biddle
The Intercept
Originally posted April 13, 2018

Here is an excerpt:

Asked by Fortune’s Stacey Higginbotham where Facebook hoped its machine learning work would take it in five years, Chief Technology Officer Mike Schroepfer said in 2016 his goal was that AI “makes every moment you spend on the content and the people you want to spend it with.” Using this technology for advertising was left unmentioned. A 2017 TechCrunch article declared, “Machine intelligence is the future of monetization for Facebook,” but quoted Facebook executives in only the mushiest ways: “We want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time, and it’s helpful to know how this ebbs and flows,” said Mark Rabkin, Facebook’s vice president of engineering for ads. The company was also vague about the melding of machine learning to ads in a 2017 Wired article about the company’s AI efforts, which alluded to efforts “to show more relevant ads” using machine learning and anticipate what ads consumers are most likely to click on, a well-established use of artificial intelligence. Most recently, during his congressional testimony, Zuckerberg touted artificial intelligence as a tool for curbing hate speech and terrorism.

The article is here.