Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Manipulation.

Monday, May 7, 2018

Microsoft is cutting off some sales over AI ethics

Alan Boyle
www.geekwire.com
Originally published April 9, 2018

Concerns over the potential abuse of artificial intelligence technology have led Microsoft to cut off some of its customers, says Eric Horvitz, technical fellow and director at Microsoft Research Labs.

Horvitz laid out Microsoft’s commitment to AI ethics today during the Carnegie Mellon University – K&L Gates Conference on Ethics and AI, presented in Pittsburgh.

One of the key groups focusing on the issue at Microsoft is the Aether Committee, where “Aether” stands for AI and Ethics in Engineering and Research.

“It’s been an intensive effort … and I’m happy to say that this committee has teeth,” Horvitz said during his lecture.

He said the committee reviews how Microsoft’s AI technology could be used by its customers, and makes recommendations that go all the way up to senior leadership.

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’ ”

Horvitz didn’t go into detail about which customers or specific applications have been ruled out as the result of the Aether Committee’s work, although he referred to Microsoft’s human rights commitments.

The information is here.

Thursday, April 12, 2018

The Tech Industry’s War on Kids

Richard Freed
Medium.com
Originally published March 12, 2018

Here is an excerpt:

Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”

Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”

While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.

The article is here.

Tuesday, April 3, 2018

Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu
Practical Ethics
Originally posted March 31, 2018

Here is an excerpt:

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. Cambridge researcher and creator of the third-party app in question, Aleksandr Kogan, violated the agreement only when the data was passed on to Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; they profit from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users regarding what precisely it is they are consenting to.

The blog post is here.

Saturday, March 24, 2018

Facebook employs psychologist whose firm sold data to Cambridge Analytica

Paul Lewis and Julia Carrie Wong
The Guardian
Originally published March 18, 2018

Here are two excerpts:

The co-director of a company that harvested data from tens of millions of Facebook users before selling it to the controversial data analytics firm Cambridge Analytica is currently working for the tech giant as an in-house psychologist.

Joseph Chancellor was one of two founding directors of Global Science Research (GSR), the company that harvested Facebook data using a personality app under the guise of academic research and later shared the data with Cambridge Analytica.

He was hired to work at Facebook as a quantitative social psychologist around November 2015, roughly two months after leaving GSR, which had by then acquired data on millions of Facebook users.

Chancellor is still working as a researcher at Facebook’s Menlo Park headquarters in California, where psychologists frequently conduct research and experiments using the company’s vast trove of data on more than 2 billion users.

(cut)

In the months that followed the creation of GSR, the company worked in collaboration with Cambridge Analytica to pay hundreds of thousands of users to take the test as part of an agreement in which they agreed for their data to be collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions strong.

That data was sold to Cambridge Analytica as part of a commercial agreement.

Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app and barred it from being sold on or used for advertising.

The information is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because it violated your trust?

Second Question: Is Facebook a defective product?

Friday, March 16, 2018

How Russia Hacked the American Mind

Maya Kosoff
Vanity Fair
Originally posted February 19, 2018

Here is an excerpt:

Social media certainly facilitated the Russian campaign. As part of Facebook’s charm offensive, Zuckerberg has since offered tangible fixes, including a plan to verify election advertisements and an effort to emphasize friends, family, and Groups. But Americans’ lack of news literacy transcends Facebook, and was created in part by the Internet itself. As news has shifted from print and television outlets, to digital versions of those same outlets, to information shared on social-media platforms (still the primary source of news for an overwhelming majority of Americans), audiences failed to keep pace; they never learned to vet the news they consume online.

It’s also a problem we’ve created ourselves. As we’ve become increasingly polarized, news outlets have correspondingly adjusted to cater to our tastes, resulting in a media landscape that’s split into separate, non-overlapping universes of conflicting facts—a world in which Fox News and CNN spout theories about the school shooting in Parkland, Florida, that are diametrically opposed. It was this atmosphere that made the U.S. fertile ground for foreign manipulation. As political scientists Jay J. Van Bavel and Andrea Pereira noted in a recent paper, “Partisanship can even alter memory, implicit evaluation, and even perceptual judgment,” fueling a “human attraction to fake and untrustworthy news” that “poses a serious problem for healthy democratic functioning.”

The article is here.

Monday, November 6, 2017

Is It Too Late For Big Data Ethics?

Kalev Leetaru
Forbes.com
Originally published October 16, 2017

Here is an excerpt:

AI researchers are rushing to create the first glimmers of general AI and hoping for the key breakthroughs that take us towards a world in which machines gain consciousness. The structure of academic IRBs means that little of this work is subject to ethical review of any kind, and its highly technical nature means the general public is little aware of the rapid pace of progress until it comes into direct life-or-death contact with consumers, as with driverless cars.

Could industry-backed initiatives like one announced by Bloomberg last month in partnership with BrightHive and Data for Democracy be the answer? It all depends on whether companies and organizations actively infuse these values into the work they perform and sponsor or whether these are merely public relations campaigns for them. As I wrote last month, when I asked the organizers of a recent data mining workshop as to why they did not require ethical review or replication datasets for their submissions, one of the organizers, a Bloomberg data scientist, responded only that the majority of other ACM computer science conferences don’t either. When asked why she and her co-organizers didn’t take a stand with their own workshop to require IRB review and replication datasets even if those other conferences did not, in an attempt to start a trend in the field, she would only repeat that such requirements are not common to their field. When asked whether Bloomberg would be requiring its own data scientists to adhere to its new data ethics initiative and/or mandate that they integrate its principles into external academic workshops they help organize, a company spokesperson said they would try to offer comment, but had nothing further to add after nearly a week.

The article is here.

Saturday, October 21, 2017

Thinking about the social cost of technology

Natasha Lomas
Tech Crunch
Originally posted September 30, 2017

Here is an excerpt:

Meanwhile, ‘users’ like my mum are left with another cryptic puzzle of unfamiliar pieces to try to slot back together and — they hope — return the tool to the state of utility it was in before everything changed on them again.

These people will increasingly feel left behind and unplugged from a society where technology is playing an ever greater day-to-day role, and also playing an ever greater, yet largely unseen role in shaping day to day society by controlling so many things we see and do. AI is the silent decision maker that really scales.

The frustration and stress caused by complex technologies that can seem unknowable — not to mention the time and mindshare that gets wasted trying to make systems work as people want them to work — doesn’t tend to get talked about in the slick presentations of tech firms with their laser pointers fixed on the future and their intent locked on winning the game of the next big thing.

All too often the fact that human lives are increasingly enmeshed with and dependent on ever more complex, and ever more inscrutable, technologies is considered a good thing. Negatives don’t generally get dwelled on. And for the most part people are expected to move along, or be moved along by the tech.

That’s the price of progress, goes the short sharp shrug. Users are expected to use the tool — and take responsibility for not being confused by the tool.

But what if the user can’t properly use the system because they don’t know how to? Are they at fault? Or is it the designers failing to properly articulate what they’ve built and pushed out at such scale? And failing to layer complexity in a way that does not alienate and exclude?

And what happens when the tool becomes so all consuming of people’s attention and so capable of pushing individual buttons it becomes a mainstream source of public opinion? And does so without showing its workings. Without making it clear it’s actually presenting a filtered, algorithmically controlled view.

There’s no newspaper style masthead or TV news captions to signify the existence of Facebook’s algorithmic editors. But increasingly people are tuning in to social media to consume news.

This signifies a major, major shift.

The article is here.

Thursday, September 28, 2017

How Much Do A Company's Ethics Matter In The Modern Professional Climate?

Larry Alton
Forbes
Originally posted September 12, 2017

More than ever, a company’s success depends on the talent it’s able to attract, but attracting the best talent is about more than just offering the best salary—or even the best benefits. Companies may have a lucrative offer for a prospective candidate, and a culture where they’ll feel at home, but how do corporate ethics stack up against those of the competition?

This may not seem like the most important question to ask when you’re trying to hire someone for a position—especially one that might not be directly affected by the actions of your corporation as a whole—but the modern workplace is changing, as are American professionals’ values, and if you want to keep up, you need to know just how significant those ethical values are.

What Qualifies as “Ethics”?

What do I mean by “ethics”? This is a broad category, and subjective in nature, but generally, I’m referring to these areas:
  • Fraud and manipulation. This should be obvious, but ethical companies don’t engage in shady or manipulative financial practices, such as fraud, bribery, or insider trading. The problem here is that individual actions are often associated with the company as a whole, so any individual within your company who behaves in an unethical way could compromise the reputation of your company. Setting strict no-tolerance policies and taking proper disciplinary action can mitigate these effects.

Monday, April 3, 2017

Conviction, persuasion and manipulation: the ethical dimension of epistemic vigilance

Johannes Mahr
Cognition and Culture Institute Blog
Originally posted 10 March 2017

In today’s political climate, moral outrage about (alleged) propaganda and manipulation of public opinion dominates our discourse. Charges of manipulative information provision have arguably become the most widely used tool to discredit one’s political opponent. Of course, one reason why such charges have become so prominent is that the way we consume information through online media has made us more vulnerable than ever to such manipulation. Take a recent story published by The Guardian, which describes the strategy of information dissemination allegedly used by the British ‘Leave Campaign’:

“The strategy involved harvesting data from people’s Facebook and other social media profiles and then using machine learning to ‘spread’ through their networks. Wigmore admitted the technology and the level of information it gathered from people was ‘creepy’. He said the campaign used this information, combined with artificial intelligence, to decide who to target with highly individualised advertisements and had built a database of more than a million people.”

This might not just strike you as “creepy” but as simply unethical, just as it did one commentator cited in the article, who called these tactics “extremely disturbing and quite sinister”. Here, I want to investigate where this intuition comes from.

The blog post is here.

Monday, October 17, 2016

Affective nudging

Eric Schliesser
Digressions and Impressions blog
Originally published September 30, 2016

Here is an excerpt:

Nudging is paternalist. But by making exit easy and avoidance cheap, nudges are thought to avoid the worst moral and political problems of paternalism and (other) manipulative practices. (What counts as a significant change of economic incentives is, of course, very contestable, but we leave that aside here.) Nudges may, in fact, sometimes enhance autonomy and freedom, but, given the way Sunstein & Thaler define ‘nudge’, one may also nudge for immoral ends. Social engineering does not question the ends.

The modern administrative state is, however, not just a rule-following Weberian bureaucracy where the interaction between state and citizen is governed by the exchange of forms, information, and money. Many civil servants, including ones with very distinct expertise (physicians, psychologists, lawyers, engineers, social service workers, therapists, teachers, correction officers, etc.) enter quite intimately into the lives of lots of citizens. Increasingly (within the context of new public management), government professionals and hired consultants are given broad autonomy to meet certain targets (quotas, budget or volume numbers, etc.) within constrained parameters. (So, for example, a physician is not just a care provider, but also somebody who can control costs.) Bureaucratic management and the political class are agnostic about how the desired outcomes are met, as long as it is legal, efficient and does not generate bad media or adverse political push-back.

The blog post is here.

Thursday, June 9, 2016

Ethics and the Eye of the Beholder

Thomas Pogge, one of the world’s most prominent ethicists, stands accused of manipulating students to gain sexual advantage. Did the fierce champion of the world's disempowered abuse his own power?

Katie J.M. Baker
BuzzFeed News 
Originally posted May 20, 2016

Here is an excerpt:

But a recent federal civil rights complaint describes a distinction unlikely to appear on any curriculum vitae: It claims Pogge uses his fame and influence to manipulate much younger women in his field into sexual relationships. One former student said she was punished professionally after resisting his advances.

Pogge did not respond to more than a dozen emails and phone calls from BuzzFeed News, nor to a detailed letter laying out all the claims that were likely to appear in this article. Yale’s spokesperson, Thomas Conroy, declined to comment.


Editor's note: Research shows that those who teach ethics do not act more ethically than the rest of the population.

Tuesday, April 19, 2016

Good News! You're Not an Automaton

By Cass R. Sunstein
Bloomberg View
Originally published March 30, 2016

A good nudge is like a GPS device: A small, low-cost intervention that tells you how to get where you want to go -- and if you don’t like what it says, you're free to ignore it. But when, exactly, will people do that? A new study sheds important light on that question, by showing the clear limits of nudging. Improbably, this research is also good news: It shows that when people feel strongly, it’s not easy to influence them to make choices that they won’t like.

The focus of this new research, as with much recent work on behavioral science, is on what people eat. Numerous studies suggest that if healthy foods are made more visible or convenient to find, more people will choose them. We tend to make purchasing decisions quickly and automatically; if certain foods or drinks -- Snickers bars, apples, orange juice -- are easy to see and grab, consumption will jump.

The article is here.

Note: The podcast on nudge theory and how it applies to psychotherapy can be found here.

Tuesday, November 10, 2015

Neuromodulation of Group Prejudice and Religious Belief

C. Holbrook, K. Izuma, C. Deblieck, D. Fessler, and M. Iacoboni
Social Cognitive and Affective Neuroscience (2015)
doi: 10.1093/scan/nsv107

Abstract

People cleave to ideological convictions with greater intensity in the aftermath of threat. The posterior medial frontal cortex (pMFC) plays a key role in both detecting discrepancies between desired and current conditions and adjusting subsequent behavior to resolve such conflicts. Building on prior literature examining the role of the pMFC in shifts in relatively low-level decision processes, we demonstrate that the pMFC mediates adjustments in adherence to political and religious ideologies. We presented participants with a reminder of death and a critique of their in-group ostensibly written by a member of an out-group, then experimentally decreased both avowed belief in God and out-group derogation by down-regulating pMFC activity via transcranial magnetic stimulation. The results provide the first evidence that group prejudice and religious belief are susceptible to targeted neuromodulation, and point to a shared cognitive mechanism underlying concrete and abstract decision processes. We discuss the implications of these findings for further research characterizing the cognitive and affective mechanisms at play.

The entire article is here.

Friday, September 11, 2015

Moral Panic: Who Benefits From Public Fear?

By Scott A. Bonn
Psychology Today Blog
Originally published July 20, 2015

Here is an excerpt:

Moral panics arise when distorted mass media campaigns are used to create fear, reinforce stereotypes and exacerbate pre-existing divisions in the world, often based on race, ethnicity and social class.

Additionally, moral panics have three distinguishing characteristics.  First, there is a focused attention on the behavior, whether real or imagined, of certain individuals or groups that are transformed into what Cohen referred to as “folk devils” by the mass media. This is accomplished when the media strip these folk devils of all favorable characteristics and apply exclusively negative ones.

Second, there is a gap between the concern over a condition and the objective threat it poses. Typically, the objective threat is far less than popularly perceived due to how it is presented by authorities.

Third, there is a great deal of fluctuation over time in the level of concern over a condition. The typical pattern begins with the discovery of the threat, followed by a rapid rise and then peak in public concern, which then subsequently, and often abruptly, subsides.

Finally, public hysteria over a perceived problem often results in the passing of legislation that is highly punitive, unnecessary, and serves to justify the agendas of those in positions of power and authority.

The entire article is here.

Sunday, August 9, 2015

Fifty Shades of Manipulation

Cass R. Sunstein
Journal of Behavioral Marketing, Forthcoming
February 18, 2015

Abstract:    

A statement or action can be said to be manipulative if it does not sufficiently engage or appeal to people’s capacity for reflective and deliberative choice. One problem with manipulation, thus understood, is that it fails to respect people’s autonomy and is an affront to their dignity. Another problem is that if they are products of manipulation, people’s choices might fail to promote their own welfare, and might instead promote the welfare of the manipulator. To that extent, the central objection to manipulation is rooted in a version of Mill’s Harm Principle: People know what is in their best interests and should have a (manipulation-free) opportunity to make that decision. On welfarist grounds, the norm against manipulation can be seen as a kind of heuristic, one that generally works well, but that can also lead to serious errors, at least when the manipulator is both informed and genuinely interested in the welfare of the chooser.

For the legal system, a pervasive puzzle is why manipulation is rarely policed. The simplest answer is that manipulation has so many shades, and in a social order that values free markets and is committed to freedom of expression, it is exceptionally difficult to regulate manipulation as such. But as the manipulator’s motives become more self-interested or venal, and as efforts to bypass people’s deliberative capacities become more successful, the ethical objections to manipulation become very forceful, and the argument for a legal response is fortified. The analysis of manipulation bears on emerging First Amendment issues raised by compelled speech, especially in the context of graphic health warnings. Importantly, it can also help orient the regulation of financial products, where manipulation of consumer choices is an evident but rarely explicit concern.

The entire article is here.

Tuesday, January 27, 2015

Social Media Ethics

Religion and Ethics Newsweekly
Originally published January 9, 2015

Some social media companies—including Facebook—have run experiments to learn what influences user behavior. Many of these experiments have troubled both social media users and privacy advocates, who worry that this research and use of personal information is unethical.


Sunday, January 4, 2015

The Ethics of Nudging

By Cass Sunstein
Harvard Law School

Abstract:
 
This essay defends the following propositions. (1) It is pointless to object to choice architecture or nudging as such. Choice architecture cannot be avoided. Nature itself nudges; so does the weather; so do spontaneous orders and invisible hands. The private sector inevitably nudges, as does the government. It is reasonable to object to particular nudges, but not to nudging in general. (2) In this context, ethical abstractions (for example, about autonomy, dignity, and manipulation) can create serious confusion. To make progress, those abstractions must be brought into contact with concrete practices. Nudging and choice architecture take diverse forms, and the force of an ethical objection depends on the specific form. (3) If welfare is our guide, much nudging is actually required on ethical grounds. (4) If autonomy is our guide, much nudging is also required on ethical grounds. (5) Choice architecture should not, and need not, compromise either dignity or self-government, though imaginable forms could do both. (6) Some nudges are objectionable because the choice architect has illicit ends. When the ends are legitimate, and when nudges are fully transparent and subject to public scrutiny, a convincing ethical objection is less likely to be available. (7) There is, however, room for ethical objections in the case of well-motivated but manipulative interventions, certainly if people have not consented to them; such nudges can undermine autonomy and dignity. It follows that both the concept and the practice of manipulation deserve careful attention. The concept of manipulation has a core and a periphery; some interventions fit within the core, others within the periphery, and others outside of both.

The entire article is here.

Monday, September 22, 2014

The Dark Side of Emotional Intelligence

By Adam Grant
The Atlantic
Originally published January 2, 2014

Here is an excerpt:

Emotional intelligence is important, but the unbridled enthusiasm has obscured a dark side. New evidence shows that when people hone their emotional skills, they become better at manipulating others. When you’re good at controlling your own emotions, you can disguise your true feelings. When you know what others are feeling, you can tug at their heartstrings and motivate them to act against their own best interests.

Social scientists have begun to document this dark side of emotional intelligence. In emerging research led by University of Cambridge professor Jochen Menges, when a leader gave an inspiring speech filled with emotion, the audience was less likely to scrutinize the message and remembered less of the content. Ironically, audience members were so moved by the speech that they claimed to recall more of it.

The authors call this the awestruck effect, but it might just as easily be described as the dumbstruck effect. One observer reflected that Hitler’s persuasive impact came from his ability to strategically express emotions—he would “tear open his heart”—and these emotions affected his followers to the point that they would “stop thinking critically and just emote.”

The entire article is here.