Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Facebook.

Thursday, December 5, 2019

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections. You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
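
To make the contagion framing concrete, here is a minimal sketch of such a model in Python. It is an illustration in the spirit of the excerpt, not the authors' code; the network size, edge count, and transmission probability are arbitrary assumptions.

```python
import random

def simulate_contagion(n_nodes=100, n_edges=300, p_transmit=0.3, seed=0):
    """Toy contagion model: a single seeded 'mind' spreads an idea
    across random social ties, as described in the excerpt."""
    rng = random.Random(seed)
    # Nodes are individuals; edges are social connections.
    neighbors = {i: set() for i in range(n_nodes)}
    edges_added = 0
    while edges_added < n_edges:
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if a != b and b not in neighbors[a]:
            neighbors[a].add(b)
            neighbors[b].add(a)
            edges_added += 1
    # Seed the idea in one mind and watch it pass from mind to mind.
    believers, frontier = {0}, [0]
    while frontier:
        person = frontier.pop()
        for friend in neighbors[person]:
            # Each exposure transmits the idea with fixed probability.
            if friend not in believers and rng.random() < p_transmit:
                believers.add(friend)
                frontier.append(friend)
    return len(believers)

print(simulate_contagion())  # how many 'minds' the idea reached in one run
```

Raising the transmission probability or the number of ties lets the idea reach more of the network, which is the basic intuition behind treating ideas as viruses.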

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Friday, February 15, 2019

The Economic Effects of Facebook

Roberto Mosquera, Mofioluwasademi Odunowo, and others
December 1, 2018.
http://dx.doi.org/10.2139/ssrn.3312462

Abstract

Social media permeates many aspects of our lives, including how we connect with others, where we get our news and how we spend our time. Yet, we know little about the economic effects for users. Using a large field experiment with over 1,765 individuals, we document the value of Facebook to users and its causal effect on news consumption and awareness, well-being and daily activities. Participants reveal how much they value one week of Facebook usage and are then randomly assigned to a validated Facebook restriction or normal use. Those who are off Facebook for a week reduce news consumption, are less likely to recognize politically-skewed news stories, report being less depressed and engage in healthier activities. One week of Facebook is worth $25, and this increases by 15% after experiencing a Facebook restriction (26% for women), reflecting information loss or that using Facebook may be addictive.
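
To unpack the arithmetic in the abstract's last sentence, a quick sketch follows. The dollar figure and percentages are quoted from the abstract; the variable names and the reading of the percentages as increases over the $25 baseline are our own.

```python
# Willingness-to-accept figures quoted in the abstract.
baseline_value = 25.00                           # dollars per week of Facebook
after_restriction = baseline_value * 1.15        # +15% after a week off
after_restriction_women = baseline_value * 1.26  # +26% among women
print(round(after_restriction, 2), round(after_restriction_women, 2))  # 28.75 31.5
```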

Ethical/Clinical Question: Given this research, is it ethical and clinically appropriate to recommend that depressed patients stop using Facebook?

Thursday, January 24, 2019

Facebook’s Suicide Algorithms are Invasive

Michael Spencer
www.medium.com
Originally published January 6, 2019

Here is an excerpt:

Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk. Sadly, Facebook has a long history of conducting “experiments” on its users. It’s hard to own a stock that itself isn’t trustworthy either for democracy or our personal data.

Facebook acts a bit like a social surveillance program, where it passes the information (suicide score) along to law enforcement for wellness checks. That's pretty much like state surveillance; what's the difference?

Privacy experts say Facebook’s failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse. Facebook has a history with sharing our personal data with other technology companies. So we are being profiled in the most intimate ways by third parties we didn’t even know had our data.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence, but what is the real reason they build these constructs? It's to monetize our data, not to “help humanity” or connect the world.

The info is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Tuesday, March 28, 2017

Facebook Is Using Artificial Intelligence To Help Prevent Suicide

Alex Kantrowitz
BuzzFeed
Originally published March 1, 2017

Facebook is bringing its artificial intelligence expertise to bear on suicide prevention, an issue that’s been top of mind for CEO Mark Zuckerberg following a series of suicides livestreamed via the company’s Facebook Live video service in recent months.

“It’s hard to be running this company and feel like, okay, well, we didn’t do anything because no one reported it to us,” Zuckerberg told BuzzFeed News in an interview last month. “You want to go build the technology that enables the friends and people in the community to go reach out and help in examples like that.”

Today, Facebook is introducing an important piece of that technology — a suicide-prevention feature that uses AI to identify posts indicating suicidal or harmful thoughts. The AI scans the posts and their associated comments, compares them to others that merited intervention, and, in some cases, passes them along to its community team for review. The company plans to proactively reach out to users it believes are at risk, showing them a screen with suicide-prevention resources including options to contact a helpline or reach out to a friend.
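
Facebook has not published the classifier, but the pipeline the article describes (score a post against past cases, then escalate high scorers to human reviewers) can be sketched roughly as follows. The risk phrases, scoring rule, and threshold below are invented for illustration.

```python
# Hypothetical sketch only: Facebook's real classifier is not public.
RISK_PHRASES = {"want to die", "end it all", "no reason to live"}

def risk_score(post: str, comments: list[str]) -> float:
    """Crude proxy: fraction of known risk phrases found in the post or
    its comments (a real system would use a trained model, not a list)."""
    text = " ".join([post, *comments]).lower()
    return sum(phrase in text for phrase in RISK_PHRASES) / len(RISK_PHRASES)

def triage(post: str, comments: list[str], threshold: float = 0.3) -> str:
    """Escalate high-scoring posts to the human review queue."""
    if risk_score(post, comments) >= threshold:
        return "escalate_to_human_review"
    return "no_action"

print(triage("I feel like I want to die", ["please talk to someone"]))
```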

The article is here.

Monday, July 11, 2016

Facebook has a new process for discussing ethics. But is it ethical?

Anna Lauren Hoffman
The Guardian
Originally posted Friday 17 June 2016

Here is an excerpt:

Tellingly, Facebook’s descriptions of procedure and process offer little insight into the values and ideals that drive its decision-making. Instead, the authors offer vague, hollow and at times conflicting statements such as noting how its reviewers “consider how the research will improve our society, our community, and Facebook”.

This seemingly innocuous statement raises more ethical questions than it answers. What does Facebook think an “improved” society looks like? Who or what constitutes “our community?” What values inform their ideas of a better society?

Facebook sidesteps this completely by saying that ethical oversight necessarily involves subjectivity and a degree of discretion on the part of reviewers – yet simply noting that subjectivity is unavoidable does not negate the fact that explicit discussion of ethical values is important.

The article is here.

Saturday, July 9, 2016

Facebook Offers Tools for Those Who Fear a Friend May Be Suicidal

By Mike Isaac
The New York Times
June 14, 2016

Here is an excerpt:

With more than 1.65 billion members worldwide posting regularly about their behavior, Facebook is planning to take a more direct role in stopping suicide.

On Tuesday, in the biggest step by a major technology company to incorporate suicide prevention tools into its platform, the social network introduced mechanisms and processes to make it easier for people to help friends who post messages about suicide or self-harm.

With the new features, people can flag friends’ posts that they deem suicidal; the posts will be reviewed by a team at the social network that will then provide language to communicate with the person who is at risk, as well as information on suicide prevention.

The timing coincides with a surge in suicide rates in the United States to a 30-year high. The increase has been particularly steep among women and middle-aged Americans, reflecting widespread desperation. Last year, President Obama declared a World Suicide Prevention Day in September, calling on people to recognize mental health issues early and to reach out to support one another.

Tuesday, February 2, 2016

The spreading of misinformation online

M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H. E. Stanley, and W. Quattrociocchi
Proceedings of the National Academy of Sciences

Abstract

The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses such as the recent case of Jade Helm 15, where a simple military exercise turned out to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., “echo chambers.” Indeed, homogeneity appears to be the primary driver for the diffusion of contents and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and we show that homogeneity and polarization are the main determinants for predicting cascades’ size.
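
The authors' model is not reproduced in the abstract, but the idea that a rumor "percolates" only across like-minded ties can be sketched as a toy simulation. Everything below, including the opinion-distance threshold, is an assumption for illustration, not the paper's actual model.

```python
import random

def cascade_size(n=500, avg_degree=6, delta=0.2, seed=1):
    """Toy percolation-style cascade: a rumor passes between neighbors
    only when their 'opinions' are close, so homogeneous clusters
    (echo chambers) carry it furthest."""
    rng = random.Random(seed)
    opinion = [rng.random() for _ in range(n)]  # each user's leaning in [0, 1]
    # Random network with roughly avg_degree ties per user.
    ties = {i: set() for i in range(n)}
    for _ in range(n * avg_degree // 2):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            ties[a].add(b)
            ties[b].add(a)
    shared, frontier = {0}, [0]
    while frontier:
        u = frontier.pop()
        for v in ties[u]:
            # The rumor 'percolates' only across like-minded ties.
            if v not in shared and abs(opinion[u] - opinion[v]) < delta:
                shared.add(v)
                frontier.append(v)
    return len(shared)

# Wider tolerance (less homogeneity required) yields larger cascades.
print([cascade_size(delta=d) for d in (0.1, 0.3, 0.9)])
```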

The article is here.

Wednesday, June 17, 2015

Tim Cook says privacy is an issue of morality

By Chris Matyszczyk
cnet.com
Originally posted on June 3, 2015

Here is an excerpt:

Cook, though, presented the issue in deeply political terms. He said: "We believe that people have a fundamental right to privacy. The American people demand it, the Constitution demands it, morality demands it."

Morality is a feast that moves as it's eaten. It's admirable that Cook would appeal to our moral core, but how much is there left? And how many can identify it?

The entire article is here.

Tuesday, January 27, 2015

Social Media Ethics

Religion and Ethics Newsweekly
Originally published January 9, 2015

Some social media companies—including Facebook—have run experiments to learn what influences user behavior. Many of these experiments have troubled both social media users and privacy advocates, who worry that this research and use of personal information is unethical.


Sunday, November 2, 2014

Do research ethics need updating for the digital age?

By Michael W. Ross, PhD, MD, MPH
The Monitor on Psychology
October 2014, Vol 45, No. 9
Print version: page 64

Over a week in early January 2012, the news feeds of more than 600,000 Facebook users changed subtly: Without users' knowledge, researchers manipulated the feeds' emotional content to examine how Facebook friends' emotions affected one another.

The study on "massive-scale emotional contagion through social networks" (PNAS, June 17, 2014) generated significant debate in both public and scientific spheres. Much of this debate centered on ethical aspects of the study. In an editorial, even the journal's editor-in-chief voiced concern that the "collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining fully informed consent and allowing participants to opt out" (Verma, 2014).

There has been extensive and incisive debate about the ethical and scientific issues arising from the study.

Saturday, September 20, 2014

When Mental Health Professionals are on Facebook

By Steven Petrow
The Washington Post
Originally posted on August 25, 2014

For the past two weeks, whenever I’ve scrolled through my Facebook newsfeed I’ve come to the section “People You May Know.” The suggestions offered have included relatives, co-workers, some people I don’t even like in “real” life — and my current psychologist. “OMG!” I’ve winced repeatedly at the profile photo of my shrink, who for the sake of his privacy I’ll just call Dr. E.

Still, being the curious sort, I clicked to view his page, which isn’t very well protected from eyes like mine. For starters, there are 12 photos of him available for all the world to enjoy, several of them shirtless and one that had a “friend” of his posting “Woof!” underneath it. I also discovered pictures of Dr. E from high school with two nice-looking young ladies. Although I’ve known he was gay, I started to wonder: Was he bisexual then? When did he come out? I found myself thinking much more about his personal life than any patient should.

Among Dr. E’s Facebook friends was another psychologist, one who seemed to deploy no privacy safeguards whatsoever. Any patient clicking on his Facebook page could see tons of photos, including those of his wedding and honeymoon, and even his attendance at a celebration of “Bush 43’s” last night in office. (That makes it a good bet he’s a Dem, which might be TMI for a GOP patient.)

The entire article is here.

Monday, August 18, 2014

When Cupid fires arrows double-blind: implicit informed agreement for online research?

By Anders Sandberg
Practical Ethics
Originally posted on Jul 31, 2014

A while ago Facebook got into the news for experimenting on its subscribers, leading to a fair bit of grumbling. Now the dating site OKCupid has proudly outed itself: We Experiment On Human Beings! Unethical or not?

(cut)

The harm angle is more interesting. While Facebook slightly affected the emotions of people who might not have expected emotional manipulation, OKCupid is all about emotions and emotion-laden social interaction. People date because of the site. People have sex because of the site. People marry because of the site. Potentially, manipulations could have far more far-reaching consequences on OKCupid than on Facebook.

The entire blog post is here.

Thursday, July 17, 2014

Furor Erupts Over Facebook's Experiment on Users

By Reed Albergotti
The Wall Street Journal
Originally published June 30, 2014

Here is an excerpt:

The research, published in the June issue of the Proceedings of the National Academy of Sciences, sparked a different emotion, outrage, among some people who say Facebook toyed with its users' emotions and used members as guinea pigs.

"What many of us feared is already a reality: Facebook is using us as lab rats, and not just to figure out which ads we'll respond to but actually change our emotion," wrote Animalnewyork.com in a blog post that drew attention to the study Friday morning.

Facebook has long run social experiments. Its Data Science Team is tasked with turning the reams of information created by the more than 800 million people who log on every day into usable scientific research.

The entire article is here.

Thursday, July 10, 2014

Facebook’s Unethical Experiment

It intentionally manipulated users’ emotions without their knowledge.

By Katy Waldman
Slate
Originally published on June 28, 2014

Facebook has been experimenting on us. A new paper in the Proceedings of the National Academy of Sciences reveals that Facebook intentionally manipulated the news feeds of almost 700,000 users in order to study “emotional contagion through social networks.”

The researchers, who are affiliated with Facebook, Cornell, and the University of California–San Francisco, tested whether reducing the number of positive messages people saw made those people less likely to post positive content themselves. The same went for negative messages: Would scrubbing posts with sad or angry words from someone’s Facebook feed make that person write fewer gloomy updates?

They tweaked the algorithm by which Facebook sweeps posts into members’ news feeds, using a program to analyze whether any given textual snippet contained positive or negative words. Some people were fed primarily neutral to happy information from their friends; others, primarily neutral to sad. Then everyone’s subsequent posts were evaluated for affective meanings.
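
As a rough illustration of the word-counting approach the excerpt describes (the study used the LIWC word lists; the tiny word lists, omission rate, and function names below are our assumptions):

```python
import random

POSITIVE = {"happy", "great", "love", "wonderful"}   # stand-in word lists
NEGATIVE = {"sad", "angry", "awful", "terrible"}

def post_valence(text: str) -> str:
    """Classify a post by checking it for positive and negative words."""
    words = set(text.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "neutral"

def build_feed(posts, suppress="positive", omit_prob=0.5, seed=42):
    """Drop some posts of the suppressed valence, mimicking the
    reduced-positivity / reduced-negativity feed conditions."""
    rng = random.Random(seed)
    return [p for p in posts
            if post_valence(p) != suppress or rng.random() > omit_prob]

print(build_feed(["what a wonderful day", "feeling sad today", "lunch soon"]))
```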

The entire story is here.

Wednesday, March 5, 2014

Senate challenger Milton Wolf apologizes for posting X-ray photos

By The Associated Press
The Kansas City Star
Originally published February 23, 2014

A tea party-backed Leawood radiologist who is trying to unseat longtime Republican U.S. Sen. Pat Roberts has apologized for posting X-ray photos of fatal gunshot wounds and medical injuries on his personal Facebook page several years ago. But he called the revelation about the images the work of a desperate incumbent.

In addition to the images, Milton Wolf also participated in online commentary layered with macabre jokes and descriptions of carnage, The Topeka Capital-Journal reported.

The report about the images, which came from hospitals in the Kansas City area on both sides of the state line, drew criticism from medical professionals who called their display on social media irresponsible.

The entire story is here.

Tuesday, February 11, 2014

Decline Facebook 'Friend' Appeals from Patients, Groups Say

By  David Pittman
Washington Correspondent, MedPage Today
Originally published April 12, 2013, and still relevant today

Physicians should avoid making or accepting "friend" requests through social networking websites with past or current patients, a new policy statement advises.

Instead, doctors should separate their professional and social lives online and direct patients to correct avenues of information if they contact doctors through social networks, according to the policy statement issued jointly on Thursday by the American College of Physicians (ACP) and the Federation of State Medical Boards (FSMB).

"There's this notion of blurring of our identity, blurring of our persona," David Fleming, MD, chair of ACP's Ethics, Professionalism, and Human Rights Committee, which helped draft the guidelines, said here at the ACP's annual meeting.

The entire article is here.

Thursday, July 18, 2013

When states monitored their citizens we used to call them authoritarian. Now we think this is what keeps us safe

By Suzanne Moore
The Guardian - Comments
Originally published July 3, 2013

Here is an excerpt:

What I failed to grasp, though, was quite how much I had already surrendered my liberty, not just personally but my political ideals about what liberty means. I simply took for granted that everyone can see everything and laughed at the idea that Obama will be looking at my pictures of a cat dressed as a lobster. I was resigned to the fact that some random FBI merchant will wonder at the inane and profane nature of my drunken tweets.

Slowly but surely, The Lives of Others have become ours. CCTV cameras everywhere watch us, so we no longer watch out for each other. Public space is controlled. Of course, much CCTV footage is never seen and often useless. But we don't need the panopticon once we have built one in our own minds. We are all suspects.

Or at least consumers. iTunes thinks I might like Bowie; Amazon thinks I want a compact tumble dryer. Really? Facebook seems to think I want to date men in uniform. I revel in the fact that the algorithms get it as wrong as the man who knocks on my door selling fish out of a van. "And not just fish," as he sometimes says mysteriously.

The entire comment is here.

Tuesday, June 4, 2013

Scholars call for new ethical guidelines to direct research on social networking

By Jennifer Sereno
University of Wisconsin-Madison News
Originally published January 2013

The unique data collection capabilities of social networking and online gaming websites require new ethical guidance from federal regulators concerning online research involving adolescent subjects, an ethics scholar from the Morgridge Institute for Research and a computer and learning sciences expert from Tufts University argue in the journal Science.

Increasingly, academics are designing and implementing research interventions on social network sites such as Facebook to learn how these interventions may affect user behavior, knowledge, attitudes and psychological health. Online games are being used as research interventions. However, the ability to mine user data (including information about Facebook "friends"), sensitive personal information and behavior raises concerns that deserve closer ethical scrutiny, say Pilar Ossorio and R. Benjamin Shapiro.

Ossorio is a bioethics scholar-in-residence at the Morgridge Institute, a private, nonprofit biomedical research institute on the University of Wisconsin-Madison campus. She also holds joint appointments as a professor of law and bioethics at the University of Wisconsin Law School and the School of Medicine and Public Health. Shapiro is an assistant professor in computer science and education at Tufts, where he is a member of the Center for Engineering Education and Outreach. He previously held appointments in educational research at Morgridge and the Wisconsin Institute for Discovery.

"Given the unprecedented ability of online research using social network sites to identify sensitive personal information concerning the research subject and the subject's online acquaintances, researchers need clarification concerning applicable ethical and regulatory standards," Ossorio says. "Regulators need greater insights into the possible benefits and harms of online social network research, and researchers need to better understand the relevant ethical and regulatory universe so they can design technical strategies for minimizing harm and complying with legal requirements."

For instance, Ossorio says, researchers may be able to design game features that detect player distress and respond by modifying the game environment, and marry those features to data collection technologies that maximally protect users' privacy while still offering useful data to researchers.

Consent for online research is tricky, particularly when it involves minors. Under Shapiro and Ossorio's analysis, current law does not require that researchers obtain parental permission to conduct studies of adolescents on social networking sites. Parental permission is required for younger children, while adolescents and adults provide their own consent. Of course, parents can prohibit their adolescents from any online activity, including research participation, regardless of legal limits on researchers. Parents have the same amount of control over their adolescents' online research participation as they do over any other online activity in which their teens engage.

"Researchers should use the online environment to deliver innovative, informative consent processes that help participants understand the dimensions of the research and the accompanying data collection," Shapiro says. "This is especially important given the general public's ignorance about the ability to collect massive amounts of personal data over the Internet."

If traditional approaches to consent are of limited value for protecting online subjects, Ossorio says, then researchers and regulators should emphasize other aspects of research ethics, such as using all reasonable approaches to minimize research risks. Also, researchers should seek innovative methods for generating transparency around the research enterprise.

Writing in the Policy Forum section of the Jan. 11 edition of Science, Shapiro and Ossorio conclude by emphasizing that the richness of online information should not become the sole domain of commercial marketing interests but should be used to advance understanding of human behavior and inspire positive social outcomes. Elucidating ethical and legal guidelines for design research on social media will create new opportunities for researchers to understand and improve society.

The news release is here.

Thursday, September 13, 2012

U.S. officials launch new strategy to prevent suicide

Reuters
Originally published September 10, 2012

A new nationwide strategy to prevent suicides, especially among U.S. military veterans and younger Americans, is tapping into Facebook, mobile apps and other technologies as part of a community-driven push to report concerns before someone takes his own life.

(cut)

The initiative includes $55.6 million in grant funding for suicide prevention programs.

Suicide is a growing concern and results in the deaths of more than twice as many people on average as homicide, officials said.

On average, about 100 Americans die each day from suicide, officials said. More than 8 million U.S. adults seriously thought about suicide in the last year, according to the Substance Abuse and Mental Health Services Administration.