Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Social Media.

Thursday, June 27, 2019

This doctor is recruiting an army of medical experts to drown out fake health news on Instagram and Twitter

Christina Farr
CNBC.com
Originally published June 2, 2019

The antidote to fake health news? According to Austin Chiang, the first chief medical social media officer at a top hospital, it’s to drown out untrustworthy content with tweets, pics and posts from medical experts that the average American can relate to.

Chiang is a Harvard-trained gastroenterologist with a side passion for social media. On Instagram, where he refers to himself as a “GI Doctor,” he has 20,000 followers, making him one of the most influential docs aside from TV personalities, plastic surgeons and New York’s so-called “most eligible bachelor,” Dr. Mike.

Every few days, he’ll share a selfie or a photo of himself in scrubs along with captions about the latest research or insights from conferences he attends, or advice to patients trying to sort out real information from rumors. He’s also active on Twitter, Microsoft’s LinkedIn and Facebook (which owns Instagram).

But Chiang recognizes that his following pales in comparison to accounts like “Medical Medium,” where two million people tune in to the musings of a psychic, who raves about vegetables that will cure diseases ranging from depression to diabetes. (Gwyneth Paltrow’s Goop has written about the account’s creator glowingly.) Or on Pinterest and Facebook, where anti-vaccination content has been far more prominent than legitimate public health information. Meanwhile, on e-commerce sites like Amazon and eBay, vendors have hawked unproven and dangerous health “cures,” including an industrial-strength bleach that is billed as eliminating autism in children.

The info is here.

Saturday, June 8, 2019

Anger, Fear, and Echo Chambers: The Emotional Basis for Online Behavior

Wollebæk, D., Karlsen, R., Steen-Johnsen, K., & Enjolras, B.
(2019). Social Media + Society. 
https://doi.org/10.1177/2056305119829859

Abstract

Emotions, such as anger and fear, have been shown to influence people’s political behavior. However, few studies link emotions specifically to how people debate political issues and seek political information online. In this article, we examine how anger and fear are related to politics-oriented digital behavior, attempting to bridge the gap between the thus far disconnected literature on political psychology and the digital media. Based on survey data, we show that anger and fear are connected to distinct behaviors online. Angry people are more likely to engage in debates with people having both similar and opposing views. They also seek out information confirming their views more frequently. Anxious individuals, by contrast, tend to seek out information contradicting their opinions. These findings reiterate predictions made in the extant literature concerning the role of emotions in politics. Thus, we argue that anger reinforces echo chamber dynamics and trench warfare dynamics in the digital public sphere, while fear counteracts these dynamics.

Discussion and Conclusion

The analyses have shown that anger and fear have distinct effects on echo chamber and trench warfare dynamics in the digital sphere. With regard to the debate dimension, we have shown that anger is positively related to participation in online debates. This finding confirms the results of a recent study by Hasell and Weeks (2016). Importantly, however, the impact of anger is not limited to echo chamber discussions with like-minded and similar people. Angry individuals are also over-represented in debates between people holding opposing views and belonging to a different class or ethnic background. This entails that regarding online debates, anger contributes more to what has been previously labeled as trench warfare dynamics than to echo chamber dynamics.

The research is here.

Wednesday, May 29, 2019

The Problem with Facebook


Making Sense Podcast

Originally posted on March 27, 2019

In this episode of the Making Sense podcast, Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee has been a Silicon Valley investor for thirty-five years. He has cofounded successful venture funds including Elevation with U2’s Bono. He is a former mentor to Facebook CEO Mark Zuckerberg and helped recruit COO Sheryl Sandberg to the company. He holds a B.A. from Yale University and an M.B.A. from the Tuck School of Business at Dartmouth College.

The podcast is here.

The fundamental ethical problems with social media companies like Facebook and Google start about 20 minutes into the podcast.

Monday, May 6, 2019

Ethical Considerations Regarding Internet Searches for Patient Information.

Charles C. Dike, Philip Candilis, Barbara Kocsis, and others
Psychiatric Services
Published online: 17 Jan 2019

Abstract

In 2010, the American Medical Association developed policies regarding professionalism in the use of social media, but it did not present specific ethical guidelines on targeted Internet searches for information about a patient or the patient’s family members. The American Psychiatric Association (APA) provided some guidance in 2016 through the Opinions of the Ethics Committee, but published opinions are limited. On behalf of the APA Ethics Committee, the authors developed a resource document describing ethical considerations regarding Internet and social media searches for patient information, from which this article has been adapted. Recommendations include the following. Except in emergencies, it is advisable to obtain a patient’s informed consent before performing such a search. The psychiatrist should be aware of his or her motivations for performing a search and should avoid doing so unless it serves the patient’s best interests. Information obtained through such searches should be handled with sensitivity regarding the patient’s privacy. The psychiatrist should consider how the search might influence the clinician-patient relationship. When interpreted with caution, Internet- and social media–based information may be appropriate to consider in forensic evaluations.

The info is here.

Thursday, May 2, 2019

A Facebook request: Write a code of tech ethics

Mike Godwin
www.latimes.com
Originally published April 30, 2019

Facebook is preparing to pay a multi-billion-dollar fine and dealing with ongoing ire from all corners for its user privacy lapses, the viral transmission of lies during elections, and delivery of ads in ways that skew along gender and racial lines. To grapple with these problems (and to get ahead of the bad PR they created), Chief Executive Mark Zuckerberg has proposed that governments get together and set some laws and regulations for Facebook to follow.

But Zuckerberg should be aiming higher. The question isn’t just what rules should a reformed Facebook follow. The bigger question is what all the big tech companies’ relationships with users should look like. The framework needed can’t be created out of whole cloth just by new government regulation; it has to be grounded in professional ethics.

Doctors and lawyers, as they became increasingly professionalized in the 19th century, developed formal ethical codes that became the seeds of modern-day professional practice. Tech-company professionals should follow their example. An industry-wide code of ethics could guide companies through the big questions of privacy and harmful content.

The info is here.

Editor's note: Many social media companies engage in unethical behavior on a regular basis, typically revolving around lack of consent, lack of privacy standards, filter bubble (personalized algorithms) issues, lack of accountability, lack of transparency, harmful content, and third party use of data.

Friday, April 26, 2019

Social media giants no longer can avoid moral compass

Don Hepburn
thehill.com
Originally published April 1, 2019

Here is an excerpt:

There are genuine moral, legal and technical dilemmas in addressing the challenges raised by the ubiquitous nature of the not-so-new social media conglomerates. Why, then, are social media giants avoiding the moral compass, evading legal guidelines and ignoring technical solutions available to them? The answer is, their corporate culture refuses to be held accountable to the same standards the public has applied to all other global corporations for the past five decades.

A wholesale change of culture and leadership is required within the social media industry. The culture of “everything goes” because “we are the future” needs to be more than tweaked; it must come to an end. Like any large conglomerate, social media platforms cannot ignore the public’s demand that they act with some semblance of responsibility. Just like the early stages of the U.S. coal, oil and chemical industries, the social media industry is impacting not only our physical environment but the social good and public safety. No serious journalism organization would ever allow a stranger to write their own hate-filled stories (with photos) for their newspaper’s daily headline — that’s why there’s a position called editor-in-chief.

If social media giants insist they are open platforms, then anyone can purposefully exploit them for good or evil. But if social media platforms demonstrate no moral or ethical standards, they should be subject to some form of government regulation. We have regulatory environments where we see the need to protect the public good against the need for profit-driven enterprises; why should social media platforms be given preferential treatment?

The info is here.

Thursday, April 25, 2019

The New Science of How to Argue—Constructively

Jesse Singal
The Atlantic
Originally published April 7, 2019

Here is an excerpt:

Once you know a term like decoupling, you can identify instances in which a disagreement isn’t really about X anymore, but about Y and Z. When some readers first raised doubts about a now-discredited Rolling Stone story describing a horrific gang rape at the University of Virginia, they noted inconsistencies in the narrative. Others insisted that such commentary fit into destructive tropes about women fabricating rape claims, and therefore should be rejected on its face. The two sides weren’t really talking; one was debating whether the story was a hoax, while the other was responding to the broader issue of whether rape allegations are taken seriously. Likewise, when scientists bring forth solid evidence that sexual orientation is innate, or close to it, conservatives have lashed out against findings that would “normalize” homosexuality. But the dispute over which sexual acts, if any, society should discourage is totally separate from the question of whether sexual orientation is, in fact, inborn. Because of a failure to decouple, people respond indignantly to factual claims when they’re actually upset about how those claims might be interpreted.

Nerst believes that the world can be divided roughly into “high decouplers,” for whom decoupling comes easy, and “low decouplers,” for whom it does not. This is the sort of area where erisology could produce empirical insights: What characterizes people’s ability to decouple? Nerst believes that hard-science types are better at it, on average, while artistic types are worse. After all, part of being an artist is seeing connections where other people don’t—so maybe it’s harder for them to not see connections in some cases. Nerst might be wrong. Either way, it’s the sort of claim that could be fairly easily tested if the discipline caught on.

The info is here.

Friday, March 29, 2019

The history and future of digital health in the field of behavioral medicine

Danielle Arigo, Danielle E. Jake-Schoffman, Kathleen Wolin, Ellen Beckjord, & Eric B. Hekler
J Behav Med (2019) 42: 67.
https://doi.org/10.1007/s10865-018-9966-z

Abstract

Since its earliest days, the field of behavioral medicine has leveraged technology to increase the reach and effectiveness of its interventions. Here, we highlight key areas of opportunity and recommend next steps to further advance intervention development, evaluation, and commercialization with a focus on three technologies: mobile applications (apps), social media, and wearable devices. Ultimately, we argue that the future of digital health behavioral science research lies in finding ways to advance more robust academic-industry partnerships. These include academics consciously working towards preparing and training the workforce of the twenty-first century for digital health, actively working towards advancing methods that can balance the needs for efficiency in industry with the desire for rigor and reproducibility in academia, and the need to advance common practices and procedures that support more ethical practices for promoting healthy behavior.

Here is a portion of the Summary

An unknown landscape of privacy and data security

Another relatively new set of challenges centers around the issues of privacy and data security presented by digital health tools. First, some commercially available technologies that were originally produced for purposes other than promoting healthy behavior (e.g., social media) are now being used to study health behavior and deliver interventions. This poses a variety of potential privacy issues depending on the privacy settings used, including the fact that data from non-participants may inadvertently be viewed and collected, and their rights should also be considered as part of study procedures (Arigo et al., 2018).  Privacy may be of particular concern as apps begin to incorporate additional smartphone technologies such as GPS location tracking and cameras (Nebeker et al., 2015).  Second, for commercial products that were originally designed for health behavior change (e.g., apps), researchers need to carefully read and understand the associated privacy and security agreements, be sure that participants understand these agreements, and include a summary of this information in their applications to ethics review boards.

Sunday, March 24, 2019

An Ethical Obligation for Bioethicists to Utilize Social Media

Herron, PD
Hastings Cent Rep. 2019 Jan;49(1):39-40.
doi: 10.1002/hast.978.

Here is an excerpt:

Unfortunately, it appears that bioethicists are no better informed than other health professionals, policy experts, or (even) elected officials, and they are sometimes resistant to becoming informed. But bioethicists have a duty to develop our knowledge and usefulness with respect to social media; many of our skills can and should be adapted to this area. There is growing evidence of the power of social media to foster dissemination of misinformation. The harms associated with misinformation or “fake news” are not new threats. Historically, there have always been individuals or organized efforts to propagate false information or to deceive others. Social media and other technologies have provided the ability to rapidly and expansively share both information and misinformation. Bioethics serves society by offering guidance about ethical issues associated with advances in medicine, science, and technology. Much of the public’s conversation about and exposure to these emerging issues occurs online. If we bioethicists are not part of the mix, we risk yielding to alternative and less authoritative sources of information. Social media’s transformative impact has led some to view it as not just a personal tool but the equivalent to a public utility, which, as such, should be publicly regulated. Bioethicists can also play a significant part in this dialogue. But to do so, we need to engage with social media. We need to ensure that our understanding of social media is based on experiential use, not just abstract theory.

Bioethics has expanded over the past few decades, extending beyond the academy to include, for example, clinical ethics consultants and leadership positions in public affairs and public health policy. These varied roles bring weighty responsibilities and impose a need for critical reflection on how bioethicists can best serve the public interest in a way that reflects and is accountable to the public’s needs.

Wednesday, March 20, 2019

Should This Exist? The Ethics Of New Technology

Lulu Garcia-Navarro
www.NPR.org
Originally posted March 3, 2019

Not every new technology product hits the shelves.

Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.

Or maybe, it might be too dangerous.

Recently, the research firm OpenAI announced that it would not be releasing a version of a text generator they developed, because of fears that it could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.

The organization's GPT-2 text generator can generate paragraphs of coherent, continuing text based off of a prompt from a human. For example, when inputted with the claim, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spit out the transcript of "his acceptance speech" that read in part:
It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.
Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.
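
Editor's note: For readers curious about the mechanics, below is a minimal sketch of prompt-based generation with the small, publicly released GPT-2 checkpoint, using the Hugging Face transformers library. The model name, sampling settings, and prompt handling are illustrative assumptions, not OpenAI's own code or the withheld larger model.

# A minimal sketch (not OpenAI's code): continue a prompt with the small
# public "gpt2" checkpoint via the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("John F. Kennedy was just elected President of the United States "
          "after rising from the grave decades after his assassination.")
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 100 new tokens that continue the prompt.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))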

The info is here.

Wednesday, February 27, 2019

Business Ethics And Integrity: It Starts With The Tone At The Top

Betsy Atkins
Forbes.com
Originally posted 7, 2019

Here is the conclusion:

Transparency leads to empowerment:

Share your successes and your failures and look to everyone to help build a better company. By including everyone, you create the elusive “we” that is the essence of company culture. Transparency leads to a company culture that creates an outcome because the CEO creates a bigger purpose for the organization than just making money or reaching quarterly numbers. Company culture guru Kenneth Kurtzman, author of Common Purpose, said it best: “CEOs need to know how to read their organizations’ emotional tone and need to engage behaviors that build trust including leading-by-listening, building bridges, showing compassion and caring, demonstrating their own commitment to the organization, and giving employees the authority to do their job while inspiring them to do their best work.”

There is no substitute for CEO leadership in creating a company culture of integrity.  A board that supports the CEO in building a company culture of integrity, transparency, and collaboration will be supporting a successful company.

The info is here.

Friday, February 22, 2019

Facebook Backs University AI Ethics Institute With $7.5 Million

Sam Shead
Forbes.com
Originally posted January 20, 2019

Facebook is backing an AI ethics institute at the Technical University of Munich with $7.5 million.

The TUM Institute for Ethics in Artificial Intelligence, which was announced on Sunday, will aim to explore fundamental issues affecting the use and impact of AI, Facebook said.

AI is poised to have a profound impact on areas like climate change and healthcare but it has its risks.

"We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy. Our evidence-based research will address issues that lie at the interface of technology and human values," said TUM Professor Dr. Christoph LĂĽtge, who will lead the institute.

"Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction."

The info is here.

Friday, February 15, 2019

The Economic Effects of Facebook

Mosquera, Roberto, Odunowo, Mofioluwasademi, and others
December 1, 2018.
http://dx.doi.org/10.2139/ssrn.3312462

Abstract

Social media permeates many aspects of our lives, including how we connect with others, where we get our news and how we spend our time. Yet, we know little about the economic effects for users. Using a large field experiment with over 1,765 individuals, we document the value of Facebook to users and its causal effect on news consumption and awareness, well-being and daily activities. Participants reveal how much they value one week of Facebook usage and are then randomly assigned to a validated Facebook restriction or normal use. Those who are off Facebook for a week reduce news consumption, are less likely to recognize politically-skewed news stories, report being less depressed and engage in healthier activities. One week of Facebook is worth $25, and this increases by 15% after experiencing a Facebook restriction (26% for women), reflecting information loss or that using Facebook may be addictive.

Ethical/Clinical Question: Knowing this research, is it ethical and clinically appropriate to recommend that depressed patients stop using Facebook?

Wednesday, January 16, 2019

What Is the Right to Privacy?

Andrei Marmor
(2015). Philosophy & Public Affairs, 43(1), 3–26.

The right to privacy is a curious kind of right. Most people think that we have a general right to privacy. But when you look at the kind of issues that lawyers and philosophers label as concerns about privacy, you see widely differing views about the scope of the right and the kind of cases that fall under its purview. Consequently, it has become difficult to articulate the underlying interest that the right to privacy is there to protect—so much so that some philosophers have come to doubt that there is any underlying interest protected by it. According to Judith Thomson, for example, privacy is a cluster of derivative rights, some of them derived from rights to own or use your property, others from the right to your person or your right to decide what to do with your body, and so on. Thomson’s position starts from a sound observation, and I will begin by explaining why. The conclusion I will reach, however, is very different. I will argue that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. I will strive to show that this underlying interest justifies the right to privacy and explains its proper scope, though the scope of the right might be narrower, and fuzzier in its boundaries, than is commonly understood.

The info is here.

Saturday, January 5, 2019

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS, 114(28), 7313–7318; published ahead of print June 26, 2017. https://doi.org/10.1073/pnas.1618923114

Abstract

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.

Significance

Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.
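
Editor's note: As a rough illustration of the dictionary-based measure described above, the toy sketch below counts moral-emotional words in a message against a small placeholder lexicon and applies the paper's reported 20%-per-word diffusion estimate as a multiplier. The word list and the multiplicative reading are assumptions for illustration, not the authors' lexicon or statistical model.

# Toy sketch: count moral-emotional words in a message and apply the
# reported ~20% diffusion increase per additional word as a multiplier.
MORAL_EMOTIONAL_WORDS = {"attack", "shame", "evil", "fight", "hate", "destroy"}  # placeholder lexicon

def moral_emotional_count(message: str) -> int:
    # Number of tokens in the message found in the placeholder lexicon.
    tokens = message.lower().split()
    return sum(1 for t in tokens if t.strip(".,!?") in MORAL_EMOTIONAL_WORDS)

count = moral_emotional_count("We must fight this evil policy!")
print(count, round(1.20 ** count, 2))  # 2 matching words -> roughly 1.44x expected diffusion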

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans’ inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest profile examples: Facebook may be able to infer protected attributes such as sexual orientation, race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on the eligibility for loans and infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.

The info is here.

Friday, November 30, 2018

To regulate AI we need new laws, not just a code of ethics

Paul Chadwick
The Guardian
Originally posted October 28, 2018

Here is an excerpt:

To Nemitz, “the absence of such framing for the internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call”.

Nemitz identifies four bases of digital power which create and then reinforce its unhealthy concentration in too few hands: lots of money, which means influence; control of “infrastructures of public discourse”; collection of personal data and profiling of people; and domination of investment in AI, most of it a “black box” not open to public scrutiny.

The key question is which of the challenges of AI “can be safely and with good conscience left to ethics” and which need law. Nemitz sees much that needs law.

In an argument both biting and sophisticated, Nemitz sketches a regulatory framework for AI that will seem to some like the GDPR on steroids.

Among several large claims, Nemitz argues that “not regulating these all pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships.”

The info is here.

Saturday, November 17, 2018

The New Age of Patient Autonomy: Implications for the Patient-Physician Relationship

Madison Kilbride and Steven Joffe
JAMA. Published online October 15, 2018.

Here is an excerpt:

The New Age of Patient Autonomy

The abandonment of strong medical paternalism led scholars to explore alternative models of the patient-physician relationship that emphasize patient choice. Shared decision making gained traction in the 1980s and remains the preferred model for health care interactions. Broadly, shared decision making involves the physician and patient working together to make medical decisions that accord with the patient’s values and preferences. Ideally, for many decisions, the physician and patient engage in an informational volley—the physician provides information about the range of options, and the patient expresses his or her values and preferences. In some cases, the physician may need to help the patient identify or clarify his or her values and goals of care in light of the available treatment options.

Although there is general consensus that patients should participate in and ultimately make their own medical decisions whenever possible, most versions of shared decision making take for granted that the physician has access to knowledge, understanding, and medical resources that the patient lacks. As such, the shift from medical paternalism to patient autonomy did not wholly transform the physician’s role in the therapeutic relationship.

In recent years, however, widespread access to the internet and social media has reduced physicians’ dominion over medical information and, increasingly, over patients’ access to medical products and services. It is no longer the case that patients simply visit their physicians, describe their symptoms, and wait for the differential diagnosis. Today, some patients arrive at the physician’s office having thoroughly researched their symptoms and identified possible diagnoses. Indeed, some patients who have lived with rare diseases may even know more about their conditions than some of the physicians with whom they consult.

The info is here.
