Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Big Data.

Thursday, January 2, 2020

The Tricky Ethics of Google's Project Nightingale Effort

Cason Schmit
nextgov.com
Originally posted December 3, 2019

The nation’s second-largest health system, Ascension, has agreed to allow the software behemoth Google access to tens of millions of patient records. The partnership, called Project Nightingale, aims to improve how information is used for patient care. Specifically, Ascension and Google are trying to build tools, including artificial intelligence and machine learning, “to make health records more useful, more accessible and more searchable” for doctors.

Ascension did not announce the partnership: The Wall Street Journal first reported it.

Patients and doctors have raised privacy concerns about the plan. Lack of notice to doctors and consent from patients are the primary concerns.

As a public health lawyer, I study the legal and ethical basis for using data to promote public health. Information can be used to identify health threats, understand how diseases spread and decide how to spend resources. But it’s more complicated than that.

The law deals with what can be done with data; this piece focuses on ethics, which asks what should be done.

Beyond Hippocrates

Big-data projects like this one should always be ethically scrutinized. However, data ethics debates are often narrowly focused on consent issues.

In fact, ethical determinations require balancing different, and sometimes competing, ethical principles. Sometimes it might be ethical to collect and use highly sensitive information without getting an individual’s consent.

The info is here.

Wednesday, May 8, 2019

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are foregoing formulating hypotheses in favor of allowing data to make inferences on particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Tuesday, January 8, 2019

Algorithmic governance: Developing a research agenda through the power of collective intelligence

John Danaher, Michael J. Hogan, Chris Noone, Ronan Kennedy, et al.
Big Data & Society
July–December 2017: 1–21

Abstract

We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal that is also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

The paper is here.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest profile examples: Facebook may be able to infer protected attributes such as sexual orientation, race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on the eligibility for loans and infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.
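
The inference risk the authors describe is concrete enough to sketch in a few lines of code. The toy model below uses synthetic data, hypothetical feature names, and an ordinary off-the-shelf classifier rather than any platform's actual pipeline; it simply shows how an undisclosed sensitive trait can be recovered from seemingly innocuous behavioural signals:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for "input data": 5,000 users, 200 binary behavioural
# signals (e.g. pages liked, items clicked). Nothing here is real data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 200))
hidden_pattern = rng.normal(0, 1, 200)          # latent link between behaviour and trait
p = 1 / (1 + np.exp(-(X @ hidden_pattern) / np.sqrt(200)))
y = rng.binomial(1, p)                          # the undisclosed sensitive trait

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC for inferring the undisclosed trait: {auc:.2f}")

The point of the sketch is the one the authors make: the model never receives the sensitive attribute as an input, yet the output of processing reconstructs it, which is why controls over collection alone are not enough.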

The info is here.

Thursday, November 8, 2018

Do We Need To Teach Ethics And Empathy To Data Scientists?

Kalev Leetaru
Forbes.com
Originally posted October 8, 2018

Here is an excerpt:

One of the most frightening aspects of the modern web is the speed at which it has struck down decades of legislation and professional norms regarding personal privacy and the ethics of turning ordinary citizens into laboratory rats to be experimented on against their wills. In the space of just two decades the online world has weaponized personalization and data brokering, stripped away the last vestiges of privacy, centralized control over the world’s information and communications channels, changed the public’s understanding of the right over their digital selves and profoundly reshaped how the scholarly world views research ethics, informed consent and the right to opt out of being turned into a digital guinea pig.

It is the latter which, in many ways, has driven each of the former changes. Academia’s changing views towards IRB and ethical review have produced a new generation of programmers and data scientists who view research ethics as merely an obsolete historical relic, an obnoxious barrier preventing them from doing as they pleased to an unsuspecting public.

(cut)

Ironically, however, when asked whether she would consent to someone mass harvesting all of her own personal information from all of the sites she has willingly signed up for over the years, the answer was a resounding no. When asked how she reconciled the difference between her view that users of platforms willingly relinquish their right to privacy, while her own data should be strictly protected, she was unable to articulate a reason other than that those who create and study the platforms are members of the “societal elite” who must be granted an absolute right to privacy, while “ordinary” people can be mined and manipulated at will. Such an empathy gap is common in the technical world, in which people’s lives are dehumanized into spreadsheets of numbers that remove any trace of connection or empathy.

The info is here.

Saturday, May 5, 2018

Deep learning: Why it’s time for AI to get philosophical

Catherine Stinson
The Globe and Mail
Originally published March 23, 2018

Here is an excerpt:

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.

The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.

The information is here.

Tuesday, April 3, 2018

Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu
Practical Ethics
Originally posted March 31, 2018

Here is an excerpt:

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. The Cambridge researcher and creator of the third-party app in question, Aleksandr Kogan, violated the agreement only when the data was passed on to Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; it profits from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users about what precisely it is they are consenting to.

The blog post is here.

Tuesday, November 28, 2017

Trusting big health data

Angela Villanueva
Baylor College of Medicine Blogs
Originally posted November 10, 2017

Here is an excerpt:

Potentially exacerbating this mistrust is a sense of loss of privacy and absence of control over information describing us and our habits. Given the extent of current “everyday” data collection and sharing for marketing and other purposes, this lack of trust is not unreasonable.

Health information sharing makes many people uneasy, particularly because of the potential harms such as insurance discrimination or stigmatization. Data breaches like the recent Equifax hack may add to these concerns and affect people’s willingness to share their health data.

But it is critical to encourage members of all groups to participate in big data initiatives focused on health in order for all to benefit from the resulting discoveries. My colleagues and I recently published an article detailing eight guiding principles for successful data sharing; building trust is one of them.

Here is the article.

Monday, November 6, 2017

Is It Too Late For Big Data Ethics?

Kalev Leetaru
Forbes.com
Originally published October 16, 2017

Here is an excerpt:

AI researchers are rushing to create the first glimmers of general AI and hoping for the key breakthroughs that take us towards a world in which machines gain consciousness. The structure of academic IRBs means that little of this work is subject to ethical review of any kind and its highly technical nature means the general public is little aware of the rapid pace of progress until it comes into direct life-or-death contact with consumers such as driverless cars.

Could industry-backed initiatives like one announced by Bloomberg last month in partnership with BrightHive and Data for Democracy be the answer? It all depends on whether companies and organizations actively infuse these values into the work they perform and sponsor or whether these are merely public relations campaigns for them. As I wrote last month, when I asked the organizers of a recent data mining workshop as to why they did not require ethical review or replication datasets for their submissions, one of the organizers, a Bloomberg data scientist, responded only that the majority of other ACM computer science conferences don’t either. When asked why she and her co-organizers didn’t take a stand with their own workshop to require IRB review and replication datasets even if those other conferences did not, in an attempt to start a trend in the field, she would only repeat that such requirements are not common to their field. When asked whether Bloomberg would be requiring its own data scientists to adhere to its new data ethics initiative and/or mandate that they integrate its principles into external academic workshops they help organize, a company spokesperson said they would try to offer comment, but had nothing further to add after nearly a week.

The article is here.

Wednesday, July 26, 2017

Everybody lies: how Google search reveals our darkest secrets

Seth Stephens-Davidowitz
The Guardian
Originally published July 9, 2017

Everybody lies. People lie about how many drinks they had on the way home. They lie about how often they go to the gym, how much those new shoes cost, whether they read that book. They call in sick when they’re not. They say they’ll be in touch when they won’t. They say it’s not about you when it is. They say they love you when they don’t. They say they’re happy while in the dumps. They say they like women when they really like men. People lie to friends. They lie to bosses. They lie to kids. They lie to parents. They lie to doctors. They lie to husbands. They lie to wives. They lie to themselves. And they damn sure lie to surveys. Here’s my brief survey for you:

Have you ever cheated in an exam?

Have you ever fantasised about killing someone?

Were you tempted to lie?

Many people underreport embarrassing behaviours and thoughts on surveys. They want to look good, even though most surveys are anonymous. This is called social desirability bias. An important paper in 1950 provided powerful evidence of how surveys can fall victim to such bias. Researchers collected data, from official sources, on the residents of Denver: what percentage of them voted, gave to charity, and owned a library card. They then surveyed the residents to see if the percentages would match. The results were, at the time, shocking. What the residents reported to the surveys was very different from the data the researchers had gathered. Even though nobody gave their names, people, in large numbers, exaggerated their voter registration status, voting behaviour, and charitable giving.
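
The Denver result is easy to reproduce in spirit: compare what people report with what the records show. The figures below are invented for illustration (not the 1950 data), but the computation is exactly the check the researchers ran:

# Hypothetical rates, for illustration only -- not the original Denver figures.
official_rate = {"registered to vote": 0.62, "voted in last election": 0.55,
                 "gave to charity": 0.30, "owns a library card": 0.25}
survey_rate = {"registered to vote": 0.83, "voted in last election": 0.73,
               "gave to charity": 0.50, "owns a library card": 0.40}

for behaviour, truth in official_rate.items():
    gap = survey_rate[behaviour] - truth
    print(f"{behaviour:>22}: records {truth:.0%}, "
          f"self-report {survey_rate[behaviour]:.0%}, over-report {gap:+.0%}")

As the article notes, anonymity alone does not close that gap; the bias lives in the self-report itself.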

The article is here.

Saturday, March 25, 2017

Will Democracy Survive Big Data and Artificial Intelligence?

Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, and others
Scientific American
Originally posted February 25, 2017

Here is an excerpt:

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.

(cut)

These technologies are also becoming increasingly popular in the world of politics. Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge” – a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered “wise king”, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.

The article is here.

Friday, December 30, 2016

The ethics of algorithms: Mapping the debate

Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi
Big Data and Society
DOI: 10.1177/2053951716679679, Dec 2016

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Tuesday, September 20, 2016

Big data, Google and the end of free will

Yuval Noah Harari
Financial Times
Originally posted August 26, 2016

Here are two excerpts:

This has already happened in the field of medicine. The most important medical decisions in your life are increasingly based not on your feelings of illness or wellness, or even on the informed predictions of your doctor — but on the calculations of computers who know you better than you know yourself. A recent example of this process is the case of the actress Angelina Jolie. In 2013, Jolie took a genetic test that proved she was carrying a dangerous mutation of the BRCA1 gene. According to statistical databases, women carrying this mutation have an 87 per cent probability of developing breast cancer. Although at the time Jolie did not have cancer, she decided to pre-empt the disease and undergo a double mastectomy. She didn’t feel ill but she wisely decided to listen to the computer algorithms. “You may not feel anything is wrong,” said the algorithms, “but there is a time bomb ticking in your DNA. Do something about it — now!”

(cut)

But even if Dataism is wrong about life, it may still conquer the world. Many previous creeds gained enormous popularity and power despite their factual mistakes. If Christianity and communism could do it, why not Dataism? Dataism has especially good prospects, because it is currently spreading across all scientific disciplines. A unified scientific paradigm may easily become an unassailable dogma.

The article is here.

Sunday, June 19, 2016

The Ethics of Large-Scale Genomic Research

Benjamin E. Berkman, Zachary E. Shapiro, Lisa Eckstein, Elizabeth R. Pike
Chapter in Ethical Reasoning in Big Data
Part of the series Computational Social Sciences pp 53-69

Abstract

The potential for big data to advance our understanding of human disease has been particularly heralded in the field of genomics. Recent technological advances have accelerated the massive data generation capabilities of genomic research, which has allowed researchers to undertake larger scale genomic research, with significantly more participants, further spurring the generation of massive amounts of data. The advance of technology has also triggered a significant reduction in cost, allowing large-scale genomic research to be increasingly feasible, even for smaller research sites. The rise of genetic research has triggered the creation of many large-scale genomic repositories (LSGRs) some of which contain the genomic information of millions of research participants. While LSGRs have genuine potential, they also have raised a number of ethical concerns. Most prominently, commentators have raised questions about the privacy implications of LSGRs, given that all genomic data is theoretically re-identifiable. Privacy can be further threatened by the possibility of aggregation of data sets, which can give rise to unexpected, and potentially sensitive, information. Beyond privacy concerns, LSGRs also raise questions about participant autonomy, public trust in research, and justice. In this chapter, we explore these ethical challenges, with the goal of elucidating which ones require closer scrutiny and perhaps policy action. Our analysis suggests that caution is warranted before any major policies are implemented. Much attention has been directed at privacy concerns raised by LSGRs, but perhaps for the wrong reasons, and perhaps at the expense of other relevant concerns. We do not think that there is yet sufficient evidence to motivate enactment of major policy changes in order to safeguard welfare interests, although there might be some stronger reasons to worry about subjects’ non-welfare interests. We also believe that LSGRs raise genuine concerns about autonomy and justice. Big data research, and LSGRs in particular, have the potential to radically advance our understanding of human disease. While these new research resources raise important ethical concerns, any policies implemented concerning LSGRs should be carefully tailored to ensure that research is not unduly burdened.

The abstract to the book chapter is here.

You may want to contact the author for a copy for personal use.

Saturday, June 11, 2016

Scientists Are Just as Confused About the Ethics of Big-Data Research as You

Sarah Zhang
Wired Magazine
Originally published May 20, 2016

Here is an excerpt:

Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage. (The OkCupid research does not seem to have gone through any kind of ethical review process, and a Cornell ethics review board approved the Facebook experiment.) And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

The article is here.

Thursday, March 3, 2016

The Rise of Data-Driven Decision Making Is Real but Uneven

Kristina McElheran and Erik Brynjolfsson
Harvard Business Review
February 3, 2016

Growing opportunities to collect and leverage digital information have led many managers to change how they make decisions – relying less on intuition and more on data. As Jim Barksdale, the former CEO of Netscape, quipped, “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” Following pathbreakers such as Caesar’s CEO Gary Loveman – who attributes his firm’s success to the use of databases and cutting-edge analytical tools – managers at many levels are now consuming data and analytical output in unprecedented ways.

This should come as no surprise. At their most fundamental level, all organizations can be thought of as “information processors” that rely on the technologies of hierarchy, specialization, and human perception to collect, disseminate, and act on insights. Therefore, it’s only natural that technologies delivering faster, cheaper, more accurate information create opportunities to re-invent the managerial machinery.

The article is here.

Monday, December 28, 2015

Computer-based personality judgments are more accurate than those made by humans

By Wu Youyou, Michal Kosinski, and David Stillwell
PNAS January 27, 2015 vol. 112 no. 4 1036-1040

Abstract

Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
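
For readers who want a feel for the machinery behind those correlations, here is a minimal sketch of the kind of model the study describes: a regularised linear regression from a binary user-by-Likes matrix to a self-reported trait score, with accuracy reported as Pearson’s r. The data are synthetic and the ridge model is a stand-in, not the authors’ exact pipeline:

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic user-by-Likes matrix (2,000 users, 500 Likes) and a simulated
# questionnaire trait score -- illustrative only, not the study's data.
rng = np.random.default_rng(1)
likes = rng.integers(0, 2, size=(2000, 500))
true_weights = rng.normal(0, 1, 500)
trait = likes @ true_weights + rng.normal(0, 8, 2000)   # e.g. an openness score

L_train, L_test, t_train, t_test = train_test_split(likes, trait, random_state=1)
model = Ridge(alpha=10.0).fit(L_train, t_train)
r, _ = pearsonr(t_test, model.predict(L_test))
print(f"Judgment accuracy (Pearson r): {r:.2f}")

In the study itself, the same kind of correlation is computed between the computer’s prediction and the participant’s questionnaire score, and then compared with the correlation achieved by friends, family and colleagues.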

The article is here.