Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, September 22, 2019

The Ethics Of Hiding Your Data From the Machines

Molly Wood
wired.com
Originally posted August 22, 2019

Here is an excerpt:

There’s also a real and reasonable fear that companies or individuals will take ethical liberties in the name of pushing hard toward a good solution, like curing a disease or saving lives. This is not an abstract problem: The co-founder of Google’s artificial intelligence lab, DeepMind, was placed on leave earlier this week after some controversial decisions—one of which involved the illegal use of over 1.5 million hospital patient records in 2017.

So sticking with the medical kick I’m on here, I propose that companies work a little harder to imagine the worst-case scenario surrounding the data they’re collecting. Study the side effects like you would a drug for restless leg syndrome or acne or hepatitis, and offer us consumers a nice, long, terrifying list of potential outcomes so we actually know what we’re getting into.

And for us consumers, well, a blanket refusal to offer up our data to the AI gods isn’t necessarily the good choice either. I don’t want to be the person who refuses to contribute my genetic data via 23andMe to a massive research study that could, and I actually believe this is possible, lead to cures and treatments for diseases like Parkinson’s and Alzheimer’s and who knows what else.

I also think I deserve a realistic assessment of the potential for harm to find its way back to me, because I didn’t think through or wasn’t told all the potential implications of that choice—like how, let’s be honest, we all felt a little stung when we realized the 23andMe research would be through a partnership with drugmaker (and reliable drug price-hiker) GlaxoSmithKline. Drug companies, like targeted ads, are easy villains—even though this partnership actually could produce a Parkinson’s drug. But do we know what GSK’s privacy policy looks like? That deal was a level of sharing we didn’t necessarily expect.

The info is here.

Thursday, August 1, 2019

Google Contractors Listen to Recordings of People Using Virtual Assistant

Sarah E. Needleman and Parmy Olson
The Wall Street Journal
Originally posted July 11, 2019

Here are two excerpts:

In a blog post Thursday, Google confirmed it employs people world-wide to listen to a small sample of recordings.

The public broadcaster’s report said the recordings potentially expose sensitive information about users such as names and addresses.

It also said Google, in some cases, is recording voices of customers even when they aren’t using Google Assistant [emphasis added].

In its blog post, Google said language experts listen to 0.2% of “audio snippets” taken from the Google Assistant to better understand different languages, accents and dialects.

(cut)

It is common practice for makers of virtual assistants to record and listen to some of what their users say so they can improve on the technology, said Bret Kinsella, chief executive of Voicebot.ai, a research firm focused on voice technology and artificial intelligence.

“Anything with speech recognition, you generally have humans at one point listening and annotating to sort out what types of errors are occurring,” he said.

In May, however, a coalition of privacy and child-advocacy groups filed a complaint with federal regulators about Amazon potentially preserving conversations of young users through its Echo Dot Kids devices.

The info is here.

Tuesday, July 30, 2019

Ethics In The Digital Age: Protect Others' Data As You Would Your Own

Jeff Thomson
Forbes.com
Originally posted July 1, 2019

Here is an excerpt:

2. Ensure they are using people’s data with their consent. 

In theory, an increasing number of rights to data use are willingly signed over by people through digital acceptance of privacy policies. But a recent investigation by the European Commission, following up on the impact of GDPR, indicated that corporate privacy policies remain too difficult for consumers to understand or even read. When analyzing the ethics of using data, finance professionals must personally reflect on whether the way information is being used is consistent with how consumers, clients or employees understand and expect it to be used. Furthermore, they should question if data is being used in a way that is necessary for achieving business goals in an ethical manner.

3. Follow the “golden rule” when it comes to data. 

Finally, finance professionals must reflect on whether they would want their own personal information being used to further business goals in the way that they are helping their organization use the data of others. This goes beyond regulations and the fine print of privacy agreements: it is adherence to the ancient, universal standard of refusing to do to other people what you would not want done to yourself. Admittedly, this is subjective and difficult to define. But finance professionals will be confronted with many situations in which there are no clear answers, and they must have the ability to think about the ethical implications of actions that might not necessarily be illegal.

The info is here.

Sunday, July 14, 2019

The Voluntariness of Voluntary Consent: Consent Searches and the Psychology of Compliance

Sommers, Roseanna and Bohns, Vanessa K.
Yale Law Journal, Vol. 128, No. 7, 2019. 
Available at SSRN: https://ssrn.com/abstract=3369844

Abstract

Consent-based searches are by far the most ubiquitous form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches.

In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request.

Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers — 100 out of 103 people — promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically.

Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant.

These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests. This is problematic because it indicates that a key justification for suspicionless consent searches — that they are voluntary — relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states.

The results also suggest that a popular reform proposal — requiring police to advise citizens of their right to refuse consent — may have little effect. This corroborates previous observational studies, which find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights.

Tuesday, May 28, 2019

Values in the Filter Bubble: Ethics of Personalization Algorithms in Cloud Computing

Engin Bozdag and Job Timmermans
Delft University of Technology
Faculty of Technology, Policy and Management

Abstract

Cloud services such as Facebook and Google search started to use personalization algorithms in order to deal with the growing amount of data online. This is often done in order to reduce “information overload”. The user’s interaction with the system is recorded under a single identity, and information is personalized for the user using this identity. However, as we argue, such filters often ignore the context of information and are never value neutral. These algorithms operate without the control and knowledge of the user, leading to a “filter bubble”. In this paper we use the Value Sensitive Design methodology to identify the values and value assumptions implicated in personalization algorithms. By building on existing philosophical work, we discuss three human values implicated in personalized filtering: autonomy, identity, and transparency.

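To make the filter-bubble mechanism the authors describe more concrete, here is a minimal sketch in Python (not taken from the paper; the topics, items, and relevance threshold are hypothetical). It shows how personalizing a feed against a single recorded identity quietly drops anything the user has not already engaged with, and how even the cutoff is a design choice made outside the user's view.

    from collections import Counter

    def build_profile(click_history):
        # Record the user's interactions under a single identity as topic counts.
        return Counter(topic for _, topic in click_history)

    def personalize(items, profile, threshold=1):
        # Keep only items whose topic the user has already engaged with at least
        # `threshold` times; everything else silently disappears from the feed.
        return [title for title, topic in items if profile[topic] >= threshold]

    # Hypothetical interaction history and candidate items, for illustration only.
    click_history = [("story A", "sports"), ("story B", "sports"), ("story C", "tech")]
    candidates = [
        ("story D", "sports"),
        ("story E", "politics"),  # never surfaced: no prior clicks on politics
        ("story F", "tech"),
    ]

    print(personalize(candidates, build_profile(click_history)))  # ['story D', 'story F']

In a real service the profile would feed a ranking model rather than a hard filter, but the narrowing dynamic, and the embedded value judgments, are the same.
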
A copy of the paper is here.

Friday, May 10, 2019

Privacy, data science and personalised medicine. Time for a balanced discussion

Claudia Pagliari
LinkedIn.com Post
Originally posted March 26, 2019

There are several fundamental truths that those of us working at the intersection of data science, ethics and medical research have recognised for some time. Firstly that 'anonymised’ and ‘pseudonymised' data can potentially be re-identified through the convergence of related variables, coupled with clever inference methods (although this is by no means easy). Secondly that genetic data is not just about individuals but also about families and generations, past and future. Thirdly, as we enter an increasingly digitized society where transactional, personal and behavioural data from public bodies, businesses, social media, mobile devices and IoT are potentially linkable, the capacity of data to tell meaningful stories about us is becoming constrained only by the questions we ask and the tools we are able to deploy to get the answers. Some would say that privacy is an outdated concept, and control and transparency are the new by-words. Others either disagree or are increasingly confused and disenfranchised.

Some of the quotes from the top brass of Iceland’s DeCODE Genetics, appearing in today’s BBC News, neatly illustrate why we need to remain vigilant to the ethical dilemmas presented by the use of data sciences for personalised medicine. For those of you who are not aware, this company has been at the centre of innovation in population genomics since its inception in the 1990s and overcame a state outcry over privacy and consent, which led to its temporary bankruptcy, before rising phoenix-like from the ashes. The fact that its work has been able to continue in an era of increasing privacy legislation and regulation shows just how far the promise of personalized medicine has skewed the policy narrative and the business agenda in recent years. What is great about Iceland, in terms of medical research, is that it is a relatively small country that has been subjected to historically low levels of immigration and has a unique family naming system and good national record keeping, which means that the pedigree of most of its citizens is easy to trace. This makes it an ideal Petri dish for genetic researchers. And here’s where the rub is. In short, by fully genotyping only 10,000 people from this small country, with its relatively stable gene pool, and integrating this with data on their family trees - and doubtless a whole heap of questionnaires and medical records - the company has, with the consent of a few, effectively seized the data of the "entire population".

The info is here.


Sunday, April 7, 2019

In Spain, prisoners’ brains are being electrically stimulated in the name of science

Sigal Samuel
vox.com
Originally posted March 9, 2019

A team of scientists in Spain is getting ready to experiment on prisoners. If the scientists get the necessary approvals, they plan to start a study this month that involves placing electrodes on inmates’ foreheads and sending a current into their brains. The electricity will target the prefrontal cortex, a brain region that plays a role in decision-making and social behavior. The idea is that stimulating more activity in that region may make the prisoners less aggressive.

This technique — transcranial direct current stimulation, or tDCS — is a form of neurointervention, meaning it acts directly on the brain. Using neurointerventions in the criminal justice system is highly controversial. In recent years, scientists and philosophers have been debating under what conditions (if any) it might be ethical.

The Spanish team is the first to use tDCS on prisoners. They’ve already done it in a pilot study, publishing their findings in Neuroscience in January, and they were all set to implement a follow-up study involving at least 12 convicted murderers and other inmates this month. On Wednesday, New Scientist broke news of the upcoming experiment, noting that it had approval from the Spanish government, prison officials, and a university ethics committee. The next day, the Interior Ministry changed course and put the study on hold.

Andrés Molero-Chamizo, a psychologist at the University of Huelva and the lead researcher behind the study, told me he’s trying to find out what led to the government’s unexpected decision. He said it makes sense to run such an experiment on inmates because “prisoners have a high level of aggressiveness.”

The info is here.

Tuesday, March 19, 2019

We're Teaching Consent All Wrong

Sarah Sparks
www.edweek.org
Originally published January 8, 2019

Here is an excerpt:

Instead, researchers and educators offer an alternative: Teach consent as a life skill—not just a sex skill—beginning in early childhood, and begin discussing consent and communication in the context of relationships by 5th or 6th grades, before kids start seriously thinking about sex. (Think that's too young? In yet another study, the CDC found 8 in 10 teenagers didn't get sex education until after they'd already had sex.)

Educators and parents often balk at discussing strategies for and examples of consent because "they incorrectly believe that if you teach consent, students will become more sexually active," said Mike Domitrz, founder of the Date Safe Project, a Milwaukee-based sexual-assault prevention program that focuses on consent education and bystander interventions. "It's a myth. Students of both genders are pretty consistent that a lot of the sexual activity that is going on is occurring under pressure."

Studies suggest young women are more likely to judge consent on verbal communication and young men rely more on nonverbal cues, though both groups said nonverbal signals are often misinterpreted. And teenagers can be particularly bad at making decisions about risky behavior, including sexual situations, while under social pressure. Brain studies have found adolescents are more likely to take risks and less likely to think about negative consequences when they are in emotionally arousing, or "hot," situations, and that bad decision-making tends to get even worse when they feel they are being judged by their friends.

Making understanding and negotiating consent a life skill gives children and adolescents ways to understand and respect both their own desires and those of other people. And it can help educators frame instruction about consent without sinking into the morass of long-running arguments and anxiety over gender roles, cultural values, and teen sexuality.

The info is here.

Wednesday, March 13, 2019

Why Sexual Morality Doesn't Exist

Alan Goldman
iai.tv
Originally posted February 12, 2019

There is no such thing as sexual morality per se. Put less dramatically, there is no morality special to sex: no act is wrong simply because of its sexual nature. Sexual morality consists in moral considerations that are relevant elsewhere as well being applied to sexual activity or relations. This is because the proper concept of sexual activity is morally neutral. Sexual activity is that which fulfills sexual desire.  Sexual desire in its primary sense can be defined as desire for physical contact with another person’s body and for the pleasure that such contact brings. Masturbation or desire to view pornography are sexual activity and desire in a secondary sense, substitutes for normal sexual desire in its primary sense. Sex itself is not a moral category, although it places us in relations in which moral considerations apply. It gives us opportunity to do what is otherwise regarded as wrong: to harm, deceive, or manipulate others against their will.

As other philosophers point out, pleasure is normally a byproduct of successfully doing things not aimed at pleasure directly, but this is not the case with sex. Sexual desire aims directly at the pleasure derived from physical contact. Desire for physical contact in other contexts, for example contact sports, is not sexual because it has other motives (winning, exhibiting dominance, etc.), but sexual desire in itself has no other motive. It is not a desire to reproduce or to express love or other emotions, although sexual activity, like other activities, can express various emotions including love.

The info is here.

Thursday, February 14, 2019

Sex talks

Rebecca Kukla
aeon.co
Originally posted February 4, 2019

Communication is essential to ethical sex. Typically, our public discussions focus on only one narrow kind of communication: requests for sex followed by consent or refusal. But notice that we use language and communication in a wide variety of ways in negotiating sex. We flirt and rebuff, express curiosity and repulsion, and articulate fantasies. Ideally, we talk about what kind of sex we want to have, involving which activities, and what we like and don’t like. We settle whether or not we are going to have sex at all, and when we want to stop. We check in with one another and talk dirty to one another during sex. 

In this essay I explore the language of sexual negotiation. My specific interest is in what philosophers call the ‘pragmatics’ of speech. That is, I am less interested in what words mean than I am in how speaking can be understood as a kind of action that has a pragmatic effect on the world. Philosophers who specialise in what is known as ‘speech act theory’ focus on what an act of speaking accomplishes, as opposed to what its words mean. J L Austin developed this way of thinking about the different things that speech can do in his classic book, How To Do Things With Words (1962), and many philosophers of language have developed the idea since.

The info is here.

Happy Valentine's Day

Thursday, January 24, 2019

Facebook’s Suicide Algorithms are Invasive

Michael Spencer
www.medium.com
Originally published January 6, 2019

Here is an excerpt:

Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk. Sadly, Facebook has a long history of conducting “experiments” on its users. It’s hard to own a stock that itself isn’t trustworthy either for democracy or our personal data.

Facebook acts a bit like a social surveillance program, where it passes the information (suicide score) along to law enforcement for wellness checks. That’s pretty much like state surveillance; what’s the difference?

Privacy experts say Facebook’s failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse. Facebook has a history with sharing our personal data with other technology companies. So we are being profiled in the most intimate ways by third parties we didn’t even know had our data.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence, but what is the real reason they make these constructs? It’s to monetize our data, not to “help humanity” or connect the world.

The info is here.

Tuesday, January 22, 2019

Kaiser settled 2014 patient-dumping class-action suit earlier this year

Michael McGough
The Sacramento Bee
Originally posted December 20, 2018

Kaiser Foundation Health Plan recently settled a 2014 class-action lawsuit stemming from two allegations that it dumped patients with severe mental illness.

Plaintiffs Douglas Kerr and Barbara Knighton alleged that in separate incidents, Kaiser psychiatrists told them their sons needed to be transferred to locked residential facilities called IMDs (institutions for mental disease) for treatment, according to court documents. Knighton and Kerr claimed they were both told they should remove their children from their Kaiser health plans in 2014 to be transferred to these county-run institutions — a change that shifted the costs of treatment from Kaiser to government-funded programs such as Medi-Cal.

Despite the settlement, Kaiser said in a statement it continues to dispute some of the claims included in the lawsuit.

“In certain relatively rare cases, Kaiser Permanente members entered a specialized type of locked mental health facility that often preferred Medi-Cal coverage to private insurance,” Kaiser Vice President of Communications John Nelson said in an emailed statement. “In some of these cases, cancellation of Kaiser Permanente coverage was required to enter the facility. However, this was not Kaiser Permanente’s requirement, and we cover many members’ care at such facilities. Any decision to cancel coverage was made by a court-appointed conservator.”

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson DeMers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.

Monday, August 27, 2018

Unwanted Events and Side Effects in Cognitive Behavior Therapy

Schermuly-Haupt, M.-L., Linden, M., & Rush, A. J.
Cognitive Therapy and Research
June 2018, Volume 42, Issue 3, pp 219–229

Abstract

Side effects (SEs) are negative reactions to an appropriately delivered treatment, which must be discriminated from unwanted events (UEs) or consequences of inadequate treatment. One hundred CBT therapists were interviewed for UEs and SEs in one of their current outpatients. Therapists reported 372 UEs in 98 patients and SEs in 43 patients. Most frequent were "negative wellbeing/distress" (27% of patients), "worsening of symptoms" (9%), "strains in family relations" (6%); 21% of patients suffered from severe or very severe and 5% from persistent SEs. SEs are unavoidable and frequent also in well-delivered CBT. They include both symptoms and the impairment of social life. Knowledge about the side effect profile can improve early recognition of SEs, safeguard patients, and enhance therapy outcome.

The research is here.

Friday, June 15, 2018

Tech giants need to build ethics into AI from the start

James Titcomb
The Telegraph
Originally posted May 13, 2018

Here is an excerpt:

But excitement about the software soon turned to comprehending the ethical minefield it created. Google’s initial demo gave no indication that the person on the other end of the phone would be alerted that they were talking to a robot. The software even had human-like quirks built into it, stopping to say “um” and “mm-hmm”, a quality designed to seem cute but that ended up appearing more deceptive.

Some found the whole idea that a person should have to go through an artificial conversation with a robot somewhat demeaning; insulting even.

After a day of criticism, Google attempted to play down some of the concerns. It said the technology had no fixed release date, would take into account people’s concerns and promised to ensure that the software identified itself as such at the start of every phone call.

But the fact that it did not do this immediately was not a promising sign. The last two years of massive data breaches, evidence of Russian propaganda campaigns on social media and privacy failures have proven what should always have been obvious: that the internet has as much power to do harm as good. Every frontier technology now needs to be built with at least some level of paranoia; some person asking: “How could this be abused?”

The information is here.

Tuesday, June 12, 2018

Did Google Duplex just pass the Turing Test?

Lance Ulanoff
Medium.com
Originally published

Here is an excerpt:

In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider.

Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex?

Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more.

I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test.

It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar.

The information is here.

Monday, June 4, 2018

Human-sounding Google Assistant sparks ethics questions

The Straits Times
Originally published May 9, 2018

Here are some excerpts:

The new Google digital assistant converses so naturally it may seem like a real person.

The unveiling of the natural-sounding robo-assistant by the tech giant this week wowed some observers, but left others fretting over the ethics of how the human-seeming software might be used.

(cut)

The Duplex demonstration was quickly followed by debate over whether people answering phones should be told when they are speaking to human-sounding software and how the technology might be abused in the form of more convincing "robocalls" by marketers or political campaigns.

(cut)

Digital assistants making arrangements for people also raises the question of who is responsible for mistakes, such as a no-show or cancellation fee for an appointment set for the wrong time.

The information is here.

Wednesday, May 30, 2018

Reining It In: Making Ethical Decisions in a Forensic Practice

Donna M. Veraldi and Lorna Veraldi
A Paper Presented to American College of Forensic Psychology
34th Annual Symposium, San Diego, CA

Here is an excerpt:

Ethical dilemmas sometimes require making difficult choices among competing ethical principles and values. This presentation will discuss ethical dilemmas arising from the use of coercion and deception in forensic practice. In a forensic practice, the choice is not as simple as “do no harm” or “tell the truth.” What is and is not acceptable in terms of using various forms of pressure on individuals or of assisting agencies that put pressure on individuals? How much information should forensic psychologists share with individuals about evaluation techniques? What does informed consent mean in the context of a forensic practice where many of the individuals with whom we interact are not there by choice?

The information is here.

Monday, May 28, 2018

The ethics of experimenting with human brain tissue

Nita Farahany and others
Nature
Originally published April 25, 2018

If researchers could create brain tissue in the laboratory that might appear to have conscious experiences or subjective phenomenal states, would that tissue deserve any of the protections routinely given to human or animal research subjects?

This question might seem outlandish. Certainly, today’s experimental models are far from having such capabilities. But various models are now being developed to better understand the human brain, including miniaturized, simplified versions of brain tissue grown in a dish from stem cells — brain organoids. And advances keep being made.

These models could provide a much more accurate representation of normal and abnormal human brain function and development than animal models can (although animal models will remain useful for many goals). In fact, the promise of brain surrogates is such that abandoning them seems itself unethical, given the vast amount of human suffering caused by neurological and psychiatric disorders, and given that most therapies for these diseases developed in animal models fail to work in people. Yet the closer the proxy gets to a functioning human brain, the more ethically problematic it becomes.

The information is here.


Sunday, May 13, 2018

Facebook Uses AI To Predict Your Future Actions for Advertisers

Sam Biddle
The Intercept
Originally posted April 13, 2018

Here is an excerpt:

Asked by Fortune’s Stacey Higginbotham where Facebook hoped its machine learning work would take it in five years, Chief Technology Officer Mike Schroepfer said in 2016 his goal was that AI “makes every moment you spend on the content and the people you want to spend it with.” Using this technology for advertising was left unmentioned. A 2017 TechCrunch article declared, “Machine intelligence is the future of monetization for Facebook,” but quoted Facebook executives in only the mushiest ways: “We want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time, and it’s helpful to know how this ebbs and flows,” said Mark Rabkin, Facebook’s vice president of engineering for ads. The company was also vague about the melding of machine learning to ads in a 2017 Wired article about the company’s AI efforts, which alluded to efforts “to show more relevant ads” using machine learning and anticipate what ads consumers are most likely to click on, a well-established use of artificial intelligence. Most recently, during his congressional testimony, Zuckerberg touted artificial intelligence as a tool for curbing hate speech and terrorism.

The article is here.