Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Privacy.

Wednesday, November 15, 2023

Private UK health data donated for medical research shared with insurance companies

Shanti Das
The Guardian
Originally posted 12 Nov 23

Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.

An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.

Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.

Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.

Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.


Here is my summary:

Private health data donated by about half a million UK citizens for medical research has been shared with insurance companies, despite a pledge that it would not be used for this purpose. The data, which includes genetic information, medical diagnoses, and lifestyle factors, has been used to develop digital tools that help insurers predict a person's risk of getting a chronic disease. This raises concerns about the privacy and security of sensitive health data, as well as the potential for insurance companies to use the data to discriminate against people with certain health conditions.

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions themselves for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Organizations must identify ownership of IP as AI begins to develop solutions in a faster, smarter way than humans do.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works only in a subset of the population – or that has only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the proposed drug compounds are not as robust as those produced when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
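
A minimal, hypothetical sketch of the kind of curation check described above: before training, tally how each demographic group is represented in the cohort and flag groups that fall below a chosen threshold. The column names, threshold, and toy data are illustrative assumptions, not details from the article.

```python
# Sketch: audit demographic representation in a training cohort (assumed schema).
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, min_share: float = 0.05) -> pd.DataFrame:
    """Return the share of each group in `column`, flagging groups below `min_share`."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares

if __name__ == "__main__":
    # Toy cohort standing in for a real training set of patient samples.
    cohort = pd.DataFrame({
        "sex":  ["F", "M", "M", "M", "M", "M", "F", "M"],
        "race": ["White", "White", "White", "Black", "White", "White", "Asian", "White"],
        "age":  [34, 51, 47, 63, 29, 58, 41, 70],
    })
    for col in ("sex", "race"):
        print(f"--- {col} ---")
        print(representation_report(cohort, col, min_share=0.15))
    # Age coverage: confirm each decade bucket is actually populated.
    print(pd.cut(cohort["age"], bins=range(0, 101, 10)).value_counts().sort_index())
```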

The info is here. 

Here is my take:

 One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Friday, September 22, 2023

Police are Getting DNA Data from People who Think They Opted Out

Jordan Smith
The Intercept
Originally posted 18 Aug 23

Here is an excerpt:

The communications are a disturbing example of how genetic genealogists and their law enforcement partners, in their zeal to close criminal cases, skirt privacy rules put in place by DNA database companies to protect their customers. How common these practices are remains unknown, in part because police and prosecutors have fought to keep details of genetic investigations from being turned over to criminal defendants. As commercial DNA databases grow, and the use of forensic genetic genealogy as a crime-fighting tool expands, experts say the genetic privacy of millions of Americans is in jeopardy.

Moore did not respond to The Intercept’s requests for comment.

To Tiffany Roy, a DNA expert and lawyer, the fact that genetic genealogists have accessed private profiles — while simultaneously preaching about ethics — is troubling. “If we can’t trust these practitioners, we certainly cannot trust law enforcement,” she said. “These investigations have serious consequences; they involve people who have never been suspected of a crime.” At the very least, law enforcement actors should have a warrant to conduct a genetic genealogy search, she said. “Anything less is a serious violation of privacy.”

(cut)

Exploitation of the GEDmatch loophole isn’t the only example of genetic genealogists and their law enforcement partners playing fast and loose with the rules.

Law enforcement officers have used genetic genealogy to solve crimes that aren’t eligible for genetic investigation per company terms of service and Justice Department guidelines, which say the practice should be reserved for violent crimes like rape and murder only when all other “reasonable” avenues of investigation have failed. In May, CNN reported on a U.S. marshal who used genetic genealogy to solve a decades-old prison break in Nebraska. There is no prison break exception to the eligibility rules, Larkin noted in a post on her website. “This case should never have used forensic genetic genealogy in the first place.”

A month later, Larkin wrote about another violation, this time in a California case. The FBI and the Riverside County Regional Cold Case Homicide Team had identified the victim of a 1996 homicide using the MyHeritage database — an explicit violation of the company’s terms of service, which make clear that using the database for law enforcement purposes is “strictly prohibited” absent a court order.

“The case presents an example of ‘noble cause bias,’” Larkin wrote, “in which the investigators seem to feel that their objective is so worthy that they can break the rules in place to protect others.”


My take:

Forensic genetic genealogists have been skirting GEDmatch privacy rules by searching users who explicitly opted out of sharing DNA with law enforcement. This means that police can access the DNA of people who thought they were protecting their privacy by opting out of law enforcement searches.

The practice of forensic genetic genealogy has been used to solve a number of cold cases, but it has also raised concerns about privacy and civil liberties. Some people worry that the police could use DNA data to target innocent people or to build a genetic database of the entire population.

GEDmatch has since changed its privacy policy to make it more difficult for police to access DNA data from users who have opted out. However, the damage may already be done. Police have already used GEDmatch data to solve dozens of cases, and it is unclear how many people have had their DNA data accessed without their knowledge or consent.

Monday, September 11, 2023

Kaiser agrees to $49-million settlement for illegal disposal of hazardous waste, protected patient information

Gabriel San Roman
Los Angeles Times
Originally posted 9 September 23

Here are two excerpts:

“The illegal disposal of hazardous and medical waste puts the environment, workers and the public at risk,” Bonta said. “It also violates numerous federal and state laws. As a healthcare provider, Kaiser should know that it has specific legal obligations to properly dispose of medical waste and safeguard patients’ medical information.”

The state attorney general partnered with six district attorney offices — including Alameda, San Bernardino, San Francisco, San Joaquin, San Mateo and Yolo counties — in the undercover probe of 16 Kaiser facilities statewide that first began in 2015.

Investigators found hundreds of hazardous and medical waste items such as syringes, tubing with body fluid and aerosol cans destined for public landfills. The inspections also uncovered more than 10,000 pages of confidential patient files.

During a news conference on Friday, Bonta said that investigators also found body parts in the public waste stream but did not elaborate.

(cut)

As part of the settlement agreement, the healthcare provider must retain an independent third-party auditor approved by the state and local law enforcement involved in the investigation.

Kaiser faces a $1.75-million penalty if adequate steps are not taken within a five-year period.

“As a major corporation in Alameda County, Kaiser Permanente has a special obligation to treat its communities with the same bedside manner as its patients,” said Alameda County Dist. Atty. Pamela Price. “Dumping medical waste and private information are wrong, which they have acknowledged. This action will hold them accountable in such a way that we hope means it doesn’t happen again.”

Thursday, July 20, 2023

Big tech is bad. Big A.I. will be worse.

Daron Acemoglu and Simon Johnson
The New York Times
Originally posted 15 June 23

Here is an excerpt:

Today, those countervailing forces either don’t exist or are greatly weakened. Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.

At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton). For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization, fear of killing the golden (donor) goose or undermining national security means that most members of Congress would still rather look away.

To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.

Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems. If Big A.I. wants to use our data, we want something in return to address problems that communities define and to raise the true productivity of workers. Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies. It would also require a greater diversity of approaches to new technology, thus making another dent in the monopoly of Big A.I.

We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do — including whether we are in compliance with “acceptable” behavior, as defined by employers and how the police interpret the law, and which can now be assessed in real time by A.I. There is a real danger that A.I. will be used to manipulate our choices and distort lives.

Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms. Such a tax system would put shareholder pressure on tech titans to break themselves up, thus lowering their effective tax rate. More competition would help by creating a diversity of ideas and more opportunities to develop a pro-human direction for digital technologies.


The article argues that big tech companies such as Google, Amazon, and Facebook have already accumulated too much power and control. I concur: if these companies are allowed to continue their unchecked growth, they will eventually become too powerful and oppressive, given the strength of AI compared with the limited thinking and reasoning of human beings.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B. et al. (2023).
Nature Machine Intelligence.

The recent releases of large-scale language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflect this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.

(cut)

Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption has been that creatives and knowledge workers would remain much less impacted by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.

Wednesday, May 24, 2023

Fighting for our cognitive liberty

Liz Mineo
The Harvard Gazette
Originally published 26 April 23

Imagine going to work and having your employer monitor your brainwaves to see whether you’re mentally tired or fully engaged in filling out that spreadsheet on April sales.

Nita Farahany, professor of law and philosophy at Duke Law School and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” says it’s already happening, and we all should be worried about it.

Farahany highlighted the promise and risks of neurotechnology in a conversation with Francis X. Shen, an associate professor in the Harvard Medical School Center for Bioethics and the MGH Department of Psychiatry, and an affiliated professor at Harvard Law School. The Monday webinar was co-sponsored by the Harvard Medical School Center for Bioethics, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the Dana Foundation.

Farahany said the practice of tracking workers’ brains, once exclusively the stuff of science fiction, follows the natural evolution of personal technology, which has normalized the use of wearable devices that chronicle heartbeats, footsteps, and body temperatures. Sensors capable of detecting and decoding brain activity already have been embedded into everyday devices such as earbuds, headphones, watches, and wearable tattoos.

“Commodification of brain data has already begun,” she said. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet. When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place. That’s why I think now is the right moment to do so.”

Safeguards to protect people’s freedom of thought, privacy, and self-determination should be implemented now, said Farahany. Five thousand companies around the world are using SmartCap technologies to track workers’ fatigue levels, and many other companies are using other technologies to track focus, engagement and boredom in the workplace.

If protections are put in place, said Farahany, the story with neurotechnology could be different than the one Shoshana Zuboff warns of in her 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” In it Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, examines the threat of the widescale corporate commodification of personal data in which predictions of our consumer activities are bought, sold, and used to modify behavior.

Friday, May 12, 2023

‘Mind-reading’ AI: Japan study sparks ethical debate

David McElhinney
Aljazeera.com
Originally posted 7 APR 2023

Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.

“We really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”


Note: If AI systems can decode human thoughts, it could infringe upon people's privacy and autonomy. There are concerns that this technology could be used for invasive surveillance or to manipulate people's thoughts and behavior. Additionally, there are concerns about how this technology could be used in legal proceedings and whether it violates human rights.

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 MAR 23

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon FitBits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT have brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before they fully understand the risks of doing so.


In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Thursday, December 15, 2022

Dozens of telehealth startups sent sensitive health information to big tech companies

Katie Palmer, with Todd Feathers & Simon Fondrie-Teitler
STAT NEWS
Originally posted 13 DEC 22

Here is an excerpt:

Health privacy experts and former regulators said sharing such sensitive medical information with the world’s largest advertising platforms threatens patient privacy and trust and could run afoul of unfair business practices laws. They also emphasized that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) were not built for telehealth. That leaves “ethical and moral gray areas” that allow for the legal sharing of health-related data, said Andrew Mahler, a former investigator at the U.S. Department of Health and Human Services’ Office for Civil Rights.

“I thought I was at this point hard to shock,” said Ari Friedman, an emergency medicine physician at the University of Pennsylvania who researches digital health privacy. “And I find this particularly shocking.”

In October and November, STAT and The Markup signed up for accounts and completed onboarding forms on 50 telehealth sites using a fictional identity with dummy email and social media accounts. To determine what data was being shared by the telehealth sites as users completed their forms, reporters examined the network traffic between trackers using Chrome DevTools, a tool built into Google’s Chrome browser.
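
For readers curious about the general technique described above, here is a minimal, hypothetical sketch: save a HAR capture from Chrome DevTools (Network tab) while completing a form, then flag any requests sent to common advertising or analytics domains. The domain list, file name, and fields checked are illustrative assumptions, not the reporters' actual toolchain or methodology.

```python
# Sketch: scan a Chrome DevTools HAR export for requests to known tracker domains.
import json
from urllib.parse import urlparse

TRACKER_DOMAINS = {
    "facebook.com", "facebook.net", "doubleclick.net",
    "google-analytics.com", "googletagmanager.com", "tiktok.com",
}

def flag_tracker_requests(har_path: str) -> list[dict]:
    with open(har_path, "r", encoding="utf-8") as f:
        har = json.load(f)
    hits = []
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            hits.append({
                "host": host,
                "url": entry["request"]["url"][:120],
                # POST bodies and query strings are where form answers tend to leak.
                "has_post_data": "postData" in entry["request"],
            })
    return hits

if __name__ == "__main__":
    for hit in flag_tracker_requests("telehealth_intake.har"):  # assumed file name
        print(hit)
```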

On Workit’s site, for example, STAT and The Markup found that a piece of code Meta calls a pixel sent responses about self-harm, drug and alcohol use, and personal information — including first name, email address, and phone number — to Facebook.

The investigation found trackers collecting information on websites that sell everything from addiction treatments and antidepressants to pills for weight loss and migraines. Despite efforts to trace the data using the tech companies’ own transparency tools, STAT and The Markup couldn’t independently confirm how or whether Meta and the other tech companies used the data they collected.

After STAT and The Markup shared detailed findings with all 50 companies, Workit said it had changed its use of trackers. When reporters tested the website again on Dec. 7, they found no evidence of tech platform trackers during the company’s intake or checkout process.

“Workit Health takes the privacy of our members seriously,” Kali Lux, a spokesperson for the company, wrote in an email. “Out of an abundance of caution, we elected to adjust the usage of a number of pixels for now as we continue to evaluate the issue.”

Friday, April 29, 2022

Navy Deputizes Psychologists to Enforce Drug Rules Even for Those Seeking Mental Health Help

Konstantin Toropin
Military.com
Originally posted 18 APR 22

In the wake of reports that a Navy psychologist played an active role in convicting a sailor of drug use after the sailor reached out for mental health assistance, the service is standing by its policy, which does not provide patients with confidentiality and could mean that seeking help has consequences for service members.

The case highlights a set of military regulations that, in vaguely defined circumstances, requires doctors to inform commanding officers of certain medical details, including drug tests, even if those tests are conducted for legitimate medical reasons necessary for adequate care. Allowing punishment when service members are looking for help could act as a deterrent in a community where mental health is still a taboo topic among many, despite recent leadership attempts to more openly discuss getting assistance.

On April 11, Military.com reported the story of a sailor and his wife who alleged that the sailor's command, the destroyer USS Farragut, was retaliating against him for seeking mental health help.

Jatzael Alvarado Perez went to a military hospital to get help for his mental health struggles. As part of his treatment, he was given a drug test that came back positive for cannabinoids -- the family of drugs associated with marijuana. Perez denies having used any substances, but the test resulted in a referral to the ship's chief corpsman.

Perez's wife, Carli Alvarado, shared documents with Military.com that were evidence in the sailor's subsequent nonjudicial punishment, showing that the Farragut found out about the results because the psychologist emailed the ship's medical staff directly, according to a copy of the email.

"I'm not sure if you've been tracking, but OS2 Alvarado Perez popped positive for cannabis while inpatient," read the email, written to the ship's medical chief. Navy policy prohibits punishment for a positive drug test when administered as part of regular medical care.

The email goes on to describe efforts by the psychologist to assist in obtaining a second test -- one that could be used to punish Perez.

"We are working to get him a command directed urinalysis through [our command] today," it added.

Tuesday, March 8, 2022

"Without Her Consent" Harvard Allegedly Obtained Title IX Complainant’s Outside Psychotherapy Records, Absent Her Permission

Colleen Flaherty
Inside Higher Ed
Originally published 10 FEB 22

Here are two excerpts:

Harvard provided background information about how its dispute resolution office works, saying that it doesn’t contact a party’s medical care provider except when a party has indicated that the provider has relevant information that the party wants the office to consider. In that case, the office receives information from the care provider only with the party’s consent.

Multiple legal experts said Wednesday that this is the established protocol across higher education.

Asked for more details about what happened, Kilburn’s lawyer, Carolin Guentert, said that Kilburn’s therapist is a private provider unaffiliated with Harvard, and “we understand that ODR contacted Ms. Kilburn’s therapist and obtained the psychotherapy notes from her sessions with Ms. Kilburn, without first seeking Ms. Kilburn’s written consent as required under HIPAA,” the Health Insurance Portability and Accountability Act of 1996, which governs patient privacy.

Asked if Kilburn ever signed a privacy waiver with her therapist that would have granted the university access to her records, Guentert said Kilburn “has no recollection of signing such a waiver, nor has Harvard provided one to us.”

(cut)

Even more seriously, these experts said that Harvard would have had no right to obtain Kilburn’s mental health records from a third-party provider without her consent.

Andra J. Hutchins, a Massachusetts-based attorney who specializes in education law, said that therapy records are protected by psychotherapist-patient privilege (something akin to attorney-client privilege).

“Unless the school has an agreement with and a release from the student to provide access to those records or speak to the student’s therapist—which can be the case if a student is placed on involuntary leave due to a mental health issue—there should be no reason that a school would be able to obtain a student’s psychotherapy records,” she said.

As far as investigations under Title IX (the federal law against gender-based discrimination in education) go, questions from the investigator seeking information about the student’s psychological records aren’t permitted unless the student has given written consent, Hutchins added. “Schools have to follow state and federal health-care privacy laws throughout the Title IX process. I can’t speculate as to how or why these records were released.”

Daniel Carter, president of Safety Advisors for Educational Campuses, said that “it is absolutely illegal and improper for an institution of higher education to obtain one of their students’ private therapy records from a third party. There’s no circumstance under which that is permissible without their consent.”

Wednesday, March 2, 2022

Despite Decades of Hacking Attacks, Companies Leave Vast Amounts of Sensitive Data Unprotected

Cezary Podkul
ProPublica
Originally published 25 JAN 22

Here is an excerpt:

Americans rarely get a glimpse of hackers, much less what their work entails. They might be surprised to learn how little experience is needed. People often think hackers are highly sophisticated, Troy Hunt, creator of data breach tracking website Have I Been Pwned, told ProPublica. But in reality, there’s so much unsecured data online that most of the 11.7 billion email addresses and usernames in Hunt’s collection come from young adults who watch a few instructional videos and figure out how to grab them for malicious purposes. “It’s coming from kids with internet access and the ability to run a Google search and watch YouTube videos,” Hunt said in a 2019 talk about how hackers gain access to data.

Hiếu was once one of those teenagers. He grew up in a Vietnamese fishing town where his parents ran an electronics store. His dad got him a computer at age 12 and, like many adolescents, Hiếu was hooked.

His online pursuits quickly took a wrong turn. First, he started stealing dial-up account logins so he could surf the web for free. Then he learned how to deface websites and abscond with data left exposed on them. In high school, he joined forces with a friend who helped him pilfer credit card data from online stores and make up to $500 a day reselling it.

Eventually fellow hackers told him the real money was in aggregating and reselling Americans’ identities. Unlike credit cards, which banks can cancel instantly, stolen identities can be reused for various fraudulent purposes.

Beginning around 2010, Hiếu went looking for ways to get detailed profiles of Americans. It didn’t take long to find a source: MicroBilt, a Georgia-based consumer credit reporting firm, had a vulnerability on its website that allowed Hiếu to identify and take over user accounts. Hiếu said he used the credentials to start querying MicroBilt’s database. He sold access to the search results on his online data store, called Superget.info.

MicroBilt spotted the vulnerability and kicked Hiếu out, setting off a monthslong standoff during which, Hiếu said, he exploited several vulnerabilities in the company’s systems to keep his store going. MicroBilt did not respond to requests seeking comment.

Tired of the back and forth, Hiếu went looking for another source. He found his way into a company called Court Ventures, which resold aggregated personally identifiable information on Americans. Hiếu used forged documents to pretend he was a private investigator from Singapore with a legitimate use for the data. He called himself Jason Low and provided a fake Yahoo email address. Soon, he was in.

Saturday, February 12, 2022

Privacy and digital ethics after the pandemic

Carissa Véliz
Nature Electronics
Vol. 4, January 2022, pp. 10–11.

The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet, and Facebook has shadow profiles on netizens even when they have never had an account on the platform.

(cut)

Reasons for optimism

Despite the concerning trends regarding privacy and digital ethics during the pandemic, there are reasons to be cautiously optimistic about the future.  First, citizens around the world are increasingly suspicious of tech companies, and are gradually demanding more from them. Second, there is a growing awareness that the lack of privacy ingrained in current apps entails a national security risk, which can motivate governments into action. Third, US President Joe Biden seems eager to collaborate with the international community, in contrast to his predecessor. Fourth, regulators in the US are seriously investigating how to curtail tech’s power, as evidenced by the Department of Justice’s antitrust lawsuit against Google and the Federal Trade Commission’s (FTC) antitrust lawsuit against Facebook.  Amazon and YouTube have also been targeted by the FTC for a privacy investigation. With discussions of a federal privacy law becoming more common in the US, it would not be surprising to see such a development in the next few years. Tech regulation in the US could have significant ripple effects elsewhere.

Tuesday, November 2, 2021

Our evolved intuitions about privacy aren’t made for this era

Joe Green & Azim Shariff
psyche.co
Originally published September 16, 2021

Here is an excerpt:

Our concern for privacy has its evolutionary roots in the need to maintain boundaries between the self and others, for safety and security. The motivation for personal space and territoriality is a common phenomenon within the animal kingdom. Among humans, this concern about regulating physical access is complemented by one about regulating informational access. The language abilities, complex social lives and long memories of human beings made protecting our social reputations almost as important as protecting our physical bodies. Norms about sexual privacy, for instance, are common across cultures and time periods. Establishing basic seclusion for secret trysts would have allowed for all the carnal benefits without the unwelcome reputational scrutiny.

Since protection and seclusion must be balanced with interaction, our privacy concern is tuned to flexibly respond to cues in our environment, helping to determine when and what and with whom we share our physical space and personal information. We reflexively lower our voices when strange or hostile interlopers come within earshot. We experience an uneasy creepiness when someone peers over our shoulder. We viscerally feel the presence of a crowd and the public scrutiny that comes with it.

However, just as the turtles’ light-orienting reflex was confounded by the glow of urban settlements, so too have our privacy reactions been confounded by technology. Cameras and microphones – with their superhuman sensory abilities – were challenging enough. But the migration of so much of our lives online is arguably the largest environmental shift in our species’ history with regard to privacy. And our evolved privacy psychology has not caught up. Consider how most people respond to the presence of others when they are in a crowd. Humans use a host of social cues to regulate how much distance they keep between themselves and others. These include facial expression, gaze, vocal quality, posture and hand gestures. In a crowd, such cues can produce an anxiety-inducing cacophony. Moreover, our hair-trigger reputation-management system – critical to keeping us in good moral standing within our group – can drive us into a delirium of self-consciousness.

However, there is some wisdom in this anxiety. Looking into the whites of another’s eyes anchors us within the social milieu, along with all of its attendant norms and expectations. As a result, we tread carefully. Our private thoughts generally remain just that – private, conveyed only to small, trusted groups or confined to our own minds. But as ‘social networks’ suddenly switched from being small, familiar, in-person groupings to online social media platforms connecting millions of users, things changed. Untethered from recognisable social cues such as crowding and proximity, thoughts better left for a select few found their way in front of a much wider array of people, many of whom do not have our best interests at heart. Online we can feel alone and untouchable when we are neither.

Consider, too, our intuitions about what belongs to whom. Ownership can be complicated from a legal perspective but, psychologically, it is readily inferred from an early age (as anyone with young children will have realised). This is achieved through a set of heuristics that provide an intuitive ‘folk psychology’ of ownership. First possession (who first possessed an object), labour investment (who made or modified an object), and object history (information about past transfer of ownership) are all cues that people reflexively use in attributing the ownership of physical things – and consequently, the right to open, inspect or enter them.

Friday, October 29, 2021

Harms of AI

Daron Acemoglu
NBER Working Paper No. 29247
September 2021

Abstract

This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy's most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI's promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment - to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient.

Conclusion

In this essay, I explored several potential economic, political and social costs of the current path of AI technologies. I suggested that if AI continues to be deployed along its current trajectory and remains unregulated, then it can harm competition, consumer privacy and consumer choice, it may excessively automate work, fuel inequality, inefficiently push down wages, and fail to improve productivity. It may also make political discourse increasingly distorted, cutting one of the lifelines of democracy. I also mentioned several other potential social costs from the current path of AI research.

I should emphasize again that all of these potential harms are theoretical. Although there is much evidence indicating that not all is well with the deployment of AI technologies and the problems of increasing market power, disappearance of work, inequality, low wages, and meaningful challenges to democratic discourse and practice are all real, we do not have sufficient evidence to be sure that AI has been a serious contributor to these troubling trends. Nevertheless, precisely because AI is a promising technological platform, aiming to transform every sector of the economy and every aspect of our social lives, it is imperative for us to study what its downsides are, especially on its current trajectory. It is in this spirit that I discussed the potential costs of AI in this paper.

Wednesday, October 20, 2021

The Fight to Define When AI Is ‘High Risk’

Khari Johnson
wired.com
Originally posted 1 Sept 21

Here is an excerpt:

At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU's General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Tuesday, October 5, 2021

Social Networking and Ethics

Vallor, Shannon
The Stanford Encyclopedia of Philosophy 
(Fall 2021 Edition), Edward N. Zalta (ed.)

Here is an excerpt:

Contemporary Ethical Concerns about Social Networking Services

While early SNS scholarship in the social and natural sciences tended to focus on SNS impact on users’ psychosocial markers of happiness, well-being, psychosocial adjustment, social capital, or feelings of life satisfaction, philosophical concerns about social networking and ethics have generally centered on topics less amenable to empirical measurement (e.g., privacy, identity, friendship, the good life and democratic freedom). More so than ‘social capital’ or feelings of ‘life satisfaction,’ these topics are closely tied to traditional concerns of ethical theory (e.g., virtues, rights, duties, motivations and consequences). These topics are also tightly linked to the novel features and distinctive functionalities of SNS, more so than some other issues of interest in computer and information ethics that relate to more general Internet functionalities (for example, issues of copyright and intellectual property).

Despite the methodological challenges of applying philosophical theory to rapidly shifting empirical patterns of SNS influence, philosophical explorations of the ethics of SNS have continued in recent years to move away from Borgmann and Dreyfus’ transcendental-existential concerns about the Internet, to the empirically-driven space of applied technology ethics. Research in this space explores three interlinked and loosely overlapping kinds of ethical phenomena:
  • direct ethical impacts of social networking activity itself (just or unjust, harmful or beneficial) on participants as well as third parties and institutions;
  • indirect ethical impacts on society of social networking activity, caused by the aggregate behavior of users, platform providers and/or their agents in complex interactions between these and other social actors and forces;
  • structural impacts of SNS on the ethical shape of society, especially those driven by the dominant surveillant and extractivist value orientations that sustain social networking platforms and culture.
Most research in the field, however, remains topic- and domain-driven—exploring a given potential harm or domain-specific ethical dilemma that arises from direct, indirect, or structural effects of SNS, or more often, in combination. Sections 3.1–3.5 outline the most widely discussed of contemporary SNS’ ethical challenges.

Monday, August 16, 2021

Therapist Targeted Googling: Characteristics and Consequences for the Therapeutic Relationship

Cox, K. E., Simonds, L. M., & Moulton-Perkins, A. (2021). Professional Psychology: Research and Practice. Advance online publication.

Abstract

Therapist-targeted googling (TTG) refers to a patient searching online to find information about their therapist. The present study investigated TTG prevalence and characteristics in a sample of adult psychotherapy clients. Participants (n = 266) who had attended at least one session with a therapist completed an anonymous online survey about TTG prevalence, motivations, and perceived impact on the therapeutic relationship. Two-thirds of the sample had conducted TTG. Those participants who were having therapy privately, had worked with more than one therapist, or were having sessions more often than weekly were significantly more likely to conduct TTG; this profile was particularly common among patients who were having psychodynamic psychotherapy. Motivations included wanting to see if the therapist is qualified, curiosity, missing the therapist, and wanting to know them better. Nearly a quarter who undertook TTG thought the findings impacted the therapeutic relationship, but only one in five had disclosed TTG to the therapist. TTG beyond commonsense consumerism can be conceptualized as a patient’s attempt to attain closeness to the therapist but may result in impacts on trust and ability to be open. Disclosures of TTG may constitute important therapeutic material.

Impact Statement

This study suggests that there are multiple motivations for clients searching online for information about their therapist. It highlights the need for practitioners to carefully consider the information available about them online and the importance of client searching to the therapeutic relationship.

Here is the conclusion:

In this study, most participants searched for information about their therapist. Curiosity and commonsense consumerism might explain much of this activity. We argue that there is evidence that some of this might be motivated by moments of vulnerability between sessions to regain a connection with the therapist. We also suggest that the discovery of challenging information during vulnerability might represent difficulties for the patient that are not disclosed to the therapist due to feelings of guilt and shame. Further work is needed to understand TTG, the implications on the therapeutic relationship, and how therapists work with disclosures of TTG in a way that does not provoke more shame in the patient, but which also allows therapists to effectively manage therapeutic closeness and their own vulnerability.