Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, March 26, 2024

Why the largest transgender survey ever could be a powerful rebuke to myths, misinformation

Susan Miller
Originally posted 23 Feb 24

Here is an excerpt:

Laura Hoge, a clinical social worker in New Jersey who works with transgender people and their families, said the survey results underscore what she sees in her daily practice: that lives improve when access to something as basic as gender-affirming care is not restricted.

“I see children who come here sometimes not able to go to school or are completely distanced from their friends,” she said. “And when they have access to care, they can go from not going to school to trying out for their school play.”

Every time misinformation about transgender people surfaces, Hoge says she is flooded with phone calls.

The survey now gives real-world data on the lived experiences of transgender people and how their lives are flourishing, she said. “I can tell you that when I talk to families I am able to say to them: This is what other people in your child’s situation or in your situation are saying.”

Gender-affirming care has been a target of state bills

Gender-affirming care, which can involve everything from talk sessions to hormone therapy, in many ways has been ground zero in recent legislative debates over the rights of transgender people.

A poll by the Trevor Project, which provides crisis and suicide prevention services to LGBTQ+ people under 25, found that 85% of trans and nonbinary youths say even the debates about these laws have negatively impacted their mental health.

In January, the Ohio Senate overrode the governor’s veto of legislation that restricted medical care for transgender young people.

The bill prohibits doctors from prescribing hormones, puberty blockers, or gender reassignment surgery before patients turn 18 and requires mental health providers to get parental permission to diagnose and treat gender dysphoria.

Here are my thoughts:

A landmark dataset has arrived: the largest survey of transgender individuals ever conducted in the United States. This comprehensive data collection can serve as a powerful rebuke to the myths and misinformation that surround the transgender community. By providing a clear picture of transgender people’s lived experiences, the survey can challenge misconceptions, inform policy, and ultimately improve lives. A data-driven approach of this kind can foster greater understanding and acceptance, paving the way for a more inclusive society.

Wednesday, January 10, 2024

Indigenous data sovereignty—A new take on an old theme

Tahu Kukutai (2023).
Science, 382.

A new kind of data revolution is unfolding around the world, one that is unlikely to be on the radar of tech giants and the power brokers of Silicon Valley. Indigenous Data Sovereignty (IDSov) is a rallying cry for Indigenous communities seeking to regain control over their information while pushing back against data colonialism and its myriad harms. Led by Indigenous academics, innovators, and knowledge-holders, IDSov networks now exist in the United States, Canada, Aotearoa (New Zealand), Australia, the Pacific, and Scandinavia, along with an international umbrella group, the Global Indigenous Data Alliance (GIDA). Together, these networks advocate for the rights of Indigenous Peoples over data that derive from them and that pertain to Nation membership, knowledge systems, customs, or territories. This lens on data sovereignty not only exceeds narrow notions of sovereignty as data localization and jurisdictional rights but also upends the assumption that the nation state is the legitimate locus of power. IDSov has thus become an important catalyst for broader conversations about what Indigenous sovereignty means in a digital world and how some measure of self-determination can be achieved under the weight of Big Tech dominance.

Indigenous Peoples are, of course, no strangers to struggles for sovereignty. There are an estimated 476 million Indigenous Peoples worldwide; the actual number is unknown because many governments do not separately identify Indigenous Peoples in their national data collections such as the population census. Colonial legacies of racism; land dispossession; and the suppression of Indigenous cultures, languages, and knowledges have had profound impacts. For example, although Indigenous Peoples make up just 6% of the global population, they account for about 20% of the world’s extreme poor. Despite this, Indigenous Peoples continue to assert their sovereignty and to uphold their responsibilities as protectors and stewards of their lands, waters, and knowledges.

The rest of the article is here.

Here is a brief summary:

This article discusses Indigenous data sovereignty: the principle that Indigenous communities should control the data that derive from and pertain to them. Because data can be used to exploit and harm Indigenous communities, data sovereignty offers a form of protection against that harm. A number of principles guide the movement, including collective consent and the upholding of cultural protocols. Indigenous data sovereignty is still in its early stages, but it has the potential to be a powerful tool for Indigenous communities.

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions them for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Organizations must identify who owns IP as AI begins to develop solutions faster and more capably than humans can.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works only in a subset of the population – or one with only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the proposed drug compounds will not be as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
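The deliberate curation Shieh-Newton describes can be approximated with a simple representation audit run before training. The sketch below is illustrative only: the patient records, group labels, reference shares, and the 0.5 tolerance threshold are all invented assumptions, not anything from the interview.

```python
from collections import Counter

def audit_representation(records, field, reference, tolerance=0.5):
    """Flag groups whose share of the dataset falls below
    `tolerance` times their share of the reference population."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, ref_share in reference.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * ref_share:
            flagged[group] = {"observed": observed, "reference": ref_share}
    return flagged

# Hypothetical patient records and census-style reference shares
patients = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
reference = {"A": 0.6, "B": 0.4}
print(audit_representation(patients, "group", reference))
```

In practice the reference shares would come from census or epidemiological data for the target patient population, and the audit would be repeated for each attribute of concern (race, gender, age range).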

The info is here. 

Here is my take:

One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Tuesday, March 14, 2023

What Happens When AI Has Read Everything?

Ross Anderson
The Atlantic
Originally posted 18 JAN 23

Here is an excerpt:

Ten trillion words is enough to encompass all of humanity’s digitized books, all of our digitized scientific papers, and much of the blogosphere. That’s not to say that GPT-4 will have read all of that material, only that doing so is well within its technical reach. You could imagine its AI successors absorbing our entire deep-time textual record across their first few months, and then topping up with a two-hour reading vacation each January, during which they could mainline every book and scientific paper published the previous year.

Just because AIs will soon be able to read all of our books doesn’t mean they can catch up on all of the text we produce. The internet’s storage capacity is of an entirely different order, and it’s a much more democratic cultural-preservation technology than book publishing. Every year, billions of people write sentences that are stockpiled in its databases, many owned by social-media platforms.

Random text scraped from the internet generally doesn’t make for good training data, with Wikipedia articles being a notable exception. But perhaps future algorithms will allow AIs to wring sense from our aggregated tweets, Instagram captions, and Facebook statuses. Even so, these low-quality sources won’t be inexhaustible. According to Villalobos, within a few decades, speed-reading AIs will be powerful enough to ingest hundreds of trillions of words—including all those that human beings have so far stuffed into the web.

And the conclusion:

If, however, our data-gorging AIs do someday surpass human cognition, we will have to console ourselves with the fact that they are made in our image. AIs are not aliens. They are not the exotic other. They are of us, and they are from here. They have gazed upon the Earth’s landscapes. They have seen the sun setting on its oceans billions of times. They know our oldest stories. They use our names for the stars. Among the first words they learn are flow, mother, fire, and ash.

Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private and only select information can be shared. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to adapt systems that are already used with financial data: open-source files and blockchain technology so that we always know where data came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
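The provenance piece of this engineering approach can be sketched as a toy hash chain: each record commits to the one before it, so any tampering becomes detectable. This is only an illustration of the idea, not an actual neurodata standard; the record fields and device names are invented.

```python
import hashlib
import json

GENESIS = "0" * 64

def add_record(chain, payload):
    """Append a record whose hash commits to the previous record."""
    prev = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"payload": payload, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = GENESIS
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"payload": block["payload"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
add_record(chain, {"device": "headset-01", "consent": "medical-use-only"})
add_record(chain, {"device": "headset-01", "consent": "medical-use-only"})
print(verify(chain))  # True
```

Editing any payload after the fact (say, changing a consent field) makes `verify` return False, which is the property the article's "always know where it came from" goal relies on.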

Sunday, August 30, 2020

Prosocial modeling: A meta-analytic review and synthesis

Jung, H., Seo, E., et al. (2020).
Psychological Bulletin, 146(8), 635–663.

Exposure to prosocial models is commonly used to foster prosocial behavior in various domains of society. The aim of the current article is to apply meta-analytic techniques to synthesize several decades of research on prosocial modeling, and to examine the extent to which prosocial modeling elicits helping behavior. We also identify the theoretical and methodological variables that moderate the prosocial modeling effect. Eighty-eight studies with 25,354 participants found a moderate effect (g = 0.45) of prosocial modeling in eliciting subsequent helping behavior. The prosocial modeling effect generalized across different types of helping behaviors and different targets in need of help, and was robust to experimenter bias. Nevertheless, there was cross-societal variation in the magnitude of the modeling effect, and the magnitude of the prosocial modeling effect was larger when participants were presented with an opportunity to help the model (vs. a third-party) after witnessing the model’s generosity. The prosocial modeling effect was also larger for studies with a higher percentage of female participants in the sample, when other people (vs. participants) benefitted from the model’s prosocial behavior, and when the model was rewarded for helping (vs. was not). We discuss the publication bias in the prosocial modeling literature and limitations of our analyses, and identify avenues for future research. We end with a discussion of the theoretical and practical implications of our findings.

Impact Statement

Public Significance Statement: This article synthesizes several decades of research on prosocial modeling and shows that witnessing others’ helpful acts encourages prosocial behavior through prosocial goal contagion. The magnitude of the prosocial modeling effect, however, varies across societies, gender and modeling contexts. The prosocial modeling effect is larger when the model is rewarded for helping. These results have important implications for our understanding of why, how, and when the prosocial modeling effect occurs. 
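For readers less familiar with the metric: the g = 0.45 reported above is Hedges' g, a standardized mean difference with a small-sample bias correction. A sketch of the computation for a single two-group study follows; the means, standard deviations, and sample sizes are invented for illustration and are not from the meta-analysis.

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp                   # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction J
    return d * correction

# Hypothetical helping scores: modeling condition vs. control
print(round(hedges_g(5.2, 4.4, 1.8, 1.7, 50, 50), 2))  # 0.45
```

A meta-analysis then pools many such study-level g values, weighted by their precision, which is how an overall estimate across 88 studies is obtained.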

Saturday, July 11, 2020

Why Do People Avoid Facts That Could Help Them?

Francesca Gino
Scientific American
Originally posted 16 June 20

In our information age, an unprecedented amount of data are right at our fingertips. We run genetic tests on our unborn children to prepare for the worst. We get regular cancer screenings and monitor our health on our wrist and our phone. And we can learn about our ancestral ties and genetic predispositions with a simple swab of saliva.

Yet there’s some information that many of us do not want to know. A study of more than 2,000 people in Germany and Spain by Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin and Rocio Garcia-Retamero of the University of Granada in Spain found that 90 percent of them would not want to find out, if they could, when their partner would die or what the cause would be. And 87 percent also reported not wanting to be aware of the date of their own death. When asked if they’d want to know if, and when, they’d get divorced, more than 86 percent said no.

Related research points to a similar conclusion: We often prefer to avoid learning information that could cause us pain. Investors are less likely to log on to their stock portfolios on days when the market is down. And one laboratory experiment found that subjects who were informed that they were rated less attractive than other participants were willing to pay money not to find out their exact rank.

More consequentially, people avoid learning certain information related to their health even if having such knowledge would allow them to identify therapies to manage their symptoms or treatment. As one study found, only 7 percent of people at high risk for Huntington’s disease elect to find out whether they have the condition, despite the availability of a genetic test that is generally paid for by health insurance plans and the clear usefulness of the information for alleviating the chronic disease’s symptoms. Similarly, participants in a laboratory experiment chose to forgo part of their earnings to avoid learning the outcome of a test for a treatable sexually transmitted disease. Such avoidance was even greater when the disease symptoms were more severe.

The info is here.

Friday, June 5, 2020

These are the Decade’s Biggest Discoveries in Human Evolution

Briana Pobiner and Rick Potts
Originally posted 28 April 20

Here is an excerpt:

We’re older than we thought

Stone tools aren’t the only things that are older than we thought. Humans are too.

Just three years ago, a team of scientists made a discovery that pushed back the origin of our species, Homo sapiens. The team re-excavated a cave in Morocco where a group of miners found skulls in 1961. They collected sediments and more fossils to help them identify and date the remains. Using CT scans, the scientists confirmed that the remains belonged to our species. They also used modern dating techniques on the remains. To their surprise, the remains dated to about 300,000 years ago, which means that our species originated 100,000 years earlier than we thought.

Social Networking Isn’t New

With platforms like Facebook, Twitter and Instagram, it’s hard to imagine social networking being old. But it is. And, now, it’s even older than we thought.

In 2018, scientists discovered that social networks were used to trade obsidian, valuable for its sharp edges, by around 300,000 years ago. After excavating and analyzing stone tools from southern Kenya, the team found that the stones chemically matched obsidian sources up to 55 miles away in multiple directions. The findings show how early humans related to and kept track of a larger social world.

We left Africa earlier than we thought

We’ve long known that early humans migrated from Africa not once but at least twice. But we didn’t know just how early those migrations happened.

We thought Homo erectus spread beyond Africa as far as eastern Asia by about 1.7 million years ago. But, in 2018, scientists dated new stone tools and fossils from China to about 2.1 million years ago, pushing the Homo erectus migration to Asia back by 400,000 years.

The info is here.

Thursday, March 12, 2020

Artificial Intelligence in Health Care

M. Matheny, D. Whicher, & S. Israni
JAMA. 2020;323(6):509-510.

The promise of artificial intelligence (AI) in health care offers substantial opportunities to improve patient and clinical team outcomes, reduce costs, and influence population health. Current data generation greatly exceeds human cognitive capacity to effectively manage information, and AI is likely to have an important and complementary role to human cognition to support delivery of personalized health care.  For example, recent innovations in AI have shown high levels of accuracy in imaging and signal detection tasks and are considered among the most mature tools in this domain.

However, there are challenges in realizing the potential for AI in health care. Disconnects between reality and expectations have led to prior precipitous declines in use of the technology, termed AI winters, and another such event is possible, especially in health care.  Today, AI has outsized market expectations and technology sector investments. Current challenges include using biased data for AI model development, applying AI outside of populations represented in the training and validation data sets, disregarding the effects of possible unintended consequences on care or the patient-clinician relationship, and limited data about actual effects on patient outcomes and cost of care.

AI in Healthcare: The Hope, The Hype, The Promise, The Peril, a publication by the National Academy of Medicine (NAM), synthesizes current knowledge and offers a reference document for the responsible development, implementation, and maintenance of AI in the clinical enterprise.  The publication outlines current and near-term AI solutions; highlights the challenges, limitations, and best practices for AI development, adoption, and maintenance; presents an overview of the legal and regulatory landscape for health care AI; urges the prioritization of equity, inclusion, and a human rights lens for this work; and outlines considerations for moving forward. This Viewpoint shares highlights from the NAM publication.

The info is here.

Tuesday, March 10, 2020

The Perils of “Survivorship Bias”

Katy Milkman
Scientific American
Originally posted 11 Feb 20

Here is an excerpt:

My colleagues and I, we’ve been spending a lot of time looking at medical decision-making. Say you walk into an emergency room, and you might or might not be having a heart attack. If I test you, I learn whether I’m making a good decision or not. But if I say, “It’s unlikely, so I’ll just send her home,” it’s almost the opposite of survivorship bias. I never get to learn if I made a good decision. And this is supercommon, not just in medicine but in every profession.

Similarly, there was work done showing that people who had car accidents were also more likely to have cancer. It was kind of a puzzle until you think, “Wait, who do we measure cancer in?” We don’t measure cancer in everybody. We measure cancer in people who have been tested. And who do we test? We test people who are in hospitals. So someone goes to the hospital for a car accident, and then I do an MRI and find a tumor. And now that leads to car accidents appearing to elevate the level of tumors. So anything that gets you into hospitals raises your “cancer rate,” but that’s not your real cancer rate.

That’s one of my favorite examples, because it really illustrates how even with something like cancer, we’re not actually measuring it without selection bias, because we only measure it in a subset of the population.
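Milkman's hospital example is easy to reproduce in a small simulation: give everyone the same true cancer rate, but only detect cancer in people who are scanned, with accident victims scanned far more often than everyone else. All the rates below are arbitrary illustrative numbers.

```python
import random

random.seed(0)
TRUE_CANCER_RATE = 0.05   # identical for everyone, by construction

population = []
for _ in range(100_000):
    accident = random.random() < 0.02            # lands you in hospital
    cancer = random.random() < TRUE_CANCER_RATE
    # Detection requires a scan: accident victims are always scanned,
    # everyone else only rarely.
    scanned = accident or random.random() < 0.10
    population.append((accident, cancer and scanned))

def detected_rate(group):
    return sum(detected for _, detected in group) / len(group)

accident_group = [p for p in population if p[0]]
everyone_else = [p for p in population if not p[0]]
print(f"detected cancer, accident victims: {detected_rate(accident_group):.3f}")
print(f"detected cancer, everyone else:    {detected_rate(everyone_else):.3f}")
# The first rate comes out roughly 10x the second, even though the
# true cancer rate is identical in both groups.
```

The spurious association is produced entirely by who gets measured, which is the selection bias the interview describes.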

How can people avoid falling prey to these kinds of biases?

Look at your life and where you get feedback and ask, “Is that feedback selected, or am I getting unvarnished feedback?”

Whatever the claim—it could be “I’m good at blank” or “Wow, we have a high hit rate” or any sort of assessment—then you think about where the data comes from. Maybe it’s your past successes. And this is the key: Think about what the process that generated the data is. What are all the other things that could have happened that might have led me to not measure it? In other words, if I say, “I’m great at interviewing,” you say, “Okay. Well, what data are you basing that on?” “Well, my hires are great.” You can counter with, “Have you considered the people who you have not hired?”

The info is here.

Tuesday, February 11, 2020

How to build ethical AI

Carolyn Herzog
Originally posted 18 Jan 20

Here is an excerpt:

Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and control for both intentional and inherent bias.

This leads back to transparency.

A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?

Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?

The info is here.

Friday, December 6, 2019

Ethical research — the long and bumpy road from shirked to shared

Sarah Franklin
Originally posted October 29, 2019

Here is an excerpt:

Beyond bewilderment

Just as the ramifications of the birth of modern biology were hard to delineate in the late nineteenth century, so there is a sense of ethical bewilderment today. The feeling of being overwhelmed is exacerbated by a lack of regulatory infrastructure or adequate policy precedents. Bioethics, once a beacon of principled pathways to policy, is increasingly lost, like Simba, in a sea of thundering wildebeest. Many of the ethical challenges arising from today’s turbocharged research culture involve rapidly evolving fields that are pursued by globally competitive projects and teams, spanning disparate national regulatory systems and cultural norms. The unknown unknowns grow by the day.

The bar for proper scrutiny has not so much been lowered as sawn to pieces: dispersed, area-specific ethical oversight now exists in a range of forms for every acronym from AI (artificial intelligence) to GM organisms. A single, Belmont-style umbrella no longer seems likely, or even feasible. Much basic science is privately funded and therefore secretive. And the mergers between machine learning and biological synthesis raise additional concerns. Instances of enduring and successful international regulation are rare. The stereotype of bureaucratic, box-ticking ethical compliance is no longer fit for purpose in a world of CRISPR twins, synthetic neurons and self-driving cars.

Bioethics evolves, as does any other branch of knowledge. The post-millennial trend has been to become more global, less canonical and more reflexive. The field no longer relies on philosophically derived mandates codified into textbook formulas. Instead, it functions as a dashboard of pragmatic instruments, and is less expert-driven, more interdisciplinary, less multipurpose and more bespoke. In the wake of the ‘turn to dialogue’ in science, bioethics often looks more like public engagement — and vice versa. Policymakers, polling companies and government quangos tasked with organizing ethical consultations on questions such as mitochondrial donation (‘three-parent embryos’, as the media would have it) now perform the evaluations formerly assigned to bioethicists.

The info is here.

Saturday, November 23, 2019

Is this “one of the worst scientific scandals of all time”?

Hans Eysenck
Stephen Fleischfresser
Originally posted 21 October 2019

Here is an excerpt:

Another study on the efficacy of psychotherapy in preventing cancer showed 100% of treated subjects did not die of cancer in the following 13 years, compared to 32% of an untreated control group.

Perhaps most alarming results were connected to Eysenck and Grossath-Maticek’s notion of ‘bibliotherapy’ which consisted of, as Eysenck put it, “a written pamphlet outlining the principles of behaviour therapy as applied to better, more autonomous living, and avoidance of stress.”

This was coupled with five hours of discussion, aimed both at reorienting a patient’s personality away from the cancer-prone and toward a healthier disposition. The results of this study, according to Pelosi, were that “128 of the 600 (21%) controls died of cancer over 13 years compared with 27 of 600 (4.5%) treated subjects.

“Such results are otherwise unheard of in the entire history of medical science.” There were similarly spectacular results concerning various forms of heart disease too.

These decidedly improbable findings led to a blizzard of critical scrutiny through the 90s: Eysenck and Grossath-Maticek’s work was attacked for its methodology, statistical treatment and ethics.

One researcher who attempted a sympathetic review of the work, in cooperation with the pair, found, says Pelosi, “unequivocal evidence of manipulation of data sheets” from the Heidelberg cohort, as well as numerous patient questionnaires with identical responses.

An attempt at replicating some of their results concerning heart disease provided cold comfort, indicating that the personality type association with coronary illness was non-existent for all but one of the types.

A slightly modified replication of Eysenck and Grossath-Maticek’s research on personality and cancer fared no better, with the author, Manfred Amelang, writing “I know of no other area of research in which the change from an interview to a carefully constructed questionnaire measuring the same construct leads to a change from near-perfect prediction to near-zero prediction.”

The info is here.

Thursday, October 24, 2019

The consciousness illusion

Keith Frankish
Originally published September 26, 2019

Here is an excerpt:

The first concerns explanatory simplicity. If we observe something science can’t explain, then the simplest hypothesis is that it’s an illusion, especially if it can be observed only from one particular angle. This is exactly the case with phenomenal consciousness. Phenomenal properties cannot be explained in standard scientific ways and can be observed only from the first-person viewpoint (no one but me can experience my sensations). This does not show that they aren’t real. It could be that we need to radically rethink our science but, as Dennett says, the theory that they are illusory is the obvious default one.

A second argument concerns our awareness of phenomenal properties. We are aware of features of the natural world only if we have a sensory system that can detect them and generate representations of them for use by other mental systems. This applies equally to features of our own minds (which are parts of the natural world), and it would apply to phenomenal properties too, if they were real. We would need an introspective system that could detect them and produce representations of them. Without that, we would have no more awareness of our brains’ phenomenal properties than we do of their magnetic properties. In short, if we were aware of phenomenal properties, it would be by virtue of having mental representations of them. But then it would make no difference whether these representations were accurate. Illusory representations would have the same effects as veridical ones. If introspection misrepresents us as having phenomenal properties then, subjectively, that’s as good as actually having them. Since science indicates that our brains don’t have phenomenal properties, the obvious inference is that our introspective representations of them are illusory.

There is also a specific argument for preferring illusionism to property dualism. In general, if we can explain our beliefs about something without mentioning the thing itself, then we should discount the beliefs.

The info is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

Thursday, September 26, 2019

Patients don't think payers, providers can protect their data, survey finds

Paige Minemyer
Fierce Healthcare
Originally published on August 26, 2019

Patients are skeptical of healthcare industry players’ ability to protect their data—and believe health insurers to be the worst at doing so, a new survey shows.

Harvard T.H. Chan School of Public Health and Politico surveyed 1,009 adults in mid-July and found that just 17% have a “great deal” of faith that their health plan will protect their data.

By contrast, 24% said they had a “great deal” of trust in their hospital to protect their data, and 34% said the same about their physician’s office. In addition, 22% of respondents said they had “not very much” trust in their insurer to protect their data, and 17% said they had no trust at all.

The firms that fared the worst on the survey, however, were online search engines and social media sites. Only 7% said they have a “great deal” of trust in search engines such as Google to protect their data, and only 3% said the same about social media platforms.

The info is here.

Sunday, September 22, 2019

The Ethics Of Hiding Your Data From the Machines

Molly Wood
Originally posted August 22, 2019

Here is an excerpt:

There’s also a real and reasonable fear that companies or individuals will take ethical liberties in the name of pushing hard toward a good solution, like curing a disease or saving lives. This is not an abstract problem: The co-founder of Google’s artificial intelligence lab, DeepMind, was placed on leave earlier this week after some controversial decisions—one of which involved the illegal use of over 1.5 million hospital patient records in 2017.

So sticking with the medical kick I’m on here, I propose that companies work a little harder to imagine the worst-case scenario surrounding the data they’re collecting. Study the side effects like you would a drug for restless leg syndrome or acne or hepatitis, and offer us consumers a nice, long, terrifying list of potential outcomes so we actually know what we’re getting into.

And for we consumers, well, a blanket refusal to offer up our data to the AI gods isn’t necessarily the good choice either. I don’t want to be the person who refuses to contribute my genetic data via 23andMe to a massive research study that could, and I actually believe this is possible, lead to cures and treatments for diseases like Parkinson’s and Alzheimer’s and who knows what else.

I also think I deserve a realistic assessment of the potential for harm to find its way back to me, because I didn’t think through or wasn’t told all the potential implications of that choice—like how, let’s be honest, we all felt a little stung when we realized the 23andMe research would be through a partnership with drugmaker (and reliable drug price-hiker) GlaxoSmithKline. Drug companies, like targeted ads, are easy villains—even though this partnership actually could produce a Parkinson’s drug. But do we know what GSK’s privacy policy looks like? That deal was a level of sharing we didn’t necessarily expect.

The info is here.

Wednesday, September 11, 2019

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.
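The change-tracking idea above can be made concrete with an append-only audit log that records who altered an AI system's goals or data inputs, under which role, and when. This is a hypothetical sketch; the `AuditLog` class, field names, and role labels are illustrative, not drawn from any particular product:

```python
import datetime

class AuditLog:
    """Append-only record of changes to AI goals, priorities, or data inputs."""

    def __init__(self):
        self.entries = []

    def record(self, actor, role, target, old_value, new_value):
        self.entries.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,       # who made the change
            "role": role,         # role-based security context
            "target": target,     # e.g. an objective weight or data source
            "old": old_value,
            "new": new_value,
        })

log = AuditLog()
log.record("jdoe", "ops-admin", "reorder_point_objective", 0.8, 0.6)
print(len(log.entries))
```

The same detective controls that track edits to business processes can then answer, after the fact, why an AI system's priorities shifted and on whose authority.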

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.

The info is here.

Saturday, August 31, 2019

Unraveling the Ethics of New Neurotechnologies

Nicholas Weiler
Originally posted July 30, 2019

Here is an excerpt:

“In unearthing these ethical issues, we try as much as possible to get out of our armchairs and actually observe how people are interacting with these new technologies. We interview everyone from patients and family members to clinicians and researchers,” Chiong said. “We also work with philosophers, lawyers, and others with experience in biomedicine, as well as anthropologists, sociologists and others who can help us understand the clinical challenges people are actually facing as well as their concerns about new technologies.”

Some of the top issues on Chiong’s mind include ensuring patients understand how the data recorded from their brains are being used by researchers; protecting the privacy of this data; and determining what kind of control patients will ultimately have over their brain data.

“As with all technology, ethical questions about neurotechnology are embedded not just in the technology or science itself, but also in the social structure in which the technology is used,” Chiong added. “These questions are not just the domain of scientists, engineers, or even professional ethicists, but are part of a larger societal conversation we’re beginning to have about the appropriate applications of technology and personal data, and when it's important for people to be able to opt out or say no.”

The info is here.

Tuesday, July 30, 2019

Ethics In The Digital Age: Protect Others' Data As You Would Your Own

Jeff Thomson
Originally posted July 1, 2019

Here is an excerpt:

2. Ensure they are using people’s data with their consent. 

In theory, people willingly sign over an increasing number of rights to data use through digital acceptance of privacy policies. But a recent investigation by the European Commission, following up on the impact of GDPR, indicated that corporate privacy policies remain too difficult for consumers to understand or even read. When analyzing the ethics of using data, finance professionals must personally reflect on whether the way information is being used is consistent with how consumers, clients or employees understand and expect it to be used. Furthermore, they should question if data is being used in a way that is necessary for achieving business goals in an ethical manner.

3. Follow the “golden rule” when it comes to data. 

Finally, finance professionals must reflect on whether they would want their own personal information being used to further business goals in the way that they are helping their organization use the data of others. This goes beyond regulations and the fine print of privacy agreements: it is adherence to the ancient, universal standard of refusing to do to other people what you would not want done to yourself. Admittedly, this is subjective and difficult to define. But finance professionals will be confronted with many situations in which there are no clear answers, and they must have the ability to think about the ethical implications of actions that might not necessarily be illegal.

The info is here.