Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Data.

Friday, June 21, 2019

Tech, Data And The New Democracy Of Ethics

Neil Lustig
Forbes.com
Originally posted June 10, 2019

As recently as 15 years ago, consumers had no visibility into whether the brands they shopped used overseas slave labor or if multinationals were bribing public officials to give them unfair advantages internationally. Executives could engage in whatever type of misconduct they wanted to behind closed doors, and there was no early warning system for investors, board members and employees, who were directly impacted by the consequences of their behavior.

Now, thanks to globalization, social media, big data, whistleblowers and corporate compliance initiatives, we have more visibility than ever into the organizations and people that affect our lives and our economy.

What we’ve learned from this surge in transparency is that sometimes companies mess up even when they’re not trying to. There’s a distinct difference between companies that deliberately engage in unethical practices and those that get caught up in them due to loose policies, inadequate self-policing or a few bad actors that misrepresent the ethics of the rest of the organization. The primary difference between these two types of companies is how fast they’re able to act -- and if they act at all.

Fortunately, just as technology and data can introduce unprecedented visibility into organizations’ unethical practices, they can also equip organizations with ways of protecting themselves from internal and external risks. As CEO of a compliance management platform, I believe there are three things that must be in place for organizations to stay above board in a rising democracy of ethics.

The info is here.

Tuesday, April 16, 2019

Rise Of The Chief Ethics Officer

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

Robert Foehl is now executive-in-residence for business law and ethics at the Ohio University College of Business. In industry, he’s best known as the man who laid the ethical groundwork for Target as the company’s first director of corporate ethics.

At a company like Target, says Foehl, ethical issues arise every day. “This includes questions about where goods are sourced, how they are manufactured, the environment, justice and equality in the treatment of employees and customers, and obligations to the community,” he says. “In retail, the biggest issues tend to be around globalism and how we market to consumers. Are people being manipulated? Are we being discriminatory?”

For Foehl, all of these issues are just part of various ethical frameworks that he’s built over the years; complex philosophical frameworks that look at measures of happiness and suffering, the potential for individual harm, and even the impact of a decision on the “virtue” of the company. As he sees it, bringing a technology like AI into the mix has very little impact on that.

“The fact that you have an emerging technology doesn’t matter,” he says, “since you have thinking you can apply to any situation.” Whether it’s AI or big data or any other new tech, says Foehl, “we still put it into an ethical framework. Just because it involves a new technology doesn’t mean it’s a new ethical concept.”

The info is here.

Wednesday, February 13, 2019

The Art of Decision-Making

Joshua Rothman
The New Yorker
Originally published January 21, 2019

Here is an excerpt:

For centuries, philosophers have tried to understand how we make decisions and, by extension, what makes any given decision sound or unsound, rational or irrational. “Decision theory,” the destination on which they’ve converged, has tended to hold that sound decisions flow from values. Faced with a choice—should we major in economics or in art history?—we first ask ourselves what we value, then seek to maximize that value.

From this perspective, a decision is essentially a value-maximizing equation. If you’re going out and can’t decide whether to take an umbrella, you could come to a decision by following a formula that assigns weights to the probability of rain, the pleasure you’ll feel in strolling unencumbered, and the displeasure you’ll feel if you get wet. Most decisions are more complex than this, but the promise of decision theory is that there’s a formula for everything, from launching a raid in Abbottabad to digging an oil well in the North Sea. Plug in your values, and the right choice pops out.
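
To see how such a formula works in the simplest case, here is a minimal sketch of the umbrella decision as an expected-utility calculation. The rain probability and the utility numbers are purely hypothetical, chosen only to illustrate how weighted values produce a choice; they are not taken from the article.

```python
# A minimal expected-utility sketch of the umbrella decision described above.
# All numbers are hypothetical illustrations, not values from the article.

p_rain = 0.3  # assumed probability of rain

# Utilities in arbitrary units: the pleasure of strolling unencumbered,
# the displeasure of getting wet, the mild nuisance of carrying an umbrella.
utility = {
    ("take umbrella", "rain"): -1,     # dry, but encumbered
    ("take umbrella", "no rain"): -1,  # carried it for nothing
    ("no umbrella", "rain"): -10,      # soaked
    ("no umbrella", "no rain"): 5,     # unencumbered stroll
}

def expected_utility(choice: str) -> float:
    """Weight each outcome's utility by its probability and sum."""
    return (p_rain * utility[(choice, "rain")]
            + (1 - p_rain) * utility[(choice, "no rain")])

for choice in ("take umbrella", "no umbrella"):
    print(choice, expected_utility(choice))

# With these numbers, "no umbrella" wins (0.5 vs. -1.0); raise p_rain past
# roughly 0.4 and the ranking flips -- the "plug in your values" step is
# where all of the real work, and all of the dispute, lives.
```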

In recent decades, some philosophers have grown dissatisfied with decision theory. They point out that it becomes less useful when we’re unsure what we care about, or when we anticipate that what we care about might shift.

The info is here.

Wednesday, December 19, 2018

Hackers are not the main cause of health data breaches

Lisa Rapaport
Reuters News
Originally posted November 19, 2018

Most health information data breaches in the U.S. in recent years haven’t been the work of hackers but instead have been due to mistakes or security lapses inside healthcare organizations, a new study suggests.

Another 25 percent of cases involved employee errors like mailing or emailing records to the wrong person, sending unencrypted data, taking records home or forwarding data to personal accounts or devices.

“More than half of breaches were triggered by internal negligence and thus are to some extent preventable,” said study coauthor Ge Bai of the Johns Hopkins Carey Business School in Washington, D.C.

The info is here.

Thursday, August 23, 2018

Designing a Roadmap to Ethical AI in Government


Joshua Entsminger, Mark Esposito, Terence Tse and Danny Goh
www.thersa.org
Originally posted July 23, 2018

Here is an excerpt:

When a decision is made using AI, we may not know whether the data was faulty; regardless, there will come a time when someone appeals a decision made by, or influenced by, AI-driven insights. People have the right to be informed that a significant decision concerning their lives was carried out with the help of an AI. To enforce this, governments will need a better record of which companies and institutions use AI to make significant decisions.

When specifically assessing a decision-making process of concern, the first step should be to determine whether or not the data set represents what the organisation wanted the AI to understand and make decisions about.

However, data sets, particularly easily available data sets, cover a limited range of situations, and inevitably, most AI will be confronted with situations they have not encountered before – the ethical issue is the framework by which decisions occur, and good data cannot secure that kind of ethical behavior by itself.

The blog post is here.

Monday, May 28, 2018

The ethics of experimenting with human brain tissue

Nita Farahany and others
Nature
Originally published April 25, 2018

If researchers could create brain tissue in the laboratory that might appear to have conscious experiences or subjective phenomenal states, would that tissue deserve any of the protections routinely given to human or animal research subjects?

This question might seem outlandish. Certainly, today’s experimental models are far from having such capabilities. But various models are now being developed to better understand the human brain, including miniaturized, simplified versions of brain tissue grown in a dish from stem cells — brain organoids. And advances keep being made.

These models could provide a much more accurate representation of normal and abnormal human brain function and development than animal models can (although animal models will remain useful for many goals). In fact, the promise of brain surrogates is such that abandoning them seems itself unethical, given the vast amount of human suffering caused by neurological and psychiatric disorders, and given that most therapies for these diseases developed in animal models fail to work in people. Yet the closer the proxy gets to a functioning human brain, the more ethically problematic it becomes.

The information is here.


Friday, April 20, 2018

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn, pulled in more than $50 million in 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it researching how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement only involved fraud accusations on four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only of the grants themselves, but results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as Mr. Schunn throughout, even though he holds a PhD in psychology from Carnegie Mellon University.

Tuesday, April 3, 2018

Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu
Practical Ethics
Originally posted March 31, 2018

Here is an excerpt:

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. Cambridge researcher and creator of the third-party app in question, Aleksandr Kogan, violated the agreement only when the data was passed on to Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; they profit from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users regarding what precisely it is they are consenting to.

The blog post is here.

Thursday, February 8, 2018

How can groups make good decisions? Deliberation & Diversity

Mariano Sigman and Dan Ariely
TED Talk
Originally recorded April 2017

We all know that when we make decisions in groups, they don't always go right -- and sometimes they go very wrong. How can groups make good decisions? With his colleague Dan Ariely, neuroscientist Mariano Sigman has been inquiring into how we interact to reach decisions by performing experiments with live crowds around the world. In this fun, fact-filled explainer, he shares some intriguing results -- as well as some implications for how it might impact our political system. In a time when people seem to be more polarized than ever, Sigman says, better understanding how groups interact and reach conclusions might spark interesting new ways to construct a healthier democracy.

Saturday, December 9, 2017

Evidence-Based Policy Mistakes

Kausik Basu
Project Syndicate
Originally published November 30, 2017

Here is an excerpt:

Likewise, US President Donald Trump cites simplistic trade-deficit figures to justify protectionist policies that win him support among a certain segment of the US population. In reality, the evidence suggests that such policies will hurt the very people Trump claims to be protecting.

Now, the chair of Trump’s Council of Economic Advisers, Kevin Hassett, is attempting to defend Congressional Republicans’ effort to slash corporate taxes by claiming that, when developed countries have done so in the past, workers gained “well north of” $4,000 per year. Yet there is ample evidence that the benefits of such tax cuts accrue disproportionately to the rich, largely via companies buying back stock and shareholders earning higher dividends.

It is not clear whence Hassett is getting his data. But chances are that, at the very least, he is misinterpreting it. And he is far from alone in failing to reach accurate conclusions when assessing a given set of data.

Consider the oft-repeated refrain that, because there is evidence that virtually all jobs over the last decade were created by the private sector, the private sector must be the most effective job creator. At first glance, the logic might seem sound. But, on closer examination, the statement begs the question. Imagine a Soviet economist claiming that, because the government created virtually all jobs in the Soviet Union, the government must be the most effective job creator. To find the truth, one would need, at a minimum, data on who else tried to create jobs, and how.
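
A toy calculation, with entirely made-up numbers, makes the point concrete: a sector can account for the lion's share of jobs created and still not be the most effective creator per attempt, which is exactly the missing-denominator problem the excerpt describes.

```python
# Hypothetical figures (not from the article) illustrating why the share of jobs
# created says little about effectiveness without data on who tried, and how often.
attempts = {"private sector": 1_000_000, "government": 100_000}   # assumed attempts
jobs_created = {"private sector": 95_000, "government": 40_000}   # assumed outcomes

total_created = sum(jobs_created.values())
for sector, tried in attempts.items():
    share = jobs_created[sector] / total_created
    rate = jobs_created[sector] / tried
    print(f"{sector}: {share:.0%} of all new jobs, "
          f"{rate:.1%} success rate per attempt")

# The private sector accounts for ~70% of the jobs created here, yet its
# per-attempt rate is far lower -- the headline share alone cannot settle
# which actor is the "most effective job creator".
```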

The article is here.

Saturday, December 2, 2017

Japanese doctor who exposed a drug too good to be true calls for morality and reforms

Tomoko Otake
Japan Times
Originally posted November 15, 2017

Here is an excerpt:

Kuwajima says the Diovan case is a sobering reminder that large-scale clinical trials published in top medical journals should not be blindly trusted, as they can be exploited by drugmakers rushing to sell their products before their patents run out.

“I worked at a research hospital and had opportunities to try new or premarket drugs on patients, so I knew from early on that Diovan and the same class of drugs called ARB wouldn’t work, especially for elderly patients,” Kuwajima recalled in a recent interview at Tokyo Metropolitan Geriatric Hospital, where he has retired from full-time practice but still sees patients two days a week. “I had a strong sense of crisis that hordes of elderly people — whose ranks were growing as the population grayed — would be prescribed a drug that didn’t work.”

Kuwajima said he immediately found the Diovan research suspicious because the results were just too good to be true. This was before Novartis admitted that it had paid five professors conducting studies at their universities a total of ¥1.1 billion in “research grants,” and even had Shirahashi, a Novartis employee purporting to be a university lecturer, help with statistical analyses for the papers.

The article is here.

Wednesday, November 15, 2017

The U.S. Is Retreating from Religion

Allen Downey
Scientific American
Originally published on October 20, 2017

Since 1990, the fraction of Americans with no religious affiliation has nearly tripled, from about 8 percent to 22 percent. Over the next 20 years, this trend will accelerate: by 2020, there will be more of these "Nones" than Catholics, and by 2035, they will outnumber Protestants.
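
To get a rough sense of how such projections behave, here is a naive straight-line extrapolation of the trend quoted above; the two anchor points come from the excerpt, but the linear method and exact figures are assumptions for illustration, not Downey's actual model.

```python
# Naive linear extrapolation of the "Nones" trend described above.
# Anchor points are from the excerpt (~8% in 1990, ~22% around 2017);
# the straight-line method is an illustrative assumption, not Downey's model.
year0, share0 = 1990, 0.08
year1, share1 = 2017, 0.22
slope = (share1 - share0) / (year1 - year0)   # ~0.5 percentage points per year

for year in (2020, 2035):
    projected = share0 + slope * (year - year0)
    print(year, f"{projected:.0%}")

# Roughly 24% by 2020 and 31% by 2035 under this crude trend -- the share keeps
# climbing, which is the direction of the article's projected crossovers with
# Catholics and then Protestants.
```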

The original article includes a figure showing changes since 1972, along with these predictions, based on data from the General Social Survey (GSS).

Friday, September 29, 2017

How Silicon Valley is erasing your individuality

Franklin Foer
Washington Post
Originally posted September 8, 2017

Here is an excerpt:

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, and that isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, it’s clear that their worldview is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies think we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. (“Facebook stands for bringing us closer together and building a global community,” Zuckerberg wrote in one of his many manifestos.) By stitching the world together, they can cure its ills.

Rhetorically, the tech companies gesture toward individuality — to the empowerment of the “user” — but their worldview rolls over it. Even the ubiquitous invocation of users is telling: a passive, bureaucratic description of us. The big tech companies (the Europeans have lumped them together as GAFA: Google, Apple, Facebook, Amazon) are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility toward intellectual property. In the realm of economics, they justify monopoly by suggesting that competition merely distracts from the important problems like erasing language barriers and building artificial brains. Companies should “transcend the daily brute struggle for survival,” as Facebook investor Peter Thiel has put it.

The article is here.

Tuesday, August 29, 2017

Must science be testable?

Massimo Pigliucci
Aeon
Originally published August 10, 2016

Here is an excerpt:

That said, the publicly visible portion of the physics community nowadays seems split between people who are openly dismissive of philosophy and those who think they got the pertinent philosophy right but their ideological opponents haven’t. At stake isn’t just the usually tiny academic pie, but public appreciation of and respect for both the humanities and the sciences, not to mention millions of dollars in research grants (for the physicists, not the philosophers). Time, therefore, to take a more serious look at the meaning of Popper’s philosophy and why it is still very much relevant to science, when properly understood.

As we have seen, Popper’s message is deceptively simple, and – when repackaged in a tweet – has in fact deceived many a smart commentator in underestimating the sophistication of the underlying philosophy. If one were to turn that philosophy into a bumper sticker slogan it would read something like: ‘If it ain’t falsifiable, it ain’t science, stop wasting your time and money.’

But good philosophy doesn’t lend itself to bumper sticker summaries, so one cannot stop there and pretend that there is nothing more to say. Popper himself changed his mind throughout his career about a number of issues related to falsification and demarcation, as any thoughtful thinker would do when exposed to criticisms and counterexamples from his colleagues. For instance, he initially rejected any role for verification in establishing scientific theories, thinking that it was far too easy to ‘verify’ a notion if one were actively looking for confirmatory evidence. Sure enough, modern psychologists have a name for this tendency, common to laypeople as well as scientists: confirmation bias.

Nonetheless, later on Popper conceded that verification – especially of very daring and novel predictions – is part of a sound scientific approach. After all, the reason Einstein became a scientific celebrity overnight after the 1919 total eclipse is precisely because astronomers had verified the predictions of his theory all over the planet and found them in satisfactory agreement with the empirical data.

The article is here.

Thursday, August 24, 2017

China's Plan for World Domination in AI Isn't So Crazy After All

Mark Bergen and David Ramli
Bloomberg.com
First published August 14, 2017

Here is an excerpt:

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China’s biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China’s investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: A vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and most importantly staunch government support that includes handing over gobs of citizens’ data –- something that makes Western officials squirm.

Data is key because that’s how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it’s much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

The article is here.

Saturday, December 3, 2016

Data Ethics: The New Competitive Advantage

Gry Hasselbalch
Tech Crunch
Originally posted November 14, 2016

Here is an excerpt:

What is data ethics?

Ethical companies in today’s big data era are doing more than just complying with data protection legislation. They also follow the spirit and vision of the legislation by listening closely to their customers. They’re implementing credible and clear transparency policies for data management. They’re only processing necessary data and developing privacy-aware corporate cultures and organizational structures. Some are developing products and services using Privacy by Design.

A data-ethical company sustains ethical values relating to data, asking: Is this something I myself would accept as a consumer? Is this something I want my children to grow up with? A company’s degree of “data ethics awareness” is not only crucial for survival in a market where consumers progressively set the bar, it’s also necessary for society as a whole. It plays a similar role as a company’s environmental conscience — essential for company survival, but also for the planet’s welfare. Yet there isn’t a one-size-fits-all solution, perfect for every ethical dilemma. We’re in an age of experimentation where laws, technology and, perhaps most importantly, our limits as individuals are tested and negotiated on a daily basis.

The article is here.

Friday, December 2, 2016

New ruling finally requires homeopathic 'treatments' to obey the same labeling standards as real medicines

Lindsay Dodgson
Business Insider
Originally posted November 17, 2016

The Federal Trade Commission issued a statement this month which said that homeopathic remedies have to be held to the same standard as other products that make similar claims. In other words, American companies must now have reliable scientific evidence for health-related claims that their products can treat specific conditions and illnesses.

The article is here.

The Federal Trade Commission (FTC) ruling is here.

Wednesday, September 21, 2016

Forget ideology, liberal democracy’s newest threats come from technology and bioscience

John Naughton
The Guardian
Originally posted August 28, 2016

Here is an excerpt:

Here Harari ventures into the kind of dystopian territory that Aldous Huxley would recognise. He sees three broad directions.

1. Humans will lose their economic and military usefulness, and the economic system will stop attaching much value to them.

2. The system will still find value in humans collectively but not in unique individuals.

3. The system will, however, find value in some unique individuals, “but these will be a new race of upgraded superhumans rather than the mass of the population”.

By “system”, he means the new kind of society that will evolve as bioscience and information technology progress at their current breakneck pace. As before, this society will be based on a deal between religion and science but this time humanism will be displaced by what Harari calls “dataism” – a belief that the universe consists of data flows, and the value of any entity or phenomenon is determined by its contribution to data processing.

The article is here.

Friday, September 18, 2015

Are Arguments about GMO Safety Really About Something Else?

By Gregory E. Kaebnick
The Hastings Center Blog
Originally published August 28, 2015

Here is an excerpt:

Saletan is trying to examine the impact of GMOs in more or less this objective way. Perhaps, however, the fiercer, dyed-in-the-wool opponents of GMOs are looking beyond health and safety, strictly construed in terms of quantifiable aspects of human well-being, to something else. One possibility is that they are indeed focused on health and safety but are put off by something about the particular form of the threat. Moral psychologists such as Paul Slovic and Daniel Kahneman have noted that the perception of a risk’s severity does not cleanly track the quantifiable outcomes. Different ways of dying may be perceived as better or worse, even though death is the measurable outcome in both cases. After September 11, 2001, air travel dropped significantly and many people who might have been expected to fly in planes, and safely reach their destinations, went by car instead and died in automobile accidents. Viewed strictly in terms of the quantifiable risk of death, the decision to go by car looks silly. But maybe, the risk assessor (and scholarly critic of risk assessment) Adam Finkel has proposed, what put people off flying was not the risk of death alone but the prospect of “death preceded by agonizing minutes of chaos and the awful opportunity of being able to contact loved ones before the grisly culmination of another’s suicide mission.”

The entire article is here.

Thursday, August 20, 2015

Algorithms and Bias: Q. and A. With Cynthia Dwork

By Claire Cain Miller
The New York Times - The Upshot
Originally posted August 10, 2015

Here is an excerpt:

Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?

A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.
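
A small simulation, with invented data and no connection to the interview itself, shows the mechanism Dwork describes: if the historical admission labels encode discrimination, anything trained on those labels will reproduce it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented historical records: a group label and a test score,
# with scores distributed identically in both groups.
group = rng.integers(0, 2, n)       # two demographic groups, 0 and 1
score = rng.normal(60, 10, n)

# Suppose past human decisions discriminated: group 1 needed a higher score.
threshold = np.where(group == 1, 70, 60)
admitted = (score > threshold).astype(int)

# Any model fit to these labels learns the gap. Even a trivial "model" that
# memorises historical admission rates gives equally qualified applicants
# different answers depending on their group.
for g in (0, 1):
    qualified = (group == g) & (score > 65)
    print(f"group {g}: historical admit rate among applicants scoring above 65 =",
          round(admitted[qualified].mean(), 2))

# Group 0 applicants above 65 were always admitted; group 1 applicants with the
# same scores were admitted only about half the time -- a pattern any learning
# algorithm trained on these records will carry into future decisions.
```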

The entire article is here.