Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Security.

Wednesday, November 15, 2023

Private UK health data donated for medical research shared with insurance companies

Shanti Das
The Guardian
Originally posted 12 Nov 23

Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.

An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.

Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.

Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.

Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.


Here is my summary:

Private health data donated by over half a million UK citizens for medical research has been shared with insurance companies, despite a pledge that it would not be used for this purpose. The data, which includes genetic information, medical diagnoses, and lifestyle factors, has been used to develop digital tools that help insurers predict a person's risk of getting a chronic disease. This raises concerns about the privacy and security of sensitive health data, as well as the potential for insurance companies to use the data to discriminate against people with certain health conditions.

Thursday, December 31, 2020

Why business cannot afford to ignore tech ethics

Siddharth Venkataramakrishnan
ft.com
Originally posted 6 Dec 20

From one angle, the pandemic looks like a vindication of “techno-solutionism”. From the more everyday developments of teleconferencing to systems exploiting advanced artificial intelligence, platitudes to the power of innovation abound.

Such optimism smacks of short-termism. Desperate times often call for swift and sweeping solutions, but implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase and deploy such systems face costly repercussions, both financial and reputational.

Tech ethics, while a relatively new field, has suffered from perceptions that it is either the domain of philosophers or of PR people. This could not be further from the truth: as the pandemic continues, the importance of mapping out the potential harms of technologies only grows.

Take, for example, biometrics such as facial-recognition systems. These have a clear appeal for companies looking to check who is entering their buildings, how many people are wearing masks or whether social distancing is being observed. Recent advances in the field have combined technologies such as thermal scanning and “periocular recognition” (the ability to identify people wearing masks).

But the systems pose serious questions for those responsible for purchasing and deploying them. At a practical level, facial recognition has long been plagued by accusations of racial bias.


Monday, June 8, 2020

One Nation Under Guard

Samuel Bowles and Arjun Jayadev
The New York Times
Originally posted 15 Feb 2014
(and still relevant today)

Here is an excerpt:

What is happening in America today is both unprecedented in our history, and virtually unique among Western democratic nations. The share of our labor force devoted to guard labor has risen fivefold since 1890 — a year when, in case you were wondering, the homicide rate was much higher than today.

Is this the curse of affluence? Or of ethnic diversity? We don’t think so. The guard-labor share of employment in the United States is four times what it is in Sweden, where living standards rival America’s. And Britain, with its diverse population, uses substantially less guard labor than the United States.

In America, growing inequality has been accompanied by a boom in gated communities and armies of doormen controlling access to upscale apartment buildings. We did not count the doormen, or those producing the gates, locks and security equipment. One could quibble about the numbers; we have elsewhere adopted a broader definition, including prisoners, work supervisors with disciplinary functions, and others.

But however one totes up guard labor in the United States, there is a lot of it, and it seems to go along with economic inequality. States with high levels of income inequality — New York and Louisiana — employ twice as many security workers (as a fraction of their labor force) as less unequal states like Idaho and New Hampshire.

When we look across advanced industrialized countries, we see the same pattern: the more inequality, the more guard labor. As the graph shows, the United States leads in both.

The info is here.

Sunday, January 5, 2020

The Big Change Coming to Just About Every Website on New Year’s Day

Aaron Mak
Slate.com
Originally published 30 Dec 19

Starting New Year’s Day, you may notice a small but momentous change to the websites you visit: a button or link, probably at the bottom of the page, reading “Do Not Sell My Personal Information.”

The change is one of many going into effect Jan. 1, 2020, thanks to a sweeping new data privacy law known as the California Consumer Privacy Act. The California law essentially empowers consumers to access the personal data that companies have collected on them, to demand that it be deleted, and to prevent it from being sold to third parties. Since it’s a lot more work to create a separate infrastructure just for California residents to opt out of the data collection industry, these requirements will transform the internet for everyone.

Ahead of the January deadline, tech companies are scrambling to update their privacy policies and figure out how to comply with the complex requirements. The CCPA will only apply to businesses that earn more than $25 million in gross revenue, that collect data on more than 50,000 people, or for which selling consumer data accounts for more than 50 percent of revenue. The companies that meet these qualifications are expected to collectively spend a total of $55 billion upfront to meet the new standards, in addition to $16 billion over the next decade.

Major tech firms have already added a number of user features over the past few months in preparation. In early December, Twitter rolled out a privacy center where users can learn more about the company’s approach to the CCPA and navigate to a dashboard for customizing the types of info that the platform is allowed to use for ad targeting. Google has also created a protocol that blocks websites from transmitting data to the company, which users can take advantage of by downloading an opt-out add-on. Facebook, meanwhile, is arguing that it does not need to change anything because it does not technically “sell” personal information. Companies must at least set up a webpage and a toll-free phone number for fielding data requests.

The info is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

Thursday, July 25, 2019

Societal and ethical issues of digitization

Lambèr Royakkers, Jelte Timmer, Linda Kool, & Rinie van Est
Ethics and Information Technology (2018) 20:127–142

Abstract

In this paper we discuss the social and ethical issues that arise as a result of digitization, based on six dominant technologies: the Internet of Things, robotics, biometrics, persuasive technology, virtual & augmented reality, and digital platforms. We highlight the many developments in the digitizing society that appear to be at odds with six recurring themes revealed by our analysis of the scientific literature on the dominant technologies: privacy, autonomy, security, human dignity, justice, and balance of power. This study shows that the new wave of digitization is putting pressure on these public values. In order to effectively shape the digital society in a socially and ethically responsible way, stakeholders need to have a clear understanding of what such issues might be. Supervision has been developed the most in the areas of privacy and data protection. For other ethical issues concerning digitization, such as discrimination, autonomy, human dignity and unequal balance of power, supervision is not as well organized.

The paper is here.

Friday, June 7, 2019

Cameras Everywhere: The Ethics Of Eyes In The Sky

Tom Vander Ark
Forbes.com
Originally posted May 8, 2019

Pictures of people's houses can help predict the chances of those people getting into a car accident. The researchers who created the system acknowledged that "modern data collection and computational techniques...allow for unprecedented exploitation of personal data, can outpace development of legislation and raise privacy threats."

Hong Kong researchers created a drone system that can automatically analyze a road surface. It suggests that we’re approaching the era of automated surveillance for civil and military purposes.

In lower Manhattan, police are planning a surveillance center where officers can view thousands of video cameras around the downtown.


Microsoft turned down the sale of facial recognition software to California law enforcement, arguing that innocent women and minorities would be disproportionately held for questioning. It suggests that the technology is running ahead of public policy and is not yet ready for equitable use.

And speaking of facial recognition, JetBlue has begun using it in lieu of boarding passes on some flights, much to the chagrin of some passengers who wonder when they gave consent for this application and who has access to what biometric data.

The info is here.

Sunday, May 19, 2019

House Democrats seek details of Trump ethics waivers

Kate Ackley
www.rollcall.com
Originally posted May 17, 2019

Rep. Elijah E. Cummings, chairman of the Oversight and Reform Committee, wants a status update on the state of the swamp in the Trump administration.

The Maryland Democrat launched an investigation late this week into the administration’s use of ethics waivers, which allow former lobbyists to work on matters they handled in their previous private sector jobs. Cummings sent letters to the White House and 24 agencies and Cabinet departments requesting copies of their ethics pledges and details of any waivers that could expose “potential conflicts of interest.”

“Although the White House committed to providing information on ethics waivers on its website, the White House has failed to disclose comprehensive information about the waivers,” Cummings wrote in a May 16 letter to White House counsel Pat Cipollone.

A White House official declined comment on the investigation, and a committee aide said the administration had not yet responded to the requests. A spokeswoman for Rep. Jim Jordan of Ohio, the top Republican on the Oversight panel, did not immediately provide a comment.

After President Donald Trump ran on a “drain the swamp” message, the Trump administration ushered in a tough-sounding ethics pledge through an executive order in January 2017 requiring officials to recuse themselves from participating in matters they had lobbied on in the previous two years. But the waivers allow appointees to circumvent those restrictions.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app with a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Tuesday, April 16, 2019

Rise Of The Chief Ethics Officer

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

Robert Foehl is now executive-in-residence for business law and ethics at the Ohio University College of Business. In industry, he’s best known as the man who laid the ethical groundwork for Target as the company’s first director of corporate ethics.

At a company like Target, says Foehl, ethical issues arise every day. “This includes questions about where goods are sourced, how they are manufactured, the environment, justice and equality in the treatment of employees and customers, and obligations to the community,” he says. “In retail, the biggest issues tend to be around globalism and how we market to consumers. Are people being manipulated? Are we being discriminatory?”

For Foehl, all of these issues are just part of various ethical frameworks that he’s built over the years; complex philosophical frameworks that look at measures of happiness and suffering, the potential for individual harm, and even the impact of a decision on the “virtue” of the company. As he sees it, bringing a technology like AI into the mix has very little impact on that.

“The fact that you have an emerging technology doesn’t matter,” he says, “since you have thinking you can apply to any situation.” Whether it’s AI or big data or any other new tech, says Foehl, “we still put it into an ethical framework. Just because it involves a new technology doesn’t mean it’s a new ethical concept.”

The info is here.

Thursday, January 31, 2019

HHS issues voluntary guidelines amid rise of cyberattacks

Samantha Liss
www.healthcaredive.com
Originally published January 2, 2019

Dive Brief:

  • To combat security threats in the health sector, HHS issued a voluminous report that details ways small, local clinics and large hospital systems alike can reduce their cybersecurity risks. The guidelines are voluntary, so providers will not be required to adopt the practices identified in the report. 
  • The four-volume report is the culmination of work by a task force, convened in May 2017, to identify the five most common threats in the industry and 10 ways to prepare against them.
  • The five most common threats are email phishing attacks, ransomware attacks, loss or theft of equipment or data, accidental or intentional data loss by an insider and attacks against connected medical devices.

Monday, November 12, 2018

7 Ways Marketers Can Use Corporate Morality to Prepare for Future Data Privacy Laws

Patrick Hogan
Adweek.com
Originally posted October 10, 2018

Here is an excerpt:

Many organizations have already made responsible adjustments in how they communicate with users about data collection and use, and have become compliant with recent laws. However, compliance does not always equal responsibility: even though companies require consent and provide information as required, linking to the terms of use, clicking a checkbox or double opting-in still may not be enough to stay ahead or to protect consumers.

The best way to reduce the impact of the potential legislation is to take proactive steps now that set a new standard of responsibility in data use for your organization. Below are some measurable ways marketers can lead the way for the changing industry and create a foundational perception shift away from data and back to putting other humans first.

Create an action plan for complete data control and transparency

Set standards and protocols for your internal teams to determine how you are going to communicate with each other and your clients about data privacy, thus creating a path for all employees to follow and abide by moving forward.

Map data in your organization from receipt to storage to expulsion

Accountability is key. As a business, you should be able to know and speak to what is being done with the data that you are collecting throughout each stage of the process.

The info is here.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent, given stakeholders' goals for the system.
  • Security: applying cybersecurity paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly, given a formal specification.
  • Ethics: the effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Monday, June 11, 2018

Can Morality Be Engineered In Artificial General Intelligence Systems?

Abhijeet Katte
Analytics India Magazine
Originally published May 10, 2018

Here is an excerpt:

The report Engineering Moral Agents – from Human Morality to Artificial Morality discusses the challenges of engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide range of backgrounds, including philosophy. AGI-focused research is evolving toward the formalization of moral theories as a base for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland talked about a project on teaching formal ethics to computer-science students, in which the group built a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that there is a real need today for a functional system of ethical reasoning, as AI systems that function as part of our society are ready to be deployed. One of the suggestions is that every assisted-living AI system have a "Why did you do that?" button which, when pressed, causes the robot to explain why it carried out the previous action.
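At its core, that button amounts to keeping a reason alongside every action. Here is a toy sketch of the idea in Python; the class and method names are invented for illustration, not taken from the study.

    # A toy model of the "Why did you do that?" button: the agent records a
    # reason with each action, so the most recent decision can be queried.
    class ExplainableAgent:
        def __init__(self):
            self._log = []  # (action, reason) pairs, most recent last

        def act(self, action: str, reason: str) -> None:
            """Carry out an action and record the reasoning behind it."""
            self._log.append((action, reason))

        def why(self) -> str:
            """Answer a 'Why did you do that?' button press."""
            if not self._log:
                return "I have not done anything yet."
            action, reason = self._log[-1]
            return f"I {action} because {reason}."

    agent = ExplainableAgent()
    agent.act("closed the blinds", "bright sunlight was hitting the patient's bed")
    print(agent.why())  # I closed the blinds because bright sunlight was hitting the patient's bed.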

The information is here.

Friday, June 8, 2018

The pros and cons of having sex with robots

Karen Turner
www.vox.com
Originally posted January 18, 2018

Here is an excerpt:

Karen Turner: Where does sex robot technology stand right now?

Neil McArthur:

When people have this idea of a sex robot, they think it’s going to look like a human being, it’s gonna walk around and say seductive things and so on. I think that’s actually the slowest-developing part of this whole nexus of sexual technology. It will come — we are going to have realistic sex robots. But there are a few technical hurdles to creating humanoid robots that are proving fairly stubborn. Making them walk is one of them. And if you use Siri or any of those others, you know that AI is proving sort of stubbornly resistant to becoming realistic.

But I think that when you look more broadly at what’s happening with sexual technology, virtual reality in general has just taken off. And it’s being used in conjunction with something called teledildonics, which is kind of an odd term. But all it means is actual devices that you hook up to yourself in various ways that sync with things that you see onscreen. It’s truly amazing what’s going on.

(cut)

When you look at the ethical or philosophical considerations, I think there are two strands. One is the concerns people have, and the second, which I think maybe doesn't get as much attention, in the media at least, is the potential advantages.

The concerns have to do with the psychological impact. As you saw with those Apple shareholders [who asked Apple to help protect children from digital addiction], we’re seeing a lot of concern about the impact that technology is having on people’s lives right now. Many people feel that anytime you’re dealing with sexual technology, those sorts of negative impacts really become intensified — specifically, social isolation, people cutting themselves off from the world.

The article is here.

Tuesday, April 3, 2018

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.
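To make that weakness concrete, here is a minimal sketch of one classic attack of this kind, the fast gradient sign method (FGSM). It assumes a generic PyTorch image classifier; the model, image, and label are placeholders rather than anything from the reporting.

    # FGSM: nudge each pixel slightly in the direction that most increases
    # the model's loss. The result looks unchanged to a person but can
    # flip the classifier's prediction.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Return an adversarially perturbed copy of `image` (pixels in [0, 1])."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Because the change is bounded by epsilon, it stays imperceptible while still being adversarial, which is part of what makes such attacks hard to defend against.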

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”

The article is here.

Thursday, March 15, 2018

Apple’s Move to Share Health Care Records Is a Game-Changer

Aneesh Chopra and Safiq Rab
wired.com
Originally posted February 19, 2018

Here is an excerpt:

Naysayers point out that Apple is currently displaying only a sliver of a consumer's entire electronic health record. That is true, but it is largely on account of the limited information available via the open API standard. As with all standards efforts, the FHIR API will add more content, like scheduling slots and clinical notes, over time. Some of that work will be motivated by a proposed voluntary federal framework to expand the types of data that certified systems must share over time, as noted in this draft approach out for public comment.
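For readers curious what the open API standard looks like in practice, here is a minimal sketch of reading a single FHIR resource over REST. The endpoint below is a public FHIR test server and the patient ID would be a test value; real patient-facing deployments such as Apple's sit behind SMART on FHIR OAuth authorization rather than open endpoints.

    # A FHIR R4 read: GET [base]/Patient/[id] with the FHIR JSON media type.
    # The base URL is an illustrative public test server, not a real deployment.
    import requests

    FHIR_BASE = "https://hapi.fhir.org/baseR4"  # public test server

    def fetch_patient(patient_id: str) -> dict:
        """Fetch one Patient resource as JSON from a FHIR R4 endpoint."""
        resp = requests.get(
            f"{FHIR_BASE}/Patient/{patient_id}",
            headers={"Accept": "application/fhir+json"},
        )
        resp.raise_for_status()
        return resp.json()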

Imagine if Apple further opens up Apple Health so that it no longer serves as the destination but as a conduit, carrying a patient's longitudinal health record to a growing marketplace of applications that can help guide consumers through decisions to better manage their health.

Thankfully, the consumer data-sharing movement—placing the longitudinal health record in the hands of the patient and the applications they trust—is taking hold, albeit quietly. In just the past few weeks, a number of health systems that were initially slow to turn on the required APIs suddenly found the motivation to meet Apple's requirement.

The article is here.

Wednesday, March 7, 2018

The Squishy Ethics of Sex With Robots

Adam Rogers
Wired.com
Originally published February 2, 2018

Here is an excerpt:

Most of the world is ready to accept algorithm-enabled, internet-connected, virtual-reality-optimized sex machines with open arms (arms! I said arms!). The technology is evolving fast, which means two inbound waves of problems. Privacy and security, sure, but even solving those won’t answer two very hard questions: Can a robot consent to having sex with you? Can you consent to sex with it?

One thing that is unquestionable: There is a market. Either through licensing the teledildonics patent or risking lawsuits, several companies have tried to build sex technology that takes advantage of Bluetooth and the internet. “Remote connectivity allows people on opposite ends of the world to control each other’s dildo or sleeve device,” says Maxine Lynn, a patent attorney who writes the blog Unzipped: Sex, Tech, and the Law. “Then there’s also bidirectional control, which is going to be huge in the future. That’s when one sex toy controls the other sex toy and vice versa.”

Vibease, for example, makes a wearable that pulsates in time with synchronized digital books or with a partner controlling an app. We-Vibe makes vibrators that a partner can control or set to preset patterns. And so on.

The article is here.

Thursday, August 17, 2017

New Technology Standards Guide Social Work Practice and Education

Susan A. Knight
Social Work Today
Vol. 17 No. 4 P. 10

Today's technological landscape is vastly different from what it was just 10 to 15 years ago. Smartphones have replaced home landlines. Texting has become an accepted form of communication, both personally and professionally. Across sectors—health and human services, education, government, and business—employees conduct all manner of work on tablets and other portable devices. Along with "liking" posts on Facebook, people are tracking hashtags on Twitter, sending messages via Snapchat, and pinning pictures to Pinterest.

To top it all off, it seems that there's always a fresh controversy emerging because someone shared something questionable on a social media platform for the general public to see and critique.

Like every other field, social work practice is dealing with issues, challenges, and risks that were previously nonexistent. The NASW and Association of Social Work Boards (ASWB) Standards for Technology and Social Work Practice, dating back to 2005, was in desperate need of a rework to address the changes and complexities of the technological environment that social workers must now contend with.

The newly released updated standards are the result of a collaborative effort between four major social work organizations: NASW, ASWB, the Clinical Social Work Association (CSWA), and the Council on Social Work Education (CSWE). "The intercollaboration in the development of the technology standards provides one consensus product and resource for social workers to refer to," says Mirean Coleman, MSW, LICSW, CT, clinical manager of NASW.

The article is here.