Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Privacy.

Thursday, February 20, 2020

Sharing Patient Data Without Exploiting Patients

McCoy MS, Joffe S, Emanuel EJ.
JAMA. Published online January 16, 2020.
doi:10.1001/jama.2019.22354

Here is an excerpt:

The Risks of Data Sharing

When health systems share patient data, the primary risk to patients is the exposure of their personal health information, which can result in a range of harms including embarrassment, stigma, and discrimination. Such exposure is most obvious when health systems fail to remove identifying information before sharing data, as is alleged in the lawsuit against Google and the University of Chicago. But even when shared data are fully deidentified in accordance with the requirements of the Health Insurance Portability and Accountability Act, reidentification is possible, especially when patient data are linked with other data sets. Indeed, even new data privacy laws such as Europe's General Data Protection Regulation and California's Consumer Privacy Act do not eliminate reidentification risk.
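To make the reidentification risk concrete, here is a minimal sketch (mine, not from the JAMA article) of a linkage attack in Python. It assumes the pandas library and uses entirely hypothetical records: a "deidentified" clinical extract is joined to a public dataset on shared quasi-identifiers, and names reattach to diagnoses.

```python
# Minimal sketch of a linkage attack on "deidentified" data.
# All records and names are hypothetical; real attacks link much
# larger datasets on the same kinds of quasi-identifiers.
import pandas as pd

# A "deidentified" clinical extract: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth year, sex) retained.
clinical = pd.DataFrame({
    "zip": ["02139", "02139", "90210"],
    "birth_year": [1980, 1975, 1962],
    "sex": ["F", "M", "F"],
    "diagnosis": ["depression", "diabetes", "asthma"],
})

# A public dataset (e.g., a voter roll) that carries names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Jones", "B. Smith"],
    "zip": ["02139", "90210"],
    "birth_year": [1980, 1962],
    "sex": ["F", "F"],
})

# Joining on the shared quasi-identifiers reattaches names to
# records that were nominally anonymous.
reidentified = clinical.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```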

Companies that acquire patient data also accept risk by investing in research and development that may not result in marketable products. This risk is less ethically concerning, however, than that borne by patients. While companies usually can abandon unpromising ventures, patients’ lack of control over data-sharing arrangements makes them vulnerable to exploitation. Patients lack control, first, because they may have no option other than to seek care in a health system that plans to share their data. Second, even if patients are able to authorize sharing of their data, they are rarely given the information and opportunity to ask questions needed to give meaningful informed consent to future uses of their data.

Thus, for the foreseeable future, data sharing will entail ethically concerning risks to patients whose data are shared. But whether these exchanges are exploitative depends on how much benefit patients receive from data sharing.

The info is here.

Wednesday, January 29, 2020

In 2020, let’s stop AI ethics-washing and actually do something

Karen Hao
technologyreview.com
Originally published 27 Dec 19

Here is an excerpt:

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers—content moderators, data labelers, transcribers—who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities—including San Francisco and Oakland, California, and Somerville, Massachusetts—banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies’ use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field’s runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation.

The info is here.

Thursday, January 23, 2020

Colleges want freshmen to use mental health apps. But are they risking students’ privacy?

Deanna Paul
The New York Times
Originally posted 2 Jan 20

Here are two excerpts:

TAO Connect is just one of dozens of mental health apps permeating college campuses in recent years. In addition to increasing the bandwidth of college counseling centers, the apps offer information and resources on mental health issues and wellness. But as student demand for mental health services grows, and more colleges turn to digital platforms, experts say universities must begin to consider their role as stewards of sensitive student information and the consequences of encouraging or mandating these technologies.

The rise in student wellness applications arrives as mental health problems among college students have dramatically increased. Three in five U.S. college students experience overwhelming anxiety, and two in five report debilitating depression, according to a 2018 survey from the American College Health Association.

Even so, only about 15 percent of undergraduates seek help at a university counseling center. These apps have begun to fill students’ needs by providing ongoing access to traditional mental health services without barriers such as counselor availability or stigma.

(cut)

“If someone wants help, they don’t care how they get that help,” said Lynn E. Linde, chief knowledge and learning officer for the American Counseling Association. “They aren’t looking at whether this person is adequately credentialed and are they protecting my rights. They just want help immediately.”

Yet she worried that students may be giving up more information than they realize and about the level of coercion a school can exert by requiring students to accept terms of service they otherwise wouldn’t agree to.

“Millennials understand that with the use of their apps they’re giving up privacy rights. They don’t think to question it,” Linde said.

The info is here.

You Are Already Having Sex With Robots

Emma Grey Ellis
wired.com
Originally published 23 Aug 19

Here are two excerpts:

Carnegie Mellon roboticist Hans Moravec has written about emotions as devices for channeling behavior in helpful ways—for example, sexuality prompting procreation. He concluded that artificial intelligences, in seeking to please humanity, are likely to be highly emotional. By this definition, if you encoded an artificial intelligence with the need to please humanity sexually, their urgency to follow their programming constitutes sexual feelings. Feelings as real and valid as our own. Feelings that lead to the thing that feelings, probably, evolved to lead to: sex. One gets the sense that, for some digisexual people, removing the squishiness of the in-between stuff—the jealousy and hurt and betrayal and exploitation—improves their sexual enjoyment. No complications. The robot as ultimate partner. An outcome of evolution.

So the sexbotcalypse will come. It's not scary, it's just weird, and it's being motivated by millennia-old bad habits. Laziness, yes, but also something else. “I don’t see anything that suggests we’re going to buck stereotypes,” says Charles Ess, who studies virtue ethics and social robots at the University of Oslo. “People aren’t doing this out of the goodness of their hearts. They’re doing this to make money.”

(cut)

Technologizing sexual relationships will also fill one of the last blank spots in tech’s knowledge of (ad-targetable) human habits. Brianna Rader—founder of Juicebox, progenitor of Slutbot—has spoken about how difficult it is to do market research on sex. If having sex with robots or other forms of sex tech becomes commonplace, it wouldn’t be difficult anymore. “We have an interesting relationship with privacy in the US,” Kaufman says. “We’re willing to trade a lot of our privacy and information away for pleasures less complicated than an intimate relationship.”

The info is here.

Tuesday, January 21, 2020

How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?

AMA J Ethics. 2018;20(9):E864-872.
doi: 10.1001/amajethics.2018.864.

Abstract

Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.

The info is here.

Thursday, January 16, 2020

Ethics In AI: Why Values For Data Matter

Marc Teerlink
forbes.com
Originally posted 18 Dec 19

Here is an excerpt:

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced, AI-infused predictive analytics.

According to a recent study SAP conducted in conjunction with The Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average than those who aren't using AI and ML at all, or aren't using them well.

One of their secrets: They treat data as an asset. The same way organizations treat inventory, fleet, and manufacturing assets.

They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset, because, no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.
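The point that poor training data caps what an algorithm can deliver can be shown with a small experiment. This is my own illustrative sketch, not from the Forbes piece; it assumes numpy and scikit-learn are installed and uses synthetic data.

```python
# Illustrative sketch: the same model trained on clean vs. noisy labels,
# showing how poor training data limits predictive performance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels to simulate poor data quality.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_noisy = np.where(flip, 1 - y_train, y_train)

for name, labels in [("clean labels", y_train), ("30% flipped labels", y_noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```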

The info is here.

Monday, January 13, 2020

Big tech is thinking about digital ethics, and small businesses need to keep up

Daphne Leprince-Ringuet
zdnet.com
Originally posted 16 Dec 19

Here is an excerpt:

And insurance company Aviva recently published a one-page customer data charter along with an explainer video to detail how it uses personal information, "instead of long privacy policies that no one reads," said the company's chief data scientist, Orlando Machado.

For McDougall, however, this is just the tip of the iceberg. "We hear from Microsoft and Intel about what they are doing, and how they are implementing ethics," he said, "but there are many smaller organizations out there that are far from thinking about these things."

As an example of a positive development, he points to the GDPR, introduced last year in the EU, which provides more practical guidelines for ensuring ethical business practices and the protection of privacy.

Even GDPR rules, however, are struggling to find a grip with SMBs. A survey conducted this year among 716 small businesses in Europe showed that there was widespread ignorance about data security tools and loose adherence to the law's key privacy provisions.

About half of the respondents believed their organizations were compliant with the new rules – although only 9% were able to identify which end-to-end encrypted email service they used.

A full 44% said they were not confident that they always obtained consent or determined a lawful basis before using personal data.

The info is here.

Sunday, January 5, 2020

The Big Change Coming to Just About Every Website on New Year’s Day

Aaron Mak
Slate.com
Originally published 30 Dec 19

Starting New Year’s Day, you may notice a small but momentous change to the websites you visit: a button or link, probably at the bottom of the page, reading “Do Not Sell My Personal Information.”

The change is one of many going into effect Jan. 1, 2020, thanks to a sweeping new data privacy law known as the California Consumer Privacy Act. The California law essentially empowers consumers to access the personal data that companies have collected on them, to demand that it be deleted, and to prevent it from being sold to third parties. Since it’s a lot more work to create a separate infrastructure just for California residents to opt out of the data collection industry, these requirements will transform the internet for everyone.

Ahead of the January deadline, tech companies are scrambling to update their privacy policies and figure out how to comply with the complex requirements. The CCPA will only apply to businesses that earn more than $25 million in gross revenue, that collect data on more than 50,000 people, or for which selling consumer data accounts for more than 50 percent of revenue. The companies that meet these qualifications are expected to collectively spend a total of $55 billion upfront to meet the new standards, in addition to $16 billion over the next decade.

Major tech firms have already added a number of user features over the past few months in preparation. In early December, Twitter rolled out a privacy center where users can learn more about the company’s approach to the CCPA and navigate to a dashboard for customizing the types of info that the platform is allowed to use for ad targeting. Google has also created a protocol that blocks websites from transmitting data to the company, which users can take advantage of by downloading an opt-out add-on. Facebook, meanwhile, is arguing that it does not need to change anything because it does not technically “sell” personal information. Companies must at least set up a webpage and a toll-free phone number for fielding data requests.
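For readers wondering what "fielding data requests" looks like in practice, here is a minimal sketch of the three consumer rights the article describes: access, deletion, and "do not sell". The class and method names are invented for illustration and are not any company's real API.

```python
# Hypothetical sketch of CCPA-style consumer requests: access, deletion,
# and "Do Not Sell My Personal Information". Standard library only.
from dataclasses import dataclass, field

@dataclass
class ConsumerRecord:
    email: str
    profile: dict
    do_not_sell: bool = False

@dataclass
class PrivacyPortal:
    records: dict = field(default_factory=dict)

    def request_access(self, email: str) -> dict:
        """Return a copy of everything stored about the consumer."""
        rec = self.records.get(email)
        return dict(rec.profile) if rec else {}

    def request_deletion(self, email: str) -> bool:
        """Delete the consumer's data, if any is held."""
        return self.records.pop(email, None) is not None

    def opt_out_of_sale(self, email: str) -> None:
        """Honor a 'Do Not Sell My Personal Information' request."""
        rec = self.records.setdefault(email, ConsumerRecord(email, {}))
        rec.do_not_sell = True

portal = PrivacyPortal()
portal.records["a@example.com"] = ConsumerRecord("a@example.com", {"ads_segment": "travel"})
portal.opt_out_of_sale("a@example.com")
print(portal.request_access("a@example.com"))    # {'ads_segment': 'travel'}
print(portal.request_deletion("a@example.com"))  # True
```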

The info is here.

Thursday, January 2, 2020

The Tricky Ethics of Google's Project Nightingale Effort

Cason Schmit
nextgov.com
Originally posted 3 Dec 19

The nation’s second-largest health system, Ascension, has agreed to allow the software behemoth Google access to tens of millions of patient records. The partnership, called Project Nightingale, aims to improve how information is used for patient care. Specifically, Ascension and Google are trying to build tools, including artificial intelligence and machine learning, “to make health records more useful, more accessible and more searchable” for doctors.

Ascension did not announce the partnership: The Wall Street Journal first reported it.

Patients and doctors have raised privacy concerns about the plan. Lack of notice to doctors and consent from patients are the primary concerns.

As a public health lawyer, I study the legal and ethical basis for using data to promote public health. Information can be used to identify health threats, understand how diseases spread and decide how to spend resources. But it’s more complicated than that.

The law deals with what can be done with data; this piece focuses on ethics, which asks what should be done.

Beyond Hippocrates

Big-data projects like this one should always be ethically scrutinized. However, data ethics debates are often narrowly focused on consent issues.

In fact, ethical determinations require balancing different, and sometimes competing, ethical principles. Sometimes it might be ethical to collect and use highly sensitive information without getting an individual’s consent.

The info is here.

Monday, December 30, 2019

Privacy: Where Security and Ethics Miss the Mark

Jason Paul Kazarian
securityboulevard.com
Originally posted 29 Nov 19

Here is an excerpt:

Without question, we as a society have changed course. The unfettered internet has had its day. Going forward, more and more private companies will be subject to increasingly demanding privacy legislation.

Is this a bad thing? Something nefarious? Probably not. Just as we have always expected privacy in our physical lives, we now expect privacy in our digital lives as well. And businesses are adjusting toward our expectations.

One visible adjustment is more disclosure about exactly what private data a business collects and why. Privacy policies are easier to understand, as well as more comprehensive. Most websites warn visitors about the storage of private data in “cookies.” Many sites additionally grant visitors the ability to turn off such cookies except those technically necessary for the site’s operation.

Another visible adjustment is the widespread use of multi-factor authentication. Many sites, especially those involving credit, finance or shopping, validate login with a token sent by email, text or voice. These sites then verify the authorized user is logging in, which helps avoid leaking private data.
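As a rough illustration of that token-based login flow (my own sketch, not drawn from the article), the snippet below generates a short-lived code to send out of band and verifies it with a constant-time comparison. It uses only the Python standard library; the delivery step is a hypothetical stub.

```python
# Minimal sketch of a one-time login code: generate, send out of band,
# then compare in constant time at login. Stdlib only.
import hmac
import secrets

def issue_code() -> str:
    """Generate a 6-digit one-time code to send by email, text, or voice."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify_code(expected: str, submitted: str) -> bool:
    """Constant-time comparison to avoid leaking the code via timing."""
    return hmac.compare_digest(expected, submitted)

code = issue_code()
# send_by_email(user, code)  # hypothetical delivery step
print(verify_code(code, code))      # True
print(verify_code(code, "000000"))  # almost always False
```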

Perhaps the biggest adjustment is not visible: encryption of private data. More businesses now operate on otherwise meaningless cipher substitutes (the output of an encryption function) in place of sensitive data such as customer account numbers, birth dates, email or street addresses, member names and so on. This protects customers when private data is exposed in an all-too-common breach.
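The "cipher substitutes" idea can be sketched in a few lines of Python. This sketch assumes the third-party cryptography package and collapses key management to a single in-memory key, so it is illustrative only.

```python
# Sketch of storing a cipher substitute in place of a sensitive field.
# Requires the `cryptography` package; key management is simplified.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key-management service
fernet = Fernet(key)

account_number = "4111-0000-1234-5678"

# What the operational database stores: an opaque token, not the number.
stored_token = fernet.encrypt(account_number.encode())
print(stored_token)  # meaningless bytes to anyone who breaches the database

# Only services holding the key can recover the original value.
print(fernet.decrypt(stored_token).decode())
```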

The info is here.

23 and Baby

Tanya Lewis
nature.com
Originally posted 4 Dec 19

Here are two excerpts:

Proponents say that genetic testing of newborns can help diagnose a life-threatening childhood-onset disease in urgent cases and could dramatically increase the number of genetic conditions all babies are screened for at birth, enabling earlier diagnosis and treatment. It could also inform parents of conditions they could pass on to future children or of their own risk of adult-onset diseases. Genetic testing could detect hundreds or even thousands of diseases, an order of magnitude more than current heel-stick blood tests—which all babies born in the U.S. undergo at birth—or confirm results from such a test.

But others caution that genetic tests may do more harm than good. They could miss some diseases that heel-stick testing can detect and produce false positives for others, causing anxiety and leading to unnecessary follow-up testing. Sequencing children’s DNA also raises issues of consent and the prospect of genetic discrimination.

Regardless of these concerns, newborn genetic testing is already here, and it is likely to become only more common. But is the technology sophisticated enough to be truly useful for most babies? And are families—and society—ready for that information?

(cut)

Then there’s the issue of privacy. If the child’s genetic information is stored on file, who has access to it? If the information becomes public, it could lead to discrimination by employers or insurance companies. The Genetic Information Nondiscrimination Act (GINA), passed in 2008, prohibits such discrimination. But GINA does not apply to employers with fewer than 15 employees and does not cover insurance for long-term care, life or disability. It also does not apply to people employed and insured by the military’s Tricare system, such as Rylan Gorby. When his son’s genome was sequenced, researchers also obtained permission to sequence Rylan’s genome, to determine if he was a carrier for the rare hemoglobin condition. Because it manifests itself only in childhood, Gorby decided taking the test was worth the risk of possible discrimination.

The info is here.

Saturday, December 28, 2019

Chinese residents worry about rise of facial recognition

Sam Shead
bbc.com
Originally posted 5 Dec 19

Here is an excerpt:

China has more facial recognition cameras than any other country and they are often hard to avoid.

Earlier this week, local reports said that Zhengzhou, the capital of central Henan province, had become the first Chinese city to roll the tech out across all its subway train stations.

Commuters can use the technology to automatically authorise payments instead of scanning a QR code on their phones. For now, it is a voluntary option, said the China Daily.

Earlier this month, university professor Guo Bing announced he was suing Hangzhou Safari Park for enforcing facial recognition.

Prof Guo, a season ticket holder at the park, had used his fingerprint to enter for years, but was no longer able to do so.

The case was covered in the government-owned media, indicating that the Chinese Communist Party is willing for the private use of the technology to be discussed and debated by the public.

The info is here.

Tuesday, December 24, 2019

DNA genealogical databases are a gold mine for police, but with few rules and little transparency

Paige St. John
The LA Times
Originally posted 24 Nov 19

Here is an excerpt:

But law enforcement has plunged into this new world with little to no rules or oversight, intense secrecy and by forming unusual alliances with private companies that collect the DNA, often from people interested not in helping close cold cases but learning their ethnic origins and ancestry.

A Times investigation found:
  • There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
  • When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
  • California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.
There are growing concerns that the race to use genealogical databases will have serious consequences, from its inherent erosion of privacy to the implications of broadened police power.

In California, an innocent twin was thrown in jail. In Georgia, a mother was deceived into incriminating her son. In Texas, police met search guidelines by classifying a case as sexual assault but after an arrest only filed charges of burglary. And in the county that started the DNA race with the arrest of the Golden State Killer suspect, prosecutors have persuaded a judge to treat unsuspecting genetic contributors as “confidential informants” and seal searches so consumers are not scared away from adding their own DNA to the forensic stockpile.

Friday, December 13, 2019

Conference warned of dangers of facial recognition technology

Because of new technologies, “we are all monitored and recorded every minute of every day of our lives”, a conference has heard.

Colm Keena
The Irish Times
Originally posted 13 Nov 19

Here is an excerpt:

The potential of facial recognition technology to be used by oppressive governments and manipulative corporations was such that some observers have called for it to be banned. The suggestion should be taken seriously, Dr Danaher said.

The technology is “like a fingerprint of your face”, is cheap, and “normalises blanket surveillance”. This makes it “perfect” for oppressive governments and for manipulative corporations.
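The "fingerprint of your face" analogy can be made concrete with a small sketch (mine, not Dr Danaher's): recognition systems typically reduce a face image to an embedding vector and match people by comparing vectors against a threshold. Random vectors stand in for real embeddings here; only numpy is required.

```python
# Illustrative sketch of face matching by embedding similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                           # stored template
same_person = enrolled + rng.normal(scale=0.1, size=128)  # new capture of the same face
stranger = rng.normal(size=128)                           # a different face

THRESHOLD = 0.8  # real systems tune this to trade false matches against misses
print(cosine_similarity(enrolled, same_person) > THRESHOLD)  # True
print(cosine_similarity(enrolled, stranger) > THRESHOLD)     # False
```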

While the EU’s GDPR laws on the use of data applied here, Dr Danaher said Ireland should also introduce domestic law “to save us from the depredations of facial recognition technology”.

As well as facial recognition technology, he also addressed the conference about “deepfake” technology, which allows for the creation of highly convincing fake video content, and algorithms that assess risk, as other technologies that are creating challenges for the law.

In the US, the use of algorithms to predict a person’s likelihood of re-offending has raised significant concerns.

The info is here.

Friday, November 8, 2019

Privacy is a collective concern

Carissa Veliz
newstatesman.com
Originally published 22 Oct 2019

People often give a personal explanation of whether they protect the privacy of their data. Those who don’t care much about privacy might say that they have nothing to hide. Those who do worry about it might say that keeping their personal data safe protects them from being harmed by hackers or unscrupulous companies. Both positions assume that caring about and protecting one’s privacy is a personal matter. This is a common misunderstanding.

It’s easy to assume that because some data is “personal”, protecting it is a private matter. But privacy is both a personal and a collective affair, because data is rarely used on an individual basis.

(cut)

Because we are intertwined in ways that make us vulnerable to each other, we are responsible for each other’s privacy. I might, for instance, be extremely careful with my phone number and physical address. But if you have me as a contact in your mobile phone and then give access to companies to that phone, my privacy will be at risk regardless of the precautions I have taken. This is why you shouldn’t store more sensitive data than necessary in your address book, post photos of others without their permission, or even expose your own privacy unnecessarily. When you expose information about yourself, you are almost always exposing information about others.
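A small sketch (not from Véliz's essay) of how that exposure happens in practice: once a few users upload their address books, a company can assemble a profile of someone who never consented. All names and numbers below are invented.

```python
# Hypothetical sketch: a non-user's number reaches a company through
# other people's uploaded address books.
uploaded_address_books = {
    "user_a": {"Mum": "+353-87-000-0001", "Carissa": "+353-87-555-0199"},
    "user_b": {"Carissa V.": "+353-87-555-0199", "Gym": "+353-1-000-0002"},
}

# The company never interacted with "Carissa", yet it now holds her number
# and can infer who she knows, because two users shared their contacts.
shadow_profiles = {}
for uploader, contacts in uploaded_address_books.items():
    for label, number in contacts.items():
        shadow_profiles.setdefault(number, []).append((uploader, label))

print(shadow_profiles["+353-87-555-0199"])
# [('user_a', 'Carissa'), ('user_b', 'Carissa V.')]
```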

The info is here.

Thursday, November 7, 2019

Digital Ethics and the Blockchain

Dan Blum
ISACA, Volume 2, 2018

Here is an excerpt:

Integrity and Transparency

Integrity and transparency are core values for delivering trust to prosperous markets. Blockchains can provide immutable land title records to improve property rights and growth in small economies, such as Honduras. In smart power grids, blockchain-enabled meters can replace inefficient centralized record-keeping systems for transparent energy trading. Businesses can keep transparent records for product provenance, production, distribution and sales. Forward-thinking governments are exploring use cases through which transparent, immutable blockchains could facilitate a lighter, more effective regulatory touch to holding industry accountable.

However, trade secrets and personal information should not be published openly on blockchains. Blockchain miners may reorder transactions to increase fees or delay certain business processes at the expense of others. Architects must leaven accountability and transparency with confidentiality and privacy. Developers (or regulators) should sometimes add a human touch to smart contracts to avoid rigid systems operating without any consumer safeguards.
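One common pattern for keeping personal data off the chain while preserving integrity, sketched here with invented data, is to publish only a salted hash of a record on-chain and hold the record itself off-chain. This is my own illustrative sketch, not taken from the ISACA article.

```python
# Sketch: publish a commitment (salted hash) on-chain; keep the record off-chain.
import hashlib
import secrets

record = b'{"parcel": "HN-0042", "owner": "Jane Doe"}'  # stored off-chain
salt = secrets.token_bytes(16)                          # prevents guessing common records

commitment = hashlib.sha256(salt + record).hexdigest()
# Only `commitment` would be written to the blockchain.
print(commitment)

# Later, an auditor given the record and salt can verify it matches the chain.
assert hashlib.sha256(salt + record).hexdigest() == commitment
```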

The info is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
vox.com
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs equipped with a “write” function can enable new forms of brainwashing, theoretically enabling all sorts of people to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Friday, September 27, 2019

Nudging Humans

Brett M. Frischmann
Villanova University - School of Law
Originally published August 1, 2019

Abstract

Behavioral data can and should inform the design of private and public choice architectures. Choice architects should steer people toward outcomes that make them better off (according to their own interests, not the choice architects’) but leave it to the people being nudged to choose for themselves. Libertarian paternalism can and should provide ethical constraints on choice architects. These are the foundational principles of nudging, the ascendant social engineering agenda pioneered by Nobel Prize winning economist Richard Thaler and Harvard law professor Cass Sunstein.

The foundation bears tremendous weight. Nudging permeates private and public institutions worldwide. It creeps into the design of an incredible number of human-computer interfaces and affects billions of choices daily. Yet the foundation has deep cracks.

This critique of nudging exposes those hidden fissures. It aims at the underlying theory and agenda, rather than one nudge or another, because that is where micro meets macro, where dynamic longitudinal impacts on individuals and society need to be considered. Nudging theorists and practitioners need to better account for the longitudinal effects of nudging on the humans being nudged, including malleable beliefs and preferences as well as various capabilities essential to human flourishing. The article develops two novel and powerful criticisms of nudging, one focused on nudge creep and another based on normative myopia. It explores these fundamental flaws in the nudge agenda theoretically and through various examples and case studies, including electronic contracting, activity tracking in schools, and geolocation tracking controls on an iPhone.

The paper is here.

Thursday, September 26, 2019

Business and the Ethical Implications of Technology

Martin, K., Shilton, K. & Smith, J.
J Bus Ethics (2019).
https://doi.org/10.1007/s10551-019-04213-9

Abstract

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. In this introduction, we, first, identify themes the symposium articles share and discuss how the set of articles illuminate diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

The Introduction is here.

There are several other articles related to this introduction.