Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Innovation.

Tuesday, February 9, 2021

Neanderthals And Humans Were at War For Over 100,000 Years, Evidence Shows

Nicholas Longrich
The Conversation
Originally posted November 3, 2020

Here is an excerpt:

Why else would we take so long to leave Africa? Not because the environment was hostile but because Neanderthals were already thriving in Europe and Asia.

It's exceedingly unlikely that modern humans met the Neanderthals and decided to just live and let live. If nothing else, population growth inevitably forces humans to acquire more land, to ensure sufficient territory to hunt and forage food for their children.

But an aggressive military strategy is also a good evolutionary strategy.

Instead, for thousands of years, we must have tested their fighters, and for thousands of years, we kept losing. In weapons, tactics, strategy, we were fairly evenly matched.

Neanderthals probably had tactical and strategic advantages. They'd occupied the Middle East for millennia, doubtless gaining intimate knowledge of the terrain, the seasons, how to live off the native plants and animals.

In battle, their massive, muscular builds must have made them devastating fighters in close-quarters combat. Their huge eyes likely gave Neanderthals superior low-light vision, letting them manoeuvre in the dark for ambushes and dawn raids.

Sapiens victorious

Finally, the stalemate broke, and the tide shifted. We don't know why. It's possible the invention of superior ranged weapons – bows, spear-throwers, throwing clubs – let lightly-built Homo sapiens harass the stocky Neanderthals from a distance using hit-and-run tactics.

Or perhaps better hunting and gathering techniques let sapiens feed bigger tribes, creating numerical superiority in battle.

Even after primitive Homo sapiens broke out of Africa 200,000 years ago, it took over 150,000 years to conquer Neanderthal lands. In Israel and Greece, archaic Homo sapiens took ground only to fall back against Neanderthal counteroffensives, before a final offensive by modern Homo sapiens, starting 125,000 years ago, eliminated them.

Friday, November 20, 2020

When Did We Become Fully Human? What Fossils and DNA Tell Us About the Evolution of Modern Intelligence

Nick Longrich
singularityhub.com
Originally posted October 18, 2020

Here are two excerpts:

Because the fossil record is so patchy, fossils provide only minimum dates. Human DNA suggests even earlier origins for modernity. Comparing genetic differences between DNA in modern people and ancient Africans, it’s estimated that our ancestors lived 260,000 to 350,000 years ago. All living humans descend from those people, suggesting that we inherited the fundamental commonalities of our species, our humanity, from them.
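The excerpt doesn't spell out how such dates are derived, but estimates of this kind generally rest on molecular-clock reasoning: divergence time is roughly the genetic distance between two lineages divided by twice the substitution rate, since mutations accumulate along both branches. A schematic sketch of that arithmetic, using illustrative placeholder numbers rather than figures from the study:

```python
# Schematic molecular-clock estimate: time since a common ancestor is
# approximately the genetic distance between two lineages divided by
# twice the substitution rate (changes accumulate on both branches).
# All numbers here are illustrative placeholders, not the study's data.

def divergence_time_years(substitutions_per_site: float,
                          rate_per_site_per_year: float) -> float:
    """Estimate years since two lineages split under a strict clock."""
    return substitutions_per_site / (2.0 * rate_per_site_per_year)

if __name__ == "__main__":
    d = 0.00025   # observed pairwise genetic distance (per site)
    mu = 0.5e-9   # assumed substitution rate (per site per year)
    print(f"Estimated divergence: {divergence_time_years(d, mu):,.0f} years")
    # -> Estimated divergence: 250,000 years
```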

All their descendants—Bantu, Berber, Aztec, Aboriginal, Tamil, San, Han, Maori, Inuit, Irish—share certain peculiar behaviors absent in other great apes. All human cultures form long-term pair bonds between men and women to care for children. We sing and dance. We make art. We preen our hair, adorn our bodies with ornaments, tattoos and makeup.

We craft shelters. We wield fire and complex tools. We form large, multigenerational social groups with dozens to thousands of people. We cooperate to wage war and help each other. We teach, tell stories, trade. We have morals, laws. We contemplate the stars, our place in the cosmos, life’s meaning, what follows death.

(cut)

First, we journeyed out of Africa, occupying more of the planet. There were then simply more humans to invent, increasing the odds of a prehistoric Steve Jobs or Leonardo da Vinci. We also faced new environments in the Middle East, the Arctic, India, Indonesia, with unique climates, foods and dangers, including other human species. Survival demanded innovation.

Many of these new lands were far more habitable than the Kalahari or the Congo. Climates were milder, but Homo sapiens also left behind African diseases and parasites. That let tribes grow larger, and larger tribes meant more heads to innovate and remember ideas, more manpower, and better ability to specialize. Population drove innovation.

Thursday, January 9, 2020

Artificial Intelligence Is Superseding Well-Paying Wall Street Jobs

Jack Kelly
forbes.com
Originally posted December 10, 2019

Here is an excerpt:

Compliance people run the risk of being replaced too. “As bad actors become more sophisticated, it is vital that financial regulators have the funding resources, technological capacity and access to AI and automated technologies to be a strong and effective cop on the beat,” said Martina Rejsjö, head of Nasdaq Surveillance North America Equities.

Nasdaq, a tech-driven trading platform, has an associated regulatory body that offers over 40 different algorithms, using 35,000 parameters, to spot possible market abuse and manipulation in real time. “The massive and, in many cases, exponential growth in market data is a significant challenge for surveillance professionals,” Rejsjö said. “Market abuse attempts have become more sophisticated, putting more pressure on surveillance teams to find the proverbial needle in the data haystack.” In layman's terms, she believes that the future is in tech overseeing trading activities, as the human eye is unable to keep up with the rapid-fire, sophisticated global trading dominated by algorithms.
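Nasdaq's actual surveillance algorithms are proprietary, but the pattern Rejsjö describes, automated screens flagging statistically unusual activity for a human analyst to review, can be illustrated with a toy example. A minimal sketch, assuming an invented volume feed and an arbitrary threshold, with one simple rule standing in for the dozens a real system runs:

```python
# A toy surveillance screen: flag intervals whose trade volume deviates
# sharply from a rolling baseline. Real systems combine dozens of such
# algorithms; this single z-score rule is only an illustration, and the
# threshold below is an arbitrary choice.
from statistics import mean, stdev

def flag_anomalies(volumes: list[float], window: int = 30,
                   z_threshold: float = 4.0) -> list[int]:
    """Return indices whose volume z-score against the trailing
    window exceeds the threshold (candidates for analyst review)."""
    flagged = []
    for i in range(window, len(volumes)):
        baseline = volumes[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (volumes[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Example: a burst at index 35 stands out against a quiet baseline.
volumes = [100.0 + (i % 5) for i in range(35)] + [900.0]
print(flag_anomalies(volumes))  # -> [35]
```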

When people say not to worry, that’s the precise time to worry. Companies—whether they are McDonald’s, introducing self-serve kiosks and firing hourly workers to cut costs, or top-tier investment banks that rely on software instead of traders to make million-dollar bets on the stock market—will continue to implement technology and downsize people in an effort to enhance profits and cut expenses. This trend will be hard to stop, and it will have serious consequences for workers at all levels and salaries.

The info is here.

Friday, December 13, 2019

The Ethical Dilemma at the Heart of Big Tech Companies

Emanuel Moss and Jacob Metcalf
Harvard Business Review
Originally posted November 14, 2019

Here is an excerpt:

The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures:

Meritocracy: Although originally coined as a derisive term in satirical science fiction by British sociologist Michael Young, meritocracy infuses everything in Silicon Valley from hiring practices to policy positions, and retroactively justifies the industry’s power in our lives. As such, ethics is often framed with an eye toward smarter, better, and faster approaches, as if the problems of the tech industry can be addressed through those virtues. Given this, it is not surprising that many within the tech industry position themselves as the actors best suited to address ethical challenges, rather than less technically inclined stakeholders, including elected officials and advocacy groups. In our interviews, this manifested in relying on engineers to use their personal judgment by “grappling with the hard questions on the ground,” trusting them to discern and to evaluate the ethical stakes of their own products. While there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and “thinking hard” about the potential harms of a product in the real world is not the same as thoroughly understanding how someone (whose life is very different from a software engineer’s) might be affected by things like predictive policing or facial recognition technology, as obvious examples. Ethics owners find themselves being pulled between technical staff that assert generalized competence over many domains and their own knowledge that ethics is a specialized domain that requires deep contextual understanding.

The info is here.

Monday, June 3, 2019

Regulation of AI as a Means to Power

Daniel Faggella
emerj.com
Last updated May 5, 2019

Here is an excerpt:

The most fundamental principle of power and artificial intelligence is data dominance: Whoever controls the most valuable data within a space or sector will be able to make a better product or solve a better problem. Whoever solves the problem best will win business and win revenue, and whoever wins customers wins more data.

That cycle continues and you have the tech giants of today (a topic for a later AI Power essay).

No company is likely to get more general search queries than Google, and so people will not likely use any search engine other than Google – and so Google gets more searches (data) to train with, and gets an even better search product. Eventually: Search monopoly.

No company is likely to generate more general eCommerce purchases than Amazon, and so people will not likely use any online store other than Amazon – and so Amazon gets more purchases and customers (data) to train with, and gets an even better eCommerce product. Eventually: eCommerce monopoly.

There are 3-4 other well-known examples (Facebook, to some extent Netflix, Uber, etc), but I’ll leave it at two. AI may change to become less reliant on data collection, and data dominance may eventually be eclipsed by some other power dynamic, but today it’s the way the game is won.
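The flywheel Faggella describes is simple enough to simulate. A toy sketch: users flow disproportionately toward the better product, data accrues with users, and data improves the product. The functional forms and constants below are illustrative assumptions, not a calibrated model of any real market.

```python
# Toy simulation of the "data dominance" flywheel: more data -> better
# product -> disproportionate share of new users -> more data. The
# exponent and constants are illustrative assumptions only.

def simulate(rounds: int = 15, new_users: float = 1000.0) -> list[float]:
    data = [1.2, 1.0]   # competitor A starts with a slight data lead
    share = [0.5, 0.5]
    for _ in range(rounds):
        # Users disproportionately choose the better product (exponent > 1).
        attractiveness = [d ** 2 for d in data]
        total = sum(attractiveness)
        share = [a / total for a in attractiveness]
        # Each new user generates data for the product they chose.
        data = [d + s * new_users for d, s in zip(data, share)]
    return share

share = simulate()
print(f"A: {share[0]:.1%}  B: {share[1]:.1%}")  # the small lead compounds
```

Under these assumptions, a 20 percent head start in data compounds into near-total market share within a handful of rounds, which is the winner-take-all outcome the cycle implies.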

I’m not aiming to oversimplify the business models of these complex companies, nor am I disparaging these companies as being “bad”. Companies like Google are no more filled with “bad” people than churches, law firms, or AI ethics committees.

The info is here.

Wednesday, May 8, 2019

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are forgoing formulating hypotheses in favor of allowing data to make inferences on particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Friday, April 26, 2019

EU beats Google to the punch in setting strategy for ethical A.I.

Elizabeth Schulze
www.CNBC.com
Originally posted April 8, 2019

Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving “trustworthy” artificial intelligence.

On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology.

“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. “It is only with trust that our society can fully benefit from technologies.”

The EU defines artificial intelligence as systems that show “intelligent behavior,” allowing them to analyze their environment and perform tasks with some degree of autonomy. AI is already transforming businesses in a variety of functions, like automating repetitive tasks and analyzing troves of data. But the technology raises a series of ethical questions, such as how to ensure algorithms are programmed without bias and how to hold AI accountable if something goes wrong.

The info is here.

Thursday, April 18, 2019

Google cancels AI ethics board in response to outcry

Kelsey Piper
www.Vox.com
Originally published April 4, 2019

This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.

Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.

The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.

Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions in the company over the use of the company’s AI for military applications.

The info is here.

Monday, April 8, 2019

Mark Zuckerberg And The Tech World Still Do Not Understand Ethics

Derek Lidow
Forbes.com
Originally posted March 11, 2018

Here is an excerpt:

Expectations for technology startups encourage expedient, not ethical, decision making. 

As people in the industry are fond of saying, the tech world moves at “lightspeed.” That includes the pace of innovation, the rise and fall of markets, the speed of customer adoption, the evolution of business models and the lifecycles of companies. Decisions must be made quickly and leaders too often choose the most expedient path regardless of whether it is safe, legal or ethical.

This “move fast and break things” ethos is embodied in practices like working toward a minimum viable product (MVP), helping to establish a bias toward cutting corners. In addition, many founders look for CFOs who are “tech trained”—that is, people accustomed to a world where time and money wait for no one—as opposed to a seasoned financial officer with good accounting chops and a moral compass.

The host of scandals at Zenefits, a cloud-based provider of employee-benefits software to small businesses and once one of the most promising of Silicon Valley startups, had its origins in the shortcuts the company took in order to meet unreasonably high expectations for growth. The founder apparently created software that helped employees cheat on California’s online broker license course. As the company expanded rapidly, it began hiring people with little experience in the highly regulated health insurance industry. As the company moved from small businesses to larger businesses, the strain on its software increased. Instead of developing appropriate software, the company hired more people to manually take up the slack where the existing software failed. When the founder was asked by an interviewer before the scandals why he was so intent on expanding so rapidly, he replied, “Slowing down doesn’t feel like something I want to do.”

The info is here.

Wednesday, March 27, 2019

The Value Of Ethics And Trust In Business ... With Artificial Intelligence

Stephen Ibaraki
Forbes.com
Originally posted March 2, 2019

Here is an excerpt:

Contributing positively to society and driving positive change have become a growing part of discourse around the world, reaching all sectors and disruptive technologies such as Artificial Intelligence (AI).

With more than $20 trillion USD in wealth transferring from baby boomers to millennials, and with millennials’ focus on the environment and social impact, this trend will accelerate. Business is aware and taking the lead in this movement of advancing the human condition in a responsible and ethical manner. Values-based leadership, diversity, inclusion, investment and long-term commitment are the multi-stakeholder commitments going forward.

“Over the last 12 years, we have repeatedly seen that those companies who focus on transparency and authenticity are rewarded with the trust of their employees, their customers and their investors. While negative headlines might grab attention, the companies who support the rule of law and operate with decency and fair play around the globe will always succeed in the long term,” explained Ethisphere CEO, Timothy Erblich. “Congratulations to all of the 2018 honorees.”

The info is here.

Monday, January 28, 2019

Second woman carrying gene-edited baby, Chinese authorities confirm

Agence France-Presse
Originally posted January 21, 2019


A second woman became pregnant during the experiment to create the world’s first genetically edited babies, Chinese authorities have confirmed, as the researcher behind the claim faces a police investigation.

He Jiankui shocked the scientific community last year after announcing he had successfully altered the genes of twin girls born in November to prevent them contracting HIV.

He had told a human genome forum in Hong Kong there had been “another potential pregnancy” involving a second couple.

A provincial government investigation has since confirmed the existence of the second mother and that the woman was still pregnant, the official Xinhua news agency reported.

The expectant mother and the twin girls from the first pregnancy will be put under medical observation, an investigator told Xinhua.

The info is here.

Tuesday, December 11, 2018

Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis?

Nicole Martinez-Martin, Laura B. Dunn, and Laura Weiss Roberts
AMA J Ethics. 2018;20(9):E804-811.
doi: 10.1001/amajethics.2018.804.

Abstract

Machine learning is a method for predicting clinically relevant variables, such as opportunities for early intervention, potential treatment response, prognosis, and health outcomes. This commentary examines the following ethical questions about machine learning in a case of a patient with new onset psychosis: (1) When is clinical innovation ethically acceptable? (2) How should clinicians communicate with patients about the ethical issues raised by a machine learning predictive model?

(cut)

Conclusion

In order to implement the predictive tool in an ethical manner, Dr K will need to carefully consider how to give appropriate information—in an understandable manner—to patients and families regarding use of the predictive model. In order to maximize benefits from the predictive model and minimize risks, Dr K and the institution as a whole will need to formulate ethically appropriate procedures and protocols surrounding the instrument. For example, implementation of the predictive tool should consider the ability of a physician to override the predictive model in support of ethically or clinically important variables or values, such as beneficence. Such measures could help realize the clinical application potential of machine learning tools, such as this psychosis prediction model, to improve the lives of patients.
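The commentary stays at the level of policy, but the override safeguard it recommends maps onto a familiar human-in-the-loop pattern. A hypothetical sketch, with all names, fields, and thresholds invented for illustration: the model's estimate is advisory, and a documented clinician decision always takes precedence.

```python
# Hypothetical human-in-the-loop wrapper for a prognostic model, in the
# spirit of the safeguards the commentary recommends: the algorithm's
# output is advisory, and a clinician's documented judgment can always
# override it. All names and thresholds here are invented illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    model_risk: float              # model's estimated risk, 0.0-1.0
    model_advice: str              # advisory suggestion from the model
    final_decision: str            # what is actually acted on
    override_rationale: Optional[str] = None  # required when overriding

def recommend(model_risk: float,
              clinician_decision: Optional[str] = None,
              rationale: Optional[str] = None) -> Recommendation:
    advice = "early intervention" if model_risk >= 0.3 else "monitor"
    if clinician_decision is not None:
        if rationale is None:
            raise ValueError("An override must document its rationale.")
        return Recommendation(model_risk, advice, clinician_decision, rationale)
    return Recommendation(model_risk, advice, advice)

# Example: Dr K overrides a "monitor" suggestion on clinical grounds.
rec = recommend(0.2, clinician_decision="early intervention",
                rationale="family history and functional decline")
print(rec.final_decision)  # early intervention
```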

Saturday, September 22, 2018

The Business Case for Curiosity

Francesca Gino
Harvard Business Review
Originally posted in the September–October 2018 issue

Here are two excerpts:

The Benefits of Curiosity

New research reveals a wide range of benefits for organizations, leaders, and employees.

Fewer decision-making errors.

In my research I found that when our curiosity is triggered, we are less likely to fall prey to confirmation bias (looking for information that supports our beliefs rather than for evidence suggesting we are wrong) and to stereotyping people (making broad judgments, such as that women or minorities don’t make good leaders). Curiosity has these positive effects because it leads us to generate alternatives.

(cut)

It’s natural to concentrate on results, especially in the face of tough challenges. But focusing on learning is generally more beneficial to us and our organizations, as some landmark studies show. For example, when U.S. Air Force personnel were given a demanding goal for the number of planes to be landed in a set time frame, their performance decreased. Similarly, in a study led by Southern Methodist University’s Don VandeWalle, sales professionals who were naturally focused on performance goals, such as meeting their targets and being seen by colleagues as good at their jobs, did worse during a promotion of a product (a piece of medical equipment priced at about $5,400) than reps who were naturally focused on learning goals, such as exploring how to be a better salesperson. That cost them, because the company awarded a bonus of $300 for each unit sold.

A body of research demonstrates that framing work around learning goals (developing competence, acquiring skills, mastering new situations, and so on) rather than performance goals (hitting targets, proving our competence, impressing others) boosts motivation. And when motivated by learning goals, we acquire more-diverse skills, do better at work, get higher grades in college, do better on problem-solving tasks, and receive higher ratings after training. Unfortunately, organizations often prioritize performance goals.

The information is here.

Friday, September 21, 2018

Why Social Science Needs Evolutionary Theory

Christine Legare
Nautilus.com
Originally posted June 15, 2018

Here is an excerpt:

Human cognition and behavior is the product of the interaction of genetic and cultural evolution. Gene-culture co-evolution has allowed us to adapt to highly diverse ecologies and to produce cultural adaptations and innovations. It has also produced extraordinary cultural diversity. In fact, cultural variability is one of our species’ most distinctive features. Humans display a wider repertoire of behaviors that vary more within and across groups than any other animal. Social learning enables cultural transmission, so the psychological mechanisms supporting it should be universal. These psychological mechanisms must also be highly responsive to diverse developmental contexts and cultural ecologies.

Take the conformity bias. It is a universal proclivity of all human psychology—even very young children imitate the behavior of others and conform to group norms. Yet beliefs about conformity vary substantially between populations. Adults in some populations are more likely to associate conformity with children’s intelligence, whereas others view creative non-conformity as linked with intelligence. Psychological adaptations for social learning, such as conformity bias, develop in complex and diverse cultural ecologies that work in tandem to shape the human mind and generate cultural variation.

The info is here.

Friday, June 29, 2018

Business Class

John Benjamin
The New Republic
Originally posted May 14, 2018

Students in the country’s top MBA programs pride themselves on their open-mindedness. This is, after all, what they’ve been sold: American business schools market their ability to train the kinds of broadly competent, intellectually receptive people that will help solve the problems of a global economy.

But in truth, MBA programs are not the open forums advertised in admissions brochures. Behind this façade, they are ideological institutions committed to a strict blend of social liberalism and economic conservatism. Though this fusion may be the favorite of American elites—the kinds of people who might repeat that tired line “I’m socially liberal but fiscally conservative”—it takes a strange form in business school. Elite business schooling is tailored to promote two types of solutions to the big problems that arise in society: either greater innovation or freer markets. Proposals other than what’s essentially more business are brushed aside, or else patched over with a type of liberal politics that’s heavy on rhetorical flair but light on relevance outside privileged circles.

It is in this closed ideological loop that we wannabe masters of the universe often struggle to think clearly about the common good or what it takes to achieve it.

The information is here.

Tuesday, May 29, 2018

Ethics debate as pig brains kept alive without a body

Pallab Ghosh
BBC.com
Originally published April 27, 2018

Researchers at Yale University have restored circulation to the brains of decapitated pigs, and kept the organs alive for several hours.

Their aim is to develop a way of studying intact human brains in the lab for medical research.

Although there is no evidence that the animals were aware, there is concern that some degree of consciousness might have remained.

Details of the study were presented at a brain science ethics meeting held at the National Institutes of Health (NIH) in Bethesda in Maryland on 28 March.

The work, by Prof Nenad Sestan of Yale University, was discussed as part of an NIH investigation of ethical issues arising from neuroscience research in the US.

Prof Sestan explained that he and his team experimented on more than 100 pig brains.

The information is here.

Saturday, March 10, 2018

Universities Rush to Roll Out Computer Science Ethics Courses

Natasha Singer
The New York Times
Originally posted February 12, 2018

Here is an excerpt:

“Technology is not neutral,” said Professor Sahami, who formerly worked at Google as a senior research scientist. “The choices that get made in building technology then have social ramifications.”

The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.

“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”

Computer science programs are required to make sure students have an understanding of ethical issues related to computing in order to be accredited by ABET, a global accreditation group for university science and engineering programs. Some computer science departments have folded the topic into a broader class, and others have stand-alone courses.

But until recently, ethics did not seem relevant to many students.

The article is here.

Friday, March 9, 2018

The brain as artificial intelligence: prospecting the frontiers of neuroscience

Fuller, S.
AI & Soc (2018).
https://doi.org/10.1007/s00146-018-0820-1

Abstract

This article explores the proposition that the brain, normally seen as an organ of the human body, should be understood as a biologically based form of artificial intelligence, in the course of which the case is made for a new kind of ‘brain exceptionalism’. After noting that such a view was generally assumed by the founders of AI in the 1950s, the argument proceeds by drawing on the distinction between science—in this case neuroscience—adopting a ‘telescopic’ or a ‘microscopic’ orientation to reality, depending on how it regards its characteristic investigative technologies. The paper concludes by recommending a ‘microscopic’ yet non-reductionist research agenda for neuroscience, in which the brain is seen as an underutilised organ whose energy efficiency is likely to outstrip that of the most powerful supercomputers for the foreseeable future.

The article is here.

Wednesday, September 20, 2017

Companies should treat cybersecurity as a matter of ethics

Thomas Lee
The San Francisco Chronicle
Originally posted September 2, 2017

Here is an excerpt:

An ethical code will force companies to rethink how they approach research and development. Instead of making stuff first and then worrying about data security later, companies will start from the premise that they need to protect consumer privacy before they start designing new products and services, Harkins said.

There is precedent for this. Many professional organizations like the American Medical Association and American Bar Association require members to follow a code of ethics. For example, doctors must pledge above all else not to harm a patient.

A code of ethics for cybersecurity will no doubt slow the pace of innovation, said Maurice Schweitzer, a professor of operations, information and decisions at the University of Pennsylvania’s Wharton School.

Ultimately, though, following such a code could boost companies’ reputations, Schweitzer said. Given the increasing number and severity of hacks, consumers will pay a premium for companies dedicated to security and privacy from the get-go, he said.

In any case, what’s wrong with taking a pause so we can catch our breath? The ethical quandaries technology poses to mankind are only going to get more complex as we increasingly outsource our lives to thinking machines.

That’s why a code of ethics is so important. Technology may come and go, but right and wrong never changes.

The article is here.