Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Transparency.

Sunday, January 12, 2020

Bias in algorithmic filtering and personalization

Engin Bozdag
Ethics Inf Technol (2013) 15: 209.

Abstract

Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.
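Editor's note: the gatekeeping model described above is easy to picture in code. Below is a minimal, hypothetical Python sketch (the item names, weights, and boost table are all invented for illustration) of how a personalization algorithm can blend algorithmic relevance with interpersonal signals while still accepting manual, human-set adjustments even while it is operational, which is exactly the mix of technical and human bias the paper analyzes.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    base_relevance: float      # algorithmic relevance score in [0, 1]
    friend_shares: int = 0     # interpersonal signal used for personalization

def personalized_score(item, social_weight, editorial_boost):
    """Blend algorithmic relevance, social signals, and manual adjustments.

    `editorial_boost` is a human-maintained table of per-item tweaks that
    applies even while the algorithm is running -- the manual influence on
    filtering that the paper's gatekeeping model describes."""
    score = item.base_relevance + social_weight * item.friend_shares
    return score + editorial_boost.get(item.title, 0.0)

items = [
    Item("Local election coverage", base_relevance=0.7),
    Item("Celebrity gossip", base_relevance=0.5, friend_shares=9),
]
boosts = {"Celebrity gossip": -0.2}   # a human operator demotes one item
ranked = sorted(items, key=lambda i: personalized_score(i, 0.05, boosts),
                reverse=True)
print([i.title for i in ranked])   # ['Celebrity gossip', 'Local election coverage']
```

In this toy example the socially boosted item still outranks the more relevant one even after a human demotes it, showing how both the technical and the human layer shape what a user ultimately sees.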

From the Discussion:

Today information seeking services can use interpersonal contacts of users in order to tailor information and to increase relevancy. This not only introduces bias as our model shows, but it also has serious implications for other human values, including user autonomy, transparency, objectivity, serendipity, privacy and trust. These values introduce ethical questions. Do private companies that are offering information services have a social responsibility, and should they be regulated? Should they aim to promote values that the traditional media was adhering to, such as transparency, accountability and answerability? How can a value such as transparency be promoted in an algorithm? How should we balance between autonomy and serendipity and between explicit and implicit personalization? How should we define serendipity? Should relevancy be defined as what is popular in a given location or by what our primary groups find interesting? Can algorithms truly replace human filterers?

The info can be downloaded here.

Wednesday, January 8, 2020

Many Public Universities Refuse to Reveal Professors’ Conflicts of Interest

Annie Waldman and David Armstrong
The Chronicle of Higher Education and ProPublica
Originally posted December 6, 2019

Here is an excerpt:

All too often, what’s publicly known about faculty members’ outside activities, even those that could influence their teaching, research, or public-policy views, depends on where they teach. Academic conflicts of interest elude scrutiny because transparency varies from one university and one state to the next. ProPublica discovered those inconsistencies over the past year as we sought faculty outside-income forms from at least one public university in all 50 states.

About 20 state universities complied with our requests. The rest didn't, often citing exemptions from public-information laws for personnel records, or offering to provide the documents only if ProPublica first paid thousands of dollars. And even among those that released at least some records, there’s a wide range in what types of information are collected and disclosed, and whether faculty members actually fill out the forms as required. Then there's the universe of private universities that aren't subject to public-records laws and don't disclose professors’ potential conflicts at all. While researchers are supposed to acknowledge industry ties in scientific journals, those caveats generally don’t list compensation amounts.

We've accumulated by far the largest collection of university faculty and staff conflict-of-interest reports available anywhere, with more than 29,000 disclosures from state schools, which you can see in our new Dollars for Profs database. But there are tens of thousands that we haven't been able to get from other public universities, and countless more from private universities.

Sheldon Krimsky, a bioethics expert and professor of urban and environmental planning and policy at Tufts University, said that the fractured disclosure landscape deprives the public of key information for understanding potential bias in research. “Financial conflicts of interest influence outcomes,” he said. “Even if the researchers are honorable people, they don’t know how the interests affect their own research. Even honorable people can’t figure out why they have a predilection toward certain views. It’s because they internalize the values of people from whom they are getting funding, even if it’s not on the surface."

The info is here.

Monday, December 16, 2019

Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation

Ishowo-Oloko, F., Bonnefon, J., Soroye, Z. et al.
Nat Mach Intell 1, 517–521 (2019)
doi:10.1038/s42256-019-0113-5

Abstract

Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
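Editor's note: the experimental setup lends itself to a toy simulation. The sketch below is not the authors' protocol; the cooperation probabilities are invented solely to illustrate how a prior bias against disclosed bots can depress payoffs in a repeated prisoner's dilemma.

```python
import random

# Standard prisoner's dilemma payoffs for the row player: T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def bot_move(human_history):
    """A tit-for-tat bot: cooperates first, then mirrors the human's last move."""
    return "C" if not human_history or human_history[-1] == "C" else "D"

def human_move(believes_bot):
    """Stylized human: less willing to cooperate when told the partner is a bot.
    The probabilities are invented for illustration only."""
    p_cooperate = 0.4 if believes_bot else 0.7
    return "C" if random.random() < p_cooperate else "D"

def average_payoff(believes_bot, rounds=1000):
    history, total = [], 0
    for _ in range(rounds):
        h = human_move(believes_bot)
        b = bot_move(history)
        total += PAYOFF[(h, b)]   # the human's payoff this round
        history.append(h)
    return total / rounds

random.seed(1)
print("human payoff when bot is disclosed:  ", average_payoff(True))
print("human payoff when bot is undisclosed:", average_payoff(False))
```

Even in this crude model, the disclosed-bot condition yields lower average payoffs, which is the transparency-efficiency tradeoff the paper documents experimentally.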

Monday, December 9, 2019

Escaping Skinner's Box: AI and the New Era of Techno-Superstition

John Danaher
Philosophical Disquisitions
Originally posted October 10, 2019

Here is an excerpt:

The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the workers, the more they either degraded its performance or limited their own agency.

It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI; fair AI etc) to create an illusion of control.

(cut)

These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.

The info is here.

Tuesday, December 3, 2019

AI Ethics is All About Power

Khari Johnson
venturebeat.com
Originally published November 11, 2019


Here is an excerpt:

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has only made incremental progress, and funding for startups with Latinx or black founders still lags behind those for white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

The info is here.

Editor's Note: The article covers a huge swath of information.

Monday, December 2, 2019

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Cade Metz
The New York Times
Originally published November 11, 2019

Here is the conclusion:

“This is hard. You need a lot of time and care,” he said. “We found an obvious bias. But how many others are in there?”

Dr. Bohannon said computer scientists must develop the skills of a biologist. Much as a biologist strives to understand how a cell works, software engineers must find ways of understanding systems like BERT.
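Editor's note: one concrete way engineers probe a model like BERT is with fill-in-the-blank templates. The sketch below uses the Hugging Face transformers library (assumed installed, with a connection to download the model); the probe sentence is our own invention, not Dr. Bohannon's.

```python
# Assumes: pip install transformers torch
from transformers import pipeline

# Fill-mask probing: compare the model's completions for contrasting subjects.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for subject in ("man", "woman"):
    predictions = unmasker(f"The {subject} worked as a [MASK].", top_k=5)
    print(subject, "->", [p["token_str"] for p in predictions])
```

Systematic differences between the two completion lists are the kind of "obvious bias" the article describes, and a hint at how many subtler ones may remain.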

In unveiling the new version of its search engine last month, Google executives acknowledged this phenomenon. And they said they tested their systems extensively with an eye toward removing any bias.

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

The whole article is here.

Thursday, November 7, 2019

Digital Ethics and the Blockchain

Dan Blum
ISACA, Volume 2, 2018

Here is an excerpt:

Integrity and Transparency

Integrity and transparency are core values for delivering trust to prosperous markets. Blockchains can provide immutable land title records to improve property rights and growth in small economies, such as Honduras. In smart power grids, blockchain-enabled meters can replace inefficient centralized record-keeping systems for transparent energy trading. Businesses can keep transparent records for product provenance, production, distribution and sales. Forward-thinking governments are exploring use cases through which transparent, immutable blockchains could facilitate a lighter, more effective regulatory touch to holding industry accountable.

However, trade secrets and personal information should not be published openly on blockchains. Blockchain miners may reorder transactions to increase fees or delay certain business processes at the expense of others. Architects must leaven accountability and transparency with confidentiality and privacy. Developers (or regulators) should sometimes add a human touch to smart contracts to avoid rigid systems operating without any consumer safeguards.

The info is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

Monday, October 7, 2019

A Theranos Whistleblower’s Mission to Make Tech Ethical

Brian Gallagher
ethicalsystems.org
Originally published September 12, 2019

Here is an excerpt from the interview:

Is Theranos emblematic of a cultural trend or an anomaly of unethical behavior?

My initial impression was that Theranos was some very bizarre one-off scandal. But as I started to review thousands of startups, I realized that there is quite a lot of unethical behavior in tech. The stories may not be quite as grandiose or large-scale as Theranos’, but it was really common to see companies lie to investors, mislead customers, and create abusive work environments. Many founders lacked an understanding of how their products could have negative impacts on society. The frustration of seeing the same mistakes happen over and over again made it clear that something needed to be done about this.

How has your experience at Theranos helped shape your understanding of the link between ethics and culture?

If the company had effective and ethically mature leadership, the company may not have used underdeveloped technology on patients without their consent. If the board had been constructed in a way to properly challenge the product, perhaps the product would have been properly developed. If employees weren't scared and disillusioned, perhaps constructive conversations about novel solutions could have arisen. Rarely are these scandals a random surprise or the result of an unexpected disaster. They are often an accumulation of poor ethical decisions. Having a culture where, at every stakeholder level, people can speak up and be properly considered when they see something wrong is crucial. It makes the difference in building ethical organizations and preventing large disastrous events from happening.

The info is here.

Thursday, September 26, 2019

Business and the Ethical Implications of Technology

Martin, K., Shilton, K. & Smith, J.
J Bus Ethics (2019).
https://doi.org/10.1007/s10551-019-04213-9

Abstract

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. In this introduction, we, first, identify themes the symposium articles share and discuss how the set of articles illuminate diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

The Introduction is here.

There are several other articles related to this introduction.

Friday, September 20, 2019

The crossroads between ethics and technology

Tehilla Shwartz Altshuler
Techcrunch.com
Originally posted August 6, 2019

Here is an excerpt:

The first relates to ethics. If anything is clear today in the world of technology, it is the need to include ethical concerns when developing, distributing, implementing and using technology. This is all the more important because in many domains there is no regulation or legislation to provide a clear definition of what may and may not be done. There is nothing intrinsic to technology that requires that it pursue only good ends. The mission of our generation is to ensure that technology works for our benefit and that it can help realize social ideals. The goal of these new technologies should not be to replicate power structures or other evils of the past. 

Startup nation should focus on fighting crime and improving autonomous vehicles and healthcare advancements. It shouldn’t be running extremist groups on Facebook, setting up “bot farms” and fakes, selling attackware and spyware, infringing on privacy and producing deepfake videos.

The second issue is the lack of transparency. The combination of individuals and companies that have worked for, and sometimes still work with, the security establishment frequently takes place behind a thick screen of concealment. These entities often evade answering challenging questions that result from the Israeli Freedom of Information law and even have recourse to the military censor — a unique Israeli institution — to avoid such inquiries.


Thursday, September 12, 2019

Morals Ex Machina: Should We Listen To Machines For Moral Guidance?

Michael Klenk
3QuarksDaily.com
Originally posted August 12, 2019

Here are two excerpts:

The prospects of artificial moral advisors depend on two core questions: Should we take ethical advice from anyone anyway? And, if so, are machines any good at morality (or, at least, better than us, so that it makes sense that we listen to them)? I will only briefly be concerned with the first question and then turn to the second question at length. We will see that we have to overcome several technical and practical barriers before we can reasonably take artificial moral advice.

(cut)

The limitation of ethically aligned artificial advisors raises an urgent practical problem, too. From a practical perspective, decisions about values and their operationalisation are taken by the machine’s designers. Taking their advice means buying into preconfigured ethical settings. These settings might not agree with you, and they might be opaque so that you have no way of finding out how specific values have been operationalised. This would require accepting the preconfigured values on blind trust. The problem already exists in machines that give non-moral advice, such as mapping services. For example, when you ask your phone for the way to the closest train station, the device will have to rely on various assumptions about what path you can permissibly take and it may also consider commercial interests of the service provider. However, we should want the correct moral answer, not what the designers of such technologies take that to be.

We might overcome these practical limitations by letting users input their own values and decide about their operationalisation themselves. For example, the device might ask users a series of questions to determine their ethical views and also require them to operationalise each ethical preference precisely. A vegetarian might, for instance, have to decide whether she understands ‘vegetarianism’ to encompass ‘meat-free meals’ or ‘meat-free restaurants.’ Doing so would give us personalised moral advisors that could help us live more consistently by our own ethical rules.
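Editor's note: here is a tiny sketch of what "operationalising" a value might look like in code, using the article's own vegetarian example. Everything in it (the class, the two rules) is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    serves_meat: bool
    has_meat_free_options: bool

# Two operationalisations of the same stated value, "vegetarianism":
def meat_free_meals(r):          # permissive: mixed restaurants are fine
    return r.has_meat_free_options

def meat_free_restaurants(r):    # strict: the whole venue must be meat-free
    return not r.serves_meat

def advise(options, rule):
    """Return the names of the options the user's operationalised rule permits."""
    return [r.name for r in options if rule(r)]

options = [
    Restaurant("Green Leaf", serves_meat=False, has_meat_free_options=True),
    Restaurant("Grill House", serves_meat=True, has_meat_free_options=True),
]
print(advise(options, meat_free_meals))        # ['Green Leaf', 'Grill House']
print(advise(options, meat_free_restaurants))  # ['Green Leaf']
```

The two rules encode the same stated value yet give different advice, which is precisely the ambiguity the author says users would have to resolve for themselves.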

However, it would then be unclear how specifying our individual values, and their operationalisation improves our moral decision making instead of merely helping individuals to satisfy their preferences more consistently.

The info is here.

Wednesday, September 11, 2019

Assessment of Patient Nondisclosures to Clinicians of Experiencing Imminent Threats

Levy AG, Scherer AM, Zikmund-Fisher BJ, Larkin K, Barnes GD, Fagerlin A.
JAMA Netw Open. 2019;2(8):e199277. Published online August 14, 2019.
doi:10.1001/jamanetworkopen.2019.9277

Question 

How common is it for patients to withhold information from clinicians about imminent threats that they face (depression, suicidality, abuse, or sexual assault), and what are common reasons for nondisclosure?

Findings 

This survey study, incorporating 2 national, nonprobability, online surveys of a total of 4,510 US adults, found that at least one-quarter of participants who experienced each imminent threat reported withholding this information from their clinician. The most commonly endorsed reasons for nondisclosure included potential embarrassment, fear of being judged, and reluctance to engage in difficult follow-up behavior.

Meaning

These findings suggest that concerns about potential negative repercussions may lead many patients who experience imminent threats to avoid disclosing this information to their clinician.

Conclusion

This study reveals an important concern about clinician-patient communication: if patients commonly withhold information from clinicians about significant threats that they face, then clinicians are unable to identify and attempt to mitigate these threats. Thus, these results highlight the continued need to develop effective interventions that improve the trust and communication between patients and their clinicians, particularly for sensitive, potentially life-threatening topics.

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Forbes.com
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.
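Editor's note: a minimal Python sketch of the kind of change-tracking the excerpt describes; the class and field names are invented for illustration, not taken from any vendor's product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ChangeRecord:
    who: str
    what: str   # e.g. "objective", "priority", "training-data source"
    old: str
    new: str
    when: str

class AIChangeLog:
    """Append-only record of changes to an AI system's goals and inputs,
    mirroring the preventive and detective controls the excerpt mentions."""

    def __init__(self):
        self._records = []

    def record(self, who, what, old, new):
        self._records.append(ChangeRecord(
            who, what, old, new, datetime.now(timezone.utc).isoformat()))

    def history(self, what):
        return [r for r in self._records if r.what == what]

log = AIChangeLog()
log.record("j.doe", "objective", "maximize yield",
           "maximize yield within an emissions budget")
for rec in log.history("objective"):
    print(rec.who, "changed", rec.what, "from", repr(rec.old), "to", repr(rec.new))
```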

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or your environmental or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.

The info is here.

Wednesday, September 4, 2019

AI Ethics Guidelines Every CIO Should Read

John McClurg
www.informationweek.com
Originally posted August 7, 2019

Here is an excerpt:

Because AI technology and use cases are changing so rapidly, chief information officers and other executives are going to find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework.

The framework won’t be able to account for all the situations an enterprise will encounter on its journey to increased AI adoption. But it can lay the groundwork for future executive discussions. With a framework in hand, they can confidently chart a sensible path forward that aligns with the company’s culture, risk tolerance, and business objectives.

The good news is that CIOs and executives don’t need to come up with an AI ethics framework out of thin air. Many smart thinkers in the AI world have been mulling over ethics issues for some time and have published several foundational guidelines that an organization can use to draft a framework that makes sense for their business. Here are five of the best resources to get technology and ethics leaders started.

The info is here.

Monday, August 26, 2019

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J.W. Janssens
Genes 2019, 10(6), 448
https://doi.org/10.3390/genes10060448

Abstract

Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.
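Editor's note: a toy illustration of the disclosure the authors call for. The variant weights and the simulated benchmark population below are entirely invented; a real polygenic score would involve thousands of variants and a genotyped reference dataset.

```python
import random
import statistics

# A polygenic risk score is typically a weighted sum of risk-allele counts.
WEIGHTS = {"rs0001": 0.30, "rs0002": 0.15, "rs0003": 0.55}  # invented effect sizes

def risk_score(genotype):
    """`genotype` maps a variant id to a risk-allele count (0, 1, or 2)."""
    return sum(WEIGHTS[v] * genotype.get(v, 0) for v in WEIGHTS)

# Benchmark distribution: the score applied to a reference population, the
# kind of standardized disclosure the paper argues companies should publish.
random.seed(0)
benchmark = sorted(risk_score({v: random.choice([0, 1, 2]) for v in WEIGHTS})
                   for _ in range(10_000))

customer = risk_score({"rs0001": 2, "rs0002": 1, "rs0003": 1})
percentile = 100 * sum(s <= customer for s in benchmark) / len(benchmark)
print(f"customer score = {customer:.2f}, "
      f"population mean = {statistics.mean(benchmark):.2f}, "
      f"percentile = {percentile:.0f}")
```

Publishing the benchmark distribution alongside the score is what would let a customer see where they fall, and let scientists check whether an undisclosed algorithm is even plausible.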

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.

Friday, August 16, 2019

Federal Watchdog Reports EPA Ignored Ethics Rules

Alyssa Danigelis
www.environmentalleader.com
Originally published July 17, 2019

The Environmental Protection Agency failed to comply with federal ethics rules for appointing advisory committee members, the Government Accountability Office concluded this week. President Trump’s EPA skipped disclosure requirements for new committee members last year, according to the federal watchdog.

Led by Andrew Wheeler, the EPA currently manages 22 committees that advise the agency on a wide range of issues, including developing regulations and managing research programs.

However, in fiscal year 2018, the agency didn’t follow a key step in its process for appointing 20 committee members to the Science Advisory Board (SAB) and Clean Air Scientific Advisory Committee (CASAC), the report says.

“SAB is the agency’s largest committee and CASAC is responsible for, among other things, reviewing national ambient air-quality standards,” the report noted. “In addition, when reviewing the step in EPA’s appointment process related specifically to financial disclosure reporting, we found that EPA did not consistently ensure that [special government employees] appointed to advisory committees met federal financial disclosure requirements.”

The GAO also pointed out that the number of committee members affiliated with academic institutions shrank.

The info is here.

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.
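Editor's note: a back-of-the-envelope calculation, with invented numbers, of how a model trained on arrest records inherits enforcement bias rather than actual offense rates.

```python
# Invented numbers: identical underlying offense rates, unequal policing.
population = {"group_a": 10_000, "group_b": 10_000}
offense_rate = {"group_a": 0.05, "group_b": 0.05}              # same behaviour
arrest_prob_if_offending = {"group_a": 0.10, "group_b": 0.40}  # biased enforcement

for group in population:
    arrests = (population[group] * offense_rate[group]
               * arrest_prob_if_offending[group])
    # A model trained on arrest records "learns" this rate as if it were risk.
    apparent_risk = arrests / population[group]
    print(f"{group}: apparent risk from arrest data = {apparent_risk:.2%}")
```

Despite identical offense rates, the arrest data makes one group look four times riskier, and any model fit to that data will faithfully reproduce the distortion.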

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

The info is here.

Why You Should Develop a Personal Ethics Statement

Charlene Walters
www.entrepreneur.com
Originally posted July 16, 2019

As an entrepreneur, it can be helpful to create a personal ethics statement. A personal ethics statement is an assertion that defines your core ethical values and beliefs. It also delivers a strong testimonial about your code of conduct when dealing with people.

This statement can differentiate you from other businesses and entrepreneurs in your space. It should include information regarding your position on honesty and be reflective of how you interact with others. You can use your personal ethics statement or video on your website or when speaking with clients.

When you create it, you should include information about your fundamental beliefs, opinions and values. Your statement will give potential customers some insight into what it’s like to do business with you. You should also talk about anything that’s happened in your life that has impacted your ethical stance. Were you wronged in the past or affected by some injustice you witnessed? How did that shape and define you?

Remember that you’re basically telling clients why it’s better to do business with you than other entrepreneurs and communicating what you value as a person. Give creating a personal ethics statement a try. It’s a wonderful exercise and can provide value to your customers.

The info is here.

Friday, August 9, 2019

Advice for technologists on promoting AI ethics

Joe McKendrick
www.zdnet.com
Originally posted July 13, 2019

Ethics looms as a vexing issue when it comes to artificial intelligence (AI). Where does AI bias spring from, especially when it's unintentional? Are companies paying enough attention to it as they plunge full-force into AI development and deployment? Are they doing anything about it? Do they even know what to do about it?

Wringing bias and unintended consequences out of AI is making its way into the job descriptions of technology managers and professionals, especially as business leaders turn to them for guidance and judgement. The drive to ethical AI means an increased role for technologists in the business, as described in a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. The survey was able to make direct connections between AI ethics and business growth: if consumers sense a company is employing AI ethically, they'll keep coming back; if they sense unethical AI practices, their business is gone.

Competitive pressure is the reason businesses are pushing AI to its limits and risking crossing ethical lines. "The pressure to implement AI is fueling ethical issues," the Capgemini authors, led by Anne-Laure Thieullent, managing director of Capgemini's Artificial Intelligence & Analytics Group, state. "When we asked executives why ethical issues resulting from AI are an increasing problem, the top-ranked reason was the pressure to implement AI." Thirty-four percent cited this pressure to stay ahead with AI trends.

The info is here.