Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, January 12, 2020

Bias in algorithmic filtering and personalization

Engin Bozdag
Ethics Inf Technol (2013) 15: 209.

Abstract

Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.

From the Discussion:

Today information seeking services can use interpersonal contacts of users in order to tailor information and to increase relevancy. This not only introduces bias as our model shows, but it also has serious implications for other human values, including user autonomy, transparency, objectivity, serendipity, privacy and trust. These values introduce ethical questions. Do private companies that are offering information services have a social responsibility, and should they be regulated? Should they aim to promote values that the traditional media was adhering to, such as transparency, accountability and answerability? How can a value such as transparency be promoted in an algorithm? How should we balance between autonomy and serendipity and between explicit and implicit personalization? How should we define serendipity? Should relevancy be defined as what is popular in a given location or by what our primary groups find interesting? Can algorithms truly replace human filterers?

The info can be downloaded here.
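
Editor's note: To make the idea of algorithmic gatekeeping a bit more concrete, here is a minimal, purely illustrative sketch (not Bozdag's model; all names and weights are invented): a personalization filter ranks items by overlap with a user's interest profile, and a manual editorial boost shows how humans can shape the output even while the algorithm is running.

```python
# A minimal, hypothetical personalization filter: rank items by topical
# overlap with a user profile, and let a human operator nudge the ranking
# with a manual "editorial boost" while the algorithm is operational.

def score(item_topics, user_interests, editorial_boost=0.0):
    """Relevance = fraction of the item's topics the user has shown interest in."""
    overlap = len(set(item_topics) & set(user_interests)) / max(len(item_topics), 1)
    return overlap + editorial_boost  # human curators can nudge the score

items = [
    {"id": "a", "topics": ["politics", "economy"]},
    {"id": "b", "topics": ["sports"]},
    {"id": "c", "topics": ["politics", "health"]},
]
user_interests = ["politics", "health"]
manual_boosts = {"b": 0.6}  # a human operator promotes item "b"

ranked = sorted(
    items,
    key=lambda it: score(it["topics"], user_interests, manual_boosts.get(it["id"], 0.0)),
    reverse=True,
)
print([it["id"] for it in ranked])  # the boost moves "b" ahead of "a" despite low relevance
```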

Thursday, January 9, 2020

Artificial Intelligence Is Superseding Well-Paying Wall Street Jobs

Jack Kelly
forbes.com
Originally posted 10 Dec 19

Here is an excerpt:

Compliance people run the risk of being replaced too. “As bad actors become more sophisticated, it is vital that financial regulators have the funding resources, technological capacity and access to AI and automated technologies to be a strong and effective cop on the beat,” said Martina Rejsjö, head of Nasdaq Surveillance North America Equities.

Nasdaq, a tech-driven trading platform, has an associated regulatory body that offers over 40 different algorithms, using 35,000 parameters, to spot possible market abuse and manipulation in real time. “The massive and, in many cases, exponential growth in market data is a significant challenge for surveillance professionals," Rejsjö said. “Market abuse attempts have become more sophisticated, putting more pressure on surveillance teams to find the proverbial needle in the data haystack." In layman's terms, she believes that the future is in tech overseeing trading activities, as the human eye is unable to keep up with the rapid-fire, sophisticated global trading dominated by algorithms.

When people say not to worry, that’s the precise time to worry. Companies—whether they are McDonald’s, introducing self-serve kiosks and firing hourly workers to cut costs, or top-tier investment banks that rely on software instead of traders to make million-dollar bets on the stock market—will continue to implement technology and downsize people in an effort to enhance profits and cut down on expenses. This trend will be hard to stop and will have serious consequences for workers at all levels and salaries.

The info is here.
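
Editor's note: As a rough illustration of what the surveillance algorithms Rejsjö describes are doing with that haystack of market data, here is a toy sketch (hypothetical, not Nasdaq's actual system) that flags trading intervals whose volume spikes far above the recent norm.

```python
# A toy surveillance check: flag intervals whose traded volume deviates
# sharply from the recent average, a crude "needle in the haystack" filter.

from statistics import mean, stdev

def flag_anomalies(volumes, window=20, threshold=4.0):
    """Return indices where volume exceeds mean + threshold * stdev of the prior window."""
    flags = []
    for i in range(window, len(volumes)):
        prior = volumes[i - window:i]
        mu, sigma = mean(prior), stdev(prior)
        if sigma > 0 and (volumes[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Example: steady volume with one suspicious spike at the end
volumes = [100, 105, 98, 102, 99] * 5 + [900]
print(flag_anomalies(volumes))  # -> [25]
```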

Saturday, January 4, 2020

Robots in Finance Could Wipe Out Some of Its Highest-Paying Jobs

Lananh Nguyen
Bloomberg.com
Originally posted 6 Dec 19

Robots have replaced thousands of routine jobs on Wall Street. Now, they’re coming for higher-ups.

That’s the contention of Marcos Lopez de Prado, a Cornell University professor and the former head of machine learning at AQR Capital Management LLC, who testified in Washington on Friday about the impact of artificial intelligence on capital markets and jobs. The use of algorithms in electronic markets has automated the jobs of tens of thousands of execution traders worldwide, and it’s also displaced people who model prices and risk or build investment portfolios, he said.

“Financial machine learning creates a number of challenges for the 6.14 million people employed in the finance and insurance industry, many of whom will lose their jobs -- not necessarily because they are replaced by machines, but because they are not trained to work alongside algorithms,” Lopez de Prado told the U.S. House Committee on Financial Services.

During the almost two-hour hearing, lawmakers asked experts about racial and gender bias in AI, competition for highly skilled technology workers, and the challenges of regulating increasingly complex, data-driven financial markets.

The info is here.

Monday, December 23, 2019

Will The Future of Work Be Ethical?

Greg Epstein
Interview at TechCrunch.com
Originally posted 28 Nov 19

Here is an excerpt:

AI and climate: in a sense, you’ve already dealt with this new field people are calling the ethics of technology. When you hear that term, what comes to mind?

As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers.

About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good.

What are your biggest concerns about AI? What do you think needs to be addressed in order for us to feel more comfortable as a society with increased use of AI?

That’s not an easy answer; it’s something our society is going to be grappling with for years. From what I’ve learned at this conference, from what I’ve read and tried to understand, it’s a multidimensional solution. You’re going to need computer programmers to learn the technical skills to make their algorithms less biased. You’re going to need companies to hire those people and say, “This is our goal; we want to create an algorithm that’s fair and can do good.” You’re going to need the general society to ask for that standard. That’s my generation’s job, too. WikiLeaks, a couple of years ago, sparked the conversation about personal privacy and I think there’s going to be more sparks.

The info is here.

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and D. M. Søndergaard (Eds.), Designing Robots.

Abstract
This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. In particular, in the first movement, we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened—i.e. do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.

The book chapter is here.

Wednesday, December 4, 2019

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

Department of Defense
Defense Innovation Board
Published November 2019

Here is an excerpt:

What DoD is Doing to Establish an Ethical AI Culture

DoD’s “enduring mission is to provide combat-credible military forces needed to deter war and protect the security of our nation.” As such, DoD seeks to responsibly integrate and leverage AI across all domains and mission areas, as well as business administration, cybersecurity, decision support, personnel, maintenance and supply, logistics, healthcare, and humanitarian programs. Notably, many AI use cases are non-lethal in nature. From making battery fuel cells more efficient to predicting kidney disease in our veterans to managing fraud in supply chain management, AI has myriad applications throughout the Department.

DoD is mission-oriented, and to complete its mission, it requires access to cutting edge technologies to support its warfighters at home and abroad. These technologies, however, are only one component to fulfilling its mission. To ensure the safety of its personnel, to comply with the Law of War, and to maintain an exquisite professional force, DoD maintains and abides by myriad processes, procedures, rules, and laws to guide its work. These are buttressed by DoD’s strong commitment to the following values: leadership, professionalism, and technical knowledge through the dedication to duty, integrity, ethics, honor, courage, and loyalty. As DoD utilizes AI in its mission, these values ground, inform, and sustain the AI Ethics Principles.

As DoD continues to comply with existing policies, processes, and procedures, as well as to create new opportunities for responsible research and innovation in AI, there are several cases where DoD is beginning to or already engaging in activities that comport with the calls from the DoD AI Strategy and the AI Ethics Principles enumerated here.

The document is here.

Monday, December 2, 2019

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Cade Metz
The New York Times
Originally published Nov 11, 2019

Here is the conclusion:

“This is hard. You need a lot of time and care,” he said. “We found an obvious bias. But how many others are in there?”

Dr. Bohannon said computer scientists must develop the skills of a biologist. Much as a biologist strives to understand how a cell works, software engineers must find ways of understanding systems like BERT.

In unveiling the new version of its search engine last month, Google executives acknowledged this phenomenon. And they said they tested their systems extensively with an eye toward removing any bias.

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

The whole article is here.

Monday, November 25, 2019

Racial bias in a medical algorithm favors white patients over sicker black patients

Carolyn Johnson
The Washington Post
Originally posted October 24, 2019

A widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine, researchers have found.

The problem was caught in an algorithm sold by a leading health services company, called Optum, to guide care decision-making for millions of people. But the same issue almost certainly exists in other tools used by other private companies, nonprofit health systems and government agencies to manage the health care of about 200 million people in the United States each year, the scientists reported in the journal Science.

Correcting the bias would more than double the number of black patients flagged as at risk of complicated medical needs within the health system the researchers studied, and they are already working with Optum on a fix. When the company replicated the analysis on a national data set of 3.7 million patients, they found that black patients who were ranked by the algorithm as equally in need of extra care as white patients were much sicker: They collectively suffered from 48,772 additional chronic diseases.

The info is here.

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science  25 Oct 2019:
Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score that indicates they do not need to be enrolled in a “high-risk management program,” the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, by extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, the outputs will continue to be racially biased by design, even as they may not explicitly attempt to take race into account. For this reason, Obermeyer et al. engage the literature on “problem formulation,” which illustrates that depending on how one defines the problem to be solved—whether to lower health care costs or to increase access to care—the outcomes will vary considerably.
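
Editor's note: The "problem formulation" point can be seen in a small sketch with made-up numbers (this is not the Optum tool): ranking the same patients by predicted cost versus by number of active chronic conditions yields different high-risk lists, because spending is a biased proxy for need when access to care is unequal.

```python
# Hypothetical patients: the "high-risk" list depends on whether the problem
# is formulated as predicting cost or predicting health need.

patients = [
    # (name, predicted_annual_cost, active_chronic_conditions)
    ("P1", 12000, 2),
    ("P2",  4000, 5),   # sicker, but lower spending (e.g., less access to care)
    ("P3", 15000, 1),
    ("P4",  5000, 4),
]

def top_k(patients, key, k=2):
    return [p[0] for p in sorted(patients, key=key, reverse=True)[:k]]

print("Flagged by cost:", top_k(patients, key=lambda p: p[1]))  # ['P3', 'P1']
print("Flagged by need:", top_k(patients, key=lambda p: p[2]))  # ['P2', 'P4']
```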

Monday, November 4, 2019

Ethical Algorithms: Promise, Pitfalls and a Path Forward

Jay Van Bavel, Tessa West, Enrico Bertini, and Julia Stoyanovich
PsyArXiv Preprints
Originally posted October 21, 2019

Abstract

Fairness in machine-assisted decision making is critical to consider, since a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making ethical mistakes, or amplifying them, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to develop algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms and offer a road map for future research on the topic.

--

The increasing role of machine-learning and algorithms in decision making has revolutionized areas ranging from the media to medicine to education to industry. As the recent One Hundred Year Study on Artificial Intelligence (Stone et al., 2016) reported: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Therefore, the effective development and widespread adoption of algorithms will hinge not only on the sophistication of engineers and computer scientists, but also on the expertise of behavioural scientists.

These algorithms hold enormous promise for solving complex problems, increasing efficiency, reducing bias, and even making decision-making transparent. However, the last few decades of behavioral science have established that humans hold a number of biases and shortcomings that impact virtually every sphere of human life (Banaji & Greenwald, 2013) and discrimination can become entrenched, amplified, or even obscured when decisions are implemented by algorithms (Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). While there has been a growing awareness that programmers and organizations should pay greater attention to discrimination and other ethical considerations (Dignum, 2018), very little behavioral research has directly examined these issues. In this paper, we describe how behavioural science will play a critical role in the development of ethical algorithms and outline a roadmap for behavioural scientists and computer scientists to ensure that these algorithms are as ethical as possible.

The paper is here.
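
Editor's note: One concrete, if simplistic, example of the kind of audit such a roadmap might call for is a demographic-parity-style comparison of selection rates across groups; the numbers below are invented.

```python
# A small fairness check: compare the rate of positive decisions across two
# groups; a large gap is a signal that the algorithm deserves a closer audit.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = positive decision (e.g., flagged for an opportunity), 0 = negative
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Selection-rate gap: {gap:.3f}")  # 0.375
```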

Thursday, October 24, 2019

Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

Jon Evans
techcrunch.com
Originally published October 20, 2019

This week Mark Zuckerberg gave a speech in which he extolled “giving everyone a voice” and fighting “to uphold as wide a definition of freedom of expression as possible.” That sounds great, of course! Freedom of expression is a cornerstone, if not the cornerstone, of liberal democracy. Who could be opposed to that?

The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site.

But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline. What you read on Facebook is determined entirely by Facebook’s algorithm, which elides much — censors much, if you wrongly think the News Feed is free speech — and amplifies little.

What is amplified? Two forms of content. For native content, the algorithm optimizes for engagement. This in turn means people spend more time on Facebook, and therefore more time in the company of that other form of content which is amplified: paid advertising.

Of course this isn’t absolute. As Zuckerberg notes in his speech, Facebook works to stop things like hoaxes and medical misinformation from going viral, even if they’re otherwise anointed by the algorithm. But he has specifically decided that Facebook will not attempt to stop paid political misinformation from going viral.

The info is here.

Editor's note: Facebook is one of the most defective products that millions of Americans use every day.
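
A toy example of the contrast Evans draws (illustrative only; Facebook's real ranking system is far more complex and not public): the same three posts, ordered chronologically versus by predicted engagement.

```python
# Hypothetical posts: a chronological feed surfaces the newest items, while an
# engagement-optimized feed surfaces whatever is predicted to keep people reacting.

posts = [
    # (id, hours_ago, predicted_engagement)
    ("calm_update",    1, 0.02),
    ("outrage_bait",  20, 0.45),
    ("family_photo",   5, 0.10),
]

chronological = sorted(posts, key=lambda p: p[1])                # newest first
engagement    = sorted(posts, key=lambda p: p[2], reverse=True)  # most engaging first

print([p[0] for p in chronological])  # ['calm_update', 'family_photo', 'outrage_bait']
print([p[0] for p in engagement])     # ['outrage_bait', 'family_photo', 'calm_update']
```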

Tuesday, October 22, 2019

AI used for first time in job interviews in UK to find best applicants

Charles Hymas
The Telegraph
Originally posted September 27, 2019

Artificial intelligence (AI) and facial expression technology is being used for the first time in job interviews in the UK to identify the best candidates.

Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop.

The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of facial and linguistic information compiled from previous interviews of those who have gone on to prove to be good at the job.

Hirevue, the US company which has developed the interview technology, claims it enables hiring firms to interview more candidates in the initial stage rather than simply relying on CVs and that it provides a more reliable and objective indicator of future performance free of human bias.

However, academics and campaigners warned that any AI or facial recognition technology would inevitably have in-built biases in its databases that could discriminate against some candidates and exclude talented applicants who might not conform to the norm.

The info is here.
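
Editor's note: A deliberately simplified sketch of the general approach described above (this is not HireVue's actual method, and the feature names are invented): score a new candidate by closeness to the average feature profile of past hires judged to be strong performers.

```python
# Hypothetical candidate scoring: build a "prototype" from past strong
# performers' interview features, then rank new candidates by similarity to it.

def average(profiles):
    keys = profiles[0].keys()
    return {k: sum(p[k] for p in profiles) / len(profiles) for k in keys}

def similarity(candidate, prototype):
    # Negative mean absolute difference: higher = closer to past "good" hires
    return -sum(abs(candidate[k] - prototype[k]) for k in prototype) / len(prototype)

past_strong_performers = [
    {"speech_rate": 0.7, "smile_frequency": 0.6, "keyword_match": 0.8},
    {"speech_rate": 0.6, "smile_frequency": 0.7, "keyword_match": 0.9},
]
prototype = average(past_strong_performers)

candidate = {"speech_rate": 0.4, "smile_frequency": 0.3, "keyword_match": 0.9}
print(similarity(candidate, prototype))
# The obvious risk: anyone who differs from the historical "prototype"
# (accent, expression norms, disability) is penalized, which is exactly the
# concern the academics quoted above raise.
```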

Monday, August 26, 2019

Psychological reactions to human versus robotic job replacement

Armin Granulo, Christoph Fuchs & Stefano Puntoni
Nature.com
Originally posted August 5, 2019

Abstract

Advances in robotics and artificial intelligence are increasingly enabling organizations to replace humans with intelligent machines and algorithms. Forecasts predict that, in the coming years, these new technologies will affect millions of workers in a wide range of occupations, replacing human workers in numerous tasks, but potentially also in whole occupations. Despite the intense debate about these developments in economics, sociology and other social sciences, research has not examined how people react to the technological replacement of human labour. We begin to address this gap by examining the psychology of technological replacement. Our investigation reveals that people tend to prefer workers to be replaced by other human workers (versus robots); however, paradoxically, this preference reverses when people consider the prospect of their own job loss. We further demonstrate that this preference reversal occurs because being replaced by machines, robots or software (versus other humans) is associated with reduced self-threat. In contrast, being replaced by robots is associated with a greater perceived threat to one’s economic future. These findings suggest that technological replacement of human labour has unique psychological consequences that should be taken into account by policy measures (for example, appropriately tailoring support programmes for the unemployed).

The info is here.

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J.W. Janssens
Genes 2019, 10(6), 448
https://doi.org/10.3390/genes10060448

Abstract

Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.
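
Editor's note: For readers unfamiliar with the mechanics, a polygenic risk score is typically a weighted sum of risk-allele counts. The sketch below uses invented variants and effect sizes, not any company's proprietary algorithm.

```python
# A minimal polygenic risk score: a weighted sum of risk-allele counts.

def polygenic_risk_score(genotype, weights):
    """genotype: variant -> risk-allele count (0, 1, or 2); weights: variant -> effect size."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

weights  = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}   # hypothetical effect sizes
customer = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

print(polygenic_risk_score(customer, weights))  # 0.19

# The paper's point: even with the weights undisclosed, publishing the score
# distribution on a benchmark DNA dataset would let customers and scientists
# see where a given score falls relative to the population.
```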

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

The info is here.

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.

The info is here.
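
Editor's note: Cox's point about impressive-sounding accuracy can be made precise with a quick base-rate calculation (the numbers are illustrative): when a trait is rare, even a classifier that is right 70% of the time will be wrong for most of the people it flags.

```python
# Base-rate arithmetic: with sensitivity = specificity = 0.7 and a 5% base
# rate, most positive calls are false positives.

base_rate   = 0.05   # fraction of the population in the target group (assumed)
sensitivity = 0.70   # P(flagged | in group)
specificity = 0.70   # P(not flagged | not in group)

true_pos  = base_rate * sensitivity
false_pos = (1 - base_rate) * (1 - specificity)
precision = true_pos / (true_pos + false_pos)

print(f"P(actually in group | flagged) = {precision:.2f}")  # ~0.11
```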

Wednesday, May 29, 2019

The Problem with Facebook


Making Sense Podcast

Originally posted on March 27, 2019

In this episode of the Making Sense podcast, Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee has been a Silicon Valley investor for thirty-five years. He has cofounded successful venture funds including Elevation with U2’s Bono. He was a former mentor to Facebook CEO Mark Zuckerberg and helped recruit COO Sheryl Sandberg to the company. He holds a B.A. from Yale University and an M.B.A. from the Tuck School of Business at Dartmouth College.

The podcast is here.

The fundamental ethical problems with social media companies like Facebook and Google start about 20 minutes into the podcast.

Tuesday, May 28, 2019

Values in the Filter Bubble: Ethics of Personalization Algorithms in Cloud Computing

Engin Bozdag and Job Timmermans
Delft University of Technology
Faculty of Technology, Policy and Management

Abstract

Cloud services such as Facebook and Google search started to use personalization algorithms in order to deal with the growing amount of data online. This is often done in order to reduce the “information overload”. User’s interaction with the system is recorded in a single identity, and the information is personalized for the user using this identity. However, as we argue, such filters often ignore the context of information and they are never value neutral. These algorithms operate without the control and knowledge of the user, leading to a “filter bubble”. In this paper we use Value Sensitive Design methodology to identify the values and value assumptions implicated in personalization algorithms. By building on existing philosophical work, we discuss three human values implicated in personalized filtering: autonomy, identity, and transparency.

A copy of the paper is here.

Tuesday, May 14, 2019

Who Should Decide How Algorithms Decide?

Mark Esposito, Terence Tse, Joshua Entsminger, and Aurelie Jean
Project-Syndicate
Originally published April 17, 2019

Here is an excerpt:

Consider the following scenario: a car from China has different factory standards than a car from the US, but is shipped to and used in the US. This Chinese-made car and a US-made car are heading for an unavoidable collision. If the Chinese car’s driver has different ethical preferences than the driver of the US car, which system should prevail?

Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries. A Chinese-made car, for example, might have access to social-scoring data, allowing its decision-making algorithm to incorporate additional inputs that are unavailable to US carmakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?

Clearly, before AVs take to the road en masse, we will need to establish where responsibility for algorithmic decision-making lies, be it with municipal authorities, national governments, or multilateral institutions. More than that, we will need new frameworks for governing this intersection of business and the state. At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decision-making algorithms.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app with a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.