Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Google.

Wednesday, December 30, 2020

Google AI researcher's exit sparks ethics, bias concerns

Matt O'Brien
AP Tech Writer
Originally published December 4, 2020

Here is an excerpt:

Gebru on Tuesday vented her frustrations about the process to an internal diversity-and-inclusion email group at Google, with the subject line: “Silencing Marginalized Voices in Every Way Possible.” Gebru said on Twitter that’s the email that got her fired.

[Jeff] Dean, in an email to employees, said the company accepted “her decision to resign from Google” because she told managers she'd leave if her demands about the study were not met.

"Ousting Timnit for having the audacity to demand research integrity severely undermines Google’s credibility for supporting rigorous research on AI ethics and algorithmic auditing," said Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology who co-authored the 2018 facial recognition study with Gebru.

“She deserves more than Google knew how to give, and now she is an all-star free agent who will continue to transform the tech industry,” Buolamwini said in an email Friday.

How Google will handle its AI ethics initiative and the internal dissent sparked by Gebru's exit is one of a number of problems facing the company heading into the new year.

At the same time she was on her way out, the National Labor Relations Board on Wednesday cast another spotlight on Google's workplace. In a complaint, the NLRB accused the company of spying on employees during a 2019 effort to organize a union before the company fired two activist workers for engaging in activities allowed under U.S. law. Google has denied the allegations in the case, which is scheduled for an April hearing.

Tuesday, December 29, 2020

Internal Google document reveals campaign against EU lawmakers

Javier Espinoza
ft.com
Originally published October 28, 2020

Here is an excerpt:

The leak of the internal document lays bare the tactics that big tech companies employ behind the scenes to manipulate public discourse and influence lawmakers. The presentation is watermarked as “privileged and need-to-know” and “confidential and proprietary”.

The revelations are set to create new tensions between the EU and Google, which are already engaged in tough discussions about how the internet should be regulated. They are also likely to trigger further debate within Brussels, where regulators hold divergent positions on the possibility of breaking up big tech companies.

Margrethe Vestager, the EU’s executive vice-president in charge of competition and digital policy, on Tuesday argued to MEPs that structural separation of big tech is not “the right thing to do”. However, in a recent interview with the FT, Mr [Thierry] Breton, the EU’s internal market commissioner, accused such companies of being “too big to care”, and suggested that they should be broken up in extreme circumstances.

Among the other tactics outlined in the report were objectives to “undermine the idea DSA has no cost to Europeans” and “show how the DSA limits the potential of the internet . . . just as people need it the most”.

The campaign document also shows that Google will seek out “more allies” in its fight to influence the regulation debate in Brussels, including enlisting the help of Europe-based platforms such as Booking.com.

Booking.com told the FT: “We have no intention of co-operating with Google on upcoming EU platform regulation. Our interests are diametrically opposed.”


Friday, February 21, 2020

Why Google thinks we need to regulate AI

Sundar Pichai
ft.com
Originally posted January 19, 2020

Here are two excerpts:

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

(cut)

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
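
As a generic illustration of what “testing AI decisions for fairness” can involve, here is a minimal sketch of one common check, demographic parity. This is not Google's actual tooling; the function names and example data are hypothetical.

# Minimal, hypothetical sketch of one fairness test: demographic parity.
# Real audits use richer metrics, real model outputs, and real cohorts.

def positive_rate(decisions):
    """Fraction of favorable (True) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in favorable-decision rates between two groups."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Example: a model's decisions for two demographic groups.
group_a = [True, True, False, True, False, True]    # 4/6 favorable
group_b = [True, False, False, False, True, False]  # 2/6 favorable
print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.2f}")

A large gap does not prove unfairness on its own, but it flags a decision system for closer review.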

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.


Thursday, August 1, 2019

Google Contractors Listen to Recordings of People Using Virtual Assistant

Sarah E. Needleman and Parmy Olson
The Wall Street Journal
Originally posted July 11, 2019

Here are two excerpts:

In a blog post Thursday, Google confirmed it employs people world-wide to listen to a small sample of recordings.

The public broadcaster’s report [from Belgium’s VRT NWS] said the recordings potentially expose sensitive information about users such as names and addresses.

It also said Google, in some cases, is recording voices of customers even when they aren’t using Google Assistant [emphasis added].

In its blog post, Google said language experts listen to 0.2% of “audio snippets” taken from the Google Assistant to better understand different languages, accents and dialects.

(cut)

It is common practice for makers of virtual assistants to record and listen to some of what their users say so they can improve on the technology, said Bret Kinsella, chief executive of Voicebot.ai, a research firm focused on voice technology and artificial intelligence.

“Anything with speech recognition, you generally have humans at one point listening and annotating to sort out what types of errors are occurring,” he said.
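
To make the sampling Google describes concrete, here is a minimal sketch of how a 0.2% human-review sample might be drawn from a pool of anonymized snippets. This is purely illustrative; it is not Google's actual pipeline, and the identifiers are made up.

import random

# Hypothetical sketch: draw a 0.2% random sample of anonymized audio
# snippets for human annotation, as described in the excerpt above.

REVIEW_RATE = 0.002  # 0.2% of audio snippets

def sample_for_review(snippet_ids, rate=REVIEW_RATE, seed=None):
    """Return the subset of snippet IDs selected for human annotation."""
    rng = random.Random(seed)
    return [sid for sid in snippet_ids if rng.random() < rate]

# Example: out of 100,000 snippets, roughly 200 would be selected.
snippets = [f"snippet-{i}" for i in range(100_000)]
selected = sample_for_review(snippets, seed=42)
print(f"{len(selected)} of {len(snippets)} snippets selected for review")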

In May, however, a coalition of privacy and child-advocacy groups filed a complaint with federal regulators about Amazon potentially preserving conversations of young users through its Echo Dot Kids devices.

The info is here.

Friday, March 29, 2019

Artificial Morality

Robert Koehler
www.commondreams.org
Originally posted March 14, 2019

Artificial Intelligence is one thing. Artificial morality is another. It may sound something like this:

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft.”

The words are those of Microsoft president Brad Smith, writing on a corporate blogsite last fall in defense of the company’s new contract with the U.S. Army, worth $479 million, to make augmented reality headsets for use in combat. The headsets, known as the Integrated Visual Augmentation System, or IVAS, are a way to “increase lethality” when the military engages the enemy, according to a Defense Department official. Microsoft’s involvement in this program set off a wave of outrage among the company’s employees, with more than a hundred of them signing a letter to the company’s top executives demanding that the contract be canceled.

“We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.”

The info is here.

Tuesday, August 7, 2018

Google’s AI ethics won't curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren't involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.

Tuesday, July 10, 2018

Google to disclose ethical framework on use of AI

Richard Waters
The Financial Times
Originally published June 3, 2018

Here is an excerpt:

However, Google already uses AI in other ways that have drawn criticism, leading experts in the field and consumer activists to call on it to set far more stringent ethical guidelines that go well beyond not working with the military.

Stuart Russell, a professor of AI at the University of California, Berkeley, pointed to the company’s image search feature as an example of a widely used service that perpetuates preconceptions about the world based on the data in Google’s search index. For instance, a search for “CEOs” returns almost all white faces, he said.

“Google has a particular responsibility in this area because the output of its algorithms is so pervasive in the online world,” he said. “They have to think about the output of their algorithms as a kind of ‘speech act’ that has an effect on the world, and to work out how to make that effect beneficial.”

The information is here.

Wednesday, October 4, 2017

Google Sets Limits on Addiction Treatment Ads, Citing Safety

Michael Corkery
The New York Times
Originally published September 14, 2017

As drug addiction soars in the United States, a booming business of rehab centers has sprung up to treat the problem. And when drug addicts and their families search for help, they often turn to Google.

But prosecutors and health advocates have warned that many online searches are leading addicts to click on ads for rehab centers that are unfit to help them or, in some cases, endangering their lives.

This week, Google acknowledged the problem — and started restricting ads that come up when someone searches for addiction treatment on its site. “We found a number of misleading experiences among rehabilitation treatment centers that led to our decision,” Google spokeswoman Elisa Greene said in a statement on Thursday.

Google has taken similar steps to restrict advertisements only a few times before. Last year it limited ads for payday lenders, and in the past it created a verification system for locksmiths to prevent fraud.

In this case, the restrictions will limit a popular marketing tool in the $35 billion addiction treatment business, affecting thousands of small-time operators.

The article is here.

Wednesday, April 13, 2016

Who is on ethics board that Google set up after buying DeepMind?

Sam Shead
Business Insider
Originally published March 26, 2016

Google's artificial intelligence (AI) ethics board, established when Google acquired London AI startup DeepMind in 2014, remains one of the biggest mysteries in tech, with both Google and DeepMind refusing to reveal who sits on it.

Google set up the board at DeepMind's request after the cofounders of the £400 million research-intensive AI lab said they would only agree to the acquisition if Google promised to look into the ethics of the technology it was buying into.

The article is here.

Wednesday, June 17, 2015

Tim Cook says privacy is an issue of morality

By Chris Matyszczyk
cnet.com
Originally posted on June 3, 2015

Here is an excerpt:

Cook, though, presented the issue in deeply political terms. He said: "We believe that people have a fundamental right to privacy. The American people demand it, the constitution demands it, morality demands it."

Morality is a feast that moves as it's eaten. It's admirable that Cook would appeal to our moral core, but how much is there left? And how many can identify it?

The entire article is here.

Wednesday, April 8, 2015

Online Ethics for Professionals

By The Social Network Show
Originally published March 16, 2015

Part of the Show Recap

The Social Network Show welcomes Dr. John Gavazzi to the March 16, 2015 episode.

If you are a healthcare professional, or a professional in any other field, one thing you have to pay attention to is your online reputation. Something to remember: there is no difference between your professional and your personal online presence, and patients, clients, and customers can find you.

Dr. Gavazzi, a clinical psychologist who was named Ethics Educator of the Year by the Pennsylvania Psychological Association in 2013, talks about the issues to consider when building an online presence. In this episode you will hear about what ethical issues to consider; the importance of setting boundaries; what is included in informed consent; the limitations of technology; what constitutes a violation of privacy; and the advantages of being online for professionals.

(I was also named Ethics Educator of the Year by the American Psychological Association in 2014.)

The podcast is here.

Monday, January 27, 2014

When Doctors ‘Google’ Their Patients

By Haider Javed Warraich
The New York Times - Well Blog
Originally published January 6, 2014

Here is an excerpt:

I am tempted to prescribe that physicians should never look online for information about their patients, though I think the practice will become only more common, given doctors’ — and all of our — growing dependence on technology. The more important question health care providers need to ask themselves is why we would like to do so.

To me, the only legitimate reason to search for a patient’s online footprint is if there is a safety issue. If, for example, a patient appears to be manic or psychotic, it might be useful to investigate whether certain claims the patient makes are true. Or, if a doctor suspects a pediatric patient is being abused, it might make sense to look for evidence online. Physicians have also investigated patients on the web if they were concerned about suicide risk, or needed to contact the family of an unresponsive patient.

The entire article is here.

Wednesday, September 28, 2011

5 ways to manage your online reputation


Even if some physicians themselves are not online, their names, comments on their style of practice, and complaints or compliments about them probably are.

All of the online content devoted to a particular physician could negatively impact his or her reputation, and subsequently his or her business, if steps aren't taken to manage that content and, when necessary, defend it. This is often referred to as online reputation management.

Online reputation management has become big business, as evidenced by the number of radio and online ads offering to help physicians. But physicians can manage their own reputations, help build positive ones, and prevent negative content from turning into a crisis that needs to be dealt with professionally.

As quickly as online content can spread, especially in the age of social media, experts say online reputation management should be a key component of any business plan.

"The best defense in these cases is good offense," said Scott Sobel, president of Media and Communications Strategy, a Washington-based public relations firm specializing in crisis management.

Christian Olsen, vice president of Levick Strategic Communications' digital and social media practice, said social media has changed the dynamics of reputation management, because in addition to physicians communicating with their patients, their patients are now communicating with one another on social media websites.

For most physicians, there are five simple steps they can take to manage and maintain a good reputation online. For others, managing their online reputations may require more time and expertise than they have available.

One: Google yourself

Olsen said many make the mistake of thinking that because they don't have a website or are not involved in social media they are not online. "It just means your voice is not being heard in a conversation about you," he said.

The first step in managing a reputation is knowing what there is to manage. Reputation management experts recommend that physicians conduct Google searches on themselves at least once a month, preferably more often. Things can spread quickly online, so seeing what content is there on a regular basis will help doctors stay ahead of a potential crisis. It's also a good way to see what positive things are being said about you, which you may be able to build on.

Steven Wyer, managing director of Reputation Advocate Inc. and author of the book Violated Online, said physicians should set up alerts on Google and Yahoo. These alerts work by registering keywords, such as a name, that the search engines will use to comb the Internet looking for any new mention of those keywords on blogs, websites, online forums and other sites. When the service finds a new mention, it sends an email detailing where the keywords were mentioned, what was said, and a link to the website.

The mistake many physicians make, however, is to not include all reasonable variations of their name in an alert, Wyer said. For example, John Smith, MD, could have several variations, including Dr. John Smith, Dr. John C. Smith, Dr. John Smith, MD, etc. Alerts for a handful of those variations should be set up.
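
As a rough illustration of Wyer's advice, the sketch below enumerates common variations of a physician's name to register as alert keywords. The helper function is hypothetical; which variations are worth registering depends on how the physician actually appears in print.

# Hypothetical helper: enumerate common styled variations of a
# physician's name to register with an alert service.

def name_variations(first, last, middle_initial=None):
    """Return common styled variations of a physician's name."""
    base = f"{first} {last}"
    variations = {
        f"{base}, MD",
        f"Dr. {base}",
        f"Dr. {base}, MD",
    }
    if middle_initial:
        with_middle = f"{first} {middle_initial}. {last}"
        variations.update({
            f"Dr. {with_middle}",
            f"Dr. {with_middle}, MD",
        })
    return sorted(variations)

# Example from the article: John Smith, MD, middle initial C.
for keyword in name_variations("John", "Smith", middle_initial="C"):
    print(keyword)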

Two: Correct mistakes and false information

The easiest places to start are websites that show up high in Google searches. Those sites are likely to be physician finder or rating sites or health plan physician finders. The sites often include wrong or outdated contact information and incomplete biographical and educational history.

The entire story can be found here.

Friday, July 8, 2011

To Google or Not to Google Patients

Not to Google (sort of)
by David Palmiter, PhD, ABPP


At this point in my career, I often find myself discussing the interaction of the practice of psychology with various electronic venues. Regarding ethics and risk management, I would offer this: The scripts are the same; only the stages are different. Some of these electronic stages are bombastic, provocative and outrageously popular, which can distract us from the familiarity of the scripts being acted out. For instance, I recently learned of cases of early career psychologists communicating on dating sites with anonymous parties (a popular forum in part because it allows parties to withhold their identities until they feel comfortable), only to discover that the other person was a current or past client. While this is a new stage, the script is similar to the one that rural psychologists fine-tuned years ago for handling a surprising multiple relationship (Schank and Skovholt, 1997, discuss managing such overlapping relationships). Today I had a friendly debate with a colleague in another state about whether one should allow comments when writing a blog. While we agreed that one should screen all comments to blogs before posting them, my colleague argued that receiving such comments, even without publishing them, might establish a duty. While I agreed with her, I also suggested that the risk was no different from the one that exists when we allow the public to leave messages on our voice mail.

Yes, Google is a (relatively) new and enormously popular stage. But we’ve had ways of clandestinely learning about other people long before Google existed, such as looking up public records at courthouses or libraries, or hiring detectives to observe public behaviors. Sure, Google is more easily accessible and usually does not involve fees, but isn’t the script similar?

Would it be okay to hire a detective to observe a client’s public behavior and to report back to us? Clearly there could be advantages, such as those advanced by my friend, Dr. Cohen. I won’t say it is always right or always wrong to hire a detective, just as I won’t make a similar declaration regarding Google searching. For me, the key issue, sans emergency, is whether informed consent has been acquired. Hence the qualifier “sort of” in the title. In trying to strike a balance between covering the issues and brevity, this is what I cover in my intake paperwork regarding electronic venues:
I have a website (www.helpingfamilies.com), a Twitter page (www.twitter.com/HelpingParents), and a blog (www.hecticparents.com). My clients are welcome to visit these pages, which contain information and guidance.

It is my practice not to do Internet searches on my clients, but to rely exclusively on the information my clients have provided. However, in an instance of significant safety or risk, I reserve the right to use the Internet as a source of information. While this circumstance would be highly unusual, I am mentioning it in order to provide you the most comprehensive level of informed consent.

Unless the communication is very simple and is not sensitive, I prefer not to communicate through e-mail. My preference is born of a desire to protect the confidentiality of our communications.
  
You also may find I have a presence on social networking sites. Because I take every reasonable step in my power to be of maximum service to you, I do not accept invitations from current or past clients to network on these sites. The collective experience of my field, as well as my personal experience, is that psychologists are more likely to be of help to their clients if they do not establish relationships outside of the office.

If you have questions or concerns about any of these issues, please let me know.
So, I really do not disagree with Dr. Cohen. I think.

Reference
Schank, J. A., & Skovholt, T. M. (1997). Dual-relationship dilemmas of rural and small-community psychologists. Professional Psychology: Research and Practice, 28, 44-49.
 

Thursday, July 7, 2011

To Google or Not To Google Patients

To Google
By Steven R. Cohen, Ph.D.

When seeking a psychologist, physician, plumber, or any other service provider, the first place many people look is the Internet. It has been years since a new patient found my name using a phone book. Most patients today inform me that, even with a direct referral, they have checked me out on the Internet. This is basic to being a good consumer. In addition to finding my website, patients often find information about me that is not psychology-related. There may be postings about charitable activities or service on nonprofit boards. I am on LinkedIn, but I am not on Facebook, MySpace, or other social networking sites. Even if we limit our presence on the Internet, clients can find out a lot about us. All of this is public information, available to anyone with Internet access.

Just as our clients may learn about us on the Internet, we may gather valuable information about our clients. My work, doing forensic evaluations and treating court-ordered clients, influences this view. When doing a forensic evaluation, I want to be as thorough as possible: The crucible of cross-examination is a great teacher about thoroughness. If the Internet offers information about a client, an opposing attorney is likely to have obtained it. I believe it is incumbent upon us to at least Google the client.

At times, the information obtained can significantly influence the report. Not long ago, I evaluated a mother with a history of drug and alcohol problems, who assured me she had been clean and sober. Hair follicle testing yielded negative results, which supported her claim. At her second interview some time later, I asked again if she had been using substances. She assured me she had remained clean and sober. However, unbeknownst to her, the previous day I had Googled her name and found she had just been arrested for drunk and disorderly conduct. When I confronted her, she said she had hoped I wouldn’t find out until the evaluation was complete. In another case, a teen’s mother accused her “ex” of not adequately supervising their daughter. He disputed it — until the daughter posted photos on Facebook of herself and friends playing drinking games at her father’s house. Had I prepared the reports without Internet searches, my recommendations might have been completely different. I believe that in some situations, searching for Internet information about our clients is an emerging standard of care.

I realize most psychologists are not engaged in forensic evaluations. In individual therapy, we often lack collateral information to validate clients’ reports, and too naïvely trust what they say. However, it is not unusual for clients to post contradictory information about mood, substance use, and even suicidal ideation on their social network sites or blogs. If we treat adolescents who tell us they are drug- and alcohol-free but post pictures of themselves drinking, the photos are relevant to treatment. If information is available in a public posting, I believe under many circumstances we should take a peek. I am not advocating “friending” a patient or finding a way into their private postings, but I believe the public behaviors of our clients are fair game.

In the July/August 2010 APA Monitor on Psychology, APA Ethics Director Steven Behnke discusses ethical challenges posed by the Internet and says curiosity about a client is not a clinically appropriate reason to do an Internet search. Yet so much of our emphasis is on data-gathering, gathering evidence, and looking for validation that I believe there are times when “clinical curiosity” may warrant an Internet search. Some say we need special consent. I do not believe consent is needed to look at public information, whether posted by others or by our clients. But you must use your clinical judgment to decide when Googling will help or hinder therapy.