Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Regulation. Show all posts

Friday, October 11, 2019

Is there a right to die?

Eric Mathison
Baylor College of Medicine Blog
Originally posted May 31, 2019

How people think about death is undergoing a major transformation in the United States. In the past decade, there has been a significant rise in assisted dying legalization, and more states are likely to legalize it soon.

People are adapting to a healthcare system that is adept at keeping people alive, but struggles when aggressive treatment is no longer best for the patient. Many people have concluded, after witnessing a loved one suffer through a prolonged dying process, that they don’t want that kind of death for themselves.

Public support for assisted dying is high. Gallup has tracked Americans’ support for it since 1951. The most recent survey, from 2017, found that 73% of Americans support legalization. Eighty-one percent of Democrats and 67% of Republicans support it, making this a popular policy regardless of political affiliation.

The effect has been a recent surge of states passing assisted dying legislation. New Jersey passed legislation in April, meaning seven states (plus the District of Columbia) now allow it. In addition to New Jersey, California, Colorado, Hawaii, and D.C. all passed legislation in the past three years, and seventeen states are considering legislation this year. Currently, around 20% of Americans live in states where assisted dying is legal.

The info is here.

Monday, August 26, 2019

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J.W. Janssens
Genes 2019, 10(6), 448
https://doi.org/10.3390/genes10060448

Abstract

Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.
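The disclosure the authors call for is easy to picture. Below is a toy sketch (mine, not from the paper) of the underlying arithmetic: a polygenic score is a weighted sum of risk-allele counts, so even with the weights kept as a trade secret, a provider could publish the score distribution its algorithm produces on a benchmark population. All numbers here are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical benchmark: 10,000 individuals, 50 SNPs, allele counts 0/1/2.
# Allele frequencies and effect weights are illustrative, not from any real test.
n_people, n_snps = 10_000, 50
freqs = rng.uniform(0.05, 0.5, n_snps)               # per-SNP risk-allele frequency
genotypes = rng.binomial(2, freqs, (n_people, n_snps))
weights = rng.normal(0.0, 0.15, n_snps)              # the provider's undisclosed betas

# The proprietary step: a weighted sum of risk-allele counts per person.
scores = genotypes @ weights

# What could be disclosed instead of the weights themselves:
# the expected score distribution over the benchmark population.
percentiles = np.percentile(scores, [1, 5, 25, 50, 75, 95, 99])
print("score percentiles:", np.round(percentiles, 3))
print(f"mean {scores.mean():.3f}  sd {scores.std():.3f}")
```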

Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that three levels of ethical regulation inter-relate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.

The paper is here.

Tuesday, July 30, 2019

Ethics In The Digital Age: Protect Others' Data As You Would Your Own

Jeff Thomson
Forbes.com
Originally posted July 1, 2019

Here is an excerpt:

2. Ensure they are using people’s data with their consent. 

In theory, an increasing number of rights to data use are willingly signed over by people through digital acceptance of privacy policies. But a recent investigation by the European Commission, following up on the impact of GDPR, indicated that corporate privacy policies remain too difficult for consumers to understand or even read. When analyzing the ethics of using data, finance professionals must personally reflect on whether the way information is being used is consistent with how consumers, clients or employees understand and expect it to be used. Furthermore, they should question if data is being used in a way that is necessary for achieving business goals in an ethical manner.

3. Follow the “golden rule” when it comes to data. 

Finally, finance professionals must reflect on whether they would want their own personal information being used to further business goals in the way that they are helping their organization use the data of others. This goes beyond regulations and the fine print of privacy agreements: it is adherence to the ancient, universal standard of refusing to do to other people what you would not want done to yourself. Admittedly, this is subjective and difficult to define. But finance professionals will be confronted with many situations in which there are no clear answers, and they must have the ability to think about the ethical implications of actions that might not necessarily be illegal.

The info is here.

Monday, July 29, 2019

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professions. The legitimacy of particular applications and their underlying business interests remain largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Monday, June 3, 2019

Regulation of AI as a Means to Power

Daniel Faggella
emerj.com
Last updated May 5, 2019

Here is an excerpt:

The most fundamental principle of power and artificial intelligence is data dominance: Whoever controls the most valuable data within a space or sector will be able to make a better product or solve a better problem. Whoever solves the problem best will win business and win revenue, and whoever wins customers wins more data.

That cycle continues and you have the tech giants of today (a topic for a later AI Power essay).

No companies are likely to get more general search queries than Google, and so people will not likely use any search engine other than Google – and so Google gets more searches (data) to train with, and gets an even better search product. Eventually: Search monopoly.

No companies are likely to generate more general eCommerce purchases than Amazon, and so people will not likely use any online store other than Amazon – and so Amazon gets more purchases and customers (data) to train with, and gets an even better eCommerce product. Eventually: eCommerce monopoly.

There are 3-4 other well-known examples (Facebook, to some extent Netflix, Uber, etc), but I’ll leave it at two. AI may change to become less reliant on data collection, and data dominance may eventually be eclipsed by some other power dynamic, but today it’s the way the game is won.
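The flywheel Faggella describes can be made concrete with a deliberately crude simulation (my own construction, not from the essay): if product quality grows superlinearly with accumulated data, a small early lead in user share compounds toward monopoly.

```python
# Toy model of data dominance: two firms, quality rises with data,
# new users (and hence new data) follow relative quality.
shares = [0.55, 0.45]          # initial user share: firm A slightly ahead
data = [100.0, 100.0]          # accumulated data, arbitrary units

for year in range(10):
    # New data arrives in proportion to current user share.
    data = [d + 1000 * s for d, s in zip(data, shares)]
    # Increasing returns to data (exponent > 1) drive winner-take-all;
    # with an exponent below 1, the same loop would equalize instead.
    quality = [d ** 2 for d in data]
    total = sum(quality)
    shares = [q / total for q in quality]
    print(f"year {year + 1}: share A = {shares[0]:.2f}, share B = {shares[1]:.2f}")
```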

I’m not aiming to oversimplify the business models of these complex companies, nor am I disparaging these companies as being “bad”. Companies like Google are no more filled with “bad” people than churches, law firms, or AI ethics committees.

The info is here.

Friday, May 10, 2019

Privacy, data science and personalised medicine. Time for a balanced discussion

Claudia Pagliari
LinkedIn.com Post
Originally posted March 26, 2019

There are several fundamental truths that those of us working at the intersection of data science, ethics and medical research have recognised for some time. Firstly that ‘anonymised’ and ‘pseudonymised’ data can potentially be re-identified through the convergence of related variables, coupled with clever inference methods (although this is by no means easy). Secondly that genetic data is not just about individuals but also about families and generations, past and future. Thirdly, as we enter an increasingly digitized society where transactional, personal and behavioural data from public bodies, businesses, social media, mobile devices and IoT are potentially linkable, the capacity of data to tell meaningful stories about us is becoming constrained only by the questions we ask and the tools we are able to deploy to get the answers. Some would say that privacy is an outdated concept, and control and transparency are the new by-words. Others either disagree or are increasingly confused and disenfranchised.
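The first of these truths is easy to demonstrate. Here is a minimal sketch (my example, not Pagliari's) of a linkage attack: a "pseudonymised" research extract is joined to an auxiliary dataset on a few innocuous quasi-identifiers, and the pseudonyms fall away. All records are invented.

```python
import pandas as pd

# A "pseudonymised" research extract: no names, but quasi-identifiers remain.
research = pd.DataFrame({
    "pseudonym":  ["a91", "b27", "c33"],
    "birth_year": [1972, 1985, 1972],
    "postcode":   ["EH8", "EH8", "G12"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["depression", "diabetes", "asthma"],
})

# An auxiliary dataset an attacker might hold (public records, a data broker).
auxiliary = pd.DataFrame({
    "name":       ["Anna Smith", "Mark Jones"],
    "birth_year": [1972, 1985],
    "postcode":   ["EH8", "EH8"],
    "sex":        ["F", "M"],
})

# The convergence of a few innocuous variables is enough to link records.
linked = auxiliary.merge(research, on=["birth_year", "postcode", "sex"])
print(linked[["name", "pseudonym", "diagnosis"]])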

Some of the quotes from the top brass of Iceland’s DeCODE Genetics, appearing in today’s BBC News, neatly illustrate why we need to remain vigilant to the ethical dilemmas presented by the use of data sciences for personalised medicine. For those of you who are not aware, this company has been at the centre of innovation in population genomics since its inception in the 1990s and overcame a state outcry over privacy and consent, which led to its temporary bankruptcy, before rising phoenix-like from the ashes. The fact that its work has been able to continue in an era of increasing privacy legislation and regulation shows just how far the promise of personalized medicine has skewed the policy narrative and the business agenda in recent years. What is great about Iceland, in terms of medical research, is that it is a relatively small country that has been subjected to historically low levels of immigration and has a unique family naming system and good national record keeping, which means that the pedigree of most of its citizens is easy to trace. This makes it an ideal Petri dish for genetic researchers. And here’s where the rub is. In short, by fully genotyping only 10,000 people from this small country, with its relatively stable gene pool, and integrating this with data on their family trees - and doubtless a whole heap of questionnaires and medical records - the company has, with the consent of a few, effectively seized the data of the "entire population".

The info is here.


Friday, April 26, 2019

Social media giants no longer can avoid moral compass

Don Hepburn
thehill.com
Originally published April 1, 2019

Here is an excerpt:

There are genuine moral, legal and technical dilemmas in addressing the challenges raised by the ubiquitous nature of the not-so-new social media conglomerates. Why, then, are social media giants avoiding the moral compass, evading legal guidelines and ignoring technical solutions available to them? The answer is, their corporate culture refuses to be held accountable to the same standards the public has applied to all other global corporations for the past five decades.

A wholesale change of culture and leadership is required within the social media industry. The culture of “everything goes” because “we are the future” needs to be more than tweaked; it must come to an end. Like any large conglomerate, social media platforms cannot ignore the public’s demand that they act with some semblance of responsibility. Just like the early stages of the U.S. coal, oil and chemical industries, the social media industry is impacting not only our physical environment but the social good and public safety. No serious journalism organization would ever allow a stranger to write their own hate-filled stories (with photos) for their newspaper’s daily headline — that’s why there’s a position called editor-in-chief.

If social media giants insist they are open platforms, then anyone can purposefully exploit them for good or evil. But if social media platforms demonstrate no moral or ethical standards, they should be subject to some form of government regulation. We have regulatory environments where we see the need to protect the public good against the need for profit-driven enterprises; why should social media platforms be given preferential treatment?

The info is here.

Tuesday, April 23, 2019

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play but because they are occupation or discipline specific they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Friday, April 19, 2019

Duke agrees to pay $112.5 million to settle allegation it fraudulently obtained federal research funding

Seth Thomas Gulledge
Triangle Business Journal
Originally posted March 25, 2019

Duke University has agreed to pay $112.5 million to settle a suit with the federal government over allegations the university submitted false research reports to receive federal research dollars.

This week, the university reached a settlement over allegations brought forward by whistleblower Joseph Thomas – a former Duke employee – who alleged that during his time working as a lab research analyst in the pulmonary, asthma and critical care division of Duke University Health Systems, the clinical research coordinator, Erin Potts-Kant, manipulated and falsified studies to receive grant funding.

The case also contends that the university and its office of research support, upon discovering the fraud, knowingly concealed it from the government.

According to court documents, Duke was accused of submitting claims to the National Institutes of Health (NIH) and the Environmental Protection Agency (EPA) between 2006 and 2018 that contained "false or fabricated data," causing the two agencies to pay out grant funds they "otherwise would not have." Those fraudulent submissions, the case claims, netted the university nearly $200 million in federal research funding.

“Taxpayers expect and deserve that federal grant dollars will be used efficiently and honestly. Individuals and institutions that receive research funding from the federal government must be scrupulous in conducting research for the common good and rigorous in rooting out fraud,” said Matthew Martin, U.S. attorney for the Middle District of North Carolina in a statement announcing the settlement. “May this serve as a lesson that the use of false or fabricated data in grant applications or reports is completely unacceptable.”

The info is here.

Tuesday, April 2, 2019

Former Patient Coordinator Pleads Guilty to Wrongfully Disclosing Health Information to Cause Harm

Department of Justice
U.S. Attorney’s Office
Western District of Pennsylvania
Originally posted March 6, 2019

A resident of Butler, Pennsylvania, pleaded guilty in federal court to a charge of wrongfully disclosing the health information of another individual, United States Attorney Scott W. Brady announced today.

Linda Sue Kalina, 61, pleaded guilty to one count before United States District Judge Arthur J. Schwab.

In connection with the guilty plea, the court was advised that Linda Sue Kalina worked, from March 7, 2016 through June 23, 2017, as a Patient Information Coordinator with UPMC and its affiliate, Tri Rivers Musculoskeletal Centers (TRMC) in Mars, Pennsylvania, and that during her employment, contrary to the requirements of the Health Insurance Portability and Accountability Act (HIPAA), she improperly accessed the individual health information of 111 UPMC patients who had never been provided services at TRMC. Specifically, on August 11, 2017, Kalina unlawfully disclosed personal gynecological health information related to two such patients, with the intent to cause those individuals embarrassment and mental distress.

Judge Schwab scheduled sentencing for June 25, 2019, at 10 a.m. The law provides for a total sentence of 10 years in prison, a fine of $250,000, or both. Under the Federal Sentencing Guidelines, the actual sentence imposed is based upon the seriousness of the offense and the prior criminal history, if any, of the defendant. Kalina remains on bond pending the sentencing hearing.

Assistant United States Attorney Carolyn J. Bloch is prosecuting this case on behalf of the government.

The Federal Bureau of Investigation conducted the investigation that led to the prosecution of Kalina.

Sunday, March 31, 2019

Is Ethical A.I. Even Possible?

Cade Metz
The New York Times
Originally posted March 1, 2019

Here is an excerpt:

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.
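The driverless-car result belongs to a well-studied family of attacks on neural networks. Below is a minimal sketch of one such method, the fast gradient sign method (Goodfellow et al., 2015); the stand-in model and random "image" are my own simplifications, not any real vision system, since the article does not describe a specific attack.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: a single linear layer over a 32x32 RGB "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)
true_label = torch.tensor([3])

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss;
# the perturbation is small enough to be invisible, yet may flip the output.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```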

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

The info is here.

Thursday, March 28, 2019

Behind the Scenes, Health Insurers Use Cash and Gifts to Sway Which Benefits Employers Choose

Marshall Allen
Propublica.org
Originally posted February 20, 2019

Here is an excerpt:

These industry payments can’t help but influence which plans brokers highlight for employers, said Eric Campbell, director of research at the University of Colorado Center for Bioethics and Humanities.

“It’s a classic conflict of interest,” Campbell said.

There’s “a large body of virtually irrefutable evidence,” Campbell said, that shows drug company payments to doctors influence the way they prescribe. “Denying this effect is like denying that gravity exists.” And there’s no reason, he said, to think brokers are any different.

Critics say the setup is akin to a single real estate agent representing both the buyer and seller in a home sale. A buyer would not expect the seller’s agent to negotiate the lowest price or highlight all the clauses and fine print that add unnecessary costs.

“If you want to draw a straight conclusion: It has been in the best interest of a broker, from a financial point of view, to keep that premium moving up,” said Jeffrey Hogan, a regional manager in Connecticut for a national insurance brokerage and one of a band of outliers in the industry pushing for changes in the way brokers are paid.

The info is here.

Monday, March 18, 2019

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Forbes.com
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest AI we have to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chats, scale advice for potential suicide victims, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts on social media.

But I argue these malicious applications are already possible without this AI. There exist other public models which can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because A) it sets a bad precedent for open research, B) keeps companies from improving their services, C) unnecessarily hypes these results and D) may trigger unnecessary fears about AI in the general public.
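For readers who want to see what "releasing the model" means in practice: OpenAI did publish a small version of GPT-2, and running it takes only a few lines. The sketch below uses the Hugging Face transformers library, a present-day way to load the released weights; the article itself does not mention this tooling, so treat it as an illustration.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the publicly released small GPT-2 checkpoint.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The ethics of releasing powerful language models"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling keeps the continuation varied rather than deterministic.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```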

The info is here.

Thursday, March 14, 2019

An ethical pathway for gene editing

Julian Savulescu & Peter Singer
Bioethics
First published January 29, 2019

Ethics is the study of what we ought to do; science is the study of how the world works. Ethics is essential to scientific research in defining the concepts we use (such as the concept of ‘medical need’), deciding which questions are worth addressing, and what we may do to sentient beings in research.

The central importance of ethics to science is exquisitely illustrated by the recent gene editing of two healthy embryos by the Chinese biophysicist He Jiankui, resulting in the birth of twin girls, Lulu and Nana, this month. A second pregnancy is underway with a different couple. To make the babies resistant to human immunodeficiency virus (HIV), He edited out a gene (CCR5) that produces a protein which allows HIV to enter cells. One girl has both copies of the gene modified (and may be resistant to HIV), while the other has only one (making her still susceptible to HIV).

He Jiankui invited couples to take part in this experiment where the father was HIV positive and the mother HIV negative. He offered free in vitro fertilization (IVF) with sperm washing to avoid transmission of HIV. He also offered medical insurance, expenses and treatment capped at 280,000 RMB/CNY, equivalent to around $40,000. The package includes health insurance for the baby for an unspecified period. Medical expenses and compensation arising from any harm caused by the research were capped at 50,000 RMB/CNY ($7,000). He says this was from his own pocket. Although the parents were offered the choice of having either gene‐edited or ‐unedited embryos transferred, it is not clear whether they understood that editing was not necessary to protect their child from HIV, nor what pressure they felt under. There has been valid criticism of the process of obtaining informed consent. The information was complex and probably unintelligible to lay people.

The info is here.

Tuesday, March 12, 2019

Sex robots are here, but laws aren’t keeping up with the ethical and privacy issues they raise

Francis Shen
The Conversation
Originally published February 12, 2019

Here is an excerpt:

A brave new world

A fascinating question for me is how the current taboo on sex robots will ebb and flow over time.

There was a time, not so long ago, when humans attracted to the same sex felt embarrassed to make this public. Today, society is similarly ambivalent about the ethics of “digisexuality” – a phrase used to describe a number of human-technology intimate relationships. Will there be a time, not so far in the future, when humans attracted to robots will gladly announce their relationship with a machine?

No one knows the answer to this question. But I do know that sex robots are likely to be in the American market soon, and it is important to prepare for that reality. Imagining the laws governing sexbots is no longer a law professor hypothetical or science fiction.

The info is here.

Saturday, March 2, 2019

Serious Ethical Violations in Medicine: A Statistical and Ethical Analysis of 280 Cases in the United States From 2008–2016

James M. DuBois, Emily E. Anderson, John T. Chibnall, Jessica Mozersky & Heidi A. Walsh (2019) The American Journal of Bioethics, 19:1, 16-34.
DOI: 10.1080/15265161.2018.1544305

Abstract

Serious ethical violations in medicine, such as sexual abuse, criminal prescribing of opioids, and unnecessary surgeries, directly harm patients and undermine trust in the profession of medicine. We review the literature on violations in medicine and present an analysis of 280 cases. Nearly all cases involved repeated instances (97%) of intentional wrongdoing (99%), by males (95%) in nonacademic medical settings (95%), with oversight problems (89%) and a selfish motive such as financial gain or sex (90%). More than half of cases involved a wrongdoer with a suspected personality disorder or substance use disorder (51%). Despite clear patterns, no factors provide readily observable red flags, making prevention difficult. Early identification and intervention in cases requires significant policy shifts that prioritize the safety of patients over physician interests in privacy, fair processes, and proportionate disciplinary actions. We explore a series of 10 questions regarding policy, oversight, discipline, and education options. Satisfactory answers to these questions will require input from diverse stakeholders to help society negotiate effective and ethically balanced solutions.

Thursday, February 21, 2019

Federal ethics agency refuses to certify financial disclosure from Commerce Secretary Wilbur Ross

Jeff Daniels
CNBC.com
Originally published February 19, 2019

The government's top ethics watchdog disclosed Tuesday that it had refused to certify a financial disclosure report from Commerce Secretary Wilbur Ross.

In a filing, the Office of Government Ethics said it wouldn't certify the 2018 annual filing by Ross because he didn't divest stock in a bank despite stating otherwise. The move could have legal ramifications for Ross and add to pressure for a federal probe.

"The report is not certified," OGE Director Emory Rounds said in a filing, explaining that a previous document the watchdog received from Ross indicated he "no longer held BankUnited stock." However, Rounds said an Oct. 31 document "demonstrates that he did" still hold the shares and as a result, "the filer was therefore not in compliance with his ethics agreement at the time of the report."

A federal ethics agreement required that Ross divest stock worth between $1,000 and $15,000 in BankUnited by the end of May 2017, or within 90 days of the Senate confirming him to the Commerce post. He previously reported selling the stock twice, first in May 2017 and again in August 2018 as part of an annual disclosure required by OGE.

The info is here.

Thursday, February 7, 2019

Google is quietly infiltrating medicine, but what rules will it play by?

Michael Millenson
STAT News
Originally posted January 3, 2019

Here is an excerpt:

Other tech companies are also making forays into fields previously reserved for physicians as they compete for a slice of the $3.5 trillion health care pie. Renowned surgeon and author Dr. Atul Gawande was hired to head the still-nascent health care joint venture between Amazon, Berkshire Hathaway, and JPMorgan. Apple recently hired more than 50 physicians to tend its growing health care portfolio. Those efforts include Apple Watch apps to detect irregular heart rhythms and falls, a medical record repository on your iPhone, a genetic risk score for heart disease, and a partnership with medical equipment manufacturer Zimmer Biomet aimed at improving knee and hip surgery.

Google is hiring physicians, too. Its high-profile hires include the former chief executives of the Geisinger Clinic and the Cleveland Clinic. The company’s ambitious health care expansion plans reportedly encompass everything from the management of Parkinson’s disease to selling hardware to providers and insurers.

To be clear, I’ve connected the dots among separate Google companies in a way Google might dispute. However, there are some concerns about how and whether any separation of information will be maintained. In November, Bloomberg reported that plans in the United Kingdom to combine an Alphabet subsidiary using artificial intelligence on medical records with the Google search engine were “tripping alarm bells about privacy.”

The info is here.

Friday, January 18, 2019

House Democrats Look to Crack Down on Feds With Conflicts of Interest, Ethics Violations

Eric Katz
Government Executive
Originally posted January 3, 2019

Federal employees who pass through the revolving door with the private sector and engage in other actions that could present conflicts of interest would come under intensified scrutiny in a slew of reforms House Democrats introduced on Friday aimed at boosting ethics oversight in government.

The new House majority put forward the For the People Act (H.R. 1) as its first legislative priority, after the more immediate concern of reopening the full government. The package involves an array of issues House Speaker Nancy Pelosi, D-Calif., said were critical to “restoring integrity in government,” such as voting rights access and campaign finance changes. It would also place new restrictions on federal workers before, during and after their government service, with special obligations for senior officials and the president.

“Over the last two years President Trump set the tone from the top of his administration that behaving ethically and complying with the law is optional,” said newly minted House Oversight and Reform Committee Chairman Rep. Elijah Cummings, D-Md. “That is why we are introducing the For the People Act. This bill contains a number of reforms that will strengthen our accountability for the executive branch officials, including the president.”

All federal employees would face a ban on using their official positions to participate in matters related to their former employers. Violators would face fines and one to five years in prison. Agency heads, in consultation with the director of the Office of Government Ethics, could issue waivers if it were deemed in the public interest.

The info is here.