Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Military. Show all posts

Monday, April 19, 2021

The Military Is Funding Ethicists to Keep Its Brain Enhancement Experiments in Check

Sarah Scoles
Originally posted 1 April 21

Here is an excerpt:

The Department of Defense has already invested in a number of projects to which the Minerva research has relevance. The Army Research Laboratory, for example, has funded researchers who captured and transmitted a participant’s thoughts about a character’s movement in a video game, using magnetic stimulation to beam those neural instructions to another person’s brain and cause movement. And it has supported research using deep learning algorithms and EEG readings to predict a person’s “drowsy and alert” states.

Evans points to one project funded by the Defense Advanced Research Projects Agency (DARPA): Scientists tested a BCI that allowed a woman with quadriplegia to drive a wheelchair with her mind. Then, “they disconnected the BCI from the wheelchair and connected to a flight simulator,” Evans says, and she flew a digital F-35 with her mind. “DARPA has expressed pride that their work can benefit civilians,” says Moreno. “That helps with Congress and with the public so it isn’t just about ‘supersoldiers.’”

Still, this was a civilian participant, in a Defense-funded study, with “fairly explicitly military consequences,” says Evans. And the big question is whether the experiment’s purpose justifies the risks. “There’s no obvious therapeutic reason for learning to fly a fighter jet with a BCI,” he says. “Presumably warfighters have a job that involves, among other things, fighter jets, so there might be a strategic reason to do this experiment. Civilians rarely do.”

It’s worth noting that warfighters are, says Moreno, required to take on more risks than the civilians they are protecting, and in experiments, military members may similarly be asked to shoulder more risk than a typical civilian participant.

DARPA has also worked on implants that monitor mood and boost the brain back to “normal” if something looks off, created prosthetic limbs animated by thought, and made devices that improve memory. While those programs had therapeutic aims, the applications and follow-on capabilities extend into the enhancement realm — altering mood, building superstrong bionic arms, generating above-par memory.

Monday, September 28, 2020

Military AI vanquishes human fighter pilot in F-16 simulation. How scared should we be?

Sébastien Roblin
Originally published 31 Aug 20

Here is an excerpt:

The AlphaDogfight simulation on Aug. 20 was an important milestone for AI and its potential military uses. While this achievement shows that AI can master increasingly difficult combat skills at warp speed, the Pentagon’s futurists still must remain mindful of its limitations and risks — both because AI remains a long way from eclipsing the human mind in many critical decision-making roles, despite what the likes of Elon Musk have warned, and to make sure we don’t race ahead of ourselves and inadvertently leave the military exposed to new threats.

That’s not to minimize this latest development. Within the scope of the simulation, the AI pilot exceeded human limitations in the tournament: It was able to consistently execute accurate shots in very short timeframes; consistently push the airframe’s tolerance of the force of gravity to its maximum potential without going beyond that; and remain unaffected by the crushing pressure exerted by violent maneuvers the way a human pilot would.

All the more remarkable, Heron Systems’ AI pilot was self-taught using deep reinforcement learning, a method in which an AI runs a combat simulation over and over again and is “rewarded” for successful behaviors and “punished” for failure.

I bolded the last sentence because of its importance.
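The reward-and-punish loop described in that sentence can be sketched with tabular Q-learning, a much simpler relative of the deep reinforcement learning Heron Systems used. Everything below — the toy one-dimensional environment, the reward values, the hyperparameters — is invented for illustration; the real system trained a neural network against a high-fidelity dogfight simulator.

```python
import random

# Toy stand-in for a combat simulator: the agent starts at position 0
# and is rewarded for reaching position 4 on a line, penalized otherwise.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else -0.01  # "reward" success, "punish" dithering
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # one Q-value per (state, action)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda i: q[state][i])
            next_state, reward, done = step(state, ACTIONS[a])
            # temporal-difference update: nudge Q toward reward + discounted future value
            q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
            state = next_state
    return q

q = train()
# After training, the greedy policy in every non-goal state is "move right" (index 1).
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The point of the "over and over again" in the excerpt is visible here: no one tells the agent which moves are good; repeated reward and punishment alone shape the value table until the winning behavior falls out.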

Tuesday, September 8, 2020

Fallen Soldier Insults Give Trump a Lot to Fear

Cass Sunstein
Originally published 6 Sept 20

Here is an excerpt:

Building on Haidt’s work, Harvard economist Benjamin Enke has studied the rhetoric of numerous recent presidential candidates, and found that one has done better than all others in emphasizing loyalty, authority and sanctity: Trump. On the same scales, Hillary Clinton was especially bad. (Barack Obama was far better.) Enke also found that Trump’s emphasis on these values mattered to many voters, and attracted them to his side.

This framework helps sort out what many people consider to be a puzzle: Trump avoided military service, has been married three times, and has not exactly been a paragon of virtue in his personal life. Yet many people focused on patriotism, religious faith and traditional moral values have strongly supported him. A key reason is that however he has lived his life, he speaks their language — and indeed does so at least as well as, and probably better than, any presidential candidate they have heard before.

That’s why his reported expressions of contempt and disrespect for American soldiers threaten to be uniquely damaging — far more so than other outrageous comments he has made. When he said that Mexico is sending rapists to the U.S., made fun of the looks of prominent women, mocked disabled people, or said that protesters should be roughed up, people might have nodded or cringed, or laughed or been appalled.

As a matter of pure politics, though, saying that soldiers are “losers” or “suckers” is much worse for Trump because it attacks the foundation of his appeal: However he lives his life, at least he expresses deep love for this country and reverence for those who fight for it, and at least he speaks out for traditional moral values.

There are strong lessons here for both Trump and his Democratic challenger, former Vice President Joe Biden. Through both word and deed, the president needs to do whatever he can to make it clear that he respects and supports American soldiers.

The info is here.

Wednesday, June 10, 2020

The moral courage of the military in confronting the commander in chief

Robert Bruce Adolph
Tampa Bay Times
Originally posted 9 June 20

The president recently threatened to use our active duty military to “dominate” demonstrators nationwide, who are exercising their wholly legitimate right to assemble and be heard.

The distinguished former Secretary of Defense Jim Mattis nailed it in his recent broadside published in The Atlantic that took aim at our current commander-in-chief. Mattis states, “When I joined the military, some 50 years ago … I swore an oath to support and defend the Constitution. Never did I dream that troops taking the same oath would be ordered under any circumstances to violate the constitutional rights of their fellow citizens—much less to provide a bizarre photo op for the elected commander-in-chief, with military leadership standing alongside.”

The current Secretary of Defense, Mark Esper, who now perhaps regrets being made into a photographic prop for the president, has come out publicly against using the active duty military to quell civil unrest in our cities, as have 89 high-ranking former defense officials, who stated that they were “alarmed” by the chief executive’s threat to use troops against our country’s citizens on U.S. soil. Former Secretary of State Colin Powell, a former U.S. Army general and Republican Party member, has also taken aim at this presidency by stating that he will vote for Joe Biden in the next election.

The info is here.

Monday, June 8, 2020

Marine Corps bans public display of Confederate flag on all bases worldwide

Elliot Henney
Originally posted 6 June 20

The Marine Corps has banned all public displays of the Confederate flag from Marine Corps installations worldwide.

The Marines issued guidance on Friday on how commanders are to identify and remove the display of the flag within workplaces, common-access areas, and public areas on their installations.

The ban includes bumper stickers, clothing, mugs, posters, and flags.

The Marines say that the flag "presents a threat to our core values, unit cohesion, security, and good order and discipline."

Exceptions to the new rule include state flags that incorporate the Confederate flag, state-issued license plates with a depiction of the Confederate flag, and Confederate soldiers’ gravesites.

The info is here.

Wednesday, April 8, 2020

How a Ship’s Coronavirus Outbreak Became a Moral Crisis for the Military

Helene Cooper,
Thomas Gibbons-Neff, & Eric Schmitt
The New York Times
Originally posted 6 April 20

Here is an excerpt:

In the close-knit world of the American military, the crisis aboard the Roosevelt — known widely as the “T.R.”— generated widespread criticism from men and women who are usually careful to steer clear of publicly rebuking their peers.

Mr. Modly’s decision to remove Captain Crozier without first conducting an investigation went contrary to the wishes of both the Navy’s top admiral, Michael M. Gilday, the chief of naval operations, and the military’s top officer, Gen. Mark A. Milley, the chairman of the Joint Chiefs of Staff.

“I am appalled at the content of his address to the crew,” retired Adm. Mike Mullen, the chairman of the Joint Chiefs of Staff under Presidents George W. Bush and Barack Obama, said in a telephone interview, referring to Mr. Modly.

Mr. Modly, Admiral Mullen said, “has become a vehicle for the president. He basically has completely undermined, throughout the T.R. situation, the uniformed leadership of the Navy and the military leadership in general.”

“At its core, this is about an aircraft carrier skipper who sees an imminent threat and is forced to make a decision that risks his career in the act of what he believes to be the safety of the near 5,000 members of his crew,” said Sean O’Keefe, a former Navy secretary under President George Bush. “That is more than enough to justify the Navy leadership rendering the benefit of the doubt to the deployed commander.”

The info is here.

Monday, April 6, 2020

JAIC launches pilot for implementing new DOD AI ethics principles

Jackson Barnett
Originally posted 2 April 20

Here is an excerpt:

The Department of Defense’s Joint Artificial Intelligence Center is bringing together different types of engineers, policymakers and other DOD personnel to serve as “Responsible AI Champions” in support of the Pentagon’s new principles for AI ethics.

The pilot program brings together a “cross-functional group” of personnel from across the department to receive training on AI and DOD’s new ethical principles from JAIC staff who represent different parts of the AI development lifecycle. The intent is that when these trainees go back to their normal jobs, they will be “champions” for AI and the principles.

The model, which was announced through a JAIC blog post, is similar to a pilot Microsoft launched to implement its artificial intelligence governance structure. The JAIC did not say how many people will participate in the pilot program.

“The goal is to learn from this pilot so that we can develop a more robust and comprehensive program that can be implemented across the DOD,” Lt. Cmdr. Arlo Abrahamson, a JAIC spokesman, told FedScoop.

The info is here.

Friday, March 13, 2020

DoD unveils how it will keep AI in check with ethics principles

Scott Maucione
Originally posted 25 Feb 20

Here is an excerpt:

The principle areas are based on recommendations from a 15-month study by the Defense Innovation Board — a panel of science and technology experts from industry and academia.

The principles are as follows:

  1. Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Wednesday, December 4, 2019

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

Department of Defense
Defense Innovation Board
Published November 2019

Here is an excerpt:

What DoD is Doing to Establish an Ethical AI Culture

DoD’s “enduring mission is to provide combat-credible military forces needed to deter war and protect the security of our nation.” As such, DoD seeks to responsibly integrate and leverage AI across all domains and mission areas, as well as business administration, cybersecurity, decision support, personnel, maintenance and supply, logistics, healthcare, and humanitarian programs. Notably, many AI use cases are non-lethal in nature. From making battery fuel cells more efficient to predicting kidney disease in our veterans to managing fraud in supply chain management, AI has myriad applications throughout the Department.

DoD is mission-oriented, and to complete its mission, it requires access to cutting-edge technologies to support its warfighters at home and abroad. These technologies, however, are only one component of fulfilling its mission. To ensure the safety of its personnel, to comply with the Law of War, and to maintain an exquisite professional force, DoD maintains and abides by myriad processes, procedures, rules, and laws to guide its work. These are buttressed by DoD’s strong commitment to the following values: leadership, professionalism, and technical knowledge through the dedication to duty, integrity, ethics, honor, courage, and loyalty. As DoD utilizes AI in its mission, these values ground, inform, and sustain the AI Ethics Principles.

As DoD continues to comply with existing policies, processes, and procedures, as well as to create new opportunities for responsible research and innovation in AI, there are several cases where DoD is beginning to engage, or is already engaging, in activities that comport with the calls from the DoD AI Strategy and the AI Ethics Principles enumerated here.

The document is here.

Tuesday, November 26, 2019

Engineers need a required course in ethics

Kush Saxena
Originally posted November 8, 2019

Here is an excerpt:

Typically, engineers are trained to be laser-focused on solving problems in the most effective and efficient way. And those solutions often have ripple effects in society, and create externalities that must be carefully considered.

Given the pace with which we can deploy technology at scale, the decisions of just a few people can have deep and far-reaching impact.

But in spite of the fact that they build potentially society-altering technologies—such as artificial intelligence—engineers often have no training or exposure to ethics. Many don’t even consider it part of their remit.

But it is. In a world where a few lines of code can impact whether a woman lands a job in tech, or how a criminal is sentenced in court, everyone who touches technology must be qualified to make ethical decisions, however insignificant they may seem at the time.

Engineers need to understand that their work may be used in ways that they never intended and consider the broader impact it can have on the world.

How can tech leaders not only create strong ethical frameworks, but also ensure their employees act with “decency” and abide by the ideals and values they’ve set out? And how can leaders in business, government, and education better equip the tech workforce to consider the broader ethical implications of what they build?

The info is here.

Monday, November 18, 2019

Suicide Has Been Deadlier Than Combat for the Military

Carol Giacomo
The New York Times
Originally published November 1, 2019

Here are two excerpts:

The data for veterans is also alarming.

In 2016, veterans were one and a half times more likely to kill themselves than people who hadn’t served in the military, according to the House Committee on Oversight and Reform.

Among those ages 18 to 34, the rate went up nearly 80 percent from 2005 to 2016.

The risk nearly doubles in the first year after a veteran leaves active duty, experts say.

The Pentagon this year also reported on military families, estimating that in 2017 there were 186 suicide deaths among military spouses and dependents.


Experts say suicides are complex, resulting from many factors, notably impulsive decisions with little warning. Pentagon officials say a majority of service members who die by suicide do not have mental illness. While combat is undoubtedly high stress, there are conflicting views on whether deployments increase risk.

Where there seems to be consensus is that high-quality health care and keeping weapons out of the hands of people in distress can make a positive difference.

Studies show that the Department of Veterans Affairs provides high-quality care, and its Veterans Crisis Line “surpasses most crisis lines” operating today, according to Terri Tanielian, a researcher with the RAND Corporation. (The Veterans Crisis Line is staffed 24/7 at 800-273-8255, press 1. Services also are available online or by texting 838255.)

But Veterans Affairs often can’t accommodate all those needing help, resulting in patients being sent to community-based mental health professionals who lack the training to deal with service members.

The info is here.

Tuesday, September 24, 2019

Pentagon seeks 'ethicist' to oversee military artificial intelligence

David Smith
The Guardian
Originally posted September 7, 2019

Wanted: military “ethicist”. Skills: data crunching, machine learning, killer robots. Must have: cool head, moral compass and the will to say no to generals, scientists and even presidents.

The Pentagon is looking for the right person to help it navigate the morally murky waters of artificial intelligence (AI), billed as the battlefield of the 21st century.

“One of the positions we are going to fill will be someone who’s not just looking at technical standards, but who’s an ethicist,” Lt Gen Jack Shanahan, director of the Joint Artificial Intelligence Center (JAIC) at the US defense department, told reporters last week.

“I think that’s a very important point that we would not have thought about this a year ago, I’ll be honest with you. In Maven [a pilot AI machine learning project], these questions really did not rise to the surface every day, because it was really still humans looking at object detection, classification and tracking. There were no weapons involved in that.”

Shanahan added: “So we are going to bring in someone who will have a deep background in ethics and then, with the lawyers within the department, we’ll be looking at how do we actually bake this into the future of the Department of Defense.”

The JAIC is a year old and has 60 employees. Its budget last year was $93m; this year’s request was $268m. Its focus comes amid fears that China has gained an early advantage in the global race to explore AI’s military potential, including for command and control and autonomous weapons.

Tuesday, September 3, 2019

Psychologist Found Guilty of Sexual Assault During Psychotherapy

Richard Bammer
Originally published July 27, 2019

A Solano County Superior Court judge on Friday sentenced a former Travis Air Force Base psychologist, found guilty last fall of a series of felony sexual assaults on female patients and three misdemeanor counts, to more than 11 years behind bars.

After hearing victim impact testimony and statements from attorneys — but before pronouncing the prison term — Judge E. Bradley Nelson looked directly at Heath Jacob Sommer, 43, saying he took a version of exposure therapy “to a new level” and used his “position of trust” between 2014 and 2016 to repeatedly take advantage of “very vulnerable people,” female patients who sought his help to cope with previous sexual trauma while on active duty.

And following a statement from Sommer — “I apologize … I never intended to be offensive to people,” he said — Nelson enumerated the counts, noting the second one, rape, would account for the greatest number of years, eight, in state prison, with two other felonies, oral copulation by fraudulent representation and sexual battery by fraudulent means, filling out the balance.

Nelson added 18 months in Solano County Jail for three misdemeanor charges of sexual battery for the purpose of sexual arousal. He then credited Sommer, shackled at the waist in a striped jail jumpsuit and displaying no visible reaction to the sentence, with 904 days in custody. Additionally, Sommer will be required to serve 20 years probation upon release, register as a sex offender for life, and pay nearly $10,000 in restitution to the victims and other court costs.

The info is here.

Tuesday, July 2, 2019

Moral Decision Making, Religious Strain, and the Experience of Moral Injury

Steven Lancaster and Maggie Miller
PsyArXiv Preprints


Moral injury is the recognition that acts perpetrated during combat, or other stressful situations, can have lasting psychological impacts. Models of moral injury examine the role of transgressive acts, moral appraisals of these acts, and the symptoms of moral injury. However, little research has examined potential pathways between these elements. The current study examined everyday moral decision making and aspects of religious functioning as possible mediators of these relationships in a military veteran sample. Our pre-registered structural equation model supported a relationship between acts and appraisals; however, this relationship was not mediated by moral decision making as we had hypothesized. Our results demonstrated that religious strain significantly mediated the relationship between moral appraisals and both self- and other-directed symptoms of moral injury. Additional research is needed to better understand how and which transgressive acts are appraised as morally wrong. Further research is also needed to better integrate moral decision making into our understanding of moral injury.

From the Discussion:

Contrary to our predictions, moral decision making did not mediate the relationship between acts and appraisals in our hypothesized model. This is surprising, as moral conflict is seen as the core of the moral injury experience (Jinkerson, 2016). Given the importance of moral evaluations of one’s actions in moral injury, we expected that one’s “moral compass” would make a significant contribution to this model (Drescher & Foy, 2008, p. 99). It is not clear whether this null finding is due to the method in which moral decision making was assessed or if perhaps moral decision making for everyday experiences (or non-combat experiences) fails to play a role in how one evaluates their potentially transgressive experiences (Christensen & Gomila, 2012). The EDMD is limited in at least two ways which may have affected our results. First, the test lacks a contemplation component, which is necessary for the psychological processing of a moral decision (Gunia, Wang, Huang, Wang, & Murnighan, 2012). Second, given that the EDMD focuses on everyday experiences, it may be limited in its ability to assess the moral decisions made during stressful situations (Yousef et al., 2012). While moral decision making did not mediate the act-appraisal relationship as hypothesized, it was correlated with other-directed symptoms of moral injury, and the MODINDICES output in MPLUS indicated this pathway would improve model fit. While not hypothesized, one reason for this finding could be that higher altruism leads an individual to give the “benefit of the doubt” to others, particularly those with whom they have endured stressful or traumatic experiences (Staub & Vollhardt, 2008). Given the relatively young status of the field, additional research is needed to better understand who experiences these acts as negative or wrong and for which types of events this occurs. Future studies may want to incorporate a broad range of potential mediators, including multiple indices of moral decision making.
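The mediation logic the authors test (appraisals → religious strain → moral-injury symptoms) can be made concrete with a small simulation. Everything below — variable names, effect sizes, sample size, noise levels — is invented for illustration and is not the study’s model, data, or software (the authors used MPLUS); it only shows numerically what it means for an effect to run through a mediator.

```python
import numpy as np

# Simulated mediation structure: X -> M -> Y, plus a small direct X -> Y path.
rng = np.random.default_rng(42)
n = 5_000
X = rng.normal(size=n)                      # stand-in for moral appraisals
M = 0.6 * X + rng.normal(size=n)            # stand-in for religious strain (mediator)
Y = 0.5 * M + 0.1 * X + rng.normal(size=n)  # stand-in for symptoms: mostly via M

def slope(x, y):
    """OLS slope of y on x (single centered predictor)."""
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / (x @ x))

a = slope(X, M)          # path X -> M
total = slope(X, Y)      # total effect of X on Y
# path M -> Y controlling for X, via two-predictor least squares
Z = np.column_stack([np.ones(n), X, M])
_, c_prime, b = np.linalg.lstsq(Z, Y, rcond=None)[0]

indirect = a * b         # effect transmitted through the mediator
print(round(indirect, 2), round(total, 2))  # roughly 0.3 and 0.4, up to sampling noise
```

Here most of the total effect (about 0.3 of 0.4) is carried by the mediator, which is the pattern the abstract reports for religious strain; the null result for moral decision making corresponds to an `a * b` product near zero.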

The pre-print is here.

Thursday, April 18, 2019

Google cancels AI ethics board in response to outcry

Kelsey Piper
Originally published April 4, 2019

This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.

Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.

The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.

Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions in the company over the use of the company’s AI for military applications.

The info is here.

Wednesday, April 3, 2019

Artificial Morality

Robert Koehler
Originally posted March 21, 2019

Here is an excerpt:

What I see here is moral awakening scrambling for sociopolitical traction: Employees are standing for something larger than sheer personal interests, in the process pushing the Big Tech brass to think beyond their need for an endless flow of capital, consequences be damned.

This is happening across the country. A movement is percolating: Tech won’t build it!

“Across the technology industry,” the New York Times reported in October, “rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they built. At Google, Amazon, Microsoft and Salesforce, as well as at tech start-ups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.

“That’s a change from the past, when Silicon Valley workers typically developed products with little questioning about the social costs.”

What if moral thinking — not in books and philosophical tracts, but in the real world, both corporate and political — were as large and complex as technical thinking? It could no longer hide behind the cliché of the just war (and surely the next one we’re preparing for will be just), but would have to evaluate war itself — all wars, including the ones of the past 70 years or so, in the fullness of their costs and consequences — as well as look ahead to the kind of future we could create, depending on what decisions we make today.

Complex moral thinking doesn’t ignore the need to survive, financially and otherwise, in the present moment, but it stays calm in the face of that need and sees survival as a collective, not a competitive, enterprise.

The info is here.

Friday, March 29, 2019

Artificial Morality

Robert Koehler
Originally posted March 14, 2019

Artificial Intelligence is one thing. Artificial morality is another. It may sound something like this:

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft.”

The words are those of Microsoft president Brad Smith, writing on a corporate blogsite last fall in defense of the company’s new contract with the U.S. Army, worth $479 million, to make augmented reality headsets for use in combat. The headsets, known as the Integrated Visual Augmentation System, or IVAS, are a way to “increase lethality” when the military engages the enemy, according to a Defense Department official. Microsoft’s involvement in this program set off a wave of outrage among the company’s employees, with more than a hundred of them signing a letter to the company’s top executives demanding that the contract be canceled.

“We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.”

The info is here.

Tuesday, November 20, 2018

How tech employees are pushing Silicon Valley to put ethics before profit

Alexia Fernández Campbell
Originally published October 18, 2018

The chorus of tech workers demanding American tech companies put ethics before profit is growing louder.

In recent days, employees at Google and Microsoft have been pressuring company executives to drop bids for a $10 billion contract to provide cloud computing services to the Department of Defense.

As part of the contract, known as JEDI, engineers would build cloud storage for military data; there are few public details about what else it would entail. But one thing is clear: The project would involve using artificial intelligence to make the US military a lot deadlier.

“This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said at a March industry event about JEDI.

Thousands of Google employees reportedly pressured the company to drop its bid for the project, and many had said they would refuse to work on it. They pointed out that such work may violate the company’s new ethics policy on the use of artificial intelligence. Google has pledged not to use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” a policy company employees had pushed for.

The info is here.

Friday, August 10, 2018

SAS officers given lessons in ‘morality’

Paul Maley
The Australian
Originally posted July 9, 2018

SAS officers are being given additional training in ethics, morality and courage in leadership as the army braces itself for a potentially damning report expected to find that a small number of troops may have committed war crimes during the decade-long fight in Afghanistan.

With the Inspector-General of the Australian Defence Force due within months to hand down his report into alleged battlefield atrocities committed by Diggers, The Australian can reveal that the SAS Regiment has been quietly instituting a series of reforms ahead of the findings.

The changes to special forces training reflect a widely held view within the army that any alleged misconduct committed by Australian troops was in part the result of a failure of leadership, as well as the transgression of individual soldiers.

Many of the reforms are focused on strengthening operational leadership and regimental culture, while others are designed to help special operations officers make ethical decisions even under the most challenging conditions.

Tuesday, August 7, 2018

Thousands of leading AI researchers sign pledge against killer robots

Ian Sample
The Guardian
Originally posted July 18, 2018

Here is an excerpt:

The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. On Monday, the defence secretary Gavin Williamson unveiled a £2bn plan for a new RAF fighter, the Tempest, which will be able to fly without a pilot.

UK ministers have stated that Britain is not developing lethal autonomous weapons systems and that its forces will always have oversight and control of the weapons it deploys. But the campaigners warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track and fire on human targets without consent from a human controller. For many researchers, giving machines the decision over who lives and dies crosses a moral line.

“We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop,” said Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge.

The info is here.