Artificial intelligence, war and bias

Examples of biased AI

Some lessons can be learned by examining examples of biased AI in non-military contexts. A number of surveys and studies have made it increasingly clear that the prejudices that exist within our society also become embedded in AI. One example is facial recognition software, such as the program developed by Amazon, which had little trouble recognizing the faces of white men but was considerably less accurate at identifying Black women and other groups of people. Another is the risk-assessment program used in US courts, which falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants. It is clear that biased AI can have serious, real-world consequences.

Different bias entry points

A major challenge is that there are many entry points through which bias can be introduced into AI. It can stem from the often unintentional prejudices of program developers, a real issue in a workforce lacking diversity. It can also occur when the data used to train a program under-represents or over-represents a group, as happened with Amazon's facial recognition program. There are many other ways that prejudice can feed into AI-based programs, further highlighting the ethical and social dilemmas surrounding the use of AI in warfare.
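
To make the training-data entry point concrete, here is a minimal, self-contained sketch in Python. Everything in it is invented for illustration: the groups, the numbers, and the deliberately simple nearest-centroid "model". It shows only the mechanism: a classifier trained mostly on one group ends up less accurate for an under-represented group whose data looks slightly different.

```python
# Toy illustration of representation bias: all data is synthetic and the
# "model" is a deliberately simple nearest-centroid classifier. Group names,
# sizes and distributions are invented; no real system is described.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 2-D features for two classes; `shift` moves the group's distribution."""
    X0 = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n, 2))  # class 0
    X1 = rng.normal(loc=[2.0 + shift, 2.0], scale=1.0, size=(n, 2))  # class 1
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Group A dominates the training set; group B is under-represented and its
# feature distribution is shifted relative to group A's.
Xa, ya = make_group(950, shift=0.0)   # over-represented group
Xb, yb = make_group(50, shift=1.5)    # under-represented group
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

# "Training": class centroids, which are dragged toward group A's data.
c0 = X_train[y_train == 0].mean(axis=0)
c1 = X_train[y_train == 1].mean(axis=0)

def predict(X):
    """Assign each point to the nearer class centroid."""
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

# Evaluate on fresh, balanced test data for each group separately.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    acc = (predict(X_test) == y_test).mean()
    print(f"{name}: accuracy = {acc:.2f}")
```

Run as written, the under-represented group's accuracy comes out several points below the dominant group's, because the learned centroids mostly reflect group A's data.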

Three considerations for biased AI and warfare

Drawing on these examples of biased AI in non-combat settings, three main themes or concerns emerge when we consider how biased AI might play out in contexts of war and combat.

The first revolves around the role AI might play in threat assessments. These assessments would be based on algorithms in which certain attributes are selected to help determine how much of a threat a target poses. One issue here is the role that prejudice often plays in shaping what we see as a threat in the first place. Sarah Shoker, a researcher examining gender bias and drone warfare, has argued that under the Bush and Obama administrations there was a lower threshold for killing men than women with drones, and that this stemmed from bias linked to gender, age, ethnicity and religion. Trump administration documents show that similar decisions have been made about whether men are considered more threatening than women, and therefore more "killable", whether or not they are combatants. This raises questions about which attributes are included in a threat assessment and how they can falsely identify individuals as threats.
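
The worry can be made concrete with a toy scoring rule. In the sketch below, every weight, attribute name and threshold is invented purely for illustration and does not describe any real targeting system. It shows how a protected attribute that carries weight in the score can push two behaviorally identical profiles onto opposite sides of the "threat" threshold.

```python
# Hypothetical threat score: attributes, weights and threshold are all
# invented for illustration and do not reflect any real system.
WEIGHTS = {
    "near_conflict_zone": 0.3,  # behavioural attribute
    "military_age": 0.2,        # proxy attribute (age bracket)
    "recorded_male": 0.3,       # protected attribute acting directly on the score
}
THRESHOLD = 0.6

def threat_score(profile):
    """Sum the weights of the attributes present in the profile."""
    return sum(w for attr, w in WEIGHTS.items() if profile.get(attr))

# Two profiles identical in behaviour, differing only in recorded gender.
civilian_a = {"near_conflict_zone": True, "military_age": True, "recorded_male": False}
civilian_b = {"near_conflict_zone": True, "military_age": True, "recorded_male": True}

for name, profile in [("civilian A", civilian_a), ("civilian B", civilian_b)]:
    score = threat_score(profile)
    print(f"{name}: score={score:.1f}, flagged={score >= THRESHOLD}")
# Only civilian B is flagged, purely because of the gendered attribute:
# identical behaviour, different outcome.
```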

Another concern revolves around the reliability of the facial and object recognition programs used in battlefield targeting systems. As militaries advance the development of autonomous vehicles and weapon systems, these systems will rely on a variety of AI programs to maneuver and engage in the battlespace. When it comes to targeting, and deciding whom to target, facial and object recognition will be key. Yet such programs may be less accurate at recognizing certain racial or ethnic groups, often due to biases built into the systems' software.
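
A practical implication is that evaluations of such systems need to be disaggregated by group, since a single overall accuracy figure can mask exactly this failure mode. Here is a minimal sketch of such an audit; the trial records are invented placeholders, not results from any real benchmark.

```python
# Disaggregated evaluation sketch: compute false-match rates per group
# instead of one overall number. The trial records below are invented.
from collections import defaultdict

# Each trial: (group, ground_truth_match, system_said_match). Hypothetical.
results = [
    ("group A", False, False), ("group A", True, True), ("group A", True, True),
    ("group B", False, True),  ("group B", True, True), ("group B", False, True),
]

counts = defaultdict(lambda: {"false_matches": 0, "non_match_trials": 0})
for group, truth, predicted in results:
    if not truth:  # only genuinely non-matching pairs can yield a false match
        counts[group]["non_match_trials"] += 1
        if predicted:
            counts[group]["false_matches"] += 1

for group, c in counts.items():
    rate = c["false_matches"] / max(c["non_match_trials"], 1)
    print(f"{group}: false-match rate = {rate:.0%} "
          f"({c['false_matches']}/{c['non_match_trials']})")
```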

A third consideration revolves around cyber defense and hacking. Many gender researchers have argued that the very idea of security is gendered, and this has implications for how defense is organized. Based on gender norms, what is considered masculine in society, and therefore more important, is also prioritized in cybersecurity and defense. As a result, organizations coded as more masculine, such as militaries and large corporations, are given the tools and resources to strengthen their cyber defenses, while sectors such as healthcare, education, and nonprofits are left more vulnerable. It has already become clear that these kinds of institutions are both vulnerable and targeted in cyberattacks, as in the 2017 WannaCry attack in the UK, which crippled parts of the health system. As cybersecurity operations increasingly rely on AI, it is worth considering how they might inherit these forms of bias in how responses are prioritized.

The need for further investigation

Research into the military implications of bias in AI-based systems is still in its infancy. While there is a growing body of reports and literature describing the ethical challenges of using AI in warfare, and mounting evidence of biased AI in civilian settings, there is little dialogue between these two strands. There are important lessons to be learned from examples of biased AI in civilian contexts, and I hope the growing body of work in this area will lead to a better understanding of gender bias in military applications of AI.

