Can We Trust Killer Robots?

Naveen Joshi 03/07/2019

The introduction of killer robots in the military has led to a heated debate about their potential applications and misuse.

Sci-fi movies like ‘The Terminator’ and ‘Transformers,’ in which robots battle an evil mastermind or an army of malevolent machines, have long been a part of pop culture. Thanks to rapid advances in AI, such robots are no longer fictional. Military organizations are considering deploying various types of robots, including killer robots, for military operations. Killer robots, also known as autonomous weapons, are AI-powered robots that can be used in military combat and other military operations. They can independently search for and engage targets based on preprogrammed descriptions and constraints.

The implementation of killer robots has given rise to a major debate among experts. Some believe that autonomous weapons would make military operations more efficient and reduce casualties, whereas others believe that their development would lead to inevitable warfare between robot armies. Both sides have made noteworthy arguments about the use of killer robots in the military. Hence, analyzing these arguments is necessary to decide on the most feasible approach to their implementation.

The Promise of Killer Robots

Killer robots can benefit military organizations in the following manner:


Fewer Casualties

Every year, soldiers die or are severely injured in war-torn nations and in combat with terrorists, leaving many families in dire conditions. Governments and citizens appreciate the sacrifices made by brave soldiers, but ultimately it is their loved ones who bear the consequences. Hence, governments and military organizations must adopt alternatives that reduce war casualties. A major advantage of deploying killer robots is that the number of casualties can be reduced significantly. Killer robots can take on dangerous missions where the risk of casualties is high, and their attacks can be more precise than those of human soldiers. Moreover, warfighters have a basic survival instinct that can, quite understandably, restrain them from making potentially life-threatening decisions. Killer robots, however, will do what they are programmed to do and, when necessary, make the sacrifice.

Penetrate Highly Secured Combat Zones

In any war, each side has combat zones that are heavily secured. Attacking such sites is extremely complicated and requires extensive planning. Even with an effective attack strategy, warfighters may suffer unforeseen damage and be forced to retreat; in other cases, they may win the battle but sustain multiple casualties. Attacking such combat zones with human soldiers is therefore risky and often infeasible. Killer robots can change that. With proper planning, they can attack heavily secured zones effectively, and if they begin to fail, military commanders can simply order them to retreat. Casualties are also far less of a concern with robots than with soldiers. Killer robots could therefore prove essential to military organizations in heavily secured combat zones.

Access to Hazardous Sites

Military organizations often undertake missions that expose soldiers to radioactive materials or involve explosive ordnance disposal. Such missions can be incredibly life-threatening for military personnel. Additionally, some missions last for months, during which personnel may be unable to see their families. Such prolonged missions can affect personnel mentally and lead to disorders such as post-traumatic stress disorder, depression, and substance abuse.

Military organizations can deploy killer robots on risky missions such as explosive ordnance disposal, programming them to operate efficiently at dangerous work sites. In radioactive areas, killer robots can be especially useful, as they can work far longer than humans, who would be affected by the radiation within hours. Killer robots can also prove effective on prolonged missions, provided military commanders monitor them frequently.
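The radiation point can be made concrete with rough numbers. The sketch below uses purely illustrative figures (the dose limit and dose rate are assumptions, not sourced values) to show why human working time at a contaminated site is capped in a way a radiation-hardened machine's is not:

```python
# A back-of-the-envelope sketch (all numbers are illustrative assumptions)
# of why robots suit radioactive sites: a human's working time is capped
# by occupational dose limits, while a hardened robot is limited mainly
# by its electronics' radiation tolerance.

annual_dose_limit_msv = 50.0      # assumed annual occupational limit for a human worker
site_dose_rate_msv_per_hr = 5.0   # assumed dose rate at the contaminated site

max_human_hours = annual_dose_limit_msv / site_dose_rate_msv_per_hr
print(f"Human working time before hitting the annual limit: {max_human_hours:.0f} hours")
```

Under these assumed figures, a human could work only about ten hours at the site in an entire year, whereas a robot's endurance would be a hardware question rather than a health one.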


Reduced Expenses

The US Senate has allocated a military budget of $716 billion for 2019. The budget is distributed across various military departments, operations, and veteran healthcare schemes, and a significant portion is spent on compensating the families of deceased military personnel and on treating injuries and mental disorders. Compared to the expenses of military personnel, the cost of killer robots can be lower: robots require no payroll or additional perks, only their purchase price and maintenance charges. Hence, governments could significantly reduce their expenses and allocate the saved funds to other domains such as education, social security, and housing.

The Perils of Killer Robots

Multiple experts, including Elon Musk and Stephen Hawking, have signed an open letter highlighting the perils of killer robots. These experts believe that developing killer robots will trigger a military AI arms race. Beyond that risk, killer robots also give rise to issues such as:

Flawed Distinction Process

Military organizations must note that facial recognition and AI can make mistakes. A striking example is the face unlock feature on Samsung Galaxy S10 phones: several users have been able to fool its facial recognition system with pictures of the owner, or even with the owner’s siblings. If a killer robot fails to recognize an enemy correctly, the consequences can be disastrous. Killer robots may thereby violate the principle of distinction, which requires belligerents to distinguish between civilians and combatants during an armed conflict. If killer robots are deployed in areas where civilians are present, the chances of civilians being attacked increase dramatically. Such incidents would be fatal for civilians and would damage the military’s reputation.
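One way to picture the distinction problem is as a confidence-threshold check. The sketch below is purely hypothetical (the `Detection` type, the `may_engage` function, and the threshold are all invented for illustration): the system defaults to holding fire unless a classification is both "combatant" and very high confidence, and even then a small error rate scales into real misidentifications:

```python
# A minimal, hypothetical sketch of a "distinction" safeguard: the
# system defaults to holding fire unless identification is confident.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier output, e.g. "combatant" or "civilian"
    confidence: float  # classifier confidence in [0, 1]

CONFIDENCE_FLOOR = 0.99  # assumed threshold; real systems would need far stronger checks

def may_engage(d: Detection) -> bool:
    """Return True only for a very-high-confidence combatant detection;
    any ambiguity defaults to holding fire."""
    return d.label == "combatant" and d.confidence >= CONFIDENCE_FLOOR

print(may_engage(Detection("combatant", 0.97)))   # below the floor -> False
print(may_engage(Detection("civilian", 0.999)))   # civilian -> False
```

Even a 99% floor is weak protection: a classifier that is wrong 1% of the time still misidentifies roughly one in every hundred people it encounters, which is exactly the failure mode the Galaxy S10 example illustrates.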

Lack of Accountability

Killer robots raise significant concerns about accountability. Military personnel can be held accountable for mistakes or civilian deaths; killer robots cannot be held accountable in a similar manner. It is also extremely difficult to determine whether a killer robot made a flawed decision because of bugs in its program or because of its AI-based decision-making. Additionally, the families of civilian victims can be given justice through the suspension or imprisonment of military personnel, but law enforcement agencies cannot deliver such justice against killer robots. Moreover, one of the major purposes of accountability and justice is to make an example of a situation or a person, deterring unlawful acts in the future. Robots, however, cannot comprehend human emotions such as compassion, fear, and empathy, and hence cannot respond to such deterrence.

Disputes Regarding Regulations

There is a major dispute about how killer robots should be regulated. The signatories of the open letter mentioned above propose an upstream approach: setting limits on the development of killer robots and other future military technologies that must not be violated. The upstream approach aims to preemptively reduce the dangers that the development of killer robots and autonomous weapons may pose. Others prefer a downstream approach, in which regulations are developed alongside the killer robots themselves as military organizations test all the possibilities and limits. With this approach, regulations would be based on real-life scenarios and experience. Deciding which regulatory approach is more feasible can therefore be extremely complicated.

Poor Judgment in Complex Situations

Military organizations must understand that AI-powered killer robots cannot be trained for every possible scenario. AI systems are usually trained on large volumes of historical data, and many scenarios have no documented data at all; training AI systems for them is extremely difficult. Killer robots will therefore encounter novel situations in the field and make mistakes, with potentially dire consequences. In short, killer robots may be unable to make correct decisions in unfamiliar situations.
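The failure mode above can be shown with a toy model. In this sketch (synthetic data, invented labels, a deliberately naive nearest-neighbour rule), the model is "trained" only on inputs near 0 and 10, yet it still returns a confident-looking answer for an input far outside anything it has seen:

```python
# A toy illustration of why a model trained on historical data can fail
# on unfamiliar inputs: a 1-nearest-neighbour "classifier" always emits
# *some* label, even far outside its training distribution.
# The data and labels are synthetic, purely for illustration.

training_data = [(0.1, "safe"), (0.2, "safe"), (9.8, "threat"), (10.1, "threat")]

def predict(x: float) -> str:
    # Nearest-neighbour prediction: picks the label of the closest
    # training point, with no notion of "I have never seen this before".
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(0.12))    # in-distribution input -> "safe"
print(predict(1000.0))  # far out of distribution, yet it still answers: "threat"
```

The point is not the algorithm but the silence of the failure: nothing in the output distinguishes a well-grounded prediction from a guess about a situation the system was never trained on, which is precisely the danger in the field.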

To prevent the misuse of killer robots and to maximize their potential, governments, militaries, and tech experts need to work together. Government officials, military personnel, and tech experts must collaboratively develop regulations for killer robots. Additionally, until AI becomes sufficiently advanced, military organizations must ensure that killer robots are not assigned to missions where civilians might be present. To sum up, governments and military organizations must consider and prepare for every possible outcome. In this manner, they can develop a holistic approach that ensures the threats posed by killer robots do not outweigh their benefits.

  • Preston Richards

    If a robot can be used to kill then it’s a certainty it will be used to kill.

  • Ulrich Weber

    Anything that can be weaponized will be weaponized otherwise the other side gains an advantage.

  • Liam Harwood

    Robots will be running the streets once 5G has rolled out.

  • Jason Morgan

    I am afraid to say that robots already rule the world.

  • Eddie Grime

    The fact that we can sit back and argue about how we should kill the enemy means it's working.

  • Robert Melendez

    WW3 will end in a drone vs drone arena with many robot casualties.

  • Robert Melendez

    A robot has no conscience. Enough said.

Naveen Joshi

Tech Expert

Naveen is the Founder and CEO of Allerin, a software solutions provider that delivers innovative and agile solutions enabling clients to automate, inspire, and impress. He is a seasoned professional with more than 20 years of experience, including extensive work in customizing open source products for cost optimization of large-scale IT deployments. He is currently working on Internet of Things solutions with Big Data Analytics. Naveen completed his programming qualifications at various Indian institutes.
