Musing 71: Neuro-Symbolic AI for Military Applications
A paper with some interesting strategic points
Today’s paper: Neuro-Symbolic AI for Military Applications. Hagos and Rawat. 17 Aug 2024. https://arxiv.org/pdf/2408.09224
Neurosymbolic AI is an emerging field that seeks to combine the strengths of symbolic AI and neural networks. It aims to bridge the gap between traditional symbolic reasoning, which involves using explicit rules and logic to solve problems, and the more recent approaches of neural networks, which excel at pattern recognition, learning from large datasets, and generalizing across tasks.
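To make that bridging concrete, here is a tiny sketch of the basic pattern. This is purely my own illustration, not anything from the paper: a neural model proposes soft predictions, and explicit symbolic rules veto any prediction that violates known constraints.

```python
# A minimal, hypothetical sketch of the neurosymbolic pattern:
# a neural model produces soft predictions, and explicit symbolic
# rules veto predictions that violate known constraints.

def neural_predict(image_features):
    # Stand-in for a trained network; returns label -> probability.
    return {"truck": 0.55, "tank": 0.40, "building": 0.05}

# Symbolic knowledge: explicit, human-readable rules.
RULES = [
    # Rule: ground vehicles cannot appear on water.
    lambda label, context: not (
        label in {"truck", "tank"} and context["terrain"] == "water"
    ),
]

def neurosymbolic_predict(image_features, context):
    scores = neural_predict(image_features)
    # Keep only labels consistent with every symbolic rule.
    consistent = {
        label: p for label, p in scores.items()
        if all(rule(label, context) for rule in RULES)
    }
    return max(consistent, key=consistent.get)

print(neurosymbolic_predict(None, {"terrain": "water"}))  # -> "building"
```

The neural component alone would have answered "truck"; the rule layer overrides it with the most probable label that is still logically consistent with the context.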
I am a fan and a believer, perhaps because I work in this field. So, it would seem, are the authors of today’s paper. To quote from the abstract itself: “These systems have the potential to be more impactful and flexible than traditional AI systems, making them well-suited for military applications. This paper comprehensively explores the diverse dimensions and capabilities of Neuro-Symbolic AI, aiming to shed light on its potential applications in military contexts.”
Let’s begin with the obvious: the use and rise of AI in the military domain itself, which is defined broadly here. The authors cite a range of applications, from intelligence gathering and surveillance to autonomous weapons systems. Early in the article, they specifically mention two: cyberwarfare (and cybersecurity) and autonomous drones. On the former, machine learning has proven critical for the defense community as cyberattacks have become increasingly prevalent in the digital age.
One aspect of the paper I should point out at the start is that it’s not designed to present original research, and it’s not really a survey either. It’s more of a perspective piece, presenting the authors’ thoughts on how neurosymbolic AI can impact military applications. The paper’s structure is captured in the following figure:
I won’t go too deeply into either symbolic AI or connectionist AI, as most readers of this substack are likely familiar with the basics. Anytime you hear of rule-based or knowledge-based systems, they likely fall in the camp of the former, belonging to a more traditional, theoretically well-explored school of thought. Connectionist AI is broadly defined, and while the term is not commonly used in computer science today, it has come to encompass deep learning and, of course, large language models.
How does a neurosymbolic system ‘work’? Implementations of such systems are complex, but the figure below offers a good abstract architecture. As shown therein, the combination of expert knowledge with the ability to refine that knowledge through iterative learning is essential to building adaptable and effective systems. Expert knowledge serves as a robust initial foundation, while iterative refinement allows the model to adapt to new information and continuously improve its performance. This loop is important for enabling the model to adjust to changing conditions, improve accuracy, and resolve inconsistencies that arise when integrating neural and symbolic representations.
The continuous learning loop enables the AI to adapt to changing environments and incorporate new information. Furthermore, the combined symbolic and neural representation provides insight into the AI’s reasoning and decision-making, making it more transparent and interpretable to humans. Many applications of neurosymbolic AI have now been documented, including commonsense reasoning, healthcare, finance, and robotics.
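Abstractly, the loop described above might look like the following sketch. This is my own hypothetical rendering, not the authors’ architecture: expert rules seed the system, a neural component learns from incoming data, and each iteration surfaces disagreements between the two so that either side can be refined.

```python
# Hypothetical sketch of the continuous learning loop: expert knowledge
# provides an initial rule base; the neural component is retrained as new
# data arrives; inconsistencies between rule verdicts and model outputs
# are surfaced and used to refine both sides.

expert_rules = {"low_altitude_fast": "threat"}   # seed knowledge

def model_predict(observation):
    # Stand-in for a neural classifier retrained each iteration.
    return "threat" if observation["speed"] > 200 else "benign"

def rule_predict(observation):
    key = ("low_altitude_fast"
           if observation["altitude"] < 500 and observation["speed"] > 150
           else None)
    return expert_rules.get(key)

def reconcile(stream):
    for obs in stream:
        neural, symbolic = model_predict(obs), rule_predict(obs)
        if symbolic and neural != symbolic:
            # Disagreement drives refinement: relabel for retraining,
            # or flag the rule for expert review.
            yield obs, neural, symbolic

stream = [{"altitude": 300, "speed": 180}, {"altitude": 9000, "speed": 250}]
for case in reconcile(stream):
    print("refine on:", case)
```

The point of the sketch is the feedback path: disagreements are not errors to be hidden but training signal, which is precisely what makes the loop adaptive and the system’s reasoning auditable.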
One reason the authors promote neurosymbolic AI as an important military asset is the concurrent rise of autonomy in military weapons systems. Autonomy here refers to the ability of a weapon system, such as a vehicle or drone, to operate and make decisions with some degree of independence from human intervention. This involves advanced technologies, often including AI, robotics, and ML, that enable weapons to perceive, analyze, plan, and execute actions in dynamic and complex environments. One of the most significant ways AI is changing the military world is by enabling the development of autonomous weapons systems, which the authors classify into lethal and non-lethal autonomous weapons systems (LAWS and NLAWS).
The authors state that integrating neurosymbolic AI with LAWS holds the potential for significant advantages in addressing decision-making complexity and adaptability. However, this integration also amplifies existing concerns and introduces additional ethical, technical, and legal challenges. One challenge is establishing responsibility and accountability for the use of lethal force, since international treaties implicitly require human judgment in lethal decision-making.
What are some of the other military applications of neurosymbolic AI? The authors list a few, and I’ve selected and summarized three here:
Tactical Decision Support. Through the integration of neurosymbolic AI, military commanders gain immediate access to real-time data analysis and strategic understanding, enabling more informed and adaptable decision-making on complex battlefields. Expert knowledge can be encoded into AI systems to assist commanders in strategic planning. This not only improves mission success and reduces collateral damage but also protects soldiers by making the identification of potential threats and opportunities more accurate.
Communication and Coordination. Expert knowledge in military command and control can be used to design advanced AI systems that facilitate effective communication and coordination among different units, enhancing overall operational efficiency.
Logistics and Resource Management. Military logistics experts can provide knowledge about efficient resource allocation and supply chain management. Neurosymbolic AI-driven systems can use this expertise to optimize logistics, ensuring that resources are deployed effectively, and with a higher degree of precision, during operations (see the sketch after this list).
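As a toy illustration of that last item, entirely my own construction and not from the paper, one can imagine a learned demand forecast feeding an allocator that enforces hard symbolic constraints such as priorities and minimum allocations:

```python
# Toy illustration: a learned component forecasts demand, and a symbolic
# layer enforces hard allocation constraints (priorities, minimums).

def forecast_demand(unit):
    # Stand-in for a trained regressor over historical consumption.
    history = {"alpha": [90, 110, 100], "bravo": [40, 60, 50]}
    h = history[unit]
    return sum(h) / len(h)

SUPPLY = 130  # total units of fuel available

# Symbolic constraints, stated as explicit rules.
PRIORITY = ["alpha", "bravo"]        # alpha is resupplied first
MINIMUM = {"alpha": 50, "bravo": 20}  # guaranteed floor per unit

def allocate(supply=SUPPLY):
    remaining = supply
    plan = {}
    # Guarantee minimums first (hard constraint).
    for unit in PRIORITY:
        plan[unit] = MINIMUM[unit]
        remaining -= MINIMUM[unit]
    # Distribute the rest toward forecast demand, in priority order.
    for unit in PRIORITY:
        give = min(max(0, forecast_demand(unit) - plan[unit]), remaining)
        plan[unit] += give
        remaining -= give
    return plan

print(allocate())  # -> {'alpha': 100.0, 'bravo': 30.0}
```

The division of labor mirrors the general pattern: the neural side estimates an uncertain quantity, while the symbolic side guarantees that no allocation ever violates doctrine or capacity.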
The authors state that militaries worldwide are investing heavily in AI research and development to gain an advantage in future wars. This is not surprising. AI has the potential to enhance intelligence collection and accurate analysis, improve cyberwarfare capabilities, and deploy autonomous weapons systems. The figure below shows some of the main military applications of Neuro-Symbolic AI. These applications offer the potential for increased efficiency, reduced risk, and improved operational effectiveness.
Some readers may feel that a topic such as this deserves a thorough treatment of ethical and moral issues, and I’m happy to say that the authors dedicate entire sections to it. One reason to seriously consider neurosymbolic AI in such applications is that it may help alleviate certain safety concerns, particularly the risk that non-interpretable AI outputs lead to significant breaches of safety. The authors also detail some technical challenges.
One of the more substantive parts of the paper is Section IV.B, where the authors describe some potential case studies, especially from the perspective of funded programs. As a researcher whose work is supported by funded contracts and grants, I was fascinated by this part of the article, though others may not be. Here’s a snapshot:
Assured Neuro Symbolic Learning and Reasoning (ANSR). The Defense Advanced Research Projects Agency (DARPA) is funding the ANSR research program, which aims to develop hybrid AI algorithms that integrate symbolic reasoning with data-driven learning to create robust, assured, and trustworthy systems. The authors state that ANSR-powered AI systems could be used to develop autonomous systems that make complex decisions in uncertain and dynamic environments. Additionally, they could be instrumental in developing new tools for intelligence analysis, cyber defense, and mission planning.
Deep Green (DG) Concept. DARPA’s DG technology helps commanders discover and evaluate more action alternatives and proactively manage operations. The concept differs from traditional planning methods in that it creates a new Observe, Orient, Decide, Act (OODA) loop paradigm. Instead of relying on a priori staff estimates, DG maintains a state-space graph of possible future states and uses information on the trajectory of the ongoing operation to assess the likelihood of reaching each of them. DG is based on the idea that commanders need to think ahead and anticipate the possible consequences of their decisions before making them, which is difficult in the complex and fast-paced environment of the modern battlefield. DG aims to help commanders by providing tools that facilitate faster real-time decision-making and that help them identify and assess the risks and benefits of each operation.
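Here is a toy sketch of that state-space idea, based on my own reading of the description above rather than on DARPA’s actual implementation: maintain a graph of possible future states with prior probabilities, then re-weight the likelihood of reaching each as observations from the ongoing operation arrive.

```python
# Toy sketch of the DG idea as described above: a graph of possible future
# states with prior probabilities, re-weighted as observations arrive.
# All states, observations, and numbers here are invented for illustration.

# Possible futures and prior probabilities from the current state.
GRAPH = {
    "current": {"enemy_holds_bridge": 0.5, "enemy_withdraws": 0.3,
                "enemy_flanks": 0.2},
}

# How likely each observation is under each future (a crude likelihood model).
LIKELIHOOD = {
    "recon_reports_movement_west": {"enemy_holds_bridge": 0.1,
                                    "enemy_withdraws": 0.3,
                                    "enemy_flanks": 0.6},
}

def update(priors, observation):
    # Bayesian re-weighting of future states given the observed trajectory.
    like = LIKELIHOOD[observation]
    posterior = {s: p * like[s] for s, p in priors.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

beliefs = GRAPH["current"]
beliefs = update(beliefs, "recon_reports_movement_west")
print(beliefs)  # flanking is now the most likely future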
Real-time Adversarial Intelligence and Decision-making (RAID). The RAID program is another example of neurosymbolic AI in military applications. RAID, a DARPA research program, focuses on developing AI technology to assist tactical commanders in predicting enemy tactical movements and countering their actions; its capabilities include understanding enemy intentions, detecting deception, and providing real-time decision support. RAID achieves this by combining AI planning with cognitive modeling, game theory, control theory, and ML. The authors argue that these capabilities have significant value in military planning, the execution of operations, and intelligence analysis.
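To give a flavor of the game-theory ingredient, here is a minimal sketch in the spirit of RAID, again my own illustration with invented actions and payoffs: model the engagement as a zero-sum matrix game, predict the adversary’s best response to each of our options, and pick the action whose worst case is best.

```python
# Toy sketch of game-theoretic adversary prediction: model the engagement
# as a zero-sum matrix game and assume the adversary best-responds.
# Actions and payoffs are invented purely for illustration.

# Payoffs to us (rows = our actions, cols = enemy actions).
PAYOFF = {
    ("advance", "ambush"): -3, ("advance", "retreat"): 5,
    ("hold",    "ambush"):  1, ("hold",    "retreat"): 0,
}
OURS = ["advance", "hold"]
THEIRS = ["ambush", "retreat"]

def best_enemy_response(our_action):
    # The enemy minimizes our payoff (zero-sum assumption).
    return min(THEIRS, key=lambda e: PAYOFF[(our_action, e)])

def maximin_choice():
    # Choose the action whose worst-case payoff is best.
    return max(OURS, key=lambda a: PAYOFF[(a, best_enemy_response(a))])

choice = maximin_choice()
print(choice, "->", best_enemy_response(choice))  # hold -> retreat
```

A real system would layer cognitive modeling and learned behavior models on top of this, but the maximin core is what “anticipating the adversary” reduces to in the simplest case.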
In closing, I recognize that this might seem like a heavy topic and one that many researchers shy away from. The article does not present new research, theory, or experiments, so many researchers and practitioners, especially in industry, may find it of only limited interest. But more conversation is good, and we should all be advocating for the safe and responsible use of AI in military systems, whether for defense or offense. Neurosymbolic AI doesn’t solve all the problems, and it certainly can’t substitute for a substantive discussion of the ethical and moral aspects, but it is a better option than a purely neural or purely symbolic approach.