By David Hutchins and Emily Wolfteich, Senior Industry Analysts at Government Business Council
AI is an increasingly hot topic across both the public and private sectors as organizations weigh the world of possibilities that this tech can create – and the associated risks. Within the military, this technological potential means an edge on the battlefield, better and faster intel analysis and decision support for leaders, and improved training and strategic planning operations. However, it can also introduce vulnerabilities that can impact national security, warfighting capabilities, and soldier safety. As the Pentagon begins to explore how it can leverage artificial intelligence, we take a quick look at the difference between traditional and generative AI and what this might mean for the future of the military.
What is Generative AI?
Generative AI differs from “traditional” AI in a few key ways. “Traditional” AI – the kind we think of as useful for automation or data analytics – follows rules and patterns that tell it what to do. Generative AI, on the other hand, learns from enormous reserves of data to create new content that resembles something a human would write or make – think ChatGPT or AI-generated art. It does this using neural networks, loosely modeled on the human brain, that build their own representations of patterns and relationships, which allows the model to train in a relatively unsupervised way on large pools of unlabeled data. Generative AI can trawl through terabytes of data quickly and provide summaries and some analysis; it can respond to questions or offer suggestions based on what it has learned; and it can, in theory, react quickly to cybersecurity threats or anomalies flagged by surveillance equipment. For the military, where rapid analysis can make crucial differences, this “brain” that never gets tired could be a powerful tool.
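To make the idea of “learning patterns from unlabeled data and then generating new content” concrete, here is a minimal, illustrative sketch using the open-source Hugging Face transformers library and the small, publicly available GPT-2 model. The model and prompt are our own assumptions for illustration; nothing here reflects tools the Pentagon has said it uses.

```python
# Illustrative sketch only: a small, publicly available language model (GPT-2)
# generating new text by repeatedly predicting the next token.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2, a small generative model.
generator = pipeline("text-generation", model="gpt2")

prompt = "The future of military logistics will depend on"
# The model continues the prompt with text it has never seen verbatim,
# produced from patterns learned during unsupervised pretraining.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```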
What are the risks/challenges?
Though both traditional and generative AI offer exciting opportunities to use and understand data in revolutionary ways, both carry significant risks. Traditional and generative AI alike work with, and learn from, enormous datasets. If the integrity of that data is compromised – if it is not accurate, complete, and consistent – any errors or omissions are likely to be replicated and reinforced. Generative AI, for example, can produce synthetic data based on past learnings to augment or replace real data, and that synthetic data is in turn used to train AI models. If the synthetic data is derived from inaccurate, biased, or unreliable real data, those flaws carry forward. Generative AI can also “hallucinate,” producing incorrect responses that still sound credible. Relying too heavily on these tools without appropriate human oversight poses real national security risks – as does the possibility that adversaries will use the same tools against American interests.
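The synthetic-data risk shows up even in a toy numeric example. The sketch below is a deliberately simplified assumption of ours, using NumPy rather than any real data pipeline: it fits a simple model to a skewed “real” sample and then generates synthetic data from it, and the skew carries straight through.

```python
# Toy illustration: synthetic data inherits the flaws of the real data it is
# modeled on. A simplified example, not any actual DoD data pipeline.
import numpy as np

rng = np.random.default_rng(seed=0)

# Suppose the "real" measurements are biased: a sensor reads ~5 units high.
true_value = 100.0
real_data = rng.normal(loc=true_value + 5.0, scale=2.0, size=1_000)

# A generator fit to that data simply learns its mean and spread...
synthetic_data = rng.normal(loc=real_data.mean(), scale=real_data.std(), size=10_000)

# ...so the synthetic data reproduces the same bias.
print(f"True value:          {true_value:.1f}")
print(f"Real data mean:      {real_data.mean():.1f}")      # ~105, biased
print(f"Synthetic data mean: {synthetic_data.mean():.1f}")  # ~105, bias replicated
```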
What is Task Force Lima?
Announced by the Department of Defense (DoD) on August 10, 2023, Task Force Lima is a generative AI and large language model (LLM) task force operated through the Chief Digital and Artificial Intelligence Office’s (CDAO) Algorithmic Warfare Directorate. Task Force Lima was created to focus the Department’s exploration and responsible fielding of generative AI capabilities and to examine how adversaries might use generative AI to harm the United States.
Under the direction of DoD Chief Digital and Artificial Intelligence Officer Craig Martell, Task Force Lima will assess, synchronize, and employ generative AI capabilities across the DoD. At the same time, the task force must ensure the Department is able to design, deploy, and use generative AI technologies responsibly and securely. Task Force Lima will also be responsible for providing guidance and recommendations on generative AI to policy-making bodies.
How will the DoD use Task Force Lima?
Because generative AI is still at a relatively nascent stage, many questions remain about how the technology should be used, especially in military applications. A key objective of Task Force Lima is to identify a set of use cases where generative AI can aid the DoD in carrying out its functions. The Department is already looking to leverage generative AI to enhance its capabilities in areas such as administrative operations, strategic decision-making, and warfighting.
Administrative Operations
Generative AI could prove especially useful in the category of administrative operations. Effectively processing the enormous amount of data held by the DoD can be extremely time-consuming and has been an operational challenge for the department. A well-trained generative AI could be used to rapidly locate files and data, filter and select the most valuable information, respond to questions, and provide text summaries of lengthy documents.
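As an illustration of the document-summarization use case, the sketch below uses the open-source Hugging Face transformers library with a small, publicly available summarization model. The model choice and the sample report text are assumptions for illustration only, not tools or documents named by the Department.

```python
# Illustrative sketch: condensing a lengthy document into a short summary
# with a small, publicly available model. Not a DoD system or workflow.
# Requires: pip install transformers torch
from transformers import pipeline

# A compact open summarization model, chosen purely for illustration.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

long_report = (
    "The quarterly logistics review covers fuel consumption, spare-parts "
    "inventory, and transport readiness across three regional depots. "
    "Fuel usage rose 8 percent over the prior quarter, while two depots "
    "reported shortages of high-demand spare parts."
)

# Produce a short abstract of the report text.
summary = summarizer(long_report, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```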
Strategic Decision-making
Generative AI can also group information from various datasets and quickly identify patterns. This allows military personnel to draw more accurate conclusions and create response plans based on a more complete picture of a situation. Generative AI can also provide military personnel with a more detailed understanding of an area of operation by collecting and analyzing reports, documents, news, and other information sources. Ultimately, the DoD hopes to use the rapid analytical abilities of generative AI to augment decision-makers, especially in high-stress situations where quick response times are essential.
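One simple way to picture “grouping information from various datasets and identifying patterns” is document clustering. The sketch below uses classic, non-generative clustering from scikit-learn as a stand-in for that kind of pattern-finding; the report snippets and the two-cluster setup are invented purely for illustration.

```python
# Illustrative sketch: grouping short report snippets by topic so analysts
# can see patterns across sources. Uses classic clustering (scikit-learn)
# as a simple stand-in for the pattern-finding described above; not a DoD tool.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "Convoy delayed by road damage near the northern supply route.",
    "Bridge repairs needed on the northern supply route after flooding.",
    "Field hospital reports shortage of surgical supplies.",
    "Medical resupply request submitted for forward clinic.",
]

# Convert each snippet into a TF-IDF vector, then group similar snippets.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, reports)):
    print(label, "-", text)
```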
Warfighting
While the risks are currently too high for AI to direct kinetic warfare (i.e., autonomous firepower), this may well be a goal on the DoD’s horizon. As autonomous and semi-autonomous systems become increasingly present in the U.S. military, generative AI may one day be used to inform the decisions those systems make.
Closing thoughts
Generative AI, although still relatively new, could one day benefit nearly every aspect of military operations. From logistics to medical care, from training personnel to guiding autonomous systems, AI will be integral to the future of military operations and decision-making. Task Force Lima’s cautious but curious approach is an important step in the U.S. military’s adoption of AI and a critical effort in keeping pace with potential adversaries. Despite the challenges, the DoD must keep up with developments in AI or risk losing its technological edge.
To read additional thought leadership from Emily and David, connect with them on LinkedIn.
Photo 1 – by Wiyre Media on Flickr
Photo 2 – by Stockasso Media on Envato Elements