By Emily Wolfteich, Senior Industry Analyst at Government Business Council
The GAO’s report on the federal government’s adoption of AI is as comprehensive as it can be – but do we like what we see?
Predictions, drawn from seismic sensors and historical data, of where the next earthquake will hit California. A sensor-based app to assist in physical therapy for wounded veterans. Automated detection of hazardous low clouds for air traffic safety. Across the federal government, agencies are finding increasingly creative ways to use artificial intelligence in support of their missions, whether to query data, predict outcomes, communicate with the public, or automate repetitive tasks. Some 1,241 use cases have been reported across 20 non-defense agencies. Now, after a year of review, the Government Accountability Office (GAO) has released an audit of these use cases and the state of AI in the government as a whole – and it has some suggestions.
Tracking AI use cases within the government is not new. In 2020, Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, mandated that most federal agencies publish a yearly inventory of their AI use cases, available for anyone to see at AI.gov. (For perhaps obvious reasons, defense and intelligence agencies are exempt from showing their hands.) Twenty-seven agencies are represented in the public inventories, with 710 use cases listed (as of September 2023). These use cases vary widely, ranging from sea lion tracking to natural disaster predictions to chatbots, and they draw on an equally wide variety of AI techniques – robotic process automation (RPA), natural language processing (NLP), neural networks, and deep learning, among others. Yet the reporting, at least in its current iteration on AI.gov, is neither clear nor standardized about whether a given tool is planned, in development, or currently in use. In other words, we can’t be sure who is using what, where they’re sourcing their technology, or what data they’re feeding it.
The GAO’s report – the first of its kind, examining both AI acquisition and use as well as the accuracy of agency reports and compliance with federal policy – aims to clear up the picture. It provides recommendations to help agencies standardize their reporting, saying in summary that “federal agencies have taken initial steps to comply with AI requirements in executive orders and federal law; however, more work remains to fully implement these.”
The report relied on agency submissions to the Office of Management and Budget (OMB) to analyze the current state of AI within the government. The volume of reported use cases varies widely across agencies: NASA and the Department of Commerce have bounded ahead of the pack (390 and 285 use cases, respectively), followed by the Departments of Energy (117), Health and Human Services (87), and State (71). Of the use cases whose status agencies reported, most (516) are still in the planning stage, while only 228 are in production. Overall, the report made 35 recommendations, and it singled out Commerce and the General Services Administration (GSA) for particular praise.
However, the report also contended with incomplete and inaccurate data: of the twenty agencies, only five provided comprehensive reports for their use cases, meaning that for over three hundred use cases it was not possible to tell where they stood in their life cycle or production timeline. These submissions also differ from the published inventories, in some cases dramatically. NASA, for instance, submitted almost four hundred use cases to OMB, yet its published inventory lists just 33. Some of that gap may be explained by methodology (NASA says the 33 are projects using AI tools it developed in-house), but it still muddies public understanding of how agencies are actually using AI. Other discrepancies may result from revised understandings within the agencies themselves – the GAO noted that two inventories included AI use cases that were later determined not to be AI at all.
Taking Control
It should be noted that things have changed rapidly since the review began in April 2022. Most notably, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence establishes new standards and guidelines for agencies to adhere to, including safeguards to adopt and further guidance on streamlined procurement. Several pieces of government-wide guidance have also been published – including the extensive (and evolving) AI Guide for Government, released in early December by the GSA’s Center of Excellence – that aim to build a shared understanding of the technology. Yet is it all happening quickly enough?
“AI is driving the car whether we want it to or not,” says Kevin Walsh, director of IT and cybersecurity at GAO. “Is somebody going to take the wheel and put some guardrails around this thing, or is it going to keep doing what it wants?”
The Executive Order is a good start: it sets out guidelines and expectations. But as the GAO report shows, there is still considerable confusion within the federal government about what is and isn’t AI, and about what agencies are expected to report – and to know – about their own tools. And this matters: for minimizing risk, for protecting data privacy, for writing regulations that keep this powerful tool from getting out of hand. The car is gaining horsepower with each new innovation, and those innovations are arriving rapidly.
But the laws and regulations that give RPAs access to personal information to quickly triage tax requests cannot be the same ones that allow neural networks to “learn” and make predictions about who might be a terrorist threat, or a good tenant, given the very real chance that those algorithms are tainted by bias. We need to understand exactly what is being used and how we are using it, and for that we need robust, specific definitions and regulations. There are many exciting examples of how AI could change our world for the better. The VA, for example, is investing in tools that could triage eye patients from a simple photograph of the eye and predict surgical needs for patients with Crohn’s disease, and it is even pursuing a landmark project with the Department of Energy to identify veterans at risk of suicide. AI has the potential to clean our waterways, improve our cities, and heal us wherever we are. But first we have to climb, quickly and decisively, into the driver’s seat.
To read additional thought leadership from Emily, connect with her on LinkedIn.
Source and chart images: https://www.gao.gov/products/gao-24-105980