This is something of an experiment. In preparation for a presentation at a conference on Responsible AI and Intelligence at Georgetown University, ChatGPT was asked to write on how I would assess the implications over the next decade of widespread adoption by the US Intelligence Community of tools for analysts powered by artificial intelligence (AI).
By Gregory F. Treverton
Note: The views expressed here are those of the author (and ChatGPT) and do not necessarily represent or reflect the views of SMA, Inc.
In general, I was pretty impressed by ChatGPT’s work. Since it could only know what I had written about, it didn’t include activity-based intelligence, about which I have spoken but not written much. It was slightly prone to platitudes, but it didn’t include any “howlers”—plain untruths or wild statements. This article summarizes what it wrote, adjusted by me where I thought it needed embellishment. You can read ChatGPT’s raw output by clicking here: https://smawins.com/news/ai-and-intelligence-chatgpt-output/
The Evolutionary Arc of Intelligence Analysis
Cold War intelligence focused on puzzles, not mysteries: issues that had definitive answers, like how many warheads a Soviet SS‑19 carried. Because our adversary was secretive, we had to apply exotic methods to try to solve the puzzles. Because information was in short supply, intelligence relied heavily on human intelligence (HUMINT) and rudimentary signals intelligence (SIGINT). In effect, analysts became human “computers,” synthesizing and interpreting data. Post-9/11, the focus shifted to network analysis and counterterrorism, influenced by the availability of massive data sets and the necessity for quick interpretation.
Now, as we sail through the digital age, artificial intelligence (AI) presents itself as the latest paradigm shift. This technology promises revolutionary benefits but also brings complex challenges. Given AI’s potential for predictive analytics, automated content analysis, and real-time monitoring, its impact on the US Intelligence Community (IC) will be transformative. Yet, the moral, ethical, and operational dilemmas it creates can hardly be ignored.
The Changing Nature of Analysis
Efficiency and speed. AI’s foremost contribution will be in automating mundane tasks that typically consume much of an analyst’s time. For instance, machine learning algorithms can sift through vast amounts of data in real-time, identifying patterns or anomalies that could suggest a potential security threat. As I grow older, I’m more and more skeptical of causation in human affairs, preferring correlation, and machines will be wonderful at identifying correlations; some of them will be spurious, but some will tell analysts to examine connections they had not imagined. This automation will free up analysts to focus on complex tasks requiring nuanced understanding and strategic foresight.
Machines also remember; humans forget. Recall Don Rumsfeld’s use of the distinction among “known knowns,” “known unknowns,” and “unknown unknowns.” I’ve added the fourth box, “unknown knowns”—things we know but don’t know we know, like those Arab men taking flying lessons in the years before 9/11. AI won’t forget such particulars, but rather will remind analysts of what they knew but forgot.
Predictive capabilities. Machine learning models trained on historical data will enable predictive analytics. This will be particularly valuable in anticipating the moves of adversaries or identifying emerging threats. To be sure, it is essential to remember that these models are only as good as the data they are trained on. Recall the line from the beginning of the computer era: “garbage in, garbage out.” And we know that AI cannot imagine or create—at least not yet.
Looking further out, within the next five years the intelligence community will mature technologies for real-time prediction and analysis of global events. AI and machine learning will play a central role, collating disparate data streams such as satellite imagery, social media activity, economic indicators, and climate data. These real-time analyses could provide early warnings of mass migrations, economic downturns, or imminent military actions, thereby allowing policymakers to be proactive rather than reactive.
Multidimensional analysis. AI-powered tools can integrate information from diverse sources—open source, HUMINT, SIGINT, geospatial intelligence (GEOINT), and others—providing a comprehensive picture that human analysts might overlook. Neural networks can analyze satellite imagery to discern relevant military movements or evaluate social media chatter to gauge public sentiment in a specific region. For me, the biggest innovation in intelligence from the wars in the Middle East was activity-based intelligence (ABI), which depended on geolocating intelligence from a variety of INTs and storing it lest a later event make it relevant. AI will be a natural for this analytic approach.
The Risks: Ethical and Operational
For all the benefits, some of the risks already are apparent, and more will arise as the technology develops:
Biases and reliability. AI is surely not impervious to human prejudices. If the training data incorporates biases, the AI tool will reflect those biases in its output, potentially leading to inaccurate or skewed intelligence assessments. AI does not innately have a moral or social filter, and we have already seen too many cases in which AI, in effect, lied to produce an answer or descended into hateful speech or imagery. Moreover, too much reliance on AI’s predictive models may engender a false sense of confidence, possibly causing analysts to overlook other relevant information. In the short run, though, I’m inclined to think analysts will trust AI too little, not too much.
Ethical dilemmas. While these tools may enhance national security, they also risk infringing on civil liberties and privacy rights. Striking a balance between security and individual freedom will require keeping humans in the loop and will necessitate new approaches to oversight mechanisms and possibly even legislation.
Security concerns. AI systems themselves can be targets for cyberattacks. Adversaries could tamper with training data or introduce malicious software to skew intelligence outputs. Protecting these systems will necessitate advanced cybersecurity measures and constant vigilance.
Institutional Implications
Training and skillset. The integration of AI will require a shift in the skillsets needed within the intelligence community. Analysts will need to be versed not just in geopolitics and traditional analytical techniques, but also in data science and machine learning concepts. This creates a talent management challenge but also an opportunity to cultivate interdisciplinary experts. The Intelligence Community has already confronted this challenge in seeking to attract and benefit from data scientists in intelligence analysis. AI will compound that challenge, impelling intelligence agencies to imagine new career patterns for high-priced technical talent: people who might come to government service for patriotic reasons, or to see how the other half lives, but wouldn’t stay for an entire career.
Collaboration across agencies and INTs. AI’s capabilities could foster greater collaboration among different arms of the intelligence community. Sharing machine learning models and data sets across agencies could enable a more cohesive and robust analytical framework, thereby enhancing national security.
Today, human analysts must work painstakingly to synthesize data from multiple sources: SIGINT, HUMINT, GEOINT, and more. In five years, advanced AI algorithms will be capable of fusing these data types into a comprehensive whole. This multisource data fusion will enable a level of cross-disciplinary insight that was previously unattainable. Imagine an analytic model that combines intercepted communications, drone footage, and insider testimonials to build a complete and continually updated picture of a hostile entity’s capabilities and intentions.
Cognitive augmentation tools for analysts. Cognitive computing, a field related to AI, aims to emulate human problem-solving and decision-making skills. Within the next five years, cognitive computing could be used to develop tools that augment an analyst’s cognitive processes. These tools would help analysts identify the most relevant data, consider alternative hypotheses, assess the implications of new evidence, and even gauge the likely effectiveness of various policy responses. This kind of cognitive augmentation could significantly reduce human error and oversight in strategic analysis.
Dynamic simulation and scenario planning. Advanced modeling and simulation capabilities will allow for real-time scenario planning that incorporates an ever-changing influx of new data. These models will be far more nuanced and adaptable than today’s, encompassing multiple variables and dependencies that can shift in real time. Analysts and decision-makers could use these tools to “war-game” different strategies, providing invaluable insights into the probable outcomes of various policy options.
Semantic analysis for deep context. As we move forward, AI tools will not just be able to parse text but also understand context, sentiment, and cultural nuance. Semantic analysis technologies will enable analysts to “read between the lines” of public statements, social media chatter, and intercepted communications, vastly improving our understanding of foreign actors’ intentions and public sentiment. This would be particularly useful in analyzing propaganda, or in understanding the social dynamics at play in given situations.
Policy making and decision support. Decision-makers will come to rely heavily on AI-supported intelligence briefings. While this has advantages in terms of speed and possibly accuracy, there’s the risk of reducing complex geopolitical issues to algorithmic outputs. Therefore, humans must always remain in the decision-making loop to provide context and ethical considerations. The challenge will be for them to understand the algorithms well enough to trust them, and to explain them to policymakers who understand far less about AI than they do.
Striking a Balance
As we look to the next decade, the US Intelligence Community stands at a transformative juncture. AI promises not only greater efficiency but also more effective threat identification and response capabilities. Yet, this technology is not a panacea; it carries inherent risks that could compromise the quality of intelligence and raise ethical concerns.
A balanced, thoughtful approach to AI adoption is therefore imperative. This includes rigorous validation of machine learning models, multi-stakeholder discussions on ethical guidelines, and continuous training for analysts to adapt to the new technological landscape. The intersection of artificial intelligence and intelligence analysis is not merely a technical evolution; it’s a complex interplay of capabilities, ethics, and risks that will shape the future of national security. Managed wisely, this convergence can be a potent asset; managed poorly, it could become a liability.
Conclusion
Here is ChatGPT’s conclusion, which I very much share: The next decade will reveal which path the intelligence community takes. Let’s aim for a future where technology serves as a tool for human analysts, not a substitute, making the intelligence apparatus more effective, ethical, and accountable than ever before.