The Possible Cost of GEOINT Analysts Using LLMs
I remember the 1980s, when WordPerfect and WordStar introduced built-in spell checkers. What we now take for granted was surprisingly controversial back then. Fellow educators worried that automating spell checking would make students lazy and careless. Since then, spell checkers have evolved into grammar checkers that incorporate AI. Rather than simply comparing words to a dictionary, modern word processing tools can identify misused words even when they are spelled correctly.
Analogous to the spell checker controversy of the 1980s, Large Language Models (LLMs) like ChatGPT are poised to transform how GEOINT analysts collect information and produce assessments. Potential benefits include faster workflows, automated summarization, and writing support. This article reflects my own experience using an LLM, specifically ChatGPT (OpenAI, 2025), to help draft this text. My aim was to explore both the efficiencies gained and the potential cognitive costs when analysts rely too heavily on these tools. And yes, I’m well aware that using an LLM to assist in writing is controversial in academic circles.
What prompted this step toward academic heresy was a recent study by Kosmyna et al. (2025) titled Your Brain on ChatGPT. The research offers compelling evidence of the risks associated with using LLMs in analytical work. Through EEG analysis and user interviews, the study found that while LLMs can ease immediate cognitive load, they may also suppress deeper engagement, memory retention, and critical thinking. These findings are especially concerning in GEOINT, where analytic depth, contextual understanding, and reasoning under uncertainty are essential (Hoffman et al., 2011).
One of the clearest findings from Kosmyna et al. (2025) is that users who relied on LLMs engaged in less effortful thinking than those using web search or “brain-only” methods. Participants read what the LLM produced but reported minimal critical thought about the content. This aligns with Hoffman et al. (2011), who argue that intelligence analysis involves iterative, messy, and often opaque reasoning under indeterminate conditions, and that overreliance on machines risks bypassing these necessary cognitive steps.
Kosmyna et al. (2025) also found that LLM users exhibited lower memory retention. For GEOINT analysts, this is concerning. Analytic expertise depends on internalizing spatiotemporal patterns, adversary behaviors, and context over time. Without this internal foundation, analysts may struggle to detect anomalies or anticipate second- and third-order effects, or may simply accept AI outputs wholesale under time pressure. The tendency to offload cognitive effort to AI tools, what some researchers call “metacognitive offloading,” may leave analysts outside the human-machine decision loop when it matters most.
Furthermore, LLMs optimize for coherence and user satisfaction, not truth. This means they may reinforce dominant narratives, contributing to premature convergence and confirmation bias. Analysts could fall into the trap of “mirror imaging”—assuming adversaries think as we do—rather than maintaining competing hypotheses and seeking disconfirming evidence (Hoffman et al., 2011). AI-generated coherence can mask complexity and ambiguity, both of which are critical dimensions of intelligence reasoning.
Most troubling may be the erosion of analytic ownership. Kosmyna et al. (2025) found that LLM users felt less connection to the ideas they submitted. In GEOINT, where judgments must be defended and documented, such detachment weakens accountability, which is especially concerning when the stakes are high. As Dekens (2025) warns in the OSINT context, we risk becoming “operators of automation” instead of analysts if we offload too much cognitive labor to the machine.
There is irony here. While I am highlighting the cognitive risks of LLMs, I also found real value in using one during the analysis and writing process. I was comfortable doing so because of my decades in the GEOINT field and prior research on how automation affects human-machine teams. To be clear, the LLM did not generate my ideas. It served as a tool to help integrate content, suggest refinements, and streamline the writing process. In simple terms, I treated AI not as an unquestioned authority but as a coworker, a partner in a thoughtful exchange between human insight and machine assistance.
So how do we prepare professionals to use AI as a coworker? To preserve and strengthen GEOINT expertise while working with LLMs, here are a few ideas for professional training and education:
- Create scenarios where learners must critique, revise, or challenge LLM outputs.
- Require learners to recall and generate difficult analytic assessments without an LLM. This strengthens internal schema and long-term retention.
- Have learners compare and critique multiple plausible explanations, especially ones generated by LLMs. This deepens analytic rigor and guards against bias.
- Ask learners to explain their reasoning in a discussion, especially when agreeing or disagreeing with AI output. This restores metacognitive control.
- Let junior analysts work with both human mentors and LLMs, gradually shifting analytic responsibility as expertise develops.
Like spell checkers, LLMs are here to stay. They clearly offer significant productivity benefits in geospatial analysis, but they are not a “free lunch”: used inappropriately, they may erode the very cognitive processes that make GEOINT analysis effective. As one study warned, “Confidence in AI replaces confidence in self—and with it, the thinking disappears” (Dekens, 2025). LLMs must be treated not as oracles to obey, but as tools of the profession. With the right educational foundation, training, and methodologies, this technology can augment—rather than atrophy—the minds of GEOINT professionals.
References
Dekens, N. (2025, April 2). The slow collapse of critical thinking in OSINT due to AI. Dutch OSINT Guy. https://www.dutchosintguy.com/post/the-slow-collapse-of-critical-thinking-in-osint-due-to-ai
Hoffman, R. R., Henderson, S., Moon, B., Moore, D. T., & Litman, J. A. (2011). Reasoning difficulty in analytical activity. Theoretical Issues in Ergonomics Science, 12(3), 225–240. https://doi.org/10.1080/1464536X.2011.564484
Hoffman, R. R. (2010). Accelerated proficiency and facilitated retention: Research and development roadmap [White paper]. Institute for Human and Machine Cognition.
Kapur, M., & Bielaczyc, K. (2012). Designing for productive failure. Journal of the Learning Sciences, 21(1), 45–83.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task (arXiv Preprint No. 2506.08872). arXiv. https://doi.org/10.48550/arXiv.2506.08872
McCowan, M. (2025). The surprising history of spell checkers—and what it means for AI-anxious editors. Inkbot Editing. Retrieved July 16, 2025, from https://www.inkbotediting.com/blog/the-surprising-history-of-spell-checkers-and-what-it-means-for-ai-anxious-editors
Mollick, E. (2024). Co-Intelligence: Living and working with AI. Portfolio (Penguin Random House).
Mulani, N. (2018, September 11). Humans and machines meet in the missing middle. CIO. https://www.cio.com/article/222256/humans-and-machines-meet-in-the-missing-middle.html
OpenAI. (2025). ChatGPT (June 2025 version) [Large language model]. https://chat.openai.com/
Roberts, M. (2025). Embracing productive struggle: The key to success for college students. University Center for Teaching and Learning. https://teaching.pitt.edu/resources/embracing-productive-struggle-the-key-to-success-for-college-students