US General Sparks Debate by Using ChatGPT for Military Decisions

Written By Lexx Thornton

In an era defined by the dizzying acceleration of artificial intelligence, the United States military finds itself wrestling with the role of powerful, consumer-grade tools like ChatGPT in its command structure. The debate reached a flashpoint when Major General William “Hank” Taylor, the Commanding General of the 8th Field Army in South Korea, revealed his close professional relationship with generative AI, openly stating he uses the chatbot to help make “key command decisions” and build analytical models for both military and personal leadership. 

While General Taylor argues his goal is simple—to gain a critical time advantage and make “better decisions” within strategic frameworks like the OODA (Observe, Orient, Decide, Act) loop—his candid admission has triggered a wave of public concern, exposing a core conflict between technological efficiency and the requirement for independent human judgment in the highest echelons of military leadership. 

To the average observer, the revelation that a General is consulting an off-the-shelf chatbot for high-level military and personnel decisions raises an immediate and uncomfortable question: Does he know what he’s doing? 

Military command is, by definition, the domain of hard-won experience, critical thinking, and decisive action under conditions of incomplete information and extreme pressure. A General’s rank is a symbol of their ability to process complex, often unprecedented, geopolitical and logistical variables without a technological crutch. 

When a commander openly admits to relying on a commercial Large Language Model (LLM) to “build models” for decision-making, it can be interpreted as a failure of institutional training or a lack of personal confidence. If a General is incapable of independently structuring the analytical framework needed to manage the readiness of thousands of troops, is that General fundamentally equipped for the independent judgment demanded by a major command post?

The resulting online mockery—including viral comparisons to apocalyptic sci-fi scenarios like The Terminator or Metal Gear Solid—underscores a deep public skepticism. Many critics asked, “why is bro even an army general in the first place if he can’t take decisions on his own?” This sentiment reflects a primal fear: that the human element, the ultimate source of accountability and ethical reasoning, is being outsourced to an algorithm.
