
When a coupon suddenly appears on your phone as you approach a store, you might find it convenient and even helpful. But the same AI systems that know where you are and try to influence your purchases can be used to infer what you fear, what you trust and which stories you are likely to believe. AI-fueled marketing algorithms are becoming increasingly good at influencing human behavior.
That raises concerns about what governments might do with these tools to shape citizens’ views about warfare. A clear-eyed look at how administrations are exploiting these systems may help people and their nations navigate an uncertain future.
I am a security researcher who studies ways to explore and characterize the risk technology poses to individuals and society. The rise of AI-mediated influence has raised questions about the erosion of people’s capacity to exercise free will and, by extension, society’s ability to distinguish a just war from an unjust war.
AI-powered marketing
The integration of AI with location-based services is pushing the marketing frontier. Location-based services use geographic data from indoor sensors, cellphone towers and satellites to promote goods and services that are tailored to your location, a capability called geofencing.
When marketing firms couple location data with massive amounts of information about individuals’ behaviors – including data that people voluntarily or unknowingly share through mobile device applications – the firms can group, or segment, potential customers based on what they like, what they do and what they say.
Once an AI-powered marketing system knows where a user is and can make an informed guess about that person’s likes and dislikes, it can design targeted coupons and advertisements to influence the behavior of each person in a group, and possibly the group as a whole. This combination of AI with geofencing and segmentation makes hyperpersonalized marketing content possible at an unprecedented scale.
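At its core, the geofencing piece of this pipeline is just a geometric test: is a device’s reported position inside a circle drawn around a location of interest? A minimal sketch in Python, using the standard haversine formula for great-circle distance (the coordinates and the 150-meter radius below are hypothetical, chosen only for illustration):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(user_lat, user_lon, fence_lat, fence_lon, radius_m):
    """True if the reported position falls within the fenced circle."""
    return haversine_m(user_lat, user_lon, fence_lat, fence_lon) <= radius_m

# Hypothetical geofence: 150 m around a storefront
store = (40.7411, -73.9897)
nearby_user = (40.7415, -73.9900)   # roughly 50 m away
distant_user = (40.7500, -73.9700)  # roughly 2 km away

print(inside_geofence(*nearby_user, *store, 150))   # True
print(inside_geofence(*distant_user, *store, 150))  # False
```

Production systems layer on polygon fences, dwell-time rules and sensor fusion, but the trigger that puts a coupon on a nearby phone reduces to a test like this one.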
Real-time propaganda
What might this advance have to do with warfare? The use of psychology to win battles or obviate the need for war is as old as armed conflict itself. Sun Tzu, the Chinese military general and philosopher who died in 496 B.C., wrote: “Therefore the skillful leader subdues the enemy’s troops without any fighting; he captures their cities without laying siege to them; he overthrows their kingdom without lengthy operations in the field.”
From Sun Tzu’s era until today, skilled practitioners of military strategy have sought to reduce the risk in fighting through reflexive control: getting opponents to willingly perform actions that are best for the strategist’s empire or nation.
Today’s strategists increasingly rely on paid social media advertisements, influencers, AI-generated content and even fake social media accounts to sway popular opinion toward their goals. This power, and the controversy surrounding it, have been implicated in recent national elections, domestic unrest and negotiations to end the conflict in Ukraine.
Unlike propaganda during the Cold War between the U.S. and the Soviet Union, modern influencers don’t rely on a single message broadcast to the masses. Strategists test and deploy thousands of narrative variations simultaneously, monitor how different groups respond and refine their approach in near-real time. The purveyors don’t need to convince everyone. They just need to nudge enough people at the right moment to change election outcomes, pressure domestic policies or even trigger ethnic violence.
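The test-and-refine loop described above is essentially a multi-armed bandit problem: show many message variants, measure responses, and shift traffic toward whatever performs best. A minimal epsilon-greedy sketch (the variant names and response rates are invented for illustration; real operations track far richer signals than a click):

```python
import random

class NarrativeTester:
    """Epsilon-greedy selection among message variants:
    explore at random a small fraction of the time,
    otherwise exploit the variant with the best observed response rate."""

    def __init__(self, variants, epsilon=0.1, seed=None):
        self.variants = list(variants)
        self.epsilon = epsilon
        self.shows = {v: 0 for v in self.variants}
        self.clicks = {v: 0 for v in self.variants}
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.variants)  # explore
        # exploit: highest observed rate; unseen variants are tried first
        return max(
            self.variants,
            key=lambda v: self.clicks[v] / self.shows[v]
            if self.shows[v] else float("inf"),
        )

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += int(clicked)

# Simulated audience with hypothetical per-variant response rates
rng = random.Random(42)
true_rates = {"A": 0.02, "B": 0.08, "C": 0.05}
tester = NarrativeTester(true_rates, epsilon=0.1, seed=1)
for _ in range(5000):
    v = tester.choose()
    tester.record(v, rng.random() < true_rates[v])
print(tester.shows)  # traffic concentrates on the best-performing variant
```

The same mechanics that optimize coupon copy work unchanged when the "variants" are political narratives and the "clicks" are shares, sign-ups or expressions of outrage.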
How much deception is tolerable?
As online influence becomes more automated and personalized, it is harder to determine where persuasion ends and coercion begins. If groups of people, or even a nation’s citizenry, can be guided toward certain beliefs or behaviors without overt force, democratic societies face a new problem: how to distinguish traditional attempts at influence from manipulation – especially during conflict.
Recent studies show that Americans trust local news sources more than national ones, although trust in both local and national news media has declined across all age groups in the U.S. Ironically, this trust deficit is being exploited by unscrupulous media in various ways, such as AI-generated “pink slime” news – online stories that only appear to come from authentic local news outlets. The stories are often technically accurate but presented with veiled political bias.
AI-driven propaganda directly challenges how people typically evaluate claims that their nation has been wronged – that it is the “good guy” standing up for what is right. Just war theory assumes that citizens can reasonably consent to war. Legitimate political authority requires an informed public that can decide violence is both necessary and proportional to the offense. However, when influence operations sway people’s views without them being aware of it, these systems threaten to undermine the moral preconditions that make war just.
The question citizens have to answer is how they will allow their information environments to evolve. Do they assume that deception is ubiquitous and therefore governments must control information and even preempt the truth by weaponizing AI-driven narratives? Or should the public accept the risk of AI-generated influence as a regrettable but necessary part of openness, pluralism and the belief that truth emerges through transparent debate and not under tight controls?
The same systems that decide which coupon reaches your phone are starting to shape which narratives reach you, your community and a nation’s entire population during a crisis. Recognizing this connection is the first step toward deciding how much influence people are willing to accept from such algorithms and the propagandists who control them.
Justin Pelletier is affiliated with the United States Army Reserve. The views expressed are those of the author and do not reflect the official policy or position of the U.S. Army, Department of War, or the U.S. Government.


