New research challenges the idea that artificial intelligence makes dictators all‑powerful
Artificial intelligence is often portrayed as a powerful new tool for authoritarian governments—one that enables constant surveillance, predictive repression, and near‑total social control. But new research from èƵ argues that this popular narrative overstates what AI can actually do.
In The Limits of Authoritarian AI, L. Jason Anastasopoulos, Associate Professor of Public Administration and Policy, and Jie (Jason) Lian, a Postdoctoral Research Fellow at the Harvard Kennedy School who completed his PhD in Political Science and International Affairs in 2025, show that AI systems do not eliminate uncertainty for authoritarian leaders. Instead, they force regimes into difficult tradeoffs that can generate instability, backlash, and new vulnerabilities.
“People tend to imagine AI giving authoritarian governments this all‑seeing, all‑powerful ability to control society,” Anastasopoulos said. “But in reality, AI systems always make errors, and those errors create real political problems for the regimes that rely on them.”
Why AI creates dilemmas, not dominance
At the center of the paper is what Anastasopoulos and Lian call a “calibration dilemma.” Any AI system designed to identify threats—whether dissenters, protesters, or political opponents—must be programmed to decide how suspicious is suspicious enough to trigger action.
If the system is calibrated too loosely, it flags large numbers of innocent people as threats. That leads to widespread repression, public anger, protests, and political backlash. If it is calibrated too narrowly, genuine opponents slip through the cracks, organize, and potentially challenge the regime.
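The dilemma mirrors the familiar threshold tradeoff in binary classification: because the risk scores of harmless citizens and genuine opponents overlap, lowering the flagging threshold inflates false positives while raising it inflates false negatives. A minimal sketch of that tradeoff, using hypothetical score distributions and numbers that are purely illustrative (not drawn from the paper):

```python
import random

random.seed(0)

# Hypothetical risk scores: most people are harmless, a few are genuine
# threats. The two distributions overlap, so no threshold is error-free.
harmless = [random.gauss(0.30, 0.15) for _ in range(9900)]
threats = [random.gauss(0.70, 0.15) for _ in range(100)]

def calibrate(threshold):
    """Count both failure modes at a given flagging threshold."""
    false_positives = sum(s >= threshold for s in harmless)  # innocents flagged
    false_negatives = sum(s < threshold for s in threats)    # opponents missed
    return false_positives, false_negatives

for t in (0.4, 0.6, 0.8):
    fp, fn = calibrate(t)
    print(f"threshold={t}: {fp} innocents flagged, {fn} threats missed")
```

Running the loop shows the two failure modes moving in opposite directions as the threshold shifts: a loose setting flags thousands of harmless people, a strict one lets most real threats pass, and no setting drives both errors to zero.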
“There’s no perfect setting,” Anastasopoulos explained. “Authoritarian leaders have to choose between two kinds of failure. Either way, they expose themselves to risk.”
The authors refer to the resulting cycles of tightening and loosening surveillance as “threshold whiplash.” Governments crack down aggressively, face backlash, retreat, then tighten controls again when new threats emerge.
China’s experience during its zero‑COVID policies illustrates this dynamic. Broad, AI‑enabled monitoring systems flagged enormous portions of the population as potential risks, generating widespread unrest and political instability rather than seamless control.
The myth of the all‑seeing state
The research also challenges what the authors describe as the “panopticon bluff.” Authoritarian governments benefit when citizens believe AI surveillance is omnipresent and infallible—even when it is not.
“The real power of AI for authoritarian leaders often lies in the perception that the system sees everything,” Anastasopoulos said. “But that perception is a bluff.”
AI systems can be evaded through simple behavioral changes, coded language, or shifts in communication. Once people recognize the limits of surveillance, fear diminishes, and with it, some of the regime’s control.
Implications for democracy advocates
Rather than painting a dystopian future in which technology guarantees authoritarian dominance, the paper offers a more nuanced—and more hopeful—assessment.
The authors argue that pro‑democracy actors can take advantage of the limits of AI‑driven repression by better understanding how algorithmic systems actually work and where they fall short. By demystifying these technologies, supporting individuals who are falsely targeted by overbroad surveillance, and challenging exaggerated claims about AI’s omnipotence, democratic actors can reduce the power these systems are meant to project.
“If you understand how these systems operate, you can identify their vulnerabilities,” Anastasopoulos said. “AI doesn’t make authoritarian regimes unstoppable—it gives them new problems to manage.”
From research to policy conversation
The article has attracted attention beyond academia. Anastasopoulos will present the paper’s arguments to staff and leadership at the National Endowment for Democracy, an organization that supports democratic movements worldwide.
The invitation reflects a broader shift among policymakers and practitioners toward recognizing that emerging technologies not only reshape authoritarian power but also create meaningful openings for democratic resistance and accountability.
“There’s a tendency to think that AI automatically means more control and less hope,” Anastasopoulos said. “Our goal is to show that the reality is more complicated, and that understanding those complications matters for people working to defend democracy.”