
Adoption of AI Surveillance Technology Surges

China Is Leading Supplier, But Other Countries Catching Up, Report Finds

Governments' use of artificial intelligence technology that provides surveillance capabilities is rapidly spreading, according to a new report from the Carnegie Endowment for International Peace.


The Global Expansion of AI Surveillance report offers a country-by-country look at the AI surveillance capabilities in place across 176 countries, based on data gathered since 2017. It finds that numerous countries are not only developing various AI surveillance technologies domestically, but also exporting them to other countries, where governments are deploying the technology at a more rapid pace than many experts predicted.

The rapid adoption of AI surveillance is cause for concern because the rise of facial recognition, big data platforms and predictive policing capabilities gives governments the ability to track and analyze individuals' behaviors - and to advance political goals - in ways that were never before possible, says report author Steven Feldstein, an associate professor of public affairs at Boise State University (see: Facial Recognition: Big Trouble With Big Data Biometrics).

"As these technologies become more embedded in governance and politics, the window for change will narrow."
—Steven Feldstein

"Sadly I’m not surprised," says Alan Woodward, a computer science professor at the University of Surrey, commenting on the report's finding that AI surveillance technology is being rapidly embraced by governments. "Adoption of something this useful for security is bound to run ahead, and as is so often the case, particularly ahead of the legislation or regulation one might hope for."

Technology is, of course, the practical limiting factor in designing more automated surveillance systems. But toolset capabilities - and the ways they can be combined - have been improving rapidly.
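To get a sense of how accessible the basic building blocks have become, consider the following sketch, which is purely illustrative and not drawn from the report: it flags faces in a camera feed using OpenCV's bundled, pre-trained Haar-cascade detector, with the camera index standing in for whatever video source a real deployment would use.

# Purely illustrative sketch: flag faces in a camera feed using OpenCV's
# bundled, pre-trained Haar-cascade detector - an example of how little
# specialist work a basic surveillance-style capability now requires.
import cv2

# Load the face detector that ships with OpenCV (no training needed).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

capture = cv2.VideoCapture(0)  # 0 = default camera; placeholder video source

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break

capture.release()
cv2.destroyAllWindows()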

"Several breakthroughs are making new achievements in the field possible: the maturation of machine learning and the onset of deep learning; cloud computing and online data gathering; a new generation of advanced microchips and computer hardware; improved performance of complex algorithms; and market-driven incentives for new uses of AI technology," says Feldstein, who's also a nonresident fellow in Carnegie’s Democracy, Conflict, and Governance Program. He formerly served as a deputy assistant secretary in the Democracy, Human Rights, and Labor Bureau at the U.S. Department of State.

The AI Global Surveillance Index

The study's AI Global Surveillance Index reports that at least 75 countries now use AI technologies for surveillance purposes, including facial recognition systems in 64 countries, smart/safe city platforms in 56 countries, and predictive policing in 52 countries.

Source: Carnegie Endowment for International Peace
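Note that the per-technology figures overlap: a single country can deploy facial recognition, smart/safe city platforms and predictive policing at the same time, which is why the category counts add up to more than 75. A minimal sketch of that kind of tally, using invented country data rather than the Carnegie dataset:

# Illustrative tally over made-up data (not the Carnegie index itself):
# each country maps to the set of AI surveillance technologies it deploys.
deployments = {
    "Country A": {"facial recognition", "smart/safe city", "predictive policing"},
    "Country B": {"facial recognition"},
    "Country C": {"smart/safe city", "predictive policing"},
}

# Countries using at least one technology - each counted once.
total_countries = sum(1 for techs in deployments.values() if techs)

# Per-technology counts; a country can appear under several categories,
# so these figures can sum to more than total_countries.
per_technology = {}
for techs in deployments.values():
    for tech in techs:
        per_technology[tech] = per_technology.get(tech, 0) + 1

print(total_countries)  # 3
print(per_technology)   # each technology counted in 2 of the 3 countries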

Feldstein says his report carries some caveats. "The index does not distinguish between AI surveillance used for legitimate purposes and unlawful digital surveillance," he says. "Rather, the purpose of the research is to shine a light on new surveillance capabilities that are transforming the ability of states - from autocracies to advanced democracies - to keep watch on individuals."

AI Surveillance Tech: China Dominates

The report finds that there's a free flow of technology between authoritarian regimes and liberal democracies, on both the selling and procuring fronts (see: Amazon Rekognition Stokes Surveillance State Fears).

China is the leader in the AI surveillance technology field, the report finds, noting that buyers of its wares include many countries linked to China's $1 trillion Belt and Road Initiative. The global infrastructure improvement strategy, which has been likened to a Chinese Marshall Plan, is designed to give the country a long-term, global strategic advantage.

Source: Carnegie Endowment for International Peace

"Technology linked to Chinese companies - particularly Huawei, Hikvision, Dahua, and ZTE - supply AI surveillance technology in 63 countries, 36 of which have signed onto China’s Belt and Road Initiative," the report finds. "Huawei alone is responsible for providing AI surveillance technology to at least 50 countries worldwide. No other company comes close."

The next largest non-Chinese supplier is Japan's NEC, which supplies technology to 14 countries. The U.S. is also a relatively big player, with IBM selling such technology to 11 countries, Palantir to nine countries and Cisco to six countries.

The buying and selling of this technology flows freely, regardless of government type.

"China is exporting surveillance tech to liberal democracies as much as it is targeting authoritarian markets," Feldstein says. "Likewise, companies based in liberal democracies - for example, Germany, France, Israel, Japan, South Korea, the U.K., the United States - are actively selling sophisticated equipment to unsavory regimes."

Huawei and IBM didn't immediately respond to a request for comment on the report's findings.

Privacy Protections Lag

Feldstein says his report is meant to provoke public policy debate about not just how governments are deploying AI surveillance technologies, but what they can and must do to safeguard their citizens' rights.

"The good news is that there is ample time to initiate a much-needed public debate about the proper balance between AI technology, government surveillance and the privacy rights of citizens," Feldstein says. "But as these technologies become more embedded in governance and politics, the window for change will narrow."

As the state of biometrics and wider AI surveillance technologies continues to rapidly improve, many security experts say it's time for legislation and regulations to catch up.

"I personally think there should be a much greater public discussion about the use of AI in a variety of fields: law enforcement, medicine, transportation and many others," Woodward says. "Each has safety implications and that at the very least should cause scrutiny."

As with so many types of technology, how AI surveillance tools get deployed can carry upsides and downsides. Furthermore, just because a technology gets deployed doesn't mean it's useful, or that the data it captures has been secured (see: Use of Facial Recognition Stirs Controversy).

"Where AI is proving useful is predicting where police forces should deploy," Woodward says. "It monitors various sources - e.g. social media - and can be present to head off trouble at the pass. My concern here is that it’s a short step from there to 'pre-crime.' We need to understand the limits of AI, and always have a human in the loop."

Other outstanding issues include the impact of such technology on privacy rights, how long the data it gathers should be retained and who's responsible - users, buyers or developers - when the technology fails. "Who is liable if AI makes a mistake?" Woodward asks.
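What "a human in the loop" might look like in code is simple to sketch, even if the policy questions are not: an automated score can flag a case, but a person has to confirm it before anything happens. The threshold and scoring model below are placeholders, not anything described in the report.

# Illustrative human-in-the-loop gate: an automated score alone never
# triggers action; a reviewer must explicitly confirm each flagged case.
from dataclasses import dataclass

@dataclass
class Alert:
    location: str
    risk_score: float  # output of some predictive model (placeholder)

def requires_review(alert: Alert, threshold: float = 0.8) -> bool:
    """Route high-scoring alerts to a human instead of acting on them."""
    return alert.risk_score >= threshold

def human_confirms(alert: Alert) -> bool:
    """Stand-in for a real review step (case file, context, judgment)."""
    answer = input(f"Act on alert for {alert.location} (score {alert.risk_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(alert: Alert) -> str:
    if not requires_review(alert):
        return "no action"
    # The model only recommends; a person makes the final call.
    return "act" if human_confirms(alert) else "no action"

if __name__ == "__main__":
    print(decide(Alert(location="district 4", risk_score=0.91)))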


About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and leads European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



