Smart prisons and the future of dynamic security with AI

This article discusses the use of AI technologies in prison systems, particularly in China and Hong Kong, where AI-powered surveillance is being deployed to monitor inmates. These “smart” prisons aim to track inmate behavior, detect anomalies, and improve safety and rehabilitation. However, they raise concerns about privacy, reliability, and inmates’ well-being. Whether AI succeeds in prisons will depend on careful ethical consideration and on its integration into a reformed correctional system.

Dr Francesco Dergano
4 min read · Aug 28, 2024

Artificial intelligence (AI)-enabled sensors, tracking wristbands, and data analytics have become commonplace in smart homes, vehicles, classrooms, and workplaces. Now, these technologies are entering a new domain – prisons. Recently, China and Hong Kong announced plans to implement advanced AI systems aimed at continuous inmate surveillance. In Hong Kong, the government is testing Fitbit-like devices to monitor inmates’ locations, activities, and heart rates around the clock. Additionally, some prisons will deploy networked video surveillance systems designed to detect abnormal behaviors, such as self-harm or violence, while others will use robots to search for drugs in inmates’ feces.

In mainland China, the government is finalizing a “smart” surveillance system at Yancheng Prison, intended to monitor high-profile inmates in real time via hidden cameras and sensors in every cell. According to the South China Morning Post, the system will transmit data to an AI-powered computer capable of tracking and analyzing inmate behavior continuously. Each day, the system will generate comprehensive reports on each prisoner, using AI functions such as facial recognition and movement analysis. As in Hong Kong, the system is designed to flag suspicious behavior and alert guards to any anomalies. A representative from Tiandy Technologies, the company behind the surveillance system, claimed that the new technology could make prison breaks a thing of the past and reduce unethical practices such as bribery among guards.
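
To make the reported design a little more concrete, here is a minimal, purely illustrative Python sketch of how a per-inmate daily reporting pipeline of this kind might be structured. The class and function names, fields, and threshold are all hypothetical and are not drawn from Tiandy’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical observation emitted by a cell camera or sensor.
@dataclass
class Observation:
    inmate_id: str
    timestamp: datetime
    activity: str          # e.g. "sleeping", "pacing", "altercation"
    anomaly_score: float   # 0.0 (routine) to 1.0 (highly unusual)

# Hypothetical daily summary of one inmate's observations.
@dataclass
class DailyReport:
    inmate_id: str
    date: str
    observations: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

def build_daily_report(inmate_id: str, observations: list[Observation],
                       alert_threshold: float = 0.8) -> DailyReport:
    """Aggregate one day of observations and flag any that exceed the threshold."""
    report = DailyReport(inmate_id=inmate_id,
                         date=datetime.now().strftime("%Y-%m-%d"))
    for obs in observations:
        report.observations.append(obs)
        if obs.anomaly_score >= alert_threshold:
            # In the reported design, alerts like this would be routed to guards.
            report.alerts.append(f"{obs.timestamp.isoformat()}: {obs.activity}")
    return report
```

Even in this toy version, the key design decision is visible: someone has to choose the alert threshold, and that choice determines what the system treats as worth a guard’s attention.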

As society embraces smart technology, the concept of “smart prisons” presents both opportunities and risks. While the U.S. should approach China’s panopticon-like surveillance model with caution, there is potential for using similar technologies to enhance safety and rehabilitation in American prisons – provided they are implemented thoughtfully.

American prisons aim to achieve several criminal justice goals, including incapacitation, retribution, deterrence, and rehabilitation. With a growing recognition that the vast majority of inmates (95% of those in state prisons) will eventually return to society, rehabilitation is becoming increasingly important. The introduction of connected technology in prisons should not aim to replace human staff, particularly given the importance of preserving the rights and dignity of inmates and the critical role of human interaction in rehabilitation – something AI cannot replicate. Instead, the focus should be on developing smart-prison technologies that can assist staff in enhancing safety, support, and education for inmates.

For instance, AI could be used to analyze inmate behavior, helping to identify situations that may escalate to violence or indicate a risk of self-harm. It could also be used to monitor guards, similar to how machine learning is being explored to reduce police violence by identifying officers at high risk of initiating adverse events. A smart prison system could use collected data to provide similar oversight of guards, potentially flagging abusive behavior.
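
As a rough sketch of what such behavioral analysis might look like in practice, the example below applies an off-the-shelf unsupervised anomaly detector to simulated wristband readings. The data and parameters are invented for illustration; a real deployment would involve far richer signals and validation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: simulated wristband readings (heart rate in bpm,
# movement intensity) for one inmate over a day, sampled once a minute.
rng = np.random.default_rng(seed=0)
heart_rate = rng.normal(loc=72, scale=6, size=1440)
movement = rng.normal(loc=0.3, scale=0.1, size=1440)
readings = np.column_stack([heart_rate, movement])

# An unsupervised model scores each reading by how unusual it is relative
# to the rest of the day; "contamination" is the assumed share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)   # -1 marks readings flagged as anomalous

flagged_minutes = np.where(labels == -1)[0]
print(f"Flagged {len(flagged_minutes)} of {len(readings)} readings for human review")
```

The point of the sketch is the workflow, not the model: the system surfaces a small number of unusual readings for human staff to review, rather than making decisions on its own.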

However, before implementing these technologies, it is essential to consider whether current technology can perform these tasks reliably and whether we should rely on technology for these purposes at all. Facial recognition systems, such as Amazon’s Rekognition, have struggled to accurately identify individuals, particularly people of color and women. There is a risk that AI could flag the wrong individuals, who may have limited recourse given the courts’ tendency to defer to prison authorities.

Even if technology can reliably identify individuals, we should remain skeptical of claims that it can accurately detect “abnormal” behavior. Human beings train AI systems, and defining what constitutes abnormal behavior is inherently subjective. An overly broad definition could create psychological pressure to conform, while an overly narrow one could miss important issues that human staff might catch. Moreover, the intense surveillance enabled by these systems could undermine rehabilitation efforts by fostering an environment of distrust and control, potentially worsening behavioral issues.
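
The subjectivity of “abnormal” is easy to see numerically. The small sketch below uses made-up anomaly scores to show how the human-chosen cutoff directly determines how much behavior gets labelled abnormal; the distribution and thresholds are illustrative assumptions, not data from any real system.

```python
import numpy as np

# Made-up anomaly scores for 1,000 inmate-days, skewed so that most
# scores are low and only a few are high.
rng = np.random.default_rng(seed=1)
scores = rng.beta(a=2, b=8, size=1000)

# The same population looks very different depending on where a human
# decides to draw the line for "abnormal".
for threshold in (0.3, 0.5, 0.7):
    flagged = int((scores >= threshold).sum())
    print(f"threshold={threshold:.1f}: {flagged} of {len(scores)} inmate-days flagged")
```

A low threshold floods staff with alerts and pressures inmates to conform; a high one misses cases a human officer might have caught. Neither setting is a purely technical choice.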

In contrast, prisons that have successfully reduced violence and behavioral problems, such as those in Connecticut, Norway, and Germany, often do so by granting inmates more agency rather than relying on strict surveillance. However, a well-designed AI system that accurately flags only genuinely high-risk behaviors might enhance safety and fairness, potentially allowing human staff to focus more on rehabilitation.

Nevertheless, even if these AI systems work as intended, they represent a profound invasion of privacy, collecting and analyzing personal data on an unprecedented scale. This raises concerns about the misuse of sensitive information, including the possibility that it could be sold or stolen.

In conclusion, while smart prison technologies offer potential benefits, they must be approached with caution. These technologies are not inherently good or bad; their impact will depend on how they are used. Whether smart prisons will help address or exacerbate the challenges facing the U.S. criminal justice system will depend largely on the direction of prison reforms at the time of their introduction. For now, a fundamental shift in how the government treats incarcerated people, followed by the thoughtful application of AI technologies, may one day lead to truly “smarter” prisons.

Dr Francesco Dergano

CEO of Skydatasol – Managing Principal of Kamiweb Project – Lead Research Manager and CISO of The National Security Framework – Full-Time Student in London