In January 2026, India’s law enforcement entered a decisive phase of technological transition. The Delhi Police operationalised its Safe City Project, while Maharashtra began the statewide rollout of MahaCrime OS AI. Together, these initiatives reflect a shift from conventional policing to algorithm-driven governance, raising a fundamental dilemma: how to enhance public safety without undermining civil liberties.
Delhi’s Safe City Project integrates a network of nearly 10,000 AI-enabled cameras.
This marks a move towards automated surveillance where real-time monitoring becomes continuous and preventive.
Maharashtra’s MahaCrime OS AI is designed as a statewide intelligence and investigation platform.
Developed in collaboration with private partners, it uses AI copilots to analyse case files, detect patterns, and even generate investigation plans.
AI-powered drones are increasingly deployed for aerial surveillance and monitoring.
They create a “high-altitude surveillance advantage,” reducing reliance on ground personnel while amplifying observational capacity.
A major driver behind AI policing is the availability of long-term criminal datasets. Modern AI systems are trained on decades of records from the Crime and Criminal Tracking Network and Systems (CCTNS), enabling quicker pattern recognition and cold-case linkages.
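The idea of cold-case linkage can be illustrated with a minimal sketch: represent each case by its modus-operandi tags and link cases whose tag sets strongly overlap. All case IDs, tags, and thresholds below are hypothetical, and this is not a description of the actual CCTNS analytics.

```python
# Illustrative sketch: linking cases by shared modus-operandi tags.
# All case data and the threshold are hypothetical examples.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

cases = {
    "FIR-2009-114": {"night", "lock-picking", "jewellery", "two-wheeler"},
    "FIR-2014-287": {"night", "lock-picking", "electronics"},
    "FIR-2023-051": {"night", "lock-picking", "jewellery", "two-wheeler"},
}

THRESHOLD = 0.7  # hypothetical similarity cut-off
ids = list(cases)
links = [
    (ids[i], ids[j])
    for i in range(len(ids))
    for j in range(i + 1, len(ids))
    if jaccard(cases[ids[i]], cases[ids[j]]) >= THRESHOLD
]
print(links)  # the 2009 and 2023 cases share all four tags
```

Even this toy version shows why decades of structured records matter: the linkage is only as good as the consistency of how past cases were tagged.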
| Concern | Implication |
| --- | --- |
| Centralisation of Power | Policing shifts from local beat officers to remote data centres, reducing public accessibility and accountability. |
| “Imprisoning Cities” | Dense surveillance creates a permanent environment of suspicion where everyday behaviour is monitored for anomalies. |
| Historical Bias | AI inherits past policing patterns, risking institutionalisation of caste, community, or locality-based bias. |
| Erosion of Rights | Monitoring of assemblies and gatherings can create a chilling effect on dissent and freedom of association. |
Unlike written police manuals, AI systems rarely provide a transparent rulebook explaining why a person was flagged. This makes it extremely difficult for citizens to understand, question, or challenge such decisions.
AI tools are only as reliable as their underlying data and operating conditions, and their errors compound quickly at the scale of city-wide surveillance.
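The scale problem can be made concrete with simple base-rate arithmetic. All figures below are hypothetical, chosen only to show that a system with seemingly high accuracy still produces mostly false alarms when the behaviour it screens for is rare.

```python
# Illustrative base-rate arithmetic (all figures hypothetical):
# a high-accuracy flagging system still yields mostly false alarms
# when the target behaviour is rare in the population.

population = 1_000_000
true_rate = 0.001            # 0.1% of people are actual cases
sensitivity = 0.99           # system flags 99% of true cases
false_positive_rate = 0.01   # system wrongly flags 1% of innocent people

true_cases = population * true_rate                            # 1,000
true_flags = true_cases * sensitivity                          # 990
false_flags = (population - true_cases) * false_positive_rate  # 9,990

precision = true_flags / (true_flags + false_flags)
print(f"{precision:.1%} of flagged people are actual cases")  # ≈ 9.0%
```

Under these assumptions, roughly nine out of ten flags point at innocent people, which is why error rates that look small on paper matter enormously in mass surveillance.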
Although the Digital Personal Data Protection Act (DPDPA), 2023 exists, broad exemptions for the state on grounds like “security” create major gaps in protection against surveillance misuse and unchecked profiling.
Predictive policing changes the logic of justice. Instead of investigating crime after evidence emerges, the system increasingly screens behaviour as suspicious before any wrongdoing is proven, weakening the constitutional principle of innocent until proven guilty.
India needs a dedicated legal framework for AI-assisted policing, one that ensures AI remains assistive, not authoritative.
AI tools used in policing must undergo frequent audits to detect and eliminate embedded bias and systematic error.
Audits should be conducted by independent third-party institutions, not only internal agencies.
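One common screening step in such audits is a disparate-impact check: compare how often the system flags people from different groups. The sketch below uses entirely hypothetical counts and the widely used "four-fifths" ratio as a screening threshold; a real audit would go much further.

```python
# Illustrative fairness audit (hypothetical counts): compare flag
# rates across groups using a simple disparate-impact ratio.

flags = {"group_a": 120, "group_b": 45}        # people flagged, per group
totals = {"group_a": 1_000, "group_b": 1_000}  # people observed, per group

rates = {g: flags[g] / totals[g] for g in flags}
ratio = min(rates.values()) / max(rates.values())

print(f"flag rates: {rates}, ratio: {ratio:.2f}")
if ratio < 0.8:  # common four-fifths screening threshold
    print("WARNING: possible disparate impact; escalate for human review")
```

A check like this is deliberately crude: it cannot prove bias, but it can flag patterns that an independent auditor should investigate, which is precisely why third-party access to system outputs matters.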
Legal reforms must ensure proportionality in data collection, especially for biometric data. Laws such as the Criminal Procedure (Identification) Act, 2022 require stronger safeguards so that biometric data is collected, stored, and used only to the extent strictly necessary.
Technology can strengthen policing capacity, but it cannot replace constitutional discipline. If unchecked, AI-driven policing risks turning governance into digital authoritarianism through mass monitoring, biased profiling, and invisible decision-making. A genuinely safe society is secured not by total surveillance, but by trust, transparency, proportionality, and the Rule of Law, ensuring that innovation remains anchored in constitutional values and democratic accountability.