## Post-Deployment Monitoring of AI Systems
In recent years, several jurisdictions, including the United States and the European Union, have enacted or proposed regulations requiring post-deployment monitoring of AI systems. This article surveys the major regulations and proposals.
### United States
The Texas Responsible Artificial Intelligence Governance Act (H.B. 149, 2025) is one such example. The law, which takes effect January 1, 2026, establishes a regulatory sandbox that allows experimentation with AI systems under state oversight. Participants must maintain ongoing internal governance, including post-deployment monitoring and safeguards. Entities must also respond to enforcement inquiries with evidence of post-deployment monitoring, and a "safe harbor" is available to organizations whose risk management programs align with recognized standards such as the NIST AI Risk Management Framework.
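The statute specifies outcomes rather than implementations, but in practice "evidence of post-deployment monitoring" often takes the form of an append-only audit trail. The sketch below is a minimal illustration of that idea in Python; the record fields, event types, and file layout are assumptions made for this example, not anything the Act prescribes.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class MonitoringRecord:
    """One append-only audit entry; all field names here are hypothetical."""
    record_id: str
    timestamp: float
    model_version: str
    event_type: str   # e.g. "inference", "flagged_output", "human_review"
    details: dict

def log_event(path: str, model_version: str, event_type: str, details: dict) -> None:
    """Append a JSON-lines audit record, the kind of artifact an entity
    might produce as evidence of ongoing post-deployment monitoring."""
    record = MonitoringRecord(
        record_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model_version=model_version,
        event_type=event_type,
        details=details,
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a low-confidence output that was routed to human review.
log_event("audit.jsonl", "credit-model-v3", "flagged_output",
          {"reason": "low confidence", "score": 0.41})
```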
### European Union
The EU Artificial Intelligence Act, which entered into force in August 2024, is the world's first binding, comprehensive AI regulation. It takes a risk-based approach, classifying AI systems into unacceptable-, high-, limited-, and minimal-risk tiers. High-risk AI systems must undergo rigorous conformity assessments before deployment and remain subject to continuous post-market monitoring. Providers must maintain robust internal processes for ongoing risk management, technical documentation, and cybersecurity.
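The Act obliges providers of high-risk systems to run a post-market monitoring plan but leaves the technical mechanics open. One common ingredient is detecting when live behavior drifts from the validated baseline. The following is a deliberately simplified sketch; the drift statistic, threshold, and sample data are illustrative assumptions, not regulatory requirements.

```python
import statistics

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Standardized mean shift between a validation baseline and live
    production scores; a crude stand-in for fuller drift statistics."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0

def check_post_market(baseline: list[float], live: list[float],
                      threshold: float = 0.5) -> str:
    """Return the action string a post-market monitoring plan might log."""
    score = drift_score(baseline, live)
    if score > threshold:
        return f"ESCALATE: drift={score:.2f} exceeds {threshold}; trigger human review"
    return f"OK: drift={score:.2f} within tolerance"

# Hypothetical model scores: validation-time baseline vs. recent production.
baseline = [0.62, 0.58, 0.65, 0.60, 0.63, 0.59]
live     = [0.48, 0.45, 0.51, 0.47, 0.49, 0.50]
print(check_post_market(baseline, live))
```

A production plan would track many more signals (error rates, complaint volumes, subgroup performance) and feed escalations into the documented risk management process; the point here is only the shape of the loop: measure, compare, escalate.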
### Other U.S. States
Beyond Texas, many U.S. states have introduced or are weighing bills that include post-deployment monitoring, risk assessment, and internal governance requirements. Enforcement typically allows a cure period, with statutory penalties imposed only after an entity fails to remedy a violation.
### Effectiveness in Ensuring Transparency and Preventing Misuse
Transparency is a key concern in AI regulation. The EU mandates explicit transparency for high-risk and limited-risk systems, requiring clear documentation, user notifications, and public reporting in some cases. Texas requires annual public reporting on sandbox outcomes but does not mandate transparency for all deployed systems outside the sandbox. Adoption of ISO/IEC and other management standards is encouraged to formalize transparency practices.
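Transparency duties are likewise phrased as outcomes; what organizations actually ship is often a machine-readable disclosure that backs the user-facing notice. A minimal sketch with an invented schema follows; no field name here is mandated by either the EU Act or the Texas law.

```python
import json
from datetime import date

def transparency_notice(system_name: str, risk_tier: str, purpose: str) -> str:
    """Build a machine-readable disclosure of the kind that documentation
    and user-notification duties tend to produce; the schema is invented."""
    notice = {
        "system": system_name,
        "risk_tier": risk_tier,           # e.g. "high" or "limited"
        "intended_purpose": purpose,
        "ai_generated_interaction": True, # user-facing AI disclosure flag
        "last_reviewed": date.today().isoformat(),
    }
    return json.dumps(notice, indent=2)

print(transparency_notice("resume-screener", "high",
                          "Ranking job applications for human review"))
```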
Preventing misuse is another critical aspect of post-deployment monitoring. The EU bans unacceptable-risk applications and imposes strict controls on high-risk uses. Continuous monitoring aims to catch and mitigate misuse quickly, but resource constraints may limit enforcement. Texas' sandbox oversight and post-deployment monitoring are designed to catch risks early, but the safe harbor provision could allow some misuse to go unpunished if internal controls are deemed "substantially compliant."
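What "catching misuse quickly" looks like in code varies widely; one minimal pattern is screening requests against prohibited-use categories before they reach the model and logging hits for review. The toy sketch below uses keyword matching purely for illustration; real deployments rely on trained classifiers, and the categories and phrases here are hypothetical, not statutory text.

```python
# Illustrative only: category names and phrases are invented for this sketch.
PROHIBITED_PATTERNS = {
    "social_scoring": ["rank citizens", "social credit score"],
    "untargeted_scraping": ["scrape faces", "bulk facial images"],
}

def screen_request(text: str) -> list[str]:
    """Return the misuse categories a request appears to match."""
    lowered = text.lower()
    return [cat for cat, phrases in PROHIBITED_PATTERNS.items()
            if any(p in lowered for p in phrases)]

hits = screen_request("Please rank citizens by trustworthiness")
if hits:
    print(f"Blocked and logged for review: {hits}")  # feeds the audit trail
```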
The comparative table below summarizes the post-deployment monitoring requirements, transparency obligations, enforcement mechanisms, safe harbor/compliance incentives, and effectiveness concerns in the EU, Texas, and other U.S. states.

| Dimension | EU AI Act | Texas (H.B. 149) | Other U.S. states |
| --- | --- | --- | --- |
| Post-deployment monitoring | Continuous post-market monitoring for high-risk systems | Ongoing monitoring and safeguards for sandbox participants | Proposed in many pending bills |
| Transparency obligations | Documentation, user notifications, some public reporting | Annual public reporting on sandbox outcomes | Varies by bill |
| Enforcement mechanisms | Pre-deployment conformity assessments plus regulator oversight | Enforcement inquiries requiring monitoring evidence | Cure periods, then statutory penalties |
| Safe harbor / compliance incentives | None comparable | Safe harbor for NIST AI RMF-aligned programs | Varies by bill |
| Effectiveness concerns | Resource constraints on enforcement | Safe harbor may shield "substantially compliant" entities | Largely untested |
While these frameworks represent significant progress, challenges remain in harmonizing global standards, ensuring compliance across sectors, and maintaining public trust as AI systems become more pervasive. The effectiveness of these regulations in ensuring transparency and preventing misuse depends on rigorous enforcement, the quality and accessibility of public reporting, and the ability of regulators to keep pace with technological change.
### Key Takeaways
- In the United States, the Texas Responsible Artificial Intelligence Governance Act requires sandbox participants to maintain post-deployment monitoring and safeguards, offering a "safe harbor" to organizations whose risk management programs align with standards such as the NIST AI Risk Management Framework.
- The European Union's Artificial Intelligence Act imposes continuous post-market monitoring on high-risk AI systems and bans unacceptable-risk applications, aiming to catch and mitigate misuse quickly despite potential resource constraints.