Responsible AI in Security: Accountability in an Era of Autonomous Systems

Dr. Florian Matusek is the director of AI strategy at Genetec and managing director of its Vienna, Austria, office, as well as a member of the Security Industry Association's AI Advisory Board.

Security work involves a lot of routine tasks, but when an incident occurs, decisions must be made fast—often with incomplete information and real consequences.

Security operators are responsible for making those judgment calls. Their decisions affect not only safety but also legal liability, operational continuity and public trust.

As more industries experiment with fully autonomous, or “agentic,” artificial intelligence-powered systems, it can be tempting to apply the same approach to security. While machine learning can automate repetitive tasks and help operators work more efficiently, removing humans from the decision-making process introduces serious risk.

Security environments are unpredictable. They require context, experience and the ability to interpret ambiguity. Machine learning models are effective at recognizing patterns and surfacing information, but they’re not well-suited for making high-stakes decisions in uncertain situations. For example, determining whether to escalate a situation to law enforcement, initiate an evacuation or grant access during an emergency requires human judgment. These are not purely technical decisions—they involve nuance, accountability and an understanding of context that systems cannot fully replicate.

Where AI tools add the most value is in helping operators access the right information quickly. Natural language search is a good example. Instead of manually reviewing hours of footage, operators can describe what they are looking for and receive relevant results, such as tracking when an object was placed and removed or identifying a person’s path across multiple cameras.

For these tools to be reliable, transparency is critical. In consumer applications, large language models sometimes generate confident but incorrect responses. In a security context, that is unacceptable. Systems should clearly indicate what they can and cannot detect, and why a result was returned—or not returned—so operators can make informed decisions.

Responsible use of machine learning in security is not just a question of system autonomy—it is a matter of accountability. In high-pressure moments, decisions have consequences—and the ultimate responsibility remains with humans.

The views and opinions expressed in guest posts and/or profiles are those of the authors or sources and do not necessarily reflect the official policy or position of the Security Industry Association.

This article originally appeared in All Things AI, a newsletter presented by the SIA AI Advisory Board.