October is Cybersecurity Awareness Month, and the Security Industry Association (SIA) Cybersecurity Advisory Board is marking the occasion with a series of helpful content pieces, tips and guidance on key cybersecurity topics. In this blog from SIA Cybersecurity Advisory Board member Pauline Norstrom – founder and CEO of Anekanta Consulting – learn about mitigating artificial intelligence-driven cybersecurity threats to physical security products.
Artificial intelligence (AI) is rapidly transforming the physical security industry, and many Internet of Things (IoT) and networked devices enhanced by AI techniques are now used to secure buildings and protect people. However, AI also poses new cybersecurity threats, as malicious actors may use it to develop more sophisticated and effective attacks. Cybersecurity professionals need to stay continually informed about the latest AI threat landscape and the AI tools available to counter it. Some of the most common AI cyber threats use a range of generative AI, machine learning, natural language processing and computer vision techniques and may include:
- Deepfakes: Deepfakes are videos or audio recordings that have been manipulated to make it appear as if someone is saying or doing something they never actually said or did. Deepfakes can be used to impersonate authorized personnel and gain access to secure areas or to spread misinformation and create chaos.
- IoT malware: IoT malware is malicious software that is designed to target IoT devices. IoT malware can be used to take control of devices, steal data or disrupt operations.
- AI-powered phishing attacks: Phishing attacks are a type of social engineering attack in which the attacker attempts to trick the victim into revealing confidential information or clicking on a malicious link. Generative AI can be used to create more targeted and convincing phishing attacks.
- Zero-day attacks: Zero-day attacks exploit software vulnerabilities that the vendor is not yet aware of. AI can be used to automate the discovery and exploitation of zero-day vulnerabilities.
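To make the phishing threat above concrete, here is a minimal, hypothetical sketch of heuristic scoring for a suspicious message. It is illustrative only: the word lists, weights and threshold are assumptions, and real AI-powered defenses rely on trained models rather than hand-written rules.

```python
import re

# Hypothetical heuristic phishing scorer -- a minimal sketch, not a
# production detector. Word lists and weights are illustrative assumptions.
URGENCY = {"urgent", "immediately", "verify", "suspended", "expires"}
CREDENTIAL = {"password", "login", "ssn", "account number"}

def phishing_score(message: str) -> int:
    """Return a rough risk score for an email body (higher = more suspicious)."""
    text = message.lower()
    score = 0
    score += sum(2 for w in URGENCY if w in text)     # urgency cues
    score += sum(3 for w in CREDENTIAL if w in text)  # credential requests
    # Links pointing at a raw IP address instead of a domain are suspicious.
    if re.search(r"http://\d+\.\d+\.\d+\.\d+", text):
        score += 5
    return score

msg = "URGENT: your account is suspended. Verify your password at http://192.0.2.1/login"
print(phishing_score(msg))  # high score: urgency words, credential request, raw-IP link
```

The point of the sketch is the gap it exposes: generative AI can produce phishing text that avoids exactly these surface cues, which is why defenders increasingly need model-based detection rather than static rules.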
AI Pitted Against AI
AI may also drive the tools which identify and protect against AI cybersecurity threats and, in doing so, improve cyber-resilience:
- Intrusion detection systems (IDS): AI-powered IDS use machine learning to identify and respond automatically to suspicious activity on networks and devices by classifying the attack, sending alerts and isolating the affected network.
- Vulnerability scanners: AI-powered vulnerability scanners use machine learning to automate the identification of vulnerabilities such as missing security updates, misconfigurations and API issues.
- Risk assessment tools: AI-powered risk tools may use machine learning to assess the risk of cyberattacks on specific organizations and assets by analyzing the results of risk checklists, automatically determining a risk level and providing an easy-to-interpret report.
- Security automation tools: AI security automation tools can be used to automate tasks such as security incident response and patch management.
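The anomaly-detection idea behind AI-powered IDS can be sketched with a simple statistical baseline, which here stands in for the machine learning models real products use. The device names, traffic figures and z-score threshold below are hypothetical.

```python
import statistics

# Minimal sketch of anomaly-based intrusion detection: a z-score baseline
# stands in for a trained ML model. Device names and data are hypothetical.
def find_anomalies(rates: dict, z_threshold: float = 3.0) -> list:
    """Flag devices whose latest traffic rate deviates sharply from baseline."""
    alerts = []
    for device, history in rates.items():
        baseline, latest = history[:-1], history[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and abs(latest - mean) / stdev > z_threshold:
            alerts.append(device)  # candidate for alerting and isolation
    return alerts

traffic = {
    "camera-01":    [12, 11, 13, 12, 11, 12],  # steady traffic rate
    "door-ctrl-02": [5, 6, 5, 5, 6, 250],      # sudden spike
}
print(find_anomalies(traffic))  # flags the device with the spike
```

A real AI-powered IDS would go further, classifying the attack type and triggering automated isolation, but the core loop is the same: learn a baseline, then act on deviations from it.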
How to Protect Physical Security Products from AI-Driven Cyber Threats
There are a number of steps that organizations can take to protect their physical security products from AI-driven cybersecurity threats, including:
- Software updates: It is important to keep all software up to date, including the firmware on physical security devices. Software updates often include security patches that can fix vulnerabilities that could be exploited by attackers.
- Strong passwords and multifactor authentication: Using strong passwords and multifactor authentication can help to prevent attackers from gaining access to physical security devices and systems.
- Segment networks: Segmenting networks can help to limit the damage that can be done by an attacker if they are able to breach the network.
- Monitor networks and devices: Use AI-powered IDS and other state-of-the-art security tools to monitor networks and devices for suspicious activity.
- Develop and maintain a response plan: It is important to have a plan in place to respond to security incidents in a timely and effective manner. The plan should include AI transparency and explainability information to aid communication with customers and other stakeholders who may be affected by a breach.
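The hardening steps above can be sketched as an automated audit over a device inventory. This is a hypothetical illustration: the field names and remediation strings are assumptions, not a real device API.

```python
# Hypothetical hardening audit turning the steps above into automated
# checks. Field names are illustrative, not a real device API.
REQUIRED = {
    "firmware_current":  "apply pending software/firmware updates",
    "strong_password":   "replace weak or default credentials",
    "mfa_enabled":       "enable multifactor authentication",
    "network_segmented": "move device to a segmented network",
    "monitored":         "enroll device in IDS monitoring",
}

def audit(device: dict) -> list:
    """Return remediation actions for every failed check on a device."""
    return [fix for check, fix in REQUIRED.items() if not device.get(check)]

cam = {"firmware_current": True, "strong_password": False,
       "mfa_enabled": True, "network_segmented": False, "monitored": True}
for action in audit(cam):
    print("TODO:", action)
```

Running such a check regularly, rather than once at deployment, is what keeps the earlier steps (updates, credentials, segmentation, monitoring) from silently drifting out of compliance.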
Strike a Balance Between Technology and Human Oversight
It is important to strike a good balance between using AI to secure physical security products and maintaining human oversight. AI can be very effective at detecting and responding to cybersecurity threats, but it is important that the right people are in control and that oversight is defined in policies and procedures. An overarching AI governance policy in the organization, which may sit on the board risk register, should cover the protection of all critical systems, including security, with a chain of accountability to the top table. On the front line, the people operating and maintaining the systems should have adequate and measurable training to review AI decisions and to ensure systems are performing correctly within the defined and agreed scope of use. Further measures support this balance:
- Employee training and education: Employees should be aware of the latest AI-driven cybersecurity threats and how to identify and avoid them.
- Security vendors: Security vendors may provide organizations with advice and support on how to protect their physical security products from AI cybersecurity threats.
- Investment: Organizations should invest in research and development to develop systems and innovative processes to protect their physical security products from AI cyber threats.
AI is a powerful tool that can be used to both threaten and protect physical security products. It is important for organizations to be aware of the latest AI-driven cybersecurity threats and to take steps to protect their systems. Organizations should also strike a good balance between using AI to secure their systems and maintaining human oversight to ensure a clear route to accountability, reduce overdependence on the AI system’s decisions and be equipped to ask the right questions about transparency and explainability. Without these measures in place, a black box system may leave company executives in an embarrassing position if they are questioned about an attack and have no answers about the nature of the AI threat or the way the AI tool averted it.