Honeywell’s Don Morron discusses how the advent of generative AI presents a pivotal moment for the security industry.
In the world of physical security, there has historically been a noticeable lag in adopting technological advancements, primarily due to a reluctance to trust technology and an aversion to change rooted in conventional thinking. As time has shown, however, change is the only constant. Technology evolves at an increasingly rapid pace, with artificial intelligence (AI) at the forefront of this transformation.
Throughout history, we have witnessed countless instances of businesses that failed to keep pace with technological progress and eventually faced the choice of adapting or becoming obsolete. As human beings, our natural tendency is to focus on the potential negatives of new technology, a coping mechanism aimed at keeping us safe; yet this reluctance to embrace change has held back the security industry's progress. We need only look at the cautionary tales of companies like BlackBerry and Blockbuster, which failed to innovate and ultimately paid the price for their inaction.
Today, security professionals find themselves faced with a range of technologies that are often dismissed, whether they are considered old news, such as AI/machine learning or cloud computing, or more recent innovations like the metaverse, Web3 and blockchain. Traditionally, the physical security industry has had a narrow focus, centered on protecting people, property and profits. The advent of generative AI (GenAI), a subset of deep learning AI, however, presents a pivotal moment in history where we must choose to trust and adopt this technology, or risk becoming obsolete.
Take, for example, ChatGPT, a chatbot developed by OpenAI that garnered over 100 million users within two months of its release, marking an unprecedented achievement and shedding light on the potential of this groundbreaking technology. AI, despite existing since the 1950s, has only recently gained mainstream prominence. Why is this the case? GenAI is built on large language models (LLMs), which employ deep learning algorithms with billions or even trillions of parameters to draw new insights from diverse data sources, such as text and images. Think of it as a human brain capable of piecing together information from the vast expanse of the internet, connecting virtual neurons to generate creative content in mere seconds. The most crucial aspect, however, is accessibility. Never before has such advanced technology been so readily available. GPT and other LLMs have put some of the most powerful technologies within reach: publicly accessible websites, API integration capabilities that enhance products and services, and natural language commands that can transform a GenAI model into a subject matter expert in virtually any field with just a few prompts.
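To make that accessibility concrete, here is a minimal sketch of how a few lines of natural language "framing" can steer a general-purpose LLM toward a specialty such as physical security. The function names, the domain text, and the question are illustrative assumptions; the request shape follows the widely used chat-style API pattern (a system message plus a user message), but any provider with a similar REST interface would work the same way.

```python
import json

def build_expert_request(domain: str, question: str) -> dict:
    """Assemble a chat-style request that frames the model as a
    subject matter expert in `domain` before asking `question`.
    The model name is a placeholder; substitute your provider's."""
    return {
        "model": "gpt-4",
        "messages": [
            # The system message is the "few prompts" that turn a
            # general model into a domain specialist.
            {"role": "system",
             "content": f"You are a seasoned expert in {domain}. "
                        "Answer concisely and note any limitations."},
            {"role": "user", "content": question},
        ],
    }

payload = build_expert_request(
    "physical security system design",
    "What should I consider when adding AI video analytics to a campus?",
)
# This JSON body is what you would POST to the provider's chat endpoint.
print(json.dumps(payload, indent=2))
```

The point of the sketch is that the specialization lives entirely in a short text prompt, not in retraining the model, which is why this level of capability is now within reach of any integrator with basic web development skills.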
In summary, the possibilities are limitless, demonstrating how accessibility can accelerate the adoption, innovation and acceptance of technology. Nonetheless, this same accessibility means that malevolent actors on the dark web can exploit LLMs to commit crimes on an unprecedented scale. As security professionals, we must maintain an open mindset, focusing on the potential and the transformative technological change that await us.
So where do we go from here? Manufacturers, end users, specifiers and integrators in the field of physical security are at a crossroads. Our collective perspectives on AI can either propel us forward or hold us back, as they have historically. We have essentially transitioned into an IT-driven industry where virtually every piece of technology we encounter is connected to the internet, bringing topics like cybersecurity and cloud computing to the forefront. AI, although not a new concept, has resurfaced as a prominent force, a subject of conversation and consideration more than ever before. As security professionals, we owe it to ourselves and the future success of our industry to wholeheartedly embrace GenAI and all future AI developments. We must educate ourselves about the strengths, weaknesses, opportunities and threats this technology brings while avoiding the historical mistrust and resistance to change that could obscure our vision of the vast potential it holds. Failure to do so might lead to our industry's obsolescence, a risk we can ill afford in this era of rapid technological evolution.
This article originally appeared in RISE Together, a newsletter presented by SIA’s RISE community for emerging security industry leaders.
The views and opinions expressed in guest posts and/or profiles are those of the authors or sources and do not necessarily reflect the official policy or position of the Security Industry Association.