New AI Systems Generate Successes, Failures and Potential

Daniel Reichman is the CEO and chief scientist for Ai-RGUS and a member of the SIA AI Advisory Board.

Artificial intelligence (AI) and, more specifically, generative AI (gen-AI) are in the news almost daily. The technology captivated our collective attention approximately 18 months ago with the release of ChatGPT, which garnered 100 million users in just two months, a milestone that took the social media giant Facebook two and a half years to reach.

Gen-AI is focused on creating new content using artificial intelligence (hence the modifier “generative”). Before release to the public, a gen-AI system reviews enormous amounts of data (text, images, videos or all of the above) and uses statistical methods to learn to predict which word comes next in a sentence or how to fill in an image based on the surrounding context. In this way, it learns about language, about the world and about how to converse. By treating a sentence submitted by a user (a “prompt”) as the start of a paragraph, it can use its statistical model to complete that paragraph. Because it has reviewed a tremendous amount of data, it can draw on this information to answer new questions with relevant content and correct-sounding prose.
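To make the idea of next-word prediction concrete, here is a toy Python sketch (an illustration only, not how production systems are built): it tallies which word follows which in a tiny invented training text, then extends a prompt by repeatedly choosing the most likely next word. Real gen-AI systems apply far more sophisticated statistical models to billions of documents.

```python
# Toy next-word prediction: the statistical idea behind gen-AI, in miniature.
from collections import Counter, defaultdict

# A tiny, invented "training corpus" (real systems learn from billions of documents).
corpus = (
    "the camera detected motion near the gate "
    "the camera detected a person near the door "
    "the guard reviewed the camera footage"
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(prompt: str, length: int = 5) -> str:
    """Extend a prompt by repeatedly picking the most likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the camera"))  # -> "the camera detected motion near the camera"
```

Even this crude model produces locally plausible text; scaling the same idea up is what gives gen-AI both its fluency and, as discussed below, its failure modes.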

As this technology has been rolled out, there have been both success stories and stories of failure. That said, because this is a new technology in a new field, it is an exciting time for experimentation. People have used gen-AI, for example, to:

  • Get summaries of a topic that would otherwise require extensive research
  • Generate documentation, how-to guides and suggestions related to a specific situation or project
  • Generate photo-realistic imagery from a short textual prompt

Gen-AI’s ability to provide a user with information and images saves time and effort and has spurred a wave of new companies that can customize and tailor the technology’s abilities to specific use cases.

Some of the failures stem from the way in which gen-AI learns about the world around us. If the information it has seen on a topic is sparse, wrong or outdated, its knowledge will be limited, biased or corrupted. Another issue is that it completes sentences based on what is statistically most likely rather than what is verified, which can produce fluent but unpredictable results.

One case that led to difficulty for a user was when the gen-AI engine cited sources that were not real. The format and placement of the citations were correct because the engine had likely seen many such documents, but the “sources” themselves had been invented by the AI. This issue in automatic content production is referred to as “hallucination,” and avoiding or mitigating it is an active topic of research. As a result, it is highly advisable to verify the content that gen-AI produces before using it. The most successful way to use gen-AI, currently, is as a starting point and not as a final product.

One way that gen-AI is being deployed in the security industry is for video search and querying. A major concern in video surveillance is the amount of footage that must be reviewed to identify a problem or collect evidence. Providing an interface in which the user can describe the content of interest in plain English, and then having the AI identify images related to that query, can save a great deal of time. In the future, we can expect a conversational engine to be a standard feature of user interfaces, so that users can easily retrieve the information they would like to see.
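As a sketch of how such an interface can work under the hood, the Python example below assumes a joint text/image embedding model (here, an open CLIP model loaded through the sentence-transformers library; any comparable model would do) and ranks sampled surveillance frames against a plain-English query. The frame paths and the query text are placeholders for illustration.

```python
# Plain-English video search: rank sampled frames by similarity to a text query.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# A model that embeds text and images into the same vector space.
model = SentenceTransformer("clip-ViT-B-32")

# Embed frames sampled from the video once, ahead of time (placeholder paths).
frame_paths = ["frame_0001.jpg", "frame_0002.jpg", "frame_0003.jpg"]
frame_embeddings = model.encode([Image.open(p) for p in frame_paths])

# At query time, embed the user's description and rank frames by cosine similarity.
query = "a white van parked near the loading dock"
query_embedding = model.encode(query)
scores = util.cos_sim(query_embedding, frame_embeddings)[0]

# Show the best-matching frames first.
for path, score in sorted(zip(frame_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")
```

In a production deployment, the frame embeddings would typically be stored in a vector index so that a single query can search hours of footage in well under a second.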

The views and opinions expressed in guest posts and/or profiles are those of the authors or sources and do not necessarily reflect the official policy or position of the Security Industry Association.

This article originally appeared in All Things AI, a newsletter presented by the SIA AI Advisory Board.