Body-Worn and Mobile Video and Sensor Networks


SIA Education@ISC West will provide conferees with more than 80 sessions of valuable information on important topics in the security industry at the Sands Expo in Las Vegas on April 10-12.

Robert Ehlers, Vice President of Business Development, RGB Spectrum, will present Body-Worn and Mobile Video and Sensor Networks on Thursday, April 12, at 9:45 a.m. SIA’s blog chatted with Ehlers to learn more about his session. Register for ISC West, and find more info on SIA Education@ISC West class sessions.

SIA: Can you tell us a little bit about your background and how you were inspired to present on the topic of “Body-Worn and Mobile Video and Sensor Networks” at ISC West?

Ehlers: I am currently the Vice President of Business Development at RGB Spectrum. In this role, I regularly interact with customers building Emergency Operations Centers, Public Safety Answering Points and other similar control rooms where the focus is on real-time situational awareness and operations management. I have designed many audio/visual-enabled control rooms for these types of customers, as well as consulted on workflow process design and supporting technology.

I also founded HauteSpot Networks Corp., which has developed wireless routers, deployable communications systems, wireless (LTE and Wi-Fi mesh) body-worn and digital in-car audio/visual systems, and edge mesh video processing systems. These systems are deployed in commercial, industrial and government applications worldwide. HauteSpot products are now available through SMC Networks, which provides service, support, logistics, integration with cloud services, and much more, greatly enhancing the value proposition of HauteSpot solutions.

SIA: Briefly, what do you anticipate are the key takeaways concerning body-worn technology and surveillance networks in your session?

Ehlers: Body-worn cameras have traditionally been a forensic-only, disconnected, evidence-gathering tool—essentially, a wearable DVR. But with the evolution of ubiquitous wireless communications and advances in video technology such as low-power consumption, better compression and high-resolution imagers, supported by infrastructure and policy changes such as FirstNet (the public safety communication network based on LTE technology), body-worn technology is becoming a critical part of communications and operations. Live real-time audio/visual streaming, location tracking, biometric tracking, analytics and alarming contribute to better management of assets, people and incidents, in addition to evidence gathering. We are on the cusp of a new paradigm in communications, operations and workflow for public safety and other vertical applications.

SIA: What is an important consideration for folks to ponder in the state of connected body-worn and mobile surveillance technology today?

Ehlers: What are you buying when you look at body-worn cameras: an evidence collection system or an operational communications system? Most law enforcement agencies want evidence collection as the primary application and completely overlook the value of live audio/visual streaming, location tracking, peer-to-peer audio/visual sharing, etc. Requests for proposals do not address real-time communications, and the funding process has overlooked this valuable application.

Additionally, many agencies think that real-time audio/visual communication can be accomplished by simply putting an app on smartphones. But a smartphone does not satisfy the requirements for body-worn applications: power consumption, data integrity, mesh wireless, tamper evidence, durability and reliability are key factors to consider. Not every body-worn application requires a dedicated device, but most do.

Now, we have FirstNet, which is being rolled out by AT&T nationwide. FirstNet updates the communications infrastructure for public safety so that real-time communication of audio/visual, location and sensor data is possible. So now is the time to start planning for new applications of body-worn technology. Even if you cannot use FirstNet, solutions can run over private broadband or commercial cellular today using existing infrastructure and rate plans.

Finally, interoperability is key. Live audio/visual streaming and recorded evidence both need to be able to flow from body-worn technology to many different applications, including Records Management Systems (RMS), Computer Aided Dispatch Systems (CAD), Incident and Emergency Management Systems, Security Video Management Systems (VMS), Content Management and Distribution Systems (CMS), and more. Backend integration using standard protocols, file formats, metadata structures, etc., is critical. Avoiding proprietary architectures and protocols is essential. Many body-worn solutions today are built solely as data storage repositories; they do not support live real-time functions, and they were not designed for interoperability.

SIA: What might you say to people reluctant to consider body-worn or mobile surveillance technology? Are they going to miss out on benefits of some technological advancements?

Ehlers: Real-time body-worn audio/visual communications systems are just getting started. But if the market ends up being anything like the evidence collection market, it is essential to be in the market early. Paradigms will be set, workflow processes established, and funding approved today for implementations years out. Understanding the law enforcement market early, and influencing it before widescale adoption happens, is essential.

Remember that body-worn audio/visual technology is not limited to law enforcement. There are many other applications in markets such as healthcare, transportation, utilities, guard services, fleet services and more. These markets are ready for body-worn audio/visual communication systems, and the technology is starting to arrive from innovative solution providers like HauteSpot Networks/SMC Networks and RGB Spectrum.

Early entrants into the market will define standards, protocols, workflows, features and interoperability. Once established, these entrants will have an advantage over later entrants, who will have to play catch-up.

SIA: What is an example of how a specific venue, perhaps in a specific vertical, might impact a requirement for design of a network to deploy body-worn and mobile wireless surveillance tech?

Ehlers: Let’s use an example of a shopping mall.

A public-private consortium is established among all interested parties: the property owner of the shopping mall, the tenant businesses in the mall, the security guard service/central station monitoring service, community groups and the local police. This group establishes a policy for audio/visual data use and sharing between participants. This sets the ground rules for how the interested parties will work together. Policy is very important before any body-worn technology project gets underway.

Then, the shopping mall property owner sets up a private broadband wireless network that covers the entire property. This is done by placing wireless mesh nodes throughout the property, generally at locations where IP security cameras will be placed. The nodes will serve both as connectivity for the cameras and as access points for the body-worn and in-car devices to connect. Some of the nodes can also have LTE connectivity to an off-site monitoring site and to allow police access. The network can also deliver connectivity to tenant networks if agreed.
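The dual role of mesh nodes described above can be sketched in code. This is a minimal, hypothetical model; the node names, the `MeshNode` class and its fields are illustrative assumptions, not a real product API.

```python
# Hypothetical sketch of the mall mesh topology: each node can serve a fixed
# IP camera, act as an access point for body-worn/in-car devices, and
# optionally carry LTE backhaul for off-site monitoring and police access.
from dataclasses import dataclass

@dataclass
class MeshNode:
    name: str
    camera_attached: bool = False  # serves a fixed IP security camera
    access_point: bool = True      # body-worn/in-car devices can connect here
    lte_backhaul: bool = False     # off-site monitoring / police access path

nodes = [
    MeshNode("north-entrance", camera_attached=True),
    MeshNode("food-court", camera_attached=True),
    MeshNode("parking-roof", camera_attached=True, lte_backhaul=True),
]

# Nodes with LTE backhaul act as gateways off the property.
gateways = [n.name for n in nodes if n.lte_backhaul]
print(gateways)  # ['parking-roof']
```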

The guards at the shopping center and the police all have wireless body-worn audio/visual communication devices, and there are wireless digital in-car systems installed in their vehicles. Supervisors and police can also have apps on their smartphones for accessing and viewing audio/visual and GIS information.

When an incident, such as a burglary, occurs, a guard service dispatcher can remotely enable the cameras on the guards and send video from cameras in the store generating the alarm to the guards responding.

Once the guards verify the alarm and see that the break-in was not a false alarm, police could be called, and the video from the guards, from the cameras around the mall and from cameras in the store could all be shared with the police.

When the police are dispatched, their police dispatcher enables the cameras in their cars and on their bodies. The police are connected via LTE until they arrive on-site, where they may switch to the local Wi-Fi broadband at the mall. The police arrive on scene with a good idea of what the situation is, the precise location of the break-in, and maybe even pictures of the suspects. They go precisely to where the problem is. All assets are tracked, and supervisors know exactly who is where and what they are doing.

Alarms for motion or lack of motion, geo-fencing, track monitoring, speed of travel, direction of travel, proximity of other assets and much more can alert supervisors to impending issues.

All video, audio, location and other sensor data is recorded both at the dispatch centers and on the officers' and guards' devices. All data is marked with metadata related to the incident ID, location, time, officer IDs, officer names, camera IDs, victim names and other data that may be used later to correlate files, track chain of custody and search.
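A metadata record like the one described above might look as follows. This is only an illustrative sketch; the field names, IDs and coordinates are assumptions, not a specific product or CJIS schema.

```python
# Hypothetical metadata record attached to a recorded clip so downstream
# systems (RMS, CAD, VMS, etc.) can correlate files and track chain of custody.
import json
from datetime import datetime, timezone

record = {
    "incident_id": "2018-0412-0031",          # assigned by dispatch (illustrative)
    "location": {"lat": 36.1215, "lon": -115.1739},
    "timestamp": datetime(2018, 4, 12, 9, 45, tzinfo=timezone.utc).isoformat(),
    "officer_ids": ["G-104"],
    "officer_names": ["J. Doe"],
    "camera_id": "BWC-0042",
    "victim_names": [],
}

# Serializing to JSON lets the metadata travel with the clip in a
# standard, searchable form.
payload = json.dumps(record, indent=2)
print(payload)
```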

Should the local Wi-Fi broadband network fail at the mall, LTE takes over. Should LTE fail, local Wi-Fi takes over.
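The mutual failover between the two links can be expressed as a simple selection rule. The function below is a minimal sketch under stated assumptions: the link names and the up/down flags are hypothetical placeholders, not how any particular device implements failover.

```python
# Minimal sketch of the Wi-Fi <-> LTE mutual failover described above:
# prefer one link, fall back to the other if it is down, and fall back
# to on-device recording if both paths are down.
def select_link(wifi_up: bool, lte_up: bool, prefer: str = "wifi") -> str:
    links = {"wifi": wifi_up, "lte": lte_up}
    if links[prefer]:
        return prefer
    backup = "lte" if prefer == "wifi" else "wifi"
    if links[backup]:
        return backup
    return "local"  # both paths down: evidence is still stored on-device

print(select_link(wifi_up=False, lte_up=True))  # 'lte'
```

Note that evidence is never lost when both links fail, because (as described below) recordings are always stored locally on the body-worn and in-car devices.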

Evidence is always stored locally on the body-worn and in-car devices, and transferred using Criminal Justice Information Services (CJIS) chain of custody management over encrypted links and with full tamper-evident watermarking. This is separate from the live audio/visual GIS streaming for operations. The two functions are treated separately but supported by a single converged architecture.

SIA: What’s one thing you would like folks to think about prior to attending the session?

Ehlers: What other applications besides law enforcement could you apply live audio/visual GIS streaming to?

What metadata would you apply to the audio/visual data to make it more useful?

What other systems would you integrate a live streaming audio/visual GIS network to? What APIs, protocols and file formats do they require?

If you want to integrate live streaming audio/visual GIS data with an existing video management system (VMS), how would you do it? Does your current VMS support rapid import of video and audio data? Does your current VMS support other file formats such as PDF, DOC, XLS, MOV, AVI, etc.?