Edge Video Analytics is an application of Edge Computing. Edge Computing is often framed as the opposite of Cloud Computing, although in practice it is usually augmented by the Cloud: most of the computing and filtering is done at the Edges instead of in the Cloud.
In formal terms, Edge Computing is a system-level horizontal architecture that distributes the resources and services of computing, storage, control and networking anywhere along the continuum from the Cloud to the edges. This reduces the amount of data transferred from the Edge to the Cloud, since most computation and storage is performed within or near the edge.
So what is an Edge? An edge could be a security camera, a jet engine, an autonomous car or virtually any other device, you name it. In Artificial Intelligence, these edges are also known as Agents. The distinction between dumb and intelligent agents is very common in IoT (Internet of Things) and Artificial Intelligence.
An agent or an edge is a device that takes some action by sensing the environment around it. An Agent/Edge is considered intelligent if it has the computing power to decide most things for itself. From now on, we may use the terms “Edge”, “Agent” and “Device” interchangeably, as they mostly refer to the same thing.
Edge Computing is frequently used interchangeably with Fog Computing, a standard that defines how edge computing should work. In essence, Fog Computing is the standard, and Edge Computing is the concept. The term Fog Computing was coined by Cisco as a way to bring cloud computing capabilities to the edge of the network. From now on, we may use both terms interchangeably, because they mostly refer to the same thing.
In Fog Computing there is a common term, Fog Node (or simply Fog), which refers to a group of edges that share information and computations among themselves, either through an intermediate server node or over peer-to-peer connections. A Fog deployment is, in short, a group of edges working together.
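To make the idea concrete, here is a minimal Python sketch of a fog node acting as a shared hub that a group of camera edges register with and publish events to. The class, method and field names are purely hypothetical and for illustration only; a real fog node would typically be a message broker or a dedicated server process.

```python
# A minimal sketch of a fog node acting as a shared hub for a group of edges.
# All names here are hypothetical, for illustration only.

class FogNode:
    """Aggregation point that a group of edge cameras register with."""

    def __init__(self, name):
        self.name = name
        self.edges = {}   # edge_id -> latest event reported by that edge
        self.events = []  # events shared across the whole fog

    def register_edge(self, edge_id):
        self.edges[edge_id] = {}

    def publish(self, edge_id, event):
        """An edge pushes a detection/event so peers in the fog can see it."""
        self.edges[edge_id] = event
        self.events.append({"edge": edge_id, **event})

    def query(self):
        """Any edge (or the cloud) can read the shared view of the fog."""
        return list(self.events)


# Usage: two cameras in the same deployment sharing detections via the fog node.
fog = FogNode("parking-lot-fog")
fog.register_edge("cam-01")
fog.register_edge("cam-02")
fog.publish("cam-01", {"type": "person", "confidence": 0.91})
fog.publish("cam-02", {"type": "vehicle", "confidence": 0.88})
print(fog.query())
```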
As discussed, edges are agents that sense data from their environment. If an edge produces graphical or video data, it can be termed a Video Edge. Typically these edges are video cameras that record movements in their surroundings as sequences of images.
For video-recording edges, fog computing can provide analytics, anomaly detection, object identification and much more in real time, which would not be possible with a purely Cloud-based solution.
The infrastructure of Edge Video Analytics provides a mesh of fog nodes that intelligently partitions video processing between fog nodes co-located with the cameras and the cloud, enabling real-time tracking, anomaly detection, and insights from data collected over long time intervals.
Assume we have hundreds of security cameras deployed in a region, each recording tens of gigabytes of video daily. Collectively these cameras generate several terabytes of data every day. If sent directly to the cloud, this data would be overwhelming to process, and our network bandwidth would become a bottleneck, which can be catastrophic, as it results in delayed processing. We cannot afford a delayed response from a security camera.
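A quick back-of-the-envelope calculation shows how fast this adds up. The camera count and bitrate below are illustrative assumptions, not figures from any real deployment:

```python
# Back-of-the-envelope estimate of raw upload volume for a camera deployment.
# The numbers below (camera count, bitrate) are illustrative assumptions.

cameras = 300                      # hypothetical number of cameras in the region
bitrate_mbps = 4                   # a typical 1080p H.264 stream, assumed
seconds_per_day = 24 * 60 * 60

gb_per_camera_per_day = bitrate_mbps * seconds_per_day / 8 / 1000   # Mb -> GB
tb_per_day_total = cameras * gb_per_camera_per_day / 1000           # GB -> TB

print(f"Per camera: ~{gb_per_camera_per_day:.0f} GB/day")
print(f"Whole deployment: ~{tb_per_day_total:.1f} TB/day of raw video")
```

With these assumed numbers, each camera produces roughly 43 GB per day and the whole deployment pushes around 13 TB of raw video daily, well beyond what a typical uplink can comfortably carry.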
The solution to this problem is Edge Video Analytics (Edge Computing). We enable the Edges (cameras) to process the video themselves and transmit only the relevant information to the cloud. The concept is fascinating, as it turns a dumb device into an intelligent one.
But like any other concept, Edge Computing and Edge Video Analytics have both strong points and pain points. The idea is very appealing, but it comes with a few caveats, some of which we discuss below.
Now let’s talk specifically about Video Analytics at the Edge (i.e. Edge Video Analytics). As discussed above, we can leverage the low latency and local processing capabilities of Edge Computing for video processing. This enables the cameras to produce real-time analytics that can be used for surveillance, anomaly detection, tracking, collecting insights from images and many other tasks. Edge Video Analytics can be implemented with different architectures, depending on the specific use case.
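As a minimal sketch of what processing at the edge could look like, the snippet below runs a cheap motion check on each frame using OpenCV's background subtractor and forwards only frames with activity. The send_to_fog function and the motion threshold are placeholders assumed purely for illustration:

```python
# A minimal sketch of on-camera filtering: run cheap motion detection at the
# edge and forward only frames that contain activity. Uses OpenCV's MOG2
# background subtractor; "send_to_fog" is a hypothetical placeholder.

import cv2

def send_to_fog(frame, score):
    # Placeholder: a real deployment would publish the frame or its metadata
    # to a fog node / message broker instead of printing.
    print(f"activity detected (foreground pixels: {score}), forwarding frame")

cap = cv2.VideoCapture(0)                      # 0 = local camera stream
subtractor = cv2.createBackgroundSubtractorMOG2()
MOTION_THRESHOLD = 5000                        # assumed tuning value

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground (moving) pixels
    score = cv2.countNonZero(mask)
    if score > MOTION_THRESHOLD:
        send_to_fog(frame, score)              # only "relevant" frames leave the edge

cap.release()
```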
As a technological paradigm, edge computing may be architecturally organized as peer-to-peer computing, autonomic (self-healing) computing, grid computing, or other models that imply non-centralized availability.
Often the Edges are pooled together to form a fog, where each fog has one or more local processors that handle requests from the heterogeneous camera devices. Transmission to the cloud can also be routed through the fog nodes.
The devices within a fog that host the local processors and storage are called fog nodes. Different video analytics and machine learning algorithms can be deployed on these fog nodes, which intelligently partition video processing between the cameras and the cloud.
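Here is a rough sketch of how that partitioning might look across the three tiers, with the camera doing the cheapest filtering, the fog node doing heavier analytics, and the cloud receiving only compact metadata. The stage functions are hypothetical stubs, not a reference implementation:

```python
# A sketch of partitioning video processing across camera, fog node and cloud.
# The stage functions below are hypothetical stand-ins: real deployments would
# plug in an actual motion detector, object detector, and storage backend.

def camera_stage(frame):
    """Edge/camera tier: cheapest possible filter, e.g. motion gating."""
    return frame if frame.get("motion", False) else None   # stub check

def fog_stage(frame):
    """Fog-node tier: heavier analytics on frames that passed the camera stage."""
    return [{"label": obj, "source": frame["camera"]} for obj in frame.get("objects", [])]

def cloud_stage(metadata):
    """Cloud tier: receives only compact metadata, never raw video."""
    print("archived:", metadata)                            # stub for long-term storage

def pipeline(frame):
    kept = camera_stage(frame)
    if kept is None:
        return                                  # nothing relevant: no upstream traffic
    metadata = fog_stage(kept)
    if metadata:
        cloud_stage(metadata)

# Usage with toy frames: only the frame with motion generates cloud traffic.
pipeline({"camera": "cam-01", "motion": False})
pipeline({"camera": "cam-02", "motion": True, "objects": ["person", "bicycle"]})
```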
Pooling the edges together makes the system cost-effective; the drawback of this architecture is that we have to ensure high availability of these local processors and storage.
A variant of the above architecture is to use hierarchical fog nodes, i.e. top-level fog nodes that connect several other fog nodes and aggregate the information received from them.
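A minimal sketch of such aggregation, assuming each lower-level fog node reports a simple per-object count upward (the report structure and field names are illustrative assumptions):

```python
# A top-level fog node combining summaries reported by lower-level fog nodes.
from collections import Counter

def aggregate(child_reports):
    """Top-level fog node: merge per-site object counts into a regional summary."""
    total = Counter()
    for report in child_reports:
        total.update(report["counts"])
    return dict(total)

# Usage: two lower-level fog nodes (each serving its own group of cameras)
# report summaries upward; only this aggregate would continue to the cloud.
reports = [
    {"fog_node": "block-A", "counts": {"person": 42, "vehicle": 17}},
    {"fog_node": "block-B", "counts": {"person": 8, "vehicle": 23}},
]
print(aggregate(reports))   # {'person': 50, 'vehicle': 40}
```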
Another architecture is to place a processing unit at each Edge; this increases the overall cost of infrastructure and maintenance, but offers very low latency.
Yet another architecture allows the edges within a fog node to communicate with each other and form a mesh.
Likewise, there are numerous ways an Edge Video Analytics architecture can be implemented, and the choice really depends on the use case and feasibility. The responsibilities of each component (Cloud, Fog Nodes, Edges) can also vary by use case. Some architectures add another layer between the Fog Nodes and the Edges, called the Application Layer, where processing occurs.
Edge Video Analytics is a trending field nowadays, as we are witnessing a boom in Big Data and Artificial Intelligence (specifically Machine Learning and Deep Learning). With hardware and computation costs getting cheaper, the concept is becoming more and more realizable.
Running on this newer hardware, AI algorithms let us extract meaningful information from images and video streams, so that video generated by endpoint or edge devices (i.e. cameras) is not transmitted directly to the Cloud; only the relevant information is sent, enabling faster response and quicker decision making.
Furthermore, there is the OpenFog Consortium (https://www.openfogconsortium.org/), a non-profit organization accelerating the adoption of Edge Computing in order to solve the bandwidth, latency, communications and security challenges associated with IoT, 5G and Artificial Intelligence. It also publishes a reference architecture and implementation guide for Fog Computing; I have added that guide’s link under references.
All in all, the concept of Edge Video Analytics and Edge Computing is really fascinating, and may lead to the realization of many ideas that were previously dismissed as science fiction.