
Edge Computing and Fog Computing

In this post, Edge Computing and Fog Computing will be explained. Network engineers, network designers, and network architects need to know these two important industry terms and their architectures in order to decide how and where to deploy workloads, where to store data, and to make many other important architectural decisions.

I will start by explaining Edge Computing and Fog Computing, and after that we will compare them and try to understand the architectural differences.

Let’s start with Edge Computing.

 

Edge Computing:

 

Edge computing is a networking philosophy focused on bringing computing as close to the source of data as possible, in order to reduce latency and bandwidth usage.
In simpler terms, edge computing means running fewer processes in the cloud and moving those processes to local places, such as a user’s computer, an IoT device, or an edge server. Bringing computation to the network’s edge minimizes the amount of long-distance communication that has to happen between a client and a server.
It is important to understand that the edge of the network is geographically close to the device, unlike origin servers and cloud servers, which can be very far from the devices they communicate with.
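To make the idea more concrete, here is a minimal sketch of edge-side processing. It assumes a hypothetical sensor, a hypothetical cloud endpoint URL, and a placeholder upload function; the point is only that per-sample work stays on the local device, and just a small summary crosses the long-distance link to the cloud.

```python
# Minimal sketch of edge-side aggregation (hypothetical names throughout).
import statistics
import time

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # hypothetical endpoint


def read_sensor() -> float:
    """Placeholder for a local sensor read (e.g., a temperature probe)."""
    return 21.0 + time.time() % 1  # dummy value for illustration


def send_to_cloud(summary: dict) -> None:
    """Placeholder upload; a real edge node might use HTTPS or MQTT here."""
    print(f"Uploading to {CLOUD_ENDPOINT}: {summary}")


def edge_loop(samples_per_batch: int = 60) -> None:
    # All per-sample processing happens locally, at the edge of the network.
    batch = [read_sensor() for _ in range(samples_per_batch)]

    # Only one compact summary travels over the long-distance link,
    # instead of streaming every raw sample to a distant cloud server.
    summary = {
        "count": len(batch),
        "mean": round(statistics.mean(batch), 3),
        "max": round(max(batch), 3),
    }
    send_to_cloud(summary)


if __name__ == "__main__":
    edge_loop()
```

In this sketch, sixty raw readings are reduced to one small message before anything leaves the device, which is exactly the latency and bandwidth saving that edge computing aims for.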
Cloud computing offers a significant amount of resources (e.g., processing, memory, and storage) for the computation requirements of mobile applications. However, gathering all of the computation resources in a distant cloud environment started to cause issues for applications that are latency-sensitive and bandwidth-hungry.