Edge analytics isn’t a form of extreme analytics with data scientists performing complex calculations on precipitous cliffs. So what is it, and why does it matter? It’s key to effective network monitoring. Here’s the theory:
- You want to minimise the amount of data you move around and store, because:
  - there is a cost to comms, processing and storage
  - you don’t want to drown in the data; you only want the useful information that can be extracted from it
- You want to maximise battery life in your sensors, because batteries are expensive, and so is replacing them in the field
So, instead of gathering every piece of available data, sending it back in its raw form, and processing and storing it centrally, you do as much of that processing and storage as close to the ‘edge’ as possible. Of course, there is a cost to processing and storing data at the edge too, so the decision about how to balance load across the architecture of the solution isn’t a simple one.
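As a minimal sketch of the idea, the snippet below simulates a sensor that aggregates its raw readings locally and transmits only a small summary, rather than every sample. The sampling rate, field names and values are illustrative assumptions, not part of any particular product.

```python
import random
import statistics

def raw_readings(n=60):
    """Simulate one minute of per-second temperature samples (hypothetical values)."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]

def edge_summary(samples):
    """Aggregate at the edge: return a small summary instead of the raw samples."""
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.mean(samples),
        "count": len(samples),
    }

samples = raw_readings()
summary = edge_summary(samples)
# 60 raw values are reduced to a 4-field summary: far less to transmit and store,
# and far fewer radio transmissions, which is what drains sensor batteries.
```

In a real deployment the balance point varies: a richer summary (percentiles, anomaly flags) costs more edge compute but saves more comms, which is exactly the load-balancing trade-off described above.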
IDC in its FutureScape for IoT report suggested that “by 2019, at least 40% of IoT-created data will be stored, processed, analyzed, and acted upon close to, or at the edge of, the network.”
Don’t rush, therefore, to increase the frequency of raw data return from your loggers. You will likely increase cost and drown in the data. Instead, use a solution in which the analytics are appropriately distributed across the architecture, maximising the effectiveness of the solution while minimising its cost.