Data management needs a stable ‘scaffold’



By Greg Hookings, Director of Business Development – EMEA, Stratus Technologies.

The digital transformation journey is being travelled across the business landscape, and the pace has only accelerated since Covid. Digital transformation is fundamental for companies to remain competitive, and this one-way journey is not simply a matter of throwing resources at a digitalisation strategy and expecting the best results.
  
An enterprise that has undergone digital transformation has essentially opened up the flow of data. This flow consists of data produced from a variety of sources: a piece of equipment, a whole plant, a retail space, or a remote asset. Data from any of these sources is then measured and analysed before the resulting changes are implemented.


Once all steps are complete, a company can expect to see improved efficiency and more profitable operations. Simple, right? Not quite. Companies must still answer an integral question: do we have the optimum data ‘scaffold’ in place to support data flow and information availability throughout the enterprise?

What is the data scaffold?

Even if you haven’t heard the term data scaffold, you will have crossed paths with what it describes. In the simplest terms, a data scaffold is exactly what the name suggests: a supporting framework (or platform) for making information available throughout an enterprise.
  
Gathering data for the top of the funnel is the easy part; modern facilities, whether they are making a product, welcoming customers, or processing a resource, produce an abundance of data. The challenge is turning this mountain of data into actionable intelligence. Every modern capability stems from this point: initiatives such as predictive maintenance, digital twins, artificial intelligence (AI) and machine learning (ML) all require large amounts of high-quality data.

Data decision cycles

The journey of data, and the reason a robust data scaffold is needed, typically follows a decision cycle: measuring the data produced by an asset or facility, analysing it, refining the process, deciding on next steps, and finally implementing the change. Again, using this cycle to improve operational efficiency is not about the quantity of data; it is about the quality.
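As a rough illustration only, that cycle can be sketched as a small chain of functions. The throughput readings, target threshold and ‘adjust line speed’ action below are hypothetical placeholders, not a real control system.

# A minimal sketch of the measure -> analyse -> decide -> implement cycle.
# All values, thresholds and actions are hypothetical.

def measure(readings):
    """Collect the latest throughput figures from a line (here, a static list)."""
    return readings

def analyse(values):
    """Reduce raw readings to one quality metric: average throughput."""
    return sum(values) / len(values)

def decide(avg_throughput, target=100.0):
    """Compare performance against a target and choose a corrective action."""
    return "adjust_line_speed" if avg_throughput < target else "no_action"

def implement(action):
    """Apply (or simply log) the chosen change so the next cycle can re-measure it."""
    print(f"Implementing: {action}")

readings = [92.0, 97.5, 88.3, 101.2]           # hypothetical hourly throughput
implement(decide(analyse(measure(readings))))  # one pass through the cycle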
  
To achieve a return on investment (RoI) from a digital programme, enterprises need to measure the spend against outcomes such as enhanced operational reliability and improved safety performance. Success here depends on data that can flow freely throughout an organisation: if data is analysed and changes are implemented in a silo, disconnected from the rest of the network, how can an enterprise see or measure company-wide improvements?

Without a data scaffold in place, even the best artificial intelligence and machine learning algorithms will derail. This is because facilities often keep production, operations, and equipment data flowing in closed loops, creating three silos of information.
  
Breaking these silos with a robust scaffold means information can flow in all directions. Physical devices that produce data (sensors, valve actuators, motor starters) no longer do so in isolation; their data streams are integrated, paving the way for major efficiency improvements that take into account information from across the entire enterprise.
  
An example of this journey would be a manufacturing facility seeing lower output on one production line than on the others. With sensors in place, it can measure how the machine performs, including any unplanned downtime and overall production data. What it cannot do is correlate this data with other enterprise data to pinpoint the exact cause.
  
With a strong data scaffold, that same manufacturer would be able to see whether the cause was down to employee practices, other assets in the production process, a maintenance issue, or unforeseen environmental factors. It can then proactively solve the issue without the dreaded rip-and-replace of high-cost assets.
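As an illustrative sketch of what that cross-referencing might look like in practice, the snippet below joins hypothetical downtime events with equally hypothetical maintenance and shift records; none of the field names or values come from a real system.

# A minimal sketch of correlating line data with other enterprise data.
# All records and field names here are hypothetical.

downtime_events = [
    {"line": "L3", "hour": 14, "minutes_lost": 42},
    {"line": "L3", "hour": 15, "minutes_lost": 18},
]
maintenance_log = {("L3", 14): "overdue filter change"}   # keyed by (line, hour)
shift_roster = {14: "night shift", 15: "night shift"}

# Join the silos: attach maintenance and staffing context to each downtime event.
for event in downtime_events:
    key = (event["line"], event["hour"])
    context = {
        "maintenance": maintenance_log.get(key, "none recorded"),
        "shift": shift_roster.get(event["hour"], "unknown"),
    }
    print(event, "->", context)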

Unlocking the flow of data

All of the capabilities of a robust data scaffold, and those it enables, can be achieved through the right Edge Computing platform: one with the capacity to collect, analyse and store data right at the source. This closes the data gaps that can erode trust when such information is used in critical decisions that affect the whole company. The right platform delivers data of the right quantity and quality to enable analytics projects, while removing the manual calibration and human input that can introduce errors.
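The sketch below illustrates the collect-analyse-store-at-the-source idea under assumed conditions: the sensor read, file path and summary statistics are hypothetical stand-ins for whatever a real edge platform would provide.

# A minimal sketch of collect/analyse/store at the source on an edge node.
# The reading source, file path and summary logic are hypothetical.

import json, statistics, time

def read_sensor():
    """Stand-in for a real device read; returns a single hypothetical value."""
    return 20.0 + (time.time() % 1)   # pseudo-varying temperature

buffer = [read_sensor() for _ in range(10)]        # collect at the source
summary = {                                        # analyse at the source
    "mean": statistics.mean(buffer),
    "max": max(buffer),
    "samples": len(buffer),
}
with open("edge_buffer.json", "w") as f:           # store locally so nothing is lost
    json.dump({"raw": buffer, "summary": summary}, f)

print(summary)                                     # only the summary need travel upstream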
  
The reduction of unplanned downtime is another driver for seeking a robust data scaffold. An unplanned outage is costly in any sector, whether that is retail stores being unable to take card payments or a critical machine in a manufacturing process failing to complete its task. Business owners should always look to address the causes of unplanned downtime, and the simplest way to do so is through the right Edge Computing platform.
  
Edge Computing is the platform that enables a robust data scaffold, providing simple, automated, and protected data collection at the source: data that can be trusted, is never lost, and feeds the continuous improvement of the enterprise. No matter the sector or operating environment, the secure, free flow of data is a fundamental step towards future-proofing operations and enabling company-wide efficiency improvements.

