Data in Manufacturing and the Hype Curve
One of the fundamental promises of Smart Manufacturing and Industry 4.0 is to bring to the Industrial Manufacturing Floor some of the advances in data management and analytics, including Machine Learning and Artificial Intelligence. The hope is that the new visibility and understanding gained from this data will improve production efficiency, flexibility, and product quality, and enable sensor-rich, collaborative automation.
Early experiments and deployments were driven by an enthusiastic embrace of the motto “Data is the New Oil” applied to Manufacturing. The promise was the revolutionary power of the Cloud, where all the data would be accumulated and analyzed, and where the magic of Artificial Intelligence and Machine Learning in the hands of Data Scientists would provide extraordinary insights.
This early approach provided some value, but it also proved costly. As the volume of data collected increased, it created management and security challenges. The expected insights did not fully materialize, and the work required to squeeze value out of this process proved significant. Important questions surrounding Data ownership remained without clear answers.
After the classic peak of inflated expectations, we are now in something like a trough of disillusionment, as exemplified by the meteoric rise and fall of GE Predix. The questioning and, often, even the rejection of the Cloud paradigm leave many unanswered questions surrounding data management, security, value, and ownership on the Industrial Floor.
There are however ample reasons for optimism: the future of data in Manufacturing is still bright!
We will move forward, climbing the slope of enlightenment towards the plateau of productivity, by fully understanding the key lessons learned over the past few years.
Data in Smart Manufacturing: Lessons Learned
Over the past four or five years, we have gained new insights into the value and complexities of data in Smart Manufacturing through an enriching but often painful experience of pilot and early deployment activities. The following list highlights the key lessons we learned:
- Edge data management and analytics play a crucial role, complementing and even replacing the role of the Cloud
Data management, data cleaning, data securing, data analytics, and even storage need to happen first on the Industrial Floor, as close as possible to the machines.
Ultimately, the traditional hierarchical organization of the Manufacturing Floor, defined by the classic Purdue model and by the Industrial Automation Pyramid, needs to map into a hierarchical data management and analytics functionality, going from the end points all the way to the Cloud.
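The hierarchical mapping described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function names, tiers, and readings are assumptions, not part of any real platform): each tier keeps raw data local and forwards only summaries upward, so detail stays near the machines and the Cloud sees aggregates.

```python
from statistics import mean

def edge_summarize(readings):
    """Edge tier, close to the machine: raw readings stay local;
    only an aggregate summary is forwarded upward."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def plant_rollup(edge_summaries):
    """Plant tier: combines edge summaries before anything
    reaches the Cloud tier above it."""
    total = sum(s["count"] for s in edge_summaries)
    weighted_mean = sum(s["mean"] * s["count"] for s in edge_summaries) / total
    return {"machines": len(edge_summaries), "count": total, "mean": weighted_mean}

# Raw vibration readings never leave their cell; summaries travel up.
cell_a = edge_summarize([0.9, 1.1, 1.0, 1.2])
cell_b = edge_summarize([2.0, 2.2])
site = plant_rollup([cell_a, cell_b])
print(site)
```

The same pattern repeats at each level of the pyramid: whatever a tier cannot or should not resolve locally, it summarizes and passes up.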
- People who know the manufacturing process need to be involved from Day 1
Without an intimate understanding of the process under consideration, it is difficult to extract value from the process data and its analysis. Artificial Intelligence and Machine Learning have not yet proven to be a magic wand. Wise selectivity about what to look at, and the application of simple but well understood models, can lead to impactful results. Vendors need to deliver simple-to-use tools and intuitive User Interfaces that appeal to floor operators.
- Data ownership has to be addressed right away
The Industrial floor brings together a large set of “players,” an ecosystem including the plant owners, system integrators, IT software providers, tool and machine builders, just to name a few.
As the infrastructure on the industrial floor moves towards consolidation, data extracted from the variety of end points converges on consolidated platforms. This process offers great potential for data exchange and interworking, with powerful consequences for efficiency and collaborative automation, but it also exposes the urgency of protecting data ownership and privacy.
Data ownership and privacy need to be addressed as close to the sources of data as possible. Edge platforms play a critical role here.
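One way an Edge platform can enforce ownership near the source is to tag every record with its owner and apply a sharing policy before anything leaves the plant. The sketch below is purely illustrative: the owner names and policy are invented assumptions, not a description of any specific product.

```python
# Illustrative policy: only one party's data may be exported to the Cloud.
# In practice this would come from negotiated agreements among the ecosystem.
OWNERS_ALLOWING_CLOUD_EXPORT = {"plant_owner"}

def tag(record, owner):
    """Attach ownership metadata to a record at the point of collection."""
    return {**record, "owner": owner}

def exportable(records):
    """Filter at the edge: only records whose owner permits export leave."""
    return [r for r in records if r["owner"] in OWNERS_ALLOWING_CLOUD_EXPORT]

records = [
    tag({"sensor": "spindle_temp", "value": 71.2}, "plant_owner"),
    tag({"sensor": "tool_wear", "value": 0.3}, "machine_builder"),
]
shared = exportable(records)
print(shared)
```

Because the filter runs on the Edge platform itself, data from a machine builder's tooling never reaches the consolidated Cloud store unless its owner has agreed to share it.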
- Use Cases: From Predictive Quality to Digital Twin-based Control
The early use cases leading to remarkable success address the problem of Predictive Quality. Data feeds simple models that predict process failures, which can then be prevented in real time. Progressively, process data will also feed local models based on AI and ML, as well as models describing the physical process, i.e., Digital Twins. The outputs of these models, running on Edge Computing platforms, will greatly contribute to more efficient and precise Process Control.
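A "simple but well understood model" for Predictive Quality can be as plain as a rolling-mean drift check on a process measurement. The sketch below is a hypothetical example (the window size, target, limits, and readings are all invented for illustration): it flags drift before parts go out of spec, which is exactly the kind of lightweight model an edge node can run in real time.

```python
from collections import deque

class DriftDetector:
    """Flags when the rolling mean of a measurement drifts from target.
    All parameters here are illustrative, not real process limits."""

    def __init__(self, window=5, warn_limit=0.5, target=10.0):
        self.readings = deque(maxlen=window)
        self.warn_limit = warn_limit
        self.target = target

    def update(self, value):
        """Ingest one reading; return True if the rolling mean has
        drifted beyond the warning limit."""
        self.readings.append(value)
        rolling_mean = sum(self.readings) / len(self.readings)
        return abs(rolling_mean - self.target) > self.warn_limit

detector = DriftDetector()
stream = [10.0, 10.1, 10.2, 10.4, 10.6, 11.3]  # slow upward drift
alarms = [detector.update(v) for v in stream]
print(alarms)
```

The alarm fires only once the trend is sustained, which is the point: a model this simple, chosen by someone who knows the process, often beats an opaque one applied blindly.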
- It will take time…
With the Industrial IoT, we are witnessing a tremendous technology acceleration, bringing together in an unprecedented way complete stacks, from connectivity, software defined networking, and data management, to artificial intelligence and advanced control of systems. This “technology tsunami” is overwhelming everyone who is part of the industrial automation ecosystem, which has moved at its own pace and has been rather isolated for decades.
It will take time for this enormous transition to settle and to be fully understood by all the relevant stakeholders. As they say: good things take time!
The Nebbiolo Edge Computing Infrastructure: A Complete Answer for Data in Manufacturing
Nebbiolo Technologies’ modern Fog Computing platform offers ideal support for data connectivity, data management, and data analytics, with local hosting of AI/ML models and Digital Twins that require high bandwidth, low latency, and even deterministic performance. This distributed infrastructure can host the hierarchical data management functionality that will power the Smart Manufacturing floors of the future.