
Three Questions You Need To Answer to Make the Best Use of Data in Industry 4.0


A key consequence of Industry 4.0 is data: lots and lots of it. For manufacturers accustomed to making decisions in data-starved environments, the deluge is daunting. Not only is there more data, but there are also entirely new datasets, many of which solve old problems and some of which illuminate new ones. It's not surprising, then, that most manufacturers simply don't know how to begin making the most of the information. Truthfully, for many, cultural resistance to change and the learning curve of adopting new ways of using data in real time are challenging enough to send them back to gut-instinct decisions, the kind that impact hundreds of millions of dollars in investment.


In spite of these hurdles, many manufacturers know that unlocking the insight buried in all this data has the power to drive true operational change. They're building business processes that get the right insights into the right hands for maximum benefit. Here, I offer a few thoughts on how best to navigate these tricky data swamps.


With Big Data Getting Bigger, Products, People and Processes Must Adapt

There are three major trends fueling the creation of new data:

  1. Sensors (including video).

  2. Data-driven computing (including deep learning).

  3. Infrastructure (the cloud, GPU computing, storage and bandwidth).


These technology trends are impacting lives, not just the plant floor. For example, consider the process of monitoring blood sugar levels. Fifty years ago, my grandmother was tested once a month by a doctor who came to her home with a bunch of reagents and a Bunsen burner. Twenty-five years ago, my dad pricked his finger several times a day to get an immediate, on-demand reading. Today, continuous glucose monitoring devices take a reading every five minutes, 24/7.


This journey parallels advances in manufacturing technology: Sensors have improved by leaps and bounds, we now have the infrastructure to process the raw signals they create, and with that data we can close loops faster. Most importantly, the insights are now interpretable by the average Jane, not just a trained doctor or engineer. And with shrinking decision times, an erroneous decision that once had serious consequences can now be quickly corrected.


Here's how this scenario plays out in manufacturing: Historically, a line supervisor measured the performance of a line with a stopwatch, over a finite window and with all the attendant limitations. Now, with Industry 4.0 data collection technology, they get continuous data for the entire shift and from many lines at once. This should translate into the ability to make better decisions more quickly.
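
To make this concrete, here is a minimal, hypothetical sketch in Python contrasting a stopwatch-style study with continuous cycle-time capture over a full shift. Every number, sample size and label below is an illustrative assumption, not data from any real line.

    # Hypothetical sketch: stopwatch study vs. continuous cycle-time capture.
    # The cycle times are synthetic; a real line would stream them from sensors
    # or a video-based system into a historian.
    import random
    import statistics

    random.seed(42)

    # Continuous capture: one cycle time (in seconds) per unit, for a full shift.
    shift_cycle_times = [random.gauss(52, 6) for _ in range(480)]

    # Stopwatch study: the supervisor times only the first 20 cycles.
    stopwatch_sample = shift_cycle_times[:20]

    def summarize(label, cycle_times):
        mean = statistics.mean(cycle_times)
        p95 = sorted(cycle_times)[int(0.95 * len(cycle_times)) - 1]
        print(f"{label}: n={len(cycle_times)}, mean={mean:.1f}s, 95th pct={p95:.1f}s")

    summarize("Stopwatch sample", stopwatch_sample)
    summarize("Full-shift capture", shift_cycle_times)

The full-shift summary captures variation that a 20-cycle snapshot simply cannot see, and it can be recomputed every hour, for every line, without anyone standing there with a stopwatch.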


But it's not as easy as it sounds, primarily because data in and of itself doesn't drive change. One study found that the "average factory generates 1TB of production data daily, but only 1% is analyzed and acted upon in real-time."


For that to change, the data has to be accessible, relevant and actionable. It has to be presented to users in the context of their roles, and users have to know how to interpret the data in the context of the decisions they need to make. The first step to making that possible is to design data consumption models that enable problem-solving by multiple stakeholders.
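
As an illustration only, here is a hypothetical sketch of what such a data consumption model could look like in Python. The roles, metrics and refresh cadences are assumptions chosen for the example, not a prescribed schema.

    # Hypothetical sketch of a role-based data consumption model: each role sees
    # only the metrics, refresh cadence and decision context relevant to it.
    from dataclasses import dataclass

    @dataclass
    class RoleView:
        role: str
        metrics: list    # metric names this role sees
        refresh: str     # how often the view updates
        decision: str    # the decision this view supports

    CONSUMPTION_MODEL = [
        RoleView("Line operator", ["cycle time vs. takt"], "real time",
                 "Is this station falling behind right now?"),
        RoleView("Line supervisor", ["throughput", "bottleneck station"], "hourly",
                 "Where should work be rebalanced this shift?"),
        RoleView("Plant manager", ["OEE", "scrap rate"], "daily",
                 "Which lines need attention or investment?"),
    ]

    for view in CONSUMPTION_MODEL:
        print(f"{view.role}: {', '.join(view.metrics)} ({view.refresh}) -> {view.decision}")

The point of the structure is not the code itself but the discipline it forces: every metric is tied to a role, a cadence and a decision before anyone builds a dashboard.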


Designing Data Models That Turn the Tide on the Deluge

When designing a data model and using it in the data-driven, scientific problem-solving process that is the bedrock of continuous process improvement, there are three key questions to ask. They ensure that the model presents information in a form that the people who need it can understand and act upon.


  • Can you clearly articulate the problem you're looking to solve? Crafting a clean problem statement is often half the solution.

  • What does the solution look like? Who are the actors? What metrics do they need, and how and when are those metrics presented to them? How is the raw data processed, and what analysis and visualization accompany it? What tools are required, and which systems need to be integrated to make this most effective? How does the process help manufacturers use statistical techniques such as principal component analysis and design of experiments to shrink the analysis and interpretation space into something easier to understand and work with? (See the sketch after this list.)

  • How do you measure success? Can you define measurement frameworks that fit into users' daily work? Not everyone needs to see or know everything, but everyone should be able to interpret what the data is telling them based on what their role requires them to know, understand and act upon.
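
As a small illustration of the second question, here is a minimal sketch, on synthetic data, of using principal component analysis to collapse dozens of correlated sensor channels into a few components that are easier to interpret. The channel counts and figures are assumptions for the example only.

    # Minimal PCA sketch on synthetic sensor data (requires numpy and scikit-learn).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical: 500 production cycles, 30 correlated sensor channels driven
    # by 3 underlying process factors plus noise.
    latent = rng.normal(size=(500, 3))
    mixing = rng.normal(size=(3, 30))
    readings = latent @ mixing + 0.1 * rng.normal(size=(500, 30))

    # Standardize the channels, then project onto the first three components.
    scaled = StandardScaler().fit_transform(readings)
    pca = PCA(n_components=3)
    components = pca.fit_transform(scaled)

    print("Share of variance explained by 3 components:",
          round(float(pca.explained_variance_ratio_.sum()), 3))

Three interpretable components are far easier to review on the floor than 30 raw sensor traces; design of experiments plays a similar role upstream by limiting which factors get varied in the first place.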


Anticipating and Addressing Resistance to Change

Not everyone is ready to embrace a data-driven world with open arms. A significant number of stakeholders may see the new approach as a threat, including:

  • People who are suddenly being measured.

  • Those who are no longer gatekeepers of exclusive knowledge.

  • Workers whose measurement skills are becoming obsolete.

  • Those whose good decisions now look bad with more data.

This is where building models that present information contextually helps curb anxiety, negativity and overall resistance to implementing data-driven decision-making in the factory. When the information is relevant and presented in a format that is easy to interpret, it poses less of a threat, and action becomes possible, whether that's a solution to a problem or an innovation that improves the process. Building data models that answer these three questions and can be used by every stakeholder in the operation is the first step on the journey to data-driven decision-making.


Dr. Prasad Akella is the founder and chairman of Drishti.

