Predictive Analytics: Blowing Smoke or Revolutionising Manufacturing

This series discusses the stark realisation that less than 10% of manufacturers are satisfied with their Industry 4.0 accomplishments.

Part one of this series criticised the ‘technology first’ approach to implementing digital solutions, and argued that the complexity of broad Industry 4.0 solutions has led to delayed or non-existent ROI. In part two, we discussed how unrealistic expectations of Industry 4.0 technology can lead to further dissatisfaction. Today, in part three, we discuss one of the most enticing Industry 4.0 prizes: predictive analytics.

Among the many attractions of Industry 4.0 is the ability to predict future events and take action ahead of time to improve efficiency. A simple example would be to use real production data and the typical downtime associated with a particular job to predict when it is most likely to complete, so logistics can be alerted to arrive just in time (JIT).
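The completion-time example above can be sketched in a few lines. This is a purely illustrative calculation, not an OFS algorithm; the function name, rates and downtime figure are all invented for the sketch.

```python
# Hypothetical sketch: predict job completion time from live production
# data, assuming the relationship between rate and downtime is already
# understood. All names and figures are illustrative.

def predict_completion_hours(units_remaining, units_per_hour, downtime_fraction):
    """Estimate remaining run time, inflated by typical downtime.

    downtime_fraction: historical share of scheduled time lost to
    stoppages and microstops for this job type (e.g. 0.12 = 12%).
    """
    if units_per_hour <= 0:
        raise ValueError("production rate must be positive")
    ideal_hours = units_remaining / units_per_hour
    # Scale up by expected downtime so logistics can be alerted just in time.
    return ideal_hours / (1.0 - downtime_fraction)

# 6,000 units left at 500 units/hour with 12% typical downtime:
eta = predict_completion_hours(6000, 500, 0.12)  # ~13.6 hours
```

The point is that the formula encodes a known causal relationship; the rest of this article is about what happens when no such relationship is known.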

For such simple cases, the path from the data to the prediction is quite clear, in part because the relationship between production rate, downtime, microstops and so on is already understood. But what happens when we don’t know the relationship between the event data and the objective?

The confluence of increasingly low-cost sensors, the IIoT and Big Data analytics is effectively commoditizing the efficient capture and analysis of large volumes of system data. The key issue for successful predictive technology implementations is now: can a purely data-driven approach (e.g. machine learning algorithms that identify trends and relationships in operational data, i.e. correlation models) provide an organisation with all of the information and context required to make the ‘right’ decisions about a future event? Effectively, the question becomes: how can the appropriate engineering, operational and economic considerations (the ‘rationale’) for recommended actions during operations be concurrently and consistently incorporated into an iterative decision process?

The problem inherent in a purely data-driven approach is illustrated by the story behind the pictured device. There are several variations of this story, but it loosely goes as follows: a woman had fallen into the River Thames and, unconscious, was close to drowning. Her husband, frantically looking for a way to revive her, took a suggestion to apply a tobacco enema. This was not such a strange idea: back in 1746, tobacco was thought to have legitimate healing powers.

So at this point the husband took a pipe filled with burning tobacco, inserted the stem in the appropriate place for an enema, covered the other end of the pipe with his mouth and blew. Quite accidentally, the hot embers of tobacco being blown up her rectum startled her and she was, indeed, revived, although the cause was completely misunderstood.

From this came a treatment model based on the data, which said: “if the victim is drowning, then administer smoke in the form of an enema”.

This practice spread quickly, peaking in the early 19th century. Courtesy of the Royal Humane Society, the equipment used in this procedure was redesigned to deliver smoke without hot embers, and was hung alongside major waterways such as the River Thames. People who frequented waterways were expected to know the location and use of this equipment, much as people today are expected to know the location and use of defibrillators.

Had they had a better understanding of the causation of recovery, they would have discarded this data point as interesting, but not entirely useful.

A brief search will uncover a myriad of examples of this kind, thanks to the work of Tyler Vigen, where strong correlations can be found between two entirely separate events, leading to ludicrous conclusions. These are correlations without causation.

Other Tyler Vigen examples include:

- The number of people who drowned by falling into a pool correlates with the number of films Nicolas Cage appeared in.

- Total revenue generated by arcades correlates with computer science doctorates awarded in the US.
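The mechanism behind these examples is easy to demonstrate: any two quantities that both trend in the same direction over the same period will show a high correlation coefficient, causal link or not. The yearly figures below are invented purely for illustration (they are not Vigen’s actual data).

```python
# Toy demonstration of correlation without causation: two unrelated
# quantities that both happen to trend upward over the same years
# produce a strong Pearson correlation. Data below is invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented yearly figures: pool drownings vs. films released.
drownings = [98, 102, 109, 111, 120, 123, 131]
films = [2, 2, 3, 3, 4, 4, 5]
r = pearson(drownings, films)  # close to 1, despite no causal link
```

A model trained on nothing but this data would happily ‘predict’ drownings from film releases, which is exactly the tobacco-enema trap in statistical form.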

We can consider how this might affect manufacturing by asking the following question: how might we use data to provide warning that a failure event is likely, in order to:

- Only perform maintenance when needed; or

- Prepare for a breakdown ahead of time?

This is the basis for Condition Based Maintenance (CBM), an easy concept to understand: do only the right maintenance at the right time, based on the condition of the asset, to optimize the availability, safety and cost of the system. Historically, however, CBM has not been as easy to implement successfully: the complexity of real engineering systems, and the diversity of their usage profiles and operating environments, make establishing causation difficult.

Where traditional approaches to CBM* and Failure Reporting, Analysis, and Corrective Action Systems (FRACAS) have been successful, it is in applications where causation is known. This usually means very specific failures that are detectable by monitoring relatively simple parameters, which limits its application to individual components under well-defined circumstances. For example, monitoring bearing lubricating oil for traces of copper to detect bearing wear, or monitoring for abnormal states in motors by capturing vibration/acoustic signatures. [*Note: Extending CBM to CBM+ begins to incorporate some predictive technology, but it also assumes that causality is known.]
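The single-parameter case described above is simple enough to sketch. This is a generic baseline-plus-threshold monitor, not a specific FRACAS or CBM product; the copper concentrations and the three-sigma threshold are invented for illustration.

```python
# Minimal sketch of single-parameter condition monitoring: flag likely
# bearing wear when the copper concentration in the lubricating oil
# drifts well above its historical baseline. All figures are invented.

def wear_alert(copper_ppm_history, latest_ppm, sigmas=3.0):
    """Return True when the latest reading exceeds mean + sigmas * stddev."""
    n = len(copper_ppm_history)
    mean = sum(copper_ppm_history) / n
    var = sum((x - mean) ** 2 for x in copper_ppm_history) / n
    threshold = mean + sigmas * var ** 0.5
    return latest_ppm > threshold

baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
wear_alert(baseline, 5.1)  # False: within normal scatter
wear_alert(baseline, 9.5)  # True: likely bearing wear
```

This works precisely because the causal chain (bearing wear sheds copper into the oil) is already known; the difficulty discussed next is scaling beyond such well-understood cases.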

Using pure data analytics to move beyond a single, predetermined failure mode to a more generalised approach brings with it an exponential increase in complexity. More failure modes require more data, but what data, and what are the root causes of the failures indicated by a vast array of sensors? Ignoring these questions and applying machine learning and data analytics to as much data as we can will likely lead to the false correlations discussed above, and to difficulty in identifying the root cause of a failure and thus the corrective action. That said, the approach is worth pursuing to determine whether a broader solution is possible, even if a general solution is not.

On the other hand, increasing the complexity and accuracy of systems that directly model failure and causation from a mathematical description of the underlying asset’s physics may make those systems difficult to apply, and full fidelity may be unattainable.

So, for a more general solution in manufacturing, we need an approach that starts with knowledge of causation and uses it to inform the analytics. Such a hybrid solution uses physics to build a model based on robust prior knowledge of the asset, then allows the data to fine-tune the model over time so that it characterizes the specific asset in question ever more accurately. In some cases, the appropriate engineering, operational and economic considerations have already been analysed by the equipment OEM’s Reliability, Availability, Maintainability (RAMS) function, using physics-of-failure and design-for-reliability methods during the design process: how, why and how often will failures occur? How will any component failure(s) affect the behaviour of the system? How should a failure be safely and cost-effectively mitigated? Collectively, this knowledge can be captured in a causation model for the system, based on the physics of failure applicable to its design.

Typically, RAMS knowledge about a system is manually authored and aggregated, documented in design artefacts such as Functional Hazard Analysis (FHA) and Failure Modes and Effects Analysis (FMEA), and used to establish maintenance strategies with Reliability Centered Maintenance (RCM). However, it is not directly integrated with the diagnostics discipline or function. This integration only becomes possible if the RAMS analysis is model-based, such as is offered in MADe™.

The starting point is a failure model of the equipment in question, defining all of the failure modes of the system. There will be many, and economics can be used to assess which are most important. The model identifies the minimum sensor coverage required to capture those failure modes and to link downstream failures to their upstream causes. Finally, it can deliver a theoretical reliability analysis.
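One part of that process, choosing the minimum sensor coverage, can be illustrated as a set-cover problem over a failure model. This toy greedy sketch is an assumption about how such a selection might be framed; the failure modes and sensors are invented, and a tool like MADe performs far richer analysis than this.

```python
# Hedged sketch: given a failure model mapping each failure mode to the
# sensors able to observe it, greedily pick a small sensor set that
# covers every mode. Modes and sensors below are invented examples.

detectable_by = {
    "bearing_wear": {"vibration", "oil_debris"},
    "winding_fault": {"current", "temperature"},
    "misalignment": {"vibration"},
    "overheating": {"temperature"},
}

def minimum_sensor_cover(detectable_by):
    uncovered = set(detectable_by)
    chosen = []
    all_sensors = {s for sensors in detectable_by.values() for s in sensors}
    while uncovered:
        # Pick the sensor that covers the most remaining failure modes.
        best = max(
            all_sensors,
            key=lambda s: sum(1 for m in uncovered if s in detectable_by[m]),
        )
        chosen.append(best)
        uncovered -= {m for m in uncovered if best in detectable_by[m]}
    return chosen

sensors = minimum_sensor_cover(detectable_by)  # vibration + temperature suffice
```

For this toy model, two sensors cover all four failure modes; the oil-debris and current sensors are redundant for pure coverage, though economics or diagnostic resolution might still justify them.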

The data improves the theoretical model in two ways. Firstly, the parameters input to the reliability model can be refined by feeding back real failure events and sensor readings. Secondly, the model can be used to correctly categorize and identify the root cause of failures, helping to direct the training of a machine learning model and ensuring that the correlations it identifies are tied to the root causes of the failure.
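The first feedback loop can be made concrete with a standard Bayesian update. Under the assumption of exponentially distributed failure times, a Gamma prior on the failure rate is conjugate, so refining the physics-based estimate with observed events is a one-line calculation. The numbers are invented, and real reliability models are considerably more involved than this sketch.

```python
# Illustrative sketch: refine a theoretical failure rate with observed
# failure events. Assuming exponential failure times, a Gamma prior on
# the rate gives a closed-form posterior. The prior stands in for the
# physics-based RAMS estimate; all figures are invented.

def refine_failure_rate(prior_failures, prior_hours, observed_failures, observed_hours):
    """Posterior mean failure rate (failures/hour) under a
    Gamma(prior_failures, prior_hours) prior and exponential data."""
    return (prior_failures + observed_failures) / (prior_hours + observed_hours)

# Design analysis predicted ~2 failures per 10,000 h; the fleet then
# logged 5 failures in 8,000 h, pulling the estimate upward.
rate = refine_failure_rate(2, 10_000, 5, 8_000)  # ~3.9e-4 failures/h
```

The theoretical estimate anchors the result when field data is scarce, and the field data dominates as operating hours accumulate, which is exactly the fine-tuning behaviour the hybrid approach calls for.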

At OFS, we have collaborative projects with partners looking at different approaches to predictive maintenance. We will keep you informed as we learn more.


To learn more about OFS and how you can take your production facilities into Industry 4.0, submit your interest here or contact the sales team at

The Author

Roland Thomas has spent more than 35 years providing software solutions for faster, better, cheaper manufacturing. Much of this time was focused on simulating injection moulding with a company called Moldflow, which, in Industry 4.0 “speak”, developed facets of the “digital twin”. Beyond OFS and Moldflow, Roland works with PHM Technology, which specialises in model-based failure and reliability analysis of engineering systems, a prerequisite for condition-based maintenance.