Creating a Predictive Maintenance solution in the world of pipelines
Rotterdam hosts the largest refinery in Europe and therefore also one of the biggest in the world. On the 550-hectare site (the size of 1,000 football pitches) there are around sixty factories in which oil is processed in all sorts of ways. In addition to the factories, there is an immense network of pipelines to transport the oil: 160,000 kilometres in total, enough pipe to span the globe four times. Regularly checking it for rust and wear, among other things, is a colossal challenge. However, this will soon become a lot easier.
For a ‘solution’ you’ll have to head north. To Amsterdam, to be precise. At a huge campus, teams work day and night on research and development, including predictive maintenance using AI and machine learning. As a result, maintaining the many thousands of kilometres of pipeline is becoming easier, cheaper and safer.
The best way to achieve that is by predicting maintenance needs using big data. The more data, and the better its quality, the greater the predictive power of the underlying algorithm and the more reliable the result.
Data engineers Anis Boudih and William Geuns from LINKIT have spent the last six months working on various predictive maintenance projects. Due to the growing interest in AI and machine learning, there is a constant need for extra talent, which is also the basis for the collaboration with LINKIT. After this first period, they are happy to share something about their remarkable work.
Pipeline traffic light
Anis is the first to talk about the project he is working on: a near real-time dashboard that provides field engineers with useful information based on more than 200 sensors and a statistical model. “We have developed a kind of traffic-light dashboard that displays the current situation based on the continuous flow of data. If the process goes beyond certain limits, an email is immediately sent to the engineers, signalling that someone must go on site for inspection.”
Data tracking might sound like a simple process, but as Anis explains, it’s not so straightforward. “Firstly, new data is constantly arriving and needs to be processed as quickly as possible. In addition, different processes run side by side in the background: a machine learning model that makes a ‘prediction’ based on the latest data, the database where part of our data is stored, and the backend of our dashboard. As a result, developing this product is complex, and it is sometimes difficult to discover where something went wrong. In the next phase of the project we want to make everything even better, faster and more accurate, so that the field engineers can take action sooner.”
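The traffic-light idea Anis describes can be sketched in a few lines. This is a hypothetical illustration only: the sensor names, limits and alerting hook are invented, not taken from the actual project.

```python
# Illustrative sketch of traffic-light monitoring: each reading is classified
# as green, amber or red against configured limits, and red readings trigger
# a notification (in the real project, an email to the field engineers).

from dataclasses import dataclass


@dataclass
class Limits:
    warn: float   # above this -> amber
    alarm: float  # above this -> red, notify an engineer


# Example limits per sensor (invented values, not real plant data)
LIMITS = {
    "pressure_psi": Limits(warn=900.0, alarm=1000.0),
    "temp_celsius": Limits(warn=120.0, alarm=150.0),
}


def classify(sensor: str, value: float) -> str:
    """Map a single reading to a traffic-light colour."""
    limits = LIMITS[sensor]
    if value > limits.alarm:
        return "red"
    if value > limits.warn:
        return "amber"
    return "green"


def process_reading(sensor: str, value: float, notify) -> str:
    """Classify a reading; call `notify` (e.g. an email hook) when it is red."""
    colour = classify(sensor, value)
    if colour == "red":
        notify(f"{sensor} at {value} exceeds alarm limit")
    return colour
```

In the real system this check runs continuously against the stream of sensor data, alongside the statistical model mentioned above; the sketch only shows the limit-checking step.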
William is working on a very different project, one that requires an introduction. “For as long as pipelines have existed, inspectors have routinely walked along them to check for rust, for example, and to check whether the valves still work well enough. This is very labour-intensive, and because of the enormous length of pipeline, constant monitoring is a huge challenge. What we already have is an algorithm that can independently assess photos an inspector takes of a section of pipeline. It detects where rust occurs and flags common deviations. For the inspector, it works as a safety net in case he overlooks something himself.”
This algorithm was the starting point for a much larger project that William is currently working on. “By taking a lot of photos with drones and having them automatically assessed by an algorithm, we hope to map complete areas and be able to assess them continuously. That is of course a huge job. They are certainly not ordinary photos, either. The idea is to map them onto a virtual 3D model of Moerdijk or Pernis, so we can see very precisely where risks arise.”
Tag a hundred thousand photos
But once you have all the photos, you still haven’t nearly solved the problem. For the algorithm to work properly, it must first be fed with the correct data: in this case, the existing photos taken in the past. “We now have more than one hundred thousand photos that all have to be tagged.” A monster job in which people manually indicate, for each photo, where the defect is. In this way you ‘feed’ the algorithm with more and better data, making it ever smarter and better able to detect defects on its own. “The algorithm has now reached the point where it can recognize the material in a photo, assess it and display it in multiple layers. The next phase is actually testing.”
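The tagging step described above turns each photo into a labelled training example, and part of the labelled set is typically held back for evaluation. The sketch below is a minimal, hypothetical illustration of that bookkeeping; the record layout, field names and split fraction are assumptions, not the project’s actual format.

```python
# Illustrative sketch of turning manual tags into training data: each tagged
# photo becomes a record (path plus defect boxes), and a deterministic split
# holds out a fraction of the records for evaluating the trained model.

import random


def make_record(photo_path, boxes):
    """One tagged photo: path plus a list of (x, y, w, h, defect_type) boxes
    that a human annotator drew around rust or other deviations."""
    return {
        "photo": photo_path,
        "boxes": [
            {"x": x, "y": y, "w": w, "h": h, "defect": d}
            for (x, y, w, h, d) in boxes
        ],
    }


def split_dataset(records, test_fraction=0.2, seed=42):
    """Shuffle deterministically and hold out a fraction for evaluation.

    Returns (train, test); the seed makes the split reproducible."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]
```

The training set ‘feeds’ the detection model, while the held-out set is what the “actually testing” phase would measure against.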
Learning how to ‘drive’
“You can reasonably compare that phase with how Tesla teaches its cars to drive independently,” William explains. “In the beginning it makes all kinds of mistakes and does not ‘see’ things. But by correcting the errors in the data and, above all, by collecting a lot of data, you continue to work towards the point at which the car can truly take to the road independently. The same applies to us. Hopefully at some point our algorithm will become so good that it can independently detect possible integrity problems at sites. That would be a huge step forward.”
Steep learning curve
For William himself, it is an exciting project, especially since he has only been in the field for six months. He attended the LINKIT boot camp in 2019, after which he started working as a data engineer. “In six months my knowledge of data engineering has increased enormously. A very steep learning curve in backend development, Python, cloud (Azure and AWS), Docker, Scrum, you name it! I am still learning every day, and as this project enters new phases, I am faced with new, instructive challenges every time. An ideal start to my career as a data engineer!”
Do you want to know more about the ‘how’ of implementing AI and machine learning in an industrial environment? You can read all about it in our whitepaper ‘From unstructured data to business value for manufacturing’. Or you can ask one of our experts for more information.