Featured image credits: Laila Sømme / © Equinor
The usual way of implementing machine learning and AI in an organisation starts with proofs of concept and pilot projects for specific use cases. But this approach often falls short of delivering enough ROI to justify the cost to the business.
This is why Equinor, a leading Norwegian energy company, took a different approach to artificial intelligence and machine learning. They were looking for a way to introduce this cutting-edge technology as a competitive advantage and align it with their corporate strategy.
Dr. Ahmed Khamassi, Vice President Data Science at Equinor, outlined Equinor’s process of creating machine learning products that deliver both value and innovation at scale at the Data Innovation Summit 2019. And more importantly, Ahmed presented an approach that has worked for them and which busts the myth of ROI traditionally applied to ML.
How Equinor started with AI and ML
Knowing that the industry was changing at a rapid pace and that most processes were well on the way to becoming automated and robotised, Ahmed relates that Equinor decided to rely on machine learning to stay ahead in the market. To do that, they needed three things in place:
- Proprietary data
- Knowledge about the generation, usage and meaning of the data
- Changes in the business processes while embedding machine learning.
Digitalisation as a catalyst
Equinor has gone through a transformation from an oil & gas company to an energy company, in which digitalisation and AI/ML have a central role. One of the three objectives in their transformation, besides safety and a low carbon footprint, is high-value investments. Digital technologies play a central role in their goals to achieve 2 billion in value from production, automate drilling and reduce cost, explains Ahmed.
From an ML/AI perspective, these goals boil down to four main areas:
- Focusing on knowledge – Equinor’s Knowledge AI team specialises in NLU, with a primary target of reducing the incident rate by gathering knowledge and informing people who work in high-risk environments.
- Changing behaviour – Transforming the organisation into a data-driven business where people make decisions based on data. The main focus is on safety through predicting and preventing incidents.
- Machine data – Equinor has a tremendous amount of sensor data which they use for equipment monitoring and optimisation algorithms.
- Reduction in capital investment through autonomy – Developing autonomous systems with computer vision, deep learning and reinforcement learning.
For each of these four areas, Equinor has appointed a dedicated team. But they soon came across the question of how to deliver both value and innovation at scale by applying machine learning.
Deploying machine learning at scale at Equinor
Ahmed states that when they started thinking about the solution, they had one main principle in consideration – that of ROI fallacy in machine learning.
To explain what the ROI fallacy means, Ahmed points to traditional organisations that implement machine learning by delivering one model at a time, in a linear case-by-case fashion. Considering the time and effort needed to understand the problem, get the data, clean it and try different algorithms, the value of each produced model is questionable. The return on investment from building machine learning models case by case is meagre.
But Equinor went for a different approach. They wanted to solve classes of problems with machine learning and apply the models multiple times by applying nonlinear ε-cost scaling.
ε-cost scaling refers to solving a problem once going through the whole ML modelling process and deploying the solution several times with a click of a button, which generates a far higher value than the linear case by case model production.
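The contrast between the two cost curves can be made concrete with a small sketch. The numbers below are made up for illustration (not Equinor figures): under the linear approach every model costs the same to build, while under ε-cost scaling a one-off platform investment makes each additional deployment cost only a small ε.

```python
# Hypothetical cost comparison: linear case-by-case modelling vs a
# platform where each additional deployment costs only a small epsilon.

def linear_cost(n_models: int, cost_per_model: float) -> float:
    """Every model is built from scratch: cost grows linearly."""
    return n_models * cost_per_model

def platform_cost(n_models: int, platform_once: float, epsilon: float) -> float:
    """Solve the class of problems once, then redeploy at near-zero cost."""
    return platform_once + n_models * epsilon

# Illustrative, made-up effort "units":
print(linear_cost(100, 50))        # 100 models built one by one -> 5000
print(platform_cost(100, 500, 1))  # one platform + 100 deployments -> 600
```

The crossover point depends on the actual numbers, but the shape of the argument is the same: past a handful of models, the platform wins.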
A good analogy Ahmed gives for their machine learning scaling is car production. Car companies build only a few chassis and create a myriad of car models on top of them, which allows them to scale. Similarly, Equinor builds an ML platform, a general pipeline and a technology infrastructure that enables business teams to build several models and deploy them easily.
“We think about machine learning, not in terms of producing models, but producing a platform that generates lots of models,” explains Ahmed.
A first-hand example with deploying ML
As we mentioned previously, machine data, i.e. sensor data, is one of the key investment areas of Equinor, as their equipment generates tons of data. Machines are the backbone of Equinor’s business, and if the machines break down or deteriorate, they lose production. They monitor them using 1.5 million useful sensors.
But with about a hundred turbines and thousands of compressors and pumps, it’s impossible to build one ML model for each machine to analyse sensor data and detect any deterioration or failure that may stop the entire production.
Equinor needed a process that is fast, scalable and applicable to any machine that produces sensor data. However, these requirements come with constraints or costs that Ahmed refers to as design constraints.
From an AI/ML perspective, the models need to fulfil three conditions:
- Models must automatically find correlations between and within sensors (e.g. the faster a turbine moves, the hotter it becomes)
- Models must be variance insensitive, i.e., if one sensor breaks on a machine, the model still works.
- Sensor data are signals.
Taking these principles into account, Equinor’s machine learning process starts with data ingestion and pre-processing, moves on to signal spectrum processing, and continues with running the signals through autoencoders – deep learning algorithms that capture the relations within the data. The output indicates whether any anomaly has been detected.
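The reconstruction-based idea behind this pipeline can be sketched in a few lines. The example below is a simplified stand-in, not Equinor's system: it uses synthetic data for two correlated "sensors" (speed and temperature, echoing the turbine example above) and a linear one-dimensional bottleneck via SVD in place of a deep autoencoder. Samples that reconstruct poorly are flagged as anomalies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor" data: two correlated channels (the faster the
# turbine, the hotter it gets), plus one injected anomalous reading.
speed = rng.normal(1000, 50, size=200)
temp = 0.05 * speed + rng.normal(0, 1, size=200)
X = np.column_stack([speed, temp])
X[-1] = [1000, 120]  # anomaly: temperature far off the learned trend

# Standardise, then learn a 1-D linear bottleneck (a minimal stand-in
# for an autoencoder): encode onto the first principal direction and
# decode back. High reconstruction error = the correlation is broken.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
code = Xs @ Vt[0]              # encode to 1 dimension
recon = np.outer(code, Vt[0])  # decode back to 2 dimensions
error = ((Xs - recon) ** 2).sum(axis=1)

threshold = np.percentile(error, 99)
anomalies = np.where(error > threshold)[0]
print(anomalies)  # the injected sample (index 199) stands out
```

Because the model learns the relation between sensors rather than each sensor's absolute level, the anomalous reading is caught even though both of its raw values are individually plausible.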
Apart from the AI/ML conditions, there are also UX technical conditions that allow for building a lot of models with a click of a button:
- Model building must be simple and flexible
- Model building must be fast, robust and easily reversible
- Model deployment must be automated.
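One way to picture "building a lot of models with a click of a button" is config-driven model building: a shared pipeline applied to a list of machine configurations, so onboarding a new machine is a config entry rather than a new modelling project. The sketch below is hypothetical (the machine names, fields and `build_model` helper are made up for illustration, not Equinor's API).

```python
from dataclasses import dataclass, field

@dataclass
class MachineConfig:
    """Everything the shared pipeline needs to know about one machine."""
    name: str
    sensor_tags: list
    model_type: str = "autoencoder"

def build_model(config: MachineConfig) -> dict:
    """Stand-in for the shared pipeline: ingest, preprocess, train,
    register. Returns a deployable model descriptor."""
    return {
        "machine": config.name,
        "model": config.model_type,
        "inputs": config.sensor_tags,
        "status": "deployed",
    }

# Illustrative fleet (identifiers are invented):
fleet = [
    MachineConfig("turbine-01", ["speed", "temp"]),
    MachineConfig("pump-17", ["flow", "vibration", "temp"]),
]
models = [build_model(cfg) for cfg in fleet]
print(len(models))  # → 2
```

The point of the pattern is that the per-machine marginal cost is a few lines of configuration; the pipeline itself is written once.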
The final product?
Equinor uses a technology stack designed to deliver scale and products rapidly, reducing time to value. The stack is composed of automated microservices. It also includes a DevOps process that supports automation and technology that helps run things at scale.
Key attributes of the tech stack:
- Code in development works in production
- Everything is version controlled
- Everything is built by deploying small microservices woven together flexibly
- Everything is cloud-native
The Data Innovation Summit has gone 100% Online and become a Global event!
You can now join the summit from the comfort of your home or office and enjoy the unparalleled content shared through the program. The entire program will be streamed LIVE through the event platform Agorify from the 18th to the 21st of August 2020.
Register on the link below to get your online ticket and listen to more than 300 sessions delivered by the leading data-driven companies in the world!