Delving into the true purpose of DataOps: Interview with Andrea Piro

As data grew to become the lifeblood that sustains the modern organisation, so did the need for people and resources to manage it and derive value from it. But migrating to the cloud or adopting a fancy IT solution doesn’t automatically make a company data-driven. Enterprise AI and algorithmic processes require perfect synchronisation and agility. This is where DataOps comes into the picture – as a single point of contact when something goes out of sync, or as a vessel engineer who makes sure data is delivered to its users, as Andrea Piro, DataOps Manager at A.P. Moller – Maersk, interprets it.

Andrea Piro shared his views on the role of DataOps in the modern data-driven organisation, as a preface to his Data 2030 Summit session.

Hyperight: Hello Andrea, we are super thrilled that you are joining us at the 5th edition of the Data 2030 Summit. As this is your first time with us, please tell us a bit about yourself and your background.

Andrea Piro, DataOps Manager at A.P. Moller – Maersk

Andrea Piro: Thanks for the invitation to such a quality event. I am honoured and happy to contribute to the community discussion.

I joined Maersk in November 2016, coming from the pharmaceutical industry. I have spent my career almost exclusively in Europe, focusing on enterprise IT for almost 20 years now. I am married and the father of Eugenia. Two dogs, one of which was rescued, are also part of the family.

I have worked on Euro migrations, been engaged in exuberant early Business Intelligence enterprise projects and provided services in Digital Forensics. I hold a master’s degree in law and am an expert witness. Most importantly, I am glad to have the opportunity to work in a state-of-the-art IT setup.

Photo by Austin Distel on Unsplash

Hyperight: Your Data 2030 Summit session will focus on the evolution of data processing. As you state, there is a notion that automation, integration and decisions can run unmanned. So, to pose your own question back to you: Is DataOps just another step toward full machine autonomy?

Andrea Piro: Data is the new oil. How many times have we heard this quote? Let’s extend this simplified view. Enterprise operations feed the oil field, data engineers handle the upstream work, and in turn data scientists do the downstream art, generating energy to boost data-driven decisions. This paradigm holds for all shades of BI, Big Data, AI and ML.

Quality, reusability and scalability depend on the talents you have in the team, guidance and vision from senior leaders.

Once upon a time, there was the all-purpose department called “IT”. Modern organisations have far more technology-savvy people, communication is easier, and the gap between business and IT has shrunk impressively. The siloed way of working is over, so the chain runs in near-perfect sync, smoothly and with agility.

In this virtual oil pipeline, DataOps teams cover different roles. They can be a partner, a sentinel or an advisor. For sure, DataOps appears as a single point of contact when something goes out of sync or when a quick answer or reaction is needed. DataOps is the team with the best observation point: it can sense when something is prone to hiccups, and it reassures you as a data consumer when an incident occurs. By keeping a readily usable memory of development history, it is also aware of periodic critical needs. I am sure many of us have worked on period closing.

In a word, DataOps is a cross-functional team.

Photo by LinkedIn Sales Navigator on Unsplash

Hyperight: DataOps is the data management for the AI era. It reflects a collaborative data management practice focused on improving communication, integration and automation of data flows between data managers and data consumers across an organisation. Why does every company striving to be AI-driven need to implement DataOps?

Andrea Piro: The point is that we live in a technological era. Self-healing systems, real-time data flows such as IoT metrics – everything sounds measurable and under control. This is true and, at the same time, only partially true. Every organisation can build an AI solution; imagine how easy and affordable it is to spin up a virtual machine in the cloud.

Building and running are easy, but how robust and valuable is your solution? Automation by itself does not boost your business. I strongly believe that return-on-investment metrics – quality, reusability and scalability, to name a few – depend on the talents you have in the team, and on guidance and vision from senior leaders.

Migrating to the cloud or running a fancy IT product doesn’t really pay off if you don’t have a vision. And the machines probably won’t uproot us as long as human contribution and sensitivity remain key variables in the algorithms. So, to answer your question and slightly anticipate my session content, I think we are still far from full machine autonomy. I will be exploring this aspect further during the session.

Hyperight: What are the conditions that an organisation should fulfil, or considerations to uphold, in order to be able to successfully implement DataOps?

Andrea Piro: DataOps fits and performs in a quality-shaped IT organisation. In other words, it is one gear in a much more complex mechanism. Data must reach consumers on time and in good condition, much like a shipping container.

Containerised goods normally travel through different means of transportation – truck, rail, ship, air – and pass checkpoints (e.g. customs).

The perfect execution of one single segment doesn’t mean you performed well. Only a planned and robust end-to-end journey ensures success. This is true for physical goods as well as for data. DataOps is part of the journey. Senior leaders plan the route and the rules. DataOps can be seen as the vessel engineer, the mechanic or the truck driver. Collaboration does the rest. The term “collaboration” has to be read in a mature and modern sense: a dotted, invisible, unrequested collaboration – this is the trust blockchain. I don’t want to reveal too much; just make sure to book a spot for the February session.

Photo by Canva Studio from Pexels

Hyperight: How does DataOps help accelerate AI and ML operationalisation?

Andrea Piro: AI, ML and the data lake are all specialisations of the general data processing concept. DataOps provides stability in terms of data availability and knowledge of data completeness.

There are several differences between a classic Business Intelligence solution and AI and ML. Data preparation, training and computational power are more recurrent concerns in AI and ML projects and operations. The DataOps eye is seasoned: it can shorten teams’ ramp-up on platforms and reduce handovers, in general giving engineering teams more time to focus on the building phase.

Hyperight: Besides DataOps, there are other offshoots that have arisen from the DevOps methodology, like MLOps and AIOps. How can organisations choose the right workflow for them?

Andrea Piro: IT is like a library: for a human being, literature is endless, and you can’t read all the texts. The same goes for IT – you can’t install and use every tool. Choices must be made: which ERP to adopt, whether to develop an AI model, which data to collect. The Ops suffix is indeed very recurrent, appearing in many combinations. All of these choices have to be weighed against the industry sector, so this is really a strategic matter, and the reason why we all look to C-level leaders, virtually asking: where shall we go? I think our leaders also look to our feedback to identify patterns; a field-force colleague might provide critical insights.

Zooming out, you’re asking for the secret formula for the ultimate IT solution. I will contribute to the community discussion, sharing my experiences and thoughts. Artificial Intelligence, like IT itself, is 70 years old, and some bad habits, like reading the whole database, are still there. Both are the result of human activity, just like equilibrium.

Free up your agenda and attend – I am doing the same, so I can catch as many sessions from my fellow speakers as possible.

Hear the latest methodologies, strategies and tools used by organisations, discussed by the brightest minds in the Data Management community. The Data 2030 Summit focuses on the fundamental pillars of a modern data management strategy – Data Governance & Data Quality, cloud- or multi-cloud-enabled infrastructure and DataOps (plus Master Data Management) – wrapped in an extensive three-day programme.

Featured photo by Matthew Henry from Burst 

Chief Editor at Hyperight Read. Reading and writing are her passion. Claims typing on a keyboard calms her down. Enthusiastic about Data Science, AI, Machine Learning and all things digital.
