The percentage of companies investing in Big Data and AI initiatives has nearly doubled in two years, climbing from just 39.7% in 2018 to 64.8% in 2020, Forbes finds. Yet, for all the stock placed in the disruptive technology of AI, only 14.6% of firms report that they have deployed AI capabilities into widespread production.
Adding to this, a survey by 451 Research finds that as many as 68% of respondents are either already using machine learning or plan to within the next three years. The same survey also indicates that one of the main barriers to implementing AI and ML is accessing and preparing data (cited by 33% of respondents).
As these figures show, business appetite for applying AI is on an upward curve, yet the implementation and deployment of AI projects remains the biggest hurdle for companies.
A successful AI implementation depends on several factors: access to a large body of data, appropriate algorithms, a skilled team of data scientists, and suitable compute resources that support a robust data infrastructure, as well as data management and database software for high-performance data processing and analytics, the study highlights.
Data management a vital enabler of AI implementation
Data management is a critical driver and enabler of AI and machine learning because it helps companies with the data ingestion and preparation stage of the AI pipeline, which 39% of respondents assess as the most demanding in relation to their underlying infrastructure.
Successful implementation of AI and ML requires a constant, high-quality and reliable supply of data. It is worth repeating the adage that artificial intelligence is only as smart as the data used to train it. This is where data management plays a central role, ensuring AI is trained on high-quality data.
For AI systems to provide first-rate performance and make informed decisions, they need good data, properly conditioned and placed in the right context. Organisations' efforts for AI deployment must be rooted in a sound and comprehensive data management strategy capable of addressing any gaps or weaknesses in data governance, quality, cleansing, cataloguing, security or metadata management that surface during the implementation of AI projects.
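To make the idea of catching quality gaps before they reach an AI pipeline concrete, here is a minimal sketch of an automated data-quality check. The field names, record shapes and rules are hypothetical examples, not taken from the study; a real data management layer would run far richer validations.

```python
# Minimal sketch of automated data-quality checks that a data management
# layer might run before records reach an AI training pipeline.
# Field names and the sample records are hypothetical.

def quality_report(records, required_fields):
    """Flag records with missing or empty required fields, and duplicates."""
    report = {"missing": [], "duplicates": [], "clean": []}
    seen = set()
    for i, rec in enumerate(records):
        if any(rec.get(f) in (None, "") for f in required_fields):
            report["missing"].append(i)
            continue
        key = tuple(rec[f] for f in required_fields)
        if key in seen:
            report["duplicates"].append(i)
            continue
        seen.add(key)
        report["clean"].append(i)
    return report

customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},               # empty value -> flagged as missing
    {"id": 1, "email": "a@example.com"},  # repeat of record 0 -> duplicate
]
print(quality_report(customers, ["id", "email"]))
```

Running such checks routinely is one way a data management strategy surfaces the governance and quality weaknesses described above before they undermine model training.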
As data is the building material of any AI system, the quality and reliability of AI-enabled prescriptive recommendations or automated tasks are directly connected to the quality and reliability of the data used to train the system.
Unfortunately, companies are still falling short in data management, which hampers their ability to fast-track AI deployment. Leaving data management and data governance issues unsolved early in the process creates more complicated problems and fractures AI initiatives further down the line. For instance, as more companies shift their AI workloads to the cloud, they encounter greater data integration challenges. Some of the most common barriers include dealing with disparate data that exists on different systems and merging data from diverse sources, affirms Analytics Insight.
How to start solving the challenge with the help of data management
The 451 Research study presents a progressive approach to solving the most demanding aspects of data management, i.e. data ingestion and preparation. It sees the solution in infusing data management solutions with AI and automating time-consuming tasks.
As it’s stated in the research, while data management is a crucial factor for the acceleration of AI, using AI to improve data management has proven massively advantageous, particularly through automation of repetitive and time-consuming tasks. As we’ve seen above, data access and data processing, which are highly repetitive tasks requiring a fair amount of time from data engineers and database administrators, can be accelerated through automation.
Machine learning has proven to have an essential role in improving the efficiency of the data ingestion and preparation stage of the AI pipeline by automating the identification and tagging of data to reduce the need for manual data preparation.
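As a rough illustration of what automated identification and tagging means in practice, the sketch below assigns a semantic tag to each column by inspecting sample values. It is a simplified, rule-based stand-in for the ML-driven tagging the study describes; a production system would replace these hand-written patterns with a trained model, and all column names here are invented.

```python
import re

# Simplified stand-in for ML-driven data tagging: inspect sample values
# in each column and assign a semantic tag, reducing the manual work of
# classifying fields during data preparation. A real system would use a
# trained model instead of these hand-written regex heuristics.

PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "number": re.compile(r"^-?\d+(\.\d+)?$"),
}

def tag_column(values):
    """Return the tag matching all non-empty sample values, else 'text'."""
    samples = [v for v in values if v]
    for tag, pattern in PATTERNS.items():
        if samples and all(pattern.match(v) for v in samples):
            return tag
    return "text"

columns = {
    "signup": ["2020-01-15", "2019-11-03"],
    "contact": ["ana@example.com", "bo@example.org"],
    "amount": ["19.90", "250"],
    "notes": ["call back", "VIP"],
}
print({name: tag_column(vals) for name, vals in columns.items()})
```

Even this crude version shows the payoff: once columns are tagged automatically, data engineers no longer have to classify every field by hand before preparation can begin.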
Automating these labour-intensive stages of the data pipeline is not intended to remove human involvement from the process. Instead, it frees business analysts, database administrators and data engineers to focus on higher-impact data management tasks that cannot be automated, such as architecture planning, data modelling, data security and lifecycle management.
As data management plays a fundamental role in the deployment of AI applications, data management tools for automating repetitive and predictable tasks can have a huge impact on accelerating AI deployment across the whole enterprise.
How to incorporate AI into your data strategy
Companies just starting to incorporate AI into their business, and chasing the potential to capitalise on it, need to adapt their data strategies to include AI.
To see how this is done, we look at the Nets success story presented at the Data 2020 Summit by Vanessa Eriksson, former SVP, Chief of Staff to the Group CIO, and current SVP, Chief Digital Officer at Zenseact.
Vanessa provided first-hand insight into how to adapt the data strategy for the rise of AI in business, along with concrete examples of how Nets structured their data strategy to accommodate AI initiatives. Nets' data strategy is founded on four main units: Master Data & Governance, Common Platforms, Analytics and Automation, and Data Delivery, Vanessa described.
As Nets operates in payment services, the company has focused its AI initiatives on fraud detection and prevention, creating an AI-powered payment fraud prevention solution. So where does data management sit in this journey?
Since fraud detection is a sensitive area, the AI models in use must incorporate trusted and transparent data, for which data management and governance are pivotal. Data is the cornerstone of many of Nets' risk & fraud products and services that leverage AI, and for every use case it's crucial to have trustworthy and easily available data, emphasised Vanessa. So data managers should make sure to help the organisation get ready for the AI journey.
Additionally, enterprises typically deal with siloed legacy systems that cause discrepancies in the data. To accelerate data cleansing, Nets implemented AI-powered data quality tooling that speeds up data preparation, so their data scientists can deliver reliable, high-quality data to the organisation faster.
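One common cleansing step when reconciling siloed legacy systems is matching near-duplicate records that refer to the same entity. The sketch below illustrates the idea with a simple string-similarity heuristic; the AI-powered tooling described above would use trained matching models instead, and the system names and records here are invented for illustration.

```python
from difflib import SequenceMatcher

# Illustrative sketch of reconciling near-duplicate names coming from two
# siloed legacy systems. Real AI-powered data quality tools use trained
# matching models; this uses a plain string-similarity ratio instead.
# The source systems and names below are hypothetical.

def find_matches(system_a, system_b, threshold=0.85):
    """Pair records from two sources whose names are similar enough."""
    matches = []
    for a in system_a:
        for b in system_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
    return matches

crm_names = ["Nets Denmark A/S", "Acme Payments Ltd"]
billing_names = ["nets denmark as", "Globex Corp"]
print(find_matches(crm_names, billing_names))
```

Automating this kind of matching is exactly the sort of repetitive, time-consuming preparation work where the study sees AI delivering the biggest acceleration.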
Taking the first step towards faster AI deployment
The road to successful AI deployment is full of trial and error. To empower data-driven organisations in their AI implementation, we've created a global knowledge-sharing network of practitioners equipped with a toolkit to fast-track Data and AI innovation across sectors and markets.
The Data 2030 Summit unites the greatest minds in the Data Management community in one platform to discuss ways of enabling faster Data Innovation and AI deployment across the enterprise by setting up a modern Data Management strategy and platform for the new decade.