Data Innovation Summit turns five next March. Along the way, we have had fantastic speakers unselfishly sharing their knowledge on stage with their peers. Without them, this journey would have been impossible.
This interview is part of an interview series dedicated to humanising Data and AI innovation and celebrating speakers who have presented at the Data Innovation Summit. The emphasis lies on the Data/AI people and practitioners, their professional journeys and their stories.
In 2016, when we started the Data Innovation Summit and data science was in its infancy, we nurtured high hopes and expectations about what state-of-the-art tech would power data projects in 5 years. Looking back now, some of those predictions came true, and some are now a fair target for our jokes.
Robert Luciani, a passionate computer engineer, can attest to how the technology of data science has evolved in recent years.
Hyperight: Hi Robert, it’s great to catch up again! You are a regular at our summits, particularly the Data Innovation Summit. You have been with us since the very beginning and made your debut presentation at Data Innovation Summit 2016. Next year the Data Innovation Summit celebrates its 5th anniversary. To refresh our memories and introduce yourself to our readers, please tell us a bit about who Robert Luciani is. How did your relationship with the Data Innovation Summit begin?
Robert Luciani: Thanks, Ivana! My name is Robert and I’m an obsessive-compulsive computer engineer. It started 15 years ago with the FreeBSD kernel. Now it’s CUDA kernels instead. I was invited to partake in the Data Innovation Summit shortly after founding LakeTide. At the time, deep learning and Kafka were still considered “fringe technologies” by many enterprises. My co-founder Peter sometimes jokes about how concerned I was that we were not niche enough and that all companies would be “data powered” within a few years. Lucky for us, there’s still a lot of work to do on that front!
We’ll become much better at squeezing AI capabilities into low-power devices that are built into everyday things.
Hyperight: Your 2016 presentation topic was “The Emerging Discipline Of Data Science”, where you talked about the transformation of data science and analytics from an IT initiative into a business strategy in its own right. Now, 5 years later, we are talking about data and AI industrialisation; it seems like a lifetime ago. Today data science is an established discipline that adds real value for businesses. Could you please reflect on your professional journey with data science during these 5 years? What were some expected, but also unexpected, advancements in data science?
Robert Luciani: It really does seem like long ago. I expected a number of workflows would become standardised and canned into products and cloud SaaS. That turned out to be the case, and now we see amazing services like Azure Cognitive and Google AutoML. A welcome and unexpected advancement is how GPU programming is no longer the dark art it used to be, with shader hacks and whatnot.
Hyperight: Can we talk about the challenges of data science then and now? What challenges did data scientists face in 2016, how have they transformed, and what challenges are we seeing today?
Robert Luciani: In 2016, enterprises were still sceptical that open-source toys were comparable to products from the incumbent blue-chip companies. That seems to have passed completely. Now the trouble is that there are so many layers of abstraction involved in scalable ML-Ops that poor Python scripters struggle to put their work into production. Like all things, this phase too will pass and people will start applying software development and other IT principles to AI operations.
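The principle Robert alludes to, treating models like any other software artifact, can be illustrated with a minimal sketch (all names and thresholds here are hypothetical, not from the interview): a quality gate that a candidate model must pass before promotion to production, just as code must pass its test suite before merging.

```python
def evaluate_accuracy(model, test_set):
    """Fraction of held-out examples the model labels correctly."""
    correct = sum(1 for features, label in test_set if model(features) == label)
    return correct / len(test_set)

def promote_if_passing(model, test_set, threshold=0.9):
    """Gate deployment on a minimum held-out accuracy, like a CI check.

    Raises instead of silently shipping a regressed model.
    """
    accuracy = evaluate_accuracy(model, test_set)
    if accuracy < threshold:
        raise RuntimeError(f"model accuracy {accuracy:.2f} below gate {threshold}")
    return accuracy

# Toy stand-in model: predicts 1 when the single feature is positive.
toy_model = lambda features: 1 if features[0] > 0 else 0
toy_test_set = [((2,), 1), ((-1,), 0), ((3,), 1), ((-2,), 0)]

print(promote_if_passing(toy_model, toy_test_set))  # 1.0
```

In a real pipeline the same check would run automatically in CI against a versioned evaluation set, which is one concrete way the "software development principles" mentioned above carry over to AI operations.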
Hyperight: In 2019 you came back to the Data Innovation Summit representing LakeTide, a data science consultancy. You are its founder and CTO, which means you have first-hand experience with data science projects. What advice would you give to companies just starting with data science?
Robert Luciani: My advice would be to not spread yourselves too thin. The most common phrase we hear is “oh, we have so many use cases to choose from”. Great. Pick one, and see it through to production. Moreover, you don’t need an army of scientists to get things done. A single passionate individual is oftentimes more than enough.
Hyperight: As we mentioned, the 2020s will be the decade of Data/AI industrialisation. And talking about the decade to come, what is your outlook for the year 2030?
Robert Luciani: Language will not be “solved”, but we will have NLP systems that are good enough to be the primary mode of interaction with services. I also think that all “computer-aided work” will be AI-assisted, whether it’s writing a legal document, designing a house, running physics simulations, or even programming. And we’ll become much better at squeezing AI capabilities into low-power devices that are built into everyday things.
Join the biggest and most influential Data and Advanced Analytics event in the Nordics! This CELEBRATE edition of the Data Innovation Summit is Bigger, Extended, More Insightful, Global.