There’s hardly a person who hasn’t booked, or at least tried to book, a trip with Airbnb. Founded in 2008, it’s an online platform that has revolutionised how people find places to stay around the world. Instead of picking an ordinary hotel room, travellers can choose to spend their holiday like a local in one of the unique accommodations offered by homeowners themselves.
The core of Airbnb’s business is matching guests and hosts. But matching thousands of listings with thousands of guests would be impossible without machine learning. And Airbnb has a machine learning platform ready to meet the demand.
Nikhil Simha, a Software Engineer at Airbnb, presented the workings of Airbnb’s end-to-end machine learning platform – Bighead – at the Data Innovation Summit 2019, and explained how it unifies feature engineering, model training, model serving and monitoring to serve guests the most suitable travel recommendations.
Where Airbnb uses machine learning
Airbnb utilises machine learning throughout its products and has integrated it into every aspect of its product development. The most successful use cases Nikhil mentions are:
- Search ranking – Airbnb’s search ranking has evolved over the years as the company added more and more data points about its users. It is used to give accurate predictions about users’ listing preferences and to personalise accommodation offerings.
- Smart pricing – Determining property nightly rates presented a challenge for Airbnb hosts. To solve it, Airbnb developed smart pricing to help hosts determine the price at which they should offer their listings.
- Fraud detection – Trust is the cornerstone of Airbnb’s global community and growth. To increase and ensure trust, Airbnb has been using machine learning to identify irregular user behaviour and shut it down.
These are the three major use cases. But Nikhil also points out that they have several niche cases, for example:
- Identifying the type of room and amenities based on an image. This way they avoid relying on wrong information given by hosts.
- Automatically routing customer support tickets.
- Predicting customer buying behaviour – This use case has several dimensions, Nikhil says. The most important are surfacing the right listings, automatically setting the price and modelling users.
Why Airbnb needs a platform to run machine learning
The logical question that comes to mind is why a machine learning platform is needed at all, if machine learning is just running data through some linear algebra and waiting for the answers on the other side. But it’s not as straightforward and simple as it may seem.
Nikhil breaks down the problem into four parts: data, modelling, predictions and scientist tooling. What’s more, machine learning is even more complicated when done in real life compared to how it’s done in the classroom. In the classroom, the setting is quite simple, Nikhil states: it all starts with a hypothesis – take a problem, generate features from the data, do the modelling and evaluate until we are satisfied with the results.
But in real life, things get a little messy and there are complications to the models. Again, the data scientist starts with a hypothesis, goes around the company to collect the data and transforms it into features that can be consumed by a machine learning model. Then the data is fed into the model, and the model is trained, evaluated and tested in different settings. At the end, the model is put into production and monitored continuously. The process is complex enough on its own, but we also have to take change into account. The data is constantly changing, just like the world it represents, points out Nikhil.
Handling change is far from easy, and it requires careful coordination of how all changes will be deployed into production. And there are numerous opportunities for things to go wrong, as the process is a cycle, not a one-time thing. Moreover, it’s an engineering problem as well, not just a machine learning one.
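The cycle described above can be sketched in a few lines. The following is a deliberately toy illustration of the loop – collect data, build features, train, evaluate, repeat – with invented data and a trivial threshold “model”; it is not anything resembling Airbnb’s actual code:

```python
# A toy sketch of the production ML loop described above:
# collect data -> build features -> train -> evaluate -> repeat.
# All data and names here are invented for illustration.

def collect_data():
    # Stand-in for gathering raw records from across the company.
    return [{"user_id": 1, "bookings_last_7_days": 3, "booked": 1},
            {"user_id": 2, "bookings_last_7_days": 0, "booked": 0}]

def build_features(rows):
    # Transform raw records into (feature_vector, label) pairs.
    return [([r["bookings_last_7_days"]], r["booked"]) for r in rows]

def train(examples):
    # Trivial "model": predict 1 when the feature exceeds the mean.
    threshold = sum(x[0] for x, _ in examples) / len(examples)
    return lambda x: 1 if x[0] >= threshold else 0

def evaluate(model, examples):
    correct = sum(model(x) == y for x, y in examples)
    return correct / len(examples)

# The cycle repeats because the underlying data keeps changing.
for cycle in range(3):
    examples = build_features(collect_data())
    model = train(examples)
    accuracy = evaluate(model, examples)
```

In production each of these steps is a separate system with its own failure modes, which is exactly why the glue around them dominates the engineering effort.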
How Airbnb solves production ML problems
Nikhil calls attention to a paper published by Google which states that only about 5% of a production ML code stack is ML code; the remaining 95% is glue code that makes sure everything works properly. This is why a more rigorous and principled approach is needed to solve production ML problems. There are a few necessary conditions that need to be satisfied to have a properly functioning machine learning platform:
- Versatility – to be able to work with multiple data sources and different kinds of models.
- Consistency – the model that was trained should also be the one used for predictions, and the same goes for the data – it should be the same throughout.
- Scalability – the data transformation should be able to handle terabytes of raw data.
- Reliability – all systems should be reliable. They should pick up where they left off and continue processing.
The last two points – scalability and reliability – apply to the transformation, training and serving stages.
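The consistency requirement in particular is easy to illustrate in code: one shared feature definition should be used both when building offline training sets and when computing features at serving time. The sketch below is a hypothetical illustration of that idea; the function names and record layouts are invented, not Airbnb’s actual interfaces:

```python
import math

# Hypothetical sketch of the consistency requirement: one shared feature
# definition feeds both the offline training path and the online serving
# path, so the model never sees features computed two different ways.
# Names and record layouts are invented, not Airbnb's actual interfaces.

def log_nightly_rate(rate):
    """Shared feature transform, applied identically in both paths."""
    return math.log1p(rate)

def build_training_set(historical_listings):
    # Offline path: batch-transform historical records into training rows.
    return [(log_nightly_rate(listing["rate"]), listing["was_booked"])
            for listing in historical_listings]

def features_for_request(live_listing):
    # Online path: the same transform applied to a single live record.
    return log_nightly_rate(live_listing["rate"])

# Identical inputs yield identical feature values on both paths.
offline = build_training_set([{"rate": 120.0, "was_booked": 1}])
online = features_for_request({"rate": 120.0})
assert offline[0][0] == online
```

When the two paths are written separately – a batch SQL job offline and application code online – they inevitably drift apart, which is the failure mode the consistency condition guards against.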
Having established the conditions, Nikhil lays out all components of the problem:
- Transformation or feature engineering (can be both online and offline) – online transformations serve features to a live model, while the offline counterpart creates training sets in a data lake (Zipline)
- Training (ML Automator)
- Prediction (can be both online and offline) (Deepthought)
- Management (Bighead service)
But where is the data scientist in this picture? All components are connected via a user interface (Redspot) that enables the data scientist to carry out the machine learning workflow, Nikhil emphasises.
The majority of a machine learning project is spent on collecting data and transforming it into features. In fact, Nikhil states that at Airbnb they have discovered that nearly 70% of the time a data scientist spends developing machine learning models goes not into the actual modelling, but into collecting data and feature engineering.
A real-life example of predicting booking with machine learning
To provide a real-life example of a prediction feature, we’ll look at Experiences.
Apart from listing properties, Airbnb also offers the Experiences feature, with which locals can show visitors landmarks and help them experience a place like a local. If a data scientist wants to predict the likelihood of a user booking an Experience, they can look at the sum of the user’s bookings in the last 7 days.
The feature that makes the predictions consists of several components:
- The prediction we make
- Labels that tell us whether our prediction came true
- Features associated with the problem, which the data scientist defines and which change with time. They are stored as training rows in the warehouse and collected into a training set to train a model.
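The idea of time-varying feature values stored as training rows can be sketched as follows. This is a hypothetical illustration with invented names and toy data, not Airbnb’s actual schema:

```python
# Hypothetical sketch (invented names, toy data) of how a time-varying
# feature becomes training rows: each row records the feature value as it
# was at prediction time, together with the label observed afterwards.

def bookings_in_window(booking_times, as_of, window_days=7):
    # Feature: number of bookings in the `window_days` before `as_of`.
    return sum(1 for t in booking_times if as_of - window_days <= t < as_of)

def make_training_rows(booking_times, observations):
    # observations: (as_of_time, label) pairs collected over time.
    return [{"as_of": as_of,
             "bookings_last_7_days": bookings_in_window(booking_times, as_of),
             "label": label}
            for as_of, label in observations]

# One user's booking timestamps (in days) and two labelled points in time.
rows = make_training_rows(booking_times=[1, 3, 9],
                          observations=[(8, 1), (15, 0)])
# At day 8 the window covers days 1-8 (2 bookings); at day 15 it covers
# days 8-15 (1 booking) - the same feature, different values over time.
```

Collected into one table in the warehouse, rows like these form exactly the kind of training set described above.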
This is the iterative loop of the machine learning process – the model is put into production, data is collected, and the model is trained again.