Data Science - Delivered Continuously - XConf 2017

AutoScout24 is the largest online car marketplace Europe-wide for new and used cars. With more than 2.4 million listings across Europe, AutoScout24 has access to large amounts of data about historic and current market prices and wants to use this data to empower its users to make informed decisions about selling and buying cars. We created a live price estimation service for used vehicles based on a Random Forest prediction model that is continuously delivered to the end user.

https://info.thoughtworks.com/XConf-2017-EU.html

Presented at XConf 2017 in Hamburg and Manchester by Arif Wider (ThoughtWorks) and Christian Deger (AutoScout24).

Data Science - Delivered Continuously - XConf 2017

  1. Arif Wider & Christian Deger: DATA SCIENCE, DELIVERED CONTINUOUSLY
  2. A CONFERENCE ALL ABOUT TECHNOLOGY
  3. Christian Deger, Chief Architect, cdeger@autoscout24.com, @cdeger
  4. Arif Wider, Senior Consultant/Developer, awider@thoughtworks.com, @arifwider
  5. PL S RUS UA RO CZ D NL B F A HR I E BG TR: 18 countries, 2.4m+ cars & motos, 10m+ users per month
  6. The task: A consumer-facing data product
  7. The task: A consumer-facing data product
  8. The task: A consumer-facing data product
  9. The prediction model: Random forest (Volkswagen Golf, car listings of the last two years)
  10. How to turn an R-based prediction model into a high-performance web application?
  11. How to turn an R-based prediction model into a high-performance web application?
  12. How to turn an R-based prediction model into a high-performance web application?
  13. How to turn an R-based prediction model into a high-performance web application? Continuous Delivery!
  14. Typical delivery pipeline: application code in one repository per service; CI builds a deployment package as artifact; CD delivers the package to servers
  15. Continuous delivery pipelines: Prediction Model Pipeline
  16. Continuous delivery pipelines: Prediction Model Pipeline and Web Application Pipeline
  17. The price for CD: Extensive model validation
  18. The price for CD: Extensive model validation
  19. Lessons learned: Form a cross-functional team of data scientists & software engineers! Software engineers learn how data scientists work and understand the quirks of a prediction model; data scientists learn about unit testing, stable interfaces, git, etc., and get quick feedback about the impact of their work. Model and product iterations become much faster!
  20. Lessons learned: Generating gigabytes of Java code is a challenge for the JVM. Use the G1 garbage collector, turn off Tiered Compilation, do extensive warm-ups.
  21. Lessons learned: Warm-up
  22. Lessons learned: The approach of applying Continuous Delivery to Data Science is useful independently of the tech. Successfully applied similarly to a Python- and Spark-based project. Even more useful when quick model evolution is required because of rapidly changing inputs (e.g. user interaction).
  23. Conclusions: Continuous Delivery allows us to bring prediction model changes live very quickly. Only extensive automated end-to-end tests provide the confidence to deploy to production automatically. Java code generation allows for very low response times and excellent scalability for high loads, but requires plenty of memory.
  24. Conclusions: Price evaluation everywhere
  25. Conclusions: Price evaluation everywhere
  26. QUESTIONS? THANK YOU. Arif Wider & Christian Deger, @arifwider @cdeger

Editor's Notes

  • Hi everybody, thanks a lot for being with us at XConf, thanks for attending this talk
  • A
    This is Christian. He is now AutoScout24's chief architect, but he actually joined AutoScout as a mere developer, first became a team lead, and then moved into the role of a Coding Architect.
    At AutoScout we as TW have worked a lot with Christian, and I think I can say that we've enjoyed each other's company quite a bit.
  • C
    Arif is a developer at TW Germany, where Scala is his language of choice, particularly in the context of Big Data applications.
    Before joining TW he was in academia, doing research on applying FP techniques to data synchronization.
  • C
    AutoScout24 is the largest online car marketplace Europe-wide, with roughly 2.4 million listings on the platform, which means that they have a lot of data about how cars are sold.
  • A
    AutoScout has a lot of data about how cars are sold and at what prices.
    - Now our task was to turn all this data into something actually useful for the end user of the page.
    - So our task was to create a consumer-facing data product where users can quickly estimate the current value of their car.
    This works as follows… basic information about the car
  • A
    Optionally indicate equipment and condition
  • A
    You get a price range
  • A
    - What we had when we started working on this was a prediction model, because that's what the data scientists at AutoScout had already built. The language they used for it was R, and the approach used is called random forest.
    - Who of you has heard of Random Forest before?
    - Let's have a look at how this works: the data of the last two years is used to train a prediction model, and what you get out of training is a large number of such decision trees.
    - RF is the algorithm that decides … and it is a technique to work against overfitting, i.e., producing a prediction model that only works well on the training data. (A tiny averaging sketch follows after these notes.)
  • C
    - But our task was to turn this model into a high-performance web application.
    - And in fact, that is not so common yet in the context of data science, because often such data is only used for internal decision-making.
    - But if you want to create a user-facing application, you are in a very different situation, where you have to deal with load peaks etc.
    - That was also the reason why we ruled out running an R server in production pretty early. The problem is that R, at least in its open-source version, does not support multi-threading, so scaling to many concurrent requests is extremely difficult.
  • C
    The traditional approach that we still see quite often: the model is developed by data scientists in some language that suits their way of working best, e.g., R, and then, in order to get good performance, software engineers translate…
    For linear regression, reimplementing the model is not a big problem.
    The data scientists had already changed the model once.
  • C
    - However, with this manual approach, what do you do if the internal structure of the prediction model changes?
    If a software engineer has to reimplement these changes, it takes a long time, and mistakes can be introduced in that translation.
    For example, a change from a random forest to gradient boosted machines.
  • A
    - We therefore looked at how we could automate this, and the technology that helped us with that was H2O.
    - Has anybody heard of H2O?
    - It's a Java-based analytics engine that can be programmed using R (which the data scientists liked) and, and that was the important piece for us, it can export your fully trained prediction model as Java source code.
    - This then allowed us to integrate this model generation into a continuous delivery pipeline.
  • C
    Commit stage: Unit tests etc.
    Additional database migration scripts.
    Blue/green delivery on the instance.
  • C
    This looks as follows: …
    - Then Java code is generated, in our case actually gigabytes of Java source code, which is compiled into a JAR that is uploaded to AWS S3. This is the prediction model pipeline.
    - Now, whenever something is changed in the R-based configuration or, at least as importantly, when the model should be updated using the latest data from the platform, a new model JAR gets generated automatically and is deployed to S3.
  • C
    - Now for the web application, which we implemented in Scala using the Play Framework, there is another CD pipeline.
    - This pipeline also generates a JAR, the application JAR, which is then deployed to AWS EC2.
    - Now, every time this application is deployed, the pipeline also pulls the latest prediction model from S3, and both are loaded into the same JVM. (A model-loading sketch follows after these notes.)
    - But the other way around too: when the model is updated, this triggers a redeployment of the web application with the newest model.
    - This way, all prediction model changes made by the data scientists go straight to production, and users benefit immediately.
  • A
    - However, this only works if you have enough confidence to do so.
    - Therefore we built an extensive model validation workflow.
    - Let's start with how a model is usually trained and how the success of the training is evaluated.
    - You, that is the data scientist, divide the existing historical data into training data and test data, and those two sets need to be disjoint. Then the model is trained using the training data.
    - The test data is then used to create test estimations, and these results are compared with the actual prices in the test data.
    - They will never be exactly the same, but they indicate how good the model is.
    - We want to validate how the model reacts to new data.
  • A
    - Further down the pipeline we use these test estimation results for a comprehensive end-to-end model validation.
    - That means we check whether the JAR that was created by compiling the generated Java code gives us exactly the same results as directly asking the model that was created by the data scientists.
    - Furthermore, we also check whether this model fulfills all the expectations that our web application places on it.
    - This is called a consumer-driven contract test (CDC); the web application in this case is the consumer of the model. (A validation and contract-check sketch follows after these notes.)
    Only if all of those checks are green do we release to production.
  • C
  • A
  • C
    The warm-up time becomes especially problematic during an incident: time to recover is drastically increased.
    You also need to configure your autoscaling to take the period of high load into account. (The JVM options and a warm-up loop are sketched after these notes.)
  • A
  • A
  • C
    Price suggestions during listing creation.
  • C
    Price labels on list and detail pages for fair price, good price and top price.
    Price prediction done on listing creation or change.
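
Code sketches

The random forest note above talks about many decision trees whose individual estimates are combined to counter overfitting. A tiny Scala sketch of that averaging idea; DecisionTree and the feature map are made-up names for illustration, not the actual H2O implementation:

    trait DecisionTree {
      // One tree's price estimate for a single car, described by its features.
      def predict(features: Map[String, Double]): Double
    }

    // The forest averages the estimates of all trees, so no single overfitted
    // tree dominates the final prediction.
    def forestEstimate(trees: Seq[DecisionTree], features: Map[String, Double]): Double =
      trees.map(_.predict(features)).sum / trees.size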
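
The pipeline notes describe pulling the H2O-generated model JAR from S3 and loading it into the same JVM as the Play application. A rough Scala sketch of how such a generated model could be called, assuming H2O's h2o-genmodel "easy predict" wrapper; the class name price_model_v1 and the feature names are placeholders, not the project's real ones:

    import hex.genmodel.GenModel
    import hex.genmodel.easy.{EasyPredictModelWrapper, RowData}

    object PricePredictor {
      // The generated class ships in the model JAR that the pipeline pulled from
      // S3 onto the application classpath; "price_model_v1" is a placeholder name.
      private val generated = Class.forName("price_model_v1").newInstance().asInstanceOf[GenModel]
      private val wrapper = new EasyPredictModelWrapper(generated)

      def estimate(make: String, model: String, mileageKm: Double, firstRegYear: Double): Double = {
        val row = new RowData()  // feature name -> value
        row.put("make", make)    // categorical features as Strings
        row.put("model", model)
        row.put("mileage_km", java.lang.Double.valueOf(mileageKm))             // numeric features as Doubles
        row.put("first_registration_year", java.lang.Double.valueOf(firstRegYear))
        wrapper.predictRegression(row).value  // the estimated price
      }
    }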
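
The validation notes describe two automated gates: the compiled model JAR must reproduce the data scientists' test estimations, and it must satisfy the expectations of the web application (the consumer-driven contract). A simplified Scala sketch of both checks; ReferenceCase, the tolerance, and the column sets are illustrative:

    // A reference case: feature values plus the estimation the data scientists'
    // R/H2O model produced for them.
    final case class ReferenceCase(features: Map[String, String], expectedEstimate: Double)

    // End-to-end check: the generated model (abstracted here as a function) must
    // reproduce the reference estimations, up to a tiny floating-point tolerance.
    def matchesReference(predict: Map[String, String] => Double,
                         cases: Seq[ReferenceCase],
                         tolerance: Double = 1e-6): Boolean =
      cases.forall(c => math.abs(predict(c.features) - c.expectedEstimate) <= tolerance)

    // Consumer-driven contract check: every feature the web application sends
    // must be an input column the model actually knows about.
    def honoursContract(modelInputColumns: Set[String], featuresSentByWebApp: Set[String]): Boolean =
      featuresSentByWebApp.subsetOf(modelInputColumns)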
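
The JVM advice on slide 20 (G1 garbage collector, tiered compilation off, plenty of memory) could be wired into the build roughly like this. This is only an sbt fragment for a forked JVM; the heap size is illustrative, not the project's actual setting:

    // build.sbt fragment (sketch): JVM options for the service that hosts the
    // gigabytes of generated model code. Takes effect when the JVM is forked.
    fork := true
    javaOptions ++= Seq(
      "-XX:+UseG1GC",           // use the G1 garbage collector
      "-XX:-TieredCompilation", // turn off tiered compilation
      "-Xmx8g"                  // the generated model needs plenty of memory (size illustrative)
    )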
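
The warm-up note above: before an instance takes production traffic, it should push enough representative predictions through the freshly loaded model that the JIT has compiled the hot paths. A hypothetical Scala sketch; warmUpRequests, rounds, and the estimate function are not from the original talk:

    // Fire a batch of representative requests through the model before the
    // instance registers with the load balancer; results are discarded, we only
    // want to trigger JIT compilation of the generated model code.
    def warmUp(estimate: Map[String, String] => Double,
               warmUpRequests: Seq[Map[String, String]],
               rounds: Int = 50): Unit =
      for (_ <- 1 to rounds; request <- warmUpRequests)
        estimate(request)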
