Spark plays an important role in helping data scientists solve all kinds of problems, especially since the release of SparkR, which provides friendly APIs for traditional R users. However, processing data of varying sizes and formats, and working with different models, leads to application patterns that differ from those of traditional R. In this talk, we will share practical experience using SparkR to solve typical data science problems, including: improving the performance of interoperation between SparkR and native R; efficiently loading data from HBase, a very common data source; scheduling a large-scale machine learning job composed of many single-machine R machine learning jobs; tuning performance for jobs triggered by many different users; and using SparkR in cloud-based environments. Finally, we will briefly introduce ongoing community efforts on SparkR planned for upcoming releases.
Yanbo Liang, Software Engineer, Hortonworks
Casey Stella, Principal Software Engineer/Data Scientist, Hortonworks