Serverless Data Architecture at scale on Google Cloud Platform - Lorenzo Ridi - Codemotion Milan 2016

  1. Serverless Data Architecture at scale on Google Cloud Platform Lorenzo Ridi MILAN 25-26 NOVEMBER 2016
  2. What’s the date today?
  3. Black Friday (ˈblæk fraɪdɪ) noun The day following Thanksgiving Day in the United States. Since 1932, it has been regarded as the beginning of the Christmas shopping season.
  4. Black Friday in the US 2012 - 2016 source: Google Trends, November 23rd 2016
  5. Black Friday in Italy 2012 - 2016 source: Google Trends, November 23rd 2016
  6. What are we doing: [diagram: tweets about Black Friday → processing + analytics → insights]
  7. How we’re gonna do it
  8. How we’re gonna do it
  9. Pub/Sub Container Engine (Kubernetes) How we’re gonna do it
  10. What is Google Cloud Pub/Sub? ● Google Cloud Pub/Sub is a fully-managed real-time messaging service. ○ Guaranteed delivery ■ “At least once” semantics ○ Reliable at scale ■ Messages are replicated in different zones
  11. From Twitter to Pub/Sub (SHELL)
      $ gcloud beta pubsub topics create blackfridaytweets
      Created topic [blackfridaytweets].
  12. From Twitter to Pub/Sub [diagram: a producer (?) publishes to a Pub/Sub Topic; Subscriptions A, B and C deliver the messages to Consumers A, B and C]
  13. From Twitter to Pub/Sub ● Simple Python application using the TweePy library (PYTHON)
      # somewhere in the code, track a given set of keywords
      stream = Stream(auth, listener)
      stream.filter(track=['blackfriday', [...]])
      [...]
      # somewhere else, write messages to Pub/Sub
      for line in data_lines:
          pub = base64.urlsafe_b64encode(line)
          messages.append({'data': pub})
      body = {'messages': messages}
      resp = client.projects().topics().publish(
          topic='blackfridaytweets', body=body).execute(num_retries=NUM_RETRIES)
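For completeness, this is roughly what a consumer attached to one of the subscriptions could look like. A minimal sketch assuming the modern google-cloud-pubsub Python client (the talk uses the older google-api-python-client) and a hypothetical subscription named blackfridaytweets-sub:

      # Consumer sketch: pull tweets back out of Pub/Sub (assumed client library and names).
      from google.cloud import pubsub_v1

      project_id = "codemotion-2016-demo"
      subscriber = pubsub_v1.SubscriberClient()
      topic_path = subscriber.topic_path(project_id, "blackfridaytweets")
      sub_path = subscriber.subscription_path(project_id, "blackfridaytweets-sub")

      # Create the subscription if it does not exist yet.
      subscriber.create_subscription(request={"name": sub_path, "topic": topic_path})

      def callback(message):
          print("Received tweet:", message.data.decode("utf-8"))
          message.ack()  # unacknowledged messages are redelivered ("at least once" semantics)

      # Listen asynchronously; block the main thread for a while.
      future = subscriber.subscribe(sub_path, callback=callback)
      try:
          future.result(timeout=60)
      except Exception:
          future.cancel()

Because Pub/Sub decouples producers and consumers, any number of such consumers can be attached without touching the ingestion application.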
  14. From Twitter to Pub/Sub App + Libs
  15. VM From Twitter to Pub/Sub App + Libs
  16. VM From Twitter to Pub/Sub App + Libs
  17. From Twitter to Pub/Sub App + Libs Container
  18. From Twitter to Pub/Sub App + Libs Container (DOCKERFILE)
      FROM google/python
      RUN pip install --upgrade pip
      RUN pip install pyopenssl ndg-httpsclient pyasn1
      RUN pip install tweepy
      RUN pip install --upgrade google-api-python-client
      RUN pip install python-dateutil
      ADD twitter-to-pubsub.py /twitter-to-pubsub.py
      ADD utils.py /utils.py
      CMD python twitter-to-pubsub.py
  19. From Twitter to Pub/Sub App + Libs Container
  20. From Twitter to Pub/Sub App + Libs Container Pod
  21. What is Kubernetes (K8S)? ● An orchestration tool for managing a cluster of containers across multiple hosts ○ Scaling, rolling upgrades, A/B testing, etc. ● Declarative – not procedural ○ Auto-scales and self-heals to desired state ● Supports multiple container runtimes, currently Docker and CoreOS Rkt ● Open-source: github.com/kubernetes
  22. From Twitter to Pub/Sub App + Libs Container Pod (YAML)
      apiVersion: v1
      kind: ReplicationController
      metadata: [...]
      spec:
        replicas: 1
        template:
          metadata:
            labels:
              name: twitter-stream
          spec:
            containers:
            - name: twitter-to-pubsub
              image: gcr.io/codemotion-2016-demo/pubsub_pipeline
              env:
              - name: PUBSUB_TOPIC
                value: ...
  23. From Twitter to Pub/Sub App + Libs Container Pod
  24. From Twitter to Pub/Sub App + Libs Container Pod Node
  25. Node From Twitter to Pub/Sub Pod A Pod B
  26. From Twitter to Pub/Sub Node 1 Node 2
  27. From Twitter to Pub/Sub (SHELL)
      $ gcloud container clusters create codemotion-2016-demo-cluster
      Creating cluster cluster-1...done.
      Created [...projects/codemotion-2016-demo/.../clusters/codemotion-2016-demo-cluster].
      $ gcloud container clusters get-credentials codemotion-2016-demo-cluster
      Fetching cluster endpoint and auth data.
      kubeconfig entry generated for cluster-1.
      $ kubectl create -f ~/git/kube-pubsub-bq/pubsub/twitter-stream.yaml
      replicationcontroller "twitter-stream" created.
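To check that the ReplicationController's single replica is actually running, kubectl get pods works; the same check can also be scripted. A minimal sketch assuming the official kubernetes Python client and the kubeconfig entry generated by gcloud above:

      # Verification sketch: list the pods managed by the twitter-stream ReplicationController.
      from kubernetes import client, config

      config.load_kube_config()  # uses the entry created by "gcloud container clusters get-credentials"
      v1 = client.CoreV1Api()

      pods = v1.list_namespaced_pod("default", label_selector="name=twitter-stream")
      for pod in pods.items:
          print(pod.metadata.name, pod.status.phase)  # expect something like "twitter-stream-xxxxx Running"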
  28. Pub/Sub Kubernetes How we’re gonna do it
  29. Pub/Sub Kubernetes Dataflow How we’re gonna do it
  30. Pub/Sub Kubernetes Dataflow BigQuery How we’re gonna do it
  31. What is Google Cloud Dataflow? ● Cloud Dataflow is a collection of open source SDKs to implement parallel processing pipelines. ○ same programming model for streaming and batch pipelines ● Cloud Dataflow is a managed service to run parallel processing pipelines on Google Cloud Platform
  32. What is Google BigQuery? ● Google BigQuery is a fully-managed Analytic Data Warehouse solution allowing real-time analysis of Petabyte-scale datasets. ● Enterprise-grade features ○ Batch and streaming (100K rows/sec) data ingestion ○ JDBC/ODBC connectors ○ Rich SQL-2011-compliant query language (new!) ○ Supports updates and deletes (new!)
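As an aside, the streaming ingestion mentioned above can also be used directly, outside of Dataflow. A minimal sketch assuming the google-cloud-bigquery Python client and a hypothetical table codemotion-2016-demo.tweets.tweets_raw with a matching schema:

      # Streaming-insert sketch: rows become queryable within seconds (assumed table and schema).
      from google.cloud import bigquery

      client = bigquery.Client(project="codemotion-2016-demo")
      rows = [{"text": "Great #blackfriday deals!", "user": "someone", "created_at": "2016-11-25T10:00:00"}]

      errors = client.insert_rows_json("codemotion-2016-demo.tweets.tweets_raw", rows)
      if errors:
          print("Insert errors:", errors)

In the architecture built here, though, it is the Dataflow pipeline (via BigQueryIO) that performs the streaming inserts.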
  33. From Pub/Sub to BigQuery [diagram: Pub/Sub Topic → Subscription → Dataflow Pipeline (Read tweets from Pub/Sub → Format tweets for BigQuery → Write tweets on BigQuery) → BigQuery Table]
  34. From Pub/Sub to BigQuery ● A Dataflow pipeline is a Java program. (JAVA)
      // TwitterProcessor.java
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<String> tweets = p.apply(PubsubIO.Read.topic("...blackfridaytweets"));
        PCollection<TableRow> formattedTweets = tweets.apply(ParDo.of(new DoFormat()));
        formattedTweets.apply(BigQueryIO.Write.to(tableReference));
        p.run();
      }
  35. From Pub/Sub to BigQuery ● A Dataflow pipeline is a Java program. (JAVA)
      // TwitterProcessor.java
      // Do Function (to be used within a ParDo)
      private static final class DoFormat extends DoFn<String, TableRow> {
        private static final long serialVersionUID = 1L;
        @Override
        public void processElement(DoFn<String, TableRow>.ProcessContext c) throws IOException {
          c.output(createTableRow(c.element()));
        }
      }
      // Helper method
      private static TableRow createTableRow(String tweet) throws IOException {
        return JacksonFactory.getDefaultInstance().fromString(tweet, TableRow.class);
      }
  36. From Pub/Sub to BigQuery ● Use Maven to build, deploy or update the Pipeline. (SHELL)
      $ mvn compile exec:java -Dexec.mainClass=it.noovle.dataflow.TwitterProcessor -Dexec.args="--streaming"
      [...]
      INFO: To cancel the job using the 'gcloud' tool, run:
      > gcloud alpha dataflow jobs --project=codemotion-2016-demo cancel 2016-11-19_15_49_53-5264074060979116717
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 18.131s
      [INFO] Finished at: Sun Nov 20 00:49:54 CET 2016
      [INFO] Final Memory: 28M/362M
      [INFO] ------------------------------------------------------------------------
  37. From Pub/Sub to BigQuery ● You can monitor your pipelines from Cloud Console.
  38. From Pub/Sub to BigQuery ● Data start flowing into BigQuery tables. You can run queries from the CLI or the Web Interface.
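The same queries can also be run programmatically. A minimal sketch assuming the google-cloud-bigquery Python client and a hypothetical table codemotion-2016-demo.tweets.tweets_raw with a text column:

      # Query sketch: count the tweets ingested so far (assumed project, dataset and table names).
      from google.cloud import bigquery

      client = bigquery.Client(project="codemotion-2016-demo")
      query = """
          SELECT COUNT(*) AS tweet_count
          FROM `codemotion-2016-demo.tweets.tweets_raw`
          WHERE REGEXP_CONTAINS(LOWER(text), r'blackfriday')
      """
      for row in client.query(query).result():  # result() waits for the query job to finish
          print("Tweets mentioning Black Friday so far:", row.tweet_count)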
  39. Pub/Sub Kubernetes Dataflow BigQuery How we’re gonna do it
  40. Pub/Sub Kubernetes Dataflow BigQuery Data Studio How we’re gonna do it
  41. Pub/Sub Kubernetes Dataflow BigQuery How we’re gonna do it Data Studio
  42. Pub/Sub Kubernetes Dataflow BigQuery How we’re gonna do it Natural Language API Data Studio
  43. Sentiment Analysis with Natural Language API Polarity: [-1,1] Magnitude: [0,+inf) Text
  44. Sentiment Analysis with Natural Language API Polarity: [-1,1] Magnitude: [0,+inf) Text sentiment = polarity x magnitude
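This is roughly how the sentiment call looks in isolation. A minimal sketch assuming the google-cloud-language Python client; note that the current client names the polarity value score, with the same [-1, 1] range:

      # Sentiment sketch: score (polarity) and magnitude for a single piece of text.
      from google.cloud import language_v1

      client = language_v1.LanguageServiceClient()
      document = language_v1.Document(
          content="I love these #blackfriday deals!",
          type_=language_v1.Document.Type.PLAIN_TEXT)

      sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
      print("polarity (score):", sentiment.score, "magnitude:", sentiment.magnitude)
      print("sentiment =", sentiment.score * sentiment.magnitude)  # the slide's simplistic metric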
  45. Sentiment Analysis with Natural Language API [diagram: Dataflow Pipeline reads tweets from a Pub/Sub Topic, then splits into two branches: one formats tweets for BigQuery and writes them; the other filters tweets and evaluates sentiment, formats them for BigQuery and writes them; both branches end in BigQuery Tables]
  46. From Pub/Sub to BigQuery ● We just add the additional necessary steps. (JAVA)
      // TwitterProcessor.java
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<String> tweets = p.apply(PubsubIO.Read.topic("...blackfridaytweets"));
        PCollection<String> sentTweets = tweets.apply(ParDo.of(new DoFilterAndProcess()));
        PCollection<TableRow> formSentTweets = sentTweets.apply(ParDo.of(new DoFormat()));
        formSentTweets.apply(BigQueryIO.Write.to(sentTableReference));
        PCollection<TableRow> formattedTweets = tweets.apply(ParDo.of(new DoFormat()));
        formattedTweets.apply(BigQueryIO.Write.to(tableReference));
        p.run();
      }
  47. From Pub/Sub to BigQuery ● The update process preserves all in-flight data. (SHELL)
      $ mvn compile exec:java -Dexec.mainClass=it.noovle.dataflow.TwitterProcessor -Dexec.args="--streaming --update --jobName=twitterprocessor-lorenzo-1107222550"
      [...]
      INFO: To cancel the job using the 'gcloud' tool, run:
      > gcloud alpha dataflow jobs --project=codemotion-2016-demo cancel 2016-11-19_15_49_53-5264074060979116717
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 18.131s
      [INFO] Finished at: Sun Nov 20 00:49:54 CET 2016
      [INFO] Final Memory: 28M/362M
      [INFO] ------------------------------------------------------------------------
  48. From Pub/Sub to BigQuery
  49. Pub/Sub Kubernetes Dataflow BigQuery Data Studio We did it! Natural Language API
  50. Pub/Sub Kubernetes Dataflow BigQuery Data Studio We did it! Natural Language API
  51. Live demo
  52. Polarity: -1.0 Magnitude: 1.5 Polarity: -1.0 Magnitude: 2.1
  53. Thank you!

Editor's Notes

  1. Yes, today is BLACK FRIDAY!
  2. Black Friday is the biggest selling event in the US, and since 1932 it has marked the beginning of the Christmas shopping season.
  3. Interest in Black Friday in the US has remained roughly unchanged over the last few years, according to Google Trends.
  4. However, if we perform the same analysis for Italy, we can see that interest in Black Friday has grown exponentially. That's why no company (even worldwide) can ignore this day: companies can take advantage of Black Friday to advertise themselves and sell more. We are going to step into the shoes of a company that wants to propose deals specific to Black Friday, so the problem is: what deals should we make, and on which channels should we advertise them to maximize revenue?
  5. Social networks like Twitter can help a lot in analyzing people's trends and opinions, supporting us in making the right decision. So today we are focusing on Twitter. <selected hashtags>
  6. This is how we want to do it. The story is more or less always the same: we get some data, we process it (removing unnecessary things, transforming others), and we store it in a format that is good for analysis. Complexities: we do not have much time, and we have to make it work even though we don't know how much traffic we will have to handle (how high is the peak we saw before?).
  7. Our solution is to adopt a serverless architecture: we want to use services that let us concentrate on our solution rather than on config files and boilerplate code, and we do not want to configure or manage the infrastructure. We chose Google Cloud Platform because its Data Analytics offering is based on exactly these foundations. Today we are going to explore almost all of the GCP tools for Data Analytics. So, let's start this whirlwind tour!
  8. Let's start from the beginning. For the ingestion part we are going to use two technologies: Google Container Engine, the technology that powers Kubernetes-as-a-service on GCP (who knows Kubernetes? Containers/Docker?), and Google Cloud Pub/Sub, a messaging middleware solution in the cloud.
  9. Pub/Sub is a fully managed real-time messaging service. I create a topic, I can send messages to it, and if I'm interested in a topic I can subscribe to it and start receiving messages. Nothing new, other technologies do this. However, Pub/Sub has a few strong points: it is a service, so I do not have to configure a cluster; it is reliable by design; and it stays reliable at scale.
  10. How do I create a Pub/Sub topic? Without going into much detail, it is a one-liner. gcloud is the command-line tool that manages all Google Cloud Platform resources.
  11. This is how we are going to use Pub/Sub: we implement something that converts tweets into messages, and by means of Pub/Sub we can distribute these tweets to several subscribers with ease. Pub/Sub decouples producers and consumers: they do not have to know each other. It also improves the reliability of the overall system, acting as a shock absorber even if some parts of the downstream infrastructure have problems. We have a missing piece here: how do we capture tweets and turn them into messages?
  12. We write a simple Python app that uses the TweePy library to interact with the Twitter Streaming API. Somewhere in the code we use the stream.filter method to track a list of keywords; somewhere else (in the TweePy event listener) we collect tweets, package them, and send them out as Pub/Sub messages (note the Pub/Sub topic name).
  13. We wrote the app and we tested it. Now we have to deploy it (and its libraries) somewhere. Our first temptation would be...
  14. ...to start a virtual machine, install Python on it, and run the app there. However...
  15. This is not the solution we want: it doesn't scale; it is hard to make fault-tolerant (if the VM crashes, the app doesn't restart); and it is difficult to deploy and update (no rolling updates).
  16. A much better solution is to use containers. Containers provide a higher level of abstraction (OS-level rather than hardware-level) that allows us to create portable, isolated deployments that can be installed easily in on-prem or cloud environments.
  17. We create a Docker image using a Dockerfile, which is a sequence of instructions that, starting from a base image, adds pieces to build our own solution. In this case we: install the necessary libraries, add our Python files, and invoke our Python executable (the container will run as long as this command does).
  18. We build an image based on the Dockerfile and we are done. A container solves the problems of deployment and portability, but not those of scaling and management.
  19. We need a further layer of abstraction, and this level of abstraction is provided by Kubernetes.
  20. Kubernetes is an open source orchestration tool for managing clusters of containers. It introduces all those features that are missing from “standard” container deployments. A cool thing about Kubernetes is that it is completely declarative - you do not specify that you want one more node or one less pod, but you define a desired state and the Kubernetes Master works to reach and maintain that state.
  21. This is what we deploy on Kubernetes: a ReplicationController (or a ReplicaSet/Deployment in recent versions) is the definition of a group of container replicas that you want running concurrently. For the sake of our example we need only one replica, but even in this case a ReplicationController is useful, as it ensures that this single replica is always up and running.
  22. So we wrap our container in a Pod. The Pod is the unit of replication in Kubernetes.
  23. Each Pod runs on a cluster node, but...
  24. ...more than one Pod can run on a single node. The allocation of Pods to nodes is managed by the Kubernetes Master, which is a special cluster node. In Container Engine the K8S Master is completely managed (and free!)
  25. Since version 1.3, Kubernetes also supports autoscaling of nodes: if there aren't sufficient resources available to keep up with Pod scaling, the node pool is enlarged.
  26. Creating a Kubernetes cluster is easy: 1) we create the cluster, 2) we acquire Kubernetes credentials using gcloud, and 3) we use kubectl (the open-source CLI) to submit commands to the Kubernetes Master.
  27. Once the cluster has been created, we can monitor all worker nodes from the Cloud Console. Here we have one node, which contains one Pod, which contains one container, which contains our application, which is transforming tweets into Pub/Sub messages.
  28. Cool! We have implemented the first piece of our processing chain. What’s next?
  29. For the processing we want something equally scalable, so we are going to use a technology named Google Cloud Dataflow and...
  30. ...for the storage we are going to use Google BigQuery.
  31. Google Cloud Dataflow is two things. First, it is a collection of open-source SDKs to implement parallel processing pipelines. The cool thing about being open source is that runners for Dataflow pipelines have already been implemented on top of other open-source processing technologies, like Apache Spark or Apache Flink (all the code I've written for this demo could run in an open-source environment); the project itself is now an Apache Incubator project called Apache Beam. Second, Cloud Dataflow is a managed service on Google Cloud Platform that runs Apache Beam pipelines.
  32. Google BigQuery is an analytic data warehouse with impressive (almost magical) performance. It comes with a series of features that make it a valid choice as an enterprise-grade DWH: the ability to ingest streaming and batch data; JDBC and ODBC connectors to guarantee interoperability; a rich query language, which has now been renewed to support standard ANSI SQL-2011; and a new Data Manipulation Language that supports updates and deletes.
  33. How are we going to make use of these tools? We will build a simple Dataflow pipeline composed of three steps: read tweets from Pub/Sub, transform tweets to conform to the BigQuery API, and write tweets to BigQuery. By “tweet” I do not mean only the text, but all the information returned by the Twitter APIs (info about the user, etc.).
  34. The implementation is very easy: this is one of the best parts of Cloud Dataflow compared with existing processing technologies like MapReduce. First, we create a Pipeline object. The first operation is performed by invoking an apply method on the Pipeline object, using a Source to create collections of data called PCollections; in this case, we use a Pub/Sub source to create a so-called unbounded PCollection (that is, a PCollection without a limited number of elements). All subsequent operations are performed by invoking apply methods on PCollections, which in turn generate other PCollections. We write data by applying a write transform, and at the end we tell the system to run the pipeline. The source (PubsubIO) determines whether the pipeline is a streaming or a batch one; all the other components (like BigQueryIO) adapt accordingly, e.g. BigQueryIO uses the streaming API in streaming mode and load jobs in batch mode.
  35. The simplest operation you can apply to a PCollection is a ParDo (Parallel Do), which processes every element of the PCollection independently of the others. The argument of a ParDo is a DoFn object; we need to override its processElement method to instruct the system to do the right thing.
  36. The easiest way to deploy a Dataflow pipeline is using Maven. (Some complexity is hidden here, like the choice of the runner and the staging location.)
  37. Once your pipeline is deployed, you can monitor its execution from the Cloud Console.
  38. You can check whether data is actually being processed by querying the destination BigQuery table. It works! We built a very simple processing pipeline that streams data in real time to our DWH and lets us query results as they come in. What now?
  39. Now we have to find some interesting analyses that we can run on our data, and represent them in a readable and shareable manner.
  40. Google Data Studio is a BI solution that allows the creation of dashboards and graphs from several sources, including BigQuery.
  41. Here you see an example showing the number of tweets per state in the US. Not very fancy. In fact, we soon realize that the information we get from raw data doesn't give us very “smart” insights.
  42. We need to enrich our data model in some way. The good news is that Google released a series of APIs exposing ready-to-use Machine Learning algorithms and models. The one that seems to fit our case is...
  43. ...the Natural Language API. This API can perform several different tasks on text strings: extract the syntactic structure of sentences, extract entities mentioned within a text, and even perform sentiment analysis.
  44. The sentiment analysis API takes a text as input and returns two float values: polarity (ranging from -1 to 1) expresses the mood of the text, with positive values denoting positive moods; magnitude (ranging from 0 to +inf) expresses the intensity of the feeling, with higher values denoting stronger feelings.
  45. Our personal simplistic definition of “sentiment” will be “polarity times magnitude”.
  46. Let's modify our pipeline. For illustration purposes we will keep the old flow and add another one to implement sentiment analysis. Sentiment will be evaluated only for a subset of tweets (those that explicitly contain the word “blackfriday”).
  47. How is this reflected in the pipeline code? We only have to add three lines of code (I'm lying!). Note how we start from the “tweets” PCollection both for the sentiment processing and for writing the raw data. Note also how we can reuse the DoFormat function for both flows.
  48. Updating a pipeline is easy if the update doesn’t modify the existing structure (we are only adding new pieces). We only have to provide the name of the job we want to update. Dataflow will take care of draining the existing pipeline before shutting it down.
  49. The Cloud Console shows the updated pipeline, and new “enriched” data is immediately available in a BigQuery table.
  50. We did it! We built a serverless, scalable data solution on Google Cloud Platform. One interesting aspect of this architecture is that it is completely no-ops, and...
  51. ...it has integrated logging, monitoring and alerting thanks to Google Stackdriver. And we didn’t have to do anything!
  52. Let me show you the final solution. We will see how easy it is to query data and monitor the infrastructure, and we will take a look at some dashboards.
  53. When you detect an anomaly in one of the trends, you can drill down in BigQuery to explore the reasons. Walmart's popularity is not very high, mainly due to their decision to start Black Friday sales at 6 PM on Thanksgiving Day. Amazon's popularity dropped right after they announced their first “Black Friday Week” deals, which apparently did not meet customers' expectations (they are recovering, though :)