Big data. Real time.
The open big data serving engine: Store, search, rank and organize big data at user serving time.
Big data maturity levels
0 (Latent): Data is produced but not systematically leveraged.
Examples: Credit card transaction data is stored for audit purposes. Movie streaming events are logged.
1 (Analysis): Data is used to inform decisions made by humans.
Examples: Statistics on credit card fraud are gathered to create policies for flagging fraudulent transactions. Lists of movies popular with various user segments are compiled to inform curated recommendation lists.
2 (Learning): Data is used to learn automated decisions, disconnected from direct action.
Examples: Fraudulent credit card transactions are automatically flagged. Lists of movie recommendations per user segment are automatically generated.
3 (Acting): Automated data-driven decisions are made in real time.
Examples: Fraudulent credit card transactions are automatically blocked. Personalized movie recommendations are computed when needed by each user.
Zooming in
3 (Acting): Automated data-driven decisions are made in real time.
Examples: Fraudulent credit card transactions are automatically blocked.
Personalized movie recommendations are computed when needed by each user.
Two types
● Decisions can be made by considering a single data item: Streaming or stateless model evaluation
● Decisions are made over many data items: Big data serving
Big data serving: What is required?
Realtime actions: Find data and make inferences in tens of milliseconds.
Realtime knowledge: Handle data changes at high continuous rates.
Scalable: Handle high request rates over big data sets.
Always available: Recover from hardware failures without human intervention.
Online evolvable: Change schemas, logic, models, hardware while online.
Integrated: Data feeds from Hadoop, learned models from TensorFlow etc.
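These properties come together in a deployable application package. A minimal services.xml sketch of such a package is shown below; the cluster and document names (`items`, `item`) are illustrative assumptions, not from the deck:

```xml
<!-- Hypothetical minimal Vespa services.xml: one stateless container
     cluster for queries and writes, one stateful content cluster. -->
<services version="1.0">
  <container id="default" version="1.0">
    <search/>          <!-- query endpoint -->
    <document-api/>    <!-- real-time write endpoint -->
    <nodes count="1"/>
  </container>
  <content id="items" version="1.0">
    <redundancy>2</redundancy>  <!-- keep two copies of each document -->
    <documents>
      <document type="item" mode="index"/>
    </documents>
    <nodes count="2"/>  <!-- node count can change while online -->
  </content>
</services>
```

Deploying a changed package (schemas, components, models, node counts) while serving is how the "online evolvable" requirement is met.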
Introducing Vespa
An open source platform for big data serving.
Like Hadoop, it was developed at Yahoo: first for search, now for all big data serving cases.
Recently open sourced at http://vespa.ai
Makes the big data serving features available to everyone.
Vespa at Oath/Yahoo
Oath: Flickr, Tumblr, TechCrunch, Huffington Post, Aol, Engadget, Gemini, Yahoo News, Sports, Finance, Mail, etc.
Hundreds of Vespa applications,
… serving over a billion users,
… over 200,000 queries per second,
… billions of content items.
Vespa is a platform for low latency computations over large, evolving data sets:
● Search and selection over structured and unstructured data
● Relevance scoring: NL features, advanced ML models, TensorFlow etc.
● Query time organization and aggregation of matching data
● Real-time writes at a high sustained rate
● Live elastic and auto-recovering stateful content clusters
● Processing logic container (Java)
● Managed clusters: One to hundreds of nodes
Typical use cases: text search, personalization / recommendation / targeting,
real-time data display
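For the text search use case, queries combine structured and unstructured selection in one YQL expression sent to Vespa's Search API. A small sketch of building such a request (the endpoint, field names and values here are assumptions for illustration):

```python
from urllib.parse import urlencode

def vespa_search_url(endpoint, yql, hits=10, ranking=None):
    """Build a Vespa Search API request URL (GET /search/)."""
    params = {"yql": yql, "hits": hits}
    if ranking:
        params["ranking"] = ranking  # a rank profile defined in the schema
    return f"{endpoint}/search/?{urlencode(params)}"

# Free-text matching and a structured condition in one query:
url = vespa_search_url(
    "http://localhost:8080",
    'select * from sources * where default contains "vespa" and year > 2016',
    hits=5,
)
print(url)
```

The same query could also be sent as a POST body; the GET form is just the simplest to show.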
Another use case: Zedge
"The primary motivations for Zedge to use Vespa are: 1) simplify search and recommender systems for Zedge Android and iOS apps, both for serving (reduce the amount of custom code to maintain) and for processing/indexing (reduce the need for big data jobs by calculating more on the fly with tensors in Vespa); 2) accelerate innovation for content discovery, e.g. easier to improve ranking with machine learning using Vespa in combination with TensorFlow than with e.g. our custom code recommender systems. An added bonus so far has been that more people understand both search and recommender systems due to the overall reduction in complexity."
- Amund Tveit, VP of Data, Zedge
Comparisons
Vespa: Focus on big data serving: Large scale, efficient, ML models
Elasticsearch: Focus on analytics: log ingestion, visualization etc.
Solr: Focused on enterprise search: Handling document formats etc.
Relational databases: Transactions, hard to scale, no IR, no relevance
NoSQL stores: Easier to scale, but no transactions, no IR, no relevance
Hadoop/Cloudera/Hortonworks: Big Data, but not for serving
Analytics vs big data serving
Analytics                                        Big data serving
Response time in low seconds                     Response time in low milliseconds
Low query rate                                   High query rate
Time series, append only                         Random writes
Down time and data loss acceptable               HA, no data loss, online redistribution
Massive data sets (trillions of docs) are cheap  Massive data sets are more expensive
Analytics GUI integration                        Machine learning integration
Vespa and Elasticsearch use cases
[Venn diagram: Vespa covers big data serving, Elasticsearch covers analytics, and the two overlap on text search, relevance, grouping and aggregation]
Vespa architecture
Scalable low latency execution: How to bound latency
[Architecture diagram: an application package (configuration, components, ML models) is deployed through the admin & config cluster to stateless container nodes and stateful content nodes; a query enters at a container node and is scatter-gathered across core-sharded content nodes, each holding the models]
1) Parallelization
2) Move execution to data nodes
3) Prepared data structures (indexes etc.)
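These three techniques combine in query execution: the container scatters the query to the content nodes, each node ranks its own partition locally, and the container gathers and merges the partial top-k lists. A minimal sketch of that scatter-gather step, with in-process threads and toy data standing in for content nodes (not Vespa's actual implementation):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

# Hypothetical partitions: each content node holds (doc_id, score) pairs.
SHARDS = [
    [("a", 0.9), ("b", 0.4)],
    [("c", 0.7), ("d", 0.8)],
    [("e", 0.2), ("f", 0.95)],
]

def search_shard(shard, k):
    """Runs on a content node: rank local data, return its top-k."""
    return heapq.nlargest(k, shard, key=lambda hit: hit[1])

def scatter_gather(shards, k=2):
    """Runs on the container node: scatter in parallel, gather and merge."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = pool.map(lambda s: search_shard(s, k), shards)
    # Merge the partial result lists into the global top-k.
    return heapq.nlargest(k, (hit for part in partials for hit in part),
                          key=lambda hit: hit[1])

print(scatter_gather(SHARDS))  # → [('f', 0.95), ('a', 0.9)]
```

Because each node only returns its local top-k, the amount of data gathered is bounded by k times the node count, not by the data set size.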
Inference in Vespa
Tensor data model: multidimensional collections of numbers in queries, documents and models.
Tensor math expresses all common machine-learned models with join, map and reduce.
TensorFlow and ONNX integration: deploy TensorFlow and ONNX models (from scikit-learn, Caffe2, PyTorch etc.) directly on Vespa.
The Vespa execution engine is optimized for repeated execution of models over many data items, and for running many inferences in parallel.
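The join/map/reduce formalism can be illustrated with sparse tensors represented as plain address-to-value mappings. This pure-Python sketch (an illustration of the idea, not Vespa's engine) shows a linear model score, i.e. a dot product, as a join followed by a reduce:

```python
# Sparse tensors as {dimension-address: value} dicts (illustration only).

def t_map(tensor, fn):
    """map: apply fn to every cell value."""
    return {addr: fn(v) for addr, v in tensor.items()}

def t_join(a, b, fn):
    """join: combine cells with matching addresses using fn."""
    return {addr: fn(a[addr], b[addr]) for addr in a.keys() & b.keys()}

def t_reduce(tensor, fn, init=0.0):
    """reduce: aggregate all cell values into a single number."""
    acc = init
    for v in tensor.values():
        acc = fn(acc, v)
    return acc

# A linear model score is join (multiply) then reduce (sum):
weights  = {"x": 0.5, "y": 2.0, "z": -1.0}
features = {"x": 4.0, "y": 1.0}
score = t_reduce(t_join(weights, features, lambda w, x: w * x),
                 lambda a, b: a + b)
print(score)  # → 4.0
```

More elaborate models (e.g. a neural network layer) compose the same three primitives, which is why an engine optimized for them can serve many model types.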
Wrapping up
Making the best use of big data often implies making decisions in real time
Vespa is the only open source platform optimized for such big data serving
Available on https://vespa.ai
Quick start: Run a complete application (on a laptop or AWS) in 10 minutes:
http://docs.vespa.ai/documentation/vespa-quick-start.html
Tutorial: Make a scalable blog search and recommendation engine from scratch:
http://docs.vespa.ai/documentation/tutorials/blog-search.html

Introduction to Vespa – The Open Source Big Data Serving Engine, Jon Bratseth, Distinguished Architect, Oath
