Slim Baltagi, director of Enterprise Architecture at Capital One, gave a presentation at Hadoop Summit on major trends in big data analytics. He discussed 1) increasing portability between execution engines using Apache Beam, 2) the emergence of stream analytics driven by data streams, technology advances, business needs and consumer demands, 3) the growth of in-memory analytics using tools like Alluxio and RocksDB, 4) rapid application development using APIs, notebooks, GUIs and microservices, 5) open sourcing of machine learning systems by tech giants, and 6) hybrid cloud computing models for deploying big data applications both on-premise and in the cloud.
Sharing metadata across the data lake and streams (DataWorks Summit)
Traditionally systems have stored and managed their own metadata, just as they traditionally stored and managed their own data. A revolutionary feature of big data tools such as Apache Hadoop and Apache Kafka is the ability to store all data together, where users can bring the tools of their choice to process it.
Apache Hive's metastore can be used to share the metadata in the same way. It is already used by many SQL and SQL-like systems beyond Hive (e.g. Apache Spark, Presto, Apache Impala, and via HCatalog, Apache Pig). As data processing changes from only data in the cluster to include data in streams, the metastore needs to expand and grow to meet these use cases as well. There is work going on in the Hive community to separate out the metastore, so it can continue to serve Hive but also be used by a more diverse set of tools. This talk will discuss that work, with particular focus on adding support for storing schemas for Kafka messages.
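To make this concrete, here is a minimal sketch of how a non-Hive engine picks up a shared metastore, assuming a Spark deployment; the thrift URI is a placeholder for your own metastore host:

```python
# A minimal sketch: pointing SparkSession at an existing Hive metastore
# so tables defined by other tools become visible here.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("shared-metastore-example")
    .config("hive.metastore.uris", "thrift://metastore-host:9083")  # hypothetical host
    .enableHiveSupport()
    .getOrCreate()
)

# Tables registered by Hive (or any other metastore client) show up here.
spark.sql("SHOW TABLES").show()
```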
Speaker: Alan Gates, Co-Founder, Hortonworks
Boost Performance with Scala – Learn From Those Who’ve Done It! (Cécile Poyet)
Scalding is a Scala DSL for Cascading. Running on Hadoop, it offers a concise, functional, and very efficient way to build big data applications. One significant benefit of Scalding is that it allows easy porting of Scalding apps from MapReduce to newer, faster execution fabrics.
In this webinar, Cyrille Chépélov, of Transparency Rights Management, will share how his organization boosted the performance of their Scalding apps by over 50% by moving away from MapReduce to Cascading 3.0 on Apache Tez. Dhruv Kumar, Hortonworks Partner Solution Engineer, will then explain how you can interact with data on HDP using Scala and leverage Scala as a programming language to develop Big Data applications.
Integrating Apache Phoenix with Distributed Query Engines (DataWorks Summit)
This talk will describe the work being done to create connectors for Presto and Apache Spark to read and write data in Phoenix tables. We will describe the new Phoenix connector that implements Spark’s DataSource v2 API, which enables customizing and optimizing reads and writes to Phoenix tables.
We will also demo the Presto-Phoenix connector, showing how it can be used to federate multiple Phoenix clusters and join Phoenix data with different types of data sources.
We will also describe some in-progress work to integrate more tightly with the query optimizers of these frameworks, in order to provide table statistics and push down filters, limits and aggregates into Phoenix whenever possible to speed up query execution.
Another area of ongoing work is support for bulk loading using HFiles.
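As a rough illustration of connector usage (option names follow the phoenix-spark documentation as I understand it; the table name and ZooKeeper quorum are placeholders, and the connector jar must be on the Spark classpath):

```python
# Hedged sketch of reading a Phoenix table through the phoenix-spark connector.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("phoenix-read").getOrCreate()

df = (
    spark.read.format("phoenix")
    .option("table", "WEB_STAT")        # hypothetical Phoenix table
    .option("zkUrl", "zk-host:2181")    # hypothetical ZooKeeper quorum
    .load()
)
df.filter(df.DOMAIN == "example.com").show()
```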
Enabling Modern Application Architecture using Data.gov open government data (DataWorks Summit)
Big Data and the Internet of Things (IoT) have forced businesses and the Federal Government to reevaluate their existing data strategies and adopt a more modern data architecture. With the advent of the connected data platform, migrating or building data-driven applications that take advantage of data-in-motion and data-at-rest can be a daunting journey to undertake. Scaling, reusability, and achieving operational agility are just some of the common pitfalls associated with existing software architectures. How do we embrace this paradigm shift? Adopting agile methodologies and emerging development practices such as Microservices and DevOps offers greater agility and operational efficiency, enabling the government to rapidly build modern data-driven applications.
During this talk and demonstration, we will show how the federal government can unleash the true power of the connected data platform with modern data-driven applications.
Connected Data Platform:
• Hortonworks DataFlow
o Using Apache NiFi for capturing data at the edge of the data lake & managing the flow of data to the data platform
o Apache Storm for complex event processing and stream processing
• Hortonworks Data Platform
o Apache Accumulo for scalability and cell-level security
o Apache YARN for resource management
• Modern Data-Driven Applications
o Microservices: a software architecture practice for designing software applications as suites of independently deployable services, promoting componentization, single responsibility & scalability. Adopting a Microservices mindset enables the government to be technology agnostic: using the best tool or programming language for the job.
▪ Demoed REST APIs on top of Apache Accumulo (Spark-Java, AngularJS/TypeScript)
o DevOps: A culture and practice that breaks down the silos found between development and operations teams in traditional software practices.
▪ CI/CD pipelines, automated build kick-offs using containers (Docker, Jenkins)
This talk will lay out a basic environment for promoting greater agility and operational efficiency for the federal government while taking advantage of a connected data platform.
Bringing it All Together: Apache Metron (Incubating) as a Case Study of a Mod... (DataWorks Summit)
There have been many voices discussing how to architect streaming applications on Hadoop, but until now there have been very few worked examples in the open source. Apache Metron (Incubating) is a streaming advanced analytics cybersecurity application which utilizes the components within the Hadoop stack as its platform.
We will attempt to go beyond theoretical discussions of Kappa vs. Lambda architectures and describe the nuts and bolts of a streaming architecture that enables advanced analytics in Hadoop. We will discuss the componentry that we had to build and what we could reuse, why we made the architectural decisions that we made, and how the pieces knit together into a coherent application on top of many different Hadoop ecosystem projects.
We will also discuss the domain-specific language that we created out of necessity to provide a pluggable layer for user-defined enrichments, and how this helped make Metron less rigid and easier to use. We will also candidly discuss mistakes that we made early on.
Embeddable data transformation for real-time streams (Joey Echeverria)
Real-time stream analysis starts with ingesting raw data and extracting structured records. While stream-processing frameworks such as Apache Spark and Apache Storm provide primitives for processing individual records, processing windows of records, and grouping/joining records, common actions such as filtering, applying regular expressions to extract data, and converting records from one schema to another are left to developers writing business logic.
Joey Echeverria presents an alternative approach based on a reusable library that provides configuration-based data transformation. This allows users to write common data-transformation rules once and reuse them in multiple contexts. A common pattern is to consume a single raw stream and transform it using the same rules before storing it in different repositories, such as Apache Solr for search and Apache Hadoop HDFS for deep storage.
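A generic sketch of the idea (not the library presented in the talk): transformation rules live in configuration, so the same rules can run inside Spark, Storm, or a plain consumer loop.

```python
import re

# Rules are data, not code, so they can be written once and reused anywhere.
RULES = [
    {"action": "filter", "field": "status", "pattern": r"^(4|5)\d\d$"},
    {"action": "extract", "field": "url", "pattern": r"^/api/(?P<endpoint>\w+)"},
    {"action": "rename", "from": "ts", "to": "timestamp"},
]

def apply_rules(record, rules=RULES):
    for rule in rules:
        if rule["action"] == "filter":
            # Keep only records whose field matches the pattern.
            if not re.match(rule["pattern"], str(record.get(rule["field"], ""))):
                return None
        elif rule["action"] == "extract":
            # Pull named groups out of a field and merge them into the record.
            m = re.match(rule["pattern"], str(record.get(rule["field"], "")))
            if m:
                record.update(m.groupdict())
        elif rule["action"] == "rename":
            if rule["from"] in record:
                record[rule["to"]] = record.pop(rule["from"])
    return record

print(apply_rules({"status": "404", "url": "/api/users", "ts": 1}))
```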
Apache Apex brings you the power to quickly build and run big data batch and stream processing applications. But what about visualizing your data in real time as it flows through your Apache Apex applications? Together, we will review Apache Apex and how it integrates with Apache Hadoop and Apache Kafka to process your big data with streaming computation. Then we will explore the options available to visualize Apex application metrics and data, including open-source options like the REST and pub-sub mechanisms in STRAM, as well as features available in the RTS Console, like real-time dashboards and widgets. We will also look into ways of packaging dashboards inside your Apache Apex applications.
Innovation in the Enterprise Rent-A-Car Data Warehouse (DataWorks Summit)
Big Data adoption is a journey. Depending on the business, the process can take weeks, months, or even years. With any transformative technology, the challenges have less to do with the technology and more to do with how a company adapts itself to a new way of thinking about data. Building a Center of Excellence is one way for IT to help drive success.
This talk will explore Enterprise Holdings Inc. (which operates the Enterprise Rent-A-Car, National Car Rental and Alamo Rent A Car brands) and its experience with Big Data. EHI’s journey started in 2013 with Hadoop as a POC, and today the company is working to create its next-generation data warehouse in Microsoft’s Azure cloud utilizing a lambda architecture.
We’ll discuss the Center of Excellence, the roles in the new world, share the things which worked well, and rant about those which didn’t.
No deep Hadoop knowledge is necessary; the talk is aimed at the architect or executive level.
The Unbearable Lightness of Ephemeral Processing (DataWorks Summit)
Ephemeral clusters can be launched quickly (in minutes), are pre-configured for a specific processing purpose, and can be brought down as soon as their usefulness has expired. The ability to launch ephemeral clusters for on-demand processing, quickly and efficiently, is transforming how organizations design, deploy and manage applications. The velocity and elasticity of fast cluster deployment enable seamless peak-demand provisioning, enable cost optimization by leveraging significantly lower cloud spot pricing, and maximize utilization of existing compute capacity. Additionally, being able to launch bespoke clusters for specific compute needs in a repeatable fashion and within a shared infrastructure provides flexibility for special-purpose processing needs. Organizations can leverage ephemeral clusters for parallel compute-intensive applications which require short bursts of power but are short lived. In this session we will explore how to design ephemeral clusters, how to launch, modify and bring them down, as well as application design considerations to maximize their usability.
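One way to realize this pattern, sketched here with boto3 against Amazon EMR (cluster name, release label, roles and the S3 path are illustrative, and EMR is only one of several services supporting ephemeral clusters):

```python
# Hedged sketch: launch an ephemeral EMR cluster that runs one Spark step
# and terminates itself when the step finishes.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
response = emr.run_job_flow(
    Name="ephemeral-spark-job",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Tear the cluster down as soon as all steps complete.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "nightly-aggregation",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/aggregate.py"],  # hypothetical path
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```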
Druid and Hive Together: Use Cases and Best Practices (DataWorks Summit)
Two popular open source technologies, Druid and Apache Hive, are often mentioned as viable solutions for large-scale analytics. Hive works well for storing large volumes of data, although it is not optimized for ingesting streaming data and making it available for queries in real time. Druid, on the other hand, excels at low-latency, interactive queries over streaming data and makes data available in real time. Although the high-level messaging presented by both projects may lead you to believe they are competing for the same use case, the technologies are in fact highly complementary.
By combining the rich query capabilities of Hive with the powerful real-time streaming and indexing capabilities of Druid, we can build more powerful, flexible, and extremely low-latency real-time streaming analytics solutions. In this talk we will discuss the motivation to combine Hive and Druid, along with the benefits, use cases, best practices and benchmark numbers; a sketch of the integration follows the agenda below.
The agenda of the talk will be:
1. Motivation behind integrating Druid with Hive
2. Druid and Hive together - benefits
3. Use Cases with Demos and architecture discussion
4. Best Practices - Do's and Don'ts
5. Performance vs Cost Tradeoffs
6. SSB Benchmark Numbers
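As a sketch of the integration mentioned above (the storage handler class comes with Hive's Druid integration; host, tables and properties here are illustrative, submitted via PyHive for convenience):

```python
# Hedged sketch: create a Druid-backed table from Hive, so Druid indexes the
# data while Hive retains SQL access to it. Hive-Druid tables require the
# timestamp column to be named __time.
from pyhive import hive

conn = hive.connect(host="hiveserver2-host", port=10000)  # hypothetical host
cursor = conn.cursor()
cursor.execute("""
    CREATE TABLE druid_pageviews
    STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
    TBLPROPERTIES ("druid.segment.granularity" = "HOUR")
    AS
    SELECT CAST(event_time AS timestamp) AS `__time`, page, views
    FROM pageviews_raw
""")
```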
Overview of Apache Flink: The 4G of Big Data Analytics Frameworks (Slim Baltagi)
Slides of my talk at the Hadoop Summit Europe in Dublin, Ireland on April 13th, 2016. The talk introduces Apache Flink as both a multi-purpose Big Data analytics framework and a real-world streaming analytics framework. It focuses on Flink's key differentiators and suitability for streaming analytics use cases. It also shows how Flink enables novel use cases such as distributed CEP (Complex Event Processing) and querying state by behaving like a key-value data store.
ALT-F1.BE: The Accelerator (Google Cloud Platform) (Abdelkrim Boujraf)
The Accelerator is an IT infrastructure able to collect and analyze a massive amount of public data on the WWW. The Accelerator leverages the untapped potential of web data with the first solution designed for diverse sectors: completely scalable, available on-premise, and cloud-provider agnostic.
Memory Management in BigData: A Perpective View (ijtsrd)
The requirement to perform complicated statistical analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the widely used desktop software like R, Excel, Minitab and SPSS limits a researcher's ability to deal with big data, while big data analytic tools like IBM BigInsights, HP Vertica, SAP HANA and Pentaho come at an overpriced license. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to combine Apache Hadoop and R software to develop an analytic platform that stores big data (using open source Apache Hadoop) and performs statistical analysis (using open source R software). Due to the limitations of vertical scaling of a computer unit, data storage is handled by several machines, and so analysis becomes distributed over all these machines. Apache Hadoop is what comes in handy in this environment. To store the massive quantities of data required by researchers, we can use commodity hardware and perform analysis in a distributed environment.
Bhavna Bharti | Prof. Avinash Sharma, "Memory Management in BigData: A Perpective View", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-4, June 2018. URL: http://www.ijtsrd.com/papers/ijtsrd14436.pdf http://www.ijtsrd.com/engineering/computer-engineering/14436/memory-management-in-bigdata-a-perpective-view/bhavna-bharti
In today’s context, the big data market is rapidly undergoing shifts that signal market maturity, such as consolidation. Big data refers to large volumes of data, both structured and unstructured, that grow exponentially with time. As the data is too large and complex, traditional data management tools are not sufficient for storing or processing it efficiently. But analyzing big data is crucial to identify the patterns and trends to adopt to improve your business.
How do you analyze a Petabyte of data?
The Spark Python API, or PySpark, exposes the Spark programming model to Python. Apache® Spark™ is open source and one of the most popular Big Data frameworks for scaling out tasks across a cluster. It was developed to utilize distributed, in-memory data structures to improve data processing speeds for massive amounts of data.
We’ll also look into Spark SQL (Apache Spark’s module for working with structured data) and MLlib (Apache Spark’s scalable machine learning library).
What will you learn?
Perform Big Data analysis with PySpark
Run SQL queries against DataFrames using the Spark SQL module
Apply machine learning with the MLlib library (see the sketch below)
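Here is a minimal sketch covering all three pieces (PySpark DataFrames, Spark SQL, and MLlib); the data and column names are made up:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("pyspark-intro").getOrCreate()

# A tiny DataFrame standing in for a large distributed dataset.
df = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 3.0, 8.0), (3.0, 4.0, 11.0)],
    ["x1", "x2", "y"],
)

# Spark SQL over a registered temporary view.
df.createOrReplaceTempView("points")
spark.sql("SELECT AVG(y) AS avg_y FROM points").show()

# MLlib: assemble feature vectors and fit a linear model.
features = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
model = LinearRegression(featuresCol="features", labelCol="y").fit(
    features.transform(df)
)
print(model.coefficients)
```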
This introductory level talk is about Apache Flink: a multi-purpose Big Data analytics framework leading a movement towards the unification of batch and stream processing in the open source.
With the many technical innovations it brings, along with its unique vision and philosophy, it is considered the 4G (4th generation) of Big Data analytics frameworks, providing the only hybrid (real-time streaming + batch) open source distributed data processing engine supporting many use cases: batch, streaming, relational queries, machine learning and graph processing.
In this talk, you will learn about:
1. What is the Apache Flink stack and how does it fit into the Big Data ecosystem?
2. How does Apache Flink integrate with Hadoop and other open source tools for data input and output as well as deployment?
3. Why is Apache Flink an alternative to Apache Hadoop MapReduce, Apache Storm and Apache Spark?
4. Who is using Apache Flink?
5. Where to learn more about Apache Flink?
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben...) (confluent)
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich data flow of both client and backend data has been extended to service a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, allowing backend developers easier access to shared client-side data. We perform this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC-first architecture.
Things we’ll discuss about decoupling your system at scale via event-driven architectures include (see the sketch after this list):
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of our processes that allow non-programmers to write and deploy event-driven data flows.
– An end-to-end look at dynamic event processing that creates other stream processes, via a dynamic control-plane topology pattern and the broadcast state pattern.
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online! (and why this is not necessarily a “good” idea)
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark and friends to encourage the best design patterns for developers coming from traditional service oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams as well as the complementary tools in the Apache Ecosystem.
– The simplicity and power of streaming SQL with microservices
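As a stripped-down sketch of the last two points, using kafka-python rather than Kafka Streams, with a plain dict standing in for a RocksDB-backed state store and invented topic and broker names:

```python
# Sketch: consume events, keep state locally instead of making an RPC call to
# a microservice for every record, and emit derived events back to Kafka.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "user-events",                          # hypothetical topic
    bootstrap_servers="kafka-host:9092",    # hypothetical broker
    value_deserializer=lambda v: json.loads(v),
)
producer = KafkaProducer(
    bootstrap_servers="kafka-host:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

state = {}  # stand-in for a RocksDB state store

for msg in consumer:
    event = msg.value
    # Materialize per-user state from the stream itself, avoiding RPC fan-out.
    counts = state.setdefault(event["user_id"], {"views": 0})
    counts["views"] += 1
    if counts["views"] % 100 == 0:
        producer.send("user-milestones",
                      {"user_id": event["user_id"], "views": counts["views"]})
```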
In this slidecast, Jim Kaskade from Infochimps presents: Cloud for Big Data.
"Infochimps was founded by data scientists and cloud computing experts. Our solutions make it faster, easier and far less complex to build and manage Big Data systems behind applications to quickly deliver actionable insights. With Infochimps Cloud, enterprises benefit from the fastest way to deploy Big Data applications in complex, hybrid cloud environments."
Learn more at:
http://infochimps.com
View the presentation video:
http://inside-bigdata.com/slidecast-cloud-for-big-data/
Apache Flink: Real-World Use Cases for Streaming Analytics (Slim Baltagi)
This face-to-face talk about Apache Flink in Sao Paulo, Brazil is the first event of its kind in Latin America! It explains how Apache Flink 1.0, announced on March 8th, 2016 by the Apache Software Foundation, marks a new era of Big Data analytics and, in particular, real-time streaming analytics. The talk maps Flink's capabilities to real-world use cases that span multiple verticals such as: Financial Services, Healthcare, Advertisement, Oil and Gas, Retail and Telecommunications.
In this talk, you will learn more about:
1. What is Apache Flink Stack?
2. Batch vs. Streaming Analytics
3. Key Differentiators of Apache Flink for Streaming Analytics
4. Real-World Use Cases with Flink for Streaming Analytics
5. Who is using Flink?
6. Where do you go from here?
Career opportunities in open source framework (edunextgen)
EduNextgen, an extended arm of Product Innovation Academy, is a growing entity in education and career transformation, specializing in today’s most in-demand skills. It is a platform with blended learning programs supported by in-trend technology platforms for learning, engaging organizations on their learning development objectives. Training courses are designed and updated by renowned industry experts. Our blended learning approach combines online classes, instructor-led live virtual classrooms and virtual teaching assistance.
Similar to Analysis of Major Trends in Big Data Analytics
Many organizations are currently processing various types of data in different formats, and most often this data is in free form. As the consumers of this data grow, it’s imperative that this free-flowing data adhere to a schema: it helps data consumers form an expectation about the type of data they are getting, and it shields them from immediate impact if the upstream source changes its format. Having a uniform schema representation also gives the data pipeline an easy way to integrate and support various systems that use different data formats.
Schema Registry is a central repository for storing and evolving schemas. It provides an API and tooling to help developers and users register a schema and consume it without any impact when the schema changes. Users can tag different schemas and versions, register for notifications of schema changes with versions, etc.
In this talk, we will go through the need for a schema registry and schema evolution, and showcase the integration with Apache NiFi, Apache Kafka and Apache Storm.
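For a flavor of what registry interaction looks like, here is a hedged sketch against a Confluent-style Schema Registry REST API (the registry discussed in the talk exposes its own API; the host and subject below are placeholders):

```python
import json
import requests

REGISTRY = "http://schema-registry-host:8081"  # hypothetical host
HEADERS = {"Content-Type": "application/vnd.schemaregistry.v1+json"}

schema_v1 = {
    "type": "record",
    "name": "PageView",
    "fields": [{"name": "url", "type": "string"}],
}

# Register version 1 under a subject; the registry returns a schema id.
resp = requests.post(
    f"{REGISTRY}/subjects/pageviews-value/versions",
    data=json.dumps({"schema": json.dumps(schema_v1)}),
    headers=HEADERS,
)
print(resp.json())

# Consumers fetch the latest version at runtime, so a compatible schema
# change does not require redeploying them.
latest = requests.get(f"{REGISTRY}/subjects/pageviews-value/versions/latest")
print(latest.json()["schema"])
```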
There is increasing need for large-scale recommendation systems. Typical solutions rely on periodically retrained batch algorithms, but for massive amounts of data, training a new model can take hours. This is a problem when the model needs to be more up to date: for example, when recommending TV programs while they are being transmitted, the model should take into consideration users who are watching a program at that time.
The promise of online recommendation systems is fast adaptation to changes, but online machine learning from streams is commonly believed to be more restricted, and hence less accurate, than batch-trained models. Combining batch and online learning could lead to a quickly adapting recommendation system with increased accuracy. However, designing a scalable data system for uniting batch and online recommendation algorithms is a challenging task. In this talk we present our experiences in creating such a recommendation engine with Apache Flink and Apache Spark.
Deep learning is not just hype: it outperforms state-of-the-art ML algorithms, one by one. In this talk we will show how deep learning can be used for detecting anomalies on IoT sensor data streams at high speed using DeepLearning4J on top of different Big Data engines like Apache Spark and Apache Flink. Key in this talk is the absence of any large training corpus, since we are using unsupervised machine learning: a domain that current DL research treats step-motherly. As we can see in this demo, LSTM networks can learn very complex system behavior; in this case, data coming from a physical model simulating bearing vibration data. One drawback of deep learning is that normally a very large labeled training data set is required. This is particularly interesting since we can show how unsupervised machine learning can be used in conjunction with deep learning: no labeled data set is necessary. We are able to detect anomalies and predict breaking bearings with tenfold confidence. All examples and all code will be made publicly available and open sourced. Only open source components are used.
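The underlying recipe can be sketched outside DeepLearning4J too; here is an illustrative Keras version (not the talk's code) of an unsupervised LSTM autoencoder: train on normal windows only, then use reconstruction error as the anomaly score. Shapes and data are stand-ins.

```python
import numpy as np
from tensorflow import keras

window, features = 50, 3
normal = np.random.normal(size=(1000, window, features))  # stand-in sensor data

# Encoder compresses each window; decoder reconstructs it.
model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, features)),
    keras.layers.RepeatVector(window),
    keras.layers.LSTM(32, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(features)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(normal, normal, epochs=3, verbose=0)  # unsupervised: input == target

# Reconstruction error per window; unusually large values suggest anomalies.
err = np.mean((model.predict(normal) - normal) ** 2, axis=(1, 2))
print(np.percentile(err, 99))
```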
QE automation for large systems is a great step forward in increasing system reliability. In the big data world, multiple components have to come together to provide end users with business outcomes. This means that QE automation scenarios need to be detailed around actual use cases, cutting across components. The system tests potentially generate large amounts of data on a recurring basis, and verifying it is a tedious job. Given the multiple levels of indirection, false positives for actual defects are common, and generally wasteful.
At Hortonworks, we’ve designed and implemented an automated log analysis system, Mool, using statistical data science and ML. The current work in progress has a batch data pipeline followed by an ensemble ML pipeline that feeds into the recommendation engine. The system identifies the root cause of test failures by correlating the failing test cases with current and historical error records, across multiple components. The system works in unsupervised mode, with no perfect model, stable build, or source-code version to refer to. In addition, the system provides limited recommendations to file or reopen past tickets, and compares run profiles with past runs.
Improving business performance is never easy! The Natixis Pack is like Rugby. Working together is key to scrum success. Our data journey would undoubtedly have been so much more difficult if we had not made the move together.
This session is the story of how ‘The Natixis Pack’ has driven change in its current IT architecture so that legacy systems can leverage some of the many components in Hortonworks Data Platform in order to improve the performance of business applications. During this session, you will hear:
• How and why the business and IT requirements originated
• How we leverage the platform to fulfill security and production requirements
• How we organize a community to:
o Guard all the players, no one gets left on the ground!
o Use the platform appropriately (not every problem is eligible for Big Data, and standard databases are not dead)
• What are the most usable, the most interesting and the most promising technologies in the Apache Hadoop community
We will finish the story of a successful rugby team with insight into the special skills needed from each player to win the match!
DETAILS
This session is part business, part technical. We will talk about infrastructure, security and project management as well as the industrial usage of Hive, HBase, Kafka, and Spark within an industrial Corporate and Investment Bank environment, framed by regulatory constraints.
HBase has established itself as the backend for many operational and interactive use cases, powering well-known services that support millions of users and thousands of concurrent requests. In terms of features, HBase has come a long way, offering advanced options such as multi-level caching on- and off-heap, pluggable request handling, fast recovery options such as region replicas, table snapshots for data governance, tuneable write-ahead logging, and so on. This talk is based on the research for the upcoming second edition of the speaker's HBase book, correlated with practical experience in medium to large HBase projects around the world. You will learn how to plan for HBase, starting with the selection of matching use cases, to determining the number of servers needed, leading into performance tuning options. There is no reason to be afraid of using HBase, but knowing its basic premises and technical choices will make using it much more successful. You will also learn about many of the new features of HBase up to version 1.3, and where they are applicable.
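For readers coming from scripting languages, a hedged sketch of the interactive use case via happybase, a Thrift-based Python client (the talk itself centers on the native Java API; host and table names are placeholders):

```python
import happybase

# Requires an HBase Thrift server; the host is hypothetical.
conn = happybase.Connection("hbase-thrift-host")
table = conn.table("metrics")

# Low-latency writes and point reads are the interactive workloads discussed.
table.put(b"row-001", {b"d:temperature": b"21.5"})
print(table.row(b"row-001"))
```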
There has been an explosion of data digitising our physical world, from cameras, environmental sensors and embedded devices, right down to the phones in our pockets. This means that companies now have new ways to transform their businesses, both operationally and through their products and services, by leveraging this data and applying fresh analytical techniques to make sense of it. But are they ready? The answer is “no” in most cases.
In this session, we’ll be discussing the challenges facing companies trying to embrace the Analytics of Things, and how Teradata has helped customers work through and turn those challenges to their advantage.
In this talk, we will present a new distribution of Hadoop, Hops, that can scale the Hadoop Filesystem (HDFS) by 16X, from 70K ops/s to 1.2 million ops/s on Spotify's industrial Hadoop workload. Hops is an open-source distribution of Apache Hadoop that supports distributed metadata for HDFS (HopsFS) and the ResourceManager in Apache YARN. HopsFS is the first production-grade distributed hierarchical filesystem to store its metadata normalized in an in-memory, shared-nothing database. For YARN, we will discuss optimizations that enable 2X throughput increases for the Capacity Scheduler, enabling scalability to clusters with >20K nodes. We will discuss the journey of how we reached this milestone, covering some of the challenges involved in efficiently and safely mapping hierarchical filesystem metadata state and operations onto a shared-nothing, in-memory database. We will also discuss the key database features needed for extreme scaling, such as multi-partition transactions, partition-pruned index scans, distribution-aware transactions, and the streaming changelog API. Hops (www.hops.io) is Apache-licensed open source and supports a pluggable database backend for distributed metadata, although it currently only supports MySQL Cluster as a backend. Hops opens up the potential for new directions for Hadoop when metadata is available for tinkering in a mature relational database.
In high-risk manufacturing industries, regulatory bodies stipulate continuous monitoring and documentation of critical product attributes and process parameters. On the other hand, sensor data coming from production processes can be used to gain deeper insights into optimization potentials. By establishing a central production data lake based on Hadoop and using Talend Data Fabric as a basis for a unified architecture, the German pharmaceutical company HERMES Arzneimittel was able to cater to compliance requirements as well as unlock new business opportunities, enabling use cases like predictive maintenance, predictive quality assurance or open world analytics. Learn how the Talend Data Fabric enabled HERMES Arzneimittel to become data-driven and transform Big Data projects from challenging, hard to maintain hand-coding jobs to repeatable, future-proof integration designs.
Talend Data Fabric combines Talend products into a common set of powerful, easy-to-use tools for any integration style: real-time or batch, big data or master data management, on-premises or in the cloud.
While you could be tempted to assume data is already safe in a single Hadoop cluster, in practice you have to plan for more. Questions like “What happens if the entire datacenter fails?” or “How do I recover into a consistent state of data, so that applications can continue to run?” are not at all trivial to answer for Hadoop. Did you know that HDFS snapshots do not treat open files as immutable? Or that HBase snapshots are executed asynchronously across servers and therefore cannot guarantee atomicity for cross-region updates (which includes tables)? There is no unified and coherent data backup strategy, nor is there tooling available for many of the included components to build such a strategy. The Hadoop distributions largely avoid this topic, as most customers are still in the “single use-case” or PoC phase, where data governance as far as backup and disaster recovery (BDR) is concerned is not (yet) important. This talk first introduces you to the overarching issue and difficulties of backup and data safety, looking at each of the many components in Hadoop, including HDFS, HBase, YARN, Oozie, the management components and so on, to finally show you a viable approach using built-in tools. You will also learn not to take this topic lightheartedly and what is needed to implement and guarantee the continuous operation of Hadoop cluster-based solutions.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security as an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Analysis of Major Trends in Big Data Analytics
1. Hadoop Summit, San Jose, California, June 28th 2016
Analysis of Major Trends in Big Data Analytics
Slim Baltagi, Director, Enterprise Architecture, Capital One Financial Corporation
2. Welcome!
About me:
• I’m currently director of Enterprise Architecture at Capital One: a top 10 US financial corporation based in McLean, VA.
• I have over 20 years of IT experience.
• I have over 7 years of Big Data experience: engineer, architect, evangelist, blogger, thought leader, speaker, organizer of Apache Flink meetups in many countries, and creator and maintainer of the Big Data Knowledge Base: http://SparkBigData.com, with over 7,000 categorized web resources about Hadoop, Spark, Flink, …
Thanks: This talk won the community vote of the ‘Future of Apache Hadoop’ track. Thanks to all of you who voted for this talk, are attending it now, or are reading these slides.
Disclaimer: This is a vendor-independent talk that expresses my own opinions. I am not endorsing nor promoting any product or vendor mentioned in this talk.
3. Agenda
1. Portability between Big Data Execution Engines
2. Emergence of stream analytics
3. In-Memory analytics
4. Rapid Application Development of Big Data applications
5. Open sourcing Machine Learning systems by tech giants
6. Hybrid Cloud Computing
4. What is a typical Big Data Analytics Stack: Hadoop, Spark, Flink, …?
5. 1. Portability between Big Data Execution Engines
If you have an existing Big Data application based on MapReduce and you want to benefit from a different execution engine such as Tez, Spark or Flink, you might need to:
• Reuse some of your existing code, such as mapper and reduce functions.
• Leverage a ‘compatibility layer’ to run your existing Big Data application on the new engine. Example: the Hadoop compatibility layer from Flink.
• Switch to a different engine if the tool you used supports it. Examples: Hive/Pig on Tez, Hive/Pig on Spark, Sqoop on Spark, Cascading on Flink.
• Rewrite your Big Data application!
6. 1. Portability between Big Data Execution Engines
Apache Beam (unified Batch and Stream processing) is a new Apache incubator project based on years of experience developing Big Data infrastructure (MapReduce, FlumeJava, MillWheel) within Google: http://beam.incubator.apache.org/
Apache Beam provides a unified API for Batch and Stream processing and also multiple runners. Beam programs become portable across multiple runtime environments, both proprietary (e.g., Google Cloud Dataflow) and open-source (e.g., Flink, Spark).
Apache Beam web resources: http://sparkbigdata.com/component/tags/tag/67
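A minimal Beam pipeline (Python SDK) makes the portability point concrete: the same code runs on the DirectRunner locally, or on Flink, Spark, or Dataflow by swapping the runner. A sketch, not production code:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Switch engines by changing the runner, e.g. "FlinkRunner" or "SparkRunner".
opts = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=opts) as p:
    (
        p
        | beam.Create(["to be or not to be"])
        | beam.FlatMap(str.split)                 # tokenize into words
        | beam.combiners.Count.PerElement()       # (word, count) pairs
        | beam.Map(print)
    )
```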
7. Agenda
1. Portability between Big Data Execution Engines
2. Emergence of stream analytics
3. In-Memory analytics
4. Rapid Application Development of Big Data applications
5. Open sourcing Machine Learning systems by tech giants
6. Hybrid Cloud Computing
8. 2. Emergence of stream analytics
Stonebraker et al. predicted in 2005 that stream processing was going to become increasingly important, and attributed this to the ‘sensorization’ of the real world: ‘everything of material significance on the planet gets sensor-tagged and reports its state or location in real time’. http://cs.brown.edu/~ugur/8rulesSigRec.pdf
I think stream processing is becoming important not only because of this sensorization of the real world but also because of the following factors:
1. Data streams
2. Technology
3. Business
4. Consumers
9. 2. Emergence of stream analytics
[Diagram: the four drivers behind the emergence of stream analytics: (1) Data Streams, (2) Technology, (3) Business, (4) Consumers]
10. 2. Emergence of stream analytics
1. Data Streams
Real-world data is available as series of events that are continuously produced by a variety of applications and disparate systems inside and outside the enterprise. Examples:
• Sensor networks data
• Web logs
• Database transactions
• System logs
• Tweets and social media data
• Click streams
• Mobile apps data
11. 2. Emergence of stream analytics
2 Technology
Simplified data architecture with Apache Kafka as a
major innovation and backbone of stream
architectures.
Rapidly maturing open source stream analytics tools:
Apache Flink, Apache Apex, Spark Streaming, Kafka Streams,
Apache Samza, Apache Storm, Apache Gearpump, Heron, …
Cloud services for stream processing: Google Cloud
Dataflow, Microsoft’s Azure Stream Analytics, Amazon Kinesis
Streams, IBM InfoSphere Streams, …
Vendors innovating in this space: Confluent, Data
Artisans, Databricks, MapR, Hortonworks, StreamSets, …
More mobile devices than human beings!
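As an illustration of how approachable these tools have become, here is a minimal Kafka Streams sketch in Java, assuming a recent Kafka Streams API; the topic names "events" and "event-counts" are hypothetical. It maintains a continuously updated count of events per key, with local state managed by the library.

```java
// Minimal Kafka Streams sketch (hypothetical topics; recent API assumed).
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EventCountApp {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-count-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> events = builder.stream("events");

    // Count events per key and publish the running totals downstream.
    events.groupByKey()
          .count()
          .toStream()
          .mapValues(count -> String.valueOf(count))
          .to("event-counts");

    new KafkaStreams(builder.build(), props).start();
  }
}
```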
12. 2. Emergence of stream analytics
3 Business
Challenges:
• Lag between data creation and actionable insights.
• Infrastructure that sits idle most of the time.
• Web and mobile application growth, and new types and
sources of data.
• The need for organizations to shift from a reactive to
a more proactive approach in their interactions with
customers, suppliers and employees.
13. 2. Emergence of stream analytics
3 Business
Opportunities:
• Embracing stream analytics helps organizations achieve
faster time to insight, competitive advantage and
operational efficiency in a wide range of verticals.
• With stream analytics, new startups are (or soon will
be) challenging established companies. Example:
Pay-As-You-Go or Usage-Based auto insurance.
• Speed is said to have become the new currency of
business.
14. 2. Emergence of stream analytics
4 Consumers
Consumers expect everything to be online and
immediately accessible through mobile
applications.
Mobile, always-on consumers increasingly demand
instant responses from enterprise applications, just as
they are used to from social networks such as Twitter,
Facebook and LinkedIn.
A younger generation who grew up with video gaming,
and who are accustomed to real-time interaction, are
now themselves a growing class of consumers.
15. 2. Emergence of stream analytics
Verticals where stream analytics is being applied:
Financial services
Telecommunications
Online gaming systems
Security & Intelligence
Advertisement serving
Sensor Networks
Social Media
Healthcare
Oil & Gas
Retail & eCommerce
Transportation and logistics
16. 2. Emergence of stream analytics
[Architecture diagram: end-to-end stream analytics solution architecture. Sourcing & Integration: apps, sensors, devices and other sources feed an event collector & broker. Analytics & Processing: a stream processor, a data lake, and advanced analytics & machine learning. Serving & Consuming: real-time notifications, real-time decisions, dashboards, business system backends, business applications (e.g. an enterprise command center) and personal mobile applications.]
17. Agenda
1. Portability between Big Data Execution
Engines
2. Emergence of stream analytics
3. In-Memory analytics
4. Rapid Application Development of Big Data
applications
5. Open sourcing Machine Learning systems by
tech giants
6. Hybrid Cloud Computing
18. 3. In-Memory Analytics
While in-memory analytics is not new, the trend is that
it is the focus of renewed attention thanks to:
• the availability of memory that can easily fit
most active data sets
• maturing or newly available open source in-memory
tools in many categories, such as:
Memory-centric distributed file systems
Columnar data formats
Key-value data stores
IMDG: In-Memory Data Grids
Distributed caches
Very large hashmaps
In the next couple of slides, I will share a few examples.
19. 3. In-Memory Analytics
Alluxio http://alluxio.org (formerly known as Tachyon) is
an open source, memory-speed, virtual distributed
storage system. Examples of its usage patterns:
• Accelerating Big Data analytics workloads by
prefetching views and creating caches on demand.
• Sharing data between applications by writing to
Alluxio’s in-memory data store and reading it back at
far greater speed.
RocksDB https://github.com/facebook/rocksdb/ is an open
source library from Facebook that provides an
embeddable, persistent key-value store. It is suited to
fast storage on RAM and flash drives. It is used
as a state backend by Samza, Flink, Kafka Streams, …
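To show what "embeddable" means in practice, here is a minimal sketch using RocksJava, the RocksDB Java binding (a recent version is assumed; the path and keys are illustrative). This is the same pattern stream processors use to keep operator state local, fast and persistent.

```java
// Minimal RocksDB (RocksJava) sketch: an embedded, persistent key-value
// store opened in-process, no server required. Path and keys are illustrative.
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDbExample {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary(); // load the native library once per process
    try (Options options = new Options().setCreateIfMissing(true);
         RocksDB db = RocksDB.open(options, "/tmp/rocksdb-demo")) {
      db.put("user:42".getBytes(), "last-event=login".getBytes());
      byte[] value = db.get("user:42".getBytes());
      System.out.println(new String(value)); // prints last-event=login
    }
  }
}
```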
20. 3. In-Memory Analytics
Apache Arrow (http://arrow.apache.org/) for columnar in-
memory analytics.
• Apache Arrow enables execution engines to take
advantage of the latest SIMD (Single Instruction,
Multiple Data) operations included in modern
processors, for native vectorized optimization of
analytical data processing.
• The columnar layout of data also allows for better use
of CPU caches by placing all data relevant to a column
operation in as compact a format as possible.
• Another Apache Arrow advantage is that systems
utilizing it as a common memory format have no
overhead for cross-system data communication and can
also share functionality.
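A minimal sketch of the columnar idea with Arrow's Java API follows (a recent Arrow version is assumed; the column name and values are illustrative). All values of a column sit contiguously in off-heap buffers, which is what enables vectorized processing and zero-copy sharing between systems.

```java
// Minimal Apache Arrow sketch: an in-memory columnar vector of ints
// (illustrative values; recent Arrow Java API assumed).
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.IntVector;

public class ArrowColumnExample {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
         IntVector prices = new IntVector("prices", allocator)) {
      prices.allocateNew(3); // reserve space for three values
      prices.set(0, 100);
      prices.set(1, 250);
      prices.set(2, 75);
      prices.setValueCount(3);

      // Scanning the column touches one contiguous buffer, not scattered rows.
      long sum = 0;
      for (int i = 0; i < prices.getValueCount(); i++) {
        sum += prices.get(i);
      }
      System.out.println("sum = " + sum); // prints sum = 425
    }
  }
}
```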
21. Agenda
1. Portability between Big Data Execution
Engines
2. Emergence of stream analytics frameworks
3. In-Memory analytics
4. Rapid Application Development of Big Data
applications
5. Open sourcing Machine Learning systems by
tech giants
6. Deployment of Big Data applications in a
hybrid model: on-premise and on the cloud
22. 4. Rapid Application Development of Big Data applications
[Diagram: four enablers of rapid application development of Big Data analytics — 1. APIs, 2. Notebooks/Shells, 3. GUIs, 4. Microservices.]
23. 4. Rapid Application Development of Big
Data applications
1 APIs
Apache Spark and Apache Flink provide high-level,
easy-to-use APIs compared to Hadoop MapReduce.
Apache Beam is a new open source project from
Google that attempts to unify data processing
frameworks with a core API, allowing easy portability
between execution engines.
Use Apache Beam’s unified API for batch and streaming,
then run on a local runner, Apache Spark, Apache
Flink, …
The biggest advantage is in developer productivity and
ease of migration between processing engines.
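To make the productivity gap concrete, here is a word count sketched with Spark's Java API (Spark 2.x assumed; file paths are illustrative): a handful of chained operations replaces the mapper, reducer and driver boilerplate of a hand-written MapReduce job.

```java
// Minimal Spark word-count sketch (Spark 2.x Java API assumed; paths illustrative).
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("wordcount").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    sc.textFile("input.txt")
      .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
      .mapToPair(word -> new Tuple2<>(word, 1))
      .reduceByKey(Integer::sum)
      .saveAsTextFile("counts");

    sc.stop();
  }
}
```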
24. 4. Rapid Application Development of Big
Data applications
2 Shells or Notebooks
Shells:
• REPL (Read-Evaluate-Print Loop) interpreter
• Interactive queries
• Explore data quickly
• Sketch out your ideas in the shell to make sure you’ve
got your code right before deploying it to a cluster.
Notebooks:
• Web-based interactive computation environment
• Collaborative data analytics and visualization tool
• Combines rich text, executable code, plots and rich
media
• Exploratory data science
• Saving and replaying of written code
25. 4. Rapid Application Development of Big
Data applications
2 Shells or Notebooks: Apache Zeppelin
[Screenshot: an Apache Zeppelin notebook]
26. 4. Rapid Application Development of Big
Data applications
3 GUIs: Apache NiFi
[Screenshot: the Apache NiFi dataflow GUI]
27. 4. Rapid Application Development of Big
Data applications
4 Microservices
Microservices are an important trend in building larger
systems by:
• decomposing their functions into relatively simple,
single-purpose services
• that communicate asynchronously, via Apache
Kafka as a message-passing technology, which avoids
unwanted dependencies between these services.
This streaming architectural style provides agility,
as microservices can be built and maintained by
small, cross-functional teams.
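A minimal sketch of this style, with hypothetical service and topic names: an order service publishes an event to Kafka and any number of downstream services (billing, shipping, analytics) consume it independently, with no direct calls between them.

```java
// Minimal microservice-to-Kafka sketch (hypothetical "orders" topic and payload).
// Downstream services subscribe to the topic; the producer never calls them directly.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderService {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // Publish the fact "order 1001 was placed" as an event, keyed by order id.
      producer.send(new ProducerRecord<>("orders", "1001", "{\"status\":\"placed\"}"));
    }
  }
}
```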
28. Agenda
1. Portability between Big Data Execution
Engines
2. Emergence of stream analytics frameworks
3. In-Memory analytics
4. Rapid Application Development of Big Data
applications
5. Open sourcing Machine Learning systems by
tech giants
6. Hybrid Cloud Computing
29. 5. Open sourcing Machine Learning systems
by tech giants
[Diagram: machine learning systems open sourced by tech giants — 1. Torch (Facebook), 2. SystemML (IBM), 3. TensorFlow (Google), 4. DMTK (Microsoft), 5. CaffeOnSpark (Yahoo), 6. DSSTNE (Amazon).]
30. 5. Open sourcing Machine Learning systems
by tech giants
1 Torch http://torch.ch/ is an open source
Machine Learning library which provides a
wide range of deep learning algorithms.
Facebook donated its optimized deep learning modules to
the Torch project on January 16, 2015.
2 Apache SystemML http://systemml.apache.org/
is a distributed and declarative machine learning platform.
It was created in 2010 by IBM and donated as an open
source Apache project on November 2nd, 2015.
3 TensorFlow https://www.tensorflow.org is an open source
machine learning library created by Google. It was released
under the Apache 2.0 open source license on November 9th,
2015.
31. 5. Open sourcing Machine Learning
systems by tech giants
4 DMTK (Distributed Machine Learning Toolkit) allows
models to be trained on multiple nodes at once.
http://www.dmtk.io/ DMTK was open sourced
by Microsoft on November 12, 2015.
5 CaffeOnSpark https://github.com/yahoo/CaffeOnSpark is an
open source machine learning library created by Yahoo. It
was open sourced on February 24th, 2016.
6 DSSTNE (Deep Scalable Sparse Tensor Network
Engine, “Destiny”) is an Amazon-developed library for
building Deep Learning (DL) Machine Learning (ML)
models. It was open sourced on May 11th, 2016.
https://github.com/amznlabs/amazon-dsstne
32. 5. Open sourcing Machine Learning
systems by tech giants
Wider adoption of machine learning tools by companies
beyond these tech giants is expected, much as
MapReduce and Hadoop helped make “Big Data” part of
just about every company’s strategy!
These tech giants are not keeping their machine
learning systems for internal use only; they are
racing to open source them, attract users and
committers, and advance the entire industry.
This, combined with deployment on commodity clusters,
will accelerate adoption; as a result, we will see
new machine learning use cases, especially ones building
on deep learning, that will transform multiple industries.
33. Agenda
1. Portability between Big Data Execution
Engines
2. Emergence of stream analytics frameworks
3. In-Memory analytics
4. Rapid Application Development of Big Data
applications
5. Open sourcing Machine Learning systems by
tech giants
6. Hybrid Cloud Computing
34. 6. Hybrid Cloud Computing
Deployment of Big Data applications in a hybrid
model: on-premise and in the cloud.
Cloud is becoming mainstream, and the software stack
is adapting.
Big Data applications will eventually all move to the
cloud to benefit from agility, elasticity and on-demand
computing!
Meanwhile, companies need to advance their strategy
for hybrid integration between cloud and on-premise
deployments.
35. 6. Hybrid Cloud Computing
The following are a few patterns for such hybrid
integration:
1. Replicating data from SaaS apps to existing on-
premise databases, so the data can be used by other
on-premise applications, such as analytics applications.
2. Integrating SaaS applications themselves with on-
premise applications.
3. Hybrid data warehousing with the cloud: moving data
from an on-premise data warehouse to the cloud.
4. Real-time analytics on streaming data: depending on
your use case, you might keep your stream analytics
infrastructure directly accessible on-premise for low
latency.
36. Key Takeaways
1. Adopt Apache Beam for easier development and
portability between Big Data Execution Engines
2. Adopt stream analytics for faster time to insight,
competitive advantages and operational efficiency
3. Accelerate your Big Data applications with In-Memory
open source tools
4. Adopt Rapid Application Development of Big Data
applications: APIs, Notebooks, GUIs, Microservices…
5. Make Machine Learning part of your strategy, or
passively watch your industry be completely
transformed!
6. Advance your strategy for hybrid integration
between cloud and on-premise deployments.
37. Thanks!
To all of you for attending!
Any questions?
Let’s keep in touch!
• sbaltagi@gmail.com
• @SlimBaltagi
• https://www.linkedin.com/in/slimbaltagi