All modern Big Data solutions, such as Hadoop, Kafka and the rest of the ecosystem tools, are designed as distributed systems and as such include some form of redundancy for High Availability.
https://www.bigdataspain.org/2017/talk/disaster-recovery-for-big-data
Big Data Spain 2017
November 16th - 17th Kinépolis Madrid
3. About us
- We are nerds!
- Started working in Big Data for international companies
- Founded a start-up a few years ago, with colleagues working in related technical areas who also knew the business side
- We’ve been participating in different Big Data projects
4. Introduction
“I already have HDFS replication and High Availability in my services, why would I need Disaster Recovery (or backup)?”
5. Concepts: High Availability (HA)
- Protects from failing components: disks, servers, network
- Is generally a “systems” issue
- Relies on redundant, duplicated components
- Generally has strict network requirements
- Fully automated, immediate
6. Concepts: Backup
- Allows you to go back to a previous state in time: daily, monthly, etc.
- It is a “data” issue
- Protects from accidental deletion or modification
- Also used to check for unwanted modifications
- Takes some time to restore
7. Concepts: Disaster Recovery
- Allows you to work elsewhere
- It is a “business” issue
- Covers you against main-site failures such as electric power or network outages, fires, floods or building damage
- Similar to having insurance
- Medium time to be back online
8. The ideal Disaster Recovery
- High Availability for datacenters
- An exact duplicate of the main site
- Seamless operation (no changes required)
- Same performance
- Same data
- This is often very expensive and sometimes downright impossible
9. DR considerations
So, can we build a cheap(ish) DR? We must evaluate some tradeoffs:
- What’s the cost of the service not being available? (Murphy’s Law: accidents will happen when you are busiest)
- Is all information equally important? Can we lose a small amount of data?
- Can we wait until we recover certain data from backup?
- Can we find other uses for the DR site?
11. DR considerations: Synchronous vs Asynchronous
- Synchronous replication requires a FAST connection
- Synchronous works at transaction level and is necessary for operational systems
- Asynchronous replication converges over time
- Asynchronous is not affected by delays, nor does it create them
12. Big Data DR
- Data generally can’t be copied synchronously
- No VM replication
- Other DR rules still apply:
  - Since a failover impacts users, someone is in charge of the “starting gun”
  - DNS and network changes to point clients to the DR site
- Main types:
  - Storage replication
  - Dual ingestion
13. Storage replication
- Similar to non-Big Data solutions, where central storage is replicated
- Generally implemented using distcp and HDFS snapshots (see the sketch below)
- Data is ingested in the source cluster and then copied
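The usual pattern is to freeze a consistent view with an HDFS snapshot and then run distcp against it, optionally using the snapshot diff so that only changes since the previous run are copied. Below is a minimal sketch of that loop, assuming the directories are already snapshottable (hdfs dfsadmin -allowSnapshot) and that the very first run has already done a full copy; the paths, NameNode addresses and snapshot names are purely illustrative.

```python
import subprocess
from datetime import date

SRC = "hdfs://prod-nn:8020/data/events"   # hypothetical source directory
DST = "hdfs://dr-nn:8020/data/events"     # hypothetical DR directory

def run(cmd):
    """Run a CLI command and fail loudly if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

prev = "s_2017-11-15"                     # yesterday's snapshot name (illustrative)
curr = f"s_{date.today().isoformat()}"

# 1. Freeze a consistent view of the source directory.
run(["hdfs", "dfs", "-createSnapshot", SRC, curr])

# 2. Copy only what changed between the two snapshots to the DR cluster.
#    -diff requires that the destination also holds the 'prev' snapshot and
#    has not been modified since; otherwise fall back to a plain -update copy.
run(["hadoop", "distcp", "-update", "-diff", prev, curr, SRC, DST])

# 3. Snapshot the destination too, so the next incremental run (and rollback) works.
run(["hdfs", "dfs", "-createSnapshot", DST, curr])
```

In practice this is wrapped in a scheduler (cron, Oozie, etc.) and old snapshots are pruned once they fall out of the retention window.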
15. Dual Ingestion
- No files, just streams
- Generally ingested from multiple outside sources through Kafka
- Streams must be directed to both sites
16. Dual Ingestion
- Adds complexity to apps (see the producer sketch below)
- NiFi can be set up as a front-end to both endpoints
- Data consistency must be checked; the check can be automated via monitoring
- Consolidation processes (such as a monthly re-sync) might be needed
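The application-side complexity boils down to writing every event to two clusters and tolerating a failure on either one, which is exactly why the periodic consistency check and re-sync are needed. A minimal sketch follows, using the kafka-python client (an assumed dependency; the deck mentions NiFi as an alternative front-end), with broker addresses, topic and payload purely illustrative.

```python
from kafka import KafkaProducer  # assumes the kafka-python package is installed

# One producer per site; addresses are illustrative.
main_site = KafkaProducer(bootstrap_servers="kafka-main:9092")
dr_site = KafkaProducer(bootstrap_servers="kafka-dr:9092")

def send_to_both(topic: str, payload: bytes) -> None:
    """Write the same event to the main and DR clusters.

    A delivery failure on one site is logged but does not block the other;
    the resulting drift is what the monitoring / monthly re-sync reconciles.
    """
    for name, producer in (("main", main_site), ("dr", dr_site)):
        try:
            producer.send(topic, payload).get(timeout=10)  # wait for the broker ack
        except Exception as exc:
            print(f"WARN: delivery to {name} site failed: {exc}")

send_to_both("events", b'{"user": 42, "action": "click"}')
```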
17. Others
Ingestion replication:
- A variant of dual ingestion
- A consumer is set up on the source Kafka that in turn writes to a destination Kafka (see the relay sketch below)
- Becomes a bottleneck if the initial streams were generated by many producers
Mixed:
- The previous solutions are not mutually exclusive
- Storage replication for batch processes’ results
- Dual ingestion for streams
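Conceptually, ingestion replication is a consume-and-republish relay between the two clusters (this is the same idea behind tools like Kafka MirrorMaker, which you would normally use instead of hand-rolling it). A minimal sketch of the relay loop, again assuming kafka-python and illustrative addresses; note that funnelling everything through one such loop is the bottleneck the slide warns about.

```python
from kafka import KafkaConsumer, KafkaProducer  # assumes kafka-python is installed

# Consume from the source (main site) cluster...
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="kafka-main:9092",
    group_id="dr-replicator",
    enable_auto_commit=False,   # commit offsets only after the copy is acknowledged
)
# ...and re-publish to the destination (DR) cluster.
producer = KafkaProducer(bootstrap_servers="kafka-dr:9092")

for record in consumer:
    # Preserve key and value so partitioning behaves the same on the DR side.
    producer.send(record.topic, key=record.key, value=record.value).get(timeout=10)
    consumer.commit()
```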
18. Commercial offerings
Solutions that ease DR setup:
- Cloudera BDR: coordinates HDFS snapshots and copies
- WANdisco Fusion: continuous storage replication
- Confluent Multi-site: allows multi-site Kafka data replication
19. Tips
- Big Data clusters have many nodes, which are costly to replicate
- Performance / capacity tradeoff: we can use cheaper servers in the DR site, since we don’t expect to use them often
20. Tips
- Document and test procedures
- DR is rarely fully automated, so responsibilities and actions should be clearly defined
- Plan for (at least) a yearly DR run
- Track changes in software and configuration
21. Tips
- Once you have a DR solution, other uses will surface
- The DR site can be used for backup: maintain HDFS snapshots (see the rotation sketch below)
- DR data can be used for testing / reporting
  - Warning: this may alter the stored data
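If the DR copy doubles as a backup, point-in-time protection can be kept with a simple snapshot rotation on the DR cluster: take a dated snapshot after each sync and drop the one that falls out of the retention window. A minimal sketch; the path, snapshot naming and retention period are illustrative assumptions.

```python
import subprocess
from datetime import date, timedelta

DR_DIR = "hdfs://dr-nn:8020/data/events"  # hypothetical DR directory
RETENTION_DAYS = 30

today = date.today()

# Take today's backup snapshot on the DR cluster.
subprocess.run(
    ["hdfs", "dfs", "-createSnapshot", DR_DIR, f"backup_{today.isoformat()}"],
    check=True,
)

# Drop the snapshot that has just aged out of the retention window.
expired = today - timedelta(days=RETENTION_DAYS)
subprocess.run(
    ["hdfs", "dfs", "-deleteSnapshot", DR_DIR, f"backup_{expired.isoformat()}"],
    check=False,  # it may not exist yet; ignore the error in this sketch
)
```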
22. Conclusions
- Balance HA / Backup / DR as needed; they are not exclusive:
  - Different costs
  - Different impact
- Big Data DR is different:
  - Dedicated hardware
  - No VMs, no storage arrays (SAN)
  - Plan for DATA-CENTRIC solutions