One of the largest retailers in North America is considering Apache Geode for its new mobile loyalty application, to support its digital transformation effort. The retailer would use Geode to provide operational data services for its mobile cloud service. It needs to replace sluggish response times with sub-second responses, which will improve conversion rates. It also wants to close the loop between data science findings and the app experience, so the right customer interaction is suggested when it is needed, such as when customers are looking at the mobile app while walking in the store, or when sending notifications at an individual's most likely shopping times. The expected benefits of using Geode include faster development cycles, increased customer loyalty, and higher revenue.
#GeodeSummit - Modern manufacturing powered by Spring XD and Geode | PivotalOpenSourceHub
Wondering how to improve your production yield, increase asset life, and activate reliability-centered maintenance? TEKsystems has developed a “Golden Batch” recommendation engine to realize your goals of modern manufacturing. It is a predictive analytics framework built on top of a manufacturing data lake for analysis and training of machine learning algorithms, with subsequent processing of streaming sensor data to detect or predict failures. We’ll present a solution architecture featuring Spring XD for data pipelining, Apache Geode for in-memory processing, Hadoop as a data lake, and R for machine learning.
#GeodeSummit: Architecting Data-Driven, Smarter Cloud Native Apps with Real-T... | PivotalOpenSourceHub
This talk introduces an open-source solution that integrates cloud native apps running on Cloud Foundry with an open-source hybrid transactional + analytical real-time solution. The architecture is based on the fastest scalable, highly available, and fully consistent in-memory data grid (Apache Geode / GemFire), natively integrated with the first open-source massively parallel data warehouse (Greenplum Database) in a hybrid transactional and analytical architecture that is extremely fast, horizontally scalable, highly resilient, and open source. The session also features a live demo running on Cloud Foundry, showing a real case of real-time closed-loop analytics and machine learning using the featured solution.
Lessons From Integrating Machine Learning into Data Products | Wrangle Confer... | Cloudera, Inc.
In this talk, we will share practical lessons and patterns for building machine learning (ML) models in production, based on our experience with search ranking and recommendation systems at Instacart. As part of this, we will include a detailed discussion of the technical challenges in building ML feature pipelines, one of which is now shared across multiple data products at Instacart.
Global Big Data Conference Hyderabad - 2 Aug 2013 - Finance/Manufacturing Use Cases | Sanjay Sharma
Financial institutions today are under intense pressure to deliver more value to customers, reduce IT costs, and grow year over year. This challenge is further complicated by the huge amounts of data being generated as well as the mandatory federal compliance requirements in place.
Similarly, the manufacturing industry today faces the challenge of processing huge amounts of data in real time and predicting failures as early as possible to reduce costs and increase production efficiency.
The session will cover high-level Big Data use cases applicable to the financial and manufacturing domains and show how big data technologies are being used successfully to solve these challenges, with examples from the credit card/banking industry on the financial side and from semiconductor production on the manufacturing side.
Big Data as Competitive Advantage in Financial Services | Cloudera, Inc.
Financial firms are under pressure to grow their business while containing risk and complying with many regulations world-wide. In addition, there is the growing demand from customers to improve their experience and offer new services over multiple channels.
Data is at the core of these capabilities but there are many challenges to overcome: fragmentation, security, quality, privacy, retention, to name a few.
We are going to hear about trends in the industry from IDC Financial Insights Research Director Bill Fearnley, followed by a discussion about how Cloudera has helped Transamerica turn their data into competitive advantage by creating an Enterprise Marketing and Analytics Engine.
VoltDB and HPE Vertica Present: Building an IoT Architecture for Fast + Big Data | VoltDB
This webinar with Chris Selland of HPE Vertica and Dennis Duckworth of VoltDB addresses the growing challenges with managing a complex IoT solution and how to enable real-time operational interaction with comprehensive data analytics.
Hear how Manulife Asia has built an environment that enables the company to solve business-critical problems across many countries. What began in 2017 as an update to their enterprise architecture now spans everything from infrastructure to applications, powering their entire digital backbone. It includes fraud identification, real-time investment dashboards, advanced analytics and machine learning, and digital connection apps that talk to customers for claims, support, and more. Learn the importance that hard work, coordination, discipline, and an agile methodology play in deciding which use cases to focus on to deliver new services in an environment where everything is time-sensitive and business requirements shift regularly.
Speaker: Ellen Wu, Head of Asia Data Office, Global Data Enablement and Governance, Manulife
Moving Beyond Batch: Transactional Databases for Real-time Data | VoltDB
Join guest Forrester speaker Principal Analyst Mike Gualtieri and Dennis Duckworth, Director of Product Marketing at VoltDB, to learn how enterprises can create a real-time, “origin-zero” data architecture within transactional databases to become a real-time enterprise.
Hadoop is regarded as a key capability for implementing Big Data initiatives in the enterprise, but organizations have yet to realize its full business benefits. In this webinar, Pivotal and guest Forrester Research, Inc. identify the use cases driving Hadoop adoption and explore what is needed to transform initial investments into results.
Learn about:
Challenges Hadoop introduces, and how the right tools and platforms can help address them
Shifts in the industry with regards to SQL and NoSQL systems and their implications to Big Data analytics
Applying in-memory technologies for data management systems, data analytics, transactional processing and operational databases
Watch the on-demand webinar here:
http://www.pivotal.io/big-data/pivotal-forrester-operationalizing-data-analytics-webinar
Learn how to maximize business value from all of your data here: http://www.pivotal.io/big-data/pivotal-hd
Relying on Data for Strategic Decision-Making: Financial Services Experience | Cloudera, Inc.
Many Federal agencies can benefit from the real-world experience of the financial services sector when it comes to best practices for big data management, cybersecurity, and fraud detection and mitigation. The reality is that most government organizations struggle to manage and reconcile financial information, as they must rely on a mix of legacy systems and newer applications to make fundamental business decisions. If you have big data challenges and are looking for a better way to streamline and secure data management to support your agency’s business and IT operations, plan to attend this session. This is your opportunity to understand how Hadoop can support your specific financial management mandates and help you use your organizational information to make defensible, data-driven decisions.
Building a Modern FinTech Big Data Infrastructure | Databricks
The cloud is now the first choice for large-scale analytics, but organizations that have sunk investment into Hadoop on-premises are also challenged with maintaining operations. This can make a move to modern analytics platforms like Spark difficult or impossible. Learn about innovations for large-scale migration that can take full advantage of cloud-based analytics without disrupting operations.
Future-Proof Your Streaming Analytics Architecture - StreamAnalytix Webinar | Impetus Technologies
View the webcast on http://bit.ly/1HFD8YR
The speakers from Forrester and Impetus discuss the options and the optimal architecture for incorporating real-time insights into your apps while also positioning you to benefit from future innovation.
Building a Modern Analytic Database with Cloudera 5.8 | Cloudera, Inc.
Analytic workloads and the ability to determine “what happened” are some of the most common use cases across enterprises today, helping you understand and adapt based on changing trends. However, most businesses can see only a piece of the story. Analytics are limited by the amount of data that can be stored and ultimately accessed, it’s time-intensive to bring in new datasets or fit unstructured data into rigid schemas, and user access is constrained to a select few who must already know the questions they’re trying to answer.
It’s no surprise that big data is disrupting this modus operandi for analytics. A modern, Hadoop-based platform is designed to help businesses break free of these analytic limitations, providing a new kind of adaptive, high-performance analytic database. The recent release of Cloudera 5.8 continues to advance Cloudera Enterprise as the foundation for these analytic workloads.
Join Justin Erickson, Senior Director of Product Management at Cloudera, and Andy Frey, Chief Technology Officer at Marketing Associates, as they discuss:
-What technology is needed to build a modern analytic database with Hadoop
-What’s new with Cloudera 5.8
-How to align your teams around agile analytics
-Real world success from Marketing Associates
-What’s next for Cloudera Enterprise’s Analytic Database
How a Media Data Platform Drives Real-time Insights & Analytics using Apache ... | Databricks
Roularta is a leading publishing company in Belgium. As digital news and channels move at a rapid pace and contain massive volumes of data, Roularta decided in 2019 to invest in a Spark-based data platform to drive true real-time website analytics and unlock insights from previously untouched (big) data sources. In this talk we’ll first explain why and how Roularta moved from a classical data warehouse to a Spark-based Lakehouse using Delta. We’ll outline the series of publishing and marketing use cases delivered in the last 12 months and highlight for each use case the advantages of Spark and how the team further tuned performance to truly deliver insights with high velocity.
View this talk here: https://www.confluent.io/online-talks/connecting-apache-kafka-to-cash-lyndon-hedderly
Real-time data has value. But how do you quantify that value in order to create a business case for becoming data, or event driven? This talk explores why valuing Kafka is important - but covers some of the problems in quantifying the value of a data infrastructure platform.
Despite the challenges, we will explore some examples of where we have attributed a quantified monetary amount to Kafka across specific business use cases, within Retail, Banking and Automotive.
Whether organizations are using data to create new business products and services, improve user experiences, increase productivity, or manage risk, we’ll see that fast and interconnected data, or ‘event streaming’, is increasingly important. We will conclude with the five steps to creating a business case around Kafka use cases.
Learn to Use Databricks for the Full ML Lifecycle | Databricks
Machine learning development brings many new complexities beyond the traditional software development lifecycle. Unlike traditional software development, ML developers want to try multiple algorithms, tools and parameters to get the best results, and they need to track this information to reproduce work. In addition, developers need to use many distinct systems to productionize models. In this talk, learn how to operationalize ML across the full lifecycle with Databricks Machine Learning.
Designing Fault-Tolerant Applications with DataStax Enterprise and Apache Cas... | DataStax
Data resiliency and availability are mission-critical for enterprises today—yet we live in a world where outages are an everyday occurrence. Whether the problem is a single server failure or losing connectivity to an entire data center, if your applications aren’t designed to be fault tolerant, recovery from an outage can be painful and slow. Watch this on-demand webinar to look at best practices for developing fault-tolerant applications with DataStax Drivers for Apache Cassandra and DataStax Enterprise (DSE).
View recording: https://youtu.be/NT2-i3u5wo0
Explore all DataStax webinars: https://www.datastax.com/resources/webinars
Real-time Microservices and In-Memory Data Grids | Ali Hodroj
How in-memory data grids enable a real-time microservices architecture while diminishing the accidental complexity of persistence, orchestration, and fragmentation of scale.
#GeodeSummit - Wall St. Derivative Risk Solutions Using Geode | PivotalOpenSourceHub
In this talk, Andre Langevin discusses how Geode forms the core of many Wall Street derivative risk solutions. By externalizing risk from trading systems, Geode-based solutions provide cross-product risk management at speeds suitable for automated hedging, while simultaneously eliminating the back office costs associated with traditional trading system based solutions.
#GeodeSummit: Democratizing Fast Analytics with Ampool (Powered by Apache Geode) | PivotalOpenSourceHub
Today, if events change the decision model, we wait until the next batch model build for new insights. By extending fast “time-to-decision” into the world of Big Data analytics to get fast “time-to-insight”, apps will get what used to be batch insights in near real time. The technology enabling this includes smart in-memory data storage, new storage-class memory, and products designed to do one or more parts of an analysis pipeline very well. In this talk we describe how Ampool is building on Apache Geode so that Big Data analysis solutions can work together with a scalable, smart storage-class-memory layer, allowing fast and complex end-to-end pipelines to be built -- closing the loop and providing dramatically lower time to critical insights.
#GeodeSummit: Combining Stream Processing and In-Memory Data Grids for Near-R... | PivotalOpenSourceHub
The financial sector presents an exciting mix of challenges regarding throughput and high availability, as well as specific constraints on latency and consistency. In the continuous evolution of its platform, Murex relies on open source technologies like Apache Geode and Apache Storm in a "kind of" lambda architecture to ensure storage, near-real-time (millisecond-level) aggregation of thousands of events per second, advanced notification mechanisms, and on-demand deployments. This talk will focus on the technical architecture and the underlying principles, as well as the technologies used to support this mix of functional and non-functional requirements.
In this session we review the design of the current capabilities of the Spring Data GemFire API that supports Geode, and explore additional use cases and the future directions in which the Spring API and underlying Geode support might evolve.
In this session we review the design of the newly released off-heap storage feature in Apache Geode, and discuss use cases and potential directions for additional capabilities of this feature.
In this session we review the design of the current capabilities of a partially completed feature in Apache Geode - the ability to act as a backend for Redis client applications. We’ll explore potential use cases and the future directions in which this capability might evolve.
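To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the session) of what that compatibility looks like from the client side: a Geode server started with its experimental Redis adapter listening on the standard Redis port can be driven by an ordinary Redis client library such as Jedis. The host, port, and key names below are assumptions.

// Minimal sketch: talking to a Geode server through its experimental Redis adapter
// with a standard Redis client. Assumes a Geode server is already running with the
// Redis adapter enabled and listening on localhost:6379.
import redis.clients.jedis.Jedis;

public class GeodeRedisClientExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Plain Redis commands; the Geode server stores the data in its regions.
            jedis.set("customer:42:name", "Ada");
            jedis.sadd("customer:42:tags", "loyalty", "mobile");

            System.out.println(jedis.get("customer:42:name"));      // Ada
            System.out.println(jedis.smembers("customer:42:tags")); // [loyalty, mobile]
        }
    }
}

The intent of the feature is that an existing Redis application could point at a Geode cluster without code changes, while the data gains Geode's partitioning and high availability.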
#GeodeSummit - Large Scale Fraud Detection using GemFire Integrated with Gree... | PivotalOpenSourceHub
In this session we explore a case study of a large-scale government fraud detection program that prevents billions of dollars in fraudulent payments each year leveraging the beta release of the GemFire+Greenplum Connector, which is planned for release in GemFire 9. Topics will include an overview of the system architecture and a review of the new GemFire+Greenplum Connector features that simplify use cases requiring a blend of massively parallel database capabilities and accelerated in-memory data processing.
#GeodeSummit - Integration & Future Direction for Spring Cloud Data Flow & Geode | PivotalOpenSourceHub
In this session we review the current state of support for Apache Geode in Spring Cloud Data Flow, and explore additional use cases and the future directions in which Spring Cloud Data Flow and Apache Geode might evolve.
#GeodeSummit Keynote: Creating the Future of Big Data Through "The Apache Way" | PivotalOpenSourceHub
Keynote at Geode Summit 2016 by Dr. Justin Erenkrantz, Bloomberg LP: Creating the Future of Big Data Through "The Apache Way", and why this matters to the community.
#GeodeSummit - Where Does Geode Fit in Modern System Architectures | PivotalOpenSourceHub
In this talk, Eitan Suez explores the question: Where does Geode fit in an organization's system architecture? Geode is a unique and feature-rich product that perhaps hasn't seen as much adoption as it deserves. Today's apps are no longer the straightforward, database-backed web applications we used to build a few years ago. Applications have become more sophisticated, as they've had to meet the need to scale, to be reliable, fault-tolerant, and to integrate with other systems. In this talk, Eitan will suggest one particular fit for Geode in the context of a CQRS architecture, and welcomes you to attend, and to contribute by sharing how you've put Geode to use in your organization.
Apache Apex and Apache Geode are two of the most promising incubating open source projects. Combined, they promise to fill gaps in existing big data analytics platforms. Apache Apex is an enterprise-grade, native YARN, big-data-in-motion platform that unifies stream and batch processing. Apex is highly scalable, performant, fault tolerant, and strong in operability. Apache Geode provides a database-like consistency model, reliable transaction processing, and a shared-nothing architecture to maintain very low latency with high-concurrency processing. We will also look at some use cases showing how these two projects can be used together to form a distributed, fault-tolerant, reliable in-memory data processing layer.
How Southwest Airlines Uses Geode
Distributed systems and fast data require new software patterns and implementation skills. Learn how Southwest Airlines uses Apache Geode, organizes team responsibilities, and approaches design tradeoffs. Drawing inspiration from real whiteboard conversations, we’ll explore: common development pitfalls, environment capacity planning, streaming data patterns like consumer checkpointing, support roles, and production lessons learned.
Every day, Apache Geode improves how Southwest Airlines schedules nearly 4,000 flights and serves over 500,000 passengers. It’s an essential component of Southwest’s ability to reduce flight delays and support future growth.
Apache Apex: Stream Processing Architecture and Applications | Comsysto Reply GmbH
• Architecture highlights: high throughput, low latency, operability with stateful fault tolerance, strong processing guarantees, auto-scaling, etc.
• Application development model, unified approach for real-time and batch use cases
• Tools for ease of use, ease of operability and ease of management
• How customers use Apache Apex in production
Pivoting Spring XD to Spring Cloud Data Flow with Sabby Anandan | PivotalOpenSourceHub
Pivoting Spring XD to Spring Cloud Data Flow: A microservice based architecture for stream processing
Microservice based architectures are not just for distributed web applications! They are also a powerful approach for creating distributed stream processing applications. Spring Cloud Data Flow enables you to create and orchestrate standalone executable applications that communicate over messaging middleware such as Kafka and RabbitMQ that when run together, form a distributed stream processing application. This allows you to scale, version and operationalize stream processing applications following microservice based patterns and practices on a variety of runtime platforms such as Cloud Foundry, Apache YARN and others.
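As a rough illustration of the kind of standalone, message-driven application Spring Cloud Data Flow composes, the sketch below uses Spring Cloud Stream's annotation-based programming model from that era; the class name and the trivial transformation are invented for the example, and the actual binder (Kafka or RabbitMQ) is chosen by the dependency on the classpath.

// A minimal Spring Cloud Stream processor: consumes from the input binding,
// transforms the payload, and publishes to the output binding.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@SpringBootApplication
@EnableBinding(Processor.class)
public class UppercaseProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseProcessorApplication.class, args);
    }

    // The input and output channels are bound to topics/queues on the configured
    // middleware (Kafka, RabbitMQ, ...) by Spring Cloud Stream.
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String transform(String payload) {
        return payload.toUpperCase();
    }
}

Registered with Spring Cloud Data Flow, an app like this can be composed into a stream definition (for example "http | uppercase-processor | log"), and the platform deploys each app and wires its bindings to the chosen middleware.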
About Sabby Anandan
Sabby Anandan is a Product Manager at Pivotal. Sabby is focused on building products that eliminate the barriers between application development, cloud, and big data.
This slide deck describes how Apex can be used by a developer. It also covers the various components of the Apex operator lifecycle.
In this session, we will talk about two of the most promising incubating open source projects, Apache Apex and Apache Geode, and how together they attempt to solve the shortcomings of existing big data analytics platforms.
Project Apex is an enterprise-grade, native YARN, big-data-in-motion platform that unifies stream processing and batch processing. Apex processes big data in motion in a highly scalable, highly performant, fault-tolerant, stateful, secure, distributed, and easily operable way.
Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing.
We will also look at some use cases showing how these two projects can be used together to form a distributed, fault-tolerant, reliable in-memory data processing layer.
Digital Shelf Analytics: Are you ready to rock the digital shelf? | Joakim Gavelin
Digital Shelf Analytics can be an overwhelming concept to tackle. Join this easy-to-follow guide to learn how to take the guesswork out of digital merchandising and uncover your lost sales opportunities, and how to continuously monitor market activity to optimize your product experiences and respond faster to market changes.
Kevin Benedict, Senior Analyst for Digital Transformation and Mobility at Cognizant, and Susan Miller, Chief Strategy Officer at AnyPresence, explore the ways companies can achieve an information advantage through digital and organizational transformation.
The First Kilometre: Building a Back-End That Sets You Up For Success | Demac Media
The front end of an E-Commerce platform may get all the attention but more often than not, it's the back end that will determine whether you're a successful E-Commerce Case Study or a highlight of 5 Things You Don't Want to Do in 2014. This presentation will help to ensure it's not the latter.
Enabling digital business with governed data lake | Karan Sachdeva
Digital business is enabled by artificial intelligence, machine learning, and data science. Artificial intelligence and machine learning depend on the right information architecture and data foundation. A governed data lake, combined with a data science platform, gives you the power to take your organization on the digital transformation and AI journey.
Retail Analytics and BI with Looker, BigQuery, GCP & Leigha Jarett | Daniel Zivkovic
Leigha Jarett of GCP explains how to bring Cloud "superpowers" to your data and modernize your business intelligence with Looker, BigQuery, and Google Cloud services, using the example of Cymbal Direct, one of Google Cloud's demo brands. The meetup recording, with a table of contents for easy navigation, is at https://youtu.be/BpzJU_S40ic.
P.S. For more interactive lectures like this, go to http://youtube.serverlesstoronto.org/ or sign up for our upcoming live events at https://www.meetup.com/Serverless-Toronto/events/
BIBoss is an award-winning networking event for leaders in Business Intelligence and Data Science. Hyper Group’s Director and Co-Founder, Damon Bryan, presented ‘The Data Science behind Data Personalisation’, and Infinity Works' Technical Director, Neil Dunlop, presented ‘AI, where are we now?’, including a discussion demystifying preconceptions about AI.
'Six simple steps for retail innovation'
Alex is a mobile retail technology expert with a multi-discipline skill set: user psychology and experience, mobile strategy, technical design, systems architecture, marketing, and consumer technology. He has designed and delivered award-winning mobile technology and strategy for some of the UK’s largest retailers, including Topshop, BHS, Mastercard and Burton, as well as several start-ups.
BIG Data & Hadoop Applications in Retail | Skillspeed
Explore the applications of Big Data and Hadoop in the retail industry via Skillspeed.
Big Data and Hadoop are key differentiators in retail, especially in terms of generating memorable customer experiences. They are used for brand sentiment analysis, consumer insights, and optimizing store layouts and inventory-demand cycles.
To get more details regarding BIG Data & Hadoop, please visit - www.SkillSpeed.com
Mohanbir Sawhney, Robert R. McCormick Tribune Foundation Clinical Professor of Technology at the Kellogg School of Management, Northwestern University, presents at the 2012 Big Analytics Roadshow.
Companies are drinking from a fire hydrant of data that is too big, moving too fast and is too diverse to be analyzed by conventional database systems. Big Data is like a giant gold mine with large quantities of ore that is difficult to extract. To get value out of Big Data, enterprises need a new mindset and a new set of tools. They also need to know how to extract actionable insights from Big Data that can lead to competitive advantage. The Big Story of Big Data is not what Big Data is, but what it means for business value and competitive advantage.... read more: http://www.biganalytics2012.com/sessions.html#mohan_sawhney
How can data solutions change the retail industry?
See the innovative solutions transforming the retail industry: Big Data, Artificial Intelligence, Augmented Reality.
See more at http://itmagination.com
AI, Content and Customer Experience: What’s the Future of Commerce? | John Mullins
Darwinian principles in the hyper-competitive world of online retail, and the adaptations it takes to win. Presented at Retail Global 2018, The Gold Coast, Australia.
How Artificial Intelligence Is Transforming Retail | Bernard Marr
Retail is at a turning point where we are seeing businesses that are in-line with the pace of technological progress thriving, while those that are struggling to keep up are dropping by the wayside.
Similar to #GeodeSummit - Using Geode as Operational Data Services for Real Time Mobile Experience
Here are the slides for Greenplum Chat #8. You can view the replay here: https://www.youtube.com/watch?v=FKFiyJDgdQk
The increased frequency and sophistication of high-profile data breaches and malicious hacking is putting organizations at continued risk of data theft and significant business disruption. Complicating this scenario is the unbounded growth of Big Data and petabyte-scale data storage, new open source database and distribution schemes, and the continued adoption of cloud services by enterprises.
Pivotal Greenplum customers often look for additional encryption of data-at-rest and data-in-motion. The massively parallel processing (MPP) architecture of Pivotal Greenplum provides an architecture that is unlike traditional OLAP on RDBMS for data warehousing, and encryption capabilities must address the scale-out architecture.
The Zettaset Big Data Encryption Suite has been designed for optimal performance and scalability in distributed Big Data systems like Greenplum Database and Apache HAWQ.
Here is a replay of our recent Greenplum Chat with Zettaset:
00:59 What is Greenplum’s approach for encryption and why Zettaset?
02:17 Results of field testing Zettaset with Greenplum
03:50 Introduction to Zettaset, the security company
05:36 Overview of Zettaset and their solutions
14:51 Different layers for encrypting data at rest
16:50 Encryption key management for big data
20:51 Zettaset BD Encrypt for data at rest and data in motion
22:19 How to mitigate encryption overhead with an MPP scale-out system
24:12 How to deploy BD Encrypt
25:50 Deep dive on data at rest encryption
30:44 Deep dive on data in motion encryption
36:72 Q: How does Zettaset deal with encrypting Greenplum’s multiple interfaces?
38:08 Q: Can I encrypt data for a particular column?
40:26 How Zettaset fits into a security strategy
41:21 Q: What is the performance impact on queries by encrypting the entire database?
43:28 How Zettaset helps Greenplum meet IT compliance requirements
45:12 Q: How authentication for keys is obtained
48:50 Q: How can Greenplum users try out Zettaset?
50:53 Q: What is a ‘Zettaset Security Coach’?
How to use the WAN Gateway feature of Apache Geode to implement multi-site, active-active failover, disaster recovery, and global-scale applications.
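As a rough, hypothetical sketch of what such a topology involves (not the presenter's configuration), the snippet below wires one site of an active-active pair using Geode's Java API: a gateway sender pointing at the remote distributed system, a gateway receiver for inbound replication, and a region attached to the sender. The distributed-system IDs, locator address, port range, and region name are all assumptions.

// One site of a two-site, active-active Geode WAN topology (sketch).
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.wan.GatewayReceiver;
import org.apache.geode.cache.wan.GatewaySender;

public class WanGatewayExample {
    public static void main(String[] args) {
        // Site A: distributed-system-id 1, aware of the remote site's locator.
        Cache cache = new CacheFactory()
                .set("distributed-system-id", "1")
                .set("remote-locators", "siteB-locator[10334]") // assumed address
                .create();

        // Sender that ships region events to distributed system 2 (site B).
        GatewaySender sender = cache.createGatewaySenderFactory()
                .setParallel(true)
                .create("sender-to-siteB", 2);

        // Receiver so that site B can replicate back, making the pair active-active.
        GatewayReceiver receiver = cache.createGatewayReceiverFactory()
                .setStartPort(5000)
                .setEndPort(5100)
                .create();

        // Region whose updates are forwarded through the sender.
        Region<String, String> orders = cache.<String, String>createRegionFactory(RegionShortcut.PARTITION)
                .addGatewaySenderId("sender-to-siteB")
                .create("orders");

        orders.put("order-1", "placed"); // replicated asynchronously to the remote site
    }
}

Site B would use the mirror-image configuration (distributed-system-id 2, a sender targeting system 1), so each site holds the data and either one can take over after a regional outage.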
Building Apps with Distributed In-Memory Computing Using Apache Geode | PivotalOpenSourceHub
Slides from the Meetup on Monday, March 7, 2016, just before the beginning of #GeodeSummit, where we cover an introduction to the technology and community that is Apache Geode, the in-memory data grid.
GPORCA is a newly open-sourced advanced query optimizer that is a subproject of the Greenplum Database open source project. GPORCA is the query optimizer used in commercial distributions of both Greenplum and HAWQ. In these distributions GPORCA has achieved 1000x performance improvements across TPC-DS queries by focusing on three distinct areas: Dynamic Partition Elimination, Subquery Unnesting, and Common Table Expressions.
Now that GPORCA is open source, we are looking for collaborators to help us realize the ultimate dream for GPORCA - to work with any database.
The new breed of data management systems for Big Data has to process so much data that the optimization mistakes of traditional optimizers are magnified. Furthermore, coding and manual optimization of complex queries has proven to be hard.
In this session, Venkatesh will discuss:
- Overview of GPORCA
- How to add GPORCA to HAWQ with a build option
- How GPORCA could be made to work with any database
- Future vision for GPORCA and more immediate plans
- How to work with GPORCA, and how to contribute to GPORCA
Motivation and goals for off-heap storage
Off-heap features and usage (a minimal configuration sketch follows this outline)
Implementation overview
Preliminary benchmarks: off-heap vs. heap
Tips and best practices
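For context, here is a minimal configuration sketch of the off-heap feature using the Java API; the memory size and region name are assumptions, not values from the talk.

// Reserve off-heap memory for the member and place a region's values there.
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class OffHeapRegionExample {
    public static void main(String[] args) {
        // Reserve 4 GB of off-heap memory for this cache member.
        Cache cache = new CacheFactory()
                .set("off-heap-memory-size", "4096m")
                .create();

        // Entry values for this region are stored outside the JVM heap,
        // reducing garbage-collection pressure for large data volumes.
        Region<String, byte[]> ticks = cache.<String, byte[]>createRegionFactory(RegionShortcut.PARTITION)
                .setOffHeap(true)
                .create("ticks");

        ticks.put("AAPL", new byte[] {1, 2, 3});
        cache.close();
    }
}

The same settings can also be supplied declaratively, for example through gemfire.properties and a cache.xml region attribute, rather than programmatically.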
Zeppelin Interpreters
PSQL (to become JDBC in 0.6.x)
Geode
SpringXD
Apache Ambari
Zeppelin Service
Geode, HAWQ and Spring XD services
Webpage Embedder View
Accelerate your Kubernetes clusters with Varnish Caching | Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 4 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf | 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, the aspects they look for in a new TV, and their TV buying preferences.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... | BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Essentials of Automations: Optimizing FME Workflows with Parameters | Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... | Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality | Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... | UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed in releasing software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... | Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.