This presentation gives an overview of StreamCentral technology, targeted at IT professionals. StreamCentral is software for modeling and building Big Data solutions. It consists of a Big Data Solutions Modeler, which not only makes it easy to model traditional BI/DW and Big Data solutions but also auto-deploys the model on the latest Big Data management platforms (such as HP Vertica and SQL Server Parallel Data Warehouse), and the StreamCentral Big Data Server, which executes the model definition in real time. StreamCentral drastically reduces the time to market, risk, and cost associated with building traditional BI/DW and Big Data solutions.
DAMA & Denodo Webinar: Modernizing Data Architecture Using Data Virtualization (Denodo)
Watch here: https://bit.ly/2NGQD7R
In an era increasingly dominated by advances in cloud computing, AI, and advanced analytics, it may come as a shock that many organizations still rely on data architectures built before the turn of the century. But that scenario is rapidly changing with the increasing adoption of real-time data virtualization, a paradigm shift in the approach that organizations take towards accessing, integrating, and provisioning the data required to meet business goals.
As data analytics and data-driven intelligence take centre stage in today’s digital economy, logical data integration across the widest variety of data sources, with proper security and governance structures in place, has become mission-critical.
Attend this session to learn:
- How you can meet cloud and data science challenges with data virtualization
- Why data virtualization is increasingly finding enterprise-wide adoption
- How customers are reducing costs and improving ROI with data virtualization
Watch here: https://bit.ly/3i2iJbu
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received the most attention from the software community in recent years. From Artificial Intelligence and Machine Learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
Join us for an exciting session that will cover:
- The most interesting trends in data management.
- Our predictions on how those trends will change the data management world.
- How these trends are shaping the future of data virtualization and our own software.
Slides: Accelerating Queries on Cloud Data Lakes (DATAVERSITY)
Using “zero-copy” hybrid bursting on remote data to solve data lake analytics capacity and performance problems.
Data scientists want answers on demand. But in today’s enterprise architectures, the reality is that most data remains on-prem, despite the promise of cloud-based analytics. Moving all that data to the cloud has typically not been possible for many reasons including cost, latency, and technical difficulty. So, what if there was a technology that would connect these on-prem environments to any major cloud platform, enabling high-powered computing without the need to move massive amounts of data?
Join us for this webinar where Alex Ma of Alluxio, an open-source data orchestration platform, will discuss how a data orchestration approach offers a solution for connecting traditional on-prem data centers and cloud data lakes with other clouds and data centers. With Alluxio’s “zero-copy” burst solution, companies can bridge remote data centers and data lakes with computing frameworks in other locations, enabling them to offload, compute, and leverage the flexibility, scalability, and power of the cloud for their remote data.
Many IT experts know of Big Data, or at least have some idea of it, yet in Germany only a few currently work with it in practice. Big Data brings entirely new momentum to modern software solutions and has become indispensable in the context of mobile, cloud, and social change. It makes software intelligent and thus lets users experience it in a completely new way. Big Data also gives rise to new software architectures, because information is processed in a fundamentally different way: faster, with finer granularity, and often with the goal of drawing conclusions and making predictions.
This talk explains how modern software architectures are designed so that Big Data paradigms can be implemented successfully, and which benefits this brings to increasingly mobile software solutions. We also look at the potential and options in industries such as banking, insurance, and retail.
Introduction to Segment, Analytics API and Customer Data Platform. (Demo: Segment, AWS Redshift, Redash, Segment and GTM Alternatives) (Frontend Fighters Edition)
Recommended links:
https://segment.com/ - Analytics API and Customer Data Platform
https://open.segment.com/ - Open Source Projects of Segment
https://segment.com/docs/ - Documentation of Segment
https://redash.io/ - Open Source Data Dashboard
https://aws.amazon.com/redshift/ - Data Warehouse Solution
https://quicksight.aws/ - Business Analytics Service
https://www.ghostery.com/ - Tracker Detector
Keywords: business agility, tag managers, data-driven
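To make the "Analytics API" part of this talk concrete, here is a minimal sketch using Segment's Python tracking library (the analytics-python package). The write key, user ID, event name, and properties are placeholders rather than details from the presentation; in a real setup, Segment would typically forward these events to destinations such as the AWS Redshift warehouse mentioned above, where Redash or QuickSight can query them.
```python
# Minimal sketch: send an identify and a track call to Segment from Python.
# The write key, user id, and event properties below are placeholders.
import analytics

analytics.write_key = "YOUR_WRITE_KEY"  # taken from a Segment source (assumption)

# Tie traits to a user...
analytics.identify("user-123", {"email": "jane@example.com", "plan": "free"})

# ...then record an event; Segment can route it on to configured destinations
# (for example a Redshift warehouse queried by Redash or QuickSight).
analytics.track("user-123", "Signed Up", {"referrer": "frontend-fighters"})

# Force delivery before the script exits (the client batches asynchronously).
analytics.flush()
```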
GDPR Noncompliance: Avoid the Risk with Data Virtualization (Denodo)
You can watch the full webinar on-demand here: https://goo.gl/2f2RYF
In its recent report “Predictions 2018: A year of reckoning”, Forrester predicts that 80% of firms affected by GDPR will not comply with the regulation by May 2018. Of those noncompliant firms, 50% will intentionally not comply.
Compliance doesn’t have to be this difficult! What if you have an opportunity to facilitate GDPR compliance with a mature technology and significant cost reduction? Data virtualization is a mature, cost-effective technology that enables privacy by design to facilitate GDPR compliance.
Attend this session to learn:
• How data virtualization provides a GDPR compliance foundation with data catalog, auditing, and data security.
• How you can enable a single enterprise-wide data access layer with guardrails.
• Why data virtualization is a must-have capability for compliance use cases.
• How Denodo’s customers have facilitated compliance.
Accelerate Self-Service Analytics with Data Virtualization and Visualization (Denodo)
Watch full webinar here: https://bit.ly/39AhUB7
Enterprise organizations are shifting to self-service analytics, as business users need real-time access to holistic, consistent views of data, regardless of its location, source, or type, to make critical decisions.
Data Virtualization and Data Visualization work together through a universal semantic layer. Learn how they enable self-service data discovery and improve performance of your reports and dashboards.
In this session, you will learn:
- Challenges faced by business users
- How data virtualization enables self-service analytics
- Use case and lessons from customer success
- Overview of the highlight features in Tableau
TIBCO Spotfire: Data Science in the Enterprise (TIBCO Spotfire)
From Data to Insights in Internet Time
Eric Novik, Internal Analytics Group, TIBCO Spotfire
ANALYTICS AND VISUALIZATION FOR THE FINANCIAL ENTERPRISE CONFERENCE
June 25, 2013 The Langham Hotel Boston, MA
How Precisely and Splunk Can Help You Better Manage Your IBM Z and IBM i Envi... (Precisely)
Splunk, an industry leader in IT operations and security analytics, is moving to the cloud. Adopting Splunk in the cloud can help you make better, faster decisions with real-time visibility across the enterprise. That said, if your critical business services rely on IBM Z or IBM i, including these systems in your new Splunk environment is a must.
Having these systems in your Splunk environment removes a significant blind spot in your modernization efforts, helping you avoid security risks, failed audits, downtime, and escalating costs.
Join this discussion with presenters Brady Moyer from Splunk and Ian Hartley from Precisely to learn how to seamlessly integrate IBM Z and IBM i into Splunk for a true enterprise-wide view of your IT landscape.
During this on-demand webinar, you will hear:
• How Precisely Ironstream provides integration with Splunk without the need for mainframe or IBM i expertise
• The different types of data that can be collected and forwarded to Splunk
• Example use cases for events, security, and performance data
Slides: Proven Strategies for Hybrid Cloud Computing with Mainframes — From A... (DATAVERSITY)
Mainframes continue to perform mission-critical transaction processing and contain massive amounts of core business data. But digital transformation initiatives and cloud computing have created both opportunities and challenges for unlocking and utilizing this data. Qlik and AWS will share some of the proven strategies from successful customer deployments across a range of different mainframe to cloud use cases, including legacy application modernization, data analytics, and data migrations.
In this presentation, you will learn how to:
• Replicate very large volumes of mainframe data in real-time to the cloud
• Automate the creation of analytics-ready data lakes and data warehouses
• Achieve a 30% reduction in cost of compute
1. What are the difficulties in deploying and managing the life cycle of data-heavy applications?
2. Review of the Kubernetes landscape with respect to data-heavy applications
3. Robin's approach to orchestrating data-heavy applications
Oracle OpenWorld London - session on stream analysis, time series analytics, streaming ETL, streaming pipelines, big data, Kafka, Apache Spark, and complex event processing
Performance Acceleration: Summaries, Recommendation, MPP and more (Denodo)
Watch full webinar here: https://bit.ly/3nLHayP
Performance is critical for an organization across the board. Developers can optimize execution with Summaries, MPP, Data Movement, and more. Business users rely on the Recommendation engine to guide them to the right data. Let’s discover and learn about various performance acceleration techniques in this session.
Analyst Webinar: Best Practices In Enabling Data-Driven Decision Making (Denodo)
Watch full webinar here: https://bit.ly/37YkgN4
This presentation looks at the trends that are emerging from companies on their journeys to becoming data-driven enterprises.
These trends are taken from a survey of 500 companies and highlight critical success factors, what companies are doing, their progress so far, and their plans going forward. It also looks at the role that data virtualization plays within the data-driven enterprise.
During the session we'll address:
- What is a data-driven enterprise?
- What are the critical success factors?
- What are companies doing to create a data-driven enterprise and why?
- What progress are they making?
- What are the plans on people, process and technologies?
- Why is data virtualization central to provisioning and accessing data in a data-driven enterprise?
- How should you get started?
Traditional BI vs. Business Data Lake – A Comparison (Capgemini)
Traditional Business Intelligence (BI) systems provide various levels and kinds of analyses on structured data, but they are not designed to handle unstructured data.
For these systems, Big Data brings big problems, because the data that flows in may be either structured or unstructured. That makes them hugely limited when it comes to delivering Big Data benefits.
The way forward is a complete rethink of the way we use BI - in terms of how the data is ingested, stored and analyzed.
More information: http://www.capgemini.com/big-data-analytics/pivotal
Hadoop 2.0 - Solving the Data Quality Challenge (Inside Analysis)
The Briefing Room with Dr. Claudia Imhoff and RedPoint Global
Live Webcast on July 22, 2014
Watch the archive:
https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=7bb4cbc33402c3b5f649343052cb9a6d
Whether data is big or small, quality remains the critical characteristic. While traditional approaches to cleansing data have made strides, data quality nonetheless remains a serious hurdle for all organizations. This is especially true for identity resolution in customer data, but also for a range of other data sets, including social, supply chain, financial, and other domains. One of the most promising approaches for solving this decades-old challenge incorporates the power of massively parallel processing, a la Hadoop.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Claudia Imhoff, who will explain how Hadoop 2.0 and its YARN architecture can make a serious impact on the previously intractable problem of data quality. She’ll be briefed by George Corugedo of RedPoint Global, who will show how his company’s platform can serve as a super-charged marshaling area for accessing, cleansing and delivering high-quality data. He’ll explain how RedPoint was one of the first applications to be certified for running on YARN, which is the latest rendition of the now-ubiquitous Hadoop.
Visit InsideAnalysis.com for more information.
Data Virtualization - Enabling Next Generation Analytics (Denodo)
Watch full webinar here: https://goo.gl/3gNMXX
Webinar featuring guest speaker Boris Evelson, Vice President, Principal Analyst at Forrester Research and Lakshmi Randall, Director of Product Marketing, Denodo.
The majority of enterprises today are data-aware. Being data-aware, or even data-driven, however, is not enough. Are your data-driven applications providing contextual and actionable insight? Are your analytics applications driving tangible business outcomes? Are you deriving insights from all the enterprise data? Enter Systems Of Insight (SOI), Forrester's latest analytical framework for insights-driven businesses.
In this webinar you will learn about the key principles that differentiate data-aware or data-driven businesses from their insights-driven peers and competitors. Specifically, the webinar will explore the roles data virtualization (aka Data Fabric) plays in modern SOI architectures, such as:
• A single virtual catalog / view on all enterprise data sources including data lakes.
• A more agile and flexible virtual enterprise data warehouse.
• A common semantic layer for business intelligence (BI) and analytical applications (aka BI Fabric).
Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse (Rittman Analytics)
“Tech startups can't afford DBAs, and they don't have time to provision servers and scale them up and down or deal with patches or downtime. They've never heard of indexes and they need data loaded and ready for analysis in days, not months. In this session learn how Oracle Database developers can build data warehouses as a hip startup data engineer would—but using a proper database built on Oracle technology. Oracle Data Visualization Desktop provides analytics and data exploration with techniques explained in this session. Hear real-world development experiences from working on data and analytics projects at a tech startup in the UK.”
Applying Big Data Superpowers to Healthcare (Paul Boal)
When I see a data analyst quickly transform and drill through a new pile of data to uncover a keen insight, I feel like I'm watching a new movie from the Marvel universe. If you haven't explored and learned to apply cloud, big data, streaming data, and rapid analytics techniques, then you haven't uncovered your superpowers, yet. Here's how you can get started.
The Role of the Logical Data Fabric in a Unified Platform for Modern Analytics (Denodo)
Watch full webinar here: https://bit.ly/3FHKalT
Given the growing demand for analytics and the need for organizations to advance beyond dashboards to self-service analytics and more sophisticated algorithms like machine learning (ML), enterprises are moving towards a unified environment for data and analytics. What is the best approach to accomplish this unification?
TDWI's recent Best Practice Report, Unified Platforms for Modern Analytics, written by Fern Halper, TDWI VP of Research and Senior Research Director for Advanced Analytics, explores adoption, use, challenges, architectures, and best practices for unified platforms for modern analytics. One of the approaches for unification outlined in the report is a data fabric approach.
Join us for a webinar with our Director of Product Marketing, Robin Tandon, where he will discuss the role of the logical data fabric in a unified platform for modern analytics, focusing on several of the key findings outlined in this report. He will share insights and use case examples that demonstrate how a properly implemented logical data fabric is the most suitable approach for Unified Data Platforms across enterprises and organizations.
Watch on-demand & Learn:
- The benefits of a unified platform, its ability to capture diverse and emerging data types, and how to support high-performance, scalable solutions.
- The role of an enhanced AI-driven data catalog and its implications for the findings in the Best Practice Report.
- Implications of a logical data fabric as it relates to several of the recommendations outlined in the report.
BIG Data & Hadoop Applications in Logistics (Skillspeed)
Explore the applications of BIG Data & Hadoop in Logistics via Skillspeed.
BIG Data & Hadoop in Logistics is a key differentiator, especially in terms of optimizing back-end operations. They are used by companies for delivery optimization, demand & inventory forecasting and simplifying distribution networks.
To get more details regarding BIG Data & Hadoop, please visit - www.SkillSpeed.com
"Big Data" is a term as ubiquitous as data itself, but it is more than just a way to describe the massive amount of information created every day. In fact, I would argue that it is more of a dynamic than a one-dimensional term.
In this presentation, I walk business audiences through the history and rise of Big Data and the four Vs of Big Data, and end by looking at some practical applications and recommendations.
Originally presented on February 26, 2013 in Washington, DC at the US Chamber of Commerce.
Data-Driven Business Model Innovation Blueprint (Mohamed Zaki)
In this paper the authors present an integrated framework that could help stimulate an organisation to become data-driven by enabling it to construct its own Data-Driven Business Model (DDBM) in coordination with the six fundamental questions for a data-driven business. There are a series of implications that may be particularly helpful to companies already leveraging ‘big data’ for their businesses or planning to do so. By utilising the blueprint an existing business is able to follow a step-by-step process to construct its own DDBM centred around the business’ own desired outcomes, organisation dynamics, resources, skills and the business sector within which it sits. Furthermore, an existing business can identify, within its own organisation, the most common inhibitors to constructing and implementing an effective DDBM and plan to mitigate these accordingly. Within the DDBM-Innovation Blueprint inhibitors are colour-coded and ranked from severe (red) to minor (green). This system of inhibitor ranking represents the frequency and severity of inhibitor, as perceived by 41 strategy and data-oriented elite interviewees.
The UK economy is showing signs of posting a strong pull-back. China, on the other hand, is facing the prospect of slower growth this year. We cover this in the section on *Global Trends* in this month’s issue of Economy Matters.
In the section on *Domestic Trends*, we discuss the trends emanating out of the recent releases on GDP, Balance of Payments, IIP and Inflation during the month of February 2014.
In *Investment Tracker*, we analyse the latest data on investment proposals.
The *Sectoral* spotlight for this issue is on Travel & Tourism, which holds strategic importance in the Indian economy, providing several socio-economic benefits.
In *Focus of the Month*, we discuss the employment creation challenge that the economy is facing currently. In addition to our own analysis, we have carried articles from eminent experts on the subject.
This edition of the newsletter focuses on the areas under the taskforces formed by the B20 China which are important to us considering India’s socio-economic development and priorities of the government.
When and How Data Lakes Fit into a Modern Data Architecture (DATAVERSITY)
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Build the data lake without letting it become a data swamp! The tool ecosystem is building up around the data lake, and soon many organizations will have both a robust lake and a data warehouse. We will discuss policies to keep the two straight, send data to its best platform, and keep users’ confidence in their data platforms high.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
Accelerate Self-Service Analytics with Data Virtualization and Visualization (Denodo)
Watch full webinar here: https://bit.ly/3fpitC3
Enterprise organizations are shifting to self-service analytics, as business users need real-time access to holistic, consistent views of data, regardless of its location, source, or type, to make critical decisions.
Data Virtualization and Data Visualization work together through a universal semantic layer. Learn how they enable self-service data discovery and improve performance of your reports and dashboards.
In this session, you will learn:
- Challenges faced by business users
- How data virtualization enables self-service analytics
- Use case and lessons from customer success
- Overview of the highlight features in Tableau
Take Action: The New Reality of Data-Driven Business (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and WebAction
Live Webcast on July 23, 2014
Watch the archive:
https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=360d371d3a49ad256942f55350aa0a8b
The waiting used to be the hardest part, but not anymore. Today’s cutting-edge enterprises can seize opportunities faster than ever, thanks to an array of technologies that enable real-time responsiveness across the spectrum of business processes. Early adopters are solving critical business challenges by enabling the rapid-fire design, development and production of very specific applications. Functionality can range from improved customer engagement to dynamic machine-to-machine interactions.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor, who will tout a new era in data-driven organizations, and why a data flow architecture will soon be critical for industry leaders. He’ll be briefed by Sami Akbay of WebAction, who will showcase his company’s real-time data management platform, which combines all the component parts needed to access, process and leverage data big and small. He’ll explain how this new approach can provide game-changing power to organizations of all types and sizes.
Visit InsideAnalysis.com for more information.
Enabling Next Gen Analytics with Azure Data Lake and StreamSets (StreamSets Inc.)
Big data and the cloud are perfect partners for companies who want to unlock maximum value from all of their unstructured, semi-structured, and structured data. The challenge has been how to create and manage a reliable end-to-end solution that spans data ingestion, storage and analysis in the face of the volume, velocity and variety of big data sources.
In this webinar, we will show you how to achieve big data bliss by combining StreamSets Data Collector, which specializes in creating and running complex any-to-any dataflows, with Microsoft's Azure Data Lake and Azure analytic solutions.
We will walk through an example of how a major bank is using StreamSets to transport their on-premise data to the Azure Cloud Computing Platform and Azure Data Lake to take advantage of analytics tools with unprecedented scale and performance.
Topics include: the transformative value of real-time data and analytics, and current barriers to adoption; the importance of an end-to-end solution for data-in-motion that includes ingestion, processing, and serving; and Apache Kudu’s role in simplifying real-time architectures.
ADV Slides: Building and Growing Organizational Analytics with Data Lakes (DATAVERSITY)
Data lakes are providing immense value to organizations embracing data science.
In this webinar, William will discuss the value of having broad, detailed, and seemingly obscure data available in cloud storage for purposes of expanding Data Science in the organization.
ADV Slides: When and How Data Lakes Fit into a Modern Data Architecture (DATAVERSITY)
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Build the data lake without letting it become a data swamp! The tool ecosystem is building up around the data lake, and soon many organizations will have both a robust lake and a data warehouse. We will discuss policies to keep the two straight, send data to its best platform, and keep users’ confidence in their data platforms high.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
Advanced Analytics and Machine Learning with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/32c6TnG
Advanced data science techniques, like machine learning, have proven extremely useful for deriving valuable insights from existing data. Platforms like Spark and complex libraries for R, Python, and Scala put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Attend this webinar and learn:
- How data virtualization can accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- How popular tools from the data science ecosystem (Spark, Python, Zeppelin, Jupyter, etc.) integrate with Denodo; see the sketch after this list
- How you can use the Denodo Platform with large data volumes in an efficient way
- About the success McCormick has had as a result of seasoning the Machine Learning and Blockchain Landscape with data virtualization
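As a rough illustration of the kind of integration described in the bullets above, the sketch below reads a Denodo virtual view from Python over ODBC. The DSN name, credentials, and view name are placeholders I introduce for illustration, not details from the webinar; Denodo also exposes JDBC and REST interfaces, so treat this as one possible wiring under those assumptions rather than the documented approach.
```python
# Minimal sketch: query a Denodo virtual view from Python over ODBC.
# The DSN name, credentials, and view name below are placeholders, not
# details from the webinar; Denodo also exposes JDBC and REST endpoints.
import pyodbc

conn = pyodbc.connect("DSN=denodo_vdp;UID=analyst;PWD=secret")
cursor = conn.cursor()

# To the client, the virtual view looks like an ordinary table, even though
# the virtualization layer may be combining several sources behind it.
cursor.execute("SELECT customer_id, region, total_spend FROM customer_360")
for customer_id, region, total_spend in cursor.fetchmany(10):
    print(customer_id, region, total_spend)

cursor.close()
conn.close()
```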
Why Your Data Science Architecture Should Include a Data Virtualization Tool ... (Denodo)
Watch full webinar here: https://bit.ly/35FUn32
Presented at CDAO New Zealand
Advanced data science techniques, like machine learning, have proven extremely useful for deriving valuable insights from existing data. Platforms like Spark and complex libraries for R, Python, and Scala put advanced techniques at the fingertips of data scientists.
However, most architectures laid out to enable data scientists miss two key challenges:
- Data scientists spend most of their time looking for the right data and massaging it into a usable format
- Results and algorithms created by data scientists often stay out of the reach of regular data analysts and business users
Watch this session on-demand to understand how data virtualization offers an alternative that addresses these issues and can accelerate data acquisition and massaging, and to hear a customer story on the use of machine learning with data virtualization.
Discover the concept of 'on-the-fly' analysis with TIBCO Spotfire: combining different types of files with little or no coding, cutting the cost of an ever-growing database warehouse, and enabling real-time analysis for the digital era.
Watch full webinar here: https://buff.ly/2mHGaLA
Having started out as the most agile, real-time enterprise data fabric, data virtualization is proving to go beyond its initial promise and is becoming one of the most important enterprise big data fabrics.
Attend this session to learn:
• What data virtualization really is
• How it differs from other enterprise data integration technologies
• Why data virtualization is finding enterprise-wide deployment inside some of the largest organizations
Moving Targets: Harnessing Real-time Value from Data in Motion (Inside Analysis)
The Briefing Room with David Loshin and Datawatch
Live Webcast Feb. 17, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=4a053043c45cf0c2f6453dfb8577c72a
Patience may be a virtue, but when it comes to streaming analytics, waiting is not an option. Between Big Data and the Internet of Things, businesses are faced with more data and greater complexity than ever before. Traditional information architectures simply cannot support the kind of processing necessary to make use of this fast-moving resource. The modern context requires a shorter path to analytics, one that narrows the gap between governance and discovery.
Register for this episode of The Briefing Room to hear veteran Analyst David Loshin as he explains how the prevalence of streaming data is changing business pace and processes. He’ll be briefed by Dan Potter of Datawatch, who will tout his company’s real-time data discovery platform for data in motion. He will show how self-service data preparation can lead to faster insights, ultimately fostering the ability to make precise decisions at the right time.
Visit InsideAnalysis.com for more information.
A Key to Real-time Insights in a Post-COVID World (ASEAN) (Denodo)
Watch full webinar here: https://bit.ly/2EpHGyd
Presented at Data Champions, Online Asia 2020
Businesses and individuals around the world are experiencing the impact of a global pandemic. With many workers and potential shoppers still sequestered, COVID-19 is proving to have a momentous impact on the global economy. Regardless of the current situation and the post-pandemic era, real-time data becomes even more critical to healthcare practitioners, business owners, government officials, and the public at large, for whom holistic and timely information is essential for making quick decisions. It enables doctors to make quick decisions about where to focus care, business owners to alter production schedules to meet demand, government agencies to contain the epidemic, and the public to stay informed about prevention.
In this on-demand session, you will learn about the capabilities of data virtualization as a modern data integration technique and how organisations can:
- Rapidly unify information from disparate data sources to make accurate decisions and analyse data in real-time
- Build a single engine for security that provides audit and control by geographies
- Accelerate delivery of insights from your advanced analytics project
Similar to StreamCentral for the IT Professional
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
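To give a rough taste of the Python binding mentioned in this blurb, the sketch below loads one of pypowsybl's bundled example networks and runs an AC power flow. The specific factory function, result fields, and DataFrame column names reflect recent pypowsybl releases as I understand them and are not taken from the webinar notebook, so treat this as an illustrative sketch rather than workshop material.
```python
# Minimal sketch (not from the webinar): load a bundled example network
# and run an AC power flow with pypowsybl, PowSyBl's Python binding.
import pypowsybl as pp

# The IEEE 14-bus test case ships with the library; a real study would
# import a network file (e.g. CGMES or XIIDM) instead.
network = pp.network.create_ieee14()

# Run an AC load flow and check the convergence status of the main component.
results = pp.loadflow.run_ac(network)
print(results[0].status)

# Bus voltages (magnitude and angle) come back as a pandas DataFrame;
# column names here are assumed from current pypowsybl documentation.
print(network.get_buses()[["v_mag", "v_angle"]].head())
```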
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- But if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on counties – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure and operations point of view. Is it possible to apply our beloved cloud-native principles here as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
2. http://www.virtus-it.com
Table of Contents
• Big Data Overview………..………………………………………………………… 3 - 6
• Step by Step approach to building Big Data Solutions……………….. 7 - 10
• StreamCentral Introduction……………………………………..………….... 11 - 21
• Keeping it structured? –Extending current DW/BI investments………. 22 - 25
• An approach to building Operational Intelligence solutions………….... 26 - 28
• StreamCentral additional details……………………………………..………….... 29 - 37
• StreamCentral physical architecture……………………………..………….... 38 - 40
• How StreamCentral fits in an enterprise technology architecture…… 41 - 44
3. http://www.virtus-it.com
BIG Data

Today:
• Custom applications and ERP as data sources
• Analysis of structured data from internal applications
• Data sets updated using batch processes
• Traditional BI & Data Warehousing

With Big Data:
• Traditional BI and data warehousing extended to include structured and unstructured data from internal and external applications, processed in real-time or in batch
• Ability to predict events as well as analyze historical associations in wide sets of data for patterns and trends

Blocks for building Big Data Solutions:
• Big Data Solution Modeling & Model Deployment
• Stream Processing & Batch Data Acquisition (from internal & external applications and data stores, e.g. ERP)
• Real-Time + Batch Big Data Processing Layer
• Data Management
• Discovery & Analysis
• Event Detection
• Real-time event data in operational applications
• Pattern, trend and association analysis

Understand the connections in a wide variety of data that impact business performance and use that knowledge to deliver exceptional business results.
4. http://www.virtus-it.com
Understanding business performance with Big Data includes two distinct capabilities:

1. Managing performance by analyzing internal and external, structured and unstructured data for patterns and associations collected over time. A few example scenarios:
• Customer segmentation based on buying history patterns, finding associations with population, census and Twitter data to develop marketing strategy
• Web analytics to improve marketing campaigns and deliver relevant content
• Sales pipeline analysis compared to industry data to understand the right goals to set
• Cash flow analysis to make capital investment decisions considering external variables

2. Managing performance by analyzing real-time data for day-to-day events – Operational Intelligence. A few example scenarios:
• What is the current workload?
• Is staff available and working on high-priority work?
• What factors are impacting customer experience right now?
• What processes are taking longer than expected?
5. http://www.virtus-it.com
Example Big Data Solutions: Telco

Sources of real-time streaming data from networks, devices, services and other internal applications:
• Telco's Core IMS network data
• Data, voice & video performance data
• Data from telco towers

External sources of data that add understanding of what's happening when events are detected:
• Weather data – weatherunderground
• Traffic incidents – Mapquest, USA Today
• Population data – Census data

Business solutions built on this data:
• Network test
• New service – investment planning
• Adaptive bit rate – video streaming QoE
• 360° customer QoE for 1st-level customer service
• Video QoE for IPTV
6. http://www.virtus-it.com
Example Big Data Scenario: Utilities – Water

Sources of real-time data relating to your business:
• Vibration sensors
• Energy harvesting
• Water main pressure
• SCADA network

Sources of BIG DATA relevant to your business:
• Weather data – weatherunderground
• Traffic incidents – INRIX, USA Today
• Population data – Census data
7. http://www.virtus-it.com 7
Big Data Solution Lifecycle – the steps it takes to build powerful Big Data solutions (starting with Solution Modeling):
1. Solution Modeling
2. Model Deployment
3. Data Acquisition (streaming or batch, internal or external, structured or unstructured)
4. Data Management
5. Event Detection
6. Discovery & Analysis
8. http://www.virtus-it.com 8
Big Data Solution Lifecycle – Tasks Detailed

Solution Modeling:
• Logical data model design
• Data standardization & transformation modeling
• Key Performance Indicator modeling via business rules
• Dimensional modeling
• Historical data mart modeling
• Event detection modeling via business rules
• Real-time analytics data mart modeling

Model Deployment:
• Physical design implementation
• Physical deployment of the dimensional model
• Database deployment
• Physical deployment of data marts
• Rules deployment

Data Acquisition:
• Data from internal data sources
• Data from external sources
• Streaming data
• Batch data
• Structured data
• Unstructured data
• Data transformation
• Data standardization

Data Management:
• Structured data storage
• Unstructured data storage
• Scalability
• Performance

Event Detection:
• Detecting events on streaming data
• Alerting
• Integration with operational applications

Discovery & Analysis:
• Information discovery
• Data classification
• Analytics
• Querying
• Visualization
10. http://www.virtus-it.com
Big Data Lifecycle: Solution Modeling → Model Deployment → Data Acquisition → Data Management → Event Detection → Discovery & Analysis

But where is the innovation in these areas?
• Fragmented, point-use or a lack of industry-strength technology to aid in design, model deployment, data acquisition and event detection makes it difficult, time consuming and specialist-resource intensive to build Big Data solutions
• What is the use of having scalable platforms that can store and manage this data, and tools that can deliver incredible visualizations, when the effort to get the data right is still as much of a problem as it has always been?
12. http://www.virtus-it.com
Big Data Solution Lifecycle: Solution Modeling → Model Deployment → Data Acquisition → Data Management → Event Detection → Discovery & Analysis

StreamCentral – software to design & build BI & Big Data solutions:
1. StreamCentral Solutions Designer makes it easy to model traditional BI/DW and Big Data solutions
2. Builds and deploys the model on HP Vertica or Microsoft SQL Server
3. Adds context by connecting all streaming and static data to time, location and entities
4. StreamCentral Big Data Server, horizontally scalable, executes the model definition in real-time
5. StreamCentral drastically reduces the time to market, risk and cost of building Big Data solutions!

StreamCentral enables you to quickly move from a blank sheet of paper to a comprehensive and powerful production system that can be delivered without a large investment in specialist skills.
13. http://www.virtus-it.com 13
StreamCentral has three main components:
1. StreamCentral Workbench: Solution Designer – use the Workbench Designer to define source data, entities, rules for monitoring conditions, events and data correlation, analytical models and the knowledgebase
2. StreamCentral Workbench: Model Deployment – configures, builds and deploys the model on top of HP Vertica or Microsoft SQL Server
3. StreamCentral Big Data Server – executes the defined model in real-time (data collection, data processing, correlation, data publishing and data security)
14. http://www.virtus-it.com
StreamCentral Workbench: Big Data Solution Designer
• Data acquisition – push/pull data from a variety of sources (database, REST/SOAP API, LDAP, push API)
• Design data transformations
• Conditions & KPI modeler via rules builder
• Real-time data correlation
• Event detection via rules builder
• Real-time data mart designer
• 360° data mart designer
• Define entities and import entity data
• Dimension modeler
• Data security designer

StreamCentral Workbench: Big Data Solution Deployment
• Metadata
• Create database structure (on Vertica or SQL Server)
• Add context

StreamCentral Big Data Server
• Collector – capture data, validate data, prepare data
• Data Processing Engine – apply transformations, perform calculations, determine conditions & KPIs, identify & build dimensions, identify alerts
• Correlation Engine – correlate incoming data based on defined rules, detect events based on correlated data
• Data Publishing, Access and Security – update fact data, update entity & dimension data, update analysis collections, update event collections, manage data-level security
15. http://www.virtus-it.com 15
StreamCentral Workbench: software to design traditional BI/DW & Big Data solutions

Modeling steps in the Workbench: model pull data sources (with strong REST, SOAP & DB support), push data API, data transformations, entities (with static data import), dimensions, time & location standardization, conditions & KPIs, correlation rules, event detection rules on real-time data, and real-time & historical analytics data marts.

• Software targeted at both IT and non-IT people to design and build Big Data solutions
• Can work with batch data (as in traditional Business Intelligence) or real-time streams (as in Operational Intelligence)
• Workbench lets analysts model all necessary steps in building a Big Data solution:
  • Data pull/push
  • Model transformations
  • Model entities (like customers, patients, products), import static entity data and define entity relationships to source data
  • Shared dimensions across data
  • Condition modeler via business rules to monitor specific sets of conditions in batch or streaming data
  • Evaluate different entities with different sets of conditions as data flows in
  • Specify rules to model how to correlate data streams in real-time
  • Event detection
  • Model data marts that aggregate the right data for association and pattern analysis
16. http://www.virtus-it.com 16
Converting data to insights by continuously adding context

Generating insights from data requires context to be added to the data. This context is a continuous thread that connects all types of data throughout the Big Data Solution lifecycle. Four typical examples of context: Who (entities like customer, patient), When (time), Where (location) and What (streaming & static data correlation).

• StreamCentral automatically builds and maintains time and location dimensions
• Entities can be created and defined in StreamCentral
• All data in StreamCentral is continuously and automatically connected to time, location and defined entities
• Resultant real-time events and analytical data marts automatically inherit this context without the need for any programming or development work
• This increases the impact and value of collected data
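To make the idea of continuously adding context concrete, here is a small, hypothetical Python sketch (not StreamCentral code) that stamps each incoming reading with the "who / when / where" keys that a time dimension, a location dimension and an entity registry would provide. All names and values are illustrative.

from datetime import datetime, timezone

# Hypothetical lookup tables standing in for auto-maintained dimensions/entities.
LOCATION_DIM = {"site-17": {"location_key": 17, "city": "Boston"}}
ENTITIES = {"PUMP-042": {"entity_key": 42, "entity_type": "pump"}}


def add_context(reading: dict) -> dict:
    """Attach who/when/where context to a raw reading."""
    ts = datetime.fromisoformat(reading["timestamp"]).astimezone(timezone.utc)
    return {
        **reading,
        "time_key": int(ts.strftime("%Y%m%d%H")),    # when: hour-grain time key
        **LOCATION_DIM.get(reading["site"], {}),      # where: location dimension
        **ENTITIES.get(reading["device_id"], {}),      # who: entity attributes
    }


raw = {"device_id": "PUMP-042", "site": "site-17",
       "timestamp": "2024-05-01T09:30:00+00:00", "pressure_psi": 61.2}
print(add_context(raw))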
17. http://www.virtus-it.com 17
StreamCentral Big Data Server: software that runs Big Data solutions

Runtime capabilities: auto-builds and deploys the database structure based on the Workbench model; continuous pull with strong REST, SOAP & DB support; push data API for streaming sources; time & location standardization; condition monitoring; event detection; builds data marts and continuously updates them with new data; in-memory operations; distributed architecture; MPP support.

• Extends your Business Intelligence strategy by easily incorporating external data sets
• Introduces integration of real-time data for event insight to your organization
• Auto-builds the database schema (facts, dimensions, entities, flat tables and more)
• By default, standardizes all incoming data by connecting it to auto-created time and location dimensions
• Builds event data marts and continuously loads data
• Builds real-time data marts to help in understanding associations in data, and continuously loads these analysis data marts
• Delivers real-time event insights to new or existing operational applications
• Significantly reduces IT overhead in building Big Data solutions
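As a rough picture of what an auto-built schema of this shape can look like, the sketch below prints DDL for a tiny star schema (one fact table plus time and location dimensions). The table and column names are invented for illustration; they are not StreamCentral's actual generated structures.

# Illustrative star-schema DDL that a schema generator of this kind might emit.
DDL = [
    """CREATE TABLE dim_time (
        time_key     INT PRIMARY KEY,      -- e.g. 2024050109 for hour grain
        date         DATE NOT NULL,
        hour         SMALLINT NOT NULL
    )""",
    """CREATE TABLE dim_location (
        location_key INT PRIMARY KEY,
        city         VARCHAR(100),
        region       VARCHAR(100)
    )""",
    """CREATE TABLE fact_sensor_reading (
        time_key     INT NOT NULL REFERENCES dim_time (time_key),
        location_key INT NOT NULL REFERENCES dim_location (location_key),
        entity_key   INT NOT NULL,
        pressure_psi DECIMAL(9, 2),
        vibration_hz DECIMAL(9, 2)
    )""",
]

if __name__ == "__main__":
    for statement in DDL:
        print(statement.strip() + ";\n")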
18. http://www.virtus-it.com
Bringing it together: building Big Data solutions with StreamCentral and partner solutions
• Solution Modeling – StreamCentral Workbench: Big Data Solutions Modeler
• Model Deployment – StreamCentral Workbench: Model Deployment
• Data Acquisition – StreamCentral Big Data Server
• Data Management – MPP columnar databases (Vertica, ParStream) or Microsoft SQL Server
• Event Detection – StreamCentral Big Data Server
• Discovery & Analysis – Tableau Software, Microsoft PowerView
19. http://www.virtus-it.com
• Industrial-strength, enterprise-ready with web-scale characteristics – handles extremely large amounts of data
• Uses in-memory processing for high speed
• Next-generation distributed architecture – allows you to run on any number of commodity hardware nodes
• Built-in redundancy at every layer for high availability
• Easy-to-use tools to monitor and manage StreamCentral
• Built on Microsoft technology that most enterprises have already invested in
• Runs on best-of-breed and latest database technology: choose your database from Microsoft SQL Server or HP Vertica
20. http://www.virtus-it.com
Why StreamCentral?
• Roadmap to Big Data: StreamCentral is the only solution that enables the evolution of current practices in Business Intelligence and Data Warehousing to now include external data, event monitoring and real-time insights
• No-programming solution modeler: StreamCentral takes a solution approach – designing and modeling shifts to analysts versus everything being done by developers or programmed from scratch
• Continuously adds context to data: any kind of data that is streamed to StreamCentral, pulled in near real-time or imported via batch is continuously and automatically connected to time, location and defined entities. This significantly reduces the risk, time and cost associated with building BI/DW and Big Data solutions
• Reduced dependency on specialist skills: no in-depth knowledge of HP Vertica or SQL Server development is needed, as StreamCentral builds, deploys and maintains all internal structures in those environments automatically
• Plays well: standards-based and agnostic to existing enterprise technologies
• Adaptable: everything created in StreamCentral can be modified, making it easy to adapt the Big Data solution to the changing needs of the business
21. http://www.virtus-it.com 21
Making a business case for leveraging Big Data just got a whole lot easier with StreamCentral:
• 70% – the time taken to build Big Data solutions is drastically reduced by using StreamCentral
• 60% – the cost of building Big Data solutions is drastically reduced by using StreamCentral
In addition, StreamCentral reduces risk, data quality issues, specialist skillset requirements and complexity in building traditional Business Intelligence/Data Warehousing or Big Data solutions.
22. http://www.virtus-it.com 22
No immediate plans to go big on data? Planning to work primarily with structured data? But would like to deliver additional insights by enhancing your existing investments in Business Intelligence and Data Warehousing?
23. http://www.virtus-it.com
Keeping it structured – a roadmap to extend current investments in BI/DW

Traditional Data Warehousing: interrogation of historical data for trend analysis. Business Intelligence applications deliver analytics or reports to management for performance analysis.

On-Demand Business Intelligence: update the Data Warehouse continuously with real-time data. Provides the ability to analyze data updated in real-time.

Operational Intelligence: allows organizations to monitor fast-moving data for key indicators and events and immediately act on these insights, through manual or automated actions.

[Chart: Traditional Data Warehousing Solutions → On-Demand BI → Operational Intelligence, moving from Reporting ("What did happen?") and Analysis ("Why did it happen?") on previously stored data (data at rest) to Event Monitoring ("What is happening?") and Predictive Analytics ("What will happen?") on real-time streaming data (data in-flight). Axes: solution value to business (lower → higher); perceived complexity (higher → lower).]
24. http://www.virtus-it.com
Keeping it structured – a roadmap to extend current investments in BI/DW

[Same chart as the previous slide, annotated: most organizations have traditionally invested in the Traditional Data Warehousing area.]

In most companies, the scope of understanding business performance is limited to historical analysis and rarely includes a real-time understanding of the key events that impact day-to-day operational processes.
25. http://www.virtus-it.com
Keeping it structured – a roadmap to extend current investments in BI/DW

[Same chart again, with two annotations: the area where most organizations have traditionally invested (Traditional Data Warehousing) and StreamCentral's area of focus.]
27. http://www.virtus-it.com
Operational Intelligence practices are similar to traditional Data Warehousing practices

1. Data Layer: internal applications and data sets; external data
2. Interfaces: connections to existing architecture for tapping data & data streams – APIs, databases, Enterprise Service Bus, messages
3. Data Processing: push streaming data | pull data | format | standardize | transform | measure | correlate | event detection | rules engine | in-memory processing – real-time streaming analytics
4. Real-Time Insights: real-time event notification; historical data that supports pattern & trend analytics, with new insights added in real time
5. Business Solutions: customer experience, continuous improvement, KPIs, complaints, brand protection
6. Operational (User): day-to-day insights and actions delivered in multiple mediums to many users; access to the right information at the right time, along with a knowledge base of actions to perform
28. http://www.virtus-it.com
[Same layered architecture as the previous slide – data layer (internal data sets, external data), interfaces, data processing, real-time insights, business solutions and operational users – annotated to show the focus of StreamCentral within it.]
30. http://www.virtus-it.com
StreamCentral Workbench Big Data Solutions Modeler – Inputs
• Data sources
  • Push/pull
  • Data transformations
  • Define and import entity data
• Modeling
  • Rules for monitoring conditions in data
  • Correlation rules to identify related records across data sources in real-time
  • Rules for detecting events
  • Common dimension modeling
  • Data mart modelers
• Support for real-time
  • Correlation rules to identify related records across data sources in real-time
  • Rules for detecting events
  • Configure real-time data marts
• 360° data aggregation**
  • Define data relationships across data sources
  • Configure 360° data marts
• Data-level security**
** Coming Q3 2013
31. http://www.virtus-it.com
StreamCentral Big Data Server – Output
• Database structure automatically created, updated and managed by StreamCentral in Big Data databases like HP Vertica or SQL Server
• The StreamCentral database automatically builds time and location dimensions, fact tables and other dimension tables, standardizes facts across data sources to the one time and location dimension, and connects facts to KPIs. StreamCentral also auto-loads this database from various data sources into Big Data databases like HP Vertica or SQL Server
• Real-time event notification that can be consumed by operational applications via an API**
• Real-time event alerts
• Data marts that are automatically created, updated and managed by StreamCentral. The data marts denormalize data into a single table, facilitating faster querying and analysis of data
  • Real-time analytical data marts that aggregate events and data across data sources to better understand the conditions that influence events
  • Real-time event data marts that bring together all relevant information for a single event
  • 360° data marts for association and pattern analysis**
** Coming Q3 2013
32. http://www.virtus-it.com 32
StreamCentral Workbench – Big Data Solutions Modeler:
• Collate raw data (push/pull) – real-time or static
• Model data standardization and transformation rules
• Define business entities and connect raw data to business entities
• Model dimensions
• Model conditions to monitor across data sources and assign different conditions to different entities
• Model correlation rules
• Model events and specify the context to add to events
• Model the analytical data marts auto-built by StreamCentral

StreamCentral Big Data Server:
• Consumes real-time or static data pushed from sensors, devices, weather, traffic and enterprise applications, or pulls data from various data sources via APIs, and applies transformation and standardization rules
• Correlates data, generates Key Performance Indicators and uncovers events
• Publishes event data that can be subscribed to by operational applications – web, mobile or desktop

Model Deployment (on Massively Parallel Processing systems such as Vertica, or an RDBMS such as MS SQL Server):
• Auto-builds the database schema and auto-loads the database
• Builds and continuously loads data to event data marts (one for every event, along with its context, as denormalized flat tables)
• Builds and continuously loads analysis collections (360° data marts and real-time data marts)

Data visualization (reporting, analytics, dashboards) sits on top of the resulting database and data marts.
33. http://www.virtus-it.com 33
StreamCentral builds two distinct types of analytical data marts:

360° Data Marts**
• Defined: easily bring together and aggregate data across data sources to get 360° insight. Analyze associations in data to determine patterns that impact business performance
• Define the data mart structure by choosing the right set of attributes from data sources, KPIs, attributes from entities, and dimensions in the Workbench. StreamCentral auto-builds the data mart
• Standardize data across time and location
• Update the data mart at pre-defined intervals

Real-Time Data Marts
• Defined: aggregate real-time events and bring together data across data sources to analyze the conditions that existed when events were detected
• Standardize data across time and location
• Define the data mart structure by choosing the right set of attributes from data sources, KPIs, events, attributes from entities, and dimensions. StreamCentral auto-builds the data mart
• Once data is correlated in real-time, the data mart is updated with the appropriate insights

StreamCentral data marts are denormalized flat tables – why?
• Technology advancements in columnar data stores, bitmap indexes and column indexes make it possible to scan and query large amounts of data in a single table
• Takes advantage of distributed architectures to scale out using commodity hardware
• Supports SQL Server columnstore indexes and Vertica MPP
** Coming Q3 2013
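For readers unfamiliar with the denormalized-flat-table pattern, here is a minimal sketch of one possible event data mart on SQL Server with a clustered columnstore index. The table definition and names are illustrative only, not StreamCentral's generated schema.

# Illustrative DDL for a denormalized event data mart as a single flat table,
# plus a clustered columnstore index (SQL Server 2014+ syntax) for fast scans.
EVENT_MART_DDL = """
CREATE TABLE mart_pressure_drop_events (
    event_time      DATETIME2     NOT NULL,
    entity_name     VARCHAR(100)  NOT NULL,   -- e.g. the pump that triggered the event
    city            VARCHAR(100),             -- location context, pre-joined
    pressure_psi    DECIMAL(9, 2),            -- reading that fired the rule
    weather_temp_c  DECIMAL(5, 1),            -- correlated external data, pre-joined
    kpi_flow_rate   DECIMAL(9, 2)
);
CREATE CLUSTERED COLUMNSTORE INDEX ccix_pressure_drop_events
    ON mart_pressure_drop_events;
"""

print(EVENT_MART_DDL)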
34. http://www.virtus-it.com 34
StreamCentral Real-Time Operational Intelligence
• Data sources
  • Import initial data load
  • Push data to the StreamCentral API
  • Pull data from data sources at defined intervals
  • Apply transformations to the data in flight
  • Supported sources: SQL Server, Oracle, MySQL, REST API, SOAP web service, LDAP
• Auto-connects data to the time and location dimensions
• Model entities; connect data sources to entities
• Model measures and KPIs
• Model standard dimensions
• Model real-time correlation rules (to identify related records across data sources in real time)
• Model events
  • Events based on a real-time correlation rule
• Event data mart (automatically created when an event is detected)
  • Requires real-time correlation
  • Brings together all data across data sources that was captured at the time the event was detected
• Model real-time data marts
  • Requires a real-time correlation rule
  • Updates the real-time data mart with streaming correlated data
  • Define the attributes that make up the real-time data mart definition; select subsets of information from specific attributes from data sources, KPIs, events, entity attributes, dimensions, time and location
  • Edit the real-time data mart definition
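The following Python sketch is a toy illustration of three of the ideas above: a condition rule, correlation of records by entity within a time window, and an event raised from correlated data. It is a conceptual example under assumed record shapes, not StreamCentral's rules engine.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
recent = defaultdict(list)  # entity id -> recent records from any source


def pressure_drop(record: dict) -> bool:
    """Condition rule: low water-main pressure."""
    return record.get("pressure_psi", 100.0) < 40.0


def on_record(record: dict) -> None:
    """Correlate records per entity within a time window and detect events."""
    ts = datetime.fromisoformat(record["timestamp"])
    recent[record["entity"]] = [
        r for r in recent[record["entity"]]
        if ts - datetime.fromisoformat(r["timestamp"]) <= WINDOW
    ] + [record]

    if pressure_drop(record):
        correlated = recent[record["entity"]]
        print(f"EVENT: pressure drop on {record['entity']} "
              f"with {len(correlated)} correlated records in the last 5 minutes")


# Two sources streaming readings for the same entity.
on_record({"entity": "MAIN-7", "timestamp": "2024-05-01T09:00:00", "vibration_hz": 12.0})
on_record({"entity": "MAIN-7", "timestamp": "2024-05-01T09:02:00", "pressure_psi": 35.5})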
35. http://www.virtus-it.com 35
StreamCentral 360° Data Aggregation**
• Data sources
  • Import initial data load
  • Pull data from data sources at defined intervals
  • Apply transformations to the data
  • Supported sources: SQL Server, Oracle, MySQL, REST API, SOAP web service, LDAP
• Auto-connects data to the time and location dimensions
• Model entities; connect data sources to entities
• Model measures and KPIs
• Model standard dimensions
• Model 360° data marts
  • Model the 360° view query (define relationships across data sources to aggregate data)
  • Schedule the batch update interval (typically hours)
  • Define the attributes that make up the analysis collection; select subsets of information from specific attributes from data sources, KPIs, entity attributes, dimensions, time and location
  • Edit and update the data mart definition
  • Define data-level security
** Coming Q3 2013
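To picture what a 360° view query across sources might look like once relationships are defined, here is a hedged sketch of a denormalizing join. The table and column names are placeholders for illustration, not generated StreamCentral objects.

# Illustrative 360° aggregation: join two sources and shared dimensions into one
# flat analysis table, refreshed on a batch schedule (e.g. hourly).
VIEW_360_SQL = """
SELECT  t.date,
        l.city,
        e.entity_name,
        AVG(r.pressure_psi)  AS avg_pressure_psi,
        SUM(o.order_count)   AS orders
FROM    fact_sensor_reading r
JOIN    fact_orders         o ON o.entity_key = r.entity_key
                             AND o.time_key   = r.time_key
JOIN    dim_time            t ON t.time_key     = r.time_key
JOIN    dim_location        l ON l.location_key = r.location_key
JOIN    dim_entity          e ON e.entity_key   = r.entity_key
GROUP BY t.date, l.city, e.entity_name;
"""

print(VIEW_360_SQL)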
36. http://www.virtus-it.com 36
Pull data from applications
• Data formats supported: XML, JSON, string
• Data sources supported: databases (Microsoft SQL Server, Oracle, MySQL), REST API, SOAP API, LDAP
• Specify transformation rules that are applied to data in flight
• Specify parameters when calling APIs
• Auto-fills location parameters based on the location data stored in the database about entities
• Auto-creates tables in the backend database for data source data

Push data to StreamCentral
• StreamCentral REST API available to stream data to StreamCentral – stream data from agents, sensors, probes and devices
• Specify transformation rules that are applied to the data in flight
• Auto-creates the tables in the backend database for source data

StreamCentral databases
• Supports Microsoft SQL Server and HP Vertica
• Auto-creates data structures in the database for source data
• Auto-creates fact tables, dimensions, flat tables for event analysis and flat tables for pattern and association analysis
• Data-level security

StreamCentral analytics
• Device-friendly visualization
• Powerful portfolio of visualization tools
• Ability to embed in custom applications
• In-memory operations for fast querying

StreamCentral reports
• Role-based security
• Subscribe to reports
• Ability to embed in custom applications
• Export reports to various formats
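As a rough sketch of pushing a streaming reading into such a REST endpoint, the snippet below posts a JSON payload with Python's requests library. The URL, authentication header and payload shape are assumptions, since StreamCentral's actual push API is not documented here.

import requests

# Hypothetical push endpoint and API key; replace with your deployment's values.
PUSH_URL = "https://streamcentral.example.com/api/push/water_main_pressure"
HEADERS = {"Content-Type": "application/json", "X-Api-Key": "replace-me"}

reading = {
    "device_id": "PUMP-042",
    "site": "site-17",
    "timestamp": "2024-05-01T09:30:00Z",
    "pressure_psi": 61.2,
}

response = requests.post(PUSH_URL, json=reading, headers=HEADERS, timeout=10)
response.raise_for_status()  # fail loudly if the collector rejects the payload
print("accepted:", response.status_code)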
37. http://www.virtus-it.com 37
StreamCentral Transformations
• Easy-to-use transformations
• Multiple transformations can be executed on one attribute (illustrated in the sketch after the table)

Transformation               Description
LTRIM                        Removes all white spaces from the left
RTRIM                        Removes all white spaces from the right
Ignore Space                 Removes all white spaces from the left, middle or right
Ignore Special Characters    Returns the string after ignoring all special characters
Contains                     Search for specific characters
Substring                    Extract a substring from a string
Left                         Removes the left part of a character string
Right                        Removes the right part of a character string
Replace                      Replaces a specified string with another string
Startswith                   Search for a starting character
Endswith                     Search for an ending character
DoesNotContain               Search for specific characters
Remove                       Removes specified characters or words from a string
Range                        Search for a range
RoundOff                     Rounds off a decimal value to a specific length
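The sketch below shows, in plain Python, how a few transformations like these could behave and how several can be chained on one attribute. It mirrors the table's names for readability and is not StreamCentral's implementation.

# Plain-Python stand-ins for a few of the listed transformations.
def ltrim(value: str) -> str:            # remove white space on the left
    return value.lstrip()

def rtrim(value: str) -> str:            # remove white space on the right
    return value.rstrip()

def ignore_special_characters(value: str) -> str:
    return "".join(ch for ch in value if ch.isalnum() or ch.isspace())

def replace(value: str, old: str, new: str) -> str:
    return value.replace(old, new)

def round_off(value: str, digits: int) -> str:
    return str(round(float(value), digits))


# Multiple transformations executed on one attribute, in order.
raw_city = "  Boston!! "
cleaned = replace(ignore_special_characters(rtrim(ltrim(raw_city))), "Boston", "Boston, MA")
print(cleaned)                 # -> "Boston, MA"
print(round_off("61.237", 2))  # -> "61.24"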
39. http://www.virtus-it.com 39
StreamCentral Physical Architecture and Software Requirements
• StreamCentral Collector: Windows Server 2012, .NET Framework 4.5, MSMQ
• StreamCentral Stream Processing Engine: Windows Server 2012, .NET Framework 4.5, MSMQ
• StreamCentral Stream Correlation Engine: Windows Server 2012, .NET Framework 4.5, MSMQ
• StreamCentral Data Engine: Windows Server 2012, .NET Framework 4.5, MSMQ
• StreamCentral Cache Cluster: Windows Server 2012, .NET Framework 4.5, AppFabric
• StreamCentral Metadata Database: Windows Server 2012, Microsoft SQL Server 2008 R2 or Microsoft SQL Server 2012
• StreamCentral Database and Data Marts:
  • Option 1: Windows Server 2012, Microsoft SQL Server 2008 R2, Microsoft SQL Server 2012 or SQL Server Parallel Data Warehouse
  • Option 2: Linux, HP Vertica
• StreamCentral Analytics: Windows Server 2012, Tableau Software
All components can run on one machine, and every component can run on more than one machine.
40. http://www.virtus-it.com 40
StreamCentral suggested minimum system configuration
• 1 server for the StreamCentral components: Collector, Stream Processing Engine, Correlation Engine, Data Engine
  • Characteristics: processor dependent, so the higher the number of cores the better; medium cache and low disk storage
  • Software: Windows Server 2012, .NET Framework 4.5, MSMQ
• 1 server for cache
  • Hardware characteristics: cache dependent, so the more memory the better; medium CPU and low disk storage
  • Software: Windows Server 2012, .NET Framework 4.5, AppFabric
• Plus, either:
  • 1 server for the StreamCentral metadata database, data mart storage and reporting
    • Hardware characteristics: high CPU, high memory and high storage
    • Software: Windows Server 2012, SQL Server
  OR
  • 1 server for the StreamCentral metadata database and reporting
    • Hardware characteristics: medium CPU, medium memory and high storage
    • Software: Windows Server 2012, SQL Server
    plus 1 server for the StreamCentral data marts
    • Hardware characteristics: high CPU, high memory and high storage
    • Software: Linux, HP Vertica
42. http://www.virtus-it.com
Data sources and methods of access

StreamCentral reads data from:
• Enterprise applications – using a web service or REST API (real-time) or a database query
• Enterprise Service Bus – via a web service or REST API, or by subscribing to messages (real-time)
• Enterprise data warehouse – via database query
• Point databases – via database query
• LDAP – via database query
• External data sources – via web service or REST API (real-time)

Applications, in turn, read data from StreamCentral either by subscribing to the StreamCentral Real-Time Event API or by querying the StreamCentral database.
43. http://www.virtus-it.com
StreamCentral as part of an Enterprise Service Bus architecture

• Data sources: sensors, weather, devices, traffic, ERP, custom applications, mainframe – exposed as business services on an Enterprise Service Bus (messaging / mediation / orchestration / security) alongside business processes and composite applications
• StreamCentral Workbench: collate raw data (push/pull), standardize data, define business rules, define correlation, define events, define the analytical data marts auto-built by StreamCentral
• StreamCentral Engine: pushes/pulls data from the ESB and applications via APIs, auto-builds the database schema, and builds analysis collections (data marts as denormalized flat tables) and event collections (data marts for every event along with its context, as denormalized flat tables) for historical analysis
• Databases: massively parallel processing systems (Vertica), columnar databases with bitmap indexes (ParStream), RDBMS (MS SQL Server)
• Real-time event data is published to operational applications and dashboards
44. http://www.virtus-it.com
StreamCentral and Enterprise BI as part of an Enterprise Service Bus architecture

[Same Enterprise Service Bus architecture as the previous slide, with an Enterprise Business Intelligence system added alongside the StreamCentral-built database and data marts (push/pull) for historical analysis, while real-time event data continues to be published via API to operational applications and dashboards.]
45. http://www.virtus-it.com
Thank you
for your time
Contact us for a demonstration
Stephen Wells
CEO - Virtus IT Ltd
E: stephen.wells@virtus-it.com
M: +44 77 111 30879
Raheel Retiwalla
CTO - Virtus IT Ltd
E: raheel.retiwalla@virtus-it.com
M: +1 617 901 8370
A trusted partner
45