The Pivotal Business Data Lake provides a flexible blueprint to meet your business's future information and analytics needs while avoiding the pitfalls of typical EDW implementations. Pivotal’s products will help you overcome challenges like reconciling corporate and local needs, providing real-time access to all types of data, integrating data from multiple sources and in multiple formats, and supporting ad hoc analysis.
The Business Data Lake is a new approach to information management, analytics and reporting that better matches the culture of business and better enables organizations to truly leverage the value of their information.
Data Lakehouse, Data Mesh, and Data Fabric (r2) by James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean, and how do they compare to a modern data warehouse? In this session I'll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I'll also include use cases so you can see which approach will work best for your big data needs, and I'll discuss Microsoft's version of the data mesh.
THE FUTURE OF DATA: PROVISIONING ANALYTICS-READY DATA AT SPEED by webwinkelvakdag
Data lakes and data warehouses, whether on-premises or in the cloud, promise to provide a centralized, cost-effective and scalable foundation for modern analytics. However, organisations continue to struggle to deliver accurate, current and analytics-ready data sets in a timely fashion. Traditional ingestion tools weren't designed to handle hundreds or even thousands of data sources, and the lack of lineage forces data consumers to manually aggregate information from sources they trust. In this session, you'll learn how to future-proof your modern data environment to meet the needs of the business for the long term. We'll examine how to overcome common challenges and the related must-have technology solutions in the data lake and data warehousing world, using real-world success stories and even a few architecture tips from industry experts.
Data Lakehouse, Data Mesh, and Data Fabric (r1) by James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Watch full webinar here: https://bit.ly/3FcgiyK
Denodo recently released the Denodo Cloud Survey 2021. Learn about some of the insights we have from the survey as well as some of the use cases Denodo comes across in the cloud. We will also conduct a brief product demonstration highlighting how easy it is to migrate to the cloud and support access to data in hybrid cloud architectures.
In this session we will not only look at what you, the customers, are saying in the Denodo Cloud Survey, but also:
- We will explore how, in reality, many organizations are already operating in a hybrid or multi-cloud environment and how their needs are being met through the use of a logical data fabric and data virtualization
- We will discuss how easy it is to reduce the risk and minimize disruption when migrating to the cloud
- We will educate you on why a uniform security layer removes regulatory risk in data governance.
- Finally we will demonstrate some of the key capabilities of the Denodo Platform to support the above.
Data-Ed Online Presents: Data Warehouse Strategies by DATAVERSITY
Integrating data across systems has been a perpetual challenge. Unfortunately, the current technology-focused solutions have not helped IT to improve its dismal project success statistics. Data warehouses, BI implementations, and general analytical efforts achieve the same levels of success as other IT projects – approximately one-third are considered successes when measured against price, schedule, or functionality objectives. The first step is determining the appropriate analysis approach to the data system integration challenge. The second step is understanding the strengths and weaknesses of the various approaches. It turns out that proper analysis at this stage makes the actual technology selection far more accurate. Only when these are accomplished can proper matching between problem and capabilities be achieved as the third step, and true business value be delivered. This webinar will illustrate that good systems development more often depends on at least three data management disciplines in order to provide a solid foundation.
Takeaways:
Data system integration challenge analysis
Understanding of a range of data system-integration technologies, including the problem space (BI, Analytics, Big Data), data (Warehousing, Vault, Cube), and alternative approaches (Virtualization, Linked Data, Portals, Meta-models)
Understanding foundational data warehousing & BI concepts based on the Data Management Body of Knowledge (DMBOK)
How to utilize data warehousing & BI in support of business strategy
Extended Data Warehouse - A New Data Architecture for Modern BI with Claudia ... by Denodo
This presentation has been extracted from a full webinar organized by Denodo. To learn more click here: http://bit.ly/1FOMD90
Big Data, Internet of Things, Data Lakes, Streaming Analytics, Machine Learning… these are just a few of the buzzwords being thrown around in the world of data management today. They provide us with new sources of data, new forms of analytics, and new ways of storing, managing and utilizing our data. The reality, however, is that traditional Data Warehouse architectures are no longer able to handle many of these new technologies, and a new data architecture is required.
So what does the new architecture look like? Does the enterprise data warehouse still have a role? Where do these new technologies fit in? How can business users easily and quickly access the various sources of data and analytic results at the right time to make the right decisions in this new world order?
Dr. Claudia Imhoff addresses these questions and presents the Extended Data Warehouse architecture (XDW), demonstrating the need for each component and how an enterprise combines these into appropriate workflows for proper decision support.
Analyst View of Data Virtualization: Conversations with Boulder Business Inte... by Denodo
In this presentation, executives from Denodo preview the new Denodo Platform 6.0 release, which delivers the Dynamic Query Optimizer, a cloud offering on Amazon Web Services, and self-service data discovery and search. Over 30 analysts, led by Claudia Imhoff, provide input on the strategic direction and benefits of Denodo 6.0 for data virtualization and the broader data integration market.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/DR6r3m.
Data Ninja Webinar Series: Realizing the Promise of Data Lakes by Denodo
Watch the full webinar: Data Ninja Webinar Series by Denodo: https://goo.gl/QDVCjV
The expanding volume and variety of data originating from sources that are both internal and external to the enterprise are challenging businesses in harnessing their big data for actionable insights. In their attempts to overcome big data challenges, organizations are exploring data lakes as consolidated repositories of massive volumes of raw, detailed data of various types and formats. But creating a physical data lake presents its own hurdles.
Attend this session to learn how to effectively manage data lakes for improved agility in data access and enhanced governance.
This is session 5 of the Data Ninja Webinar Series organized by Denodo. If you want to learn more about some of the solutions enabled by data virtualization, click here to watch the entire series: https://goo.gl/8XFd1O
Oracle OpenWorld London session on stream analysis, time series analytics, streaming ETL, streaming pipelines, big data, Kafka, Apache Spark, and complex event processing.
Fast and Furious: From POC to an Enterprise Big Data Stack in 2014 by MapR Technologies
View this webinar presentation from CenturyLink Technology Solutions (formerly Savvis) and MapR as we deconstruct and demystify “the enterprise big data stack.” We provide you with a more holistic view of the landscape, explore use cases to show how you can derive business value from it, and share best practices for navigating the fragmented big data environment.
Logical Data Lakes: From Single Purpose to Multipurpose Data Lakes (APAC) by Denodo
Watch full webinar here: https://bit.ly/3dmOHyQ
Historically, data lakes have been created as a centralized physical data storage platform for data scientists to analyze data. But lately, the explosion of big data, data privacy rules, and departmental restrictions, among many other things, have made the centralized data repository approach less feasible. In this webinar, we will discuss why decentralized multi-purpose data lakes are the future of data analysis for a broad range of business users.
Watch this on-demand webinar to learn:
- The restrictions of physical single-purpose data lakes
- How to build a logical multi-purpose data lake for business users
- The newer use cases that make multi-purpose data lakes a necessity
Performance Acceleration: Summaries, Recommendation, MPP and more by Denodo
Watch full webinar here: https://bit.ly/3nLHayP
Performance is critical for an organization across the board. Developers can optimize execution with Summaries, MPP, Data Movement, and more. Business users rely on the Recommendation engine to guide them to the right data. Let’s discover and learn about various performance acceleration techniques in this session.
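The "summary" idea behind that kind of acceleration is product-independent: precompute an aggregate once so that repeated analytical queries read a small result set instead of rescanning the detail rows. Below is a minimal, hedged sketch in Python with pandas (hypothetical table and column names, not Denodo's API):

```python
import pandas as pd

# Hypothetical detail table: one row per order line.
orders = pd.DataFrame({
    "region":  ["EMEA", "EMEA", "APAC", "APAC", "AMER"],
    "product": ["A", "B", "A", "A", "B"],
    "amount":  [120.0, 80.0, 200.0, 50.0, 310.0],
})

# Build the summary once: total sales per region and product. In a data
# virtualization or warehouse platform this would be a materialized
# summary table, refreshed on a schedule rather than per query.
sales_summary = (
    orders.groupby(["region", "product"], as_index=False)["amount"].sum()
)

# Later analytical queries read the small summary instead of the detail rows.
emea_totals = sales_summary[sales_summary["region"] == "EMEA"]
print(emea_totals)
```

Features such as the Summaries, MPP execution, and data movement mentioned in the session automate when and where this kind of precomputation happens; the sketch only shows the underlying trade-off of storage for query time.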
Meaning making – separating signal from noise. How do we transform the customer's next input into an action that creates a positive customer experience? We make the data more intelligent, so that it is able to guide our actions. The Data Lake builds on Big Data strengths by automating many of the manual development tasks, providing several self-service features to end-users, and an intelligent management layer to organize it all. This results in lower cost to create solutions, "smart" analytics, and faster time to business value.
Creating a Next-Generation Big Data Architecture by Perficient, Inc.
If you’ve spent time investigating Big Data, you quickly realize that the issues surrounding Big Data are often complex to analyze and solve. The sheer volume, velocity and variety change the way we think about data – including how enterprises approach data architecture.
Significant reduction in costs for processing, managing, and storing data, combined with the need for business agility and analytics, requires CIOs and enterprise architects to rethink their enterprise data architecture and develop a next-generation approach to solve the complexities of Big Data.
Creating the data architecture while integrating Big Data into the heart of the enterprise data architecture is a challenge. This webinar covered:
-Why Big Data capabilities must be strategically integrated into an enterprise’s data architecture
-How a next-generation architecture can be conceptualized
-The key components to a robust next generation architecture
-How to incrementally transition to a next generation data architecture
Are you exploring the transition to becoming a cloud broker? Establishing cloud business practices and marketing is one of the most overlooked areas by enterprise IT professionals. This session explores the role that marketing and the 4 Ps - Product, Price, Promotion, and Placement - play in multi-cloud and cloud brokerage (aka the 4 Ps of multi-cloud). Don’t let your technical success die on the vine without exposure!
Enabling a Data Mesh Architecture with Data Virtualization by Denodo
Watch full webinar here: https://bit.ly/3rwWhyv
The Data Mesh architectural design was first proposed in 2019 by Zhamak Dehghani, principal technology consultant at Thoughtworks, a technology company that is closely associated with the development of distributed agile methodology. A data mesh is a distributed, de-centralized data infrastructure in which multiple autonomous domains manage and expose their own data, called “data products,” to the rest of the organization.
Organizations leverage a data mesh architecture when they experience shortcomings in highly centralized architectures, such as the lack of domain-specific expertise in data teams, the inflexibility of centralized data repositories in meeting the specific needs of different departments within large organizations, and the slowness of centralized data infrastructures in provisioning data and responding to change.
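To make the "data product" idea concrete, here is a minimal, hypothetical sketch (not Denodo's or Thoughtworks' API; all names are invented) of a domain publishing a data product with its own contract and endpoint, while a thin central catalog merely indexes it:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-owned dataset exposed to the rest of the organization."""
    name: str
    domain: str    # the autonomous team that owns and serves the data
    endpoint: str  # where consumers read it (API, view, or table)
    schema: dict   # the published contract: column name -> type
    tags: list = field(default_factory=list)

class Catalog:
    """Thin central index; ownership and serving stay with the domains."""
    def __init__(self):
        self._products = {}

    def register(self, product: DataProduct) -> None:
        self._products[(product.domain, product.name)] = product

    def find(self, tag: str) -> list:
        return [p for p in self._products.values() if tag in p.tags]

# Example: a claims domain publishes a product; an analytics team discovers it.
catalog = Catalog()
catalog.register(DataProduct(
    name="open_claims",
    domain="claims",
    endpoint="https://claims.example.internal/views/open_claims",
    schema={"claim_id": "string", "opened_at": "timestamp", "amount": "decimal"},
    tags=["claims", "operational"],
))
print([p.endpoint for p in catalog.find("claims")])
```

The point of the sketch is the division of responsibility: each domain defines and serves its own product, and the central layer only provides discovery and governance.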
In this session, Pablo Alvarez, Global Director of Product Management at Denodo, explains how data virtualization is your best bet for implementing an effective data mesh architecture.
You will learn:
- How data mesh architecture not only enables better performance and agility, but also self-service data access
- The requirements for “data products” in the data mesh world, and how data virtualization supports them
- How data virtualization enables domains in a data mesh to be truly autonomous
- Why a data lake is not automatically a data mesh
- How to implement a simple, functional data mesh architecture using data virtualization
In this presentation at DAMA New York, Joe started by asking a key question: why are we doing this? Why analyze and share all these massive amounts of data? Basically, it comes down to the belief that in any organization, in any situation, if we can get the data and make it correct and timely, insights from it will become instantly actionable, allowing companies to function more nimbly and successfully. Enabling the use of data can be a world-changing, world-improving activity, and this session presents the steps necessary to get you there. Joe explained the concept of the "data lake" and also emphasized the role of a strong data governance strategy that incorporates the seven components needed for a successful program.
For more information on this presentation or Caserta Concepts, visit our website at http://casertaconcepts.com/.
What is Data? What are data types? Tools for data collection & data management
Data management is the practice of collecting, keeping, and using data securely, efficiently, and cost-effectively. ... Managing digital data in an organization involves a broad range of tasks, policies, procedures, and practices.
Building the Enterprise Data Lake: A look at architecture by Mark Madsen
The topic is building an Enterprise Data Lake, discussing high level data and technology architecture. We will describe the architecture of a data warehouse, how a data lake needs to differ, and show a high level functional and data architecture for a data lake. This webinar will cover:
Why dumping data into Hadoop and letting users get it out doesn't work
The difference between a Hadoop application and a Data Lake
Why new ideas about data architecture are a key element
An Enterprise Data Lake reference architecture to frame what must be built
Modern Data Architecture for a Data Lake with Informatica and Hortonworks Dat... by Hortonworks
How do you turn data from many different sources into actionable insights and manufacture those insights into innovative information-based products and services?
Industry leaders are accomplishing this by adding Hadoop as a critical component in their modern data architecture to build a data lake. A data lake collects and stores data across a wide variety of channels including social media, clickstream data, server logs, customer transactions and interactions, videos, and sensor data from equipment in the field. A data lake cost-effectively scales to collect and retain massive amounts of data over time and to convert all this data into actionable information that can transform your business.
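A common pattern behind such a data lake is to land raw data in cheap, scalable storage, partitioned by attributes such as date so that later processing can scan only what it needs. A hedged illustration in Python (hypothetical schema and paths; pandas with pyarrow used for brevity, standing in for HDFS or object storage):

```python
import pandas as pd

# Hypothetical clickstream batch arriving from a web channel.
events = pd.DataFrame({
    "event_date": ["2024-05-01", "2024-05-01", "2024-05-02"],
    "user_id":    ["u1", "u2", "u1"],
    "action":     ["view", "click", "purchase"],
})

# Land the raw data as columnar files in the lake's raw zone,
# partitioned by date so downstream jobs can prune their scans.
events.to_parquet(
    "datalake/raw/clickstream",      # hypothetical raw-zone location
    partition_cols=["event_date"],   # one subdirectory per day
    index=False,
)
```

In a Hadoop-based lake the same layout would typically live on HDFS or cloud object storage and be written by ingestion tooling rather than a script, but the partition-by-source-and-date structure is what keeps the lake cheap to scale and query.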
Join Hortonworks and Informatica as we discuss:
- What is a data lake?
- The modern data architecture for a data lake
- How Hadoop fits into the modern data architecture
- Innovative use-cases for a data lake
DAMA Webinar - Big and Little Data Quality by DATAVERSITY
While technological innovation brings constant change to the data landscape, many organizations still struggle with the basics: ensuring they have reliable, high quality data. In health care, the promise of insight to be gained through analytics is dependent on ensuring the interactions between providers and patients are recorded accurately and completely. While traditional health care data is dependent on person-to-person contact, new technologies are emerging that change how health care is delivered and how health care data is captured, stored, accessed and used. Using health care as a lens through which to understand the emergence of big data, this presentation will ask the audience to think about data in old and new ways in order to gain insight about how to improve the quality of data, regardless of size.
Traditional BI vs. Business Data Lake – A Comparison by Capgemini
Traditional Business Intelligence (BI) systems provide various levels and kinds of analyses on structured data but they are not designed to handle unstructured data.
For these systems Big Data brings big problems because the data that flows in may be either structured or unstructured. That makes them hugely limited when it comes to delivering Big Data benefits.
The way forward is a complete rethink of the way we use BI - in terms of how the data is ingested, stored and analyzed.
More information: http://www.capgemini.com/big-data-analytics/pivotal
For Impetus’ White Papers archive, visit: http://www.impetus.com/whitepaper
In this paper, Impetus focuses on why organizations need to design an Enterprise Data Warehouse (EDW) to support the business analytics derived from Big Data.
Best Practices in the Cloud for Data Management (US) by Denodo
Watch here: https://bit.ly/2Npt82U
If you have data, you are engaged in data management—be sure to do it effectively.
As organizations are assessing how COVID-19 has impacted their operations, new possibilities and uncharted routes are becoming the norm for many businesses. While exploring and implementing different deployment and operational models, the question of data management naturally surfaces while considering how these changes impact your data. Is this the right time to focus on data management? The reality is that if you have data, you are engaged in data management and so the real question is, are you doing it well?
Join Brice Giesbrecht from Caserta and Mitesh Shah from Denodo to explore data management challenges and solutions facing data driven organizations.
Building an Effective Data Warehouse Architecture by James Serra
Why use a data warehouse? What is the best methodology to use when creating a data warehouse? Should I use a normalized or dimensional approach? What is the difference between the Kimball and Inmon methodologies? Does the new Tabular model in SQL Server 2012 change things? What is the difference between a data warehouse and a data mart? Is there hardware that is optimized for a data warehouse? What if I have a ton of data? During this session James will help you to answer these questions.
Watch Paul's session from Fast Data Strategy on-demand here: https://goo.gl/3veKqw
"Through 2020, 50% of enterprises will implement some form of data virtualization as one enterprise production option for data integration" according to Gartner. It is clear that data virtualization has become a driving force for companies to implement an agile, real-time and flexible enterprise data architecture.
Attend this session to learn:
• What data virtualization actually means and how it differs from traditional data integration approaches (see the sketch after this list)
• The most important use cases and key patterns of data virtualization
• The benefits of data virtualization
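As a rough, vendor-neutral illustration of that difference: traditional data integration copies data into a new store before anyone can query it, whereas virtualization answers each query by combining the sources on demand. A toy Python sketch with hypothetical in-memory "sources" standing in for real systems:

```python
# Two hypothetical source systems, left where they are (no copy is made).
crm_customers = [
    {"customer_id": 1, "name": "Acme"},
    {"customer_id": 2, "name": "Globex"},
]
billing_invoices = [
    {"customer_id": 1, "amount": 250.0},
    {"customer_id": 1, "amount": 100.0},
    {"customer_id": 2, "amount": 75.0},
]

def customer_spend_view():
    """A 'virtual view': computed from the sources at query time,
    with no persisted copy of the combined data."""
    totals = {}
    for inv in billing_invoices:
        totals[inv["customer_id"]] = totals.get(inv["customer_id"], 0.0) + inv["amount"]
    return [
        {"name": c["name"], "total_spend": totals.get(c["customer_id"], 0.0)}
        for c in crm_customers
    ]

print(customer_spend_view())
```

A real virtualization platform adds query pushdown, optimization, caching, and security on top of this idea; the sketch only shows that the integration logic lives in the view definition rather than in a copied dataset.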
DAMA & Denodo Webinar: Modernizing Data Architecture Using Data Virtualization by Denodo
Watch here: https://bit.ly/2NGQD7R
In an era increasingly dominated by advancements in cloud computing, AI and advanced analytics it may come as a shock that many organizations still rely on data architectures built before the turn of the century. But that scenario is rapidly changing with the increasing adoption of real-time data virtualization - a paradigm shift in the approach that organizations take towards accessing, integrating, and provisioning data required to meet business goals.
As data analytics and data-driven intelligence take centre stage in today’s digital economy, logical data integration across the widest variety of data sources, with proper security and governance structures in place, has become mission-critical.
Attend this session to learn:
- How you can meet cloud and data science challenges with data virtualization
- Why data virtualization is increasingly finding enterprise-wide adoption
- How customers are reducing costs and improving ROI with data virtualization
Making the right decision at the right time requires that data be available in internet time, not after the event, when it is too late to do anything. This has increased the need for a web portal that supports decision making at the different organizational levels in internet time, providing features such as customization, personalization, and support for collaboration and notification that help managers make the right decision quickly. All of this can be achieved by applying BI tools to a "one version" data store that contains data from the legacy ERP system after ETL tools have been applied to it. Data marts are also used to shorten the response time of the queries generated by users.
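As a simplified, hypothetical sketch of that flow (all names and tables are invented for illustration), an ETL step cleans the legacy ERP extract into the "one version" store, and a small data mart is then derived from it so that portal queries stay fast:

```python
import sqlite3
import pandas as pd

# Hypothetical extract from the legacy ERP system.
erp_extract = pd.DataFrame({
    "order_id": [101, 102, 103],
    "dept":     ["sales", "sales", "ops"],
    "amount":   ["1,200.50", "300.00", "95.75"],   # raw, string-typed values
})

# Transform: enforce one consistent, typed version of the data.
clean = erp_extract.assign(
    amount=erp_extract["amount"].str.replace(",", "").astype(float)
)

# Load into the "one version" store (SQLite stands in for the warehouse).
conn = sqlite3.connect(":memory:")
clean.to_sql("orders", conn, index=False)

# Derive a departmental data mart so that portal queries answer quickly.
mart = pd.read_sql(
    "SELECT dept, SUM(amount) AS total_amount FROM orders GROUP BY dept", conn
)
mart.to_sql("mart_dept_totals", conn, index=False)
print(mart)
```

The portal described above would query the small, pre-aggregated mart tables rather than the full store, which is what keeps response times acceptable for interactive decision making.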
Bridging the Last Mile: Getting Data to the People Who Need It (APAC) by Denodo
Watch full webinar here: https://bit.ly/34iCruM
Many organizations are embarking on strategically important journeys to embrace data and analytics. The goal can be to improve internal efficiencies, improve the customer experience, drive new business models and revenue streams, or – in the public sector – provide better services. All of these goals require empowering employees to act on data and analytics and to make data-driven decisions. However, getting data – the right data at the right time – to these employees is a huge challenge and traditional technologies and data architectures are simply not up to this task. This webinar will look at how organizations are using Data Virtualization to quickly and efficiently get data to the people that need it.
Attend this session to learn:
- The challenges organizations face when trying to get data to the business users in a timely manner
- How Data Virtualization can accelerate time-to-value for an organization’s data assets
- Examples of leading companies that used data virtualization to get the right data to the users at the right time
COVID-19 heightened chronic challenges within the global healthcare industry. It became a catalyst amid fierce competition and tight regulations for health providers and payers to focus on digital health, cybersecurity, patient data transparency, and a variety of customer-centric and operational enhancements. As a result, we found the 2022 trendline pointing to improvements in access and quality of care.
Healthcare challenges such as optimizing the cost of care while simultaneously enabling personalized interventions and consumer-friendly shoppable services are long-standing − but, historically, the industry has been slow to react.
Read our Top Trends 2022 report to examine the lingering ramifications of the pandemic, responses from medical and insurance organizations, and the worldwide impact of ever-changing regulatory standards and mandates.
A combination of factors − the pandemic, catastrophic weather events, evolving policyholder expectations, and insurers’ drive for operational efficiency and future relevance − are sparking P&C industry changes.
In a post-COVID, new-normal environment, the most strategic insurers are building resilient, crisis-proof enterprises poised to take advantage of emerging and future business opportunities. They are leveraging advanced data analytics and novel technologies to assure agility and achieve positive revenue and customer satisfaction outcomes. Competitive advantage will hinge on accelerated digitalization and faster go-to-market. Therefore, win-win partnerships and embedded services with InsurTechs and other ecosystem players are critical.
Read Capgemini’s Top P&C Insurance Trends 2022 for a glimpse at the tactical and strategic initiatives carriers are undertaking to boost customer-centricity, product agility, intelligent processes, and an open ecosystem to ensure profitable growth and future-readiness.
This analysis provides an overview of the top trends in the commercial banking sector as they shift to technology high gear to boost client efficiency and battle a volatile, uncertain, competitive, and evolving landscape.
First, it was retail banking. Now, advanced technology is shifting to – and disrupting − the commercial banking space. Many commercial banks, known for paperwork, red tape, and branch dependency, were unprepared to support clients during their post-COVID-19 ramp-up. But now, the digital pivot to new mindsets, partnerships, and processes is in overdrive.
As commercial banks grapple with competition from FinTechs, BigTechs, and alternative lenders, their inability to fulfill SME demands and the after-shocks of the pandemic necessitate transformative process changes and a move to experiential, sustainable, and inclusive banking models. We expect banks to strive to meet the demands of corporate clients and SMEs by digitally transforming critical workflows and improving client experience. Additionally, incremental process improvements in the middle and back office that leverage intelligent automation will keep the competition at bay, because engaged clients are loyal.
Adopting newer methods to mine data and moving to as-a-Service models will prepare commercial banks to respond flexibly to newcomers and find ways to co-exist through effective collaboration. The time has come for commercial banks to put transformation on the fast track, as lending losses in wallet and market share could spill over to other functions!
How incumbents react and respond to 2022 trends could determine their relevancy and resiliency in the years ahead.
The Covid-19 pandemic necessitated the payments industry undergo a facelift, sparked by novel approaches from new-age players, fostered by industry consolidation, and customers’ demand for end-to-end experience. Crossing the threshold, the industry is entering a new era – Payments 4.X, where payments are embedded and invisible, and an enabling function to provide frictionless customer experience. As customers make a permanent shift to next-gen payment methods, Digital IDs are critical for a seamless payment experience. The B2B payments segment is witnessing rapid digitization. BigTechs, PayTechs, and industry newcomers are ready to jump in with newfangled solutions to help underserved small to medium-sized businesses (SMBs).
As incumbents struggle with profits, new-age firms are forging ahead to take the lead in the Payments 4.X era by riding the success of non-card products and services. The new era demands collaboration and platformification, and firms can unleash full market potential only by embracing API-based business models and open ecosystems. Data prowess and enhanced payment processing capabilities are essential to thrive ahead. The clock is ticking for banks and traditional payments firms because competitive advantage is not guaranteed forever. As industry players seek economies of scale, consolidations loom, and non-banks explore new territories to threaten incumbents’ market share. While all these 2022 trends are at play, central bank digital currency (CBDC) is emerging globally and might open a new chapter in the current payments landscape.
As we slowly move out of the pandemic, financial services firms have learned the criticality of virtual engagement to business resilience. Wealth management firms will need capabilities to cater to new-age clients and deliver new-age services. This report aims to understand and analyze the top trends in the Wealth Management industry this year and beyond.
A year ago, our Top Trends in Wealth Management report emphasized how the pandemic sparked disruption and digital transformation and changing investor attitudes around Environmental, Social, and Corporate Governance (ESG) products. As we begin 2022, many of those trends continue to hold as COVID-19’s wide-reaching effects continue to influence the wealth management industry.
As wealth management (WM) firms supercharge their digital transformation journeys, investments in cybersecurity and human-centered design are becoming critical to building superior digital client experience (CX). Another holdover trend − sustainable investing – is gaining mainstream attention and generating increasingly sophisticated client demands. Data and analytics capabilities will become ever more essential for ESG scoring and personalized customer engagement. As large financial services firms refocus on their wealth management business while new digital players make industry strides, competition is becoming historically intense. Not surprisingly, client experience is the new battleground.
This analysis provides an overview of the top trends in the retail banking sector, driven by competition, digital transformation, and innovation, as retail banks explore novel ways to create and retain value in an evolving landscape.
COVID-19 caught banks off guard and shook legacy mindsets to the core. With 20/20 (2020) hindsight, firms are more aware, digitally resilient, and financially stable as they head into 2022. The trials of the past 18 months forced firms to shore up existing business and consider new models and revenue streams.
Customer-centricity remains at the top of most FS agendas and is a 2022 focal point. Banks will focus on achieving operational excellence as diligently as delivering superior CX. In 2022 and beyond, it will be paramount for FIs to explore and invest in new technologies to remain relevant and resilient.
Banking 4.X will arrive in full force in 2022 with platform-supported firms monetizing diverse ecosystem capabilities and aggressively harvesting data to create experiential customer journeys through intelligent and personalized engagements. The new era will compel future-focused banks to finally abandon legacy infrastructure and collaborate with third-party specialists to solidify their best-fit, long-term roles. Increasingly, open platforms will make banks invisible as banking becomes embedded into customer lifestyles. At the same time, banks will shed asset-heavy models and shift to the cloud for greater agility, speed to market, and faster innovation. The shift will act as a precursor to adopting new technologies on the horizon – 5G and Decentralized Finance.
The recent past was filled with extraordinary lessons for financial institutions. Now is the time to act on those learnings and move forward profitably.
While COVID-19 has sparked the demand for life insurance, it has also exposed the operating model vulnerabilities in distribution, servicing, and customer retention. In a post-COVID, new-normal environment, insurers need to enhance their capabilities around advanced data management and focus on seamless and secure data sharing to provide superior CX and hyper-personalized offerings. Accelerated digitalization and faster go-to-market are vital to remaining competitive, and win-win partnerships with ecosystems are critical in the journey.
Read our Top Life Insurance Trends 2022 to explore the tactical and strategic initiatives carriers undertake to acquire competencies around customer centricity, product agility, intelligent processes, and an open ecosystem to ensure profitable growth and future readiness.
Property & Casualty Insurance Top Trends 2021 by Capgemini
The Property & Casualty insurance landscape is evolving quickly with the changing risk landscape, entry of new players, and changing customer expectations. The ripple effects of COVID-19 on the P&C insurance industry and natural disasters such as forest fires have adversely impacted insurance firm books.
In this scenario, to ensure growth and future-readiness, the most strategic insurers strive to be ‘Inventive Insurers’ – assuming a customer-centric approach, deploying intelligent processes, practicing business resilience and go-to-market agility, and embracing an open ecosystem.
Read our Property & Casualty Insurance Top Trends 2021 report to explore the strategies insurers are adapting to remain competitive amidst the evolving business landscape and how they can explore new ways to enhance their profitability.
A combination of factors such as demographic changes, evolving consumer preferences, and desire to become operationally efficient were already spurring changes in the life insurance industry. Enter 2020 – the COVID-19 pandemic is having a significant impact on the industry.
At the peak of disruption, the focus was on ensuring business continuity, but new initiatives are cropping up to tackle the challenges as the industry is adapting to the new normal.
Furthermore, COVID-19 has acted as a catalyst, pushing life insurers to prioritize their efforts on improving customer centricity, developing go-to-market agility, making processes intelligent, building business resilience, and embracing the open ecosystem.
Read our Life Insurance Top Trends 2021 report to explore the strategies insurers are adopting to manage the changing market dynamics.
The uncertainty of 2020 is setting the global tone for the immediate future in the financial services industry. So it is no surprise banks are laser-focused on business resilience, emphasizing both financial and operational risks. The need to adapt quickly to new normal conditions through virtual customer engagement is clear.
Customer centricity continues to drive commercial banks’ solution designs. And, the pandemic compelled products that deliver immediate client value ‒ quick digital onboarding, seamless lending, and support for small and medium-sized enterprises (SMEs). The onus is now on banks to go to market more quickly, which requires the implementation of intelligent processes and integrating corporates’ enterprise resource planning (ERP) systems with banking workflows.
To achieve go-to-market agility, banks across the globe are investing in and collaborating with FinTechs. Many of these partnerships are focused on boosting digital lending and providing seamless support to anxious small-business clients in need of assurance.
With newfound impetus for FinTech collaboration, commercial banks have picked up their step on the path toward OpenX. COVID-19 made it evident that survival during turbulence is manageable through collaboration with ecosystem players.
Read our Top Trends in Commercial Banking 2021 report to explore the strategies banks are adapting to transform their businesses from a product-led, siloed model to an experiential and agile plan.
When we published the Top Trends in Wealth Management 2020, little did we foresee the pandemic that would sweep through the world and disrupt life as we knew it. Yet, when we reviewed last year’s trends, we found that many still hold and some have taken on even greater relevance. One such trend is sustainable investing, which had begun to gain prominence as investors became more aware of ESG considerations, and firms rolled out more sustainable investing offerings. Another trend that has accelerated in the post-COVID world is the importance of investing in omnichannel capabilities and technologies such as artificial intelligence (AI) to enhance personalization and advisor effectiveness. The pandemic has driven wealth management firms to accelerate their digital transformation journey, with some immediate focus areas being interactive client communications and digital advisor tools.
There is no denying that time is of the essence. Yes, budgets are tight, but the Open X ecosystem offers wealth management firms opportunities to reimagine their operating models and deliver excellent customer experience cost-effectively.
Top trends in Payments: 2020 highlighted the payments industry’s flux driven by new trends in technology adoption, innovative solutions, and changing consumer behavior. The pandemic has tested the digital mastery of players, who are already grappling with transition. Non-cash transactions are on a robust growth path, accelerated by increased adoption during COVID-19. Regulators are working to instill trust and address non-cash payments risk amid unparalleled growth as players collaborate to quell uncertainty. Regional initiatives, such as the P27 (Nordics real-time payments system) and the EPI (European Payments Initiative), are gaining traction in response to country-level fragmentation and competition.
Investment in emerging technologies is looked upon as an elixir to mitigate fraud, data-driven offerings are being considered for providing value-added propositions, and distributed ledger technology is in focus for digital currency solutions, efficiency enhancement, and cost gains. New players, such as retailers/merchants, are integrating payments into their value chains while technology giants are upscaling their financial services game by weaving offerings around payments as a center stage. Constrained by budgets, firms consider business models such as Platform-as-a-Service (PaaS) to provide cost-effective and superior customer experience.
A combination of factors, including demographic changes, evolving consumer preferences, and regulatory and compliance mandates, were already spurring change in the health insurance industry. Enter 2020 and the COVID-19 pandemic, which is having sweeping implications for the industry.
At the peak of disruption, the focus was on ensuring business continuity, but new initiatives are cropping up to tackle the challenges as the industry adapts to the new normal.
Furthermore, some changes are here to stay, and it will be prudent for the industry players to be resilient to the market shifts by being agile, improving member centricity, making processes intelligent, and embracing the open ecosystem.
Read our Health Insurance Top Trends 2021 report to explore the strategies insurers are adopting to manage the external pressures.
The banking industry’s resilience is being tested as banks navigate through a remarkable 2020 filled with uncertainties. The impact of COVID-19 has been about setting the tone for future operational models. Retail banks have shifted focus towards integrated risk management with a more holistic view of operational risks. Adapting to the new normal, banks have prioritized cost transformation while engaging customers virtually. Incumbents sought to be more responsible within fast-changing environmental conditions and ESG remained a critical focus.
To provide more experiential services, banks are leveraging techniques such as segment-of-one to hyper-personalize offerings while aiming to humanize digital channels for increased engagement. Banks are also revamping middle and back offices, going beyond the front end leveraging intelligent processes. Open X is enabling banks to play on their strengths and use the expertise of ecosystem players. Going forward, banks are poised to become an enhanced one-stop shop by providing consumers value-adding FS and non-FS experiences.
To acquire customers in a cost-effective manner, retail banks are tapping value-based propositions ‒ such as POS financing and mortgage refinancing. Further, Banking-as-a-Service provides incumbents a way to offer their high-value capabilities to other players. In preparation for the future, banks will be looking to improve their go-to-market agility by leveraging the benefits of the cloud. This analysis outlines the top 10 trends in retail banking for 2021.
Explore how Capgemini’s Connected autonomous planning fine-tunes Consumer Products Company’s operations for manufacturing, transport, procurement, and virtually every other aspect of the supply-value network in a touchless, autonomous way.
Financial services is undergoing a paradigm shift that is forcing incumbent retail banks to rethink growth strategies as they struggle to remain relevant. Growing competition from BigTechs, FinTech firms, and challenger banks has added to the complexity created by increasingly stringent regulatory and compliance requirements. Customers now expect a seamless customer journey and personalized offerings because they have become accustomed to top-notch individualized service from GAFA giants Google, Apple, Facebook, and Amazon. The changing ecosystem offers established banks new, unexplored opportunities and encourages a transition beyond traditional products to meet the exacting requirements of today’s customers. Bank collaboration with FinTech and RegTech partners is becoming commonplace. Incumbents are exploring point-of-sale financing and unsecured consumer lending, while they also boost their digital channel competencies to reach a broader customer base. Banks are beginning to accept open APIs and are working with third-party specialists to create an open shared marketplace. Technological advancements such as AI are fueling efforts to evolve customer onboarding and touchpoint processes. Increasingly, banks are turning to design thinking methodology to understand the customer journey, extract deep insights, and develop a more refined user experience across the customer lifecycle.
Our analysis of the top retail banking trends for 2020 offers a glimpse into the fast-changing banking ecosystem and explores the tools and solutions being used to face new-age challenges.
Aspects of the life insurance industry have remained constant for years – and so have premiums. Traditional savings products have taken a huge hit in terms of attractiveness because low interest-rates prevail. Meanwhile, the risk landscape is shifting, and insurers need to align better with the emerging business environment, manage changing customer preferences, and improve operational efficiencies. Within today’s scenario, industry players are undertaking tactical and strategic shifts in attempts to manage unpredictable market dynamics. Insurers must develop alternative products to breathe new life into policies and leverage emerging technologies (artificial intelligence (AI), analytics, and blockchain) to improve efficiency, agility, flexibility, and customer-centricity.
Read Top Trends in Life Insurance: 2020 for a look at the innovative steps future-focused insurers are considering to meet industry challenges and opportunities.
The health insurance industry is evolving and undergoing significant changes. As the risk landscape shifts, insurers are working to improve operational efficiencies, meet evolving customer preferences, and align better with the changing business environment. Accordingly, payers must adapt and align business models and offerings. An incisive tactical approach is required to accommodate members’ needs and related emerging risks — medical, health, and environmental. Advanced technologies such as artificial intelligence, analytics, automation, and connected devices are enabling insurers to manage these changes proactively, partner with members, and help to prevent risks, all the while continuing to fulfill payer responsibilities.
Read Top Trends in Health Insurance: 2020 to learn which strategies insurers are adopting to navigate and align with today’s challenges.
Similar to other financial services domains, payments is evolving into an open ecosystem. The EU’s Payment Services Directive (PSD2) pioneered open banking by encouraging banks and established payments players to securely open their systems to foster competition, innovation, and more customer choice. In tandem with non-cash transaction growth, regulations are driving banks and payments firms to expand their array of payment methods and channels. Governments are also encouraging financial inclusion by promoting the adoption of non-cash payments. Increasingly, merchants and corporates seek to offer alternative payment systems because of their widespread popularity among consumers. Alternative payments also enable merchants to provide real-time and cross-border payments to boost business efficiency.
Banks, payment firms, card firms, BigTechs, FinTechs, and other players are continuously developing new technology to cash in on market changes. However, data breaches and fraud continue to hinder innovation as firms devote countless resources each year to address security issues. Many governments are also designing new regulations to reduce ecosystem threats. All these measures are expected to make the current ecosystem much more secure and simple for players as well as customers.
Top Trends in Payments: 2020 explores and analyzes payments ecosystem initiatives and solutions for this year and beyond.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Overview
A new approach to providing data to all constituents of the enterprise, consolidating existing data marts to satisfy enterprise reporting and information management requirements.
Many organizations have built enterprise data warehouses (EDWs) to meet their
business’s operational and reporting needs. Most EDW platforms are relatively
expensive, costing upwards of $25,000 for 1TB of data storage, although costs have
come down and computing power has increased over the years.
Now, however, alternative technologies have matured to become viable cost-efficient
alternatives to EDW processing. These technologies offer new ways to get value from
enterprise data by providing users with the data views they really need, instead of a
watered-down canonical view.
The Business Data Lake approach, enabled by Pivotal technology, reduces the
complexity and processing burden on the EDW while preserving end-user interfaces
and interactions with existing EDWs. Compared with a traditional EDW the approach
delivers a significant cost advantage and improves the ability to respond to the needs
of the business, while at the same time extending the life of EDW systems.
Introduction
The Pivotal Business Data Lake is a new approach to providing data to all
constituents of the enterprise, consolidating existing data marts to satisfy enterprise
reporting and information management requirements. Pivotal provides tools you
can use both to create a new Business Data Lake and to extend the life of existing
EDW solutions.
The Pivotal Business Data Lake also resolves a longstanding challenge in the area
of operational reporting: the frequent conflict between the local reporting needs
of individual business units and enterprise-wide reporting needs. With the Pivotal
approach, there is no longer an issue of individual versus enterprise-wide views:
individual business units can each get the local views they require, and there is
also a global view to meet enterprise-wide needs. This is possible because Pivotal
has combined the power of modern business intelligence (BI) and analytics into
an integrated operational reporting platform that can be leveraged across the
entire enterprise.
In addition, the Pivotal approach addresses concerns about the rising costs of
EDWs versus the value they provide. The Pivotal Business Data Lake lowers costs
by optimizing the data within an EDW, and provides more value by adding big data
analytics into the EDW without the cost of scaling the EDW to process big data
volumes1.
Pivotal can help your organization to satisfy evolving information needs while handling
new challenges such as big data processing and data access by mobile users and
transactional systems. The Pivotal Business Data Lake adds performance to EDWs
by providing lightning-fast, real-time, in-memory access for key information. This
means mobile users and transactional systems can leverage the power of your EDW
without the cost of scaling traditional EDWs to meet transactional demands.
1 Big data volumes: Any data over 1 petabyte in size or over 1 billion rows
The Pivotal Business Data Lake supports line-of-business solutions with a single
platform that also addresses enterprise needs. For operational reporting, it provides
three key capabilities:
• The ability to rapidly ingest information from source systems
• The ability to create standard and ad hoc reports
• The ability to add real-time alert management
EDW pain points
Your organization is likely to encounter several challenges when trying to create or
enhance EDWs to support requirements such as a single view of the customer:
1. Reconciling conflicting data needs. Individual business units often need
to enhance/extend global definitions to meet specific local needs. These
enhancements/extensions may require additional data elements specific to the
business unit, which may not be relevant from a corporate/global perspective.
Because EDW implementations mandate the creation of a single consistent
view of information across the enterprise, the conflicting views of the individual
business units and the enterprise as a whole can be a problem. The Pivotal
approach reconciles these conflicts by providing both global and local views of
the same data.
2. Providing real-time access. EDWs are usually segregated from transactional
and operational systems, which results in an inherent delay in information
availability and data freshness. However, today’s business decisioning systems
need access to real-time information in addition to historical information to
enable optimum decisions, better service, and product differentiation. Pivotal
makes that possible through the use of performance-enhancing techniques like
in-memory storage.
3. Assembling data from multiple sources. Data workers need an easy way
to access the information required for their analysis. Usually the information is
available in disparate systems. Each data worker has a preferred platform for
analysis and sometimes the type of analysis dictates the environment. Today,
data workers spend a lot of time getting the right data for analysis onto the
appropriate analytic platform. With Pivotal’s approach, it’s easy to ingest data
from multiple systems into a single analytics environment.
4. Supporting ad hoc analysis. In addition to regular operational reporting,
enterprises need the ability to run ad hoc analysis from time to time. Operational
systems typically are not capable of analysis without an adverse effect on
performance. The parallelism of Pivotal Business Data Lake’s architecture
overcomes the constraints of the operational systems and makes it possible to
run ad hoc analysis as needed.
To understand in more detail how Pivotal addresses these challenges, let’s review the
blueprint for building a Business Data Lake, and then see how applications can take
advantage of it.
Business Data Lake Architecture
The figure below shows the key tiers of a Business Data Lake. Data flows from left to
right. The tiers on the left depict the data sources, while the tiers on the right depict
the integration points where insights from the system are consumed.
Figure 1: Business Data Lake Architecture
The figure shows data flowing from sources through real-time, micro-batch, and mega-batch ingestion into HDFS storage (unstructured and structured data), then up through the distillation tier (SQL, MapReduce), the processing tier (SQL, in-memory, MPP database), and the insights tier (real-time, interactive, and batch insights over SQL and NoSQL query interfaces) to the action tier, with a unified data management tier (data management services, MDM, RDM, audit and policy management) and a unified operations tier (system monitoring, system management, workflow management) spanning all tiers.
This figure also depicts the timeliness of data. The lower levels of the figure represent
data that is mostly at rest, while the upper levels depict real-time transactional data
that needs to flow through the system with as little latency as possible.
Unified operations that apply to all data – such as auditing and policy management,
systems management, and the workflow that manages how data moves between the
tiers – are represented separately in the figure.
Next, let’s take a closer look at some of the key elements of the Business Data Lake.
Data storage tier
Different applications impose different requirements on the storage infrastructure.
One class of application requires a real-time response to data access requests, while
another class requires access to all historical data. A holistic approach therefore needs
both storage of all the data and real-time access to selected data.
Figure 2: HDFS storage
The figure shows an HDFS client performing file-system, namespace, and metadata operations against the NameNode (backed by a Secondary NameNode), while DataNodes serve data, write to local disk, and exchange heartbeats, balancing, and replication traffic.
The current explosion of both structured and unstructured data demands a cost-effective, reliable storage mechanism. The Hadoop Distributed File System (HDFS)2
has emerged as the predominant solution, providing a low-cost landing zone for all
data that is at rest in the system. One of the key principles of Hadoop is the ability
to store data “as is” and distill it to add the necessary structure as needed. This
“schema-on-read” principle eliminates the need for heavy extract, transform, load (ETL)
processing of data as it is deposited into the system.
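To make the schema-on-read principle concrete, here is a minimal HiveQL sketch (the path, table, and columns are hypothetical, chosen only for illustration): a schema is projected onto raw files already sitting in HDFS, with no ETL performed at load time.

-- Raw click events were landed in HDFS as-is; no ETL ran on ingest.
-- The external table simply projects a schema onto those files at read time.
CREATE EXTERNAL TABLE raw_clickstream (
  event_time STRING,
  user_id    STRING,
  url        STRING,
  referrer   STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LOCATION '/data/landing/clickstream/';   -- hypothetical landing-zone path

-- The raw files can now be queried without ever having been transformed.
SELECT url, COUNT(*) AS views
FROM raw_clickstream
GROUP BY url;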
Support for real-time responses. Many systems need to react to data in real time.
For these systems, the latency of writing the data to disk introduces too much delay.
Examples include the new class of location-aware mobile applications, or applications
that have to respond to events from machine sensors. For these systems, an
in-memory data solution provides the ability to collect and respond to data with very
low latency while it is in motion, and to persist the data on HDFS when at rest.
Figure 3: Support for real-time responses
The figure shows a cluster of GemFire in-memory database nodes layered on top of Hadoop.
Distillation tier
Many factors influence the location of data processing. On the one hand, data
workers have their preferred access interfaces; on the other, the way data is stored
also makes one access interface preferable over others. This interface gap needs to
be bridged, which may require data movement and sometimes data transformation.
Here the Business Data Lake differs from traditional EDW solutions. With the Pivotal
approach, the process of ingesting, distilling, processing, and acting upon the data
does not rely on a pre-ordained canonical schema before you can store data. Instead,
raw data can be ingested and stored in the system in its native form until it is needed.
Schema and structure are added as the raw data is distilled and processed for action
using MapReduce jobs.
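As an illustrative sketch of this distillation step (table and field names are hypothetical), the same idea can be expressed in HiveQL, which Hadoop compiles into MapReduce jobs that parse the raw records and write out a structured table:

-- Distill raw, semi-structured log lines into a structured table.
-- Hive turns this statement into MapReduce jobs over the files in HDFS.
CREATE TABLE distilled_orders AS
SELECT
  regexp_extract(line, '^(\\S+)', 1)                        AS order_id,
  regexp_extract(line, 'cust=(\\S+)', 1)                    AS customer_id,
  CAST(regexp_extract(line, 'amt=([0-9.]+)', 1) AS DOUBLE)  AS amount
FROM raw_order_log
WHERE line LIKE '%amt=%';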
As raw data starts to take on more structure through the distillation process,
analytical or machine learning algorithms can be applied to the data in place to
extract additional insights from it. Individual events or transactions can also be
streamed into the in-memory portion of the data lake, and, thanks to the system’s grid
nature, can be processed with very low latency.
The results of these real-time processes can be asynchronously moved to HDFS to
be combined with other data in order to provide more insights for the business, or
trigger other actions.
Unified data management tier
To manage and provide access to all the data that is collected in the Business Data
Lake, authorized data workers can access data sets through a self-service portal that
allows them to look through a metadata catalog of the data in the system and create
a single view of data from across the company. Workflows in the system allow users
to lease sandboxes of data and start complex analytics processes. When their policy-controlled leases expire, the resources are released back to the system until they are
needed again.
Figure 4: Unified data management tier
The figure shows analysts using data provisioning, analytics and collaboration tools against sources and analytical sandboxes (APIs, file farms, databases, Hadoop, cloud, streams) through a request broker with mapping and search over a metadata repository, governed by access control, data movement, data filtering and transformation, data provisioning and sandbox controls, and workflow orchestration with load balancing.
2 http://yoyoclouds.wordpress.com/2011/12/
Insights tier
Insights from the Business Data Lake can be accessed through a variety of
interfaces. In addition to Hadoop query interfaces like Hive or Pig, SQL – the lingua
franca of the data world – can be used. Interactive analytics tools and graphical
dashboards enable business users to join data across different data sets and draw
insights.
To satisfy the needs of the data scientist community, MADlib analytic libraries can
perform mathematical, statistical, and machine learning analysis on data in the
system. Real-time insights can also generate external events – for example they can
trigger actions in areas like fraud detection, or send alerts to mobile applications.
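As a hedged illustration of what such in-database analytics can look like, the sketch below trains a linear regression model with MADlib directly over a hypothetical table of distilled data; the data never leaves the data tier.

-- Train a linear regression model in-database with MADlib.
-- houses(price, bedrooms, bathrooms, size_sqft) is a hypothetical table.
SELECT madlib.linregr_train(
  'houses',                                    -- source table
  'houses_price_model',                        -- output table for the model
  'price',                                     -- dependent variable
  'ARRAY[1, bedrooms, bathrooms, size_sqft]'   -- independent variables (with intercept)
);

-- Inspect the fitted coefficients and goodness of fit.
SELECT coef, r2 FROM houses_price_model;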
Components for the Business Data Lake
Having reviewed the Business Data Lake architecture, we’ll now consider the
enterprise-class products from Pivotal that you can integrate to create your own
Business Data Lake. The table below summarizes these products.
Product: Greenplum Database
Description: A massively parallel platform for large-scale data analytics warehouses to manage and analyze petabytes of data – now available with the storage tier integrated on Hadoop HDFS (with the HAWQ query engine). Includes MADlib, an in-database implementation of parallel common analytics functions.

Product: GemFire
Description: A real-time distributed data store with linear scalability and continuous uptime capabilities – now available with storage tier integrated on Hadoop HDFS (GemFire XD).

Product: Pivotal HD
Description: Commercially supported Apache Hadoop. HAWQ brings enterprise-class SQL capabilities, and GemFire XD brings real-time data access to Hadoop.

Product: Spring XD
Description: Spring XD simplifies the process of creating real-world big data solutions. It simplifies high-throughput data ingestion and export, and provides the ability to create cross-platform workflows.

Product: Pivotal Data Dispatch
Description: On-demand big data access across and beyond the enterprise. PDD provides data workers with security-controlled, self-service access to data. IT manages data modeling, access, compliance, and data lifecycle policies for all data provided through PDD.

Product: Pivotal Analytics
Description: Provides the business community with visualizations and insights from big data. Data from different sources can be joined to create visualizations and dashboards quickly. Pivotal Analytics can infer schemas from data sources, and automatically creates insights as it ingests data, freeing up business analysts to focus on analyzing data and generating insights rather than manipulating data.
The next table relates these Pivotal products to specific Business Data Lake tiers.
Often, there are multiple products to support a given tier, sometimes with overlapping
capabilities, so you need to pick the appropriate product for your business
requirements. The discussion following the table will help with these choices.
Storage – Ability to store all (structured and unstructured) data cost-efficiently in the Business Data Lake
• Pivotal HD: HDFS is the storage protocol on which the industry is standardizing for all types of data.
Ingestion – Ability to bring data from multiple sources across all timelines with varying Quality of Service (QoS)
• GemFire XD: Ideal platform when real-time performance, throughput and scalability are crucial.
• Spring XD: Ideal platform when throughput and scalability are critical, with very good latency.
• Pivotal HD: Flume and Sqoop are some of the open source ingestion products. Ideal when throughput and scalability are critical with reasonable latency expectations.
Distillation – Ability to take data from the storage tier and convert it to structured data for easier analysis by downstream applications
• Pivotal Data Dispatch: Ideal self-serve platform to convert data from the ingested input format to the analytics format. Essentially, it is about running ETL to get the desired input format.
• Pivotal Analytics: Provides analytics insights such as text indexing and aggregations on data ingest.
• ETL products: Clients can also use industry-standard ETL products such as Informatica or Talend to transform data from the ingested input format to the analytics format.
Processing – Ability to run analytical algorithms and user queries with varying QoS (real-time, interactive, batch) to generate structured data for easier analysis by downstream applications
• Pivotal HD: Ability to analyze any type of data using the Hadoop interfaces for data analysis such as Hive, HBase, Pig and MapReduce.
• HAWQ: Process complex queries and respond to user requests in interactive time.
• GemFire XD: Process queries and respond to user requests in real time.
• Spring XD: Manage cross-platform workflows.
Insights – Ability to analyze all the data with varying QoS (real-time, interactive, batch) to generate insights for business decisioning
• Pivotal HD: Extract insights from any type of data using the Hadoop interfaces for data analysis, such as Mahout, Hive, HBase, Pig and MapReduce.
• HAWQ: Extract insights in interactive time using complex analytical algorithms.
• GemFire XD: Extract insights in real time from data stored in memory.
Action – Ability to integrate insights with business decisioning systems to build data-driven applications
• AppFabric: Redis and RabbitMQ are used to integrate with existing business applications. New business applications can use Spring or other products.
• GemFire: Continuous query mechanism provides CEP-like capability to react to events as they happen.
• Pivotal CF: Improves application development velocity by simplifying the deployment of applications to your public or private cloud.
Unified data management – Ability to manage the data lifecycle, access policy definition, and master data management and reference data management services
• Pivotal Data Dispatch: Enables IT to define metadata centrally for data workers to find and copy data in the sandbox environment. However, master data management and reference data management services are capabilities not currently available from the Pivotal data fabric products.
Unified operations – Ability to monitor, configure, and manage the whole data lake from a single operations environment
• Pivotal Command Center: Unified interface to manage and monitor Pivotal HD, HAWQ and GemFire XD3. Pivotal is continuing to improve unified operations across the platform.
Designing the Business Data Lake
There are many architectural tradeoffs to consider in designing a Business Data
Lake. Pivotal’s products work together to help you build a solution that meets your
specific needs. In this section we’ll dive deeper into the architecture and explain how
individual components fit in, with some comments as to which options may work best
for a specific business requirement.
Data ingestion using Pivotal products
One of the principal differences between a data lake and a traditional EDW approach
is the way data is ingested. Rather than performing heavyweight transformations on
data to make it conform to a canonical data model, the data can be ingested into the
data lake in its native form. It is important to think of data ingestion in terms of batch
size and frequency – data arriving into a system can be grouped into mega batches,
micro batches, and streams of real-time data. The Pivotal product portfolio has ways
to deal with each of these groups.
Ingesting real-time data (“streaming”). Real-time data needs to be collected from
devices or applications as it is generated, one event at a time. Much critical streaming
enterprise data is revenue bearing – for example, credit card transactions – so
data quality, reliability and performance are vital. They can be assured by ingesting
transactions with Pivotal GemFire, which supports traditional XA transactions,
and keeps multiple redundant copies of data to provide fault tolerance, while
also achieving great performance. There is also a growing segment of streaming
data where extreme scalability is more important than data quality and reliability:
for example, sensor data or page view data. For this data, GemFire supports
configurable levels of reliability, allowing you to maximize performance, or balance
performance and reliability, to meet your needs. Where extreme scalability isn’t a
requirement, Spring XD can be an excellent choice for streaming real-time data. It
uses in-memory technology for great performance, but also focuses on making it
easy to configure inputs and outputs to implement enterprise integration patterns. A
Pivotal representative can help you determine whether GemFire or Spring XD is better
for your specific application.
Ingesting batches of data. Spring XD excels in use cases where small chunks of
data are batched up and pushed between systems at a regular frequency. It provides
transaction management, chunking of data, and a restart mechanism built specifically
for batch processing. Custom processing can be executed on batches while the
data is still being ingested, before it is stored on HDFS. The processing can be as
simple as transformations and lookup, or as complex as machine learning, scoring or
address cleanup.
Bulk data ingest. Often, large amounts of data need to be moved from one platform
to another. A key to success in bulk data transfer is maximizing network bandwidth
without impacting other applications that share the same network resources. Pivotal
DataLoader is designed specifically for this: it can ingest data at wire speed, but
also includes a throttling capability to ensure other applications aren’t affected.
DataLoader also provides the ability to restart or resume the transfer process if it
detects a failure.
3 GemFire Integration with Command Center is on the product delivery roadmap
The two diagrams below depict the spectrum of data loading challenges, and the
tools in the Pivotal Business Data Lake portfolio that address them.
Figure 5: Spectrum of data loading challenges
The diagrams map event access methods (real-time, interactive, batch) – lookup, event storage, query and analytics – to structured interfaces (SQL, HiveQL, HBase APIs) and show data distillation, using connectors, programs and models to convert unstructured data into structured data with tools such as GemFire XD, HAWQ, HBase, Hive, Pig and MapReduce.
Data distillation and processing with Pivotal products
Data distillation and processing are related but separate topics in a data lake
environment. Distillation is about refining data and adding structure or value to
it. Processing is about triggering events and executing business logic within the
data tier.
Distillation. When accessing unstructured data in the Business Data Lake, standard
Hadoop MapReduce jobs can distill it into a more usable form in Pivotal HD. The
same MapReduce jobs can also access HAWQ and GemFire XD data. Examples of
distilling data to add structure include complex image processing, video analytics,
and graph and text analytical algorithms. All of them use the MapReduce interfaces to
generate insights from unstructured data. Pivotal HD also supports standard Hadoop
ecosystem tools like Pig and Hive, providing a simpler way to create MapReduce jobs
that add structure to unstructured data.
Processing. It is important that real-time events can be acted upon in real time. For
example, a mobile app that offers a free cup of coffee to people who walk by a store
must do so instantly if it is to be effective. This real-time action can be achieved using
Pivotal GemFire, which stores data in the memory of a grid of computers. That grid
also hosts event handlers and functions, allowing client applications to subscribe
to notifications for “continuous queries” that execute in the grid. With GemFire,
you can build the type of applications normally associated with complex event
processing packages.
Pivotal products for insights
To gain insight from your Business Data Lake, you need to provide access through
a standard SQL interface: this way, you can use existing query tools on big data.
Pivotal’s HAWQ component is a full SQL query engine for big data, with a distributed
cost-based query optimizer that has been refined in the Greenplum database to
maximize performance for big data queries.
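For instance, an ordinary BI-style query such as the sketch below (table and column names are hypothetical) can be pointed directly at data in the lake; HAWQ's optimizer parallelizes it across the cluster.

-- A standard analytic query, run through HAWQ over data stored in HDFS.
SELECT c.region,
       date_trunc('month', o.order_date)  AS month,
       SUM(o.amount)                      AS revenue,
       COUNT(DISTINCT o.customer_id)      AS active_customers
FROM distilled_orders o
JOIN customers c ON c.customer_id = o.customer_id
GROUP BY c.region, date_trunc('month', o.order_date)
ORDER BY month, revenue DESC;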
Insights with significant business value usually need to be delivered in real time.
Responding to news in high-frequency trading systems, detecting fraudulent activity
in financial transactions, identifying intruders from access logs, performing in-session
targeting – these are just a few examples of real-time insights. GemFire’s complex
event processing and data alerting ability generates these insights, which can then
be acted on by automated applications. Spring XD also has capabilities for running
simple analytics to get insights while data is in motion.
Pivotal Analytics provides the business community with insights from big data, joining
data from different sources to create visualizations and dashboards quickly. A key
feature is that it can infer schemas from data sources, automatically creating insights like
time series analysis and text indexes so that business analysts can spend most of
their time working with insights rather than manipulating data.
For even more sophisticated data analytics, Pivotal supports the MADlib library
directly inside the data tier, providing scalable in-database analytics. There are data-parallel implementations of mathematical, statistical and machine-learning methods
for structured and unstructured data. Pivotal’s data tier also provides the ability to run
R models directly inside the database engine, so your models can be parallelized and
can access data in place.
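As a small, hedged example of that last capability, the sketch below wraps an R computation as an in-database function using PL/R; the function, tables, and columns are hypothetical and shown only to illustrate the pattern.

-- Register an R computation as a database function with PL/R, so it runs
-- next to the data (and in parallel) rather than on a client workstation.
CREATE OR REPLACE FUNCTION r_quantile(float8[], float8)
RETURNS float8 AS
$$
  quantile(arg1, probs = arg2, na.rm = TRUE)
$$ LANGUAGE 'plr';

-- Apply it per region over data already in the database.
SELECT c.region, r_quantile(array_agg(o.amount), 0.95) AS p95_order_amount
FROM distilled_orders o
JOIN customers c ON c.customer_id = o.customer_id
GROUP BY c.region;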
Taking action on data with Pivotal products
A Business Data Lake implementation enables enterprise users to ingest data,
manage data, and generate insights from data. Putting these insights into action
requires new applications, however. The Spring Tool Suite helps you build new big-data-driven applications rapidly; you can then integrate them with your business
decisioning systems. In addition, integration components such as Spring XD and
RabbitMQ enable you to integrate the insights from your data into existing business
applications across the enterprise. Pivotal CF provides a new-generation application
container for the cloud that increases development velocity by dramatically
decreasing the time it takes to deploy applications to your public or private cloud.
Managing data with Pivotal products
A Business Data Lake enables you to keep all data on a single storage platform,
and to achieve flexibility while maintaining a stable global perspective. All of this
requires the ability to find and share data among business users. With Pivotal Data
Dispatch, your IT team can make selected data available for sharing in accordance
with your access policies. Business users can also find a data set they’re interested
in and bring it on demand to the platform that provides their preferred interfaces
for data analysis. Business users and IT teams can specify the transformations and
processing required on the data as it is moved.
Unified operations in the Pivotal product stack
The Business Data Lake requires unified monitoring and manageability for data,
users, and the environment. The Pivotal HD and HAWQ interfaces are now integrated
for operability and manageability, while GemFire XD, Pivotal Data Dispatch and
DataLoader currently have separate interfaces. As Pivotal’s manageability unification
strategy progresses, components of the Business Data Lake will be managed from
the unified interfaces.
Maximizing flexibility
For the Business Data Lake approach to succeed, it’s crucial to offer flexibility at local
level while providing a certified global view of the data. You can use various design
patterns, some of which are outlined below, to provide a standard schema for the
global view and enable flexibility for extensions/enhancements in the local view of the
same data.
Real-time global integration
GemFire XD enables a design pattern where global information is kept in a database
that provides a real-time view. Local information is stored locally, without impacting
the global certified data.
Dealing with variable data quality
It is essential to understand the quality of the data that your business bases
its decisions on. Data quality issues show up in various ways:
• Incomplete data – some data either is missing or arrives late
• Invalid data – there is bad or otherwise erroneous data
• Uncertainty – you don’t know how much of the data is accurate
These pointers can give directional guidance to business users; however,
this may not be enough in the case of financial applications, where stringent
controls and validations are required.
Keeping most of the data available in real time, as is possible with GemFire
XD, facilitates a consistent view of important data across the enterprise.
Additionally, the quality of data can be indicated, so that users can take it
into account before leveraging that data for business decisioning. Where
appropriate, business processes can be triggered only when the
data quality is 100%, i.e. certified and approved.
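One minimal way to express this kind of quality gating (the table, flag values, and view are assumptions for illustration only) is to carry an explicit quality indicator with each record and expose only certified data to the processes that demand it:

-- Each record carries an explicit data-quality indicator.
ALTER TABLE distilled_orders ADD COLUMN quality_status TEXT DEFAULT 'uncertified';

-- Business users can see the flag and weigh it in their decisions ...
SELECT order_id, amount, quality_status FROM distilled_orders;

-- ... while stringent financial processes read only fully certified data.
CREATE VIEW certified_orders AS
SELECT * FROM distilled_orders
WHERE quality_status = 'certified';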
Multiple views
HAWQ and GemFire XD are both capable of joining data at scale, a capability that
can be leveraged when designing applications. An enterprise’s users need to agree
on the global fields, and strictly enforce standardization by placing all the global data
in a base table. Further tables can then be derived from this one to provide local
flexibility; these tables will store additional local information and relate it back to the
base table. Local data workers can then join the tables to get the view they require
for data analysis. Depending on the usage of the local extensions, the views can be
materialized to optimize performance, or joined opportunistically as required.
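A hedged SQL sketch of this pattern (all table and column names are hypothetical) might look as follows: the agreed global fields live in a base table, a business unit keeps its extra attributes in a local extension table, and a view joins them back together for local analysis.

-- Global, certified customer attributes agreed across the enterprise.
CREATE TABLE customer_global (
  customer_id BIGINT,
  name        TEXT,
  country     TEXT,
  segment     TEXT
);

-- Local extension owned by one business unit; rows relate back to the
-- base table through customer_id, without touching the global schema.
CREATE TABLE customer_retail_ext (
  customer_id      BIGINT,
  preferred_branch TEXT,
  loyalty_tier     TEXT
);

-- Local view: global fields plus the unit-specific extensions.
-- It can be materialized if the local extension is queried heavily.
CREATE VIEW customer_retail_view AS
SELECT g.*, e.preferred_branch, e.loyalty_tier
FROM customer_global g
LEFT JOIN customer_retail_ext e ON e.customer_id = g.customer_id;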
Conclusion
The Pivotal Business Data Lake provides a flexible blueprint to meet your business’s
future information and analytics needs while avoiding the pitfalls of typical EDW
implementations. Pivotal’s products will help you overcome challenges like reconciling
corporate and local needs, providing real-time access to all types of data, integrating
data from multiple sources and in multiple formats, and supporting ad hoc analysis.
It combines the power of modern BI and analytics in an integrated operational
reporting platform that will provide your entire enterprise with valuable insights to
inform decision-making. By using this approach to make the most of your data,
you can improve the performance of both new and existing EDW systems, and also
extend their life.
Pivotal and Capgemini are co-innovating to bring market-leading technology, best
practices and implementation capabilities to our enterprise customers.