Data virtualization allows applications to access and manipulate data without knowledge of physical data structures or locations. Teiid is a data virtualization system comprising tools, components, and services for creating and executing bidirectional data services across distributed, heterogeneous data sources in real time, without moving data. Teiid includes a query engine, an embedded driver, a server, connectors, and tools for creating virtual databases (VDBs) containing models that define data structures and views. Models represent data sources or abstractions and must be validated and configured with translators and resource adapters to access physical data when a VDB is deployed.
Data Virtualization
Data virtualization is any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted or where it is physically located.
Unlike the traditional extract, transform, load ("ETL") process, the data remains in place and real-time access is given to the source system for the data, thus reducing the risk of data errors and the workload of moving around data that may never be used.
Unlike data federation, it does not attempt to impose a single data model on the data (heterogeneous data). The technology also supports writing transactional data updates back to the source systems.
To resolve differences in source and consumer formats and semantics, various abstraction and transformation techniques are used.
Data virtualization is a subset of data integration and is commonly used within business intelligence, service-oriented architecture data services, cloud computing, enterprise search, and master data management.
Functionality
Data virtualization software is an enabling technology which provides some or all of the following capabilities:
• Abstraction – Abstracts the technical aspects of stored data, such as location, storage structure, API, access language, and storage technology.
• Virtualized Data Access – Connects to different data sources and makes them accessible from a common logical data access point.
• Transformation – Transforms, improves the quality of, and reformats source data for consumer use.
• Data Federation – Combines result sets from across multiple source systems.
• Data Delivery – Publishes result sets as views and/or data services, executed by client applications or users on request.
Data virtualization software may also include functions for development, operation, and/or management.
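To make the federation and delivery capabilities concrete, here is a minimal sketch (plain Java, holding ANSI-style view DDL as a string) of the kind of virtual view a data virtualization layer hosts. The schema names ora_crm and csv_files, the table names, and the columns are all hypothetical, and the DDL is generic rather than any particular product's dialect.

public class VirtualViewSketch {
    // Hypothetical federated view: a relational CRM source and a flat-file
    // extract are unioned, with a small transformation (UPPER) applied, so
    // consumers see one logical table and never touch either physical source.
    static final String ALL_CUSTOMERS_DDL =
        "CREATE VIEW all_customers AS "
      + "SELECT id, name, UPPER(region) AS region FROM ora_crm.customers "
      + "UNION ALL "
      + "SELECT id, name, UPPER(region) AS region FROM csv_files.customer_extract";

    public static void main(String[] args) {
        // In a real deployment this definition would be registered with the
        // virtualization layer; here we simply print it.
        System.out.println(ALL_CUSTOMERS_DDL);
    }
}

Consumers then query all_customers like any single table; the virtualization layer decomposes the query and fetches from each source at request time.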
Benefits
• Reduce the risk of data errors
• Reduce systems workload by not moving data around
• Increase speed of access to data on a real-time basis
• Significantly reduce development and support time
• Increase governance and reduce risk through the use of policies
• Reduce the data storage required
Drawbacks
• May impact the response time of operational systems, particularly if under-scaled to cope with unanticipated user queries or not tuned early on
• Does not impose a single data model on the heterogeneous data, meaning the user has to interpret the data unless it is combined with data federation and a business understanding of the data
• Requires a defined governance approach to avoid budgeting issues with the shared services
• Not suitable for recording historical snapshots of data - a data warehouse is better for this
• Change management "is a huge overhead, as any changes need to be accepted by all applications and users sharing the same virtualization kit"
Red Hat JBoss Data Virtualization (Teiid)
JBoss Data Virtualization is a lean, virtual data integration solution that turns fragmented data into
actionable information at business speed. It aggregates data spread across physically diverse
systems, such as multiple databases, XML files, and Hadoop systems, and makes it all appear as a
set of tables in a local database.
- Build Virtual Database Stores
Complete data provisioning, federation, integration, and management through the creation of
virtual logical data models.
- Access via Standard Means
Developers can use JBoss Developer Studio, DDL-based virtual database definitions, and native
queries to access data.
- Supports Most Database Types
Support for Apache Hadoop, NoSQL, JBoss Data Grid, and MongoDB, as well as a variety of data
services such as SAP and Salesforce.com.
Why Red Hat JBoss Data Virtualization (Teiid)?
1. Familiar Interface: JDBC
Teiid has a very familiar interface: JDBC! Every Java developer is familiar with JDBC access to
data sources. Now you can leverage your knowledge of the JDBC standard to access all your data
sources, as the sketch after this list illustrates.
• JDBC 4.0 API
• DML SQL-92 support (with select SQL-99 and later features)
• Support for standard JDBC scalar functions
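For illustration, here is a minimal sketch of what JDBC access to a deployed VDB can look like. The VDB name (CustomerVDB), table name, host, and credentials are hypothetical; the URL format and driver class follow Teiid's JDBC conventions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TeiidJdbcExample {
    public static void main(String[] args) throws Exception {
        // Register the Teiid JDBC driver (not needed on JDBC 4.0+ classpaths)
        Class.forName("org.teiid.jdbc.TeiidDriver");

        // URL format: jdbc:teiid:<vdb-name>@mm://<host>:<port>
        // "CustomerVDB", the host, and the credentials are hypothetical
        try (Connection conn = DriverManager.getConnection(
                "jdbc:teiid:CustomerVDB@mm://localhost:31000", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```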
2. Familiar Query Language: SQL
Want to query non-SQL sources in the same way you do SQL sources? With Teiid, you can! You
can access data from any type of source, and interact with those sources using a single flavor of
SQL - even if the native sources do not understand SQL!
• DML SQL-92 support (with select SQL-99 and later features)
• Issue SQL to any data source -- see currently supported sources
• Level the data access playing field, using one version of SQL dialect, scalar functions,
and datatypes
3. Multiple Sources Look Like One
With Teiid, you can join and union data that resides in very dissimilar data sources. Multiple
sources suddenly look like a single source to your application; a hypothetical federated query is
sketched after this list.
• Joins across data sources
• Unions across data sources
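The following is a hypothetical federated query. The model names (OracleModel, FilesModel) and columns are made up, but the pattern - joining tables that live in different physical sources within one SQL statement - is exactly what the section above describes.

```sql
-- Join a table in a relational source with a table backed by text files.
-- Model, table, and column names are hypothetical.
SELECT c.CUSTOMER_NAME, o.ORDER_TOTAL
FROM OracleModel.CUSTOMERS AS c
JOIN FilesModel.ORDERS AS o
  ON c.CUSTOMER_ID = o.CUSTOMER_ID
WHERE o.ORDER_TOTAL > 1000;
```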
4. Easy To Deploy
The Teiid query engine is a Java component - it plugs right into your application, like any other
Java library. Deployment is simple; see the embedded sketch after this list.
• Embed in plain old Java app
• Deploy to app servers
• Available as a stand-alone server in JBoss Enterprise Data Services Platform
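As a rough sketch of the embedded option: the class names here follow Teiid's embedded API (org.teiid.runtime) but may differ by version, and the VDB file and name are hypothetical. In a real setup you would also register translators and connection factories for each source before deploying.

```java
import java.io.FileInputStream;
import java.sql.Connection;

import org.teiid.runtime.EmbeddedConfiguration;
import org.teiid.runtime.EmbeddedServer;

public class EmbeddedTeiidExample {
    public static void main(String[] args) throws Exception {
        // Start the query engine inside this JVM - no app server required.
        EmbeddedServer server = new EmbeddedServer();
        server.start(new EmbeddedConfiguration());

        // Deploy a (hypothetical) dynamic VDB definition, then connect
        // through the JDBC driver the embedded server exposes.
        server.deployVDB(new FileInputStream("customer-vdb.xml"));
        try (Connection conn = server.getDriver()
                .connect("jdbc:teiid:CustomerVDB", null)) {
            // issue federated queries via plain JDBC here
        }
    }
}
```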
5. Eliminate Hand-coded Data Access Logic
Real applications often access more than one data source. We know that. Teiid technology, from
MetaMatrix, has been in the business of enterprise data integration since 1999. Many of you have
built your own frameworks to handle integrating multiple sources, and have realized the difficulty
of doing that in a generic manner that performs and scales well under real use conditions. Now you
can retire your custom frameworks and hand-coded logic, and use a dedicated query component for
all your data access needs. This lets you focus on the logic on top of the data access layer rather
than the nuts and bolts of accessing heterogeneous data uniformly.
• Cheaper - than hand-coding and maintaining hand-coded integration, and re-inventing
integration logic on every project
• Better - than non-optimized integration logic that does not make use of a real query
engine
• Faster - to implement your projects, leveraging the integration logic already built into
Teiid, and reusing that logic on other projects
6. Battle Tested - and Improving
You don't want to be a guinea pig for someone's "product" experiments. Don't worry - with Teiid,
you won't have to. Teiid is a component form of the query engine that is the heart of the JBoss
Enterprise Data Services Platform (JBEDSP), which is used by large commercial organizations,
independent software vendors, and many federal agencies, including intelligence agencies
responsible for protecting citizens in the U.S. and other countries. These are organizations that
cannot and do not play with toys, so you can have confidence that our products have been put
through the wringer a number of times.
• Used by Fortune 500 companies and Government Intel agencies
• Used by independent software vendors
• Large data sets, small data sets, data sets with quirky characteristics
• Relational data, XML data, and data from sources you've never even heard of!
7. Optimized
Part of being battle-tested is operating at expected levels of performance in a wide variety of
enterprise solutions. Teiid accounts for the unique requirements of integrating information across
disparate data sources.
• Cost-based optimizer
• Accounts for federating data across heterogeneous systems
• Caches result sets for user queries and queries to sources
8. Scriptable Integration
Teiid comes with an administrative shell that allows programmatic access to administrative features.
9. Works Like a Charm - Fast
Your time is precious - we know that. You can't waste your time investigating every newfangled
product and solution marketed to you. With Teiid, you don't have to. In 30 minutes, you can
demonstrate to yourself that you can issue federated queries against 2 of your own databases.
• 30 minutes to get started
10. Tip of the Iceberg
Still not convinced? What if we told you that all this was merely the tip of the iceberg? That's right
- there's more! Not only can you do more with the Teiid query engine, but everything you do can
be leveraged and extended with the Teiid Server and JBoss Enterprise Data Services Platform.
With Teiid Designer, you get the following additional functionality:
• Data abstraction through an Eclipse-based modeling tool
• Relational views - of any type of data
• XML views of non-XML data (XSD-compliant)
• Data Services - rapid design and deploy
• For Web services architectures
• For general services-oriented architectures (SOAs)
Moving up to the JBoss Enterprise Data Services Platform suite enables you to take advantage of
the following enterprise-level features:
• Extensive connectivity to enterprise sources
• Support for packaged applications such as SAP
• Security
• Authentication and authorization (entitlements)
• Integration of external authentication/user systems
• Model management
• Searchable metadata for dependency and impact analyses
• Monitoring and administration
• Enterprise administration and monitoring console
Teiid is a data virtualization system that allows applications to use data from multiple,
heterogeneous data stores. Teiid is comprised of tools, components and services for creating and
executing bi-directional data services. Through abstraction and federation, data is accessed and
integrated in real-time across distributed data sources without copying or otherwise moving data
from its system of record.
Teiid Parts
• Query Engine - The heart of Teiid is a high-performance query engine that processes
relational, XML, XQuery and procedural queries from federated data sources. Features
include support for homogeneous schemas, heterogeneous schemas, transactions, and
user-defined functions.
• Embedded - An easy-to-use JDBC driver that can embed the query engine in any Java
application.
• Server - An enterprise-ready, scalable, manageable runtime for the query engine that runs
inside JBoss AS and provides additional security, fault-tolerance, and administrative
features.
• Connectors - Teiid includes a rich set of Translators and Resource Adapters that enable
access to a variety of sources, including most relational databases, web services, text
files, and LDAP. Need data from a different source? A custom translator and resource
adapter can easily be developed.
• Tools:
  • Create - Use Teiid Designer to define virtual databases containing views,
    procedures or even dynamic XML documents.
  • Monitor & Manage - Use the Teiid Web Console with just the AS, or the Teiid
    RHQ plugin, to control any number of servers.
  • Script - Use the Teiid AdminShell to automate administrative and testing tasks.
Virtual Databases
The Virtual Database
A virtual database (or VDB) is a container for components used to integrate data from multiple
data sources, so that they can be accessed in an integrated manner through a single, uniform API.
A VDB contains models, which define the structural characteristics of data sources, views, and
Web services.
VDB Creation and Validation
There are two types of VDBs. A dynamic VDB is defined using a simple XML file. This XML file
defines the sources to integrate and provides access through JDBC, so that user queries can be
written against the VDB using all the defined sources as if they were a single source. A dynamic
VDB does not offer view/abstraction layers. A minimal sketch of such a file follows.
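The sketch below assumes hypothetical model names and JNDI bindings; the exact schema for this XML file can be found in the Teiid documents, as noted later in this article.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vdb name="CustomerVDB" version="1">
    <!-- Each model maps to a configured source. The model names,
         translators, and JNDI bindings shown here are hypothetical. -->
    <model name="OracleModel">
        <source name="oracle-source"
                translator-name="oracle"
                connection-jndi-name="java:/OracleDS"/>
    </model>
    <model name="FilesModel">
        <source name="file-source"
                translator-name="file"
                connection-jndi-name="java:/FileDS"/>
    </model>
</vdb>
```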
Teiid Designer, an Eclipse-based GUI tool, can also be used to create VDBs. It lets you not only
define source models and import metadata and statistics from them, but also define relational and
XML views on top of those sources. This allows you to abstract the structure of the information
you expose to, and use in, your applications from the underlying physical data structures.
VDBs can contain one or more models representing the information to be integrated and exposed
to consuming applications. Models must be in a valid state in order for the VDB to be used for data
access. Validation of a single model means that it must be in a self-consistent and complete state,
meaning that there are no "missing pieces" and no references to non-existent entities. Validation of
multiple models checks that all inter-model dependencies are present and resolvable.
A VDB must always be in a complete state, meaning that all information is contained within the
VDB itself -- there are no external dependencies.
Deploying a VDB for Data Access
After a VDB is defined, it must be deployed to the Teiid runtime before it can be accessed. If there
are no errors during deployment to a Teiid Server, and the underlying data sources are configured
correctly, the VDB will be accessible to your client applications.
Accessing Multiple Sources Through a VDB
Once deployed, your VDB can be accessed through JDBC-SQL, SOAP (Web services), SOAP-SQL,
or XQuery.
VDBs, Translators and Resource Adapters
VDBs contain two primary varieties of model types - source and view models. Source models
represent the structure and characteristics of physical data sources, whereas view models represent
the structure and characteristics of abstract structures you want to expose to your applications.
Source models must be associated with a Translator and a Resource Adapter. A Translator provides
an abstraction layer between the Teiid query engine and a physical data source: it knows how to
convert Teiid-issued query commands into source-specific commands and execute them using the
Resource Adapter. It also has the smarts to convert the result data that comes back from the
physical source into the form that the Teiid query engine expects.
A Resource Adapter provides the connectivity to the physical data source, along with a way to
natively issue commands and gather results. A Resource Adapter can connect to an RDBMS data
source, a Web service, a text file, a mainframe, etc. It is often a JCA connector.
You can define the configuration for Translators and Resource Adapters in Teiid Designer. Once
defined, the Translator information, along with the JNDI name of the Resource Adapter, is stored
with the VDB, so that when a VDB is exchanged, the existing settings can be used.
Typically, Resource Adapter configuration information contains user IDs, passwords, and URLs
for the physical data sources. This information is not stored with the VDB. These configurations
are created automatically by Designer for development purposes; however, users need to migrate
them or create new ones for the production environment themselves using the provided tools, such
as the Admin Console.
VDB Execution in Teiid Designer
VDBs can be tested in Teiid Designer by issuing SQL queries in the SQL Explorer perspective. In
this way, you can iterate between defining your integration models and testing them out to see if
they are yielding the expected results.
Your VDB must define a Translator and Resource Adapter for all of its source models in order to
be executable.
VDB File Formats
VDBs are stored in an archive file format, similar to a standard Java JAR format.
Dynamic VDBs are XML files. The schema for the XML file can be found in the Teiid documents.
Models
A model is a representation of a set of information constructs. A familiar model is the relational
model, which defines tables composed of columns and containing records of data. Another familiar
model is the XML model, which defines hierarchical data sets.
In Teiid, models are used to define the entities, and the relationships between those entities,
required to fully define the integration of information sets so that they may be accessed in a
uniform manner using a single API and access protocol.
Source models define the structural and data characteristics of the information contained in data
sources. Teiid uses the information in source models to access the information in multiple sources,
so that from a user's viewpoint these all appear to be in a single source.
In addition to source models, Teiid provides the ability to define a variety of view models. These
can be used to define a layer of abstraction above the physical layer, so that information can be
presented to end users and consuming applications in business terms rather than as it is physically
stored. These business views can take a variety of forms: relational, XML, or Web services. Views
are defined using transformations between models.
Types of Models
Teiid Designer can be used to create several classes of models, each representing a conceptually
different classification:
• Relational, which model data that can be represented in table - columns and records -
form. Relational models can represent structures found in relational databases,
spreadsheets, text files, or simple Web services.
• XML, which model the basic structures of XML documents. These can be "backed" by
XML Schemas. XML models represent nested structures, including recursive hierarchies.
• XML Schema, the W3C standard for formally defining the structure and constraints of
XML documents, as well as the datatypes defining permissible values in XML
documents.
• Web Services, which define Web service interfaces, operations, and operation input and
output parameters (in the form of XML Schemas).
• Model Extensions, for defining property name/value extensions to other model classes.
VDBs contain two primary varieties of model types - source and view. Source models represent
the structure and characteristics of physical data sources, whereas view models represent the
structure and characteristics of abstract structures you want to expose to your applications.
Models and VDBs
Models used for data integration are packaged into a virtual database (VDB). The models must be
in a complete and consistent state when used for data integration; that is, the VDB must contain all
the models and all the resources they depend upon.
Models contained within a VDB can be imported into Teiid Designer. In this way, VDBs can be
used as a way to exchange a set of related models.
Models and Translators, Resource Adapters
Source models must be configured with a Translator and a Resource Adapter before a VDB is
tested in Designer or deployed for data access. Multiple models may use the same settings, but
each model must define these configurations.
Model Validation
Models must be in a valid state in order to be used for data access. Validation of a single model
means that it must be in a self-consistent and complete state, meaning that there are no "missing
pieces" and no references to non-existent entities. Validation of multiple models checks that all
inter-model dependencies are present and resolvable.
Models must always be validated when they are deployed in a VDB for data access purposes.
Model Execution in Teiid Designer
Models can be tested in the Teiid Designer by issuing SQL queries in the SQL Explorer
perspective. In this way, you can iterate between defining your integration models and testing them
out to see if they are yielding the expected results.
Model Files
Models are stored in XML format, using the XMI syntax defined by the OMG.
Model files should never be modified by hand. While it is possible to do so, you may corrupt the
file such that it can no longer be used within the JBoss Enterprise Data Services Platform.
Dynamic VDBs and Models
The information in this article applies to VDBs built using Teiid Designer; if you are building
dynamic VDBs, much of it does not apply. Dynamic VDBs do have models, but those models only
define the configuration for importing metadata and for Translators and Resource Adapters.
Translators and Resource Adapters
Translators
A Translator provides an abstraction layer between the Teiid query engine and a physical data
source: it knows how to convert Teiid-issued query commands into source-specific commands and
execute them using the Resource Adapter. It also has the smarts to convert the result data that
comes back from the physical source into the form that the Teiid query engine expects.
Teiid provides various pre-built translators for sources such as Oracle, DB2, SQL Server, MySQL,
PostgreSQL, XML, and flat files.
A Translator also defines the capabilities of a particular source, such as whether it can natively
support query joins (inner joins, cross joins, etc.) or criteria.
A Translator, along with its Resource Adapter, must always be configured on a source model.
Cross-source queries issued against a VDB running in Teiid result in source queries being issued to
each translator, which interacts with its physical data source.
A Translator is defined by using one of the pre-built ones, or by overriding the default properties
of a pre-built one to define your own. The tooling provides mechanisms to define such override
translators.
Check out "Developer's Guide" on how to create a custom Translator that works with your
Resource
Adaptor.
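This is an illustrative skeleton only. The translator name is hypothetical, and the type parameters and overridden methods depend entirely on the Resource Adapter you pair it with - consult the Developer's Guide for your Teiid version.

```java
import org.teiid.translator.ExecutionFactory;
import org.teiid.translator.Translator;
import org.teiid.translator.TranslatorException;

// Skeleton of a custom translator. The type parameters (connection factory
// and connection classes) are placeholders; a real translator types them
// to match its Resource Adapter.
@Translator(name = "mycustom", description = "Hypothetical custom translator")
public class MyCustomExecutionFactory extends ExecutionFactory<Object, Object> {

    @Override
    public void start() throws TranslatorException {
        super.start();
        // one-time initialization of the translator goes here
    }

    // Override createResultSetExecution(...) and related methods to convert
    // Teiid-issued commands into source-specific calls and map results back.
}
```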
Resource Adapters
A Resource Adapter provides the connectivity to the physical data source, along with a way to
natively issue commands to the source and gather results. A Resource Adapter can connect to an
RDBMS data source, a Web service, a text file, a mainframe, or a custom source you define. It is
often a JCA connector, although there is no restriction on how the connection semantics are
provided to the Translator.
However, if your source needs to participate in distributed XA transactions, then it must be a JCA
connector. Beyond transactions, JCA defines how to handle configuration, packaging, and
deployment, and provides a standard interaction model with the container, connection pools, and
so on. It can be used for more than just Teiid data integration purposes.
An instance of a resource adapter is created by defining a "-ds.xml" file in JBoss AS. This is the
same operation used to create data sources in JBoss AS; a minimal sketch of such a file follows.
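The sketch below uses the classic JBoss AS datasource format; the JNDI name, URL, and credentials are hypothetical.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
    <!-- Hypothetical Oracle datasource. The JNDI name is what a VDB's
         Resource Adapter configuration would reference. -->
    <local-tx-datasource>
        <jndi-name>OracleDS</jndi-name>
        <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
        <driver-class>oracle.jdbc.OracleDriver</driver-class>
        <user-name>scott</user-name>
        <password>tiger</password>
    </local-tx-datasource>
</datasources>
```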
Check out the "Developer's Guide" on how to create a custom Resource Adaptor.
Translator Capabilities
translator capabilities define what processing each translator/source combination can perform. For
example, most relational sources can process joins and unions, whereas when processing delimited
text files these operations cannot be performed by the resource adaptor or the "source" (in this
case,
the file system).
Capabilities are used by the Teiid query engine to determine what subsets of the overall federated
query plan can be pushed down to each source involved in the query.
Translator capabilities define the capabilities of a source in terms of language features (joins,
criteria, functions, unions, sorts, etc.). In addition, the source model defined in a virtual database
may specify additional constraints at the metadata level, such as whether a column can be used in
an exact match or a wildcard string match, or whether tables and columns can be updated. In
combination, these features can be used to more narrowly constrain how users access a source.
The sketch below shows what capability overrides in a translator can look like.
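Continuing the hypothetical translator sketch from earlier: capabilities are exposed as boolean methods on the ExecutionFactory, and the two shown here (and the choices made) are purely illustrative.

```java
import org.teiid.translator.ExecutionFactory;

// Capability methods tell the query engine which operations can be pushed
// down to this translator/source combination.
public class MyCustomExecutionFactory extends ExecutionFactory<Object, Object> {

    @Override
    public boolean supportsInnerJoins() {
        return true;   // e.g. a relational source: push joins to the database
    }

    @Override
    public boolean supportsOrderBy() {
        return false;  // e.g. a flat-file source: the engine sorts instead
    }
}
```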
Resource Adapters and Security
It is possible to use the security systems of individual data sources if desired. When the resource
adapter is a JCA connector, it can be configured with a separate "security-domain" in its "-ds.xml"
file in JBoss AS. However, the calling thread needs to log in to that security context before using
Teiid.
Administering
In Teiid, Translators and Resource Adapters can be configured and monitored using the Teiid
Console or the Teiid Server administrative API.
Data Services
A data service is a standards-based, uniform means of accessing information in a form useful to
business applications.
Since data is rarely in the form required by applications and services, and is often not even in a
single data source, a key requirement for data services is that they abstract the data from its
physical persistence structure, presenting it in a form that is closer to the needs of the consuming
application. This effectively decouples consuming applications from the structure of the underlying
data.
Hand-in-hand with abstraction, a federated query engine is required to execute the transformations
defining the abstraction layers in an efficient manner, and to expose the abstracted structures
through uniform, standard APIs.
The two key components of a data services architecture, then, are:
• a modeling environment, to define the abstraction layers -- views and Web services
• an execution environment, to actualize the abstract structures from the underlying data
and expose them through standard APIs. A query engine is a required part of the
execution environment, to optimally federate data from multiple disparate sources.
See SOAs and Data Services for more information on the role data services play in an SOA.
Technical and Business Viewpoints
Data services can be viewed from a technology vantage point or from a business viewpoint.
The Technology Viewpoint
Teiid provides a suite of projects that deliver data services to business applications. That is, Teiid
provides a means to access integrated data from multiple data sources through your preferred
standards-based API. Teiid provides access to federated information through JDBC (SQL or
XQuery), ODBC (SQL or XQuery), and SOAP (Web services).
The Business Viewpoint
A more business- or user-centric view of data services is that they are information representations
required by business applications. From this perspective, data services are defined and designed by
business analysts, modelers, and developers to represent the information structures required by
business applications. Often, a key design goal is one of interoperability - the requirement that
systems work together seamlessly, including when exchanging data. Teiid provides graphical and
other tools for defining these interoperable data services, essentially relational and XML views
that can be used by business applications in a semantically meaningful manner.
These two viewpoints roughly correspond to the Execution and Modeling components of a data
services solution, respectively.
Data Services - An Essential Part of an SOA
Data services are a key part of a service-oriented architecture, or SOA. They provide the necessary
interface to data for all business services.
• Expose all data through a single uniform interface
• Provide a single point of access to all business services in the system
• Expose data using the same paradigm as business services - as "data services"
• Expose legacy data sources as data services
• Provide a uniform means of exposing/accessing metadata
• Provide a searchable interface to data and metadata
• Expose data relationships and semantics
• Provide uniform access controls to information
Service-Oriented Architectures and Data Services
Service-oriented architectures are all the rage these days, and for good reason. The guiding
principles of SOAs are based on lessons well learned over the brief history of computing, most
notably the decoupling of system components. These same principles motivate the use of data
services in an SOA.
SOAs and Abstraction
Decoupling is the key concept in SOAs and is achieved through abstraction based on service
interfaces. Business processes in an SOA represent a formalized, executable form of the actual
enterprise's processes, but offer a layer of abstraction above the physical processes, be they
automated or manual. Business processes are composed of business services. Just as business
processes in an SOA represent an abstraction from their real-world counterparts, so do business
services offer an abstraction of actual physical services. Decoupling through abstraction imbues
SOAs with immense potential to model business operations independent of the IT infrastructure
du jour.
SOAs, as their name makes clear, are architectures. These architectures, as we've seen, involve
business processes composed of business services. Business processes and services both make use
of business information, which likely resides in many different types and instances of databases
and files. This information can be exposed to business services using the same service-oriented
paradigm - as data services.
Data Services
Just as business processes and services in an SOA represent abstractions - albeit executable ones -
of their real-world counterparts, so too do data services represent an abstraction of underlying
enterprise information. Data services expose information to business services in a form, and
through an interface, amenable to those services.
The form is generally some representation of business objects to be manipulated by business
services and passed between services by business processes. Business objects may be simple
tabular structures or complex nested structures. Almost always, though, they must be composed
from information residing in more than one data source, often in different persistence formats. So
a key requirement of data services is that they:
• expose integrated information in one or more desired formats, even if the original data
are in different formats.
The desired interface depends on the architecture being used. A Web service-based SOA will
provide a SOAP or REST-based interface to XML-formatted business objects. A more traditional
Java or C-language RPC-based architecture will require JDBC or ODBC access to tabular
information obtained from multiple data sources. So a second key requirement of data services is
that they:
• expose information through one or more consistent, standard interfaces, even if the
original data are accessed through different interfaces.
These two key requirements of data services are achieved by two different technologies:
• modeling, to define the required format of data integrated from the underlying sources;
and
• a query engine, for processing these abstract definitions efficiently and exposing the
integrated information through one or more interfaces.
Together these form the basis for a data services architecture underpinning a robust SOA, making
data available to business processes and services in the required format and through consistent,
standard interfaces.