Big data insights with Red Hat JBoss Data Virtualization (Kenneth Peeples)
You’re hearing a lot about big data these days. And big data and the technologies that store and process it, like Hadoop, aren’t just new data silos. You might be looking to integrate big data with existing enterprise information systems to gain better understanding of your business. You want to take informed action.
During this session, we’ll demonstrate how Red Hat JBoss Data Virtualization can integrate with Hadoop through Hive and provide users easy access to data. You’ll learn how Red Hat JBoss Data Virtualization:
Can help you integrate your existing and growing data infrastructure.
Integrates big data with your existing enterprise data infrastructure.
Lets non-technical users access big data result sets.
We’ll also provide typical use cases and examples, plus a demonstration of integrating Hadoop sentiment analysis with sales data.
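The kind of federated query this session demonstrates can be sketched without a Hadoop cluster. In the toy example below, Python's standard sqlite3 module stands in for both Hive and an enterprise sales database (all table, column, and schema names are invented for illustration); the point is the single SQL statement that joins sentiment scores with sales figures across two separate stores:

```python
import sqlite3

# One "virtual" connection unifying two separate stores:
#   main -> sales figures (standing in for an enterprise RDBMS)
#   hive -> sentiment scores (standing in for Hive)
# All names here are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS hive")

con.execute("CREATE TABLE sales (product TEXT, revenue REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("widget", 1200.0), ("gadget", 300.0)])

con.execute("CREATE TABLE hive.sentiment (product TEXT, score REAL)")
con.executemany("INSERT INTO hive.sentiment VALUES (?, ?)",
                [("widget", 0.8), ("gadget", -0.3)])

# A single query joins both "sources", the way a data virtualization
# layer exposes one SQL interface over many underlying systems.
rows = con.execute("""
    SELECT s.product, s.revenue, h.score
    FROM sales AS s JOIN hive.sentiment AS h USING (product)
    ORDER BY s.revenue DESC
""").fetchall()
print(rows)  # [('widget', 1200.0, 0.8), ('gadget', 300.0, -0.3)]
```

In a real deployment the two attached schemas would be independent systems reached through translators, but the consumer's SQL would look much the same.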
Enabling Data as a Service with the JBoss Enterprise Data Services Platform (prajods)
This presentation was given at JUDCon 2013, January 17-18, in Bangalore, by Prajod Vettiyattil and Gnanaguru Sattanathan. It deals with the why, what, and how of Data Services and Data Services Platforms, and explains the features of the JBoss Enterprise Data Services Platform.
The need for Data Services is explained with three business use cases:
1. Post purchase customer experience improvement for an Auto manufacturer
2. Enterprise Data Access Layer
3. Data Services for Regulatory Reporting requirements like Dodd Frank
Informatica Solution for SWIFT Integration (Kim Loughead)
Overview of Informatica's solution for financial services organizations that need to exchange payment messages, including SWIFT, NACHA, SEPA, and FIX, with other financial institutions.
Microsoft® SQL Azure™ Database is a cloud-based relational database service built for the Windows® Azure platform. It provides a highly available, scalable, multi-tenant database service hosted by Microsoft in the cloud. SQL Azure Database enables easy provisioning and deployment of multiple databases. Developers do not have to install, set up, patch, or manage any software. High availability and fault tolerance are built in, and no physical administration is required. SQL Azure supports Transact-SQL (T-SQL), so customers can leverage existing tools and knowledge of the familiar T-SQL-based relational data model for building applications.
Introduction to Microsoft’s Master Data Services (MDS) (James Serra)
Master Data Services is bundled with SQL Server 2012 to help resolve many of the master data management issues companies face when integrating data. In this session, James gives an overview of Master Data Services 2012, including the out-of-the-box web UI and the highly developed Excel add-in, and shows how to get started loading MDS with your data.
xRM is the natural evolution of CRM. Businesses are expanding their use of new-generation CRM solutions to manage a wider range of scenarios, including asset management, prospect management, citizen management, and many more. Microsoft CRM sits on the .NET platform and, because of that, is much more than a traditional CRM product. Instead, think of Microsoft CRM as a rapid application development platform with out-of-the-box CRM functionality. The purpose of this session is to understand Microsoft's CRM strategy and how you can get to market first with world-class business solutions.
The January call will focus on introducing the concepts of open development, software lifecycle and upcoming open projects. We have a number of projects on the roadmap and would like to give the community an opportunity to help prioritize the list.
We'll discuss the upcoming GT.M Integration project to more tightly couple OpenVista and GT.M. You can read the proposals and discuss this project at Medsphere.org, see the project homepage here: http://medsphere.org/community/roadmap/gtm
Please feel free to invite any colleagues that might find this topic relevant or interesting.
When: January 15, 12:30 - 2pm Pacific
Where: Dial-in: (888) 346-3950 // Participant Code: 1302465
Web conference: http://www.medsphere.com/infinite/
What: Open Development
- Ecosystems at work
- Open Development Introduction
- Community Project Overview
- GT.M Project Introduction
- Project Review
- Medsphere.org: Tip of the Month
===
The community calls are listed on the Medsphere.org event calendar (http://medsphere.org/community-events/) and we will update each month's call as the agenda is solidified.
Details and Recording available here: http://medsphere.org/blogs/events/2009/01/15/community-call-january-2009
Hadoop World 2011: Big Data Architecture: Integrating Hadoop with Other Enter... (Cloudera, Inc.)
Recent research has pointed out the complementary nature of Hadoop and other data management solutions and the importance of leveraging existing systems, SQL, engineering, and operational skills, as well as incorporating novel uses of MapReduce to improve analytic processing. Come to this session to learn how companies optimize the use of Hadoop with other enterprise systems to improve overall analytical throughput and build new data-driven products. This session covers: ways to achieve high-performance integration between Hadoop and relational-based systems; Hadoop+NoSQL vs Hadoop+SQL architectures; high-speed, massively parallel data transfer to analytical platforms that can aggregate web log data with granular fact data; and strategies for freeing up capacity for more explorative, iterative analytics and ad hoc queries.
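The MapReduce model the abstract refers to is easy to reproduce in miniature. The following Hadoop-free sketch separates the map, shuffle, and reduce phases in plain Python; the sample input lines are invented, and a real Hadoop job would distribute each phase across a cluster:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit one (key, value) pair per word, as a Hadoop mapper would.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all values by key across the mapper outputs.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently (parallelizable).
    return {key: sum(values) for key, values in groups.items()}

logs = ["error timeout", "error retry", "ok"]
counts = reduce_phase(shuffle(map_phase(logs)))
print(counts)  # {'error': 2, 'timeout': 1, 'retry': 1, 'ok': 1}
```

The independence of each reduce key is what lets the framework spread work across nodes; the "novel uses" the session mentions replace word counting with analytic aggregations over the same skeleton.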
A Comparison of EDB Postgres to Self-Supported PostgreSQL (EDB)
Like all database management systems, PostgreSQL requires additional enterprise tools and capabilities to ensure high availability at scale. These include additional tools for backup, disaster recovery, replication, monitoring and data migration.
To meet these needs, EnterpriseDB created the EDB Postgres Platform.
This document explains the key differences between PostgreSQL using the EDB Postgres Platform compared to self-supported PostgreSQL alone.
Watch full webinar here: https://buff.ly/2MwDyhq
The use of Data Virtualization as a global delivery layer means that Denodo is a critical component of the data architecture: it cannot fail, and it needs to be fault tolerant and perform as designed. In this context, enterprise-level monitoring is key to making sure the virtual layer is in good health and to detecting potential issues proactively. Fortunately, Denodo provides a full suite of monitoring capabilities and integrates with leading monitoring tools like Splunk, Elastic, and CloudWatch.
Attend this session to learn:
- How to configure the key global parameters of the Denodo server
- How to integrate Denodo with enterprise monitoring solutions like Splunk and CloudWatch
- Key metrics to monitor
SnapLogic provides a data integration platform that takes integration to another level by combining the power of dynamic programming languages with standard web interfaces to solve today's most pressing problems in application integration. SnapLogic has an intuitive visual designer that runs in your browser and connects to a highly scalable, web-based integration server that you can run on-premises or in the cloud.
Enabling a Data Mesh Architecture with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/3rwWhyv
The Data Mesh architectural design was first proposed in 2019 by Zhamak Dehghani, principal technology consultant at Thoughtworks, a technology company closely associated with the development of distributed agile methodology. A data mesh is a distributed, decentralized data infrastructure in which multiple autonomous domains manage and expose their own data, called “data products,” to the rest of the organization.
Organizations leverage data mesh architecture when they experience shortcomings in highly centralized architectures, such as the lack of domain-specific expertise in data teams, the inflexibility of centralized data repositories in meeting the specific needs of different departments within large organizations, and the slowness of centralized data infrastructures in provisioning data and responding to changes.
In this session, Pablo Alvarez, Global Director of Product Management at Denodo, explains how data virtualization is your best bet for implementing an effective data mesh architecture.
You will learn:
- How data mesh architecture not only enables better performance and agility, but also self-service data access
- The requirements for “data products” in the data mesh world, and how data virtualization supports them
- How data virtualization enables domains in a data mesh to be truly autonomous
- Why a data lake is not automatically a data mesh
- How to implement a simple, functional data mesh architecture using data virtualization
Presentation on Data Mesh: a paradigm shift toward a modern, distributed ecosystem architecture that organizes data by domain and treats “data as a product,” enabling each domain to handle its own data pipelines.
Case Study - Ibotta Builds A Self-Service Data Lake To Enable Business Growth... (Vasu S)
Read a case study on how Ibotta cut costs thanks to Qubole’s autoscaling and downscaling capabilities and its ability to isolate workloads on separate clusters:
https://www.qubole.com/resources/case-study/ibotta
Teradata Aster: Big Data Discovery Made Easy
Brad Elo, VP, Aster Data, Teradata
ANALYTICS AND VISUALIZATION FOR THE FINANCIAL ENTERPRISE CONFERENCE
June 25, 2013 The Langham Hotel Boston, MA
You had a go at building your first Camel application; now it's time to look at how data can be configured and transformed in the blink of an eye! Part 2 of the workshop is all about message transformation!
Developing Microservices with Apache Camel (Claus Ibsen)
Red Hat Microservices Architecture Day - New York, November 2015. Presented by Claus Ibsen.
Apache Camel is a very popular integration library that works very well with microservice architecture. This talk introduces you to Apache Camel and how you can easily get started with Camel on your computer. Then we cover how to create new Camel projects from scratch as microservices, which you can boot using Camel or Spring Boot, or other micro containers such as Jetty or fat JARs. We then take a look at what options you have for monitoring and managing your Camel microservices using tooling such as Jolokia and the hawtio web console.
The second part of this talk is about running Camel in the cloud. We start by showing you how you can use the Maven Docker Plugin to create a Docker image of your Camel application and run it using Docker on a single host. Then Kubernetes enters the stage, and we take a look at how you can deploy your Docker images on a Kubernetes cloud platform and how the fabric8 tooling can make this much easier for Java developers.
At the end of this talk you will have learned about, and seen in practice, how to take a Java Camel project from scratch, turn it into a Docker image, and deploy those Docker images on a scalable cloud platform based on Google's Kubernetes.
Microservices with Apache Camel, Docker and Fabric8 v2 (Christian Posta)
My talk from Red Hat Summit 2015 about the pros and cons of microservices, how integration is a strong requirement for distributed systems designs, and how open source projects like Apache Camel, Docker, Kubernetes, OpenShift, and Fabric8 can help simplify and manage microservice environments.
An introduction to data virtualization in business intelligence (David Walker)
A brief description of what Data Virtualisation is and how it can be used to support business intelligence applications and development. Originally presented at the ETIS Conference in Riga, Latvia, in October 2013.
In this presentation we introduce the basic concepts around SQL Server Azure: the database in the cloud.
Regards,
Ing. Eduardo Castro, PhD
http://ecastrom.blogspot.com
http://comunidadwindows.org
Microsoft Azure is changing, and its database component (Windows Azure SQL Database) is changing even faster. During this session I would like to show those who have not seen it, and remind those who already know something about it, what WASD is all about, what changes have taken place, and what we can expect from this database. For the brave, there will be an opportunity to connect to a cloud account and test these solutions for themselves.
Enterprise data is often the backbone to providing timely and effective information for any enterprise application landscape, but we struggle to integrate our vital and often diverse sources of information to our applications in a timely and effective manner.
No more.
Whether you are a Data Analyst, Business Analyst or in IT Strategy, this webinar will illustrate how easy it is to integrate disparate data spread across your organization when modeling and automating your business processes with modern BPM tools.
We will take you through an in-depth sample solution that simulates a travel agency, with examples of some of the complexities you will encounter:
• real-time disparate data integration
• rule-based data validation
• rule-based fraud detection for payment processing
• service integration
You will receive an advanced overview of the capabilities of both Red Hat JBoss Data Virtualization and Red Hat JBoss BPM Suite, and will be left with a sample project with which to evaluate the solution.
MAIA Intelligence was invited to give a technical session on MS-SQL at the Microsoft Dreamspark Yatra 2012 event, in which around 300 budding techies learned about emerging technologies.
Progress Software supplies application infrastructure software to simplify and accelerate the development, deployment, integration, and management of business applications. Users of information technology today demand software applications that are comprehensive, reliable, responsive, and cost-effective.
The cloud is all the rage. Does it live up to its hype? What are the benefits of the cloud? Join me as I discuss the reasons so many companies are moving to the cloud and demo how to get up and running with a VM (IaaS) and a database (PaaS) in Azure. See why the ability to scale easily, the speed with which you can create a VM, and the built-in redundancy are just some of the reasons that make moving to the cloud a “no brainer”. And if you have an on-prem datacenter, learn how to get out of the air-conditioning business!
Caserta Concepts, Datameer, and Microsoft shared their combined knowledge and a use case on big data, the cloud, and deep analytics. Attendees learned how a global leader in the test, measurement, and control systems market reduced their big data implementations from 18 months to just a few.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focus on how to extend and optimize Hadoop based analytics, highlighting the advantages and practical applications of deploying on the cloud for enhanced performance, scalability and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft - Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
Connect to the IoT with a lightweight protocol, MQTT (Kenneth Peeples)
Everything is connected in the Internet of Things. People, devices, machines, and more are all part of a network - sending and receiving data to and from other "things." What new opportunities can the IoT create for your business? The lightweight protocol MQTT can help connect to the Internet of Things.
Maximize information exchange in your enterprise with AMQP (Kenneth Peeples)
Businesses need to efficiently exchange information inside the enterprise as well as with other enterprises. In order to reduce cost and enhance business agility, an open messaging standard is a necessity for interoperability and integration. Advanced Message Queueing Protocol (AMQP) is the open standard wire-level binary messaging protocol that describes how a message should be structured and sent across the network.
Join this webinar to learn more about:
-What AMQP is and its applications.
-The features and benefits of AMQP.
-Why you should use AMQP in your enterprise.
-The differences between AMQP and other messaging standards, such as JMS.
-Topologies and architectures possible through the use of AMQP.
Integration intervention: Get your apps and data up to speed (Kenneth Peeples)
SOA has been the de facto methodology for enterprise application and process integration because loosely coupled components and composite applications are more agile and efficient. The perfect solution? Not quite.
The data’s always been the problem. The most efficient and agile applications and services can be dragged down by the point-to-point data connections of a traditional data integration stack. Virtualized data services can eliminate the friction and get your applications up to speed.
In this webinar we'll show you how to (replay at http://www.redhat.com/en/about/events/integration-intervention-get-your-apps-and-data-speed):
-Quickly and easily create a virtual data services layer to plug data into your SOA infrastructure for an agile and efficient solution
-Derive more business value from your services.
To build up any non-trivial business processing, you may have to connect systems that are exposed by web-services, fire off events over message queues, notify users via email or social networking, and much more.
Apache Camel is a lightweight integration framework that helps you connect systems in a consistent and reliable way. Focus on the business reasons behind what's being integrated, not the underlying details of how.
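In Camel's Java DSL a route is declared fluently, for example from("file:inbox").to("jms:queue:orders"). As a language-neutral sketch of the same idea, the toy content-based router below declares where messages flow, independent of transport details; the endpoint URIs and message shapes are invented, and this is not Camel's actual API:

```python
# A miniature "route": messages flow from a source endpoint through
# predicates to destination endpoints, declared up front in a fluent,
# Camel-like style. Endpoint names here are invented for illustration.
class Route:
    def __init__(self, source):
        self.source = source
        self.rules = []          # (predicate, destination) pairs

    def when(self, predicate, destination):
        self.rules.append((predicate, destination))
        return self              # fluent chaining, DSL-style

    def run(self, messages):
        delivered = {dest: [] for _, dest in self.rules}
        for msg in messages:
            for predicate, dest in self.rules:
                if predicate(msg):
                    delivered[dest].append(msg)
                    break        # first matching rule wins
        return delivered

route = (Route("file:inbox")
         .when(lambda m: m.get("type") == "order", "jms:orders")
         .when(lambda m: True, "log:unmatched"))

out = route.run([{"type": "order", "id": 1}, {"type": "ping"}])
print(out["jms:orders"])  # [{'type': 'order', 'id': 1}]
```

The business intent (orders go to the queue, everything else to a log) is readable in the route declaration itself, which is exactly the property the abstract claims for Camel.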
The presentation covers:
1. Red Hat JBoss Developer Program
2. Red Hat JBoss Fuse
3. Red Hat JBoss Data Virtualization
The workshop was recorded and we will provide a link once it has been posted.
Middleware security for Apache CXF, Camel, ActiveMQ, and Karaf, among others, continues to be an ongoing concern, especially around authentication, authorization, data at rest, and data in transit. The session will include a presentation and demonstrations of implementing authentication (AuthN) and authorization (AuthZ), as well as other security topics.
3. MAIN DATA VIRTUALIZATION COMPONENTS
The Server
Design Tools
Administration Tools
4. THE SERVER
The server is positioned between business applications/consumers and one or more data sources. It is an enterprise-ready, scalable, manageable runtime for the Query Engine that runs inside JBoss AS and provides additional security, fault tolerance, and administrative features.
5. SERVER COMPONENTS
Virtual Database
Access Layer
Query Engine
Connector Architecture
6. VIRTUAL DATABASE
A virtual database (VDB) provides a unified view of data residing in
multiple physical repositories. A VDB is composed of various data
models and configuration information that describes which data
sources are to be integrated and how. In particular, source
models are used to represent the structure and characteristics of the
physical data sources, and view models represent the structure and
characteristics of the integrated data exposed to applications.
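Concretely, such a VDB is often described by a small XML descriptor that pairs source models with view models. The sketch below is illustrative only: the VDB name, JNDI binding, translator choice, and DDL are invented, and the exact descriptor schema varies by product version:

```xml
<vdb name="SalesVDB" version="1">
  <!-- source model: mirrors the structure of a physical data source -->
  <model name="SalesDB" type="PHYSICAL">
    <source name="salesSrc" translator-name="oracle"
            connection-jndi-name="java:/salesDS"/>
  </model>
  <!-- view model: the integrated structure exposed to applications -->
  <model name="CustomerViews" type="VIRTUAL">
    <metadata type="DDL"><![CDATA[
      CREATE VIEW TopCustomers AS
        SELECT name, SUM(total) AS revenue
        FROM SalesDB.orders GROUP BY name;
    ]]></metadata>
  </model>
</vdb>
```

Applications then query TopCustomers through the access layer without knowing which physical source, or how many, sits behind it.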
7. ACCESS LAYER
The access layer is the interface through which applications submit
queries (relational, XML, XQuery and procedural) to the VDB via
JDBC, ODBC or Web services.
8. QUERY ENGINE
The heart of DV is a high-performance query engine that processes
relational, XML, XQuery and procedural queries from federated
datasources. Features include support for homogeneous schemas,
heterogeneous schemas, transactions, and user defined functions.
When applications submit queries to a VDB via the access layer, the
query engine produces an optimized query plan to provide efficient
access to the required physical data sources as determined by the
SQL criteria and the mappings between source and view models in
the VDB. This query plan dictates processing order to ensure
physical data sources are accessed in the most efficient manner.
9. CONNECTOR ARCHITECTURE
Translators and resource adapters are used to provide transparent connectivity between the query engine and the physical data sources. DV includes a rich set of translators and resource adapters that enable access to a variety of sources, including most relational databases, web services, text files, and LDAP. A translator is used to convert queries into source-specific commands, and a resource adapter provides communication with the source.
Need data from a different source? Custom translators and resource adapters can easily be developed.
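The translator's job, converting an engine-level command into source-specific syntax, can be illustrated in miniature. In the hedged toy below, the command format and dialect rules are invented (a real JBoss Data Virtualization translator is a Java component, not Python), but it shows why the same logical query must be rendered differently per source:

```python
# Toy "translator": renders one engine-level command into the syntax of a
# specific source. Dialect handling is deliberately simplified.
def translate(command, dialect):
    cols = ", ".join(command["select"])
    sql = f"SELECT {cols} FROM {command['from']}"
    if "limit" in command:
        if dialect == "sqlserver":
            # SQL Server uses TOP rather than a trailing LIMIT clause.
            sql = f"SELECT TOP {command['limit']} {cols} FROM {command['from']}"
        else:  # postgres, mysql, sqlite, ...
            sql += f" LIMIT {command['limit']}"
    return sql

cmd = {"select": ["name", "total"], "from": "orders", "limit": 5}
print(translate(cmd, "postgres"))   # SELECT name, total FROM orders LIMIT 5
print(translate(cmd, "sqlserver"))  # SELECT TOP 5 name, total FROM orders
```

The query engine hands each translator the same logical command; only the rendering and execution against the source differ, which is what keeps consumers insulated from source-specific syntax.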
10. DESIGN TOOLS
Various design tools are available to assist users in setting up JBoss Data Virtualization for an integrated solution.
11. DATA VIRTUALIZATION DESIGNER
The designer is a visual tool that enables rapid, model-driven
definition, integration, management and testing of data services
without programming using the DV runtime framework. With the
designer you can:
create a virtual database (or VDB) containing your models, views,
procedures, dynamic XML documents which you deploy to DV
server and then access your data.
resolve semantic differences
create virtual data structures at a physical or logical level
use declarative interfaces to integrate, aggregate, and transform
the data on its way from source to a target format which is
compatible and optimized for consumption by your applications
preview your data
12. DATA VIRTUALIZATION DESIGNER (CONTINUED)
Part of the JBoss Developer Studio Integration stack
14. CONNECTOR DEVELOPMENT
The process of integrating data from an enterprise information system into
JBoss Data Virtualization requires one or two components:
a translator (mandatory) and
a resource adapter (optional), also known as a connector. Most of the time,
this will be a Java EE Connector Architecture (JCA) adapter.
A translator is used to:
translate JBoss Data Virtualization commands into commands understood by
the datasource for which the translator is being used,
execute those commands,
return batches of results from the datasource, translated into the formats
that JBoss Data Virtualization is expecting.
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/6.1/html/Development_Guide_Volume_4_Server_Development/Introduction_to_the_JBoss_Data_Services_Connector_Architecture.html
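The three translator duties above can be modeled with a conceptual sketch. This is not the actual Teiid ExecutionFactory API; here a CSV row set stands in for the data source, and all class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only -- the real contract is Teiid's translator API, but the
// shape is the same: translate the command, execute it, return results.
public class CsvTranslator {
    // 1. Translate an engine-level command into something the source
    //    understands. For a CSV "source", a column projection becomes a
    //    column index.
    static int translate(String[] header, String column) {
        return Arrays.asList(header).indexOf(column);
    }

    // 2./3. Execute against the source and return results in the format the
    //        engine expects (here, a simple list of values).
    static List<String> execute(List<String> rows, int columnIndex) {
        List<String> out = new ArrayList<>();
        for (String row : rows) out.add(row.split(",")[columnIndex]);
        return out;
    }

    public static void main(String[] args) {
        String[] header = {"id", "name", "region"};
        List<String> rows = Arrays.asList("1,Acme,EMEA", "2,Globex,APAC");
        List<String> names = execute(rows, translate(header, "name"));
        System.out.println(names); // prints [Acme, Globex]
    }
}
```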
15. CONNECTOR DEVELOPMENT (CONTINUED)
A resource adapter (or connector):
handles all communications with individual enterprise information
systems, (which can include databases, data feeds, flat files and so
forth),
can be a JCA Adapter or any other custom connection provider
(the JCA specification ensures the writing, packaging and
configuration are undertaken in a consistent manner),
removes concerns such as connection information, resource
pooling, and authentication for translators.
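To make the pooling point concrete, here is a toy sketch of the concern a resource adapter takes off the translator's shoulders: the caller just asks for a connection, and the adapter decides whether to reuse a pooled one or open a new one. All names are invented; this has no relation to the actual JCA API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal pooling sketch: strings stand in for real connections.
public class SimplePool {
    private final Deque<String> idle = new ArrayDeque<>();
    private int opened = 0;

    // Reuse an idle connection when one exists; otherwise "open" a new one.
    synchronized String acquire() {
        if (!idle.isEmpty()) return idle.pop();
        opened++;
        return "conn-" + opened;
    }

    // Return a connection to the pool instead of closing it.
    synchronized void release(String conn) { idle.push(conn); }

    synchronized int totalOpened() { return opened; }

    public static void main(String[] args) {
        SimplePool pool = new SimplePool();
        String c1 = pool.acquire();
        pool.release(c1);
        String c2 = pool.acquire(); // reused, not reopened
        System.out.println(c1.equals(c2) + " " + pool.totalOpened()); // prints true 1
    }
}
```

The translator in the real product sees none of this; it simply receives a ready connection from the adapter.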
17. ADDITIONAL CUSTOMIZATIONS
JBoss Data Virtualization is highly extensible in other ways:
You can add user defined functions.
You can adapt logging to your requirements, which is especially
useful for custom audit or command logging.
A delegating translator can be used to add custom code to all
methods for a given translator.
You can also customize authentication and authorization modules.
See the Red Hat JBoss Data Virtualization Security Guide.
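The delegating translator mentioned above is plain decoration: wrap an existing translator and run custom code around every call. The sketch below is illustrative only, with an invented interface rather than Teiid's real delegating base class; it adds audit logging before forwarding each command.

```java
import java.util.ArrayList;
import java.util.List;

public class DelegatingExample {
    // Invented stand-in for a translator contract.
    interface Translator { String translate(String command); }

    // The delegating translator forwards every call to the wrapped
    // translator, adding cross-cutting code (audit logging here) around it.
    static class LoggingTranslator implements Translator {
        private final Translator delegate;
        final List<String> log = new ArrayList<>();

        LoggingTranslator(Translator delegate) { this.delegate = delegate; }

        public String translate(String command) {
            log.add("translating: " + command); // custom code added to every call
            return delegate.translate(command);
        }
    }

    public static void main(String[] args) {
        Translator base = cmd -> cmd.toUpperCase(); // stand-in base translator
        LoggingTranslator logging = new LoggingTranslator(base);
        System.out.println(logging.translate("select * from t")); // prints SELECT * FROM T
        System.out.println(logging.log.size()); // prints 1
    }
}
```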
19. RESOURCE ADAPTERS
With the exception of JDBC data sources, JBoss Data Virtualization provides a
JCA adapter for each supported data source. These are the resource adapter
identifiers, as specified in the server configuration file:
File Adapter - file
Google Spreadsheet Adapter - google
Red Hat JBoss Data Grid (6.1 & 6.2) Adapter - infinispan
LDAP Adapter - ldap
Salesforce Adapter - salesforce
Web Services Adapter - webservice
MongoDB Adapter (technical preview) - mongodb
A resource adapter for the JDBC translator is provided with JBoss EAP by
default.
20. ADMINISTRATIVE TOOLS
AdminShell - AdminShell provides a script-based programming environment enabling
users to access, monitor and control JBoss Data Virtualization.
Management Console - The Management Console provided by the Red Hat JBoss
Enterprise Application Platform (EAP) is a web-based tool allowing system administrators
to monitor and configure services deployed within a running JBoss EAP instance,
including JBoss Data Virtualization.
Management CLI - The Management CLI (command-line interface) is provided by JBoss
EAP to manage services deployed within a JBoss EAP instance. Operations can be
performed in batch modes, allowing multiple tasks to be run as a group.
JBoss Operations Network - Red Hat JBoss Operations Network provides a single
interface to deploy, manage, and monitor an entire deployment of Red Hat JBoss
Middleware applications and services, including JBoss Data Virtualization.
Dashboard Builder - The Dashboard Builder connects to VDBs through the DV
JDBC driver to visualize the data for testing and business analytics.
21. THE ADMIN SHELL
AdminShell provides the following features:
Administration - AdminShell can be used to connect to a JBoss Data
Virtualization instance in order to perform various administrative tasks.
Data Access - AdminShell can be used to connect to a virtual database (VDB)
and run SQL commands to query VDB data and view results.
Migration - AdminShell can be used to develop scripts that will move VDBs and
associated components from one development environment to another. (Users
can test and automate migration scripts before executing them in production
deployments.)
Testing - The built-in JUnit test framework allows users to write regression tests
to check system health and data integrity. These tests validate system
functionality automatically, removing the need for manual verification by QA
personnel.
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/6.1/html/Administration_and_Configuration_Guide/sect-AdminShell.html
22. THE MANAGEMENT CONSOLE
The Configuration tab contains general and JBoss Data
Virtualization specific configuration properties.
The Runtime tab shows runtime information about the JBoss EAP server
and the Teiid subsystem. You can view runtime information about Teiid
by clicking Virtual Databases in the left-hand navigation tree.
You can configure roles and groups using the Administration tab.
The Management Console lets you configure the roles a user is
included in or excluded from, and likewise the groups included in or
excluded from a role. Only users in the SuperUser or Administrator
roles can perform this configuration.
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/6.1/html/Administration_and_Configuration_Guide/sect-Management_Console.html
23. MANAGEMENT CLI
The Management CLI features a help dialog with general and
context-sensitive options.
It is easy to launch and connect with EAP_HOME/bin/jboss-cli.sh
--connect (or jboss-cli.bat on Windows).
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/6.1/html/Administration_and_Configuration_Guide/sect-Management_CLI.html
24. JBOSS OPERATIONS NETWORK
JBoss Operations Network has additional agent plug-ins to manage
other JBoss products. Although these are JBoss ON resource plug-
ins, they are included in separate packages and require a separate
subscription to download them.
https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/6.1/html/Administration_and_Configuration_Guide/sect-JBoss_Operations_Network.html#Installing_JBoss_Agent_Plug-in_Packs
25. DASHBOARD BUILDER
Red Hat JBoss Dashboard Builder can connect to an external database either
through the container's JNDI or directly using the JDBC driver to
access the database. Connections to databases can be configured in the
Showcase workspace on the External Connections page. After you have established the
connection to the database, you need to create a data provider that will collect the
data from the database and allow you to visualize it as an indicator in the dashboard
area of a page.
Note that Red Hat JBoss Dashboard Builder makes use of its own local internal
database to store its local data. This database is read-only for Dashboard Builder,
but is accessible from outside.