The document describes the steps to set up and use Informix Warehouse Accelerator: installing and configuring Informix, the accelerator, and IBM Smart Analytics Studio; designing, validating, and deploying data marts in Studio or via the CLI; and loading data into the accelerator so it is ready to serve queries from BI applications. Connection details are provided for Studio, applications, and drivers. Adding accelerators via Studio or the CLI with the required parameters is also outlined.
What You Need to Know to Do Successful Upgrades - COMMON Europe
This document provides an overview of the steps required to successfully complete an IBM i upgrade to release 6.1 or 7.1. It discusses preparing for the upgrade by mapping the process, analyzing object conversions, and verifying firmware and third-party software compatibility. It also covers required PTFs, license keys, and image catalog preparation. The document then outlines the steps for the actual upgrade, including verifying HMC, FSP, and IBM i firmware levels, completing the upgrade, and post-upgrade verification and conversions. The goal is for users to be unaware of any changes until new features are rolled out, indicating a smooth upgrade experience.
When doing an upgrade to IBM i, it's more work to plan the upgrade than to perform it. Once you have a plan, the actual upgrade is straightforward. Do you know what you need to upgrade to IBM i 7.2?
Pete Massiello, president of COMMON and iTech Solutions, has helped many IBM i users make the move. He joins Tom Huntington to share 50 years of combined experience with IBM i. They’ll get you on the right path to an upgrade by helping you answer key questions like these:
• What is the right size for the load source?
• How do I upgrade the Licensed Internal Code?
• Which version of Java is compatible with the new release?
• Which console options are no longer available with 7.2?
• Do I have the correct set of disks for the upgrade?
• What do I need to do differently when upgrading from 5.4, 6.1, or 7.1?
This is my POC report for our customer, AWB. It demonstrates how to automate their testing end to end.
This document outlines a 4-day training course on Red Hat System Administration III. The course covers topics such as package management with RPM, network monitoring, security, storage, web services, file sharing, and boot troubleshooting. Each day consists of multiple units that delve deeper into these areas and provide hands-on instruction on configuring and managing an enterprise Linux environment.
2009-08-24 Managing your Red Hat Enterprise Linux Guests with RHN Satellite - Shawn Wells
Presented at SHARE Denver 2009, Session ID 9204. Steps through what Red Hat Network Satellite is, what modules are included, various deployment architectures, and how to run RHN Satellite on System z. Finishes with Live Demo.
Novell SecureLogin Installation, Deployment, Lifecycle Management and Trouble... - Novell
Facing installation problems? Not sure where to get the list of registries required? Need a tool to generate your own configuration files? Need a technical note to ensure that you proceed with installation, deployment and usage of Novell SecureLogin with ease? Not sure what the SecureLogin log means or how to use it?
If you’re running into challenges installing SecureLogin or just need to know what to do when it’s not working correctly, attend this session to get all the tips and tricks from product developers and Novell Technical Services. The session will provide installation and configuration guidance, including:
• How to use the SecureLogin config tool
• How to generate and customize your response file
• How to customize your installation
• How to complete a single-click install
• And much more
You will also learn what to do when issues with SecureLogin arise. Novell technical support presenters will cover common problems seen in support, available tools and how to use them, and specific troubleshooting steps that will help you keep SecureLogin running smoothly in your environment. You'll also learn what to do when these measures fail and what to have ready when you call support.
IBM i Technology Refreshes Overview 2012-06-04 - COMMON Europe
IBM Power Systems introduces Technology Refreshes which provide a means to deliver operating system enhancements through PTF groups to installed releases of IBM i. A Technology Refresh includes PTFs that enable new hardware, firmware, and virtualization capabilities. It consists of a Technology Refresh PTF group and Cumulative PTF package that are tested and delivered together to provide new functionality in a manner less disruptive than a full release upgrade.
Regulatory compliant cloud computing: rethinking web application architectures... - Khazret Sapenov
The document discusses the transition from traditional web application architectures to secure cloud computing using a Regulatory Compliant Cloud Computing (RC3) model. RC3 introduces data classification, separate processing zones, and an Encryption Key Management Infrastructure to securely store and process data in the cloud while ensuring compliance. RC3 provides a methodology for using cloud services without compromising security or regulatory requirements.
A Fully Redundant Luminis 5 Installation - William Moore
In 2013 the University of Manitoba decided to upgrade its version of Luminis. The objective of this upgrade was to have a fully redundant installation throughout the whole stack.
This presentation will explain what was done to make the hardware, OS, Database, and Application layers redundant to meet this objective.
The ADNM console provides administrators with tools to manage antivirus protection across their network. It is organized into folders containing tasks, sessions, computers, and management servers. Tasks define jobs like scanning and updating, and sessions show results of task runs. The computer catalog stores all managed machines in a customizable tree structure. Default security policies cascade down the tree but can be overridden. Management servers represent individual AMS installations used to deploy policies and collect results.
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery - Rob Convery
This covers the various aspects of configuring IBM Integration Bus when implementing a highly available system and a comprehensive disaster recovery plan.
Today’s IT organizations operate in an increasingly more complex environment. Resources are limited, operating costs are soaring and service interruptions are unacceptable.
This session covered how to:
• Manage applications and a wide variety of technologies, including VMware and Unified Communications environments
• Collect service-level and capacity management data to measure and maintain performance in both virtual and non-virtual environments
• Manage application performance to meet user demands
• Keep monitoring current with policies in fast-changing virtualized environments
This document provides an addendum to the user's guide for WhatsUp software version 2.5. It describes new features such as IPX monitoring and enhanced auto scan options. It also lists changes to system requirements for IPX support and notes documentation updates and corrections to the original user's guide.
An easy-to-use, automatic, self-contained toolkit that accelerates ODM* benchmarking of NFVi-ready server designs on Intel® Scalable Server platforms. It uses a golden benchmark to characterize baseline performance of DPDK, QAT, and OVS running on a single Xeon SP server.
IBM Informix Dynamic Server 11.10 Cheetah SQL Features - Keshav Murthy
The document summarizes new features in Informix Dynamic Server (IDS) version 11.10. Key features include:
1) Full support for subqueries in the FROM clause of SQL statements and enhancements to distributed queries.
2) New data types like Node and Binary, and a basic text search index for full text search capabilities.
3) Performance improvements to the SQL optimizer including an index self-join access method and directives for ANSI joins.
4) Enhancements to stored procedures, functions, isolation levels and utilities like SYSDBOPEN and SYSDBCLOSE.
Informix 11.7 delivers smarter data management through three key capabilities:
1) Informix Flexible Grid provides high availability, scalability and workload management.
2) The Informix Warehouse Accelerator delivers unprecedented query response times.
3) Informix Genero enables faster development of mobile and cloud applications.
This document provides a guide for designing and implementing databases using IBM Informix software. It discusses planning a database design, building a relational data model, choosing data types, and implementing the data model. The document also covers managing databases, including table fragmentation strategies. It is intended to help database administrators, designers, and developers to effectively work with Informix databases.
Informix Spark Streaming is an extension of Informix that allows data to be streamed out of the database as soon as it is inserted, updated, or deleted.
The protocol currently used to stream the changes is MQTT v3.1.1 (older versions not supported!). This extension is able to stream data to any MQTT broker where it can be processed or passed on to subscribing clients for processing.
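The streaming model above can be sketched from the subscriber side. A minimal Python sketch, assuming a JSON payload with hypothetical field names ("operation", "table", "row"); the extension's actual wire format is not shown in this summary, so only the decode-and-dispatch shape is illustrated:

```python
import json

# Hypothetical decoder for one change-stream message. Each insert, update,
# or delete in the database arrives as a single MQTT message; a subscriber
# decodes it and hands the pieces to downstream processing.
def decode_change(payload: bytes):
    """Decode one change-stream message into (operation, table, row)."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["operation"], msg["table"], msg.get("row", {})

# Example message such as a broker might deliver after an INSERT:
sample = json.dumps(
    {"operation": "insert", "table": "sensors", "row": {"id": 7, "temp": 21.5}}
).encode("utf-8")

op, table, row = decode_change(sample)
print(op, table, row)
```

In a real deployment this function would run inside an MQTT client's message callback, subscribed to whatever topic the broker publishes the change stream on.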
Informix Update New Features 11.70.xC1+ - IBM Sverige
This document provides release notes for Informix Update 11.70.xC1+. It discusses new features for installation, migration, high availability, administration, application development, embeddability, enterprise replication, security and performance in the 11.70.xC1 release. It also briefly mentions new enhancements in the 11.70.xC2 release related to installation, administration, application development and embeddability. The document provides examples of using some of the new SQL administration API arguments and table/column aliases in DML statements.
Informix warehouse and accelerator overview - Keshav Murthy
This document provides an overview of Informix Warehouse and Informix Warehouse Accelerator. It discusses data warehousing industry trends, features of Informix Warehouse 11.70 including loading, storage optimization, and query processing capabilities. It also describes the Informix Warehouse Accelerator which uses columnar storage, compression and massive parallelism to accelerate select queries with unprecedented response times.
UGIF 12 2010 - Informix 11.7 - The Beginning of the Next Decade - UGIF
This document discusses IBM Informix 11.7, a database software release from IBM. It provides an overview of new features in Informix 11.7 including Informix Flexible Grid for high availability and scalability, performance enhancements for data warehousing and analytics, and security and manageability improvements. Case studies are presented showing how Informix is used by companies for applications such as traffic management, online ticketing, and gaming. The release promises benefits such as increased performance, lower costs, simplified administration, and support for emerging workloads.
Why Smart Meters Need Informix TimeSeries - IBM Sverige
Informix Update - This presentation was given at IBM Data Server Day on 22 May in Stockholm by Simon David, Technical Product Manager, Competitive Technologies & Enablement, Informix Development.
IBM Informix Dynamic Server and WebSphere MQ integration - Keshav Murthy
- The document discusses using IBM Websphere MQ functions within Informix Dynamic Server (IDS) to enable transactional integration between IDS databases and MQ message queues.
- MQ functions allow sending, receiving, and publishing messages to/from MQ queues from within SQL statements and procedures using a simple interface.
- When using MQ functions within an IDS transaction, the interaction with MQ is transactionally protected using two-phase commit between IDS and MQ.
The document provides an overview of IBM Informix database security from both an operating system and database perspective. It discusses how Informix uses OS authentication, permissions, and network security capabilities. On the database side, it describes how Informix implements discretionary access control using SQL GRANT/REVOKE statements and label-based access control using security policies and labels. The document also outlines the seven distinct security roles in Informix and how to separate them, and provides details on configuring and using the Informix auditing functionality.
Using Informix Warehouse Accelerator with Informix high availability and scal... - Keshav Murthy
Informix Warehouse Accelerator can now be used with Informix high availability and scale-out servers like MACH11 and HDR secondary servers. The key steps are:
1) Install Informix and IWA on primary, secondary, SDS or RSS nodes.
2) Configure the secondary servers as updatable.
3) Connect IWA to any Informix node using the connection details.
4) Design, validate and deploy the data mart from any node.
5) Load and run queries from any node and IWA will accelerate queries across nodes.
IIUG 2016 Gathering Informix data into R - Kevin Smith
A basic walk-through of how to set up R to work with Informix via JDBC, ODBC, and REST/JSON. After uploading the example datasets to Informix, you can also work through http://www.slideshare.net/thoi_gian/iris-data-analysis-with-r?qid=414b5431-9759-49e7-b3ba-c89a7bb357be&v=&b=&from_search=1, replacing the data targets with Informix REST/JSON. Hint: since one of the iris dataset's column names contains a character that is not Informix-compliant, I used JSON to store the data in Informix. If you rename the column, you can load the data into a normal table through JDBC or ODBC.
Example: iris to JSON to Informix through REST:
library(datasets)
library(jsonlite)
library(httr)
data(iris)
# Convert the iris data frame to a JSON document
myjson <- toJSON(iris)
# POST to the Informix REST wire listener (host, port and collection name
# depend on your environment)
# POST("http://informix-host:27018/testdb/iris", body = myjson, content_type_json())
# Round-trip the JSON back into a data frame and preview the first three columns
dataset <- fromJSON(myjson)
dataset[1:3]
IBM Informix - The Ideal Database for Internet of Things
Exclusive luncheon at IBM World of Watson 2016. Informix is the best fit for IoT sensor data analytics at the edge and in the cloud.
The document discusses security best practices for IBM Informix including:
1) Enabling role separation to restrict access and privileges for database administrators, application administrators, and backup administrators.
2) Configuring file permissions and ownership for key Informix directories and files to restrict access.
3) Enabling encrypted connections using SSL or other encryption mechanisms to protect data in transit.
4) Configuring firewalls, virtual private networks, and the sqlhosts file to control which clients and users can connect to the database server.
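On point 4, the sqlhosts file is a plain-text table mapping each database server name to a protocol, host, and service/port; restricting its entries is part of controlling which connections are possible. A hypothetical TCP entry might look like this (server name, host, and port are placeholders for your environment):

```
# dbservername  nettype    hostname            servicename
demo_on         onsoctcp   dbhost.example.com  9088
```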
Informix - Internet of Things, "Internet of Things: Intelligent Database of Choice - Embedded database for Devices and Enterprise database for the Cloud" - Sandor Szabo
IoT / M2M Solutions with Informix in the IoT Gateway - Eurotech
The document discusses leveraging computational power at the edge in Internet of Things (IoT) and machine-to-machine (M2M) solutions using Informix in the IoT gateway architecture. It covers the anatomy of IoT solutions with a focus on utilities and smart energy. It also discusses processing power and using Informix at the edge of the operational technology infrastructure, and integrating into the IBM enterprise IT world. Example use cases and conclusions are also presented.
This document discusses architectural options for deploying Informix Warehouse Accelerator (IWA). It outlines various hardware configuration options for Informix and IWA including using single or multiple SMP systems, single or multiple cluster systems. It also discusses options for connecting Informix and IWA, data refresh strategies like partition refresh and trickle feed, and features of IWA like support for MACH11 and heterogeneous platforms.
Partition-based refresh for Informix Warehouse Accelerator - Keshav Murthy
With Informix 11.70.FC5, you can refresh only the modified/new partitions to Informix Warehouse Accelerator. This presentation shows the two use cases for it.
Informix IWA data life cycle mgmt & Performance on Intel - Keshav Murthy
IWA works with a snapshot of the data in Informix data mart. Once you have defined the data mart on IWA and loaded the data, you need to periodically refresh the data. You can choose to refresh all of the data or just the partitions that were added, dropped or modified. Whether you have hundreds of gigabytes or many terabytes, faster refresh will help you analyze recent data more rapidly and get closer to real-time business. This session will explain the options for data refresh, review their performance and explain how to correctly implement a refresh plan. IBM and Intel will demonstrate live data refresh on the Intel Xeon platform and examine the impact on performance.
This document provides instructions for installing and updating IBM Internet Security Systems Server Sensor 7.0 on an AIX system. It outlines the system requirements, installation process including accepting licenses and configuring default settings, and how to register the sensor with the management console and apply updates. The registration process involves adding the sensor as a new agent and choosing the event collector. Updates are applied by scheduling them from the management console.
BlackHat EU 2012 - Zhenhua Liu - Breeding Sandworms: How To Fuzz Your Way Out... - MindShare_kk
Adobe's interpretation of sandboxing is called Adobe Reader X Protected Mode. Inspired by Microsoft's Practical Windows Sandboxing techniques, it was introduced in July 2010. So far, it has done a good job of limiting the impact of exploitable bugs in Adobe Reader X, as escaping the sandbox after successful exploitation has turned out to be particularly challenging and has not yet been witnessed in the wild.
This paper exposes how we did just this: By leveraging some broker APIs, a policy flaw, and a little more, we were able to break free from Adobe's sandbox.
The particular vulnerability we used was patched by Adobe in September 2011 (CVE-2011-1353), as a result of our responsible disclosure action; yet, this demonstrates that Adobe's sandbox cannot be considered a panacea against security flaws exploitation in Adobe Reader X, and paves the way toward further interesting discoveries for security researchers.
Indeed, beyond this particular vulnerability, this paper dives deep into the sandbox implementation of Adobe Reader X, and debates ways to audit its broker APIs, which, to our minds, offer a major attack surface. In particular, the paper details how we configured an open-source fuzzing tool to audit them through the IPC Framework.
Pankaj Chandra Joshi is seeking a role as an InfoVista System Engineer with over 3 years of experience in IT service delivery. He has skills in InfoVista products including Portal, Vista Mart, Server, Discovery, and Cockpit. He also has experience with IBM Netcool/Omnibus and Tivoli Netcool Performance Manager. His experience includes projects with Vodafone UK involving VistaMart upgrades, VistaBridge integration, and Vista Discovery customization for Bharti Airtel. He holds a B.Tech in Electronics and Communication.
XebiaLabs, CloudBees, Puppet Labs Webinar Slides - IT Automation for the Mode...XebiaLabs
Learn how you can enhance and extend your existing infrastructure to create an automated, end-to-end IT platform supporting on-demand middleware and application environments, application release pipelines, Continuous Delivery, Private/ hybrid development platform and PaaS and more.
Helm summit 2019_handling large number of charts_sept 10Shikha Srivastava
Now that you have an application running in Kubernetes, what will your next steps be? Can you deploy this application to any cloud? If someone else wishes to install your helm chart would you have all necessary resources to deploy it successfully? Do you have a certification process to ensure your helm chart is enterprise ready? Creating a helm chart to deploy your application is just the first step, but now you need a process to ensure that the helm chart follows guidelines established by your enterprise and future versions of the chart are created efficiently as part of your CI/CD pipeline. In this presentation, you will learn about effective ways to create, organize and maintain enterprise grade helm charts. We will also discuss how our CI/CD pipeline is implemented using custom linter, verification test cases to make sure only certified charts are promoted into production.
The workshop covered cloud-native Java technologies using Open Liberty and MicroProfile. It included presentations on 12-factor and 15-factor application methodologies and hands-on labs exploring OpenAPI, health checks, metrics, and JWT authentication. Leaders demonstrated how to build and deploy modular, scalable microservices using open-source tools that optimize developer productivity and application portability in cloud environments.
Platform-as-a-Service has rightly been celebrated as a way to increase developer productivity and thereby help companies get the new applications and services they need online (and making money) faster. It also helps admins meet the needs of those developers faster and with less manual effort. But PaaS goes beyond developers and beyond dev/test. Efficient application multi-tenancy and auto-scaling are also key features for production environments. Furthermore, developers may love that PaaS abstracts away platform details that they don't care about. But this abstraction also means that platform changes can happen without affecting developers, a big win for architects and procurement officers. In short, PaaS is for everyone.
Miria Systems and Datacap Consultant Sean Patrick Scott presented on migrating from IBM FileNet Capture to Datacap. The presentation covered an overview of Miria Systems and Datacap, the key differences between FileNet Capture and Datacap, a demonstration of Datacap's capabilities, and next steps for a quick start solution. The audience was invited to two upcoming webinars on upgrading Datacap and utilizing it for specific applications. Contact information was provided for follow up questions.
The document describes the Kovair Dynatrace Integration Adapter, which allows integration between the Dynatrace application performance monitoring tool and other ALM (Application Lifecycle Management) tools. This provides centralized monitoring of application performance metrics. Key benefits include tracking alerts and diagnosing code-level issues using transaction data from Dynatrace. An example use case is provided where Dynatrace data is integrated throughout the development and monitoring process using various tools like JIRA, Jenkins, and ServiceNow.
The document provides guidance on designing a Dynamic Datacenter infrastructure using Microsoft technologies. It outlines a 5-step process: 1) Determine scope, 2) Design virtualization hosts, 3) Design software infrastructure, 4) Design storage, and 5) Design networking. Key aspects covered include workload grouping, host hardware, virtual machine management, configuration management, monitoring, backups, switches, and load balancing. The goal is to provide a well-defined, automated, controlled, and resilient infrastructure.
Von der Zustandsüberwachung zur vorausschauenden WartungPeter Schleinitz
Talk at Sensorik-Stammtisch of thew Mittelstand 4.0-Kompetenzzentrum Ilmenau, http://www.kompetenzzentrum-ilmenau.digital/news/item/157-predictive-analytics-thema-beim-sensorik-stammtisch #ibmaot
The document summarizes an "Ask the Experts" webcast about installing WebSphere Application Server. A panel of five IBM experts answered questions about installing WAS V8 and V7, applying feature packs and fix packs, remotely installing WAS V8, and silently installing with Installation Manager. Additional WebSphere resources were also listed.
The document provides an overview of various cloud computing, big data, and web development projects. It summarizes achievements in cloud infrastructure using OpenStack and OpenShift, building Hadoop clusters for big data analytics, and developing web applications. It outlines next steps of integrating OpenShift with OpenStack, implementing real-time data processing using HBase, and automating matching between farmers and food processors for a web application.
This document outlines a course project to design and deploy an enterprise IT infrastructure for a small community college. Students will complete the project in phases, first proposing their design and then implementing virtualization, Active Directory, and centralized logging. The goal is for students to reduce hardware needs through virtualization and automation while meeting the college's requirements for services like email, websites, labs, and single sign-on access. Students will be graded based on how fully they meet requirements, secure systems, automate operations, and document their work.
This document outlines a course project to design and deploy an enterprise IT infrastructure for a small community college. Students will complete the project in phases, first proposing their design and then implementing virtualization and central logging. The goal is to reduce hardware needs through virtualization and automation. Requirements include setting up a learning management system, email server, content management system, VPN, Linux and Windows labs, kiosks, single sign-on, and on-demand services. The project will be graded based on meeting requirements, security, automation, and documentation in a final report.
The document summarizes new features in Citrix XenApp 6.5 including:
- Instant app access which allows for session pre-launching and lingering to reduce reconnect times.
- Enhancements to App Streaming 6.5 including VHD streaming and RadeFastLaunch.
- New HDX technologies and Flash for WAN v2 for improved performance over WAN.
- Multi-stream ICA which provides granular quality of service for ICA sessions.
- The enhanced desktop experience and service provider automation pack which bring a Windows 7 look and feel to hosted shared desktops.
Similar to Informix Warehouse accelerator -- design, deploy, use (20)
The N1QL is a developer favorite because it’s SQL for JSON. Developer’s life is going to get easier with the upcoming N1QL features. We have exciting features in many areas including language to performance, indexing to search, and tuning to transactions. This session will preview new the features for both new and advanced users.
Couchbase Tutorial: Big data Open Source Systems: VLDB2018Keshav Murthy
The document provides an agenda and introduction to Couchbase and N1QL. It discusses Couchbase architecture, data types, data manipulation statements, query operators like JOIN and UNNEST, indexing, and query execution flow in Couchbase. It compares SQL and N1QL, highlighting how N1QL extends SQL to query JSON data.
N1QL+GSI: Language and Performance Improvements in Couchbase 5.0 and 5.5Keshav Murthy
N1QL gives developers and enterprises an expressive, powerful, and complete language for querying, transforming, and manipulating JSON data. We’ll begin this session with a brief overview of N1QL and then explore some key enhancements we’ve made in the latest versions of Couchbase Server. Couchbase Server 5.0 has language and performance improvements for pagination, index exploitation, integration, index availability, and more. Couchbase Server 5.5 will offer even more language and performance features for N1QL and global secondary indexes (GSI), including ANSI joins, aggregate performance, index partitioning, auditing, and more. We’ll give you an overview of the new features as well as practical use case examples.
XLDB Lightning Talk: Databases for an Engaged World: Requirements and Design...Keshav Murthy
Traditional databases have been designed for system of record and analytics. Modern enterprises have orders of magnitude more interactions than transactions. Couchbase Server is a rethinking of the database for interactions and engagements called, Systems of Engagement. Memory today is much cheaper than disks were when traditional databases were designed back in the 1970's, and networks are much faster and much more reliable than ever before. Application agility is also an extremely important requirement. Today's Couchbase Server is a memory- and network-centric, shared-nothing, auto-partitioned, and distributed NoSQL database system that offers both key-based and secondary index-based data access paths as well as API- and query-based data access capabilities. This lightning talk gives you an overview of requirements posed by next-generation database applications and approach to implementation including “Multi Dimensional Scaling.
Couchbase 5.5: N1QL and Indexing featuresKeshav Murthy
This deck contains the high-level overview of N1QL and Indexing features in Couchbase 5.5. ANSI joins, hash join, index partitioning, grouping, aggregation performance, auditing, query performance features, infrastructure features.
The document discusses improvements to the N1QL query optimizer and execution engine in Couchbase Server 5.0. Key improvements include UnionScan to handle OR predicates using multiple indexes, IntersectScan terminating early for better performance, implicit covering array indexes, stable scans, efficiently pushing composite filters, pagination support, index column ordering, aggregate pushdown, and index projections.
Mindmap: Oracle to Couchbase for developersKeshav Murthy
This deck provides a high-level comparison between Oracle and Couchbase: Architecture, database objects, types, data model, SQL & N1QL statements, indexing, optimizer, transactions, SDK and deployment options.
Queries need indexes to speed up and optimize resource utilization. What indexes to create and what rules to follow to create right indexes to optimize the workload? This presentation gives the rules for those.
N1QL = SQL + JSON. N1QL gives developers and enterprises an expressive, powerful, and complete language for querying, transforming, and manipulating JSON data. We begin with a brief overview. Couchbase 5.0 has language and performance improvements for pagination, index exploitation, integration, and more. We’ll walk through scenarios, features, and best practices.
From SQL to NoSQL: Structured Querying for JSONKeshav Murthy
Can SQL be used to query JSON? SQL is the universally known structured query language, used for well defined, uniformly structured data; while JSON is the lingua franca of flexible data management, used to define complex, variably structured data objects.
Yes! SQL can most-definitely be used to query JSON with Couchbase's SQL query language for JSON called N1QL (verbalized as Nickel.)
In this session, we will explore how N1QL extends SQL to provide the flexibility and agility inherent in JSON while leveraging the universality of SQL as a query language.
We will discuss utilizing SQL to query complex JSON objects that include arrays, sets and nested objects.
You will learn about the powerful query expressiveness of N1QL, including the latest features that have been added to the language. We will cover how using N1QL can solve your real-world application challenges, based on the actual queries of Couchbase end-users.
Tuning for Performance: indexes & QueriesKeshav Murthy
There are three things important in databases: performance, performance, performance. From a simple query to fetch a document to a query joining millions of documents, designing the right data models and indexes is important. There are many indices you can create, and many options you can choose for each index. This talk will help you understand tuning N1QL query, exploiting various types of indices, analyzing the system behavior, and sizing them correctly.
Understanding N1QL Optimizer to Tune QueriesKeshav Murthy
Every flight has a flight plan. Every query has a query plan. You must have seen its text form, called EXPLAIN PLAN. Query optimizer is responsible for creating this query plan for every query, and it tries to create an optimal plan for every query. In Couchbase, the query optimizer has to choose the most optimal index for the query, decide on the predicates to push down to index scans, create appropriate spans (scan ranges) for each index, understand the sort (ORDER BY) and pagination (OFFSET, LIMIT) requirements, and create the plan accordingly. When you think there is a better plan, you can hint the optimizer with USE INDEX. This talk will teach you how the optimizer selects the indices, index scan methods, and joins. It will teach you the analysis of the optimizer behavior using EXPLAIN plan and how to change the choices optimizer makes.
Utilizing Arrays: Modeling, Querying and IndexingKeshav Murthy
Arrays can be simple; arrays can be complex. JSON arrays give you a method to collapse the data model while retaining structure flexibility. Arrays of scalars, objects, and arrays are common structures in a JSON data model. Once you have this, you need to write queries to update and retrieve the data you need efficiently. This talk will discuss modeling and querying arrays. Then, it will discuss using array indexes to help run those queries on arrays faster.
N1QL supports select, join, project,nest,unnest operations on flexible schema documents represented in JSON.
Couchbase 4.5 enhances the data modeling and query flexibility.
When you have parent-child relationship, children documents point to parent document, you join from child to parent. Now, how would you join from parent to child when parent does not contain the reference to child? How would you improve performance on this? This presentation explain the syntax, execution of the query.
Bringing SQL to NoSQL: Rich, Declarative Query for NoSQLKeshav Murthy
Abstract
NoSQL databases bring the benefits of schema flexibility and
elastic scaling to the enterprise. Until recently, these benefits have
come at the expense of giving up rich declarative querying as
represented by SQL.
In today’s world of agile business, developers and organizations need
the benefits of both NoSQL and SQL in a single platform. NoSQL
(document) databases provide schema flexibility; fast lookup; and
elastic scaling. SQL-based querying provides expressive data access
and transformation; separation of querying from modeling and storage;
and a unified interface for applications, tools, and users.
Developers need to deliver applications that can easily evolve,
perform, and scale. Otherwise, the cost, effort, and delay in keeping
up with changing business needs will become significant disadvantages.
Organizations need sophisticated and rapid access to their operational data, in
order to maintain insight into their business. This access should
support both pre-defined and ad-hoc querying, and should integrate
with standard analytical tools.
This talk will cover how to build applications that combine the
benefits of NoSQL and SQL to deliver agility, performance, and
scalability. It includes:
- N1QL, which extends SQL to JSON
- JSON data modeling
- Indexing and performance
- Transparent scaling
- Integration and ecosystem
You will walk away with an understanding of the design patterns and
best practices for effective utilization of NoSQL document
databases - all using open-source technologies.
SQL for JSON: Rich, Declarative Querying for NoSQL Databases and Applications Keshav Murthy
In today’s world of agile business, Java developers and organizations benefit when JSON-based NoSQL databases and SQL-based querying come together. NoSQL provides schema flexibility and elastic scaling. SQL provides expressive, independent data access. Java developers need to deliver apps that readily evolve, perform, and scale with changing business needs. Organizations need rapid access to their operational data, using standard analytical tools, for insight into their business. In this session, you will learn to build apps that combine NoSQL and SQL for agility, performance, and scalability. This includes
• JSON data modeling
• Indexing
• Tool integration
Introducing N1QL: New SQL Based Query Language for JSONKeshav Murthy
This session introduces N1QL and sets the stage for the rich selection of N1QL-related sessions at Couchbase Connect 2015. N1QL is SQL for JSON, extending the querying power of SQL with the modeling flexibility of JSON. In this session, you will get an introduction to the N1QL language, architecture, and ecosystem, and you will hear the benefits of N1QL for developers and for enterprises.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
2. [Setup diagram: IBM Smart Analytics Studio, Informix Database Server, BI Applications, Informix Warehouse Accelerator]
• Step 1. Install, configure, start Informix
• Step 2. Install, configure, start Accelerator
• Step 3. Connect Studio to Informix and add accelerator
• Step 4. Design, validate, deploy data mart
• Step 5. Load data to accelerator
• Ready for queries from BI Applications
3. Connecting to Informix
• For data mart design, from ISAO Studio
– Use the 11.5 Informix driver
– Protocol: tcp/ip (onsoctcp or ontlitcp)
– Use the port with TCP/IP and SQLI protocol
• From Informix applications, scripts, tools
– Supported protocols: tcp/ip, shared memory
– All drivers are supported
– CSDK, ODBC, JDBC, JCC, .NET, etc.
4. Connection to Informix
• For data mart design, from ISAO Studio
• ISAO Studio runs on Windows and Linux
• Connect from these two platforms to any supported Informix server
– Linux64/Intel
– HP-UX/Itanium
– Power/AIX
– Sparc/Solaris
5. Connection to Informix
• For applications, connect as usual
• No application changes or redeployment necessary
• Set the environment (USE_DWA) using the sysdbopen() procedure
• The sysdbopen() procedure is executed automatically when any application connects to a database
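As a sketch, a PUBLIC sysdbopen() procedure can set USE_DWA for every connecting session; sysdbopen() is the hook named on the slide, while the procedure body here is an illustrative assumption:

```sql
-- Runs automatically for every user who connects to this database
-- (illustrative sketch; adapt to your environment).
CREATE PROCEDURE public.sysdbopen()
    -- Let the server consider IWA for this session's queries,
    -- with no application change or redeployment.
    SET ENVIRONMENT USE_DWA '1';
END PROCEDURE;
```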
6. Connection to Informix
• USE_DWA
SET ENVIRONMENT USE_DWA '1';
– Controls the session behavior of query matching
– '0' (zero) turns off using IWA for query processing
– '1' turns on considering IWA
– '3' same as '1', with diagnostics
– '998' use IWA only
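The same setting can also be toggled within a session; a minimal example (the table name is illustrative):

```sql
SET ENVIRONMENT USE_DWA '3';        -- consider IWA, with diagnostics
SELECT COUNT(*) FROM store_sales;   -- a candidate query for acceleration
SET ENVIRONMENT USE_DWA '0';        -- subsequent queries stay in Informix
```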
7. Adding Accelerator
[Same setup diagram as slide 2, highlighting Step 3: connect Studio to Informix and add the accelerator. Informix communicates with the accelerator over DRDA on TCP/IP.]
8. Adding Accelerator
• Add a new accelerator from Data Studio or the command line interface (CLI)
• Four parameters are needed to add an accelerator
– Name of the accelerator (you choose)
– IP address of the IWA instance
– Port on which IWA is listening
– PIN obtained by executing 'ondwa getpin'
• The port number is in the dwainst.conf file
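For reference, the port named above comes from the accelerator's dwainst.conf file; a minimal sketch with illustrative values (check the parameter names against your IWA release):

```
# dwainst.conf on the accelerator host (illustrative values)
DWADIR=/opt/dwa/demo      # working directory of the IWA instance
START_PORT=21020          # first TCP port the coordinator listens on
NUM_NODES=4               # coordinator plus worker nodes
```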
9. Adding Accelerator
• Informix always talks to the IWA Coordinator
– For all data mart operations
– For queries
– To obtain the result set
• Informix treats the IWA Coordinator as a remote node
10. Design, Validate, Deploy Data mart
[Same setup diagram as slide 2, highlighting Step 4: design, validate, deploy the data mart.]
11. Design, Validate, Deploy Data marts
[Deployment flow between ISAO Studio/CLI tool, Informix, and the accelerator:]
• Step 1. Design and validate the data mart (ISAO Studio or CLI tool)
• Step 2. Deploy the data mart to Informix (AQT)
• Step 3. Informix sends the data mart definitions to the Coordinator
• Step 4. The Coordinator returns the SQL definition
• Step 5. Save the definition
• Step 6. Return acknowledgement
[Below the Coordinator sit the Worker nodes; each Worker holds compressed data in memory, with a memory image on disk.]
12. Store Sales ER-Diagram from TPC-DS
300 GB database
[ER diagram: row counts for the store sales tables, ranging from 20 to 287,997,024 rows]
14. Designing data mart
• Start with a good logical and physical design
• Typically has Star or Snowflake schema
• A data mart itself can contain
– One or more fact tables
– Available dimensions
– Relationships between the fact and dimension tables
• Relationships
– 1:n relationship -- needs a unique constraint on the PK
– n:m relationship
15. Designing data mart
• Design identifies and uses existing PK-FK
relationship between the tables
• In a warehouse environment, it is typical not to have
constraints defined within the schema
• Manually create the relationships between the
tables
• Always start from the parent and end with the child
– In the customer–web_sales relationship, customer is the
parent and web_sales is the child.
– customer.customer_id will be the primary key,
web_sales.customer_id will be the foreign key.
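Where you do want the PK-FK pair declared in the database itself, the parent/child example above maps to ordinary constraints; a sketch using the slide's customer and web_sales tables (the Informix-style trailing constraint names are illustrative):

```sql
-- Parent side: a 1:n relationship needs a unique/primary key.
ALTER TABLE customer
    ADD CONSTRAINT PRIMARY KEY (customer_id) CONSTRAINT pk_customer;

-- Child side: web_sales references the parent key.
ALTER TABLE web_sales
    ADD CONSTRAINT FOREIGN KEY (customer_id)
        REFERENCES customer (customer_id) CONSTRAINT fk_ws_customer;
```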
16. Designing data mart
• When you don’t have a PK-FK relationship
– Identify the keys from the logical design
– Identify the keys from equi-join keys in queries
– Identify the parent and child
• Types of relationships between two tables
– Single relationship with a single key
– Single relationship with multiple keys
– Multiple relationships with single or multiple keys
17. Designing data mart
• Single data mart with multiple fact tables
– Shares the dimension tables among all fact tables
• Multiple data marts, each with its own fact table but
the same dimension tables
– Each mart keeps a separate copy of the dimension tables
– Higher memory requirement
18. Designing data mart – Smart mart tool
• Simply enable workload analysis
• Run the workload
• Informix will give you data mart definitions
required to run the workload
• Design is done for you based on workload
• Simply deploy and load the mart using this
definition
• Useful when generating data marts for standard
reports
• Use it as a guiding tool for identifying the tables
needed within warehouses
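The workload-analysis flow above can be sketched in SQL; this assumes the probe-based workflow and the ifx_probe2mart() helper described in IWA documentation for recent releases (treat the literal values, the procedure name, and its arguments as assumptions to verify against your version):

```sql
-- Assumption: probing is switched on per session via use_dwa.
SET ENVIRONMENT use_dwa 'probe start';

-- ... run the representative workload here ...

SET ENVIRONMENT use_dwa 'probe stop';

-- Assumption: ifx_probe2mart() converts the captured probe
-- data into a data mart definition named 'reporting_mart'.
EXECUTE PROCEDURE ifx_probe2mart('mydb', 'reporting_mart');
```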
19. Deploying the data mart
• Creates and sends the data mart definition to IWA
• Verify the fact tables and dimension tables.
• Generate the report and verify when necessary
• You can load the data when deploying the data
mart
• Typically you deploy once and load periodically
• Loading can be automated via the command line
interface (CLI)
20. Deploying the data mart
• IWA returns one or more SQL statements
representing the data mart.
• Informix creates Accelerated Query Tables (AQT)
for those.
• AQTs are essentially views used exclusively for
query matching
• Data mart deployment, enable, disable, and drop
events are recorded in the system catalog
21. Design, Validate, Deploy Data mart
[Setup flow diagram repeated from the earlier Adding Accelerator slide: Steps 1–5, from installing Informix through loading data to the accelerator]
22. Loading the data mart
• Load the data mart using Studio
• Load using the loadMart command from the CLI
• Takes a snapshot of the tables
• Options
– No locking of the tables
– Locking of all the tables
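From SQL, the load step can be sketched with the ifx_loadMart() admin function (the function name and the locking-mode literal are assumptions based on IWA documentation; the accelerator and mart names are placeholders):

```sql
-- Placeholders: accelerator 'myaccel', data mart 'reporting_mart'.
-- Assumed locking mode 'NONE' corresponds to "no locking of
-- the tables" on the slide.
EXECUTE FUNCTION ifx_loadMart('myaccel', 'reporting_mart', 'NONE');
```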
23. Query Flow
Step 1. Applications and BI tools submit SQL to Informix
(DB protocol: SQLI or DRDA; network: TCP/IP or SHM)
Step 2. Query matching and redirection technology decides
between local execution and offload
Step 3. Qualifying SQL is offloaded to the IWA Coordinator
(DRDA over TCP/IP)
Step 4. Results are returned from IWA (DRDA over TCP/IP)
Step 5. Informix returns results/describe/error to the
application (SQLI or DRDA; TCP/IP or SHM)
[Diagram: the IWA Coordinator fronts the Worker nodes; each Worker holds compressed data in memory, with a memory image on disk]
24. Query Flow within IWA
Step 1. SQL arrives from Informix at the Coordinator
Step 2. The Coordinator sends the query to all the Workers
Step 3. Each Worker scans, filters, joins, and groups its
compressed in-memory data
Step 4. The Coordinator merges the intermediate results and
applies ORDER BY and FIRST n
Step 5. The Coordinator sends the results back to the
Informix server
25. Life of a query
SQL Statement → SQL Parser → Semantic Analyzer →
Optimizer → Query Plan → Executor → Query Results
The Optimizer consults the system catalog and the table
statistics & column distribution information; the Executor
records query stats, and an explain file can be produced.
26. Optimizer is enhanced to do the query matching
SQL Statement → SQL Parser → Semantic Analyzer →
Optimizer → Query Plan
When a query qualifies for acceleration, the Optimizer
generates the SQL to offload and execution happens in IWA;
otherwise the plan executes in Informix. Query results and
query stats flow back through the same path in both cases.
29. Content of the view
• The data mart schema should be a star or snowflake schema
• A single-table data mart is fine (e.g. weblog, call detail record)
• The view created represents the whole data mart
– All the selected columns from all tables
– The join predicates between fact and dimension, and
between dimension and dimension
30. Query Matching
• The fact table should be used in the query
• Dimensions should be joined using the join keys in the data mart
• Only supported functions, expressions, and aggregates
• INNER JOIN, or LEFT OUTER JOIN with the fact table on the dominant side
• The query cannot reference tables outside the data mart
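As an illustration, a query shaped like the following could match (table and column names are hypothetical, reusing the earlier customer/web_sales example): the fact table is present, the dimension is joined on the mart's join key, only a supported aggregate is used, and no table outside the mart is referenced:

```sql
-- Hypothetical fact (web_sales) and dimension (customer) tables.
SELECT c.customer_id, SUM(ws.amount) AS total_amount
FROM web_sales ws
JOIN customer c ON ws.customer_id = c.customer_id
GROUP BY c.customer_id;
```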
31. Create the table
create table kfact(id int, name varchar(32), amount decimal(9,2));
Create the data mart
The data mart definition is saved in the database as a special view:
create view "dwa"."aqtf5246230-8cce-42fd-8c3e-f516bbeacca3" ("COL1","COL2","COL3")
as select x0.id ,x0."name" ,x0.amount
from "keshav".kfact x0 ;
35. Acceleration with Informix Warehouse Accelerator
1. Identify the data mart to offload (Smart Analytics Data Studio)
2. Send the data mart definition to Informix
3. Informix returns the SQL representation
4. Create the metadata
5. Issue the off-load data mart command
6. Off-load the data to the Informix Warehouse Accelerator
7. The Coordinator process distributes the data among the workers
8. The Worker processes compress the data
9. Return ACK
36. Acceleration with Informix Warehouse Accelerator
Step 1. Applications and BI tools submit SQL to Informix
(DB protocol: SQLI or DRDA; network: TCP/IP or SHM)
Step 2. IDS query matching and redirection technology
chooses between local execution and offload
Step 3. Qualifying SQL is offloaded (DRDA over TCP/IP)
Step 4. Results are returned (DRDA over TCP/IP)
Step 5. Informix returns results/describe/error to the
application (SQLI or DRDA; TCP/IP or SHM)
[Diagram: the IWA Coordinator process and the Worker processes]