The document discusses enhancements made to Informix Warehouse Accelerator (IWA) in version 12.10. Key points include:
- IWA now supports operations like creating, deploying, loading, enabling, and disabling data marts on secondary nodes in MACH11 and high availability environments, in addition to the primary/standard server node.
- New procedures like dropPartMart and loadPartMart allow refreshing partitions in a partitioned fact table within a data mart.
- Performance of SQL queries involving UNIONs, derived tables, and DISTINCT aggregates was improved.
- Additional OLAP functions and options like NULLS FIRST/LAST in ORDER BY were added for enhanced analytical querying.
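The NULLS FIRST/LAST option is easiest to see with a tiny example. The sketch below is not Informix code: it uses SQLite (3.30+, which supports the same syntax) from Python purely to illustrate the semantics, and the table and data are invented.

```python
import sqlite3

# Hypothetical table, for illustration only; SQLite stands in for
# Informix here since both support ORDER BY ... NULLS FIRST/LAST.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("west", None), ("north", 50)])

# Without NULLS LAST, where NULL sorts is engine-dependent;
# with it, rows with a NULL amount always come after the rest.
rows = conn.execute(
    "SELECT region, amount FROM sales "
    "ORDER BY amount DESC NULLS LAST"
).fetchall()
print(rows)
```

The non-NULL amounts come back in descending order and the NULL row is forced to the end, regardless of the engine's default NULL ordering.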
New Tuning Features in Oracle 11g - How to make your database as boring as possible - Sage Computing Services
One of the key problems that have haunted Oracle sites since the introduction of the cost based optimiser is the ability to provide a stable level of performance over time. The very responsiveness of the CBO to factors such as changes in statistics and initialisation parameters can lead to sudden changes in performance levels. Oracle 11g is set to introduce a number of features that will assist the DBA in providing a stable environment for mission-critical applications. Excitement is for out-of-work time (and for developers). The aim of most database administrators is to have as boring a working life as possible. Oracle 11g may help us achieve those aims.
This presentation discusses some of those features including:
Capture and replay of workload
Automatic SGA tuning
Managing and fixing plans
The 11g Automatic Tuning Advisor
This paper describes the evolution of the plan table and DBMS_XPLAN in 11g and some of the features that can be used to troubleshoot SQL performance effectively and efficiently.
Oracle Parallel Distribution and 12c Adaptive Plans - Franck Pachot
Parallel Distribution and 12c Adaptive Plans
In the previous newsletter we saw how 12c can defer the choice of join method to the first execution. We considered only serial execution plans. But besides the join method, cardinality estimation is a key input for parallel distribution when joining in a parallel query. Ever seen a parallel query consume huge tempfile space because a large table is broadcast to a lot of parallel processes? This is the point addressed by Adaptive Parallel Distribution.
Once again, that new feature is a good occasion to look at the different distribution methods.
Oracle Join Methods and 12c Adaptive Plans - Franck Pachot
Join Methods and 12c Adaptive Plans
In its quest to improve cardinality estimation, 12c has introduced Adaptive Execution Plans, which deal with cardinalities that are difficult to estimate before execution. Ever seen a hanging query because a nested loop join is running on millions of rows?
This is the point addressed by Adaptive Joins. But that new feature is also a good occasion to look at the four possible join methods that have been available for years.
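To make the contrast concrete, here is a toy Python sketch (not Oracle internals) of two of those join methods: a nested loop, which probes the inner input once per outer row, and a hash join, which builds a hash table on one input and scans the other once. Data and names are invented.

```python
# Toy illustration of nested loop vs. hash join (not Oracle internals).
orders = [(1, "A"), (2, "B"), (3, "A"), (4, "C")]   # (order_id, cust)
customers = [("A", "alice"), ("B", "bob")]           # (cust, name)

# Nested loop: for each outer row, scan the inner input.
# Cost grows as outer_rows * inner_rows - fine for small inputs,
# painful when the optimizer underestimated the outer cardinality.
nl_result = [(oid, name)
             for oid, c in orders
             for cust, name in customers if cust == c]

# Hash join: one pass to build the hash table, one pass to probe it.
build = {cust: name for cust, name in customers}
hj_result = [(oid, build[c]) for oid, c in orders if c in build]

print(sorted(nl_result) == sorted(hj_result))  # same rows either way
```

Adaptive Joins matter precisely because the nested loop's cost explodes when the outer row count was underestimated, while the hash join pays a fixed build cost up front.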
PostgreSQL Portland Performance Practice Project - Database Test 2 Tuning - Mark Wong
Sixth presentation in a speaker series sponsored by the Portland State University Computer Science Department. The series covers PostgreSQL performance with an OLTP (on-line transaction processing) workload called Database Test 2 (DBT-2). This presentation goes through some experimentation of setting different PostgreSQL global user configuration (GUC) parameters.
PostgreSQL Portland Performance Practice Project - Database Test 2 Howto - Mark Wong
Fourth presentation in a speaker series sponsored by the Portland State University Computer Science Department. The series covers PostgreSQL performance with an OLTP (on-line transaction processing) workload called Database Test 2 (DBT-2). This presentation is a set of examples to go along with the live presentation given on March 12, 2009.
Why Smart Meters Need Informix TimeSeries - IBM Sverige
Informix Update - This presentation was given at IBM Data Server Day on 22 May in Stockholm by Simon David, Technical Product Manager, Competitive Technologies & Enablement, Informix Development.
Informix SQL & NoSQL: Putting it all together - Keshav Murthy
Building IoT applications and handling IoT data with SQL, NoSQL, JSON, Spatial, timeseries, etc. Details of Informix SQL and NoSQL implementation and some things planned for 12.10.FC4.
A description of what REST is and is not useful for, followed by a walkthrough of how to use REST APIs to access Informix databases. Includes new features released in Informix 12.10xC7.
IIUG 2016 Gathering Informix data into R - Kevin Smith
A basic walk-through on how to set up R to work with Informix JDBC, ODBC, and REST/JSON. After taking the example datasets and uploading them to Informix, you can also work through http://www.slideshare.net/thoi_gian/iris-data-analysis-with-r?qid=414b5431-9759-49e7-b3ba-c89a7bb357be&v=&b=&from_search=1, but replace the data targets with Informix REST/JSON. Hint: since the iris dataset's column names contain a character that is not Informix-compliant, I used JSON to store the data in Informix. If you rename the column, you can load the data into a normal table through JDBC or ODBC.
Example: iris to JSON to Informix through REST:
library(datasets)
library(jsonlite)
library(httr)
data(iris)
# Convert the iris data frame to an array of JSON documents
myjson <- toJSON(iris)
# POST the documents to the Informix REST listener
# (hypothetical URL; substitute your own host, port, database, and collection)
POST("http://localhost:27018/mydb/iris", body = myjson, content_type_json())
# Round-trip check: parse the JSON back and look at the first columns
dataset <- fromJSON(myjson)
dataset[1:3]
Basic Query Tuning Primer - Pg West 2009 - Matt Smiley
Intro to query tuning in Postgres, for beginners or intermediate software developers. Lists your basic toolkit, common problems, and a series of examples. Assumes the audience knows basic SQL but has little or no experience with reading or adjusting execution plans. Accompanies a 45-90 minute talk; meant to encourage Q/A.
Matt Smiley
This is a basic primer aimed primarily at developers or DBAs new to Postgres. The format is a Q/A-style tour with examples, based on common questions and pitfalls. It begins with a quick tour of relevant parts of the Postgres catalog, with the aim of answering simple but important questions like:
How many rows does the optimizer think my table has?
When was it last analyzed?
Which other tables also have a column named "foo"?
How often is this index used?
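For reference, those questions map onto the Postgres catalog roughly as follows. The queries below are plain catalog SQL held in Python strings so they can be run with any client (psql, psycopg2, etc.); "mytable" and "foo" are placeholder names.

```python
# Catalog queries answering the questions above (Postgres SQL;
# "mytable" and "foo" are placeholders for your own names).

# How many rows does the optimizer think my table has?
row_estimate = """
    SELECT relname, reltuples::bigint AS estimated_rows
    FROM pg_class WHERE relname = 'mytable';
"""

# When was it last analyzed?
last_analyzed = """
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables WHERE relname = 'mytable';
"""

# Which other tables also have a column named "foo"?
tables_with_column_foo = """
    SELECT c.relname, a.attname
    FROM pg_attribute a JOIN pg_class c ON c.oid = a.attrelid
    WHERE a.attname = 'foo' AND c.relkind = 'r';
"""

# How often is this index used?
index_usage = """
    SELECT indexrelname, idx_scan
    FROM pg_stat_user_indexes WHERE relname = 'mytable';
"""
```

Each query reads only the catalog and statistics views, so they are safe to run against a production database.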
Managing Statistics for Optimal Query Performance - Karen Morton
Half the battle of writing good SQL is in understanding how the Oracle query optimizer analyzes your code and applies statistics in order to derive the “best” execution plan. The other half of the battle is successfully applying that knowledge to the databases that you manage. The optimizer uses statistics as input to develop query execution plans, and so these statistics are the foundation of good plans. If the statistics supplied aren’t representative of your actual data, you can expect bad plans. However, if the statistics are representative of your data, then the optimizer will probably choose an optimal plan.
An overview presentation covering the use of Oracle's PX functionality including some tips and traps. Detailed white paper at http://oracledoug.com/px.html
Your tuning arsenal: AWR, ADDM, ASH, Metrics and Advisors - John Kanagaraj
Oracle Database 10g brought in a slew of tuning and performance related tools and indeed a new way of dealing with performance issues. Even though 10g has been around for a while, many DBAs haven't really used many of the new features, mostly because they are not well known or understood. In this Expert session, we will look past the slick demos of the new tuning and performance related tools and go "under the hood". Using this knowledge, we will bypass the GUI and look at the views and counters that matter and quickly understand what they are saying. Tools covered include AWR, ADDM, ASH, Metrics, Tuning Advisors and their related views. Much of the information about Oracle Database 10g presented in this paper has been adapted from my book, and I acknowledge that with gratitude to my publisher, SAMS (Pearson).
Slides from OOW13
It's the age-old problem. The SQL statement that needs to run in 5 seconds unfortunately runs in 10 seconds, or 10 minutes, or 10 hours. A SQL statement gets emailed to you with the simple subject line: "Make it faster". We'll start from this point in the process and look at what you can do to tackle this common issue.
N1QL is a developer favorite because it's SQL for JSON. Developers' lives are going to get easier with the upcoming N1QL features. We have exciting features in many areas, from language to performance, indexing to search, and tuning to transactions. This session will preview the new features for both new and advanced users.
N1QL+GSI: Language and Performance Improvements in Couchbase 5.0 and 5.5 - Keshav Murthy
N1QL gives developers and enterprises an expressive, powerful, and complete language for querying, transforming, and manipulating JSON data. We’ll begin this session with a brief overview of N1QL and then explore some key enhancements we’ve made in the latest versions of Couchbase Server. Couchbase Server 5.0 has language and performance improvements for pagination, index exploitation, integration, index availability, and more. Couchbase Server 5.5 will offer even more language and performance features for N1QL and global secondary indexes (GSI), including ANSI joins, aggregate performance, index partitioning, auditing, and more. We’ll give you an overview of the new features as well as practical use case examples.
XLDB Lightning Talk: Databases for an Engaged World: Requirements and Design... - Keshav Murthy
Traditional databases have been designed for systems of record and analytics. Modern enterprises have orders of magnitude more interactions than transactions. Couchbase Server is a rethinking of the database for interactions and engagements, called Systems of Engagement. Memory today is much cheaper than disks were when traditional databases were designed back in the 1970s, and networks are much faster and much more reliable than ever before. Application agility is also an extremely important requirement. Today's Couchbase Server is a memory- and network-centric, shared-nothing, auto-partitioned, and distributed NoSQL database system that offers both key-based and secondary index-based data access paths as well as API- and query-based data access capabilities. This lightning talk gives you an overview of requirements posed by next-generation database applications and an approach to implementation, including "Multi Dimensional Scaling".
Couchbase 5.5: N1QL and Indexing features - Keshav Murthy
This deck contains a high-level overview of the N1QL and indexing features in Couchbase 5.5: ANSI joins, hash join, index partitioning, grouping and aggregation performance, auditing, query performance features, and infrastructure features.
N1QL (SQL for JSON) has a built-in rule-based optimizer. In Couchbase 5.0, N1QL's optimizer has a number of improvements for resource utilization and performance. This deck by Couchbase Principal Engineer Sitaram describes those improvements.
Mindmap: Oracle to Couchbase for developers - Keshav Murthy
This deck provides a high-level comparison between Oracle and Couchbase: Architecture, database objects, types, data model, SQL & N1QL statements, indexing, optimizer, transactions, SDK and deployment options.
Queries need indexes to speed up and to optimize resource utilization. Which indexes should you create, and which rules should you follow to create the right indexes for your workload? This presentation lays out those rules.
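One widely applicable rule of that kind: place equality predicates first in a composite index, followed by the range/sort key, so one index serves both the filter and the ORDER BY. A minimal sketch, using SQLite from Python as a stand-in (the rule itself is engine-neutral; table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust TEXT, ts INTEGER, total REAL)")
# Equality key (cust) first, then the range/sort key (ts):
conn.execute("CREATE INDEX ix_cust_ts ON orders (cust, ts)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM orders WHERE cust = 'A' AND ts > 100 ORDER BY ts"
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)
```

The plan should show a search on ix_cust_ts with no temporary B-tree for the ORDER BY, i.e. the index already returns rows in the requested order.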
N1QL = SQL + JSON. N1QL gives developers and enterprises an expressive, powerful, and complete language for querying, transforming, and manipulating JSON data. We begin with a brief overview. Couchbase 5.0 has language and performance improvements for pagination, index exploitation, integration, and more. We’ll walk through scenarios, features, and best practices.
From SQL to NoSQL: Structured Querying for JSON - Keshav Murthy
Can SQL be used to query JSON? SQL is the universally known structured query language, used for well-defined, uniformly structured data, while JSON is the lingua franca of flexible data management, used to define complex, variably structured data objects.
Yes! SQL can most definitely be used to query JSON, with Couchbase's SQL query language for JSON called N1QL (pronounced "nickel").
In this session, we will explore how N1QL extends SQL to provide the flexibility and agility inherent in JSON while leveraging the universality of SQL as a query language.
We will discuss utilizing SQL to query complex JSON objects that include arrays, sets and nested objects.
You will learn about the powerful query expressiveness of N1QL, including the latest features that have been added to the language. We will cover how using N1QL can solve your real-world application challenges, based on the actual queries of Couchbase end-users.
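The flavor of SQL-over-JSON can be sketched outside Couchbase as well. The snippet below uses SQLite's JSON1 functions from Python only to illustrate extracting and filtering on fields inside JSON documents; in N1QL the same query would use plain field paths instead of json_extract. The documents are invented.

```python
import sqlite3, json

# Invented documents with a nested object.
docs = [
    {"name": "alice", "address": {"city": "Paris"}},
    {"name": "bob",   "address": {"city": "Oslo"}},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bucket (doc TEXT)")
conn.executemany("INSERT INTO bucket VALUES (?)",
                 [(json.dumps(d),) for d in docs])

# json_extract follows a path into the document, much as a
# N1QL field path (address.city) would.
rows = conn.execute(
    "SELECT json_extract(doc, '$.name') FROM bucket "
    "WHERE json_extract(doc, '$.address.city') = 'Paris'"
).fetchall()
print(rows)
```

The predicate reaches inside the nested object, which is exactly the capability that makes SQL usable over variably structured JSON.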
Tuning for Performance: Indexes & Queries - Keshav Murthy
There are three things important in databases: performance, performance, performance. From a simple query fetching a document to a query joining millions of documents, designing the right data models and indexes is important. There are many indexes you can create, and many options you can choose for each index. This talk will help you understand how to tune N1QL queries, exploit various types of indexes, analyze system behavior, and size indexes correctly.
Understanding N1QL Optimizer to Tune Queries - Keshav Murthy
Every flight has a flight plan. Every query has a query plan. You must have seen its text form, called the EXPLAIN plan. The query optimizer is responsible for creating this query plan for every query, and it tries to create an optimal plan for each one. In Couchbase, the query optimizer has to choose the most optimal index for the query, decide on the predicates to push down to index scans, create appropriate spans (scan ranges) for each index, understand the sort (ORDER BY) and pagination (OFFSET, LIMIT) requirements, and create the plan accordingly. When you think there is a better plan, you can hint the optimizer with USE INDEX. This talk will teach you how the optimizer selects indexes, index scan methods, and joins, how to analyze the optimizer's behavior using the EXPLAIN plan, and how to change the choices the optimizer makes.
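The same workflow, inspecting the plan and then overriding the index choice, can be sketched with SQLite from Python; its INDEXED BY clause plays a role loosely analogous to N1QL's USE INDEX hint. Table and index names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX ix_a ON t (a)")
conn.execute("CREATE INDEX ix_b ON t (b)")

# Ask the engine what it would do for this predicate.
default_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE b = 1"
).fetchall()
default_text = " ".join(row[-1] for row in default_plan)

# Name the index explicitly (loosely analogous to N1QL's USE INDEX).
hinted_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t INDEXED BY ix_b WHERE b = 1"
).fetchall()
hinted_text = " ".join(row[-1] for row in hinted_plan)

print(default_text)
print(hinted_text)
```

Here both plans use ix_b; the value of the hint is that it pins the choice even when statistics or data later change.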
Utilizing Arrays: Modeling, Querying and Indexing - Keshav Murthy
Arrays can be simple; arrays can be complex. JSON arrays give you a method to collapse the data model while retaining structure flexibility. Arrays of scalars, objects, and arrays are common structures in a JSON data model. Once you have this, you need to write queries to update and retrieve the data you need efficiently. This talk will discuss modeling and querying arrays. Then, it will discuss using array indexes to help run those queries on arrays faster.
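The row-per-element expansion that makes array querying work can be sketched with SQLite's JSON1 functions from Python; json_each here plays the role that UNNEST plays in N1QL, and the documents are invented.

```python
import sqlite3, json

# Invented documents with an embedded array.
docs = [
    {"name": "alice", "phones": ["123", "456"]},
    {"name": "bob",   "phones": ["789"]},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bucket (doc TEXT)")
conn.executemany("INSERT INTO bucket VALUES (?)",
                 [(json.dumps(d),) for d in docs])

# json_each expands the array into one row per element,
# the same shape that UNNEST produces in N1QL.
rows = conn.execute(
    "SELECT json_extract(doc, '$.name'), p.value "
    "FROM bucket, json_each(bucket.doc, '$.phones') AS p"
).fetchall()
print(rows)
```

Once the array is flattened into rows like this, ordinary predicates and (in Couchbase) array indexes can be applied to the individual elements.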
N1QL supports select, join, project, nest, and unnest operations on flexible-schema documents represented in JSON.
Couchbase 4.5 enhances data modeling and query flexibility.
When you have a parent-child relationship and child documents point to the parent document, you join from child to parent. Now, how would you join from parent to child when the parent does not contain a reference to the child? And how would you improve its performance? This presentation explains the syntax and execution of such queries.
Bringing SQL to NoSQL: Rich, Declarative Query for NoSQL - Keshav Murthy
Abstract
NoSQL databases bring the benefits of schema flexibility and elastic scaling to the enterprise. Until recently, these benefits have come at the expense of giving up rich declarative querying as represented by SQL.
In today's world of agile business, developers and organizations need the benefits of both NoSQL and SQL in a single platform. NoSQL (document) databases provide schema flexibility, fast lookup, and elastic scaling. SQL-based querying provides expressive data access and transformation; separation of querying from modeling and storage; and a unified interface for applications, tools, and users.
Developers need to deliver applications that can easily evolve, perform, and scale. Otherwise, the cost, effort, and delay in keeping up with changing business needs will become significant disadvantages. Organizations need sophisticated and rapid access to their operational data in order to maintain insight into their business. This access should support both pre-defined and ad-hoc querying, and should integrate with standard analytical tools.
This talk will cover how to build applications that combine the benefits of NoSQL and SQL to deliver agility, performance, and scalability. It includes:
- N1QL, which extends SQL to JSON
- JSON data modeling
- Indexing and performance
- Transparent scaling
- Integration and ecosystem
You will walk away with an understanding of the design patterns and best practices for effective utilization of NoSQL document databases - all using open-source technologies.
SQL for JSON: Rich, Declarative Querying for NoSQL Databases and Applications - Keshav Murthy
In today's world of agile business, Java developers and organizations benefit when JSON-based NoSQL databases and SQL-based querying come together. NoSQL provides schema flexibility and elastic scaling. SQL provides expressive, independent data access. Java developers need to deliver apps that readily evolve, perform, and scale with changing business needs. Organizations need rapid access to their operational data, using standard analytical tools, for insight into their business. In this session, you will learn to build apps that combine NoSQL and SQL for agility, performance, and scalability. This includes:
• JSON data modeling
• Indexing
• Tool integration
Introducing N1QL: New SQL Based Query Language for JSON - Keshav Murthy
This session introduces N1QL and sets the stage for the rich selection of N1QL-related sessions at Couchbase Connect 2015. N1QL is SQL for JSON, extending the querying power of SQL with the modeling flexibility of JSON. In this session, you will get an introduction to the N1QL language, architecture, and ecosystem, and you will hear the benefits of N1QL for developers and for enterprises.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall and DBOM - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But if the “Reject” button is pushed, colleagues will be alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. Informix Warehousing Moving Forward
Goal is to provide a comprehensive warehousing platform that
is highly competitive in the marketplace
Incorporating the best features of XPS and Red Brick into
Informix for OLTP/Warehousing and Mixed-Workload
Using the latest Informix technology in:
Continuous Availability and Flexible Grid
Data Warehouse Accelerator using latest industry
technology
Integration of IBM’s BI software stack
3.
Informix Warehouse
Feature
- SQW
- Data Modeling
- ELT/ETL
Informix Warehouse with
Storage
Optimization/Compression
Cognos integration
- Native Content Store on Informix
SQL Merge
Informix Warehouse: Roadmap
External Tables
Star Join Optimization
Multi-index Scan
New Fragmentation
Fragment Level Stats
Storage Provisioning
Warehouse
Accelerator
OLAP
Query Rewrite
Hash Join
Enhance IWA
+++
11.5xC3
11.5xC6
11.70
11.70xC2
11.5xC4
11.5xC5
12.10
4.
IWA 1st
Release
On SMP
SMB: IGWE
Scale out: IWA
on Blade Server
Workload Analysis Tool
More Locales
Data Currency
IWA: Roadmap
Partition Refresh
MACH11 support
Solaris on Intel
Trickle Feed
Union queries
Derived tables
OAT Integration
SQL/OLAP for IWA
11.7xC2
11.7xC5
12.1xC1
11.7xC3
11.7xC4
2012 IIUG
2012 IIUG
Support for Informix
Timeseries
11.7xC7
5. Informix Publications
Bulletin of the Technical Committee on Data Engineering: March 2012 Vol. 35 No. 1
Real Time Business Intelligence. September 2, 2011 - Seattle, United States
IBM Data Management Magazine: Supercharging the
data warehouse while keeping the costs down.
2012 Bloor Report: IBM Informix in hybrid workload environments
2012 Ovum Analyst report: Informix Accelerates Analytic Integration into OLTP
DBTA Article: Empowering Business Analysts with Faster Insights
http://youtu.be/xJd8M-fbMI0
7. What is OLAP?
• On-Line Analytical Processing
• Commonly used in Business
Intelligence (BI) tools
– ranking products, salesmen, items, etc
– exposing trends in sales from historic data
– testing business scenarios (forecast)
– sales breakdown or aggregates on multiple
dimensions (Time, Region, Demographics,
etc)
8. OLAP Functions in Informix
• Supports subset of commonly used
OLAP functions
• Enables more efficient query processing
from BI tools such as Cognos
9. Example query with group by
select customer_num, count(*)
from orders
where customer_num <= 110
group by customer_num;
customer_num (count(*))
101 1
104 4
106 2
110 2
4 row(s) retrieved.
10. Example query with OLAP function
select customer_num, ship_date, ship_charge,
count(*) over (partition by customer_num)
from orders
where customer_num <= 110;
customer_num ship_date ship_charge (count(*))
101 05/26/2008 $15.30 1
104 05/23/2008 $10.80 4
104 07/03/2008 $5.00 4
104 06/01/2008 $10.00 4
104 07/10/2008 $12.20 4
106 05/30/2008 $19.20 2
106 07/03/2008 $12.30 2
110 07/06/2008 $13.80 2
110 07/16/2008 $6.30 2
9 row(s) retrieved.
12. Ranking Functions
• Partition by clause is optional
• Order by clause is required
• Window frame clause is NOT allowed
• Duplicate value handling is different
between rank() and dense_rank()
– same rank given to all duplicates
– next rank used “skips” ranks already covered by
duplicates in rank(), but uses next rank for
dense_rank()
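The duplicate-handling difference is easiest to see on data. A minimal sketch using Python's built-in sqlite3, whose rank()/dense_rank() follow the same semantics (illustrative table and values; requires an SQLite build with window-function support, 3.25+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (name TEXT, score INT)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("a", 100), ("b", 90), ("c", 90), ("d", 80)])

rows = con.execute("""
    SELECT name, score,
           RANK()       OVER (ORDER BY score DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY score DESC) AS drnk
    FROM scores
    ORDER BY score DESC, name
""").fetchall()

# Both duplicates (b and c) get rank 2; rank() then skips to 4 for d,
# while dense_rank() uses the next rank, 3.
for name, score, rnk, drnk in rows:
    print(name, score, rnk, drnk)
```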
13. Where does OLAP function fit?
Joins, group by,
having
OLAP functions
Final order by
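That evaluation order (joins/GROUP BY/HAVING first, then OLAP functions, then the final ORDER BY) can be sketched with sqlite3 as a stand-in (illustrative schema and data): the window function sees the grouped rows, not the raw ones.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_num INT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(101, 10.0), (104, 5.0), (104, 7.0), (106, 3.0)])

# GROUP BY runs first, producing one row per customer; RANK() then
# ranks those aggregated rows; the final ORDER BY sorts the result.
rows = con.execute("""
    SELECT customer_num,
           SUM(amount) AS total,
           RANK() OVER (ORDER BY SUM(amount) DESC) AS rnk
    FROM orders
    GROUP BY customer_num
    ORDER BY rnk
""").fetchall()
print(rows)
```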
14. Query Flow
Step 1. Applications / BI tools submit SQL (DB protocol: SQLI or DRDA; network: TCP/IP, SHM).
Step 2. Informix applies query matching and redirection technology; non-matching queries use local execution.
Step 3. Matching SQL is off-loaded to the accelerator (DRDA over TCP/IP).
Step 4. The accelerator returns results (DRDA over TCP/IP). On the accelerator, a coordinator drives several worker processes, each holding compressed data in memory with a memory image on disk.
Step 5. Informix feeds the results to OLAP iterators, if they exist.
Step 6. Results/describe/error are returned to the application (database protocol: SQLI or DRDA; network: TCP/IP, SHM).
15. QUERY: DWA executed:(OPTIMIZATION TIMESTAMP: 10-02-2012 20:52:56)
------
select ws_web_page_sk, ws_net_paid, avg(ws_net_paid) over()
from web_sales
where ws_web_page_sk < 10
group by ws_web_page_sk, ws_net_paid
order by ws_web_page_sk, ws_net_paid
Estimated Cost: 1497990
Estimated # of Rows Returned: 286309
Temporary Files Required For: Order By Group By
1) ds2@BVSRDWA:dwa.aqt5cbe4c46-acdc-463a-a9cb-2c3318cc9164: REMOTE DWA PATH
Remote SQL Request:
{QUERY {FROM dwa.aqt5cbe4c46-acdc-463a-a9cb-2c3318cc9164} {WHERE {< COL066 10 } } {SELECT {SYSCAST
COL066 AS INTEGER NULLABLE} {SYSCAST COL083 AS DECIMAL 7 2 NULLABLE} } {GROUP COL066 COL083 } }
Query statistics:
-----------------
Table map :
----------------------------
Internal name Table name
----------------------------
type rows_prod est_rows time est_cost
-------------------------------------------------
dwa 307576 0 00:03.62 0
type it_count time
----------------------------
olap 307576 00:04.48
type rows_sort est_rows rows_cons time est_cost
------------------------------------------------------------
sort 307576 286309 307576 00:06.49 192262
17. Union Query Support
select sum(sales_amt) from SALES
UNION ALL
select sum(returns_amt) from SALES_RETURN;
18. Derived table Query
select stsales.state, totsales, totreturns
from (select state, sum(sale_amt) from sales
      group by state) as stsales(state, totsales),
     (select state, sum(return_amt) from sales_returns
      group by state) as streturns(state, totreturns)
where stsales.state = streturns.state;
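The shape of that query can be sketched with Python's built-in sqlite3 (the sales figures and table contents are invented for illustration; SQLite lacks Informix's derived-column-list syntax `as stsales(state, totsales)`, so the columns are aliased inside each subquery instead): each derived table aggregates independently, and the outer query joins the two results on state.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (state TEXT, sale_amt REAL)")
con.execute("CREATE TABLE sales_returns (state TEXT, return_amt REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("CA", 100.0), ("CA", 50.0), ("NY", 80.0)])
con.executemany("INSERT INTO sales_returns VALUES (?, ?)",
                [("CA", 20.0), ("NY", 10.0)])

# Each derived table produces one aggregated row per state;
# the outer WHERE joins the two aggregated result sets.
rows = con.execute("""
    SELECT stsales.state, totsales, totreturns
    FROM (SELECT state, SUM(sale_amt) AS totsales
          FROM sales GROUP BY state) AS stsales,
         (SELECT state, SUM(return_amt) AS totreturns
          FROM sales_returns GROUP BY state) AS streturns
    WHERE stsales.state = streturns.state
    ORDER BY stsales.state
""").fetchall()
print(rows)
```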
19. SELECT d_year,i_brand_id,i_class_id,i_category_id ,i_manufact_id,SUM(sales_cnt) AS sales_cnt ,SUM(sales_amt) AS sales_amt
FROM
(
SELECT d_year ,i_brand_id ,i_class_id ,i_category_id,i_manufact_id,SUM(sales_cnt) AS sales_cnt
,SUM(sales_amt) AS sales_amt
FROM (SELECT d_year ,i_brand_id ,i_class_id ,i_category_id ,i_manufact_id ,cs_quantity AS sales_cnt
,cs_ext_sales_price AS sales_amt
FROM catalog_sales JOIN item ON i_item_sk=cs_item_sk
JOIN date_dim ON d_date_sk=cs_sold_date_sk
WHERE i_category='Music'
UNION
SELECT d_year ,i_brand_id ,i_class_id ,i_category_id ,i_manufact_id
,ss_quantity AS sales_cnt
,ss_ext_sales_price AS sales_amt
FROM store_sales JOIN item ON i_item_sk=ss_item_sk
JOIN date_dim ON d_date_sk=ss_sold_date_sk
WHERE i_category='Books'
UNION
SELECT d_year ,i_brand_id ,i_class_id ,i_category_id ,i_manufact_id
,ws_quantity AS sales_cnt
,ws_ext_sales_price AS sales_amt
FROM web_sales JOIN item ON i_item_sk=ws_item_sk
JOIN date_dim ON d_date_sk=ws_sold_date_sk
WHERE i_category='Sports') sales_detail
GROUP BY d_year, i_brand_id, i_class_id, i_category_id, i_manufact_id) as tmp
GROUP BY d_year, i_brand_id, i_class_id, i_category_id, i_manufact_id
ORDER BY sales_amt, sales_cnt
20. Query statistics:
Table map :
----------------------------
Internal name Table name
----------------------------
t1 (Temp Table For Collection Subquery)
t2 (Temp Table For Collection Subquery)
type rows_prod est_rows time est_cost
-------------------------------------------------
dwa 1410644 0 00:06.38 0
type rows_prod est_rows time est_cost
-------------------------------------------------
dwa 2749278 0 00:12.53 0
type rows_prod rows_cons_1 rows_cons_2 time
------------------------------------------------------
merge 4159922 1410644 2749278 00:19.38
type rows_prod est_rows time est_cost
-------------------------------------------------
dwa 723063 0 00:08.01 0
type rows_prod rows_cons_1 rows_cons_2 time
------------------------------------------------------
merge 4882985 4159922 723063 00:28.11
type rows_sort est_rows rows_cons time
-------------------------------------------------
sort 4867550 0 4882985 01:22.97
type table rows_prod est_rows rows_scan time est_cost
-------------------------------------------------------------------
scan t1 4867550 4320015 4867550 00:04.59 190744
type rows_prod est_rows rows_cons time est_cost
------------------------------------------------------------
group 77949 1769089 4867550 00:52.63 11610464
type table rows_prod est_rows rows_scan time est_cost
-------------------------------------------------------------------
scan t2 77949 1769089 77949 00:00.08 99698
type rows_prod est_rows rows_cons time est_cost
------------------------------------------------------------
group 77949 724460 77949 00:01.20 4647282
type rows_sort est_rows rows_cons time est_cost
------------------------------------------------------------
sort 77949 724460 77949 00:01.77 709964
22. SQL Enhancements
• OLAP Window functions/aggregates
• Multiple distinct aggregates
• Distinct with CASE expression
• NULLS FIRST, NULLS LAST modifier to ORDER BY
23. Support for custom NULL sorting
• Informix sorts NULLs first by default, and this could not be
changed.
• NULLS FIRST and NULLS LAST are modifiers to the ORDER BY
clause.
• Oracle supports both.
• Helps avoid sorting in Cognos; Cognos used this for some
reports against Oracle.
SELECT c1, c2, sum(c3)
FROM t1
GROUP BY c1, c2
ORDER BY c2 NULLS LAST, C1 NULLS FIRST;
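Before the server supported these modifiers, a client such as Cognos had to re-sort the fetched rows itself. A minimal Python sketch of that client-side workaround (rows and column positions are invented; `None` stands in for SQL NULL):

```python
# Emulate "ORDER BY c2 NULLS LAST, c1 NULLS FIRST" on fetched rows.
# Each row is (c1, c2); None stands in for SQL NULL.
rows = [(None, 5), (3, None), (1, 5), (2, None), (None, None)]

def nulls_last(v):
    # Sort key that makes NULLs compare greater than every value.
    return (v is None, v if v is not None else 0)

def nulls_first(v):
    # Sort key that makes NULLs compare smaller than every value.
    return (v is not None, v if v is not None else 0)

ordered = sorted(rows, key=lambda r: (nulls_last(r[1]), nulls_first(r[0])))
print(ordered)
```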
24. DISTINCT with CASE expression
• Support for CASE expression as argument to
aggregates was added in 11.70.
• 12.10 adds support for distinct on CASE expression
SELECT sum(T983271.set_avgday_sales_rtl_amt) as c2,
count(distinct case when T983271.not_set_cnt > 0
then T983271.store_sk_id end ) as c3
FROM features_upc_tab T983271;
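The same pattern can be sketched on Python's sqlite3 (the table contents are invented; the real query of course runs against Informix/IWA): only rows passing the CASE condition contribute a store id, and DISTINCT counts each id once.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE features_upc_tab
               (store_sk_id INT, not_set_cnt INT,
                set_avgday_sales_rtl_amt REAL)""")
con.executemany("INSERT INTO features_upc_tab VALUES (?, ?, ?)",
                [(1, 2, 10.0), (1, 3, 20.0), (2, 0, 30.0), (3, 1, 40.0)])

# Store 2 fails the CASE condition (not_set_cnt = 0) and yields NULL;
# store 1 qualifies twice but DISTINCT counts it once, so c3 = 2.
row = con.execute("""
    SELECT SUM(set_avgday_sales_rtl_amt) AS c2,
           COUNT(DISTINCT CASE WHEN not_set_cnt > 0
                               THEN store_sk_id END) AS c3
    FROM features_upc_tab
""").fetchone()
print(row)
```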
25. Multiple aggregates with distincts
• Long requested by Cognos, Wal-Mart and others.
• Wal-Mart had run into this during an IWA pilot project.
• Design and part of the code taken from XPS.
• Long live XPS!
select region, sum(distinct cid), avg(distinct salesdt)
from sales_tab
group by region;
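The behavior can be sketched on Python's sqlite3 (illustrative table and values; the date column is replaced with a numeric amount so the average is meaningful in the sketch): each DISTINCT aggregate eliminates duplicates independently of the other.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_tab (region TEXT, cid INT, salesamt REAL)")
con.executemany("INSERT INTO sales_tab VALUES (?, ?, ?)",
                [("east", 1, 10.0), ("east", 1, 30.0),
                 ("east", 2, 10.0), ("west", 3, 50.0)])

# Two DISTINCT aggregates in one grouped query: for "east",
# SUM(DISTINCT cid) sums {1, 2} and AVG(DISTINCT salesamt)
# averages {10.0, 30.0} -- each deduplicates on its own.
rows = con.execute("""
    SELECT region, SUM(DISTINCT cid), AVG(DISTINCT salesamt)
    FROM sales_tab
    GROUP BY region
    ORDER BY region
""").fetchall()
print(rows)
```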
26. Informix Ultimate Warehouse Edition
Components: Informix Database Server, Informix Warehouse Accelerator, BI applications, IBM Smart Analytics Studio.
Step 1. Install, configure, start Informix.
Step 2. Install, configure, start the Accelerator.
Step 3. Connect Studio to Informix & add the accelerator.
Step 4. Design, validate, deploy the data mart.
Step 5. Load data to the accelerator.
Ready for queries.
27. Case 1: Partition refresh: updates to existing partitions
The Sales-Mart on IWA contains sales (a range-partitioned fact table), customer and stores; OLTP apps INSERT, UPDATE and DELETE into the modified partition. Use IBM Smart Analytics Studio, stored procedures or the command line tool (SQL script calls the stored procedures).
Step 1. Create the Sales-Mart and load it. Sales is the fact table, range partitioned.
Step 2. Load jobs update the fact table "sales"; only existing partitions are updated.
Step 3. Identify the partition, execute dropPartMart().
Step 4. For the same partition, execute loadPartMart().
Ready for queries.
28. Case 2: Partition refresh: time-cyclic data management
The Sales-Mart on IWA contains sales (a range-partitioned fact table), customer and stores; OLTP apps move the time window. Use IBM Smart Analytics Studio, stored procedures or the command line tool.
Step 1. Create the Sales-Mart and load it. Sales is the fact table, range partitioned. Need to move the time window to the next range.
Step 2. DETACH operation: execute dropPartMart(), then DETACH the partition.
Step 3. ATTACH operation: ATTACH the partition, then execute loadPartMart().
Ready for queries.
29. dropPartMart() procedure
1. Takes the accelerator name, data mart name, table
name and partition name.
2. The partition name can be the name of the partition or the
partition number (sysfragments.partn).
3. The partition name or number must be a valid
partition of the table.
4. Call dropPartMart() first, before doing the DETACH.
30. loadPartMart() procedure
1. Takes the accelerator name, data mart name, table
name and partition name.
2. The partition name can be the name of the partition or the
partition number (sysfragments.partn).
3. The partition name or number must be a valid
partition of the table.
4. ATTACH the partition first, before calling
loadPartMart().
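Putting the two procedures together, a time-cyclic refresh might look like the following sketch (the argument order is taken from the example calls in the Editor's Notes; the accelerator, mart, owner, table and partition names are placeholders, and the DETACH/ATTACH steps are indicated in comments rather than written out):

```sql
-- Case 2 (time-cyclic): roll the oldest partition out, a new one in.

-- 1. Drop the old partition from the mart BEFORE detaching it.
execute function dropPartMart('myAccelerator', 'myMart',
                              'user10', 'tab22', 'part_old');
-- 2. DETACH the old partition from tab22 (ALTER FRAGMENT ... DETACH).

-- 3. ATTACH the new partition to tab22 (ALTER FRAGMENT ... ATTACH).
-- 4. Load the freshly attached partition into the mart.
execute function loadPartMart('myAccelerator', 'myMart',
                              'user10', 'tab22', 'part_new');
```

Note the asymmetry: dropPartMart() runs before the DETACH, while loadPartMart() runs after the ATTACH, so the mart never references a partition the table no longer has.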
31. Informix Warehouse Accelerator – in 11.70.FC4
Components: Informix Database Server, Informix Warehouse Accelerator, BI applications, IBM Smart Analytics Studio.
Step 1. Install, configure, start Informix.
Step 2. Install, configure, start the Accelerator.
Step 3. Connect Studio to Informix & add the accelerator.
Step 4. Design, validate, deploy the data mart.
Step 5. Load data to the accelerator.
Ready for queries.
32. Background
• Prior to 11.70.FC5, adding an accelerator, creating, deploying,
loading, enabling and disabling data marts, and accelerating queries
were all operations officially supported only on a standard
server or the primary node of a MACH11/HA environment.
• We estimate about 50% of Informix customers use HDR
secondary servers, and a growing number of customers use
MACH11 (SDS secondary) configurations and RSS nodes.
MACH11 is the Informix scale-out solution.
• IWA itself supports a scale-out solution (on a cluster)
starting with 11.70.FC4.
• Reasons to support MACH11 and IWA together:
– This feature enables partitioning a cluster or HA group
between OLTP and BI workloads.
– This feature helps off-load the expensive LOAD
functionality to secondary servers.
– We now have customers requesting support for HDR secondary
to IWA.
33. Informix Warehouse Accelerator – 11.70.FC5: MACH11 support
Cluster nodes: Informix Primary, SDS1, SDS2, HDR Secondary, RSS; BI applications connect through IBM Smart Analytics Studio.
Step 1. Install, configure, start Informix.
Step 2. Install, configure, start the Accelerator.
Step 3. Connect Studio to Informix & add the accelerator.
Step 4. Design, validate, deploy the data mart from Primary, SDS, HDR or RSS.
Step 5. Add IWA to sqlhosts; load data to the accelerator from any node.
Ready for queries.
34. Step 1: Install
• Informix and IWA are installed just like before.
• Informix can be a combination of standard,
primary, SDS, HDR secondary and RSS nodes.
• IWA can be installed on the same computer as any
one of the nodes, or on a distinct computer.
• IWA can also be installed on cluster hardware
with multiple worker nodes for scale-out
performance.
35. Step 2: Configure
• Note: Informix MACH11 technology works with
logged and ANSI databases only.
36. Step 2: Configure
• The secondary servers should be updatable
secondary servers.
• Set this in $ONCONFIG:
UPDATABLE_SECONDARY 10
37. Step 3: Connect
• You can connect to IWA from Informix from any of the
Informix servers using existing method.
– Get the connection details via:
# ondwa getpin
– The output will be, ip address, port, pin for IWA connection.
– Use that information to create the connection.
• After successful connection from Informix to IWA, the
SQLHOSTS will have something like this
FAST group - -
c=1,a=484224232041684420473a283e612f74393e6025757159506a51344a6b4e2f2d2d47455e6b653f2f6c795f287d7b65224d6c3c2f65722e6a2a4245397b3b447d572c3129696b306440
FAST_1 dwsoctcp 172.34.22.188 21022 g=FAST
• To use this connection on any of Informix nodes, copy these lines AS IS to the
SQLHOSTS file of those servers.
• Make sure to copy ALL the lines within the FAST group.
38. Step 3: Connect..continued
• The name of the IWA will be used as the AQT site name in
systables.sitename. So, it’s important to have the right
site name in SQLHOSTS entry for a successful connection.
• Changing ANY of the details of this SQLHOSTS entry will
result in connection, query matching and acceleration
issues.
39. Step 4. Design, Validate and Deploy
• The secondary servers should be updatable secondary
servers.
• Set this in $ONCONFIG
UPDATABLE_SECONDARY 10
• The design, validate and deploy steps are identical.
40. Step 5. Running queries
• Once you deploy the data mart from one of the nodes,
the data mart definitions in the catalogs are replicated to
all the systems.
• SQLHOSTS entries should be copied over manually.
• After this, queries can be run as usual. Informix does
query matching and off-loading as it does on Primary.
41. Components: Informix Database Server, Informix Warehouse Accelerator, BI applications, IBM Smart Analytics Studio.
Step 1. Install, configure, start Informix.
Step 2. Install, configure, start the Accelerator.
Step 3. Connect Studio to Informix & add the accelerator.
Step 4. Design, validate, deploy the data mart.
Step 5. Load data to the accelerator.
Ready for queries.
42. Data mart state transitions
Design the DM by workload analysis or manually.
Deploy → deployed data mart.
Load → data mart in USE (refreshed via partition-based refresh or trickle feed).
Disable → data mart disabled; Enable → back in USE.
Drop → data mart deleted.
43. Administration: Open Admin Tool
• Browser based administration tool for Informix
• Replaces ISAO Studio, covering most of its functions
• Adds workload analysis and datamart deployment
• Adds support for data refresh and setup commands.
44. Data Refresh: RefreshMart Implementation:
new stored procedure:
ifx_refreshMart(
'accelerator_name',
'data_mart_name',
'locking_mode',
NULL);
locking_mode is optional: can be NULL
4th parameter: not used as of now
If used while the new "trickle feed" functionality is active:
ifx_refreshMart() will not refresh fact tables for which trickle feed
is active.
45. Data Refresh: RefreshMart :
granularity based on table partitions
data mart remains available for query acceleration
single call of stored procedure for ease of use
control of execution remains with administrator
handles all data changes, including fragment operations
data consistency via lock mode parameter
prerequisite :
sysadmin database accessible for administrator
46. Data Refresh: Scenario for real-time trickle feed
The Sales-Mart on IWA contains sales (the fact table), customer and stores (dimension tables); OLTP apps insert into the fact table, and reports & BI apps query the mart. Use IBM Smart Analytics Studio, stored procedures or the command line tool.
Step 1. Create the Sales-Mart and load it. Sales is the fact table; customer and stores are dimension tables.
Step 2. Set up trickle feed by calling ifx_setupTrickleFeed.
Step 3. Let the applications roll: inserts on the fact table and updates on any dimensions.
Step 4. As the applications run, the reports see new data updated on IWA.
47. Data Refresh: Trickle feed (cont.)
An "insert into fact_table ..." fires a data row trigger on the fact table. A Dbscheduler task picks up the captured rows and sends them to the data mart on the accelerator via ifx_loadPartMart(); changed dimension table rows (dimension table1, dimension table2) are propagated via ifx_refreshMart().
48. "refreshMart" - a New Function for the
Informix Warehouse Accelerator
Motivation:
a data mart is a snapshot,
a time-consuming load is required to reflect data changes,
manual drop and (re-)load of individual partitions is
cumbersome,
want ease of use with a single function "doing it all".
IWA refreshMart ( 1 )
49. RefreshMart:
refreshes only "data units" that were changed:
less data to be moved.
"data units" are table data partitions:
a single data partition for a normal table,
a single data partition for each fragment of a fragmented table.
Control of granularity by table fragmentation.
IWA refreshMart ( 2 )
50. Data Mart Meta Info:
new meta information about data marts is needed:
to keep track of changes,
stored in new tables in the sysadmin database.
The sysadmin database must exist.
Data marts must be re-created after upgrade.
The administrator needs access rights for the sysadmin
database:
execute function task('grant admin','<user name>','warehouse');
IWA refreshMart ( 3 )
51. Data Mart Meta Info:
full data load is the reference point.
Changes registered:
insert, update, delete of data records (but no actual data is
logged),
drop partition, then reload partition,
detach fragment / drop partition,
attach fragment / load partition.
A completed refreshMart is the new reference point.
IWA refreshMart ( 4 )
52. LoadMart :
loads complete data, always and unconditionally
rebuilds compression dictionary from scratch
RefreshMart :
does not extend or rebuild compression dictionary
new values are placed in “catch-all” containers
periodically do full data load using “ifx_loadMart()”
IWA refreshMart ( 5 )
53. RefreshMart and Data Consistency :
optional lock mode specifies locking of tables in warehouse
database :
MART
TABLE
NONE
works like lock mode for loadMart
if not specified :
lock mode of last loadMart is in effect.
IWA refreshMart ( 6 )
54. RefreshMart Implementation:
new stored procedure:
ifx_refreshMart(
'accelerator_name',
'data_mart_name',
'locking_mode',
NULL);
locking_mode is optional: can be NULL
4th parameter: not used as of now
IWA refreshMart ( 7 )
55. RefreshMart - Summary :
granularity based on table partitions
data mart remains available for query acceleration
single call of stored procedure for ease of use
control of execution remains with administrator
handles all data changes, including fragment
operations
data consistency via lock mode parameter
prerequisite :
sysadmin database accessible for administrator
IWA refreshMart ( 8 )
57. Data Transfer from Informix to IWA – First time
On Informix:
- Design the data mart (OAT analysis, cmdline analysis, ISAO Studio).
- Deploy the data mart (OAT, stored procedure, ISAO Studio).
- Load the mart (OAT, stored procedure, ISAO Studio).
- Optionally lock the table.
- Insert table data into an external table.
- Send the data over to IWA.
On IWA:
- Fact table: split across the workers. Dimension table: copied to each worker.
- Compression: frequency partitioning & encoding.
- Write the memory image to disk.
Ready for queries. (Items marked red on the slide: new in v12.10.)
58. Distributing data from IDS (fact tables)
Each data fragment of the fact table is UNLOADed through IDS stored procedures, and a copy of the IDS data is transferred over to the worker processes. Each worker process holds a subset of the data (compressed) in main memory and is able to execute queries on this subset. The data is evenly distributed (no value-based partitioning) across the CPUs. A coordinator process drives the worker processes, each holding its share as compressed data.
59. Distributing data from IDS (dimension tables)
Dimension tables are UNLOADed through an IDS stored procedure and transferred, in full, to every worker process; the coordinator process drives the workers. Unlike fact tables, each worker receives a complete copy of all dimension tables.
60. Trickle feed Use Case
• Fact table(s) of a data mart are populated only by new data
rows. Data is inserted, not updated or deleted.
• Dimension tables are updated.
• Need the latest data for analysis.
• A full refresh can take a long time.
• The partitioning/fragmentation scheme could refresh too many
partitions.
• Inserted rows shall get loaded to the data mart:
– within a configurable time
– optionally, data mart dimension tables shall get refreshed
61. Scenario for real-time trickle feed
The Sales-Mart on IWA contains sales (the fact table), customer and stores (dimension tables); OLTP apps insert into the fact table, and reports & BI apps query the mart. Use OAT, Studio or stored procedures.
Step 1. Create the Sales-Mart and load it. Sales is the fact table; customer and stores are dimension tables.
Step 2. Set up trickle feed by calling ifx_setupTrickleFeed.
Step 3. Let the applications roll: inserts on the fact table and updates on any dimensions.
Step 4. As the applications run, the reports see new data updated on IWA.
62. Trickle feed (cont.): User interface
ifx_setupTrickleFeed( 'accelerator_name', 'data_mart_name', buffertime)
accelerator_name
The name of the accelerator that contains the data mart.
data_mart_name
The name of the data mart.
buffertime
An integer that represents the time interval between refreshes and
whether dimension tables are refreshed.
Examples:
execute procedure ifx_setupTrickleFeed('salesacc', 'partsmart', 60);
execute procedure ifx_setupTrickleFeed('salesacc', 'carmart', -300);
64. Trickle feed (cont.)
An "insert into fact_table ..." fires a data row trigger on the fact table. A Dbscheduler task picks up the captured rows and sends them to the data mart on the accelerator via ifx_loadPartMart(); changed dimension table rows (dimension table1, dimension table2) are propagated via ifx_refreshMart().
65. Complete view of data mart state transitions
Design the DM by workload analysis or manually.
Deploy → deployed data mart.
Load → data mart in USE (refreshed via partition-based refresh or trickle feed).
Disable → data mart disabled; Full Load/Enable → back in USE.
Drop → data mart deleted.
66. Summary
• Full refresh will recreate the dictionary, but can
take time.
• Partition based refresh is very fast and refreshes
only the partitions with new data since the last
refresh
• Trickle feed captures the INSERTs on the fact
table and refreshes by sending this data to IWA.
It can also refresh dimension tables.
• Even when you use partition refresh or trickle
feed, do a full refresh periodically, say, daily or
weekly.
67. Deep dive into interval and
rolling window table partitioning in IBM Informix
Keshava Murthy IBM rkeshav@us.ibm.com
Editor's Notes
execute function dropPartMart('myAccelerator','myMart','user10','tab22','part1');
execute function loadPartMart('myAccelerator','myMart','user10','tab22','part1');